Saturday, December 18, 2010

RPATH, RUNPATH and LD_LIBRARY_PATH

Long time since my last post - my bad.

Recently I have been working on a renewed FWTools for Linux - hopefully I'll follow up on that later. Much like in the past, FWTools is intended to work across a variety of Linux systems (i.e. Ubuntu, Red Hat, SUSE, Gentoo), not to interfere with the native packaging system, and not to require special permissions to install.

To that end we try to distribute as many of the required shared libraries as possible, and we use the LD_LIBRARY_PATH environment variable to ensure FWTools finds and uses these shared libraries in preference to any system-provided versions, which might be incompatible (though often aren't).

In my local testing I was getting frustrated because my shared libraries were not being used. Instead, the system versions of things like libsqlite3.so.4 were getting picked up from /usr/lib64 despite my LD_LIBRARY_PATH pointing to a copy distributed with FWTools. I checked this with the ldd command, which shows what sublibraries an executable or shared library needs and which copies are being picked up according to the current search policy.

I carefully read the man page for ld.so, which is responsible for loading shared libraries at runtime, and it indicated:

The necessary shared libraries needed by the program are searched for
in the following order

o Using the environment variable LD_LIBRARY_PATH
(LD_AOUT_LIBRARY_PATH for a.out programs). Except if the executable
is a setuid/setgid binary, in which case it is ignored.
...

Well, my programs are not setuid/setgid nor are they a.out (a very old style of Linux binary), so what gives? Due to poor time synchronization on my systems and use of NFS, I noticed that some shared library dates were in the future. I thought perhaps some funky security policy prevented them from being used, but that theory did not pan out. I tried doing it on a local file system in case there were some rule about not trusting shared libraries from remote file systems. Still no go.

However, googling around the net I found that my local ld.so man page was not very comprehensive. Another ld.so man page gave me a hint, referring to the DT_RPATH attribute being searched before LD_LIBRARY_PATH (and DT_RUNPATH after). I was vaguely aware of shared libraries having an "rpath" attribute which could be set at link time and might have some sort of influence on searching. Distribution packaging folks have sometimes bugged me about it, especially back in the old days before libtool was used for GDAL. But I had never really understood it well and it just seemed like unnecessary stuff.

Further digging confirmed that ELF (the modern Linux object file format) executables and shared libraries may have an RPATH attribute built right into the file. This is essentially a list of directories that should be searched first for the shared libraries that this executable or shared library depends on. RUNPATH is similar, but is evaluated after LD_LIBRARY_PATH rather than before it. I got my critical information from this blog post by Trevor King.

My old favorite, the ldd command, does not show the rpath settings though it does honour them. I also use nm to dump the symbols in object files and shared libraries, but it doesn't address special attributes like rpath. Digging around, I found the readelf command, which can report all sorts of information about a shared library or executable, including the RPATH and RUNPATH. It also helpfully lists the shared libraries that are needed directly, something that often gets obscured in ldd by the chaining of dependencies.
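For example, a small Python helper can run readelf over a file and report just the dynamic entries of interest (a sketch only - the library path is illustrative and the string matching is deliberately crude):

import subprocess

def dump_dynamic_info(path):
    """Print the RPATH, RUNPATH and NEEDED entries from an ELF file via readelf."""
    output = subprocess.check_output(['readelf', '-d', path]).decode()
    for line in output.splitlines():
        if 'RPATH' in line or 'RUNPATH' in line or 'NEEDED' in line:
            print(line.strip())

dump_dynamic_info('usr/lib64/libgdal.so')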

The rpath attribute is generally set by libtool when linking shared libraries. GDAL includes a build option to build without libtool, and I contemplated using that, which would help a lot. But it wouldn't help for other libraries in FWTools that were not directly pulled in by GDAL. My goal with the modern FWTools is to - as much as possible - use standard CentOS/RedHat rpm binaries as the basis for the product, so I don't want to have to rebuild everything from scratch. What I really wanted was a way to suppress use of the rpath, switch it to a runpath, or remove it entirely. Lo and behold, I am not the first with this requirement. There is a command called chrpath that can do any of these "in place" to a binary (see the chrpath(1) man page).

chrpath -d usr/lib64/libgdal.so

This command isn't installed by default on my system, but it was packaged and it works very nicely. I'll preprocess the shared libraries and binaries to strip out the rpath so my LD_LIBRARY_PATH magic works smoothly. I had even considered using it at install time to replace the rpath with the path to the libraries inside the FWTools distribution. This, unfortunately, does not appear to be possible as it seems chrpath can only replace rpaths with rpaths of the same or a shorter length. In any event, FWTools still depends on cover scripts to set other environment variables, so doing this wouldn't have let me avoid the environment setup magic anyway.
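The preprocessing pass itself can be a simple walk over the distribution tree (again just a sketch - the root directory and the file-name test are assumptions, and chrpath returns a nonzero status for files that have no rpath, which we simply ignore):

import os
import subprocess

def strip_rpaths(root):
    """Remove RPATH entries from all shared libraries under root using chrpath -d."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith('.so') or '.so.' in name:
                # Ignore the return code; files without an rpath make chrpath complain.
                subprocess.call(['chrpath', '-d', os.path.join(dirpath, name)])

strip_rpaths('usr/lib64')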

I owe a lot to some other blog and mailing list posts turned up by Google. This blog entry is partly an attempt to seed the net with helpful answers for others who run into similar problems. I will say it is very annoying that a company decided to name itself "rPath" - it made searching quite a bit harder!

Sunday, July 18, 2010

OGR DXF Upgrade

Well, I just finished up a week of work implementing PCIDSK Vector write and update support. My next task is an upgrade to the OGR DXF driver requested by Stadt Uster to better meet their production requirements.

They need the ability to control layer naming, to produce dash patterns, and to write objects as block references.

Currently the DXF writer depends on everything except the entities section coming directly from a template file. This has meant it is not practical to create layers, line styles and block references on the fly. That is going to have to change now, though we will continue to use the template header extensively.

The first approach considered was just to require the user to develop a template header with all the layer names, line styles and block references predefined. This might have been adequate for Uster, who have specific needs and, once a template was prepared, could generally just reuse it for additional products. But it would have made the new capabilities of very little utility to other users of GDAL.

So the planned approach has two parts. First, we scan the template header for layer definitions and block definitions (and possibly line style definitions). Then, as we go through the entities, if we find these layers or blocks referenced we just use them directly.

However, if we find layers referenced by the features being written to the DXF file (based on the "Layer" attribute) that are not already defined, we will automatically create a new layer in the header, matching the configuration of the default layer ("0"). This means we can automatically create layers that have their own identity even if they are otherwise indistinguishable from the default.

For block references we use a roughly similar approach. We prescan for block definitions, but then we extend them with any entities written to an OGR layer named "Blocks". Normally all DXF entities are exposed through an OGR layer called "entities", though when writing we accept any layer name - except now "Blocks", which is special. Then, when writing entities, we allow these blocks to be referenced based on a BlockName attribute.
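From the OGR Python bindings, usage of the planned behavior might look roughly like the following (a sketch of the intent rather than a tested recipe - the exact field handling is an assumption until the work lands):

from osgeo import ogr

drv = ogr.GetDriverByName('DXF')
ds = drv.CreateDataSource('output.dxf')
layer = ds.CreateLayer('entities')

feat = ogr.Feature(layer.GetLayerDefn())
feat.SetField('Layer', 'PARCELS')          # placed on an automatically created DXF layer
feat.SetField('BlockName', 'TREE_SYMBOL')  # written as a reference to this block
feat.SetGeometry(ogr.CreateGeometryFromWkt('POINT (1000 2000)'))
layer.CreateFeature(feat)

ds = None  # close, flushing the composed header, entities and trailer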

Corresponding behavior will also be available when reading. If desired (via a config option), we will expose block definitions as a Blocks layer, and the actual block references in the entities layer will just be point features with insertion information. The default behavior will remain what it does now - which is to inline copies of the block geometries for each block reference, as this is the only approach that will be handled gracefully by most applications or writers.

For line styles it is not clear that it is helpful to predefine them (though I'll examine that during implementation). The only aspect we are interested in preserving and producing is dash-dot patterns. But while writing I will create new line styles for each such pattern encountered, and then write them out to the header.

Currently the DXF writer copies the template header to the output file, then writes entities, then appends the trailer template. With these changes I will need to write the entities to a temporary file, accumulating block, layer and line style definitions in memory. Then, on closing the dataset, I will need to compose the full header, append the entities and then the trailer.

I dislike this pattern. It introduces a need for temporary files, which can lead to surprising disk use requirements. Also, if something goes wrong, the temp files may not get cleaned up properly. We also lose any hope of streaming operation. However, to achieve what we want with the DXF driver it seems unavoidable.

Hopefully I'll have the DXF changes in trunk within a couple weeks. If there are folks interested in DXF generation, keep an eye on SVN for updates. Beta testers appreciated!

Thursday, June 17, 2010

Travelling

I have spent this week in beautiful Chicoutimi / Saguenay at the Rendez-vous OSGeo Québec. This is a two day conference put on by the local chapter here in Quebec in association with another local geomatics conference.

It has been some time since I have been to a significant event, unfortunately having missed FOSS4G in Sydney, so it was good to be involved again. As long ago as the Ottawa OSGIS conference in 2004 I recall Daniel Morissette, Steve Lime, and me hanging out in Dave McIllhagga's penthouse and talking about visiting Chicoutimi. Perhaps because of that Daniel, who was a primary organizer, made an effort to invite Steve and me. Unfortunately Steve was unable to attend in person due to a knee injury, but many other friends were here including Paul Ramsey and Jeff McKenna. As you can imagine, in addition to an excellent program, there were great social activities.

Today, during the other geomatics conference, I am helping to man the OSGeo booth. I unfortunately speak no French, and I was worried about having nothing I could say or provide to French-only speakers. However, I was immensely pleased to discover that the booth was well equipped with the full set of OSGeo promotional materials all translated into French, printed out and very professional looking.

I have at times not been all that excited about translation efforts. I am uni-lingual, and can't really help. I have also had the impression that at least some of the translation efforts have been rather haphazard and might not reflect particularly well on the projects or OSGeo. But here I was very impressed with the complete and professional job that has been done with the promotional materials, and also how useful it is. It really is very helpful for our credibility today, and I can see this would apply in many other venues. I understand the translation is done by the Francophone chapter, and that one major contributor has been Yves Jacolin. I am sure there are many others also involved. So, my hat is off to their efforts.

I should also mention, I finally got to see Paul's fantastic talk "Beyond Nerds Bearing Gifts: The future of the Open Source Economy". I can see why everyone was raving about it after Sydney. One message that I took away from the talk was that as all companies become open source companies to some degree, we need to be cautious about thinking of ourselves primarily as open source companies or consultants. Our excellent integration into the open source ecosystem is one of our selling points, but we need to ensure our primary focus is on serving the needs of our clients effectively and efficiently.

Today I was also invited to attend a technical meeting in Zurich at the end of the month as part of the GEM (Global Earthquake Model) project. I was not aware of this activity at all until Steve Arnold (aka nerdboy on IRC) brought it to my attention recently. Among other things, GEM is interested in using open source components in their IT infrastructure, and I will provide some feedback on their plans and efforts so far, particularly on the geospatial aspects.

This week, after discussions with Paul Ramsey, Jeff McKenna, and Venkatesh Raghavan, I am tentatively planning to go to Japan in November for the OSGeo meetings held there. Everyone who has attended these meetings in the past as an invited speaker has raved about what a good experience it is, so I am looking forward to it very much. I believe that Paul Ramsey will also likely be there, and Jeff may be back in Japan at the same time for his OSGeo4W cooperation.

Also, everyone is very much looking forward to Barcelona. I am confident it will be the best FOSS4G ever. I have registered, but still need to book my flight and hotel. I'm thinking of taking a little extra time to see the area. I sometimes find that travel tires me out and puts me behind in my work, but I think the social and networking opportunities are too good to miss. These meetings re-energize me in ways that last much longer than the jet lag.

Sunday, March 14, 2010

Slaying the Datum Shift Dragon

In the last few weeks, I believe I have made substantial improvements to the way datum shift information is derived from the EPSG dictionary for use in GDAL/OGR, PROJ.4 and related packages.

The EPSG database model supports having multiple datum shift options for a particular datum, such as "Potsdam Rauenberg 1950 DHDN" to get to WGS84. This reflects the reality that datum conversion is often an approximation for which there may be multiple reasonable solutions. Often different datum shift parameters are appropriate depending on the sub-region of the datum being used.

Many other coordinate system dictionary models and GIS systems do *not* provide for a multiplicity of datum shift options. For instance, the OGC Well Known Text representation of a coordinate system only allows for one TOWGS84[] clause indicating the preferred mapping to WGS84. This is also true of many software packages, including those based on PROJ.4 and its EPSG-based dictionary.
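As an illustration with the GDAL/OGR Python bindings, a geographic coordinate system surfaces at most one set of shift parameters (a small sketch; whether any parameters come back for a given EPSG code depends on the dictionary translation discussed below):

from osgeo import osr

srs = osr.SpatialReference()
srs.ImportFromEPSG(4314)    # DHDN geographic coordinate system
print(srs.GetTOWGS84())     # the single TOWGS84 parameter set, if any is defined
print(srs.ExportToWkt())    # WKT with at most one TOWGS84[] clause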

In the past, the code I had implemented for translating the EPSG database to a dictionary format assumed that datum shift parameters should only be carried into the dictionary if there was one, and only one, shift available in the EPSG database. I took this approach because it seemed very dangerous to arbitrarily select one of several possible datum shifts without any intrinsic knowledge about which was most appropriate. I was depending on users seeing that no datum shift was available in the default translation and taking this to mean that they had to do some research to establish what was most appropriate for them.

Predictably, the real result was massive confusion and complaints. At one time, if there was no datum shift available, PROJ.4 would just do a transformation based on the change of ellipsoid, which was often a very poor choice. So in PROJ 4.6.0 I altered the code to not attempt any datum shift transformation if either or both of the source and destination coordinate systems lacked datum shift information.

While this got rid of one family of errors, it also triggered lots of additional frustration and confusion. Part of this was just because there was a change of behavior. But it was also pushing people a bit harder to determine the appropriate datum shift and they did not like having to do this.

So, at last, I have taken the plunge and reworked the scripts used to translate the EPSG dictionary so that they attempt to pick one shift if there are several available. I follow a few heuristics in this effort.

1) I discard any datum shifts marked as deprecated.

2) I examine the supersession table to identify any datum shifts that have been superseded by newer forms and ignore the superseded forms.

3) I try to pick the datum shift with the largest "area of use" region, under the assumption that it will likely be the broad-use shift rather than a shift only applicable in a small area.

4) I examine the datum_shift_pref.csv file to see if there is a user supplied preferred datum shift to use. If so, I use that.

The result of all this is a datum_shift.csv file which includes all the datum shifts, with one of them marked as preferred. That preferred version goes into the gcs.csv file for use with the associated geographic coordinate system.
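In rough Python the selection heuristics above amount to something like this (a sketch only - the record fields and the preferred-code handling are hypothetical names, not the actual translation script):

def pick_preferred_shift(shifts, preferred_codes):
    """Pick one datum shift for a GCS following the heuristics described above."""
    # 1) drop anything marked as deprecated
    candidates = [s for s in shifts if not s['deprecated']]
    # 2) drop anything superseded by a newer entry
    superseded = set(s['superseded_by'] for s in candidates if s['superseded_by'])
    candidates = [s for s in candidates if s['code'] not in superseded]
    if not candidates:
        return None
    # 3) prefer the shift with the largest area of use
    candidates.sort(key=lambda s: s['area_of_use'], reverse=True)
    best = candidates[0]
    # 4) a user supplied preference (datum_shift_pref.csv) overrides the above
    for s in candidates:
        if s['code'] in preferred_codes:
            best = s
    return best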

This seems to be working reasonably well, and I did a big pass through open tickets in GDAL, libgeotiff and PROJ.4 to find outstanding datum shift issues. I believe that the bulk are now resolved.

Currently GDAL and the PROJ.4 dictionary are just using the preferred datum shift. But the intention of keeping the datum_shift.csv file, with all the possible shifts for any given GCS, is that savvy applications could let the user choose. Also, my hope is that a mechanism will be added in the future so that users can alter the preferred setting in the datum_shift.csv file and then have the gcs.csv regenerated. That remains to be done.

I'm also fixing a few other issues in the coordinate system realm, including support for axis orientation (i.e. South Orientated Transverse Mercator) and fixing a few other translations. I'd like to thank INGRES, who have supported this work as part of an effort to bring top notch coordinate system support into the INGRES geospatial project. I'd also like to thank Jan Hartmann and Mikael Rittri, who assisted with suggested approaches and verification of the results.

Wednesday, February 10, 2010

GDAL 1.7.0 is dead, long live GDAL 1.7.1

Due to a serious bug in GDAL 1.7.0 resulting in all Erdas Imagine files produced by the release being unreadable to non-GDAL applications, including Erdas Imagine itself, the project has decided to retract the 1.7.0 release. In its place today we have issued a GDAL/OGR 1.7.1 release. I sincerely hope that use of 1.7.0 in the wild will not end up producing unreadable Erdas Imagine files that haunt us for years.

The introduction of the bug was due to a very subtle issue, and it didn't affect GDAL's ability to read the files. So our extensive regression tests for this file format did not detect any problems. But the lesson would appear to be that we still need mechanisms to do cross testing with other packages as often as possible. We are looking into ways of more vigorous and organized cross testing at major releases.

Tuesday, February 9, 2010

World Mapping

Over much of the last decade I have fought with world mapping issues in MapServer. This falls into three broad areas - problems with requests crossing the dateline, problems with requests including the poles, and problems with requests extending beyond the domain of validity of a projection. Typically these are not a big issue when working on local mapping projects, but do become issues when doing world level views, or when exploring areas far from the domain of validity of some of the data.

Several years ago I made a modest effort to collect some hints and tips, and knowledge about the issues into a topic in the old MapServer wiki. With the planned decommissioning of the old mapserver web site at UMN (and in light of recent work in this area), I have migrated the old wiki topic to the MapServer Trac Wiki as OldWorldMappingIssues, and started an updated wiki topic on the issues.

The problems I have been investigating in recent months on behalf of a client are related to providing WMS service in web mercator projection with MapServer, but cascading to another WMS providing data in EPSG:4326 (geographic). The initial problem report was with WMS requests in the Alaska/Siberia area that were oddly pixelated. The problem was that the request region from -21000000 to -17000000 easting was getting reprojected to geographic as -179 to 179 in longitude. This was occurring because values beyond the dateline (roughly -20000000 meters easting) were getting returned as positive longitudes, since PROJ.4 returns longitude values in the range -180 to 180. This transformed BBOX was sent to the cascaded WMS, but with a width and height that reflected the approximate size of the original request. The result was that the map pulled from the cascaded WMS had very poor horizontal resolution (since 360 degrees of longitude were being reflected in relatively few pixels instead of the roughly 10 degrees that should have been fetched).

The solution I developed for this as part of ticket #3179 was to utilize MapServer's special "dateline line/polygon wrapping logic" in msProjectRect(). This logic basically makes it so that small polygons crossing the dateline will be kept local even if the longitudes go outside the -180 to 180 default range. The special dateline wrapping logic was implemented several years ago but was only used when reprojecting shapes. This worked well for this specific problem, but I avoided incorporating it into MapServer 5.6 since I was concerned it might be quite a disruptive change to drop in near release time.

Last week my client contacted me with a related problem. The default view of the world is roughly -21000000,-11000000,21000000,11000000 in web mercator. This is slightly beyond the real extent of the world, so by default PROJ.4 maps -21000000 easting to around +179 longitude even though I might prefer to have thought of it as -181 longitude. The simplest transformation of the web mercator BBOX to lat/long gives an area of roughly -179 to 179 longitude, and -90 to 90 latitude. This is a reasonably close approximation, but loses a bit of data at the edges. This is an issue that has come up in several forms in the past and is sometimes helped by use of the LOAD_WHOLE_IMAGE processing option.

But my client was running into a different problem. They were seeing WMS requests with a BBOX of -179.79,-89.00,323.96,85.07 - the funny item being the longitude of 323.96. It turns out that this is due to relatively poor handling of world-scale polygons in the dateline wrapping logic. It rewraps some points, then realizes that the polygon is really not a small local polygon, and leaves other points unaltered. This can result, as here, in polygons more than 360 degrees wide. This causes some sort of flipping anomaly which I did not investigate.

After some analysis and hair pulling I came to the conclusion that it was essentially impossible to make the polygon dateline wrapping robust in the face of mercator to geographic transformations that had already wrapped things into the -180 to 180 domain. Even if I backed out the approach entirely, I'd still end up with substantial over-requesting in some cases. For this particular problem I did institute a bit of logic in msProjectRect() that said a geographic rect that is wider than 360 degrees should be remapped as minx=-180 and maxx=180. But I was able to construct other requests that would still behave badly.

What I *really* wanted was for PROJ.4 to reproject mercator values of less than -20000000 easting as less than -180 longitude, and more than 20000000 easting as more than 180 longitude. In desperation I finally delved into PROJ.4 to better understand how this wrapping occurs and whether there might be some way of avoiding it. I was flabbergasted to discover PROJ.4 was doing the -180/180 wrapping in pj_inv() and that there was a +over switch that could be used to turn this wrapping behavior off!

For my client, I am now advising they use +over in their web mercator projection definition, and this helps avoid many dateline related reprojection problems. I've also added some documentation on this in the PROJ.4 General Parameters documentation. (BTW, a shout out to Hamish who has improved this doc quite a bit!)

PROJECTION
  "+proj=merc +lon_0=0 +k=1 +x_0=0 +y_0=0 +a=6378137 +b=6378137 +units=m +over +no_defs"
END
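The effect of +over is easy to check from the GDAL/OGR Python bindings (a quick sketch; the exact digits will vary, but with +over an easting of -21000000 comes back as roughly -188.6 degrees instead of being wrapped to about +171.3):

from osgeo import osr

merc = osr.SpatialReference()
merc.ImportFromProj4('+proj=merc +lon_0=0 +k=1 +x_0=0 +y_0=0 '
                     '+a=6378137 +b=6378137 +units=m +over +no_defs')

geog = osr.SpatialReference()
geog.ImportFromProj4('+proj=longlat +ellps=WGS84 +over +no_defs')

ct = osr.CoordinateTransformation(merc, geog)
print(ct.TransformPoint(-21000000, 0))   # longitude stays west of -180 thanks to +over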

I'm planning to incorporate my "reproject the rectangle as a polygon" logic in MapServer trunk, and I hope people will keep it in mind as a potential source of other problems. But I think wider knowledge, and use, of +over may be the big win of this whole effort! I also hope people will contribute to the World Mapping Issues topic in the MapServer wiki.

Friday, January 15, 2010

Visiting China This Month

In a diversion from my usual programming-related geo-geeking, I'm heading to China on January 20th to do some in-person, on-the-ground examination of geography. I will be visiting a friend there, and returning on the 29th. I will be visiting Shanghai and Fuzhou (on the coast roughly halfway between Shanghai and Hong Kong).

Today I received my visa from the Chinese embassy. This is the first time I've ever had to get a visa for international travel, and I have a new respect for those who have to do it regularly to attend events like FOSS4G. It is nearly a five-hour activity to travel to Ottawa and visit the embassy, which I had to do both to drop off my request and to pick up the visa. Also, they were going to refuse to accept my request until I could provide a clear address I would be staying at. I had to rush out to an internet cafe to book a hotel for the first night just so I would have an address to write on the form. I barely got back before the office closed (9am to 1pm!), which would have cost me another day of travel. Normally I prefer to play things by ear after I arrive.

I'm flying via Newark despite my preference to avoid going through the USA given the security hassles that entails. But the price difference over flying Air Canada through Vancouver was just too significant.

I'm not sure what my connectivity will be, so I'm hoping to load up my laptop with some work I can do offline if needed. Likely I will focus on write/update access for PCIDSK vectors in the PCIDSK SDK.

Monday, January 4, 2010

hsv_merge.py

Recently Trent Hare indicated an interest in an extension to the gdaldem program to produce a color relief image mixed with a hillshade.

The gdaldem utility is a program for producing various products from DEMs (digital elevation models). It was originally written by Matthew Perry, with contributions by Even Rouault, Howard Butler, and Chris Yesson to incorporate it into GDAL proper. It can produce color relief maps, hillshades, slope, aspect and various roughness values.

But what Trent wanted was a product with both a hillshade and a color relief image merged. The color relief shows overall elevation with color, and the hillshade gives a sense of local structure. He has been doing this with ImageMagick, producing the product from the hillshade and relief produced by gdaldem, but this resulted in the georeferencing being lost. The merge is accomplished by transforming the relief image to HSV (hue, saturation and value) color space, and then replacing the "value" portion with the hillshade greyscale image. The value is really the intensity or brightness. This is a common technique and is often used to merge high resolution greyscale imagery with a lower resolution RGB image to produce a result with meaningful color and high resolution.
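The core of the merge, given rgb_to_hsv() and hsv_to_rgb() functions like the ones developed below, is just a swap of the value band (a sketch; it assumes the hillshade is a greyscale array on the same 0-255 scale as the value channel these functions use):

import numpy

def hsv_value_merge(rgb, hillshade):
    """Replace the V (intensity) channel of an RGB image with a hillshade band."""
    hsv = rgb_to_hsv(rgb)                                 # 3 x rows x cols array
    hsv_adjusted = numpy.asarray([hsv[0], hsv[1], hillshade])
    return hsv_to_rgb(hsv_adjusted)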

I reviewed the gdaldem.cpp code with the thought of adding a new mode, but it turns out the program is fairly complicated and a new hillshade/relief mode would likely be hard to maintain properly. So instead I set out to create a standalone utility that could do the rgb/greyscale merge in hsv space. When possible I prefer to do non-core utilities in Python instead of C++. It feels lighter and easier for folks to munge for other purposes.

In Python we generally try to do image processing using the Python numpy package, so that is what I set out to do. For some things numpy lets us treat arrays as if they were scalars, and operate on the whole array with a single statement. So the "scalar" function to convert hsv to rgb looks like:


def hsv_to_rgb(h, s, v):
    if s == 0.0: return v, v, v
    i = int(h*6.0) # XXX assume int() truncates!
    f = (h*6.0) - i
    p = v*(1.0 - s)
    q = v*(1.0 - s*f)
    t = v*(1.0 - s*(1.0-f))
    if i%6 == 0: return v, t, p
    if i == 1: return q, v, p
    if i == 2: return p, v, t
    if i == 3: return p, q, v
    if i == 4: return t, p, v
    if i == 5: return v, p, q
    # Cannot get here


I was able to change this into a numpy based function that looked fairly similar in parts, but somewhat different in other areas:


def hsv_to_rgb( hsv ):

    h = hsv[0]
    s = hsv[1]
    v = hsv[2]

    i = (h*6.0).astype(int)
    f = (h*6.0) - i
    p = v*(1.0 - s)
    q = v*(1.0 - s*f)
    t = v*(1.0 - s*(1.0-f))

    r = i.choose( v, q, p, p, t, v )
    g = i.choose( t, v, v, q, p, p )
    b = i.choose( p, p, t, v, v, q )

    rgb = numpy.asarray([r,g,b]).astype(numpy.byte)

    return rgb

The first step just breaks the single hsv array into three component arrays and is simple bookkeeping:

h = hsv[0]
s = hsv[1]
v = hsv[2]

The next few statements look very much like the original and are essentially simple math that happens to be done on whole arrays instead of single values. The only odd part is using "astype(int)" to convert to int instead of the int() builtin function that was used for scalars.

i = (h*6.0).astype(int)
f = (h*6.0) - i
p = v*(1.0 - s)
q = v*(1.0 - s*f)
t = v*(1.0 - s*(1.0-f))

But following this there are many if/then else statements which had looked like:

if i%6 == 0: return v, t, p
if i == 1: return q, v, p
if i == 2: return p, v, t
if i == 3: return p, q, v
if i == 4: return t, p, v
if i == 5: return v, p, q

Unfortunately we can't really do something quite like an if statement with numpy that will be evaluated on a per-pixel (array entry) basis. The old scalar logic assumed that if a condition was met, it could do something for that case, and return. But for arrays we are going to have to do things quite differently. The above is essentially a case statement, so we instead use the numpy choose() function like this:

r = i.choose( v, q, p, p, t, v )
g = i.choose( t, v, v, q, p, p )
b = i.choose( p, p, t, v, v, q )

The choose function goes through each entry in the "i" array, and uses its value as an index into the set of possible return values (v,q,p,p,t,v in the first case). So if i[0] is 0, then the first item, v[0], is assigned to r[0]. If i[0] were 4, then t[0] would be assigned to r[0]. Thus the six if statements are handled as a choose() with six possible return results, and the choose is evaluated for each array entry (pixel).
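A tiny standalone example shows the idea (the arrays here are made up purely for illustration):

import numpy

i = numpy.array([0, 2, 1])
a = numpy.array([10, 11, 12])
b = numpy.array([20, 21, 22])
c = numpy.array([30, 31, 32])

# Entry n of the result is choices[i[n]][n].
print(i.choose((a, b, c)))    # prints [10 31 22]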

This works reasonably well for this very structured case. We then finish by merging the red, green and blue values into a 3 band array:

rgb = numpy.asarray([r,g,b]).astype(numpy.byte)

One thing missing in my rendition of this function is the saturation == 0 case:

if s == 0.0: return v, v, v

It is very difficult to have the pixels for which the saturation is zero handled differently than all the rest, since we can't actually return control to the caller for only some pixels (array entries). In this particular situation it seems that this was just an optimization: if s is zero then p, q and t all equal v, so the result will be v,v,v anyway, and I just omitted this statement.

But this helps highlight the difficulty of taking different logic paths for different entries in the array using numpy. The rgb_to_hsv() logic was even more involved. The original was:


def rgb_to_hsv(r, g, b):
    maxc = max(r, g, b)
    minc = min(r, g, b)
    v = maxc
    if minc == maxc: return 0.0, 0.0, v
    s = (maxc-minc) / maxc
    rc = (maxc-r) / (maxc-minc)
    gc = (maxc-g) / (maxc-minc)
    bc = (maxc-b) / (maxc-minc)
    if r == maxc: h = bc-gc
    elif g == maxc: h = 2.0+rc-bc
    else: h = 4.0+gc-rc
    h = (h/6.0) % 1.0
    return h, s, v

This became:

def rgb_to_hsv( rgb ):

    r = rgb[0]
    g = rgb[1]
    b = rgb[2]

    maxc = numpy.maximum(r,numpy.maximum(g,b))
    minc = numpy.minimum(r,numpy.minimum(g,b))

    v = maxc

    minc_eq_maxc = numpy.equal(minc,maxc)

    # compute the difference, but reset zeros to ones to avoid divide by zeros later.
    ones = numpy.ones((r.shape[0],r.shape[1]))
    maxc_minus_minc = numpy.choose( minc_eq_maxc, (maxc-minc, ones) )

    s = (maxc-minc) / numpy.maximum(ones,maxc)
    rc = (maxc-r) / maxc_minus_minc
    gc = (maxc-g) / maxc_minus_minc
    bc = (maxc-b) / maxc_minus_minc

    maxc_is_r = numpy.equal(maxc,r)
    maxc_is_g = numpy.equal(maxc,g)
    maxc_is_b = numpy.equal(maxc,b)

    h = numpy.zeros((r.shape[0],r.shape[1]))
    h = numpy.choose( maxc_is_b, (h,4.0+gc-rc) )
    h = numpy.choose( maxc_is_g, (h,2.0+rc-bc) )
    h = numpy.choose( maxc_is_r, (h,bc-gc) )

    h = numpy.mod(h/6.0,1.0)

    hsv = numpy.asarray([h,s,v])

    return hsv

The key mechanism here is the computation of true/false masks (maxc_is_r, maxc_is_g, maxc_is_b) which are used with choose() statements to overlay different results on the pixels that match particular conditions. It may not be immediately evident, but we are essentially evaluating all the equations for all pixels even though the values are only going to be used for some of them. Quite wasteful.

The final result, hsv_merge.py, works nicely but I am once again left frustrated with the unnatural handling of logic paths in numpy. It took me over three hours to construct this small script. Most of that time was spent poring over the numpy reference trying to figure out how to do things that are dead easy in scalar python.

I love Python, but to be honest I still can't say the same for numpy.