Talk:True Offset Process

From OpenStreetMap Wiki

Discuss True Offset Process here:

Meters or Degrees?

The text says offset is expressed in terms of meters. The tag scheme says (in bold text) offset distance is expressed in degrees. For now I will translate this into Japanese preserving the above mixture. User:Hideot

Fixed. It should be degrees everywhere. The first version of the specification went for degrees, but was changed for greater compatibility with JOSM imagery offset bookmarks. --Mackerski 20:38, 25 May 2011 (BST)

Update Interval

How often is the data updated? --chris66 14:44, 18 February 2011 (UTC)

Process of accurately determining offset

It may be useful to describe the processes by which one might accurately determine the offset of imagery. For example, many people seem to be under the impression that GPS traces are more accurate (in general) than sat imagery. That is almost never the case when using high-res sat imagery from reliable sources, like USGS 0.25m or 1ft per pixel HRO, though there are exceptions. Even with an external antenna, my Garmin 60CSx gets estimated errors of about 3-6m while moving (more when changing direction or in mountainous areas), which is less accurate than most of the imagery I use.

So I've only done this a few times, but this is what I've done when I think there may be an alignment problem:

1. Find a benchmark on the ground that is visible in the satellite imagery. This is often the intersection of two road centerlines, which has been benchmarked by the local survey authority (usually the county in the US). There is usually a large cross painted on the pavement (visible on the sat imagery), with the benchmark at the middle.

2. If the published co-ordinates (lat/lon) of the benchmark have a reasonably low expected error (<~1m), you don't need to survey it - go to step 3. Otherwise, use a GPS receiver's averaging mechanism to get the co-ordinates on the ground. Without obstructions, my Garmin 60CSx gets down to accuracy of 0.6-1.5m within 30-60 mins. In the case of road centerline intersections, you'll probably have to measure on the ground from the benchmark to a place where you can safely put your GPS receiver for long enough (like the sidewalk) and then use bearings and distance to calc the benchmark's co-ordinates.

3. Create a point in your favorite editor in OSM based on the imagery and compare its co-ordinates with that of the benchmark to determine the offset distance and bearing.

For max accuracy, this should be done for multiple points in an area to determine the exact nature of the error (i.e. linear shift in one direction, magnification/reduction, pincushion, etc.).

Does this make sense?

AM909 04:58, 26 November 2010 (UTC)
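The offset computation in step 3 can be sketched in a few lines. This is only an illustration (the function name is invented); it uses an equirectangular approximation, which is fine at the few-metre scale involved:

```python
import math

def offset_distance_bearing(benchmark, imagery_point):
    """Offset from a surveyed benchmark to the same point as placed
    from imagery, returned as (distance_m, bearing_deg).  Points are
    (lat, lon) in decimal degrees; an equirectangular approximation
    is adequate at metre scales."""
    R = 6371000.0  # mean Earth radius in metres
    lat1, lon1 = map(math.radians, benchmark)
    lat2, lon2 = map(math.radians, imagery_point)
    dx = (lon2 - lon1) * math.cos((lat1 + lat2) / 2) * R  # east-west metres
    dy = (lat2 - lat1) * R                                # north-south metres
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360      # 0 deg = north
    return distance, bearing
```

A point placed 0.00001 degrees east of the benchmark at the equator comes out as roughly 1.1 m at bearing 90.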

I tend to use a simpler alignment procedure: average a large number of tracks collected over several weeks for the same road, often just visually by looking for a cluster/centerline, and then use that to align the imagery. If all tracks are collected by e.g. walking down the center of the sidewalk, you then shift the imagery so that the center of the sidewalk aligns with the track average. Do this for roads running east-west and north-south to get the offsets for both directions. Clearly, it is not as precise or refined as the process above, but so far it has done the trick for me.
A point to note is that the proposed web service is completely independent of how the correct offset is determined. So it may be that in regions where we have no better information, a simple method like mine will be used at first. Then, when someone puts in the extra effort, the alignment is refined. Whenever the offsets are updated, the web service picks up the change and passes it on to editors.
Another point is that for now, in version 1.0 of the proposal, only a shift is to be supported. The JOSM WMS plug-in allows shifting only and this appears sufficient to fix most alignment problems. So it is a good starting point. If the proposal catches on, a future version can then add more complex adjustments by means of control points warping the imagery layer as required.
Undo 12:26, 26 November 2010 (UTC)
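The track-averaging idea above can be sketched as a hypothetical helper. Using the median rather than the mean keeps the occasional wild GPS fix from dragging the result off:

```python
from statistics import median

def robust_center(samples):
    """Median latitude/longitude of many GPS fixes for the same
    feature (a list of (lat, lon) pairs).  The median resists the
    occasional bad fix better than a plain mean; don't use this
    across the antimeridian."""
    return (median(p[0] for p in samples),
            median(p[1] for p in samples))
```

The imagery is then shifted so the feature's drawn position lands on the returned centre.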
Your point makes sense - I'll note that not all imagery sources are well-aligned. Those which can be considered at least as accurate as can be determined by GPS should not IMO be adjusted using this process. However, in many areas of aerial imagery, both on Yahoo and now with Bing, there are indeed gross errors that will lead to broken map data if not corrected. And many mappers will be unaware both of the inaccuracy and of their ability to calibrate to compensate for that.
It is because this process is intended to deal with such gross cases (10m or more) that some of the adjustments you mention - pincushion, scaling etc. - are currently out of scope. Existing editor tools allow simple alignment shifts, but users are not using them for lack of awareness. This is, for now, the single problem we seek to solve.
--Mackerski 13:18, 26 November 2010 (UTC)

The fool's errand

[copied from the NZOpenGIS mailing list]

> if it is a problem, is there a viable/reliable/scientific way to
> modify the imaging in the editors? perhaps someone needs to make a
> 'JOSM auto-correction plugin', which uses a lookup table to adjust the
> imagery at various points round the world?

nope, not really possible. It's a moving target and we've got no control
over it or idea when the upstream data changes or gets improved/fixed.
We could go to a lot of trouble and make a transform matrix for every
5km in the country at every zoom level, and as a function of elevation
and image angle/distance from center,... and then every six months they'd
swap out to new imagery with new quirks or improvements and we'd have
to start all over. I'm afraid there is not much that can be done, we
just have to document the black box as best we can to get a better
idea of where it can be trusted and where not.
Lcmortensen points out that the base of the sky tower is very close in both,
(where a bit of 3D building work has probably happened), so the cities may
be more trusted. [maybe]

> At the moment you can manually adjust the JOSM layer but I agree an
> auto-correct one based on some lookup seems like a better solution.
> Most people are not going to know/care that the Bing maps are off and
> will simply trace over the photos (I have been doing that myself).

I don't really know a solution beyond to get the word out that the satellite
backdrops shouldn't be accepted as 100% truth, so don't disturb existing
data unless you are sure of it, and to please add source=bing_imagery to things.
How to enforce that? impossible..... :-/

Sorry, it just ain't gonna work.

--Hamish 06:36, 27 December 2010 (UTC)
Well that's very defeatist. But certainly this is a difficult problem. I don't think you'd be the first one to point out many of the complexities.
Let's look at the main point there: aerial imagery providers may make updates without us being aware, which would invalidate the offset data. I wonder how much of a problem that is. How often are changes made? (mainly thinking about Bing) In most areas I'm pretty sure the answer is... not very often at all, verging on never. Maybe the "True Offset Process" could include some automated checks to see if imagery tiles are the same month-on-month. This would only need to be a sampling, not checking entire tilesets.
-- Harry Wood 12:24, 29 January 2011 (UTC)
Actually, just to give an example, just one month after the Bing imagery became an allowed source, they updated the images for a big portion of southern Finland. For Helsinki we used to have photos from 2006, identified as an image set used by the pros and with a listed mean maximum error of 1.5 meters; now we have imagery (likely satellite) from 2010, but the alignment is much worse (resolution too). I read that a big area in Sweden got updated images at the same time, too. The switch started a week ago, but sometimes we still get tiles from the older set. IMO there is no other solution than to keep logging and sending the gpx files, and constantly aligning the imagery - unless locals identify the photos as being accurate from the start. When they're not properly aligned and rectified, they're more likely to have a suboptimal elevation model, and thus likely to contain visible distortions. For the better, older imagery, I was able to determine that they indeed were offset by about 2 meters near my home - a big junction with possibly hundreds of traces next to (locally significant) slopes. Alv 14:59, 29 January 2011 (UTC)
"sometimes we still get tiles from the older set" hmmm yeah that does make things awkward. I expect in your area it will settle down and stay the same for a few years now though. In general I can't believe they're making so many changes to imagery that we wouldn't be able to keep up with it.
A nice thing with Bing is that they provide a date in the HTTP headers for each tile (viewable with mvexel's tool), which means an automated check could spot changes without having to compare tile image contents.
-- Harry Wood 17:22, 31 January 2011 (UTC)
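An automated check along these lines might simply snapshot the date-bearing headers per tile and compare them later. A sketch, where the X-VE-... header name is an assumption based on what Bing tile servers have been observed to send (verify against real responses):

```python
def tile_changed(old_headers, new_headers,
                 keys=("Last-Modified", "X-VE-TILEMETA-CaptureDatesRange")):
    """Compare two snapshots of a tile's HTTP headers (plain dicts)
    on the date-bearing keys; a difference in any of them flags the
    tile for re-checking.  Header names are assumptions, not a
    documented contract."""
    return any(old_headers.get(k) != new_headers.get(k) for k in keys)
```

Sampling a handful of tiles per region each month and calling this on the stored vs. fresh headers would be cheap.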
How often are changes made? Every few months or less there is an announcement about new imagery for various parts of the world. To venture a wild and unsubstantiated guess, I'd say you can count on updates quarterly to yearly, with a max lifespan of 3 years. If Google Maps (Bing's uphill competition, so it must somehow be better than #1) is anything to go by, fwiw their last update was just two weeks ago.
Also a very grave problem is that the offset is not simply linear- it can be 2nd or 3rd order polynomial (or TPS) warped, especially in areas of rapidly changing elevation. So a linear correction can (will) change, even within a single tile. I don't mean to be negative but I see this as a massive time sink for what may only be ephemeral gain. And that makes me sad. Also to a small extent it instills false trust, which is dangerous. --Hamish 04:06, 23 February 2011 (UTC)
"I'd say you can count on updates quarterly to yearly". Yeah not in one particular area/city though. Mackerski wouldn't be proposing this if bing's coverage of Dublin was being updated every six months -- Harry Wood 14:12, 23 March 2011 (UTC)

This would be better called an offset sharing web service

Instead of a read-only web API, where the offset is perhaps computed from some noisy GPS tracks, the user should also be able to submit new alignment data for a region, and to see what is already defined nearby, so as not to needlessly subdivide existing nearby alignment data.

This has many advantages over an automatic offset computation:

  • Immediate updates possible on imagery changes.
  • Users don't have to adjust the same region over and over every time they start their editors, but can still update the offset if it is wrong.
  • Still helps keep new users from running into unaligned imagery.
  • There are no longer problems with changes made by the imagery providers.
  • In areas with few GPS tracks, such as the Japan post-earthquake imagery, a good correction, if one is possible at all, can only be made by a human. --Fabi2 19:18, 15 March 2011 (UTC)
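A toy sketch of such a submit-and-query service, purely to illustrate the idea (the class and method names are invented; a real service would need persistence, authentication, and proper region handling):

```python
import math

def _dist_m(lat1, lon1, lat2, lon2):
    """Equirectangular distance in metres, adequate at these scales."""
    R = 6371000.0
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return R * math.hypot(x, y)

class OffsetStore:
    """In-memory stand-in for the proposed offset-sharing service:
    mappers submit offset records for an imagery layer, editors query
    the nearest record for their working area."""

    def __init__(self):
        self.records = []  # (lat, lon, imagery_id, east_m, north_m)

    def submit(self, lat, lon, imagery_id, east_m, north_m):
        self.records.append((lat, lon, imagery_id, east_m, north_m))

    def nearest(self, lat, lon, imagery_id, max_km=10.0):
        """Closest record for this imagery within max_km, else None."""
        best, best_d = None, max_km * 1000.0
        for r in self.records:
            if r[2] != imagery_id:
                continue
            d = _dist_m(lat, lon, r[0], r[1])
            if d < best_d:
                best, best_d = r, d
        return best
```

A submitted record near Helsinki is returned for queries in that area but not for Dublin, and queries in uncovered regions get None, matching the "see what is already defined nearby" idea above.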

This isn’t just about Bing

It’s tempting to pass opinions about this process that are really your opinions about using photography coming from Microsoft. If you do so you’re missing the point. Being able to calibrate imagery ourselves means that we can use a wider variety of sources, for instance photographs of poor countries commercial sources don’t bother with or up-to-date pictures after disasters or rapid development. Andrew 18:47, 25 March 2011 (UTC)

Recording offsets VS recording reference points

As has been mentioned on this page, keeping a database of offsets synchronized with an imagery service is a losing battle, mainly because there are many such services and they can update their imagery without us noticing. But the reference points (aka survey point) themselves do not suffer from that problem, since they (strive to) match the real world.

Here's an idea to let the editors use those survey points automatically:

  1. Setup the reference (needs human intervention)
    1. Find or create a known-good man_made=survey_point node
    2. Align the highest-res imagery available with that point
    3. Take a small "screenshot" of the imagery, centered around the survey point
    4. Record that screenshot somewhere, linked to the OSM node
  2. Use the reference (done automatically by the editor)
    1. Find the survey_point node with associated screenshot closest to the edited region
    2. Match the stored screenshot with the nearby service-provided imagery (could reuse algorithms from panorama-stitching software like Hugin)
    3. Align the imagery


Advantages:

  • Data is much more independent of the imagery provider (should even work across different providers)
  • Offset correction works even for pinch, zoom, and bad tiling (just keep aligning to the nearest survey point)
  • Many man_made=survey_point nodes already exist in OSM
  • The data can be stored in OSM (say an "imagery_calibration:jpeg=Base64EncodedBinary" tag on the node) without worrying about OSM-stored data not corresponding to a real-world object.
  • Editors could alert the user about existing survey_points without imagery_calibration data


Drawbacks:

  • Image-matching algorithms can be complicated; some of them are patented
  • Images may not always match, or the match may be untrustworthy (a dialog box could be shown in those cases)
  • Copyright issues could prevent us from storing the screenshot (we would need to negotiate the right to do so)
  • Even though the screenshot doesn't need to be big (say 128x128 pixels), it may be considered too big for an OSM tag, forcing us to use an external service.
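To put a number on that last point: OSM tag values are limited to 255 characters, and Base64 expands data by roughly a third, so even a compact screenshot cannot fit in a tag. A quick check of the arithmetic:

```python
import math

def base64_len(n_bytes):
    """Length of the Base64 encoding of n bytes: 4 * ceil(n / 3),
    i.e. a 4/3 expansion rounded up to whole 4-character groups."""
    return 4 * math.ceil(n_bytes / 3)
```

A 4 KiB JPEG, small as JPEGs go, already encodes to base64_len(4096) == 5464 characters, over twenty times the tag limit, so an external store (or a URL in the tag) looks unavoidable.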

In any case we'll probably need an editor dev to experiment with this idea before going further. What do you think? --Vincent De Phily 18:51, 25 May 2011 (BST)

This is great! Why do we need an automatic process for this? Even without image analysis, a manual process could be implemented, and editor software could assist the users in using it. Similar to man_made=survey_point, other well-visible objects could be tagged. Tagging would include a description of what the marked object is, and users would manually adjust the imagery layer to those "calibration" objects. (Editor software would help with menus, alerts, highlighting, etc.) Any OSM element could be marked as a calibration object, if it was surveyed very precisely and is well identifiable on the imagery.
For example: it could be two intersecting thin(!) ways, or the head of a well-visible statue. Tagged with their usual tags, plus calibration=yes, calibration:description=Center of a white statue on a dark square. An additional imagery screenshot may be useful, especially because the object may change. To start with, a user-provided screenshot on a calibration:url= value could be accepted. It may also be necessary to store the coordinates in a key/value, otherwise I don't know what happens if someone moves the calibration object to match the imagery. :-) It may require a special key/value in the last changeset to accept such a move without warnings, e.g. calibration=*.
And this does not need any software support to start with; users can start making/using it right away by finding calibration objects manually in the downloaded data (e.g. in JOSM). Rendering of real-world objects would change from their icon to a calibration cross (or this could be switchable). And precision info should be added, where we decide whether one such point is enough (commercial-quality surveys), or whether other users need to add several simple GPS points for the same object, with their center calculated (connected with a relation).
- Kempelen 00:28, 7 August 2011 (BST)
An example: node 1507296883. A painted 'target' is easily visible in Bing. I foresee osm-ers buying white paint, in gobs of 10... --Gorm 00:42, 22 November 2011 (UTC)
I agree that a better service would be based on recording reference points, and the idea of storing a small screenshot is a powerful one, letting us avoid storing offset information directly. That's clever! However the idea of storing this in a tag sounds like a horrible abuse of OSM data structures to me. It should be a separate database, but could be brought into the fold as a core OSM API if it proved popular.
With each little screenshot we'd need to store the lat/lon obviously (this being the lat/lon which a pro-mapper has figured out by GPS averaging, to be the correct location of whatever is at the centre of the screenshot)
Now being a suspicious sort of chap I wouldn't trust people to do this diligently, so I think it should also store (and require) references to a set of GPS tracks as evidence. The system could let people review each other's reference points based on this, and add their own tracks as evidence to reinforce or refute a location.
This is actually the nice thing about working with reference points. Because it's closer to the raw information we're basing offsets on, we can check each other's work in this way. With other schemes I would always in the back of my mind be thinking "was this offset shared by somebody incompetent, who thinks their GPS is perfect" (which is much the same as I always think when looking at offset data at the moment)
One bad thing about working with reference points is that it gets more complicated to use the data. The worst case is if you've got two reference points on screen at once, and their offsets aren't exactly the same. In an ideal world the editor would do rubbersheeting of the imagery between the two (which is complex)
A system that lets us gather these reference points, with tracks as evidence, is quite an achievable development goal though, and we might hope that the display and auto-interpretation of the data could get more sophisticated once this was available.
-- Harry Wood 18:09, 13 September 2012 (BST)
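Short of full rubbersheeting, an editor could blend the offsets of nearby reference points, e.g. by inverse-distance weighting. A hypothetical sketch of that middle ground:

```python
def interpolated_offset(lat, lon, refs):
    """Blend the offsets of nearby reference points by inverse-distance
    weighting - a cheap stand-in for the full rubbersheeting an editor
    would ideally do.  refs is a list of (lat, lon, east_m, north_m);
    returns the blended (east_m, north_m) at the query position."""
    num_e = num_n = den = 0.0
    for rlat, rlon, east, north in refs:
        d2 = (lat - rlat) ** 2 + (lon - rlon) ** 2
        if d2 == 0:
            return east, north  # exactly on a reference point
        w = 1.0 / d2
        num_e += w * east
        num_n += w * north
        den += w
    return num_e / den, num_n / den
```

With two reference points whose eastward offsets are 2m and 4m, a point midway between them gets 3m, and the blend shifts smoothly toward whichever point is closer.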


The means of correcting distortion in aerial photography is called orthorectification. If you are intent on replicating such a process then this discussion may be worthwhile. Orthorectification is the standard process used to create cartographic maps from aerial imagery; the literature on it describes why it is necessary when such imagery is used for mapping. Enzedrail 02:25, 3 January 2012 (UTC)

Active or Not

There is a metadata box saying the project is active, but the very first line of the article says it is not. Maybe someone can clear this up, and perhaps explain the reasons, alternatives, etc.