User:Mkv

From OpenStreetMap Wiki

Random thoughts about rendering

  • Recognizing ways, polygons etc as objects should be decoupled from rendering the objects
  • Render order should be decoupled from objects
  • Logical layers should deal with objects not render layers

Processing order

  • First step should be to recognize areas from ways, points are easy
    • Borders and large bodies of water are harder
    • Multipolygons and polygons are areas


Recognizing a set of tags as an object

So a tag is a key-value pair where either the key alone is significant or the whole key-value pair is significant. For instance, when drawing buildings there might be a rendering rule for generic buildings (gray) while some special buildings, e.g. building=residential, get the color pink. So when encountering building=wood or building=yes, the object should be drawn with the fallback rule.

So to identify the correct class for an object, some tests are run against the tags of the unknown object. A class might have multiple alternative rules, but a single class represents a single rendering type, so for instance all objects of the rendering class "highway-primary-trunk" are rendered to look the same.

A single matching rule of a class consists of a set of tagrules that must be found and a set of tagrules that must not be found. A tagrule consists of a mandatory key and an optional value. If the value is not specified, then it is assumed that the value doesn't matter, i.e. it is matched as a wildcard.

When thinking of tagrules as just a set of tags, the tagrule set must be a subset of the tags in the unknown object.

def is_subset_of(tagrules, object_tags):
    # Every positive tagrule must appear in the object's tags.
    for t in tagrules:
        if t not in object_tags:
            return False
    return True


def in_complementset_of(negative_tagrules, object_tags):
    # No negative tagrule may appear in the object's tags.
    for t in negative_tagrules:
        if t in object_tags:
            return False
    return True

Another way of thinking of this is that the "intersection of the tagrules and the object_tags == tagrules", or for the complement part: "intersection of the negative tagrules and the object_tags == empty set".
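The subset and complement checks collapse to one-liners with Python's built-in set operations. A sketch, assuming tags are hashable (key, value) tuples and using made-up example tags; note that a key-only wildcard tagrule would still need the explicit loop:

```python
# Illustrative example tags, not real rules.
object_tags = {("highway", "trunk"), ("bridge", "yes"), ("name", "dum dee")}
tagrules = {("highway", "trunk"), ("bridge", "yes")}
negative_tagrules = {("tunnel", "yes")}

# tagrules must be a subset of the object's tags...
matches = tagrules <= object_tags
# ...and no negative tagrule may occur in them.
no_forbidden = negative_tagrules.isdisjoint(object_tags)
```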

Perhaps making a string of all the key-value pairs sorted alphabetically and running a regex for each rule on it might be the way to go. Regexes are usually well optimized, so this might work performance-wise, especially because the regexes could be pre-compiled since the rules change rarely. Thus an object's tags would become a long string such as ";bridge=yes;;highway=trunk;;name=dum dee;" and the tagrule regex would be ".*;bridge=yes;.*;highway=trunk;.*". Inverse rules would also be regexes, so when you don't want to match bridge or tunnel you'd have the inverse rule "(;bridge=)|(;tunnel=)".
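A sketch of the pre-compiled rule idea, using the ";key=value;" string format and the example rules above (the helper name tags_to_string is illustrative):

```python
import re

def tags_to_string(tags):
    # Sort keys alphabetically so every rule regex can assume one order.
    return "".join(";%s=%s;" % (k, v) for k, v in sorted(tags.items()))

# Rules are compiled once, since they change rarely.
rule = re.compile(r".*;bridge=yes;.*;highway=trunk;.*")
inverse_rule = re.compile(r"(;bridge=)|(;tunnel=)")

tag_string = tags_to_string({"highway": "trunk", "bridge": "yes",
                             "name": "dum dee"})
# tag_string == ";bridge=yes;;highway=trunk;;name=dum dee;"
```

An object matches a class when its rule regex matches the tag string and its inverse-rule regex finds nothing in it.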

Advanced rendering thoughts

Entrances to buildings as arrows

  • Entrance must be part of building's wall
  • Need to know which side of building's wall is 'inside'
  • Need to know direction of wall to know alignment of arrow
  • Luckily the island algorithm solves the harder of these: the inside issue
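A sketch of the arrow geometry, assuming the building outline is stored counter-clockwise so the interior lies to the left of each wall segment (the coordinate convention and function name are assumptions):

```python
import math

def entrance_arrow(p1, p2):
    # p1, p2: consecutive wall nodes (x, y), walked counter-clockwise.
    # The normalized wall direction gives the arrow's alignment; rotating
    # it 90 degrees counter-clockwise points toward the interior.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(dx, dy)
    wall_dir = (dx / length, dy / length)
    inward = (-wall_dir[1], wall_dir[0])  # left-hand normal
    return wall_dir, inward
```

For a wall running from (0, 0) to (1, 0) on a counter-clockwise outline, the arrow points in the +y direction, into the building.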

Generic tips

Rendering with "painter's algorithm"

  • Painter's algorithm == practically all 2D APIs
  • Multipolygon is problematic because you need to punch holes revealing what's underneath
    • For each polygon, use scratch image which is off screen
    • Render the selected polygon
    • Set clip region to be the polygon (so that hole borders aren't rendered outside the polygon)
    • Erase all the hole parts
    • Draw the border(s) for the hole(s)
    • Move ready rendered surface to screen-buffer
  • Rendering stuff thousands of pixels outside of boundary is not always handled well.
    • Non-trivial to work around
  • Incomplete areas need to be artificially closed for the fill:
    • Need to decide which side of the line is going to get filled (clockwise or counter clockwise)
    • Algorithm could be:
      • Bring both start and end points to the corner closest to them.
      • while(startp != endp) add segment to endp to go to the next corner (either clockwise or counter clockwise depending on which side is inside)
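The closing loop can be sketched like this, assuming an axis-aligned tile whose corners are listed in the winding order that puts the filled side inside (the corner list and helper names are illustrative):

```python
def close_area(start, end, corners):
    # corners: the tile's four corners, listed clockwise or counter-
    # clockwise depending on which side of the line is inside.
    def nearest(p):
        return min(range(len(corners)),
                   key=lambda i: (corners[i][0] - p[0]) ** 2 +
                                 (corners[i][1] - p[1]) ** 2)

    # Bring both endpoints to the corner closest to them, then walk
    # corner to corner until the end meets the start.
    i, target = nearest(end), nearest(start)
    closing = [corners[i]]
    while i != target:
        i = (i + 1) % len(corners)
        closing.append(corners[i])
    return closing
```

The returned corner list is appended to the open way before filling it.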

Rendering on paper

  • Severe limitations:
    • No zooming: Either it's legible and intelligible or it's useless
    • No scrolling: Half a name is not useful, roads which have their names outside of shown area even less so
      • Label centering in visible area of object instead of centering in object
  • Stuff should be tweakable because it's a one shot deal
  • Color → grayscale can be automatic but can be improved by texturizing (patterns)
    • Black and white → must use textures

Handling clicks

In short: Make a second image where each object is rendered using a unique color. Keep a list of the color-to-object mapping so that given a color the corresponding object can be found. When a user clicks on the normal map, the color of the corresponding pixel in the color-coded map is checked and the object is looked up. Renderings of objects in the code image are simple because there is no need to render text or actual icons, just their bounding boxes.

  • For each map tile render also a code tile
  • Each object is rendered with a single unique color
  • One color per object means max 4 294 967 296 objects (32bit color space, with no alpha, 16 777 216 objects)
  • Using a look up table the object can be looked up using the color id.
  • Using the square's x,y and zoom the color only needs to be unique to the square → usually very few colors will be used out of all the possible ones → small png files which can be fetched on demand
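A sketch of the color-to-object mapping, packing a per-tile object index into an opaque 24-bit RGB value (function names are illustrative):

```python
def object_to_color(index):
    # Pack an object index into (r, g, b); with no alpha this allows
    # at most 16 777 216 distinct objects per tile.
    return ((index >> 16) & 0xFF, (index >> 8) & 0xFF, index & 0xFF)

def color_to_object(rgb):
    # Inverse mapping: decode the clicked pixel back to the index.
    r, g, b = rgb
    return (r << 16) | (g << 8) | b
```

On a click, the pixel at the same x, y is read from the code tile, decoded with color_to_object(), and used as an index into the per-tile object table.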

Browser based rendering, crowd sourced rendering cycles

Why: People want/like different styles for their maps, which has spawned the open piste map, the cycle map and, iirc, an orienteering map. There are also people with poor eyesight who would prefer high-contrast maps, color blind people of different kinds, and then there are localized maps for people who speak different languages. However, rendering maps takes processing power, and CPU time is precious.

Competition: Osmarender and tiles@home provide ways to contribute maps, but they require special software to be installed locally, and the functionality has not been used to render maps other than the OSM osmarender map.

  • SVG rendering in the browser has been tried and found to be too slow
  • Some really fancy stuff has also been done with the 3D WebGL API, and props for that.

The 2d api is powerful enough

  • I've implemented a couple of proofs of concept, both destroyed due to wholly incompetent system administrators together with a single hard drive failing.
  • Speed can be really good given some pre-computation or hacks (rendering a very non-trivial part of a map at 1024x1024 took < 10 seconds on a 2 GHz machine without pre-computation)

Difficult stuff

  • Dashing, i.e. patterned lines
  • multipolygons: Solution outlined above
  • text along a path: Mozilla has some support, but it's not nearly good enough because it doesn't do align=middle. No perfect solution exists right now in the canvas API.
    • Is an overlay of SVG an option? Useless due to poor browser support
    • Server side rendering of text? Hard but possible
    • Custom code for creating the path for text along a path? Difficult and quality might be an issue.

Benefits

  • Browser can POST the rendered PNG directly to the server (See canvas.toDataURL() )
    • Image needs only be rendered once; the ready-made PNG would be used whenever it's available
    • Works for legacy browsers
    • Users could leave their browser on a page which could constantly feed more content to be rendered
  • The "handle clicks" image could also be rendered by the browser
  • Images could be created as layers and then combined so that only a single layer would need to be re-rendered when it changes.
  • Users could create their custom styles right in the browser and see the effects live

Lessons learned

  • Browser loading XML is slow compared to JSON
  • Small JSON without useless info (timestamps, creators) is much faster than a larger JSON file
  • Doing way → object can be very expensive
    • Should really be a step by step increase in abstraction: way → area → residential building
  • Rendering at high zoom levels is easy compared to rendering hundreds of square km
    • Render the low zoom levels first at high zoom levels, shrink and combine?
      • How does it work when the whole tile should just be a small piece of the letter "o" in "Europe"?

Open issues

  • Smart collision avoidance with features, text and labels
  • How to do nice text rendering
  • How to do fast "way with a bunch of tags" → "some object" mapping
  • How to represent multipolygons, labels icons etc in "some object"
  • Who wants to write all of the code

Drawing text

Main idea is to use fillText() to draw straight text but to use transformations to align it correctly. The amount of text that can be drawn on each segment is calculated using measureText().

  • Assume we have a way on which we want to render text.
  • Go to first node using translate()
  • ctx.textBaseline = "middle"
  • surplus = 0;
  • get distance between nodes (way-segment's length)
  • get the amount of text we can render on that using measureText(), dropping characters from the end until measureText() < (segmentLength + surplus)
  • surplus = segmentLength + surplus - measureText()
  • draw the selected amount of text
  • calculate angle between first and second segment (call this alpha)
  • translate context to segmentLength + tan(alpha/2)*fontHeight
  • rotate(alpha)
  • translate by tan(alpha/2)*fontHeight
  • Calculate length of new segment and start loop again

The surplus stuff is needed because sometimes we're rendering a lot of very short segments, which means a single segment can never fit any text, which in turn results in us never rendering any text. Carrying over the surplus (the unused length of the previous segments) fixes that.
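The per-segment loop can be sketched without a canvas by mocking measureText() with a fixed character width (the 8 px width, the node list and the function names are assumptions; the transforms and actual drawing are left out):

```python
import math

CHAR_W = 8.0  # mocked measureText(): every character is 8 px wide

def measure_text(s):
    return len(s) * CHAR_W

def layout_text_along_way(text, nodes):
    # Returns (segment_index, chars) chunks: how much of the text is
    # drawn on each way segment. The surplus carries unused length
    # forward so runs of very short segments still accumulate room.
    chunks, surplus, remaining = [], 0.0, text
    for i in range(len(nodes) - 1):
        (x1, y1), (x2, y2) = nodes[i], nodes[i + 1]
        seg_len = math.hypot(x2 - x1, y2 - y1)
        fit = remaining
        while fit and measure_text(fit) > seg_len + surplus:
            fit = fit[:-1]              # drop characters from the end
        if fit:
            chunks.append((i, fit))
            remaining = remaining[len(fit):]
        surplus = seg_len + surplus - measure_text(fit)
        if not remaining:
            break
    return chunks
```

With text "Main" and nodes (0,0), (10,0), (20,0), (60,0), each 10 px segment only fits one 8 px character, and the leftover 2 px is carried to the next segment; the remaining "in" lands on the long final segment.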