Talk:Featured tile layers/Archive 1


Standard layer source code link

Should Standard's source code link (in the Free/Open column) be replaced by openstreetmap-carto? Curiosity question really, not wanting to start a flame war by any means. --Nighto (talk) 23:37, 9 July 2014 (UTC)

Resolved: Mateusz Konieczny (talk) 22:34, 11 June 2020 (UTC)

Wiki page for the Standard layer

Weird. Is there really just no wiki page for the Standard layer yet or have you just forgotten to link it? ;-) --Wuzzy (talk) 15:19, 11 April 2015 (UTC)

I see you created it: Standard tile layer. Probably a good idea. There's quite a lot of information around the wiki about OpenStreetMap tile serving infrastructure, which (for now at least) all relates to this "standard tile layer", and the hosting thereof. And then there's the style, which personally I do like to refer to as the "standard style" although the github repo is called "openstreetmap-carto". The page you've created focusses a lot on the style at the moment. -- Harry Wood (talk) 16:02, 27 May 2015 (UTC)
Resolved: Mateusz Konieczny (talk) 22:34, 11 June 2020 (UTC)

Caching issues

The front-end layer problem is probably caused by the fact that the front-end servers have 3 domain names mapped to the same site, while those front-end servers are in fact backed by a farm of proxy caches that are frequently out of sync with each other.
There are 3 domain names in order to allow browsers to perform more than 4 simultaneous parallel requests per domain (the default setting in browsers). This allows 12 parallel requests to the tile servers (in 3 parallel HTTP sessions, if the browser supports HTTP queueing).
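The domain-sharding scheme described above can be sketched as follows. The a/b/c.tile.openstreetmap.org subdomains are the ones the site actually used; selecting one by a simple hash of the tile coordinates is only an illustrative assumption here, not what any particular client library does:

```python
# Sketch: spread tile requests across the three tile subdomains so the
# browser's per-host connection limit applies to each host separately.
# The a/b/c subdomains are real; picking one by (x + y) % 3 is an
# illustrative assumption, not what any given client actually does.

SUBDOMAINS = ["a", "b", "c"]

def tile_url(z: int, x: int, y: int) -> str:
    """Build a tile URL, sharded across the three hostnames."""
    sub = SUBDOMAINS[(x + y) % len(SUBDOMAINS)]
    return f"https://{sub}.tile.openstreetmap.org/{z}/{x}/{y}.png"

if __name__ == "__main__":
    # Neighbouring tiles land on different hosts, enabling parallel fetches.
    for x in range(3):
        print(tile_url(12, 2048 + x, 1362))
```

Because DNS may resolve each of these hostnames to a different proxy cache, two adjacent tiles can legitimately come from caches holding different versions of the data, which is the desynchronization described above.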
However, all these requests go to 3 front-end proxy caches that are selected depending on which IP is returned (randomly?) by the DNS client used by the visitor. You cannot predict which front-end proxy cache you will reach, and they are not always the same (a client could be connected to three distinct proxy caches that hold different versions of the same tileset).
As a result, every one in three tiles you get may have been generated at a different time, from a different version of the data.
When the tiles are loaded in the current page, the queuing HTTP sessions used by the browser are closed. If you then pan the map slightly (or zoom in and out) and refresh it, or come back to the map after visiting another page, the metadata returned with each tile may or may not match: one third or two thirds of the tiles may be refreshed, but not the remaining subset.
The browser will then keep a local cache of these desynchronized tiles, and there are quirks in when each tile expires in the browser cache, as they do not all have the same expiry time.
To get a coherent version, you sometimes need to clear the browser cache before refreshing the page, so that all tiles are fetched again.
However, there are also issues when tiles have been recently updated in OSM data. Not all tiles are drawn at the same time: they are drawn in sets named "supertiles", which are in fact 4x4 tiles for a total of 1024x1024 pixels. Browser clients request individual 256x256 tiles, so they do not download the same part of the supertile, and they get different versions of it depending on which front-end proxy cache they reach, since those caches hold different versions of the supertile.
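The supertile grouping above can be sketched as a simple coordinate mapping. The 4x4 block size is taken from the comment itself (mod_tile's metatiles are commonly 8x8, so treat the constant as an assumption); the point is only that all tiles in one block share a single generation pass:

```python
# Sketch of the "supertile" grouping described above: tiles are rendered
# in blocks, so every tile in a block shares one generation timestamp.
# The 4x4 block size follows the comment above (mod_tile metatiles are
# commonly 8x8); treat it as an illustrative assumption.

SUPERTILE_SIZE = 4  # tiles per side: 4 x 256px = 1024x1024 pixels total

def supertile_of(x: int, y: int) -> tuple[int, int]:
    """Return the (column, row) of the supertile containing tile (x, y)."""
    return (x // SUPERTILE_SIZE, y // SUPERTILE_SIZE)

# Tiles (8,4) and (11,7) share a supertile; (12,4) is in the next one over,
# so after a re-render they can briefly show different data versions.
```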
Depending on the javascript framework used to render the map in webpages, the effect of caching, how HTTP sessions are parallelized, and how web browsers are configured (whether HTTP queueing is allowed, or the setting of the maximum number of parallel HTTP requests per domain), you get different results. I have seen that the "OpenLayers" javascript framework works better here than other legacy frameworks, which are confused about how they should manage the local cache for their queries. With these alternate javascript frameworks it can take hours or days to get a refreshed tile, because the tiles returned by the OSM front-end proxies have issues in their returned metadata, notably their incoherently set expiration times: an "HTTP HEAD" request on the tile servers really does return different expiration times and last generation dates for tiles in the same supertile. In fact, the OSM front-end proxy caches do not correctly track the generation date of supertiles by the backend renderers.
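The HEAD-request check described above can be sketched like this. The header-comparison helper is pure; the network part (using the real tile hostnames, with an assumed illustrative User-Agent string) only runs when the script is executed directly:

```python
# Sketch of the HTTP HEAD check described above: fetch the same tile's
# headers from each front-end cache and compare Expires / Last-Modified.
# The hostnames are the real tile subdomains; the User-Agent value is an
# illustrative assumption.
from urllib.request import Request, urlopen

def headers_disagree(header_sets, keys=("Expires", "Last-Modified")):
    """True if any of the given header dicts disagree on the listed keys."""
    return any(len({h.get(k) for h in header_sets}) > 1 for k in keys)

def head_tile(sub: str, z: int, x: int, y: int) -> dict:
    """Issue a HEAD request for one tile and return its response headers."""
    url = f"https://{sub}.tile.openstreetmap.org/{z}/{x}/{y}.png"
    req = Request(url, method="HEAD",
                  headers={"User-Agent": "cache-check-sketch"})
    with urlopen(req, timeout=10) as resp:
        return dict(resp.headers)

if __name__ == "__main__":
    header_sets = [head_tile(s, 12, 2048, 1362) for s in ("a", "b", "c")]
    print("caches disagree:", headers_disagree(header_sets))
```

If the three caches were in sync, all three responses would carry identical expiry metadata for the same tile; the comment above reports that in practice they do not.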
So yes, you can get incoherent maps where you see that tiles "do not connect" where they should, even after refreshing the page in the browser, and even when the requested tiles all belong to the same supertile that has been completely refreshed by the backend renderers: the proxies do not detect this and keep caching and returning old versions of these tiles even when clients perform a refresh.
Most of these problems are avoided with the OpenLayers framework (OpenLayers better parses the metadata for local caching); with other frameworks, this is not so effective.
This is not really a problem or bug in browsers, but a matter of bugs in those javascript frameworks (hosted by front-end web servers) and of how caching is configured on the OSM front-end proxies (and how they expect the client frameworks to query their tiles). Not all javascript frameworks really conform to the HTTP standard in how they handle the returned metadata, but there are a few HTTP conformance issues on the OSM proxies too, in how this metadata (for HTTP HEAD requests) is updated: there is a lack of communication somewhere between the backend renderers and the front-end proxies to track the correct update times, and this confuses the client-side javascript frameworks in their own requests. — Verdy_p (talk) 14:46, 19 April 2016 (UTC)
I think this sort of information should be put in Standard tile layer and not on this page whose main purpose is to explain the tile layers available on the website and how they are selected. Also, this caching problem should probably be reported somewhere, like on Github as an issue, or a new thread on the dev mailing list. —seav (talk) 15:21, 19 April 2016 (UTC)
This was a reply to the previous comment, trying to explain what he sees. Yes there are some related issues but the interactions are complex between servers, front-end caches, client-side javascripts hosted by web servers, and browser settings or capabilities.
I did not intend to file a bug report. But the previous message thread (by Wuzzy) was not explicit enough, so my reply was accurate here (and was part of the previous thread, which you have incorrectly separated from its context)...
Note also that there was a similar comment (by User8192) on the article page (later removed by you, Seav), also showing that problem (but not trying to explain what was really wrong). — Verdy_p (talk) 16:00, 19 April 2016 (UTC)
Also, my comment is absolutely not specific to the "standard layer"; it can apply to any layer hosted elsewhere. This is more a problem of the javascript framework(s) in general, even if there are interactions with how the Mapnik renderers interoperate with the front-end tile servers and their front-end proxies. There are similar issues in JOSM as well (which does not use a browser's cache but its own local cache, which does not conform to the HTTP standard, and still has issues getting refreshed tiles from the tile servers). — Verdy_p (talk) 16:06, 19 April 2016 (UTC)
Resolved: this is offtopic here and not actually useful anyway Mateusz Konieczny (talk) 22:36, 11 June 2020 (UTC)


Resolved: Mateusz Konieczny (talk) 22:40, 11 June 2020 (UTC)