Overpass API/status

From OpenStreetMap Wiki

Please report server failures here, adding the most recent entry first (and update the corresponding row in Platform Status accordingly).

Note: This page is only meant for operational issues (server not available, areas outdated, ...). Please report software bugs on the Overpass API GitHub page.


Area generation seems to be no longer running on gall.openstreetmap.de; the last update was on 2024-06-13T01:59:49Z.

mmd (talk) 15:08, 28 June 2024 (UTC)

Thank you for the notification. The service has been restarted; the reason for the stop was most likely a side effect of the vandalism. drolbr (talk) 2024-06-30 06:55 UTC


The public instance gall.openstreetmap.de is currently unavailable because a repair attempt for another bug damaged the database. Recovery is under way. --drolbr (talk)

The recovery is now complete. --drolbr (talk)


We're seeing a lot of QuickOSM (and other Overpass) users who are still trying to connect to one of the old servers, z.overpass-api.de and lz4.overpass-api.de. At this time, both servers still happily accept requests. However, instead of returning a clear error message, both servers do not answer at all: the request just hangs until, after one or two minutes, it fails with an HTTP timeout.

This behavior is super confusing and causes lots of reports by QuickOSM users (see list below).

I'm wondering why Apache is even running on the old boxes. Could they be shut down, or could a clear error message be returned instead?

Here's a list of issues I'm aware of:

mmd (talk) 17:38, 12 April 2023 (UTC)

Thank you for conveying the information about the problem. I have now configured the DNS to point z.overpass-api.de and lz4.overpass-api.de to the new servers. The change will propagate within the next 24 hours or so. --drolbr (talk)
Thanks for updating the DNS config. I can see now that lz4.overpass-api.de points to lambert.openstreetmap.de, and z.overpass-api.de to gall.openstreetmap.de.
We still seem to have some issue with non-matching certificates, at least my browser gives me a security warning when accessing https://z.overpass-api.de/ ...
So either another virtual host would be needed in the Apache config, or maybe a server alias would do. The Let's Encrypt certbot also needs to know about both lz4 and lambert (https://community.letsencrypt.org/t/best-practice-for-multiple-domains-on-single-server/123262/3), either as two separate certificates, or maybe even a combined one for both servers, in case that's feasible. mmd (talk)
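The browser warning makes sense: after the DNS change, https://z.overpass-api.de serves the certificate of gall.openstreetmap.de, whose subjectAltName does not list z.overpass-api.de. As a much simplified illustration of the matching rule a client applies (a hypothetical helper, not a real browser implementation; real clients follow RFC 6125 with more edge cases):

```python
# Simplified sketch of how a TLS client matches the requested hostname
# against a certificate's subjectAltName DNS entries (hypothetical helper).

def hostname_matches(hostname, san_dns_names):
    host_labels = hostname.lower().split(".")
    for san in san_dns_names:
        san_labels = san.lower().split(".")
        if len(san_labels) != len(host_labels):
            continue
        # A leading "*" matches exactly one label; all other labels must be equal.
        if all(s == h or (s == "*" and i == 0)
               for i, (s, h) in enumerate(zip(san_labels, host_labels))):
            return True
    return False

# A certificate issued only for gall.openstreetmap.de does not cover
# z.overpass-api.de, hence the security warning:
print(hostname_matches("z.overpass-api.de", ["gall.openstreetmap.de"]))  # False
print(hostname_matches("z.overpass-api.de",
                       ["gall.openstreetmap.de", "z.overpass-api.de"]))  # True
```

A ServerAlias plus a certificate listing both names (two separate certificates or one combined SAN certificate, as suggested above) would make this check pass.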


Update: I was successful from a different machine; no errors.

2023-03-21 11:52:14 (UTC -7)

 Resolving overpass-api.de (overpass-api.de)...,, 2a01:4f8:110:502c::2, ...
 Connecting to overpass-api.de (overpass-api.de)||:443... connected.
 ERROR: The certificate of 'overpass-api.de' is not trusted.
 ERROR: The certificate of 'overpass-api.de' has expired.

I cannot reproduce the issue here. The machine in question is lambert.openstreetmap.de, which had not existed before Feb 25 and thus has a certificate from around that time with three months' validity. Some third party (could have been a firewall or antivirus on your computer, not necessarily an evil agent) must have interfered with your connection. Nonetheless, thank you for the report. --drolbr (talk)


The dev server is currently down. The reason is that preparations for a dist upgrade have gone wrong. --drolbr (talk)

The dev server is back to normal operations. --drolbr (talk)


Areas are currently 11 days old. Has the system stopped updating them?

Adavidson (talk) 10:36, 31 July 2022 (UTC)

Still seems broken ([1]) Mateusz Konieczny (talk) 02:09, 6 October 2022 (UTC)

Areas are up to date again Adavidson (talk) 22:57, 23 February 2023 (UTC)


lz4 answers with an empty list when querying on an area, while z actually returns results.

e.g. lz4 (currently empty) vs. the same query on z (containing results)

-- seems fixed now Pango86


Both lz4 and z are completely overwhelmed with requests and basically unusable.

Since Mar 17, some well-known commercial users have started abusing the service with a large number of concurrent requests, massively exceeding the permitted usage policy of 10'000 requests per day. As a result, both lz4 and z only handle about 20%-40% of the typical number of requests per minute. Also, dispatcher-granted memory is constantly at 16 GB, making it impossible for most users to run their query, and CPU usage has been at 100% on both servers since yesterday.

My suggestion would be to block those abusive users at the IP level until further notice. Mmd (talk) 06:55, 18 March 2022 (UTC)

Thank you for insisting that I have a further look into it.
It is unfortunately not that easy. First of all, there is a pattern of misuse, as one can see on this and this analytics page for internal use.
The first one lists the resource use per type of request. The first and second columns are CPU use on z and lz4, the third and fourth columns are the number of unique IP addresses seen on z and lz4, and the fifth and sixth columns are the number of requests. The seventh column is the request hash, and the eighth and ninth are partial sums. The request hash is computed after stripping coordinates.
The requests look like
      node["tourism"="hotel"](42.1, -85.2, 42.103, -85.2317);
      way["tourism"="hotel"](42.1, -85.2, 42.103, -85.2317);
      relation["tourism"="hotel"](42.1, -85.2, 42.103, -85.2317);
      node["tourism"="hostel"](42.1, -85.2, 42.103, -85.2317);
      way["tourism"="hostel"](42.1, -85.2, 42.103, -85.2317);
      relation["tourism"="hostel"](42.1, -85.2, 42.103, -85.2317);
      node["tourism"="motel"](42.1, -85.2, 42.103, -85.2317);
      way["tourism"="motel"](42.1, -85.2, 42.103, -85.2317);
      relation["tourism"="motel"](42.1, -85.2, 42.103, -85.2317);
      node["tourism"="guest_house"](42.1, -85.2, 42.103, -85.2317);
      way["tourism"="guest_house"](42.1, -85.2, 42.103, -85.2317);
      relation["tourism"="guest_house"](42.1, -85.2, 42.103, -85.2317);
    out center;
but the coordinates scatter over the planet without obvious pattern.
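The coordinate-stripping hash described above groups requests that differ only in their bounding box into one bucket. A minimal sketch (the exact normalisation and hash function used by the analytics page are not documented here, so the regex and MD5 are assumptions):

```python
import hashlib
import re

def request_hash(query):
    """Hash a query after stripping numeric coordinates, so that requests
    differing only in their bounding box collapse to the same bucket
    (assumed normalisation, for illustration only)."""
    normalised = re.sub(r"-?\d+(?:\.\d+)?", "", query)
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

# Same query shape with different coordinates yields the same hash:
a = request_hash('node["tourism"="hotel"](42.1,-85.2,42.103,-85.2317);out;')
b = request_hash('node["tourism"="hotel"](51.5,-0.1,51.6,0.0);out;')
print(a == b)  # True
```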
Similarly, the IP addresses are scattered over quite a wide range and originating from apparently all continents. Not from cloud providers, at least no known large ones. No significant overlap with suggested blocklists. All requests are unencrypted. A typical logline:
[17/Mar/2022:00:00:34 +0000] runtime: 20, return size: 31151, 167493, status: 200, remote host: 162.212.168.X, completed: -, query string: /api/interpreter?data=[...], referer: -, user agent: python-requests/2.26.0
It looks like the event creating the load stopped at about 2022-03-18 12:00 UTC -- drolbr (talk)
Thanks again for looking into this issue. I've also noticed the query above, which appears to be fairly broken: for some reason the east and west longitude values have been flipped, so those queries always cross the antimeridian, which is fairly expensive due to the resulting large bounding box.
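For context: Overpass QL bounding boxes are given as (south, west, north, east), and when the west value is numerically larger than the east value the box is interpreted as wrapping around the antimeridian. A tiny sketch of the check:

```python
def crosses_antimeridian(south, west, north, east):
    """Overpass QL bounding boxes are (south, west, north, east);
    west > east means the box wraps around the 180-degree meridian."""
    return west > east

# The flipped bbox from the abusive queries above wraps nearly the
# whole globe in longitude; the intended box does not:
print(crosses_antimeridian(42.1, -85.2, 42.103, -85.2317))  # True
print(crosses_antimeridian(42.1, -85.2317, 42.103, -85.2))  # False
```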
Actually, I noticed another team with a user agent starting with PycURL/ libcurl/7.81.0 OpenSSL/1.1.1l zlib/1.2.8; they were sending lots of queries to lz4 in particular on Mar 16. The last time this happened, a few months ago, both servers were fairly unresponsive for most of a weekend. Mmd (talk) 20:25, 19 March 2022 (UTC)
The misuse has stopped for the time being, and the software does not have an appropriate automatic mechanism to weather this kind of misuse pattern. Thus, it is now rather a software issue than an operational one.

Known offline

overpass-turbo.eu -> overpass.openstreetmap.fr:

An error occurred during the execution of the overpass query! This is what overpass API returned: Error: runtime error: […] The server is probably too busy to handle your request. 

Query: Anything. Have not been able to get any queries to run for 24 hours despite trying a few times per hour.

The instance overpass.openstreetmap.fr has been known to be down since about the time of the large relation incident. -- drolbr (talk)
I am using //overpass-api.de/api/ and it is still down using many different sources and queries. -- jvermast

Known offline

overpass-turbo.eu -> overpass.openstreetmap.fr:

runtime error: open64: 2 No such file or directory /data/project/overpass/database/way_tags_global_attic.bin File_Blocks::File_Blocks::1


[date:"2019-08-29T07:25:28Z"]; way [tracktype](user:Jaffs)(area:3606195356); out meta geom;

Simpler queries are OK. Zstadler (talk) 07:31, 31 December 2021 (UTC)

See above. -- drolbr (talk)


Somehow z has not been updating for a few hours. Mmd (talk) 21:05, 15 December 2021 (UTC)

Fixed quite some time ago. I'm sorry, I had overlooked the entry. -- drolbr (talk)


z not updating?

https://z.overpass-api.de/api/interpreter?data=%5Bout%3Ajson%5D%3B%0Anode%28id%3A1%29%3B%0Aout%3B returns:

timestamp_osm_base: "2021-12-09T12:59:53Z"

--Ikonor (talk) 15:40, 9 December 2021 (UTC)

Yes, I can confirm that. Update process restarted, the server is now catching up. --drolbr
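Stalls like this are easy to spot by comparing the osm_base timestamp against the wall clock. A minimal sketch (timestamp format as in the output quoted above; the helper name is made up for illustration):

```python
from datetime import datetime, timezone

def replication_lag(osm_base, now=None):
    """Return the database lag in seconds, given the osm_base timestamp
    reported by the API (format as in the output quoted above)."""
    base = datetime.strptime(osm_base, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (now - base).total_seconds()

# In the report above, osm_base was stuck at 12:59:53Z while the report
# was signed at 15:40 UTC -- a lag of roughly 2 h 40 min:
print(replication_lag("2021-12-09T12:59:53Z",
                      datetime(2021, 12, 9, 15, 40, tzinfo=timezone.utc)))  # 9607.0
```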


z not functional. Responds to ping, but http://z.overpass-api.de/api/status is unreachable. Mashin (talk) 15:00, 6 October 2021 (UTC)

Apache had been down for most of the day. This has meanwhile been fixed. drolbr


lz4 seems to be down, even ping isn't working right now. Mmd (talk) 16:48, 5 October 2021 (UTC)

The server has crashed; the cause is unknown. To ensure sufficient capacity, I have rebuilt the server from scratch. --drolbr
Areas are currently missing here; the generation script failed. The exact reason is unclear, but it is certainly due to the rebuild of the server. drolbr
Areas are fixed now, too. --drolbr


Diff replication stopped working today on both production instances due to a Let's Encrypt root certificate expiration. The DST Root CA X3 root certificate expired September 30 14:01:15 2021 GMT. Mmd (talk) 16:07, 30 September 2021 (UTC)

https://twitter.com/bentolor/status/1441319097766068237 worked for me. Mmd (talk) 18:18, 30 September 2021 (UTC)
Fixed. Servers are again fetching diffs. --drolbr 2021-09-30 21:40 UTC


lz4.overpass-api.de is still unreachable. --Aleksanb-hyre 2021-07-21 12:50 UTC

Hi, could you please send me your IP address, e.g. by mail or to user drolbr on osm.org? While the server is currently reachable in general, there have been reports of blocking, but I never figured out which IP network exactly blocks the servers. --drolbr 2021-07-21 16:25 UTC
Found the address for some queries. Looks like the ordinary mechanism. --drolbr 2021-08-26 20:55 UTC


lz4.overpass-api.de is unreachable. --drolbr 2021-07-20 12:30 UTC

Server is back. Looks like the server had totally lost its network connectivity. --drolbr 2021-07-20 15:20 UTC


Again it looks like the server data is about 3 hours out of date. Here is an example:

  • Request: 2020-11-04T14:20:00Z
  • Data: 2020-11-04T10:53:03Z

--Wille (talk) 14:21, 4 November 2020 (UTC)

Might be some networking issues affecting both *.overpass-api.de servers. In the meantime, both servers are catching up, or are already current again. See http://lz4.overpass-api.de/munin/localdomain/localhost.localdomain/osm_db_lag.html and http://z.overpass-api.de/munin/localdomain/localhost.localdomain/osm_db_lag.html


Again it looks like the server data is outdated. Here is an example:

  • Request: 2020-10-15T10:50:18Z
  • Data: 2020-10-14T19:25:03Z

--ChrissW-R1 (talk) 11:01, 15 October 2020 (UTC)

This affects only one of the two production servers (lz4) Mmd (talk) 19:35, 15 October 2020 (UTC)


https://z.overpass-api.de/api/interpreter?data=%5Bout%3Acustom%5D%3Bnode%5Bplace%5D%5Bname%3D%22Bonn%22%5D%3Bout%3B is missing the template directory.

https://lz4.overpass-api.de/api/interpreter?data=%5Bout%3Acustom%5D%3Bnode%5Bplace%5D%5Bname%3D%22Bonn%22%5D%3Bout%3B has them, though.

(Previously reported, incorrectly, on GitHub.) JesseFW (talk) 20:54, 20 September 2020 (UTC)

Thank you for the report. I have fixed it now. --drolbr 2020-09-21 04:54 UTC.
Confirmed, thanks! JesseFW (talk) 13:56, 21 September 2020 (UTC)


Outdated server? In the past the Overpass API server was only about one minute behind the live data. Now there is a gap of several hours!

  • Request: 2020-08-12T08:51:32Z
  • Data: 2020-08-11T19:04:03Z

ChrissW-R1 (talk) 09:47, 12 August 2020 (UTC)

I can confirm this. Maybe of additional relevance: not only the main instance seems to be affected, but all currently running and minutely-updated servers seem to be stuck at some point yesterday.
I noticed that around the time when the Overpass lag started (16:20-ish UTC on August 11), there was also an issue with the production of minutely diffs on planet.osm.org (see https://planet.osm.org/replication/minute/004/146/): there were no diffs generated for about 80 minutes, and the following minutely diffs were much larger than the "regular" ones. Maybe this is related and helps debugging the issue?
--Tyr (talk) 12:48, 12 August 2020 (UTC)
I'm suspecting some of those minutely diffs to be incomplete but I haven't looked into this in more detail. Mmd (talk) 13:12, 12 August 2020 (UTC)
I have identified the issue: minutely diff file 004/146/693.osc.gz has been written twice (!) by osmosis. All Overpass instances have picked up a file with 24k, while the final version of that file has 920K. The lack of nodes leads to a subsequent crash. Mmd (talk) 14:04, 12 August 2020 (UTC)
I see. Do you have any recommendation for how to approach fixing an affected stuck instance? --Tyr (talk) 16:03, 12 August 2020 (UTC)
See https://github.com/drolbr/Overpass-API/issues/591 - you need to get rid of the faulty osc.gz, restore a backup or download a recent clone, then start applying minutely diffs again... Mmd (talk) 17:20, 12 August 2020 (UTC)
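Incidents like the duplicated 693.osc.gz above suggest sanity-checking a suspect diff file before applying it. A heuristic sketch (not part of the official tooling): a usable .osc.gz must both decompress fully and parse to a closed <osmChange> root.

```python
import gzip
import os
import tempfile
import xml.etree.ElementTree as ET

def osc_is_complete(path):
    """Heuristic: a minutely diff is usable only if the gzip stream
    decompresses fully and the XML parses to a closed <osmChange> root."""
    try:
        with gzip.open(path, "rb") as f:
            data = f.read()          # raises EOFError on a truncated stream
        root = ET.fromstring(data)   # raises ParseError on unterminated XML
        return root.tag == "osmChange"
    except (OSError, EOFError, ET.ParseError):
        return False

# Demo with a synthetic diff: the complete file passes, a truncated copy fails.
tmp = tempfile.mkdtemp()
good = os.path.join(tmp, "693.osc.gz")
with gzip.open(good, "wb") as f:
    f.write(b'<osmChange version="0.6"><modify/></osmChange>')

trunc = os.path.join(tmp, "693-partial.osc.gz")
with open(good, "rb") as f:
    blob = f.read()
with open(trunc, "wb") as f:
    f.write(blob[: len(blob) // 2])

print(osc_is_complete(good))   # True
print(osc_is_complete(trunc))  # False
```

Note this would not have caught the incident above on its own if the 24k version was itself a well-formed document; it only catches truncation mid-stream.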
Thank you for reporting this. There are now clean databases to clone from available, and the lz4 instance is back to normal. z will follow over the course of the day. Rambler apparently has not crashed, but for sure needs a database refresh, too. drolbr 2020-08-13 07:07Z
z is now also back to normal, areas are just refreshing.
Please update to the newest release to avoid future hiccups. I'm grateful that mmd has investigated and that TomH has clarified here in the wiki the detailed download order; the newest release adheres to that. drolbr 2020-08-13 14:03Z


The two servers deliver different data for the same request; the reason could be outdated areas on one of them.

2a01:4f8:110:502c::2 has "Areas Time : 2020-03-06 11:03:01 UTC"

2a01:4f8:120:6464::2 has "Areas Time : 2020-07-15 16:11:03 UTC"

For details, see: German forum


HTTPS certificate has expired for overpass-api.de · Issue #579 · drolbr/Overpass-API –Simon04 (talk) 20:55, 14 June 2020 (UTC)

Automated certificate renewal fixed the issue in the meantime. Roland


There seem to be a lot of timeout errors occurring lately with the API. Many augmented diff requests timed out --Pitscheplatsch 18:00, 26 October 2019 (UTC)

There is indeed more traffic on the server, and this may slow down certain queries (depending on what the OS does with disk priority and cache utilisation) --Roland


Certificate error for Overpass API Achavi (e.g. https://overpass-api.de/achavi/?changeset=64156105) --Pitscheplatsch (talk) 08:00, 4 November 2018 (UTC)

Automated certificate renewal fixed the issue in the meantime. Mmd (talk) 09:44, 4 November 2018 (UTC)


30/10-18 There seem to be a lot of timeout errors occurring lately with the API. I run queries in JOSM and this timed out repeatedly despite the recent change of the default timeout from 25 s to 90 s:

  • "name~/apple store/i or brand~/apple store/i in canada" (needs a timeout of 290 to succeed) (21 nodes returned)
  • place=locality in sweden (returned 10741 nodes the first time with the timeout set to 200) --PangoSE (talk) 12:09, 30 October 2018 (UTC)
There is indeed more traffic on the server, and this may slow down certain queries (depending on what the OS does with disk priority and cache utilisation) --Roland

Needs more info

8/9/2018 I can run a query, but exporting the data to JOSM usually fails with a "read timed out" message in JOSM. Sometimes it works, sometimes it doesn't. Only exporting a few polygons or ways.

Please provide additional details, e.g. where do you run your query, which query do you run, which Overpass API instance you use, which JOSM version, etc. Mmd (talk) 12:41, 10 August 2018 (UTC)


http://overpass.openstreetmap.ru/cgi/interpreter is down (Connection reset by peer). Is it possible to revive it? -- Gryphon

This has already been reported, see below. Mmd (talk) 18:41, 7 August 2018 (UTC)
Is there any chance to see it alive again, or is it lost forever? -- Gryphon
Can't you use one of the other instances? This instance was always a bit slow and lagged behind several hours. Mmd (talk) 19:44, 8 August 2018 (UTC)
Both German servers were blocked by the Russian government during the latest witch hunt, so an Overpass server (even a laggy one) inside the perimeter would be rather useful. -- Gryphon


Certificate error in {lz4,z}.overpass-api.de prevents overpass-turbo.eu from working --Josemoya (talk) 10:23, 5 August 2018 (UTC)

For the time being you can start overpass turbo via http://overpass-turbo.eu - this should automatically use the Overpass server via HTTP instead of HTTPS. Mmd (talk) 10:29, 5 August 2018 (UTC)

Certificate error for Overpass API Achavi (e.g. https://overpass-api.de/achavi/?changeset=61366514) -- pascal_n

Certificates on both servers are valid again.


In the last couple of hours, I'm getting many timeouts when requesting augmented diffs. The status (http://overpass-api.de/api/augmented_diff_status) and the augmented_diff API calls (http://overpass-api.de/api/augmented_diff?...) are very slow. -- pascal_n

The issue was related to excessive network connections from some users. Mmd (talk) 11:59, 5 August 2018 (UTC)


1. Overpass on http://overpass.openstreetmap.ru/cgi/interpreter returns HTTP 502 Bad Gateway.

2. Some augmented diffs contain references to broken entities: https://github.com/drolbr/Overpass-API/issues/482

-- Mmd (talk) 15:13, 10 May 2018 (UTC)

Resolved 2018-05-28

The Overpass API-Dev-Server is not reachable. (http://dev.overpass-api.de)

I'll need this server to get a copy of the database for a new server. -- ChrissW-R1 (talk) 17:15, 25 May 2018 (UTC)

Reason is a power outage in the Hetzner data center: https://www.hetzner-status.de/en.html. It's currently unknown when the server will be available again. Mmd (talk) 17:47, 25 May 2018 (UTC)
Server is up again. Everything works. -- ChrissW-R1 (talk) 07:41, 28 May 2018 (UTC)

Fixed 2018-02-08

<meta osm_base="2018-01-01T03:38:02Z" areas="2016-08-18T12:21:02Z"/>
I can confirm the problem. There is an issue with the underlying file system (the disk for /tmp is full).

-- Zstadler (talk) 09:58, 14 January 2018 (UTC)

Rambler has been back to normal operations since 2017-02-08. -- [[User:Roland.olbricht|drolbr]]


The two servers behind overpass-api.de seem to provide different output for the same query:

Reason: as mmd has explained below, the relation in its current state is not considered a valid area because it has no name tag. However, the relation did have a name and a type=multipolygon tag in version 10, in December 2016. It qualified as an area at that point in time, and Overpass API keeps old areas if newer versions of the generating object no longer constitute an area. lz4 was set up long after version 10 and has therefore never seen relation 6195356 as an area.

lz4.overpass-api.de says:

 > wget -q -O - http://lz4.overpass-api.de/api/interpreter?data=rel%286195356%29%3Bmap_to_area%3Bout%3B
 <?xml version="1.0" encoding="UTF-8"?>
 <osm version="0.6" generator="Overpass API 054bb0bb">
 <note>The data included in this document is from www.openstreetmap.org. The data is made available under ODbL.</note>
 <meta osm_base="2017-12-09T16:47:02Z" areas="2017-12-09T16:13:02Z"/>

z.overpass-api.de says:

 > wget -q -O - http://z.overpass-api.de/api/interpreter?data=rel%286195356%29%3Bmap_to_area%3Bout%3B
 <?xml version="1.0" encoding="UTF-8"?>
 <osm version="0.6" generator="Overpass API">
 <note>The data included in this document is from www.openstreetmap.org. The data is made available under ODbL.</note>
 <meta osm_base="2017-12-09T16:48:02Z" areas="2017-12-09T16:24:02Z"/>
   <area id="3606195356">
     <tag k="name" v="Israel and Palestine"/>
     <tag k="name:de" v="Israel und Palästina"/>
     <tag k="name:en" v="Israel and Palestine"/>
     <tag k="note" v="Do not delete!"/>
     <tag k="type" v="multipolygon"/>

Note: In addition to the differences in the resulting area, there is also a difference between generator="Overpass API 054bb0bb" and generator="Overpass API".

This effect was previously discussed in the following GitHub issue: https://github.com/drolbr/Overpass-API/issues/285. As mentioned in the issue, an area is not deleted in case the original relation no longer meets the selection criteria as per the areas.osm3s area creation rules. That's exactly what happened here: the older server z.overpass-api.de still holds an old version of the area, presumably based on version 11 of relation 6195356. The new server lz4.overpass-api.de was only set up recently, and by that time the relation didn't match the selection criteria anymore and was never considered during the initial area creation process. Hence, both servers have different areas. I should add that areas are recalculated at regular intervals, either in a delta or an initial mode. In both cases, old areas won't get removed from the area part of the database. I added a comment to the GitHub issue that areas might better be completely scrapped from time to time and then recalculated, to finally get rid of those no longer valid areas. Mmd (talk) 17:20, 9 December 2017 (UTC)
Also, you as a user cannot influence the area creation process by sending queries to the server. It's all defined in the area creation rules, see Overpass_API/Areas Mmd (talk) 17:22, 9 December 2017 (UTC)
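The area id 3606195356 in the output above follows Overpass's fixed-offset convention for deriving area ids from the generating object (relations get 3600000000 added to their id; the helper names below are made up for illustration):

```python
# Overpass derives area ids from the generating relation by a fixed offset:
# area id = 3600000000 + relation id (ways use a different offset).
REL_AREA_OFFSET = 3_600_000_000

def relation_to_area_id(rel_id):
    return REL_AREA_OFFSET + rel_id

def area_to_relation_id(area_id):
    return area_id - REL_AREA_OFFSET

# Relation 6195356 from the report above maps to area 3606195356:
print(relation_to_area_id(6195356))     # 3606195356
print(area_to_relation_id(3606195356))  # 6195356
```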


overpass-api.de is currently about 4.5 hours behind the main database

Adavidson (talk) 03:43, 20 July 2017 (UTC)

The server performed a sudden reboot at 2017-07-19 23:01:?? for an unknown reason. Afterwards, the update mechanism reported a checksum error on reading a file. For that reason I have stopped updates.
I expect that it will take a day or so to figure out whether this is a hardware failure or software failure and to get the updates back on track. I'm sorry for the inconvenience. --drolbr
The root cause is identified. The server had a loss of power supply. Nonetheless, at least the log file shows corruption, an unexpected block of zero bytes in the file.
I will focus on recovery. By human error, the backup copy of the database is some days behind. Hence, I will play back the backup copy once it is again up to date.
Please use in the meantime the Rambler instance.
For the forensics: The hosting provider does have a UPS, but apparently it did not work. Similarly, I had assumed that the file system cannot screw up on loss of power, but that has proven wrong, too.
There are some further issues within the realm of Overpass API. I will only fix those. I will list them in a blog post once the recovery is accomplished. --drolbr
The server is back to normal operations. --drolbr

Resolved 2017-07-05

http://overpass.osm.rambler.ru/cgi/interpreter returns runtime error: open64: 2 No such file or directory /osm3s_v0.7.54_osm_base Dispatcher_Client::1

in response to query: [out:json];rel(137102);out body;

Thank you for reporting the issue. The server had been rebooted, but I had not yet checked the database and restarted the database dispatcher.
The server is now catching up, except areas. I will turn areas update on once the base data is again up to date. --drolbr

Resolved 2017-06-14

To prevent further damage, the database update process was automatically suspended on all instances at 2017-05-31T19:03:58Z, because the same minutely diff 2469925 appeared twice with different contents (= an error in the upstream replication process). Manual intervention is needed.

It was a false alert. Only the state file had differences in irrelevant fields. The actual files do not differ. Resumed normal operations. --drolbr
Reopening the issue, as Rambler updates have been stuck since May 31st.
The Rambler instance is now catching up as well. Unfortunately it is slow, because it has slow disks.


Rambler server is down (does not respond to ping).

I also cannot log in. --drolbr

Looks like the server is back again --gryphon

Invalid 2016-10-06

I'm sorry for the delayed reply. This is in the end an issue with the code.

Servers seem to have availability issues. See https://help.openstreetmap.org/questions/52384 for details.

Apparently, you want to use the old XAPI, which is very limited and partially disabled, as stated on the server status page at Platform Status.
Only the /xapi?map call is limited at this time. Mmd (talk) 10:24, 7 October 2016 (UTC)
XAPI is not the same as Overpass API, even when it is implemented on the same server.
XAPI will be translated on the fly into Overpass XML, it's simply a wrapper. Mmd (talk) 10:24, 7 October 2016 (UTC)
Most users do not need XAPI (which is old, poorly documented, basically unmaintained, and was stopped on the OSMF servers long ago as well).
The reasons why XAPI was discontinued on OSMF servers years ago are entirely different (it was a completely different implementation).
There's no problem with the normal API (as used with Overpass Turbo). — Verdy_p (talk) 20:28, 6 October 2016 (UTC)
If you translate the XAPI query into QL, you will experience exactly the same issues as before: http://overpass-turbo.eu/s/j9C . So the argument that this is XAPI related does not apply.
Note that the Russian and German instances both have outdated precomputed "areas" (for a long time now, due to multiple severe issues on these servers that had corrupted their databases multiple times).
Areas are entirely irrelevant to the problem at hand. Also, areas are nowadays updated very frequently on the German instance. The Wiki status is just outdated.
The French instance is much more stable and reliable and replies fast. But remember that precomputed areas still have a delay (which could be several days off as they are refreshed by some server-side bot running during low hours).
The French instance was offline for about 9 days in September due to DB rebuild and used a fallback during that time.
You also note that there are too many active long requests on the instance you test. Visibly a client is abusing its acceptable usage rights, or has internal bugs in communicating with the server and closing its sessions correctly, failing to free up resources or to correctly cancel its ongoing requests whose results are stored and cached awaiting retrieval. — Verdy_p (talk) 20:38, 6 October 2016 (UTC)

As there's no limit on the Rambler instance, those queries all originate from Alexvanderlinden's app. /api/status will only show your own queries, never everyone else's.
@Alexvanderlinden: please follow up this discussion on the Overpass Dev Mailing list. Mmd (talk) 10:24, 7 October 2016 (UTC)
Where can I find this Overpass Mailing list? (and/or how to use it?) - Alexvanderlinden
Sorry, forgot to post the link: see this announcement for details. It's a bit forum like, you can also post your question via Browser. Mmd (talk) 18:43, 7 October 2016 (UTC)
This really looks like a performance regression in 0.7.53 with some compiler (settings). The query I mentioned above runs in 700 ms on this instance, but takes more than 15 s on Roland's dev, the German and the French instances. A new issue on GitHub was created. Mmd (talk) 19:07, 7 October 2016 (UTC)
I'm switching to "Overpass QL". See https://help.openstreetmap.org/questions/52384 for details. This seems to work properly (no availability issues so far). - Alexvanderlinden
Effectively, it doesn't really matter whether you run your query as XAPI, Overpass XML or Overpass QL, as XAPI will be translated to Overpass XML anyway before being executed. You're just lucky that there's not so much load on the server right now. In any case, your query currently takes way too much time with a 15 s runtime, and if the load on the server increases again, you will run into exactly the same issues. Just give it a try tomorrow at different times of the day and run your query a few times in a row. Mmd (talk) 20:48, 7 October 2016 (UTC)
@Mmd You are correct. I still face the same issues. Some more details at https://help.openstreetmap.org/questions/52384 . Is somebody trying to improve this situation, or is this just the way it is now? Any tips or hints are welcome.
This issue can only be corrected via a code correction, for which I have created an issue on GitHub. A pull request with a fix is already available, but Roland still needs to review the code, merge it and deploy it to the production server. Best is to contact Roland via email about progress. Mmd (talk) 07:57, 10 October 2016 (UTC)

Fixed 2016-09-29

overpass-api.de is currently about 9 hours behind the main database

Adavidson (talk)

Back to a 3 minute lag, so I guess it's fixed.

Adavidson (talk)

Fixed 2016-08-23

As I cannot rule out that something nasty has happened, I've re-installed the server and moved to Ubuntu 16.04 LTS. This has its own set of problems, but now I'm pretty confident that the system is clean. A basic service is now available again. Neither updates nor areas are enabled at the moment. I'll do that tomorrow.

The server still sees around 200 requests per second from thousands of different IP addresses without a User-Agent or Referer header. It could be either a careless app developer or an attack, but an attack would most likely have a much larger scale. As the defense that is cheapest in server resources, I have deleted the /api/xapi endpoint. This means that Apache can return an HTTP 404 for it without firing up a CGI session.

Some short remarks:

On the page Platform Status it is recommended to "use http://tyrasd.github.io/overpass-turbo/ in the meantime as an alternative".
Just wanted to say that NOW and for the last hour (18:25, 21 August 2016 (UTC)) http://tyrasd.github.io/overpass-turbo/ seems to have the same problems.
If I understand it correctly, the statement on Platform Status referred to Overpass Turbo (the user interface) rather than Overpass API. --Harg (talk) 20:17, 21 August 2016 (UTC)

There is a section for Overpass API and a separate section for Overpass Turbo. The alternative refers to Overpass Turbo, but both use the same backends. You can get around problems with the backend by changing it in "Settings > General > Server". --Roland

Update: Rambler is now completely back to normal operations. The main instance is back to normal operations, but areas are still being recreated.

I'm now sure that there is no malware on the server. I've just recreated the installation, now based on Ubuntu 16.04 instead of Ubuntu 14.04 before.

What has caused the sluggish reactions are the requests to the /api/xapi?map API call. I've made a statistic here: the important columns are the first (date), the seventh (number of requests) and the last (number of different IP addresses). Up to 2016-08-20, this call had seen a few requests per hour. Now we are at 300'000 to 500'000 requests per hour. For comparison: the total capacity of the server is rather 50'000 requests per second. As the outermost line of defense, I have removed the /api/xapi call such that Apache will get rid of these requests with an HTTP 404 Not Found.

I hope that the /api/xapi requests will go down over the next days once the source realizes that it won't get data. I have no idea what the ultimate source is. A local distribution of the addresses: the first column is the name of the network, the second the total amount of data (little, because it's all HTTP 404), the third the number of requests, and the fourth the number of IP addresses.

I'll restore /api/xapi when this problematic access pattern has ceased.
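The 404 defense described above can be sketched as an Apache configuration fragment. This is an illustrative guess at the kind of directive involved, not the actual configuration used on overpass-api.de:

```
# Hypothetical sketch: answer /api/xapi directly from Apache with a 404,
# so no CGI process is started for these requests.
# "Redirect 404" (mod_alias) returns the status code without a target URL.
Redirect 404 /api/xapi
```

Because Apache answers before the request reaches the CGI dispatcher, the cost per rejected request is negligible compared to starting an interpreter session.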

Given the statistics, it is very likely to have been caused by a misconfiguration of some upstream routing, rather than by a single malicious app. However, it is possible that some large commercial website featured an OSM map with some badly written scripts, relayed by a bad advertising network attempting to get localized data about users in their browsers. It would be interesting to analyze the kind of XAPI requests performed: it could indicate which kind of geolocalized data these scripts were attempting to get, and could help locate the offending script or ad network (or whether it comes from some dating site). Apparently the data shows that this comes from both mobile and fixed DSL/cable/fiber networks. The distribution also shows a huge "success" in the Czech Republic, where an offending website would be best known. A single application seems unlikely; that's why I think of some advertising network. Could it also be a non-official helper app made for Pokémon Go players? — Verdy_p (talk) 12:13, 22 August 2016 (UTC)
If it were a badly coded website and/or advertising content, the requests would still have proper User-Agent and Referer headers. Also, the mentioned requests from the Czech Republic (M-Soft_CZ) in the logs seem to be genuine ones (they still went through before the XAPI was closed), as they downloaded quite a large amount of data in relatively few requests from only two IP addresses. -- Tyr (talk) 12:46, 31 August 2016 (UTC)
https://overpass-api.de/achavi/ returns 404 Not Found, probably not re-installed yet on the new Ubuntu? Ikonor (talk) 18:12, 22 August 2016 (UTC)

Thank you for the reminder. I have now fixed this.

Areas are complete as well.

Another update: the same "Request rejected" error is still happening when checking the fix for Issue #297, by running this query on the German, Russian, and French servers. Luckily, the Swiss server is fine! -- Zstadler (talk) 20:36, 28 August 2016 (UTC)

That patch is not yet deployed, hence you need to wait a bit longer before retesting. The Swiss server contains data from Switzerland only(!), i.e. your query won't produce any meaningful result there, as there's no data to query in the first place. Mmd (talk) 17:16, 31 August 2016 (UTC)
Is there a way to know when a fix is deployed? Are fixed issues expected to be deployed on http://tyrasd.github.io/overpass-turbo?
Thanks for clarifying the scope of the Swiss server. I've updated its entry on the Platform Status page accordingly. -- Zstadler (talk) 08:00, 2 September 2016 (UTC)
The fix is deployed on the French instance (first public 0.7.53-instance with planet-wide scope). Mmd (talk) 10:50, 22 September 2016 (UTC)

See above 2016-08-21

I am getting either no response or an error message when executing a query from the Firefox Developer browser -- example: http://overpass-api.de/api/interpreter?data=%5Bout:json%5D;(node%5B%22amenity%22=%22toilets%22%5D(51.64072811339469,-0.07122477654688271,51.68564387724827,0.0011869347500077093););out%20geom;out;

Response: Error: runtime error: open64: 0 Success /osm3s_v0.7.52_osm_base Dispatcher_Client::request_read_and_idx::timeout. Probably the server is overcrowded.

The same query works normally to http://api.openstreetmap.fr/oapi/interpreter/ (the result is empty, but that is correct for the specified area).

Earlier, I was getting a report in the console log that the site is not CORS enabled.
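As a side note for reproducing such reports: interpreter queries are passed percent-encoded in the data parameter, as in the URL above. A minimal sketch in Python of how such a URL can be built (the endpoint and query mirror the report above; the helper code itself is illustrative and only constructs the URL, it does not send the request):

```python
# Build a percent-encoded Overpass API request URL.
# The endpoint and query mirror the report above; this only constructs
# the URL and does not contact the server.
from urllib.parse import urlencode

query = '[out:json];node["amenity"="toilets"](51.64,-0.07,51.69,0.001);out geom;'
url = "http://overpass-api.de/api/interpreter?" + urlencode({"data": query})
print(url)
```

Pasting the printed URL into a browser (or curl) reproduces the request with correct encoding of brackets, colons, and quotes.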

This is most likely a duplicate of the 2016-08-20 incident below. I'll report progress there.

See above 2016-08-21

Sorry for the late response. I can confirm that the server is responding much slower than normal.

I don't know much more so far. The Rambler instance's server has been restarted; after a restart, operations must be resumed manually. I've done this just now, hence Rambler should be operational again.

What is happening on overpass-api.de is inconclusive. I do see that the load inside is low, but few requests from Apache are actually arriving. Test requests from outside don't get through to the CGI system. Hence, it is most likely somehow related to Apache or the network stack. Restarting Apache brought operations back to normal for a few minutes; now it is slow again. I'll try a server restart next.

More details on the 2016-08-20 incident

The following error is received running this wiki example as well as any other query on overpass-turbo.eu:

An error occured during the execution of the overpass query!
Request rejected. (e.g. server not found, request blocked by browser addon, request redirected, internal server errors, etc.)
Error-Code: error (0)

-- Zstadler (talk) 15:31, 20 August 2016 (UTC)

This request is probably too large for the current server load (though when I tried your request just now, it worked perfectly). My queries (returning about 10 MB with a complex selection) are working. — Verdy_p (talk) 15:48, 20 August 2016 (UTC)
This is just the first example from Overpass_turbo/Examples. I assume there was no issue with this example since it was created on 13 February 2013... -- Zstadler (talk) 18:15, 20 August 2016 (UTC)
I suggest you retry running your request from another browser, or using an "in-private" browser session: if it succeeds, you've got something wrong in your browser plugins. Try also selecting another Overpass instance (from the preferences menu; for me, all 3 instances are working). Try also rebooting your PC (if you have pending updates partially installed in your browser, antivirus, or networking components...) or your Internet router (if it runs out of available sessions/ports in its built-in NAT/firewall, or fails in its DNS queries). — Verdy_p (talk) 15:55, 20 August 2016 (UTC)
I've verified the issue with bare-bone Internet Explorer before reporting the issue. -- Zstadler (talk) 18:15, 20 August 2016 (UTC)
Same problem even for tiny requests; I assume it is a server-related issue.
The main instance seems to experience massive load on the Apache side, as even http://overpass-api.de/ is extremely slow to answer (if it answers at all). The Rambler instance has also been down for most of the day, as the dispatcher doesn't seem to run (error message "Runtime error: open64: 2 No such file or directory /osm3s_v0.7.52_osm_base Dispatcher_Client::1"). Recommendation: try HTTPS or the French instance for the time being. Mmd (talk) 17:33, 20 August 2016 (UTC)
Yes, I noticed the French server is working fine. This report is about the main server, which is the only server that has attic data.
Note that HTTPS also returns an error, yet the error message is a bit clearer:
An error occured during the execution of the overpass query! This is what overpass API returned:
Error: runtime error: […] Probably the server is overcrowded.
-- Zstadler (talk) 18:15, 20 August 2016 (UTC)
I don't see any significant increase in execution time on Rambler, even for my more complex requests. The request given in the example above replies correctly, almost instantly, on all 3 servers. Several days ago there was still massive ongoing load due to a bot reconstructing data after a bug, but this has visibly been fixed.
Anyway, most of the time I'm on the French or German instances (and I never need attic data with Overpass; I use attic data only in JOSM for restoring some incorrectly deleted objects). For me, Overpass is for querying data as it is now, and there are QA tools to detect things that were recently broken. In most cases it is rarely needed to restore data, corrections are evident, and we can look at the history of objects as long as they are not deleted. I see little use for Overpass with attic data (and we know that if you use it, the queries will perform very poorly, so you need very selective queries: by object id, or in a small bounding box not larger than a few kilometers, or much smaller in dense cities). If possible, post a link to the Overpass query (click the "Share" option at the top of the screen; it opens a dialog with a short permalink to the query saved on the Overpass Turbo web server, such as "http://overpass-turbo.eu/s/" followed by a very small opaque identifier made of a few lowercase/uppercase letters and/or digits). — Verdy_p (talk) 18:33, 20 August 2016 (UTC)
Correction: now I see the problem too on the Russian instance (only this one).
Something happened to the server at about 2AM on Saturday. Have a look at the Munin graphs. Adavidson (talk) 02:25, 21 August 2016 (UTC)
This did not happen on OSM servers (nothing was visible at that time in the Munin stats of any of the listed servers). Which Munin are you speaking about? Overpass API servers do not run on OSM servers; they are hosted and administered by individual chapters, on their own platforms, with their own colocation and bandwidth providers, their own domain names, and their own monitoring tools (not visible on the general Server status page). And is it really the same problem on the German and Russian instances, which run separately? — Verdy_p (talk) 02:57, 21 August 2016 (UTC)
The German instance was slow; now it is simply broken/down, with an error returned immediately. For now only the French instance runs (possibly the Swiss one too for some requests, but it does not cover the full planet). — Verdy_p (talk) 03:20, 21 August 2016 (UTC)
Munin page for the overpass-api.de instance Adavidson (talk) 03:49, 21 August 2016 (UTC)
That Munin is not replying either (immediate error). There's definitely a problem on the web server or on its front firewall/router. — Verdy_p (talk) 11:39, 21 August 2016 (UTC)

Maintenance work 2016-08-14

To fix data damage caused by a software bug, the main instance and later on the Rambler instance will lag behind by a few hours, for a few hours. General availability will hopefully not be affected. Work is scheduled to start at around 07h00 UTC --Roland

overpass-api.de is back to normal operations.
Rambler is also back to normal operations. --Roland

Invalid 2016-06-11

Clone mechanism on dev.overpass-api.de does not seem to work anymore. Clone script asks for .gz files, but only the uncompressed files are available for download.

The clone directory looks as it should look. Just in case, I've updated it right now. Could you please tell me which version you use to clone? --Roland
In the log files I noticed someone trying to download *.gz files for more than a week. There's a commit to remove gz-based downloads, but that hasn't been merged into master yet - it's still in a minor-issue branch. I guess that person was using the master branch, where download_clone.sh is currently not working as advertised. Mmd (talk) 20:36, 16 June 2016 (UTC)

Done 2016-05-17

Server overpass.osm.rambler.ru does not respond (http/ping)

It's impossible for me to login and see what has happened as well. --Roland
Basic operations (interpreter for node, way, relation, areas) are back up again. --Roland

Invalid 2016-05-04

Errors when using the "changed" filter - only on overpass.osm.rambler.ru. Received

runtime error: open64: 2 No such file or directory /spool/roland/v0.7.52/db/relation_changelog.bin File_Blocks::File_Blocks::1

message when running this query on this server.

The rambler instance doesn't have attic information. "changed" is (as opposed to "newer") only available with the attic module.
Please consider updating the "attic data" column entry for rambler in the Overpass_API#Introduction table.
Also please consider rephrasing the error message.

Fixed 2016-04-28

The databases on all instances lag significantly behind. The reason is connectivity problems with the upstream server [planet.osm.org] for the diff files.

planet.osm.org delivers diffs again, and the updates have caught up.

Fixed 2016-04-18

The Rambler Overpass API instance and the Overpass API on openstreetmap.fr reject all requests.

The Rambler instance was up, but has been spammed with useless requests from a poorly designed app that fires several identical requests at once, dozens of times per second. Unfortunately, the requests contain no useful User-Agent, so I cannot contact the developer. I've now blocked this and similar requests.
Meanwhile, the developer of the problematic app has contacted me and fixed the app.

Fixed 2016-04-13

sketch-line options are missing. style=padua and style=wuppertal are out of order.

Thank you for the notification. It should be fixed now.

Fixed 2016-04-13

Areas on overpass-api.de were last updated on 2016-03-30T00:26:01Z (it was expected that areas should be updated every 6-12 hours).

If this causes problems, try running your request instead over http://overpass.osm.rambler.ru/cgi/

I'm currently in the process of moving to a new piece of hardware, [2]. The areas will be updated again once we are on the new server. I'm sorry for the interruption.

Done 2016-03-17

Areas on overpass-api.de were last updated 2016-02-11T09:13:02Z, that's about a month ago.

It has been turned off during the 2016-02-13 event. It's time to turn on area updates again.
Area updates have been turned on again.

Fixed 2016-03-15

Disk read errors on rambler.ru instance since 2016-03-12. It is probably the same problem as during 2015-11-12.

The server got a new cloned database, and is currently applying updates.

Done 2016-02-13

No updates on any instance since 2016-02-12 02:30.

It's again the problem that the servers have caught an invalid minute diff. The dev@ instance is already restarting with updates from the latest known good clone. The overpass-api.de instance will receive a rollback tonight. -- drolbr

The rollback on the dev@ instance and the overpass-api.de instance has been completed. Both are back to normal operations. The rambler instance is still to do and will come back during the weekend, as well as areas on all machines. -- drolbr

Obsolete 2016-02-12

No more database updates on rambler.ru instance since 2016-01-20T18:00:02Z.

Please see above. Unfortunately, I have overlooked changes on this page because the wiki notification didn't work. -- drolbr

Obsolete 2016-02-12

Area update processes on overpass-api.de stopped again on Dec 18. Areas on rambler.ru were last updated on Nov 13.

This already causes some strange effects for some queries, reported e.g. here and here

Please see above. Unfortunately, I have overlooked changes on this page because the wiki notification didn't work. -- drolbr

Obsolete 2016-02-12

Area update process seems to have stopped on overpass-api.de at about 2015-11-16, 05:45 UTC. That happened towards the end of a 15480 seconds replication delay (upstream replication issue). Mmd (talk) 11:37, 17 November 2015 (UTC)

Please see above. Unfortunately, I have overlooked changes on this page because the wiki notification didn't work. -- drolbr

Obsolete 2016-02-12

Rambler instance no longer provides meta data for nodes:

... (Voluminous details removed)

Please see above. Unfortunately, I have overlooked changes on this page because the wiki notification didn't work. -- drolbr

Mmd (talk) 17:17, 12 November 2015 (UTC)

I haven't found an explanation immediately. But most likely, it is again the disk problem. I'll check tomorrow. -- drolbr
May be related: attic also used to work about 2 weeks ago on rambler, now I'm getting "remark": "runtime error: open64: 2 No such file or directory /spool/roland/v0.7.52/db/way_tags_global_attic.bin File_Blocks::File_Blocks::1"

Up again 2015-10-19

Update process on rambler instance seems to have stopped yesterday, more than 24 hours ago:

   "timestamp_osm_base": "2015-10-15T09:39:02Z",
   "timestamp_areas_base": "2015-10-15T09:39:02Z",
It looks like a disk failure on the server. To be sure I'm currently checking the disk.
The disk has shown quite a few bad sectors. Now I'm cloning the database back from dev@ and hope that the Rambler instance will work afterwards.
Last hard disk issue on rambler was quite recently (see 2015-07-18 below). A hardware replacement may be really needed here to ensure longer term stability. Mmd (talk) 09:10, 23 October 2015 (UTC)
The instance has gone offline for unknown reason. I currently cannot login. -- Roland
The instance now has worked properly for several days. It is still unclear in what state the disk is. -- Roland

Fixed 2015-10-03

overpass-api.de and the Rambler instance have stopped applying updates. The reason is that a bogus version of the minute diff [3], produced before a reboot of the replication generation, has been applied, and now the data is inconsistent.

I will reconstruct the database from a clone snapshot on dev.overpass-api.de with the now corrected minute diff in question. This may take some time. Until then, the public instances will suspend all updates.

The endpoints


are not affected. The server had received the correct copy of the minute diff.

The main instance is now back to normal operations. The Rambler instance will get a database reset later this afternoon.
The Rambler instance is now also back to normal operations. Areas are yet to be recreated; the data transfer from dev.overpass-api.de has been far slower than expected.

Fixed 2015-09-16

On the main instance some bogus areas have appeared. The phenomenon cannot be reproduced on the other instances. To check whether it is a non-spurious problem, I've deleted the existing areas; they are currently being recreated. -- talk

The problem did not come back. It is most likely due to a bug in code transferred from the backend_cache branch. -- talk

Resolved 2015-08-25

The main instance seems not to have accepted any requests since about midnight of August 25. All requests are rejected with the following runtime error: “Error: runtime error: open64: 0 Success /osm3s_v0.7.51_osm_base Dispatcher_Client::request_read_and_idx::timeout. Probably the server is overcrowded.”. See also the related current munin stats and this ticket on github.

Thank you for reporting the issue. In fact, the main instance has started to reject requests already since 2015-08-24 13:06 UTC. --rmo (talk) 16:09, 25 August 2015 (UTC)

Fixed 2015-07-18

The Rambler instance reports a permanent "input/output error" for one file of the database. I haven't found out what this means in detail, but it might be a hardware problem.

The test requests still work, but the update process is on hold. Hence, arbitrary requests are likely to work, but no updates will be applied. The server may get rebooted at any time if necessary to investigate the I/O problem.

The rambler instance is now down for disk checking.
The Rambler instance is back. Areas are still being recreated.

Fixed 2015-04-30

Similar to the incident of 2015-02-05, the server overpass-api.de has received a bogus version of minutely diff file 1374687 (see this Github ticket for details). This has broken some elements.

I'm currently replaying a backup on the successor machine, next.overpass-api.de. That one should go live in a few days and then fix the issue.

Since about May 10th, the new server has been on duty.

Announcement 2015-04-09

The clone feature of the overpass-api.de instance has been shut down permanently. Please use


as the base URL instead. The clone process is resource-consuming, and the dev server is better suited to handle a load peak.

Announcement 2015-03-30

The Rambler instance will receive larger disks in the next few days. This may mean a short downtime.

The server is back to normal operations.

Fixed 2015-03-16

Overpass-api.de gives 2 hours old data. See for example: http://overpass-turbo.eu/s/8di vs https://www.openstreetmap.org/way/319081796

or http://overpass-turbo.eu/s/6En vs https://www.openstreetmap.org/changeset/29515776

17:28 MEZ: works again
Thank you for reporting the issue. It was a full disk. I've organised a solution, good enough for some days, using softlinks.

Won't fix 2015-02-05

On overpass-api.de, OSM elements that have been changed between 2015-02-04 08:00 UTC and 2015-02-04 09:18 UTC and also between 2015-02-04 09:18 UTC and 2015-02-05 06:00 UTC will show the former version instead of the latter version. Elements that have been changed again since 2015-02-05 06:00 UTC will not be affected.

By contrast on overpass.osm.rambler.ru, all changes from between 2015-02-04 08:00 UTC and 2015-02-04 09:18 UTC are simply missing.

This is a trade-off after a problem with the replication process on the main DB: the Overpass servers have received a bogus changeset. On overpass-api.de, I've re-applied the corrected diff afterwards. This reduces the number of affected objects from 60'000 to about 1'000. On the Rambler instance, I will keep the diff untouched, to have at least one working instance in case the out-of-order diff turns out to have unexpected side effects.

The data problem is likely to be completely mitigated with new hardware in April.

Recovered 2014-11-04

The Overpass API instance on overpass-api.de will receive a data rollback to 22nd Oct 2014 in a few hours. This means a shutdown of two to three hours. Then it will catch up from 22nd October to recent data. Please see [4] for further details. The server is now catching up from the database state of Oct 22nd.

Main and attic data should now work properly and be up to date. Areas are also up again.

Fixed 2014-06-26 20:50:00 UTC

The Rambler instance had a database disk error after restart. The problem was not reproducible. I've used the opportunity to reset the database and to update the Rambler instance to Version 0.7.50, with meta data and areas, but without attic data.

Completed 2014-05-10 - According to munin osm db lag stats, overpass-api.de is no longer updated since Friday May 9 (+21hours) Couchmapper (talk) 12:23, 10 May 2014 (UTC)

Database replication seems to have kicked in again some minutes ago, changing state to yellow. Couchmapper (talk) 12:27, 10 May 2014 (UTC)
Overpass API seems to be back in normal operation, setting state to completed. Couchmapper (talk) 13:27, 10 May 2014 (UTC)

Fixed 2014-04-22

On the Rambler instance, the database outage from last Friday has screwed up the database. The overpass-api.de instance is not affected.

As the best possible immediate measure, I will re-apply all changes since Friday to the database. This may take up to two days. In the mid-term, this will be cleared up anyway: on 2014-05-02, it is planned to install a new Overpass version on the Rambler instance.

The database has caught up, and there is currently no indication of bogus data anymore.

Done 2014-04-16

Planned downtime: To add an SSD disk as a capacity enhancement, the server overpass-api.de will be down Wednesday morning. Please use the Rambler instance during this time. It will operate as usual and should have enough capacity for this time of the day.

The server is back to normal operation.

Solved 2014-04-04 03:30 UTC

The rambler instance has got a sudden reboot. I've checked that everything looks OK after reboot and restarted the service. -- drolbr

Solved 2014-03-28

The API on http://overpass.osm.rambler.ru/cgi/ is down since about 27-03-2014 9:55 UTC. --Tyr (talk) 16:24, 27 March 2014 (UTC)

I can confirm the issue. The server has just been rebooted. I'll investigate whether there are signs of hardware malfunction. If everything looks good, I hope the service is back tomorrow. -- 18h55 UTC
The server is catching up. No indications of a hardware failure so far. Areas aren't updated yet. -- 21h30 UTC
The area regeneration process has been started. -- 05h50 UTC

Invalid 2014-02-05 13h

The API on http://overpass.osm.rambler.ru/cgi/ is not cross-origin enabled anymore. calling from JS gives me "The request was redirected to a URL ('about:blank') which has a disallowed scheme for cross-origin requests."

I cannot reproduce the problem here. If you are working from a browser, could you please try whether the browser can fetch data via Overpass Turbo with "Settings > General > Server" set to the Rambler instance? This would clarify whether it is a problem with the connection or with the browser. --Roland
Sorry, thanks for the hint. My adblocker changed its configuration and blocked the JS call.

Solved 2013-12-28 08h55 UTC

Something ugly happened to the area database on overpass-api.de. To get out of the problem, the areas are cleanly regenerated. This means that no areas are available on overpass-api.de until afternoon.

Areas are now back and (almost) fully operational. The areas from relations 60189 (Russia), 80500 (Australia), and 2186646 (Antarctica) have been blacklisted to prevent the problem from reappearing. These areas will be processed again once a new version has better error protection and recovery.

Done 2013-12-12 18h35 UTC

The Rambler instance has an almost full disk. Please be prepared that the area feature might be turned off on this instance. The area feature will always remain active on overpass-api.de (where disk space isn't short).

Solved 2013-06-14 04h50 UTC

The overpass-api.de instance is less reliable than usual. The Rambler instance is not affected. This is due to unexpected behaviour of /api/augmented_diff. It is probably related both to the load of /api/augmented_diff queries and to the size of the current diff. A first workaround didn't work as expected. A second workaround actually worked, or the bug conditions simply never appeared again. --Roland

Solved 2012-11-02 08h05 UTC

The Rambler server shows at least two unrelated file errors in one of its database files. The reason is not obvious, so I decided to reload the database from a clone of overpass-api.de. I expect the server to be back on the evening of 1st November.

Solved 2012-09-17

http://www.overpass-api.de/api/xapi?*[@meta][railway=*][bbox=-73.5534668,41.9921602,-66.862793,47.4800885] gives me some random stuff in Ireland. Presumably it's due to the license change. --NE2 23:14, 11 July 2012 (BST)

Whatever. It's not like it will be useful for editing until the OSMF fixes diffs. Thank you, OSMF, for giving us this silliness. --NE2 01:51, 12 July 2012 (BST)
Yes, there are no minute diffs. Hence, the database is frozen at its state of 2012-07-11 14:14 UTC. On the other hand, the link above points to somewhere in Canada rather than Ireland. -- Roland
The link is for railways in northern New England (the northeasternmost part of the U.S.). But it also gives some railways in Ireland. --NE2 10:32, 12 July 2012 (BST)
Thank you for the error report. The Osmosis accident yesterday left some artifacts in the database, in particular some ways that pretend to exist although they are deleted in the main database. I have now deleted these ways manually, so no bogus data should be visible outside Ireland. I haven't detected more bogus data, but that doesn't necessarily mean there isn't any. The wrong data possibly remaining in Ireland will be swept away with the database reload at the end of the redaction process.
To explain what has happened: a couple of wrong diffs were generated by Osmosis yesterday and were applied to Overpass API. As there is no undo for applying a minute diff, I have continued to apply the corrected Osmosis diffs. Now, if data was touched in the faulty diffs and never since then in a corrected diff, it remains in the bogus state. -- Roland
I am still awaiting the license change with a subsequent complete reimport of data. The remaining artifacts did not cause any further harm, so I still deem it acceptable to wait for some more days. -- Roland

The first ODbL-planet has been imported and has replaced the database involved in the incident. -- Roland

Invalid 2012-09-04

The PT line diagram examples on the site http://www.overpass-api.de/public_transport.html do not work anymore.

Do you mean sketch-route? This has been disabled for quite a long time, because explicit OSM object ids in links may suggest that these ids were permanent. However, thank you for pointing out that there was still a reference to it in the documentation. -- Roland

(Moved feature suggestions to the discussion page).

Invalid 2012-07-09 04:57 UTC

The main server does not return nodes as of around noon today. Running the following query: '<osm-script> <union> <query type="relation"> <has-kv k="type" v="route"/> <bbox-query s="27.6839" n="27.7299" w="85.2885" e="85.3368,"/> </query> <recurse type="relation-node"/> <recurse type="relation-way"/> <recurse type="way-node"/> </union> <print/> </osm-script>' used to return a whole bunch of <node> elements, and then <way>s and <relation>s. As of noon-ish UTC today, the <node>s are missing.

I do get nodes in the response (7 nodes), e.g. node 314964021. Could you please try again? -- Roland

Solved 2012-06-13 11:45 UTC

The main server returns outdated data (for XAPI queries only, see below), from at least three weeks ago. --NE2 17:17, 12 June 2012 (BST)

As a next step, I have set up a new, larger machine. overpass-api.de will for the moment forward all XAPI requests except the map requests to that machine. Note that the database there is currently a month behind, but the server is catching up. After all, one-month-old data is still better than no data at all. I will raise the process limit there until that server comes under considerable load. So please still expect to see the message "server overcrowded", but I hope this will vanish more and more with increasing capacity.
Overpass-QL queries, XML queries, and XAPI queries starting with "xapi?map?" work normally on overpass-api.de. -- Roland
In the meanwhile, the new server has fully caught up. I will move the domain overpass-api.de to the new server in the next days. -- Roland
Yes, back to normal. Thanks a lot. --NE2 17:18, 13 June 2012 (BST)

Solved 2012-06-15 08:51 UTC

Rambler always returns "Sorry - server overcrowded." This started happening sometime this morning.

I tried to move some load to the rambler instance, but it didn't work. Now everything should be back to normal operation. -- Roland
The rambler instance doesn't respond any more. The web server returns "502 Bad gateway" even on static pages, and I don't get on the server by ssh. I ask the administrator to restart the server. If it is a hardware issue, it would likely take much longer. -- Roland

Rambler is back to normal operation. It was in fact a network issue, not a hardware damage. -- Roland

Solved 2012-06-13 11:43 UTC

The XAPI compatibility layer ran into an overload problem. Currently, queries of the form "xapi?*" are disabled on the overpass-api.de instance. All other requests (including XAPI requests for specific element types, like "xapi?node", and the map call "xapi?map") work without restrictions.

The rambler instance is not affected.

The background for this decision is that a single application (the iOS computer game Geomon) causes roughly 30-fold the usual server load. I'll find a solution with the application developers and then remove the restriction.

-- When do you expect to be back to normal? User:Christine

I think tomorrow, around 8:00 UTC -- Roland
I still have no feedback from the Geomon team. Thus, the restrictions on "xapi?*" will remain for indefinite time. Please use the rambler instance for XAPI queries. -- Roland
In the meanwhile, the Geomon team has answered. The load problem will be solved with the next app update on Tuesday. Please use the rambler instance for XAPI queries until then. -- Roland

The Rambler instance itself has since gone down for unknown reasons. The issue itself is solved after redirecting XAPI calls to the new overpass-api.de server. -- Roland

Solved 2012-06-10 21:50 UTC

Filtering by the existence of multiple keys doesn't seem to work, e.g. if you want all nodes that have a tag with key "historic" and also have a "name":

node["historic"]["name"];
out qt;

This returns an empty result set, but when you just query for

node["historic"];
out qt;

you see that most of the nodes have a tag with key "name" and therefore should be returned by the first query. (That's just a simple example; I actually have a more complex query with unions that includes e.g. "natural", where the filter could reduce traffic more than in this example.)

Thank you for reporting this. This was a bug introduced with a hotfix after release 0.6.98. I have fixed it now. -- Roland
Thank you for the quick fix and your great work here. -- User:H. G.

Solved 2012-05-30 08:46 UTC

Pretty sure something's not working properly. http://overpass.osm.rambler.ru/cgi/xapi?*[@meta][railway=*][bbox=-91.7248535,30.1166216,-88.0114746,35.0749649] should give the railways in Mississippi, but it's returning stuff from all over, as well as incomplete ways. --NE2 06:55, 29 May 2012 (BST)
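For reference, a roughly equivalent query in today's Overpass QL (a sketch, not the query actually sent; note that the QL bounding-box filter uses (south, west, north, east) order, while the XAPI bbox above is west, south, east, north):

```
[out:xml];
way
  [railway]
  (30.1166216,-91.7248535,35.0749649,-88.0114746);
(._;>;);
out meta;
```

The `(._;>;);` step pulls in the nodes of the matched ways, which avoids exactly the "incomplete ways" problem mentioned in the report.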

Something is wrong with Rambler. It only returns data with today's date.

I don't know yet what is going wrong, but it is clearly broken. Thank you for reporting this.
I'll save the logs to do forensics and then make a clean restart. This will take until tomorrow morning. -- Roland
The update process hit a very uncommon but devastating race condition. I restarted the server with a fresh database from this afternoon, but updates won't start before I have fixed that race condition tomorrow. -- Roland
The server is back to normal operation, only the areas feature will still need some hours to regenerate. -- Roland
Cheers. Thanks for your work, so I can resume mine :) --NE2 10:16, 30 May 2012 (BST)

Invalid 2012-05-12 18:00 UTC:

rambler is not working, returning "The requested URL /~roland/api/interpreter was not found on this server."

The correct URL is




-- Roland

Ah, then it seems all the forms on this page need to be updated to use the correct URL. Thanks for your wonderful service! -- Joshdoe 03:37, 11 May 2012 (BST)
Thank you for this hint. I have replaced the outdated documentation with a link to the current documentation. -- Roland

Solved 2012-03-20 04:35 UTC:

A request to http://overpass-api.de/api/xapi_meta?*[name%3DTschechieche+Spezialit%C3%A4ten+-+Handel+%26+Vertrieb] via taginfo failed with:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
  <meta http-equiv="content-type" content="text/html; charset=utf-8" lang="en"/>
  <title>OSM3S Response</title>

<p>The data included in this document is from www.openstreetmap.org. It has there been collected by a large group of contributors. For individual attribution of each item please refer to https://www.openstreetmap.org/api/0.6/[node|way|relation]/#id/history </p>
<p><strong style="color:#FF0000">Error</strong>: line 4: parse error: not well-formed (invalid token) </p>

I can confirm this bug. It is due to improper treatment of ampersand characters in the XAPI compatibility layer. I'll fix that during the day. -- Roland
It has been fixed for quite some time, but I forgot to update this status page.
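The failure mode is the classic XML one: a literal "&" in a tag value, copied unescaped into the XML output, makes the whole document ill-formed, which is what the "not well-formed (invalid token)" error indicates. A minimal Python illustration of the required escaping (not the server's actual code, which is C++):

```python
from xml.sax.saxutils import escape

# A tag value containing an ampersand, as in the failing request above.
value = "Handel & Vertrieb"

# Written verbatim into XML, the "&" starts an entity reference and
# a parser reports "not well-formed (invalid token)".
raw = '<tag k="name" v="%s"/>' % value

# Escaping the value first produces well-formed XML.
safe = '<tag k="name" v="%s"/>' % escape(value)
print(safe)  # <tag k="name" v="Handel &amp; Vertrieb"/>
```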

Solved 2012-01-21 08:00 UTC: Broken area-queries again, but this time with different error messages. E.g. the query from section "Download an entire city" at overpass-api.de fails with "Error: line 4: static error: Unknown tag "area-query" in line 4.". If I try to use area-query inside a <query type="node"> element, I get only an Internal Server Error.

Thank you for reporting this. I can reproduce the error, but I cannot do much before I'm back from vacation next Saturday. I'm sorry. -- Roland
It was possible to fix the error with simple means. It was a typo in a refactored piece of source code. -- Roland
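For reference, the general shape of such a query in the Overpass XML dialect (a reconstruction, not the exact query from the "Download an entire city" example; the area id convention of relation id plus 3600000000 is the documented one, and 62422 is the relation for Bremen used in the wiki's area examples):

```xml
<osm-script>
  <!-- all nodes inside the area derived from relation 62422 (Bremen) -->
  <query type="node">
    <area-query ref="3600062422"/>
  </query>
  <print/>
</osm-script>
```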

Invalid 2012 Jan 12 12:55 UTC: wrong parameter order in bbox

All bbox queries are failing due to some sort of memory error. The same bbox queries were succeeding earlier in the day.
Example:


<?xml version="1.0" encoding="UTF-8"?>
<osm version="0.6" generator="Overpass API">
<note>The data included in this document is from www.openstreetmap.org. It has there been collected by a large group of contributors. For individual attribution of each item please refer to https://www.openstreetmap.org/api/0.6/[node|way|relation]/#id/history </note>
<meta osm_base="2012-01-08T12:17:02Z"/>
<remark> runtime error: Query run out of memory in "recurse" at line 7 using about 640 MB of RAM. </remark>
</osm>

I assume that in the above query the second and third parameters are in the wrong order. Note that the correct order is West, South, East, North.
With the parameters swapped, the query selects 30 degrees of longitude and latitude, covering, more or less at random, two thirds of Europe.
With the correct order, it selects a bounding box of meaningful size around Rome and works fine.
No other bbox shows an error.
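A swapped bbox of this kind is easy to catch client-side: in XAPI order (west, south, east, north) the first and third values are longitudes and the second and fourth are latitudes, so a quick plausibility check on the spans exposes the mistake. A small sketch (a hypothetical helper, not part of any Overpass client):

```python
def parse_xapi_bbox(bbox):
    """Parse an XAPI bbox string in (west, south, east, north) order
    and return the (longitude span, latitude span) in degrees."""
    west, south, east, north = (float(v) for v in bbox.split(","))
    if not -180.0 <= west < east <= 180.0:
        raise ValueError("longitude bounds inverted or out of range")
    if not -90.0 <= south < north <= 90.0:
        raise ValueError("latitude bounds inverted or out of range")
    return east - west, north - south

# Correct order: a box of meaningful size around Rome.
print(parse_xapi_bbox("12.25,41.5,12.75,42.0"))   # (0.5, 0.5)
# Second and third values swapped: still numerically valid,
# but spans of about 29 degrees give the mistake away.
print(parse_xapi_bbox("12.25,12.75,41.5,42.0"))   # (29.25, 29.25)
```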

Solved 2012 Jan 1 21:05: bogus meta elements

A mistake in the version update (a forgotten --meta parameter to the dispatcher on the command line) ruined the meta file indexes. The core data is not damaged and updates keep running.
A new planet import is in progress and expected to be complete on early Friday morning. Meanwhile, please use the Rambler server with base link
http://overpass.osm.rambler.ru/cgi/. It has not got the version update yet and serves meta data without any restrictions.
Update: The Friday morning import also has two other flaws:
  • I started the minute updates at the wrong date. All changes from 21 Dec are missing.
  • I accidentally deleted the wrong file when trying to free up space on the server's hard disk. Thus, there is no meta data for nodes at the moment.
I'm sorry for all that mess. I did it all in too much of a hurry.
Note: The new version itself isn't buggy. The difficulties are human error while fiddling to install without a service interruption.
On Sunday evening, the switch to the new import succeeded. By Monday morning, the server had also caught up with the minutely diffs. This incident no longer causes any harm.
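The forgotten flag from the incident above refers to the dispatcher's command line; on a standard source install, the dispatcher for a meta-enabled database is started along these lines (the path and variable name are placeholders):

```shell
# Start the base dispatcher with meta data support.
# Omitting --meta on a meta-enabled database ruins the meta indexes.
nohup ./dispatcher --osm-base --meta --db-dir="$DB_DIR" &
```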

Solved 2011 Dec 28 20:00: area-queries appear to be broken - did not reappear.

They fail with 'Error: runtime error: open64: 0 /osm3s_v0.6.95_areas Dispatcher_Client::request_read_and_idx::timeout. Probably the server is overcrowded.' While everything else works.
This is fixed now. The measure was to restart the dispatcher, i.e. the internal serializer of queries. For an unknown reason it had been refusing all read requests.
It seems the problem mentioned below is back (2011 Dec 17 11:00). It has been fixed once again by a dispatcher restart. But it looks like there is a non-reproducible bug in the software.
Along with extended diagnostic capabilities, the self-healing has also been improved. It looks like this works as a remedy against the unknown bug. Neither the diagnostic messages nor the bug reappeared for roughly a week.
The update is not yet rolled out on the Rambler server.

Solved 2011 Oct 25 23:50: Some ways, e.g. way #4732211 showed no history information.

The investigation is still ongoing. The bug doesn't appear on a freshly imported database on the rambler server. Thus, it is either a race condition or was caused by a bad recovery after the power outage.
A contributing factor has been identifi