This page tells you how to install the OSM3S server so that you can use it as a local OSM mirror. Additional functionality such as area management and the line diagram utilities is not covered yet.
Please note: the primary source is https://overpass-api.de/no_frills.html. This page is more a place to collect various troubleshooting measures.
- 1 System Requirements
- 2 Installation
- 3 Populating the DB
- 4 Populating the DB with attic data
- 5 Static Usage
- 6 Starting the dispatcher daemon
- 7 Applying minutely (or hourly, or daily) diffs
- 8 Setting up the Web API
- 9 Area creation
- 10 Troubleshooting
- 10.1 runtime error: open64: 2 /osm3s_v0.6.91_osm_base Dispatcher_Client
- 10.2 runtime error: open64: 2 No such file or directory /osm3s_v0.7.51_osm_base Dispatcher_Client::1
- 10.3 File_Error Address already in use 98 /srv/osm3s/db_dir//osm3s_v0.7.3_osm_base Dispatcher_Server::4
- 10.4 File_Error 17 /osm3s_v0.6.94_osm_base Dispatcher_Server::1
- 10.5 No such file or directory /srv/osm-3s_v0.7.52/db/areas.bin File_Blocks::File_Blocks::1
- 10.6 Database population problem
- 10.7 Area batch run out of memory error
- 10.8 Apache config fails
- 10.9 Apache: HTTP 403 Forbidden errors
- 11 Contributors corner
System Requirements
It is highly recommended that you have at least the following hardware resources available for an OSM planet server:
- 1 GB of RAM and sufficient swap space for a small extract or a development system. By contrast, overpass-api.de (the main Overpass API instance) has 32 GB main memory. Actual memory requirements also highly depend on the expected maximum number of concurrent users.
- For a full planet with meta and attic data (= all data since the license change in September 2012), about 200 GB to 300 GB of disk space is required when using a compressed database (since v0.7.54). Without compression, at least double that amount is needed.
- Use of fast SSDs instead of slower hard disks is highly recommended!
It is required that you have the following resources:
- Access to Expat and a C++ compiler
- An OSM file in XML format, compressed in bzip2 format (Geofabrik is an excellent resource for this. Another good resource is located on the Planet.osm page.)
- Alternatively, you can also use an extract or planet file in PBF Format along with osmconvert (requires --out-osm parameter for osmconvert, as Overpass API doesn't support PBF natively)
NOTE: You do not need a database engine (e.g. MySQL or PostgreSQL); the database back-end is included in the OSM3S package.
You will need to identify or create:
- $EXEC_DIR: The root directory in which executable files should be installed (/bin/ suffix removed). (~100 MB). For example, the public server has this on /opt/osm-3s/v0.7.54/
- $DB_DIR: a directory to store the database
- $REPLICATE_DIR: a directory to store minutely (or otherwise) diffs (only necessary if you decide to configure minutely updates below)
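For example, these could be set as shell variables before running the commands below. The paths here are only illustrative — substitute your own locations:

```shell
# Illustrative paths only -- substitute your own locations.
export EXEC_DIR=/opt/osm-3s/v0.7.54        # where the binaries get installed
export DB_DIR=/srv/osm3s/db                # where the database lives
export REPLICATE_DIR=/srv/osm3s/replicate  # where downloaded diffs are stored
```

Later commands in this page use these variables verbatim, so exporting them once keeps the steps copy-pasteable.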
|Keep up to date - subscribe to the Overpass developer list today!|
Installation
Ubuntu or Debian 6.0 (squeeze) or Debian 7.0 (wheezy)
1. Install the following packages: g++, make, expat, libexpat1-dev and zlib1g-dev.
sudo apt-get update
sudo apt-get install g++ make expat libexpat1-dev zlib1g-dev
Option 1: Installation via tarball
2. Download the latest tarball, prepared with GNU autoconf. For example:
3. Unpack the tarball:
tar -zxvf osm-3s_v*.tar.gz
4. Compile the OSM3S package:
cd osm-3s_v*
./configure CXXFLAGS="-O2" --prefix=$EXEC_DIR
make install
Option 2: Installation via bleeding edge dev version (expert use)
2. Alternatively, if you want the bleeding-edge dev version, you can get it from GitHub:
sudo apt-get install git libtool autoconf automake
git clone https://github.com/drolbr/Overpass-API.git
cd Overpass-API
git checkout minor_issues
Depending on your Ubuntu version you may need to explicitly tell apt-get to install version 1.11:
sudo apt-get install automake1.11
3. Update the build system
When using the latest dev version from GitHub, the build system has to be updated first. The following steps were successfully tested on Ubuntu 14.04 and Debian 7.0:
cd ./src/
autoreconf
libtoolize
automake --add-missing
autoreconf
4. Compile the OSM3S package:
cd ../build/
../src/configure CXXFLAGS="-Wall -O2" --prefix=$EXEC_DIR
make install
NOTE: If you encounter a message like this:
configure: error: cannot find install-sh or install.sh in "../src" "../src/.." "../src/../.."
even though automake is already installed (sudo apt-get install automake), the symbolic links in the "../src/" directory may be broken. Before you can continue, you will need to delete and recreate the links to your system's proper files, for example:
ln -s /usr/share/automake-1.11/missing ./missing
ln -s /usr/share/automake-1.11/install-sh ./install-sh
ln -s /usr/share/automake-1.11/depcomp ./depcomp
or if you receive a 'Link already exists' error you can try using the absolute paths:
sudo rm -r /root/osm-3s_v0.7.50/src/missing
sudo rm -r /root/osm-3s_v0.7.50/src/install-sh
sudo rm -r /root/osm-3s_v0.7.50/src/depcomp
sudo ln -s /usr/share/automake-1.11/missing /root/osm-3s_v0.7.50/src/missing
sudo ln -s /usr/share/automake-1.11/install-sh /root/osm-3s_v0.7.50/src/install-sh
sudo ln -s /usr/share/automake-1.11/depcomp /root/osm-3s_v0.7.50/src/depcomp
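To spot broken symbolic links before recreating them, one hedged approach is GNU find's -xtype test, which matches links whose target does not exist:

```shell
# List broken symbolic links in the current directory tree (GNU find)
find . -xtype l
```

Run this from the src directory; any path it prints is a dangling link that needs to be removed and recreated as shown above.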
NOTE: If you encounter an error of this format during compilation:
make: *** [...] Error 1
something unexpected occurred, and this is an opportunity to help make the OSM3S package more robust. To help, capture the compile-time output and email it to the package's current maintainer, Roland Olbricht. For example, the following command captures the output in a file called error.log:
make install >&error.log
Populating the DB
The recommended way to populate the database is via cloning from the dev server:
./download_clone.sh --db-dir=database_dir --source=http://dev.overpass-api.de/api_drolbr/ --meta=no
This is the fastest method and needs the least space. If you need metadata (i.e. object version numbers, editing users and timestamps), then use --meta=yes instead of --meta=no. If you even want museum data (all the old versions since the license change in 2012), then replace the parameter with --meta=attic.
You can also populate the Overpass database from a planet file. For this, you first need to download a planet file.
Populate the database with:
nohup ../src/bin/init_osm3s.sh planet-latest.osm.bz2 $DB_DIR $EXEC_DIR &
tail -f nohup.out
NOTE: If you want to query your server with JOSM, you'll need metadata. Add the --meta parameter:
nohup ../src/bin/init_osm3s.sh planet-latest.osm.bz2 $DB_DIR $EXEC_DIR --meta &
tail -f nohup.out
To populate the database with attic data (needed for augmented diffs), use --keep-attic instead of --meta.
It is not possible to get museum data this way, because the planet file does not contain that data.
nohup together with & detaches the process from your console, so you can log off without accidentally stopping it. tail -f nohup.out lets you read the output of the process (which is written into nohup.out).
NOTE: This step can take a very long time to complete: less than 1 hour for a smaller OSM extract, but on the order of 24 hours or more for a full planet file, depending on available memory and processor resources. When the process has finished successfully, the file nohup.out will end with "Update complete".
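A small sketch to check whether the import has finished (assumes you started init_osm3s.sh with nohup as above, so the output lands in nohup.out):

```shell
# Prints a status line based on the last line of nohup.out;
# succeeds quietly even if the file does not exist yet
if tail -n 1 nohup.out 2>/dev/null | grep -q 'Update complete'; then
  echo "import finished"
else
  echo "import still running (or not started yet)"
fi
```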
(As a side note, this also works for applying OSC files to an existing database. Thus you can make daily updates by applying these diffs with a cronjob. This method puts less load on the disks than minutely updates, and the data is still reasonably timely.)
Populating the DB with attic data
Since Overpass API v0.7.50, it is possible to also retain previous object versions, the so-called attic versions, in the database. Previous object versions are accessible via [date:...], [diff:...], [adiff:...] as well as some filters like (changed:...).
For the main Overpass API instance, the database was initially built using the first available ODbL compliant planet dump file (September 2012). If you don't require all the history back to 2012, it is also possible to start with any later planet dump and apply any subsequent update via the daily/hourly/minutely update process.
Any subsequent changes can be automatically stored in the database, if the following two prerequisites are met:
- Dispatcher needs to be run with attic support enabled
- apply_osc_to_db.sh also needs to run with attic support enabled
Relevant settings for both the dispatcher and the update script are described further down on this page.
- At this time it is not possible to use a full history dump to initialize the database (see User Page).
- Using extracts instead of a planet file along with attic mode is currently being discussed on the developers' list and is likely also not to work.
Static Usage
OSM3S is now ready to answer queries. To run a query, run
$EXEC_DIR/bin/osm3s_query --db-dir=$DB_DIR
and enter your query on standard input. If you are typing directly into the console, press Ctrl+D at the end to signal the end of input. Answers appear on standard output.
If you've imported the entire planet, try the example query:
<query type="node">
  <bbox-query n="51.0" s="50.9" w="6.9" e="7.0"/>
  <has-kv k="amenity" v="pub"/>
</query>
<print/>
This one returns all pubs in Cologne (the city with the best beer in Germany :) ).
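Rather than typing the query interactively, you can keep it in a file and feed it to osm3s_query via a pipe. The file name pubs.osm3s below is just an illustration:

```shell
# Store the example query in a file ...
cat > pubs.osm3s <<'EOF'
<query type="node">
  <bbox-query n="51.0" s="50.9" w="6.9" e="7.0"/>
  <has-kv k="amenity" v="pub"/>
</query>
<print/>
EOF
# ... then run it non-interactively (assumes $EXEC_DIR and $DB_DIR are set):
# $EXEC_DIR/bin/osm3s_query --db-dir=$DB_DIR < pubs.osm3s
```

This avoids the Ctrl+D dance and lets you keep a collection of queries under version control.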
Check the full introduction to OSM3S query language on the Web or at $EXEC_DIR/html/index.html (installed as part of OSM3S) for more information.
Lastly, if you're using the dispatcher daemon, osm3s_query can connect to it and find $DB_DIR by itself:
$EXEC_DIR/bin/osm3s_query
If you can make requests to osm3s_query without specifying the database directory, then the dispatcher daemon is running correctly.
Starting the dispatcher daemon
If you wish to automatically apply diff updates or run the Web API, you need to start the dispatcher daemon (this is otherwise optional). Like all other processes, it should be started by a single, standard user. Do not run anything with root privileges; that would be an unnecessary security risk. The tools set the necessary file permissions to allow writing for the user that created the database and reading for everybody else.
nohup $EXEC_DIR/bin/dispatcher --osm-base --db-dir=$DB_DIR &
For meta data you need to add a parameter:
nohup $EXEC_DIR/bin/dispatcher --osm-base --meta --db-dir=$DB_DIR &
When serving attic data you need to run the dispatcher with the following parameters:
nohup $EXEC_DIR/bin/dispatcher --osm-base --attic --db-dir=$DB_DIR &
Short answer: systemd is not designed to run a DBMS, and in particular not Overpass API. The details are explained in a blog post.
- Never start any component of Overpass API as root. The whole system is designed to work without root, and you can run into really weird bugs if some parts of the system run as root.
- Do not automatically remove any of the lock files, socket files or shared memory files. They work as canaries: hitting existing files is almost always an indicator of bigger trouble elsewhere. Please ask for help in those cases.
Applying minutely (or hourly, or daily) diffs
Note: The dispatcher daemon must be running for diff application to work.
First, decide the maximum tolerable lag for your DB:
- minutely: https://planet.osm.org/replication/minute/
- hourly: https://planet.osm.org/replication/hour/
- daily: https://planet.osm.org/replication/day/
From these, you need to find the replication sequence number, which will become $FIRST_MINDIFF_ID in the instructions below. To find it:
- Browse through the replicate directory hierarchy (e.g. https://planet.openstreetmap.org/replication/minute/) and find the diff that has a date before the starting point of the planet dump. The planet dump starts at 00:00 UTC; because the server shows local time, this is equivalent to 01:00 BST during summer and 00:00 BST during winter in the file listing.
- Verify you have the right file by checking the respective *.state.txt file. The timestamp should show a date (here always UTC) slightly before midnight. The sequenceNumber in this file (also present in the filename) is your replication sequence number, $FIRST_MINDIFF_ID.
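The sequence number can also be read out of a state.txt file mechanically. A sketch, assuming you have downloaded the relevant state.txt into the working directory:

```shell
# Extract the sequence number from a replication state file
grep sequenceNumber state.txt | cut -d= -f2
```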
From $EXEC_DIR/bin, run:
nohup ./fetch_osc.sh $FIRST_MINDIFF_ID https://planet.openstreetmap.org/replication/minute/ $REPLICATE_DIR/ &
This starts a daemon that downloads all diffs from $FIRST_MINDIFF_ID to the present into your replicate directory. If kept running, it automatically downloads new diffs as they become available. If you obtain diffs in another way, you can omit this command.
Next, apply changes to your DB:
nohup ./apply_osc_to_db.sh $REPLICATE_DIR/ $FIRST_MINDIFF_ID --meta=no &
This starts the daemon that keeps the database up to date. Latest versions require an additional parameter augmented_diffs:
nohup ./apply_osc_to_db.sh $REPLICATE_DIR/ $FIRST_MINDIFF_ID --augmented_diffs=no &
To add metadata, you must add a parameter to the second command. Instead of the above, run:
nohup ./apply_osc_to_db.sh $REPLICATE_DIR/ $FIRST_MINDIFF_ID --meta &
To update your database containing attic data, you need to use the following command:
nohup ./apply_osc_to_db.sh $REPLICATE_DIR/ $FIRST_MINDIFF_ID --meta=attic &
To see what's going on, watch these log files:
Setting up the Web API
Note: The dispatcher daemon must be running for the Web API to work.
This section describes one way to setup a basic read-only HTTP based API with OSM3S.
1. Install Apache2 (with CGI support)
sudo apt-get install apache2
sudo a2enmod cgi
2. Configure Apache2
cd /etc/apache2/sites-available
nano default
Make your default file look something like this:
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ExtFilterDefine gzip mode=output cmd=/bin/gzip
    DocumentRoot [YOUR_HTML_ROOT_DIR]

    # This directive indicates that whenever someone types http://www.mydomain.com/api/
    # Apache2 should refer to what is in the local directory [YOUR_EXEC_DIR]/cgi-bin/
    ScriptAlias /api/ [YOUR_EXEC_DIR]/cgi-bin/

    # This specifies some directives specific to the directory: [YOUR_EXEC_DIR]/cgi-bin/
    <Directory "[YOUR_EXEC_DIR]/cgi-bin/">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        # For Apache < 2.4:
        # Order allow,deny
        # For Apache >= 2.4:
        Require all granted
        #SetOutputFilter gzip
        #Header set Content-Encoding gzip
    </Directory>

    ErrorLog /var/log/apache2/error.log
    # Possible values include: debug, info, notice, warn, error, crit, alert, emerg
    LogLevel warn
    CustomLog /var/log/apache2/access.log combined
</VirtualHost>
3. Restart Apache2:
sudo /etc/init.d/apache2 restart
NOTE: If, when you restart Apache, you receive an error message such as "Invalid command 'ExtFilterDefine'", you need to enable the filter module and restart Apache again:
sudo a2enmod ext_filter
4. Start the dispatcher process (as a normal user, not root) and point it to your database directory:
nohup $EXEC_DIR/bin/dispatcher --osm-base --db-dir=$DB_DIR &
With meta data:
nohup $EXEC_DIR/bin/dispatcher --osm-base --db-dir=$DB_DIR --meta &
Note: to convert this process to a service that starts up when your system boots do this (... in progress)
5. Test your Web-API by sending it the following command:
wget --output-document=test.xml http://[your_domain_or_IP_address]/api/interpreter?data=%3Cprint%20mode=%22body%22/%3E
The xml output document should look something like this:
<?xml version="1.0" encoding="UTF-8"?>
<osm-derived>
  <note>
    The data included in this document is from www.openstreetmap.org.
    It has there been collected by a large group of contributors.
    For individual attribution of each item please refer to
    https://www.openstreetmap.org/api/0.6/[node|way|relation]/#id/history
  </note>
  <meta osm_base=""/>
</osm-derived>
If the output from the Web API is something else (garbage or large binary data), or if the Web API was not found, make sure CGI is enabled in Apache:
sudo a2enmod cgi
Then restart apache:
sudo service apache2 restart
Area creation
This section was taken over from http://overpass-api.de/full_installation.html and may need some revision. Please also check the discussion page and add those details which are worth mentioning here.
To use areas with Overpass API, you essentially need another permanently running process that generates the current areas from the existing data in batch runs.
First, you need to copy the rules directory into a subdirectory of the database directory:
cp -pR "../rules" $DB_DIR
Hint: If you used the tarball (Install Option 1 above), the rules subfolder is missing (as of Sep 2015)!
Find it at https://github.com/drolbr/Overpass-API
The next step is to start a second dispatcher that coordinates read and write operations for the areas related files in the database:
nohup $EXEC_DIR/bin/dispatcher --areas --db-dir=$DB_DIR &
chmod 666 "../db/osm3s_v0.7.*_areas"
The dispatcher has been successfully started if you find a line "Dispatcher just started." with the correct date (in UTC) in the file transactions.log in the database directory.
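A quick way to sketch this check from the shell (assumes $DB_DIR is set as above and the dispatcher writes to transactions.log there):

```shell
# Show the most recent startup line, if any; quiet when the log is absent
grep 'Dispatcher just started.' "$DB_DIR/transactions.log" 2>/dev/null | tail -n 1
```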
The third step then is to start the rule batch processor as a daemon:
nohup $EXEC_DIR/bin/rules_loop.sh $DB_DIR &
Now we don't want this process to impede the real business of the server, so it is strongly suggested to lower its priority. To do this, find the PIDs belonging to the processes rules_loop.sh and ./osm3s_query --progress --rules with
ps -ef | grep rules
and run the following commands for each of the two PIDs:
renice -n 19 -p PID
ionice -c 2 -n 7 -p PID
The second command is not available on FreeBSD. This is not a big problem, because this rescheduling just gives hints to the operating system.
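The PID lookup and the two priority commands can be combined into one hedged sketch. The pgrep pattern is an assumption and may need adjusting on your system; ionice is Linux-only:

```shell
# Lower CPU and I/O priority of the area batch processes
for pid in $(pgrep -f 'rules_loop.sh|osm3s_query --progress --rules'); do
  renice -n 19 -p "$pid"
  ionice -c 2 -n 7 -p "$pid" 2>/dev/null || true  # skip silently where ionice is unavailable
done
```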
When the batch process has completed its first cycle, all areas become accessible via the database at once. This may take up to 24 hours.
Troubleshooting
runtime error: open64: 2 /osm3s_v0.6.91_osm_base Dispatcher_Client
Note: if you get an output doc that looks more like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
  <head>
    <meta http-equiv="content-type" content="text/html; charset=utf-8" lang="en"/>
    <title>OSM3S Response</title>
  </head>
  <body>
    <p>The data included in this document is from www.openstreetmap.org. It has there been
    collected by a large group of contributors. For individual attribution of each item please
    refer to https://www.openstreetmap.org/api/0.6/[node|way|relation]/#id/history</p>
    <p><strong style="color:#FF0000">Error</strong>: runtime error: open64: 2 /osm3s_v0.6.91_osm_base Dispatcher_Client::1 </p>
  </body>
</html>
Then it may indicate that the dispatcher process is not running or not configured correctly.
runtime error: open64: 2 No such file or directory /osm3s_v0.7.51_osm_base Dispatcher_Client::1
Make sure that the first dispatcher is running.
File_Error Address already in use 98 /srv/osm3s/db_dir//osm3s_v0.7.3_osm_base Dispatcher_Server::4
Check for stale lock files in the following two locations before restarting a crashed or killed dispatcher:
- your db directory (a file named osm3s_v*_osm_base)
- /dev/shm (a file with the same name, see below)
To clean up these lock files automatically you can try running:
File_Error 17 /osm3s_v0.6.94_osm_base Dispatcher_Server::1
If you killed (or crashed) the dispatcher daemon and wish to restart it, you might encounter this error (unless you reboot): there is a lock file, /dev/shm/osm3s_v0.6.94_osm_base, that prevents other dispatchers from running while one is already running. Remove that file (after checking that no dispatcher is actually running) and restart the dispatcher.
To remove this lock file (and others), try running:
No such file or directory /srv/osm-3s_v0.7.52/db/areas.bin File_Blocks::File_Blocks::1
This error might happen if the file permissions for the area files are wrong. Try
chown -R www-data:www-data $DB_DIR/area*
- Good point! This was really just a quick trial and error solution, which might be wrong. Any detailed step-by-step instruction would be really helpful: I got the described error message while following the above official install instructions, so there's definitely some missing part in the documentation! free_as_a_bird (talk) 23:18, 12 September 2015 (UTC)
- Yes, that's not really recommended. Usually you would run both dispatcher and the rules_loop.sh script as a dedicated (non www-data) user. OTOH www-data still needs to have read access to the database files, as the 'interpreter' is run as www-data user via CGI. I think it is best to discuss this all on the Overpass Developer list (see Info Ambox on this page) and also get some feedback from Roland. Mmd (talk) 11:41, 13 September 2015 (UTC)
Database population problem
(Found in version 0.7.5) If you receive an out of memory error while populating the database:
Out of memory in UB 25187: OOM killed process 21091 (update_database)
Try adding --flush-size=1 as a parameter when calling update_database; in most cases this means adding the parameter to the last line of the init_osm3s.sh script.
Area batch run out of memory error
When generating areas in a batch run, you may receive the following:
Query run out of memory in "recurse" at line 255 using about 1157 MB
Assuming you have enough free physical memory (4 GB worked for me), try removing all the "area" files from your database directory and increasing the element-limit (in your $DB_DIR/rules/rules.osm3s file) to "2073741824".
Apache config fails
If you encounter some message like this one when (re)starting the apache server:
# apache2ctl graceful
Syntax error on line 12 of /etc/apache2/httpd.conf:
Invalid command 'Header', perhaps misspelled or defined by a module not included in the server configuration
Action 'graceful' failed.
then Apache does not have mod_headers enabled. You can activate mod_headers by running:
# a2enmod headers
Enabling module headers.
To activate the new configuration, you need to run:
  service apache2 restart
# apache2ctl graceful
After this, apache should start up correctly.
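To verify which of the modules this setup relies on (mod_headers, mod_ext_filter, mod_cgi) are actually loaded, one sketch using apache2ctl:

```shell
# List loaded Apache modules and filter for the ones this setup needs;
# falls back to a message if apache2ctl is not present
apache2ctl -M 2>/dev/null | grep -E 'headers|ext_filter|cgi' \
  || echo "apache2ctl not found or modules not enabled"
```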
Apache: HTTP 403 Forbidden errors
If you run into HTTP 403 Forbidden errors with Apache, double-check your configuration in /etc/apache2/apache2.conf to make sure your directory is explicitly allowed.
Contributors corner
|The following section includes some information from contributors. Note that it is not officially supported or endorsed. Use at your own risk.|
Via a Docker image
An unofficial Docker image is available here. This image supports database initialisation, metadata, minute diffs, area creation and API access via HTTP.
The installation is very simple, but it requires Docker on the host computer. Then you have to:
- Download and unzip the archive.
- Edit the conf.sh file to your liking.
- Type make to compile and launch the image.
The complete image creation from scratch takes about 40 hours. Afterwards the API is accessible at http://localhost:5001/api (you may want to change this port in the conf.sh file).
CentOS or RHEL 7
1. Install dependencies
$ sudo yum install tar make gcc-c++ expat expat-devel zlib-devel bzip2 rpm-build gcc ruby-devel
$ sudo gem install fpm
2. Get source
$ wget http://dev.overpass-api.de/releases/osm-3s_v0.7.52.tar.gz
$ tar -xvzf osm-3s_v0.7.52.tar.gz
3. Compile the software
cd osm*
./configure CXXFLAGS="-O3" --prefix=/usr/local/osm3s
make
4. Systemd unit file
$ vim overpass-api.service

[Unit]
Description=Overpass API dispatcher daemon
After=syslog.target

[Service]
Type=simple
ExecStart=/usr/local/osm3s/bin/dispatcher --osm-base --db-dir=/var/lib/osm3s/data/db
ExecStop=/usr/local/osm3s/bin/dispatcher --terminate

[Install]
WantedBy=multi-user.target
5. Create rpm package with some post install/remove scripts
$ vim post-install

#!/bin/bash
mv /usr/local/osm3s/bin/overpass-api.service /etc/systemd/system
mkdir -p /var/lib/osm3s/data/db
$ vim post-remove

#!/bin/bash
rm /etc/systemd/system/overpass-api.service
Create the rpm package using fpm:
/usr/local/bin/fpm -s dir -t rpm -n overpass-api -v 0.7 --iteration 52 --exclude bin/.dirstamp --exclude bin/.libs --after-install post-install --after-remove post-remove --prefix /usr/local/osm3s bin/
The package is available on https://packagecloud.io/visibilityspots/packages and a proof of concept is available at https://github.com/visibilityspots/vagrant-puppet/tree/overpass-api