1. Database
2. Osmosis
3. Importing data
4. Importing changeset metadata
5. Generating tiles
6. Rails
7. Replication
Set up PostGIS database
Installation instructions below are for PostgreSQL 9.1 or newer; for the equivalent 8.x procedures, refer to Osmosis/PostGIS_Setup.

```shell
# Assuming a database named "owl" here; any other database name works.
createdb owl
createlang plpgsql owl
psql -f /path/to/pgsql/share/contrib/postgis-2.0/postgis.sql owl
psql -f /path/to/pgsql/share/contrib/postgis-2.0/spatial_ref_sys.sql owl
```
Create hstore extension
```shell
psql -c "CREATE EXTENSION hstore;" owl
```
Install OWL schema
```shell
psql -f sql/owl_schema.sql owl
psql -f sql/owl_functions.sql owl
psql -f sql/owl_indexes.sql owl
psql -f sql/owl_constraints.sql owl
```
Configure database connection
Many scripts read connection information from the rails/config/database.yml file, so the username, password, port, and database name need to be configured there for things to work properly; it is best to do this early. Configure the connection in the development section and, optionally, in the test section if you plan to run tests. Note that the test database is cleared on every test run, so keep it separate from the development database where you import data.
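As a rough sketch, the relevant sections of database.yml might look like the following. The database names, username, and password here are placeholders, not values taken from the source:

```yaml
development:
  adapter: postgresql
  database: owl        # the database created above
  username: postgres   # placeholder credentials
  password: secret
  host: localhost
  port: 5432

test:
  adapter: postgresql
  database: owl_test   # separate database; cleared on every test run
  username: postgres
  password: secret
  host: localhost
  port: 5432
```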
For now we're using a modified version of Osmosis; this should become a regular Osmosis plugin in the future. To build Osmosis with the OWL plugin, do the following in a checked-out OWL repository:
```shell
git submodule init
git submodule update
cd osmosis-plugin
./gradlew build
# Copy the resulting build to a path of your convenience
```
Because OWL is based on changes in the data, it requires at least two versions of an OSM object (node, way) in order to generate OWL's internal data (changes, tiles). This means that there are basically two ways to initially populate the database with data:
- Use the full history database dump (full history planet can be found here, some regional extracts here).
- Use a regular database dump (with only one - current - version of each object) and then process OSC files.
Using full history database dump
Due to the size of the full history dumps, data import happens in two stages: first, SQL data files are written to disk by the OWL Osmosis plugin; then they are imported directly into Postgres using the owl_load_data.sql script. This is faster than writing directly to a database from Osmosis.
The following command will process the full history file and output SQL data files to disk:
```shell
osmosis --read-pbf poland.osh.pbf --lp --write-owldb-history-dump directory=/some/output/directory
```
After it finishes, you need to import the SQL files into the database:
```shell
cd /some/output/directory  # Use the location from the Osmosis command above
psql -a -f /directory/where/owl/is/sql/owl_load_data.sql owl
```
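To sanity-check the import, you can count rows in the loaded tables. The table names used here (nodes, ways) are assumptions about the OWL schema, not taken from the source:

```shell
# Quick row counts after the import; table names are assumed
psql -c "SELECT COUNT(*) FROM nodes;" owl
psql -c "SELECT COUNT(*) FROM ways;" owl
```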
Using database dump without history
The process is exactly the same as for full history dump files (the same commands can be used), except that at the end you will not yet be able to generate changes, because there is only one version of each object in the database.
Importing changeset metadata
Unfortunately, full changeset metadata (with tags) is not included in database dumps so it has to be imported separately. Importing metadata from weekly dumps is done using a modified version of the ChangesetMD tool by ToeBee.
```shell
git clone https://github.com/ppawel/ChangesetMD.git
cd ChangesetMD
git checkout owl
wget https://planet.openstreetmap.org/planet/changesets-latest.osm.bz2
bunzip2 changesets-latest.osm.bz2
./changesetmd.py -d owl -f changesets-latest.osm
```
This will import the weekly dump of changeset metadata. If you want newer metadata as well, you need to set up replication for changeset metadata (see below).
The OWL tiler script is used to generate tiles. It reads changeset ids from standard input.
```shell
cd scripts
echo 123456 | ./tiler.rb
```
More advanced usage with changeset ids generated on the fly and passed to the tiler using standard Unix pipes:
```shell
cd scripts
psql -c "SELECT id FROM changesets WHERE created_at BETWEEN '2013-01-01' AND '2013-01-05'" -t -A | ./tiler.rb
```
The Rails application in the rails/ subdirectory provides the main OWL API.
- Ruby 1.9.1 or newer (other versions are untested)
- Rails 3.2.8 or newer (other versions are untested)
First, set up the database connection: copy rails/config/example.database.yml to `database.yml` and configure the connection there.
Then, install dependencies:
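The source does not show the command; for a Rails 3.x application with a Gemfile, installing dependencies is normally done with Bundler (an assumption, not stated in the source):

```shell
# Assumed: standard Bundler workflow for a Rails 3.x app
cd rails
bundle install
```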
To run Rails server:
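The source does not show the command; the standard way to start a Rails 3.x development server (an assumption) is:

```shell
# Assumed: standard Rails server invocation; listens on port 3000 by default
cd rails
bundle exec rails server
```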
- Initialize the replication directory
- Set up (e.g. in crontab) the replication pipeline:
```shell
# Make sure to use the OWL-specific Osmosis version built above
osmosis \
  --read-replication-interval workingDirectory=replication/ \
  --write-owldb-change database=owl user=postgres
```
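The first list item above, initializing the replication directory, is typically done with Osmosis' replication-init task; this exact invocation is an assumption, so check the documentation of your Osmosis version:

```shell
# Creates configuration.txt and download.lock in the working directory
osmosis --read-replication-interval-init workingDirectory=replication/
# Then edit replication/configuration.txt to point at the desired
# minute/hour/day replication base URL
```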
- Downloads the OsmChange file for a specific replication interval (minute/hour/day), according to the configuration in the `configuration.txt` file.
- Updates the data tables (but it does NOT create changes, tiles, or changeset metadata; for that, separate replication needs to be set up).
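As an example, the pipeline could be scheduled from crontab like this. The schedule and paths are placeholders, not values from the source:

```shell
# Example crontab entry (placeholder paths): run the replication pipeline every 5 minutes
*/5 * * * * cd /path/to/owl && osmosis --read-replication-interval workingDirectory=replication/ --write-owldb-change database=owl user=postgres
```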
The official minutely replication stream for changeset metadata can be found here. It works in a similar way to Osmosis data replication: there is a state file for each metadata package.
First, you need to prepare a state file: download the current state file from here. If needed, modify the sequence number in the state file to start from a different point in time.
Copy that file to the replication directory in OWL. Then execute:
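The exact command is not shown in the source. Given the helper script replication/replicate_changesets.sh mentioned below ("note sh instead of rb"), the Ruby script is presumably:

```shell
# Assumed name, inferred from the .sh helper mentioned below
cd replication
./replicate_changesets.rb
```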
This command will start replication: it downloads metadata starting from the sequence number you specified in the state file up to the current state, creating records in the changesets table. You can run this script repeatedly to keep your database up to date.
There is also the replication/replicate_changesets.sh helper script (note sh instead of rb), which is more suitable for running from crontab, as it uses proper locking so that multiple executions don't clash with each other.
Changes and tiles