Databases and data access APIs
This page provides an overview of the databases that could be used to store and manipulate OSM data, how to obtain data to populate the databases, and how to query them to find something useful.
It is intended as an overview for new developers who wish to write software to use OSM data, and not for end users of the information.
Sources of OSM Data
See also Downloading data for a rundown of the basic options.
The various sources of OSM data (either the whole world, or a small part of it) are identified below with links to other Wiki pages which provide more detail.
Most of the following methods of obtaining data return it in the OSM XML format, which can be used by other tools to populate a database. The format of the data is described in Data Primitives.
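As a sketch of what the OSM XML format looks like, the following parses a small hand-made fragment (the element IDs, coordinates and tags are invented for illustration) using only Python's standard library:

```python
import xml.etree.ElementTree as ET

# A minimal, invented OSM XML fragment: two nodes and a way that
# references them, illustrating the node/way/tag primitives.
osm_xml = """<osm version="0.6">
  <node id="1" lat="51.5074" lon="-0.1278"/>
  <node id="2" lat="51.5080" lon="-0.1270"/>
  <way id="10">
    <nd ref="1"/>
    <nd ref="2"/>
    <tag k="highway" v="residential"/>
    <tag k="name" v="Example Street"/>
  </way>
</osm>"""

root = ET.fromstring(osm_xml)

# Index the nodes by id so way geometries can be reconstructed
# from the <nd ref="..."/> references.
nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
         for n in root.findall("node")}

for way in root.findall("way"):
    tags = {t.get("k"): t.get("v") for t in way.findall("tag")}
    coords = [nodes[nd.get("ref")] for nd in way.findall("nd")]
    print(way.get("id"), tags.get("name"), coords)
```

This node/way/tag linkage is exactly what a "lossless" database schema has to preserve, and what a schema with pre-built geometries resolves into geometry objects at import time.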
Every week a dump of the entire current OSM dataset is saved in different formats and made available as Planet.osm. Quite a few people break this file down into smaller files for different regions and make extracts available separately on mirror servers. Various tools are available to cut the Planet file up into smaller areas if required.
Differences between the live OSM data and the planet dump are also published each minute as OsmChange diffs, so it is possible to maintain an up-to-date copy of the OSM dataset.
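The minute diffs are published under sequence numbers that map onto a three-level directory layout on the planet server; a small sketch of building a download URL from a sequence number (the base URL shown is planet.openstreetmap.org's replication area):

```python
def minute_diff_url(sequence,
                    base="https://planet.openstreetmap.org/replication/minute"):
    """Build the URL of a minute OsmChange diff from its sequence number.

    Sequence numbers are zero-padded to nine digits and split into
    three path components of three digits each.
    """
    s = f"{sequence:09d}"
    return f"{base}/{s[0:3]}/{s[3:6]}/{s[6:9]}.osc.gz"

print(minute_diff_url(4618731))
# -> https://planet.openstreetmap.org/replication/minute/004/618/731.osc.gz
```

The current sequence number is published alongside the diffs in a state.txt file, which is what update tools poll to find new diffs.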
Due to the massive growth of client applications, the following APIs may at times be unavailable.
Please check Platform Status.
The Xapi servers allow OSM data to be downloaded in XML format for a given region of the globe, filtered by tag. Xapi will return quite large areas (city level) if requested, which distinguishes it from the standard OSM API described below.
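An Xapi request encodes the tag and bounding-box filters as predicates directly in the URL; a sketch of building one (the server name is illustrative, since Xapi instances come and go, so check Platform Status first):

```python
def xapi_url(element, bbox,
             base="http://xapi.openstreetmap.org/api/0.6", **tags):
    """Build an Xapi request URL.

    bbox is (left, bottom, right, top) in degrees; keyword arguments
    become [key=value] predicates. The base URL is an assumption:
    substitute whichever Xapi server is currently running.
    """
    preds = "".join(f"[{k}={v}]" for k, v in tags.items())
    left, bottom, right, top = bbox
    return f"{base}/{element}{preds}[bbox={left},{bottom},{right},{top}]"

print(xapi_url("node", (-0.2, 51.4, 0.0, 51.6), amenity="pub"))
```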
The main API is the method of obtaining OSM data used by editors, as it is the only method of changing the OSM data in the live database. The API page links to the specification of the protocol used to obtain data. Its main limitation is that it will only return very small areas (less than 0.25 square degrees).
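A "map" call against the main API takes a bbox=left,bottom,right,top parameter; the sketch below builds such a URL and enforces the 0.25-square-degree limit client-side (the endpoint shown is the standard api.openstreetmap.org one):

```python
def map_call_url(left, bottom, right, top,
                 base="https://api.openstreetmap.org/api/0.6"):
    """Build a main-API "map" call URL for a small bounding box.

    The live API rejects requests covering more than 0.25 square
    degrees, so the area is checked before building the URL.
    """
    if (right - left) * (top - bottom) > 0.25:
        raise ValueError("bounding box larger than 0.25 square degrees")
    return f"{base}/map?bbox={left},{bottom},{right},{top}"

print(map_call_url(-0.1, 51.5, -0.05, 51.55))
```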
The Overpass API allows quite complex queries on larger areas.
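As an illustration, an Overpass QL query for pub nodes in a bounding box might look like the following (the tag filter and coordinates are invented for the example; Overpass QL bounding boxes are given as south, west, north, east):

```python
# A sample Overpass QL query, held as a string ready to be POSTed to
# an Overpass interpreter endpoint. [out:json] selects JSON output.
query = """
[out:json][timeout:25];
node["amenity"="pub"](51.4,-0.2,51.6,0.0);
out body;
"""
print(query.strip())
```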
Choice of DBMS
There are several different database systems used by OSM users:
|DBMS||Advantages||Disadvantages||Used by|
|PostgreSQL||Can handle large datasets. The PostGIS extension adds geographic functions||Requires a database server to be installed, with associated administrative overhead||Main OSM API, Mapnik renderer|
|MySQL||Can handle large datasets||Does not have geographic extensions. Requires a database server to be installed, with associated administrative overhead||The main OSM API used MySQL until version 0.6, when it changed to PostgreSQL|
|SQLite||Small; does not require a database server||Will struggle with large datasets (see Mail Archive)||Microcosm|
|MongoDB||Native geospatial indexes and queries|| ||Osmo, osmcompiler|
Different OSM applications use different database schemas. When choosing a schema, the following considerations are important:
- Whether the schema supports updating with OsmChange format "diffs".
- This can be extremely important for keeping world-wide databases up-to-date, as it allows the database to be kept up-to-date without requiring a complete (and space- and time-consuming) full, worldwide re-import. However, if you only need a small extract, then re-importing that extract may be a quicker and easier method to keep up-to-date than using the OsmChange diffs.
- Whether the schema has pre-built geometries.
- Some database schemas provide native (e.g. PostGIS) geometries, which allows their use in other pieces of software which can read those geometry formats. Other database schemas may provide enough data to produce the geometries (e.g. nodes, ways, relations and their linkage) but not in a native format. Some can provide both. If you want to use the database with other bits of software such as a GIS editor then you probably want a schema with these geometries pre-built. However, if you are doing your own analysis, or are using software which is written to use OSM node/way/relations, then you may not need the geometries.
- Whether the full set of OSM data is kept.
- Some schemas will retain the full set of OSM data, including versioning, user IDs, changeset information and all tags. This information is important for editors, and may be of importance to someone doing analysis. However, if it is not important then it may be better to choose a "lossy" schema, as it is likely to take up less disk space and may be quicker to import.
|Schema name||Created with||Used by||Primary use case||Updatable?||Geometries (PostGIS)?||Lossless?||Uses hstore columns?||Database|
|osm2pgsql||osm2pgsql||Mapnik, Kothic JS||Rendering||Yes||Yes||No||optional||PostgreSQL|
The osm2pgsql schema has historically been the standard way to import OSM data for use in rendering software such as Mapnik. It also has uses in analysis, although the schema does not directly support versioning or history. The import is handled by the osm2pgsql software, which has two modes of operation, slim and non-slim; these control the amount of memory used during import and whether the database can be updated afterwards. Slim mode supports updates, but import time is highly dependent on disk speed and may take several days for the full planet, even on a fast machine. Non-slim mode is faster, but does not support updates and requires a vast amount of memory.
The import process is lossy and controlled by a configuration file in which the keys of elements of interest are listed. The values of these "interesting" elements are imported as columns in the points, lines and polygons tables. (Alternatively, the values of all tags can be imported into a single "hstore" column.) These tables can be very large, and care must be taken to get good index performance. If the set of "interesting" keys changes after the import and no hstore column was used, then the import must be re-run.
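As a sketch, a query against the tables osm2pgsql creates might look like the following. The column names assume that `amenity` was listed as an "interesting" key and that the optional hstore column (named `tags` here) was enabled; the envelope assumes the import used lat/lon coordinates (EPSG:4326), which is not osm2pgsql's default projection:

```python
# Hypothetical SQL against the osm2pgsql rendering tables
# (planet_osm_point, planet_osm_line, planet_osm_polygon).
# "tags" is the optional hstore column; "way" is the geometry column.
# ST_MakeEnvelope(..., 4326) assumes a lat/lon import.
sql = """
SELECT name, tags->'cuisine' AS cuisine
FROM planet_osm_point
WHERE amenity = 'restaurant'
  AND way && ST_MakeEnvelope(-0.2, 51.4, 0.0, 51.6, 4326);
"""
print(sql.strip())
```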
For more information, please see the Osm2pgsql page.
ApiDB is a schema designed to replicate the storage of OSM data in the same manner as the main API, and can be produced using the Osmosis commands for writing ApiDBs or updating ApiDBs with changes. The schema has no native geometry, although the nodes, ways and relations tables contain enough data to reconstruct the geometries.
The schema supports history, although the import process does not populate it; a history will accumulate as replication diffs are applied, so the schema can be used for mirroring the main OSM database.
The import process, even on good hardware, can take several weeks for the full planet. The database will take approximately 1 TB as of April 2012.
The pgsnapshot schema is a modified and simplified version of the main OSM DB schema which provides a number of useful features, including generating geometries and storing tags in a single hstore column for easier use and indexing. JXAPI's schema is built on pgsnapshot.
Although the pgsnapshot schema is technically lossy, only some metadata is lost; full element data (including all tags) is imported.
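Because all tags live in a single hstore column, queries like the following sketch are possible against a pgsnapshot database (the `ways` table and `tags` column follow the pgsnapshot convention; the filter itself is invented for the example):

```python
# Hypothetical SQL against a pgsnapshot schema. The hstore "?"
# operator tests for the presence of a key, and can use an index
# on the tags column.
sql = """
SELECT id, tags->'name' AS name
FROM ways
WHERE tags ? 'highway'
LIMIT 10;
"""
print(sql.strip())
```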
Imposm is an import tool that can generate schemas using a fully configurable mapping (a good default is provided for most use cases). As such it is not really a single schema, but it is included here for completeness. The ability to break data out thematically into different tables greatly simplifies the problem of indexing performance, and may result in smaller table and index sizes on disk.
Nominatim is a geocoder where the database is produced by a special back-end of Osm2pgsql. It is a special-purpose database, and may not be suitable for other problem domains such as rendering or routing. The development overview gives information on some of the innards.
Nominatim's database is notoriously hard to set up, so you may want to try one of the pre-indexed data releases first.
The Overpass API provides a query language on top of a custom back-end database, OSM3S (see OSM3S/install for installation and setup instructions). Because the database is custom, it is hard to compare with other database schemas. The complete planet file can be recreated from the database. It is geared toward good performance on locally concentrated datasets.
OsmSharp is a toolbox of OSM-related routines, including some to import OSM data into Oracle databases.
MongOSM is a set of Python scripts for importing, querying and (maybe) keeping up-to-date OSM data in a MongoDB database.
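As a sketch of what a native geospatial query looks like in MongoDB, the following builds a query document for points within 1 km of a coordinate. The field name `loc` and the GeoJSON layout are assumptions for illustration; MongOSM's actual document schema may differ:

```python
# Hypothetical MongoDB query document using the native geospatial
# operators: points within 1 km of a coordinate. Assumes documents
# store their location as GeoJSON in a field named "loc" that has a
# 2dsphere index (both assumptions, not MongOSM's documented schema).
query = {
    "loc": {
        "$near": {
            "$geometry": {
                "type": "Point",
                "coordinates": [-0.1278, 51.5074],  # lon, lat
            },
            "$maxDistance": 1000,  # metres
        }
    }
}
print(query)
```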
Inspired by mongosm, Node MongOSM uses mongoose to provide schemas and insert vs upsert options.