Databases and data access APIs
This page provides an overview of the databases that could be used to store and manipulate OSM data, how to obtain data to populate the databases, and how to query them to find something useful.
It is intended as an overview for new developers who wish to write software to use OSM data, and not for end users of the information.
Sources of OSM Data
See also Downloading data for a rundown of the basic options.
The various sources of OSM data (either the whole world, or a small part of it) are identified below with links to other Wiki pages which provide more detail.
Most of the following methods return data in the OSM XML format, which can be used by other tools to populate a database. The format of the data is described in Data Primitives.
Every week a dump of the entire current OSM dataset is saved in different formats and made available as Planet.osm. Quite a few people break this file down into smaller files for different regions and make the extracts available separately on mirror servers. Various tools can cut the Planet file into smaller areas if required, and pre-cut extracts are also available, e.g. from Geofabrik (pre-selected regions, such as individual states) or Protomaps (PBF data filtered by a bounding polygon, with a time-limited link to re-download the same data). Some sources omit metadata from tag-less nodes to save space.
Differences between the live OSM data and the planet dump are also published each minute as replication diffs (in the OsmChange format), so it is possible to maintain an up-to-date copy of the OSM dataset.
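A consumer of these diffs mostly needs to walk the OsmChange structure. A minimal sketch in Python (the embedded fragment is hypothetical; real minutely diffs are gzipped files fetched from the replication server):

```python
import xml.etree.ElementTree as ET

# A tiny, made-up OsmChange fragment for illustration only.
OSC = """<osmChange version="0.6">
  <modify>
    <node id="123" version="2" lat="51.5" lon="-0.1">
      <tag k="amenity" v="cafe"/>
    </node>
  </modify>
  <delete>
    <node id="456" version="3" lat="48.1" lon="11.6"/>
  </delete>
</osmChange>"""

def parse_osmchange(xml_text):
    """Yield (action, element_type, id) tuples from an OsmChange document."""
    root = ET.fromstring(xml_text)
    for action in root:          # <create>, <modify> or <delete> blocks
        for element in action:   # <node>, <way> or <relation> elements
            yield action.tag, element.tag, int(element.get("id"))

changes = list(parse_osmchange(OSC))
print(changes)  # [('modify', 'node', 123), ('delete', 'node', 456)]
```

A real updater would apply each tuple to its local database in document order, since later actions can depend on earlier ones.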
The main API is the method of obtaining OSM data used by editors, as this is the only method of changing the OSM data in the live database. The API page provides a link to the specification of the protocol to be used to obtain data.
Its limitations are:
- it will only return small areas (less than 0.25 degrees square).
- This method of obtaining data should therefore be reserved for editing applications. Use other methods for rendering, routing or other purposes.
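A client can check the size constraint before calling the API. A small sketch, treating 0.25 square degrees as the limit (the openstreetmap.org default, but configurable per server):

```python
def bbox_within_api_limit(min_lon, min_lat, max_lon, max_lat, limit=0.25):
    """Return True if a bounding box is small enough for a main-API
    /map request (area measured in square degrees)."""
    area = (max_lon - min_lon) * (max_lat - min_lat)
    return 0 <= area <= limit

print(bbox_within_api_limit(-0.1, 51.5, 0.1, 51.6))  # True: 0.02 sq. degrees
print(bbox_within_api_limit(-1.0, 51.0, 1.0, 52.0))  # False: 2.0 sq. degrees
```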
The Overpass API is a read-only API that serves up custom selected parts of the OSM map data. In contrast to the editing API described above, the Overpass API is optimized for data consumers who need anything from a few elements in seconds up to roughly 10 million elements in minutes, selected by search criteria such as location, type of object, tag properties, proximity, or combinations of these. It acts as a database backend for various services. Its query language is documented in the Overpass QL guide/language reference. It is highly recommended to get familiar with its features via overpass turbo, an interactive web-based frontend.
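As an illustration of such search criteria, here is a sketch that builds an Overpass QL query string for all elements carrying a given tag inside a bounding box. The tag and coordinates are made up; the resulting string would then be POSTed to an Overpass endpoint of your choice (no request is made here):

```python
def overpass_query(key, value, bbox):
    """Build an Overpass QL query for all nodes, ways and relations
    with key=value inside a (south, west, north, east) bounding box,
    returning results as JSON."""
    s, w, n, e = bbox
    return (
        f"[out:json][timeout:25];"
        f'nwr["{key}"="{value}"]({s},{w},{n},{e});'
        f"out body;"
    )

q = overpass_query("amenity", "drinking_water", (51.5, -0.2, 51.6, -0.1))
print(q)
# [out:json][timeout:25];nwr["amenity"="drinking_water"](51.5,-0.2,51.6,-0.1);out body;
```

Pasting the same query into overpass turbo is an easy way to experiment with the selection criteria interactively.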
The Xapi service allowed OSM data to be downloaded in OSM XML format for a given region of the globe, filtered by tag. The service has been replaced by Overpass; legacy XAPI applications can use the XAPI Compatibility Layer.
The database schema for the main API database (openstreetmap.org) can be found here: Rails port/Database schema.
OSM uses different database schemas for different applications:
- Whether the schema supports updating with OsmChange format "diffs".
- This can be extremely important for keeping world-wide databases up-to-date, as it allows the database to be kept up-to-date without requiring a complete (and space- and time-consuming) full, worldwide re-import. However, if you only need a small extract, then re-importing that extract may be a quicker and easier method to keep up-to-date than using the OsmChange diffs.
- Whether the schema has pre-built geometries.
- Some database schemas provide native (e.g: PostGIS) geometries, which allows their use in other pieces of software which can read those geometry formats. Other database schemas may provide enough data to produce the geometries (e.g: nodes, ways, relations and their linkage) but not in a native format. Some can provide both. If you want to use the database with other bits of software such as a GIS editor then you probably want a schema with these geometries pre-built. However, if you are doing your own analysis, or are using software which is written to use OSM node/way/relations then you may not need the geometries.
- Whether the full set of OSM data is kept.
- Some schemas will retain the full set of OSM data, including versioning, user IDs, changeset information and all tags. This information is important for editors, and may be of importance to someone doing analysis. However, if it is not important then it may be better to choose a "lossy" schema, as it is likely to take up less disk space and may be quicker to import.
- hstore columns
- Whether the schema uses a key-value pair datatype for tags. (This datatype is called hstore in PostgreSQL.)
- hstore is perhaps the most straightforward approach to represent OSM's freeform tagging in PostgreSQL. However, not all tools use it and other databases might not have (or need) an equivalent.
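For example, a query against an hstore tags column might be built like this. The table and column names follow the common osm2pgsql layout but should be treated as assumptions, and real code should use parameterized queries rather than string interpolation:

```python
# Hypothetical table/column names modelled on the osm2pgsql hstore layout.
def hstore_filter_sql(table, key, value):
    """Build a SQL snippet selecting rows whose hstore 'tags' column
    contains key=value, using PostgreSQL's -> lookup operator.

    For illustration only: production code should pass key/value as
    bound parameters, not interpolate them into the SQL string.
    """
    return f"SELECT osm_id, tags FROM {table} WHERE tags -> '{key}' = '{value}';"

sql = hstore_filter_sql("planet_osm_point", "amenity", "pub")
print(sql)
# SELECT osm_id, tags FROM planet_osm_point WHERE tags -> 'amenity' = 'pub';
```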
| Schema name | Created with | Used by | Primary use case | Updatable | Geometries (PostGIS) | Lossless | hstore columns | Database |
|---|---|---|---|---|---|---|---|---|
| osm2pgsql | osm2pgsql | Mapnik, Kothic JS | Rendering | yes | yes | no | optional | PostgreSQL |
| imposm | Imposm | | Rendering | no | yes | no | Imposm2: no, Imposm3: yes | PostgreSQL |
| pgsnapshot | Openstreetmap h3 | | Analysis | no | yes | yes | yes | PostgreSQL, Spark |
The osm2pgsql schema has historically been the standard way to import OSM data for use in rendering software such as Mapnik. It also has uses in analysis, although the schema does not support versioning or history directly. The import is handled by the osm2pgsql software, which has two modes of operation, slim and non-slim, controlling how much memory is used during import and whether the database can be updated afterwards. Slim mode supports updates, but import time depends heavily on disk speed and may take several days for the full planet, even on a fast machine. Non-slim mode is faster, but does not support updates and requires a vast amount of memory.
The import process is lossy and is controlled by a configuration file in which the keys of elements of interest are listed. The values of these "interesting" elements are imported as columns in the points, lines and polygons tables. (Alternatively, the values of all tags can be imported into an hstore-type column.) These tables can be very large, and care must be taken to get good indexed performance. If the set of "interesting" keys changes after the import and no hstore column was used, the import must be re-run.
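The lossy selection step can be sketched as follows. The key list stands in for the style/configuration file; with an hstore column enabled, the remaining tags survive:

```python
# Stand-in for the osm2pgsql style file's list of "interesting" keys.
INTERESTING_KEYS = ["highway", "name", "amenity"]

def to_row(tags, interesting=INTERESTING_KEYS, hstore=False):
    """Map an element's tags onto column values; keys not in the
    configured list are dropped unless an hstore column catches them."""
    row = {k: tags.get(k) for k in interesting}
    if hstore:
        row["tags"] = {k: v for k, v in tags.items() if k not in interesting}
    return row

tags = {"highway": "residential", "name": "High St", "surface": "asphalt"}
print(to_row(tags))               # 'surface' is lost
print(to_row(tags, hstore=True))  # 'surface' survives in the hstore column
```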
Starting with version 1.3.0, configuration became more flexible: a Lua script now describes the names, fields and types of the database tables. For each processed OSM object, a Lua callback is called in which you decide which tables the object should be written to.
Osm2pgsql is used by Nominatim, too.
For more information, please see the osm2pgsql website.
ApiDB is a schema designed to replicate the storage of OSM data in the same manner as the main API schema and can be produced using the Osmosis commands for writing ApiDBs or updating ApiDBs with changes. This schema does not have any native geometry, although in the nodes, ways and relations tables there is enough data to reconstruct the geometries. This schema is not recommended for users who need geometries.
The schema supports history, although the import process does not load it; a history will accumulate as replication diffs are applied, so the schema can be used for mirroring the main OSM database.
The import process, even on good hardware, can take several weeks for the full planet. The database will take approximately 1 TB as of April 2012.
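Since schemas like this store ways only as ordered node references plus per-node coordinates, reconstructing a geometry is a join-and-concatenate step. A sketch with in-memory stand-ins for the nodes table, emitting WKT:

```python
def way_to_wkt(node_refs, node_coords):
    """Build a WKT LINESTRING for a way from its ordered node references
    and a lookup table of node id -> (lon, lat)."""
    points = ", ".join(
        f"{lon} {lat}"
        for lon, lat in (node_coords[ref] for ref in node_refs)
    )
    return f"LINESTRING({points})"

# Hypothetical rows from a nodes table: id -> (lon, lat).
nodes = {1: (-0.1, 51.5), 2: (-0.1, 51.51), 3: (-0.09, 51.51)}
print(way_to_wkt([1, 2, 3], nodes))
# LINESTRING(-0.1 51.5, -0.1 51.51, -0.09 51.51)
```

Schemas with pre-built geometries do this step once at import time instead of at every query.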
The pgsnapshot schema is a modified and simplified version of the main OSM DB schema which provides a number of useful features, including generating geometries and storing tags in a single hstore column for easier use and indexing. JXAPI's schema is built on pgsnapshot.
Imposm is an import tool that can generate schemas using a fully configurable mapping. As such it does not really count as a single schema, but it is included here for completeness. The ability to break data out thematically into different tables greatly simplifies the problem of indexing performance, and may result in smaller table and index sizes on disk.
Nominatim is a forward and reverse geocoder. The database is produced by a special back-end of Osm2pgsql. It is a special-purpose database, and may not be suitable for other problem domains such as rendering. The Nominatim homepage provides links to the detailed technical documentation, change logs, etc.
The OGR library can read OSM data (XML and PBF) and can write into various other formats, including PostgreSQL/PostGIS, SQLite/Spatialite, and MS SQL databases (though I've tried only PostGIS). The ogr2ogr utility can do the conversion without any programming necessary with a schema configuration that's reminiscent of osm2pgsql. One interesting feature is that it resolves relations into geometries: OSM multipolygons and boundaries become OGC MultiPolygon, OSM multilinestrings and routes become OGC MultiLineString, and other OSM relations become OGC GeometryCollection.
It is listed as lossy because membership info, such as nodes in ways and relation members, is not preserved. Metadata is optional. Untagged/unused nodes and ways are optional.
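The relation-to-geometry mapping described above can be summarised in a few lines. This is a simplification; the exact behaviour depends on the OGR driver version and its osmconf.ini configuration:

```python
def ogc_geometry_for_relation(relation_type):
    """Simplified sketch of how the OGR OSM driver maps an OSM relation's
    type tag to an OGC geometry type, per the description above."""
    if relation_type in ("multipolygon", "boundary"):
        return "MultiPolygon"
    if relation_type in ("multilinestring", "route"):
        return "MultiLineString"
    return "GeometryCollection"

print(ogc_geometry_for_relation("route"))        # MultiLineString
print(ogc_geometry_for_relation("restriction"))  # GeometryCollection
```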
The Overpass API is built on top of a custom back-end database, with software called OSM3S (see OSM3S/install for install and setup instructions). Because this is a custom database engine, it is hard to compare with the other database schemas. The complete planet file can be recreated from the database. It is geared towards good performance on locally concentrated datasets.
OsmSharp is a toolbox of OSM-related routines, including some to import OSM data into Oracle databases.
MongOSM is a set of Python scripts for importing, querying and (maybe) keeping up-to-date OSM data in a MongoDB database.
Inspired by mongOSM, Node-MongOSM uses Mongoose to provide schemas and insert vs upsert options via a command line interface.
Objects are loaded into a single osmdata table with column geom and tags.
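As a sketch of what such documents and geospatial queries might look like: the field names 'geom' and 'tags' follow the convention above, the query document uses standard MongoDB GeoJSON syntax, and no server is contacted here (the documents are plain dictionaries):

```python
def node_document(osm_id, lon, lat, tags):
    """Represent an OSM node as a MongoDB document with a GeoJSON Point
    in 'geom' and the raw tags in 'tags' (hypothetical layout)."""
    return {
        "_id": osm_id,
        "geom": {"type": "Point", "coordinates": [lon, lat]},
        "tags": tags,
    }

def near_query(lon, lat, max_metres):
    """Standard MongoDB $near query; it requires a 2dsphere index
    on the 'geom' field."""
    return {"geom": {"$near": {
        "$geometry": {"type": "Point", "coordinates": [lon, lat]},
        "$maxDistance": max_metres,
    }}}

doc = node_document(123, -0.1, 51.5, {"amenity": "cafe"})
query = near_query(-0.1, 51.5, 500)
print(doc["geom"]["type"])  # Point
```

With a live database, `collection.insert_one(doc)` and `collection.find(query)` would store and retrieve these documents.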
openstreetmap_h3 is a high-performance tool for importing OSM PBF files into PostGIS databases, or into the Big Data ecosystem via the Apache Arrow data format. It splits planet dump geodata by H3 indexes into many partitions, to simplify worldwide geo analysis/aggregation and routing tasks.
Choice of DBMS
While OSM.org mainly uses PostgreSQL, several different database systems are used by OSM users:
| DBMS | Advantages | Disadvantages | Used by |
|---|---|---|---|
| PostgreSQL | Can handle large datasets. The PostGIS extension allows the use of geographic extensions | Requires a database server to be installed, with associated administrative overhead | Main OSM API, Mapnik renderer |
| MySQL | Can handle large datasets | Does not have geographic extensions. Requires a database server to be installed, with associated administrative overhead | The main API database used MySQL until version 0.6, when it was changed to PostgreSQL |
| SQLite | Small, does not require a database server | May struggle with large datasets - see Mail Archive (from 2008, may not be current) | Microcosm |
| MongoDB | Native geospatial indexes and queries | | MongOSM, Node-MongOSM |
| Hadoop / Hive | Can handle very large datasets ("big data"). Extensions available for geospatial queries (for example ESRI GIS for Hadoop) | Requires a Hadoop cluster to be installed, with associated administrative overhead | OSM2Hive |
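For small extracts, the serverless SQLite option can be as simple as a single table. A minimal, hypothetical schema with tags stored as JSON text, since SQLite has no key-value column type like hstore:

```python
import json
import sqlite3

# In-memory database for illustration; a real import would target a file.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE nodes (id INTEGER PRIMARY KEY, lon REAL, lat REAL, tags TEXT)"
)
conn.execute(
    "INSERT INTO nodes VALUES (?, ?, ?, ?)",
    (123, -0.1, 51.5, json.dumps({"amenity": "cafe"})),
)

row = conn.execute(
    "SELECT lon, lat, tags FROM nodes WHERE id = ?", (123,)
).fetchone()
print(row[0], row[1], json.loads(row[2])["amenity"])  # -0.1 51.5 cafe
```

The Spatialite extension would add real geometry columns and spatial indexes on top of this.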