Planet.osm/full
There are full history dumps at https://planet.openstreetmap.org/planet/full-history/ (OSM XML, bzip2-compressed) and https://planet.openstreetmap.org/pbf/full-history/ (PBF) that include almost all OSM data. A new version is released every week. These are big files: as of 2024-11-01, the download is 221.2 GB (bzip2-compressed OSM XML) or 131.3 GB (PBF), and the plain OSM XML variant takes over 3.97 TB when uncompressed.
- For nodes, ways, and relations created after the introduction of API 0.5 (in October 2007; see https://lists.openstreetmap.org/pipermail/talk/2007-October/018638.html), the file includes all versions that ever existed, even if the objects have been deleted since.
- For nodes and ways created before the introduction of API 0.5, the file includes only the version that was visible when the changeover occurred, plus all later versions. If the object had already been deleted when API 0.5 was introduced, then it is not included.
- Since segments were dropped with the introduction of API 0.5, they are not included.
- The file does not include redacted elements, which cannot be published under the ODbL.
- User names listed in the file reflect only the last known name: if a user has changed their user name, all of their edits appear under the current name, so the user name is always the same for a given user id.
- Anonymous edits have no uid (0 in PBF) and no user name.
- Changesets are included in the file.
This full history dump is only useful if you want to develop something like Historical Coverage or to do statistical analyses. If you are just interested in the current data, use Planet.osm instead.
Data Format
The full history dump uses the same XML schema as a normal planet file, with the exception that there will usually be several versions of the same object.
The file is ordered by object type (nodes, then ways, then relations), then by id, then by version.
The file does not have any newline characters. Code that tries to read this file line-by-line will fail.
The uncompressed size of this file is close to 4 TB (see above). Code that tries to parse this file into a DOM tree will fail.
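For illustration, here is a minimal Python sketch (the script name and the idea of counting object versions are just examples) that reads the dump as a stream with xml.etree.ElementTree.iterparse, so neither the missing newlines nor the file size are a problem:
import sys
import xml.etree.ElementTree as ET

# Count how many object versions of each type the dump contains.
# Reads from stdin so the file can be decompressed on the fly, e.g.:
#   bzip2 -cd full-planet-110115-1800.osm.bz2 | python3 count_versions.py
counts = {"node": 0, "way": 0, "relation": 0}

context = ET.iterparse(sys.stdin.buffer, events=("start", "end"))
_, root = next(context)                 # the enclosing <osm> element
for event, elem in context:
    if event == "end" and elem.tag in counts:
        counts[elem.tag] += 1
        root.clear()                    # discard finished elements to keep memory flat
print(counts)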
The compressed file is created with a parallel bzip2 implementation, resulting in a multi-stream file. Such a file cannot be read by Python's BZ2File module before Python 3.3, so you may have to convert it to a single stream first.
You can convert the file to a single stream like this:
bzip2 -cd full-planet-110115-1800.osm.bz2 | bzip2 -c > full-planet.new.osm.bz2
This may take a long time (about 36 hours on a 2.66 GHz Intel Core i7 with 8 GB of RAM). The resulting file will be slightly smaller than the original.
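Since Python 3.3, bz2.open can read multi-stream files directly, so the conversion is only needed for tools that cannot. For completeness, a minimal Python sketch of the same re-compression (filenames as in the shell example above; it is single-threaded and therefore just as slow):
import bz2
import shutil

# bz2.open transparently reads all streams of the multi-stream download
# and writes the data back out as a single bzip2 stream.
with bz2.open("full-planet-110115-1800.osm.bz2", "rb") as src, \
     bz2.open("full-planet.new.osm.bz2", "wb") as dst:
    shutil.copyfileobj(src, dst, length=16 * 1024 * 1024)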
Processing
Osmosis does not have explicit support for this type of file, but some Osmosis operations seem to work with it.
Osmium lets you extract data from OSM history files for a given point in time, a time range, a polygon, or a bounding box. See osmium-tool and its extract command, which can handle history dump files in various formats (see the --with-history option).
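As a further example, here is a minimal sketch using the pyosmium Python bindings (not osmium-tool itself) that walks every object version in a history file and counts how many of them are deletions; the filename is a placeholder:
import osmium

class DeletedCounter(osmium.SimpleHandler):
    # Count all object versions and how many of them are deletions.
    def __init__(self):
        super().__init__()
        self.versions = 0
        self.deleted = 0

    def _count(self, obj):
        self.versions += 1
        if not obj.visible:          # a version with visible=false marks a deletion
            self.deleted += 1

    def node(self, n):
        self._count(n)

    def way(self, w):
        self._count(w)

    def relation(self, r):
        self._count(r)

handler = DeletedCounter()
handler.apply_file("history-extract.osh.pbf")
print(handler.versions, "object versions,", handler.deleted, "of them deletions")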
There is further information on User:MaZderMind/Reading OSM History dumps.
Use the osm-history-renderer and its importer to import history files into an osm2pgsql-like database. The render scripts can generate images for arbitrary points in time from it. By accessing the database directly, statistics and other analytics can also be run for given points or ranges in time.
OSHDB and the ohsome API allow running in-depth analyses on OSM history data after conversion to the dedicated .oshdb data format.