From OpenStreetMap Wiki


hdc1 150GB 86% used (peaked at 96%)
hdd1 280GB empty


TODO: can someone analyse tile stats in a way that helps us model the disk usage and simulate strategies for using the space available?

Current structure

  • /home/ojw/tiles-ojw is hdc1
  • /home/ojw/tiles-ojw2 is hdd1

/home/ojw/tiles-ojw contains the layers

  • Tiles
  • maplint
  • cycle

Each layer contains a directory for each zoom level (0-17) and all tiles are in subdirectories under there

In the website structure,

  • /Tiles/data is a symlink to /home/ojw/tiles-ojw
  • /Tiles/data/layername/zoom/0/0/0/0.png is then used as the filename for any given image

Proposals for splitting tiles between disks

When forming a proposal, please remember:

  • Must be implementable in PHP
  • Must be implementable in apache mod-rewrite
  • Any suggestions requiring millions of symlinks or hardlinks should be accompanied by software to maintain those links. (existing directory structure is maintained by the upload script in PHP)

Maplint on one disk, osmarender on the other

  • problem: big variation in number of images (e.g. maplint has only a few zoom levels, no seas, etc.)

Zoom-17 on its own disk

(or whatever division of zoom levels is deemed balanced). This could be done by making the /layer/17 directory a symlink to the other disk, manually for each layer.

Even x tiles on one disk odd on the other

Every access to the dev server requests a mix of odd and even x values. Make the even x values symlinks to the other disk: a simple way to always spread the load across both disks and double the available space.

Good idea, so advancing on it: to be able to use N disks (more disks -> more individual heads -> more I/O per second), use the X coordinate and 'X mod N' to determine which disk a tile should be on. When N is a configuration parameter this scales easily with every added disk, improving both disk space AND I/O per second. It allows for lots of cheap disks on cheap simple controllers (no RAID or fancy system configuration involved!). Lambertus
We should not forget that this uses the same amount of space on both disks, so some space will be wasted. Unless we use 'mod 3' for the two hard disks and store remainders 1+2 on one disk and 3 on the other. --Damian 18:55, 11 May 2007 (BST)
True, it works most efficiently when the disks are the same size. Another disadvantage is that tiles need to be moved around when another disk is added (N+1), but that could be handled by a run-once script or 'on access' (when the tile is requested). Lambertus
Second point: how does this work with symbolic links? Doesn't the kernel look on the disk, find a symbolic link, and then jump to the target location? So the 'mod' function should be applied before accessing the disk. --Damian 18:55, 11 May 2007 (BST)
The path of the tile on the server can be determined in the code that receives and handles the requested URL. Lambertus
Advancing even more: When using 'X mod N1' and 'Y mod N2' the tiles can be split even more (e.g. over N1 servers each with N2 disks). Lambertus
Advancing a little more on the server idea: if every server keeps a tile cache in RAM it can store tiles there and deliver them faster. --Damian 18:55, 11 May 2007 (BST)
Linux already uses all available free RAM as filesystem cache, so that function automatically works. Lambertus
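The 'X mod N' scheme discussed above can be sketched as follows. This is a hypothetical illustration: the function names are invented, and the flat zoom/x/y.png layout is a simplification of the real tree, which has extra subdirectory levels.

```python
import os

def tile_path(layer, zoom, x, y, disks):
    """Pick a disk by 'x mod N': consecutive x columns alternate across
    all N spindles. N is simply len(disks), so adding a disk means
    appending to the list (plus re-homing existing tiles once)."""
    disk = disks[x % len(disks)]
    return os.path.join(disk, layer, str(zoom), str(x), "%d.png" % y)

def tile_location(x, y, n_servers, disks_per_server):
    """Two-level variant: 'x mod N1' picks the server,
    'y mod N2' picks the disk inside it."""
    return x % n_servers, y % disks_per_server
```

Because the mapping is a pure function of the coordinates, the same lookup can be done wherever the request is handled (PHP, mod-rewrite, or a front-end proxy) with no symlinks involved.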

Why not use existing OS facilities?

It sounds like this is a Linux server, in which case the OS already provides both Logical Volume Management and RAID.

  • You could add an LVM physical volume on the 280GB disk, create a volume group, and then one or more logical volumes (e.g. one for maplint, one for osmarender). Use a filesystem that supports runtime growth (e.g. ext3) for each logical volume. Copy the data to the new filesystem. The 150GB disk can then be added as a second physical volume once you've copied the data off it. If it's already using LVM (the default for many Linux distributions) you can shortcut a lot of this.

This makes effective use of all the disks, and you can upgrade by just adding more physical volumes, but the I/O performance is usually no better than for a single disk.
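The steps above sketched as commands. The device names are assumptions (hdd1 as the empty 280GB partition, hdc1 as the 150GB one) and the sizes are illustrative; nothing here should be run verbatim.

```shell
pvcreate /dev/hdd1                      # make the 280GB disk a physical volume
vgcreate tiles /dev/hdd1                # volume group on top of it
lvcreate -L 250G -n osmarender tiles    # carve out logical volume(s)
mkfs.ext3 /dev/tiles/osmarender
mount /dev/tiles/osmarender /mnt/osmarender
# ...copy the data over, then fold the 150GB disk in and grow:
pvcreate /dev/hdc1
vgextend tiles /dev/hdc1
lvextend -L +140G /dev/tiles/osmarender
resize2fs /dev/tiles/osmarender         # ext3 can be grown while mounted
```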

  • LVM 2 under Linux supports striping -- it allows you to stripe a logical volume's extents across two or more physical volumes
  • You could use RAID 0 to combine up to 150GB from each disk, creating a ~300GB RAID.

This wastes some of the larger disk (though you could partition the remainder separately as scratch space or something), and you will need temporary storage for the existing data while you format, so it could be awkward to actually do. However, read I/O performance may be roughly doubled.
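As a sketch (again with assumed device names; /dev/hdd2 stands for a ~150GB partition carved from the larger disk to match /dev/hdc1):

```shell
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/hdc1 /dev/hdd2
mkfs.ext3 /dev/md0
mount /dev/md0 /home/ojw/tiles
```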

I agree: before you reinvent the wheel, i.e. dividing content by hash sums across several disks etc., take advantage of what the OS offers. RAID 0 lets you combine the disks, or LVM lets you dynamically add space as you need it. Everything else will be a hack and lots of manual work. --spaetz 15:45, 11 May 2007 (BST)
I also proposed hash splitting of tiles across disks/partitions in Talk:Tiles@home/Dev/Coastline. Probably these issues should be dealt with together -- Stefanb 16:09, 11 May 2007 (BST)

Brute force and regeneration

Disk 2 has 280GB capacity, compared with 150GB for disk 1. So one method of increasing the available headroom is simply to copy everything to disk 2.

However, since most places need to be re-rendered anyway to get coastlines vaguely fixed, to remove the millions of 571-byte sea tiles, and to detect empty-land and empty-sea tiles, we could use this opportunity to regenerate the whole map.

Proposal: create empty tile directories on disk2 and send all uploads there. Modify the 404 handler so that if a tile can't be found on disk2, it looks in the existing directory on disk1.

Disk2 will eventually fill up with map tiles. When that occurs, old tiles on disk1 can be deleted to make room for the map layers stored on that disk.


  • maplint should be left on disk1 to share the capacity?
  • how will the world be requested? the database will be left intact so that meta-info for existing tiles can be found.
  • browsing files left on disk1 will be slower, as they'll be served through PHP (unless someone writes a mod-rewrite rule that tries both disks)
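Such a mod-rewrite rule could look roughly like this (an untested sketch using the paths given earlier on this page). A RewriteCond is evaluated after its rule's pattern has matched, so $1 is available in the test string:

```apache
RewriteEngine on
# Serve from the new disk2 tree when the tile exists there...
RewriteCond /home/ojw/tiles-ojw2/$1 -f
RewriteRule ^/Tiles/data/(.*)$ /home/ojw/tiles-ojw2/$1 [L]
# ...otherwise fall through to the old disk1 tree.
RewriteRule ^/Tiles/data/(.*)$ /home/ojw/tiles-ojw/$1 [L]
```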