Overpass API/Installation




This page tells you how to install the OSM3S server so that you can use it as a local OSM mirror. Additional functionality, such as management of areas and the line diagram utilities, is not covered yet.

Please note: this page is not the primary source; it is rather a place to collect various troubleshooting measures.

System Requirements


It is highly recommended that you have at least the following hardware resources available for an OSM planet server:

  • 1 GB of RAM and sufficient swap space for a small extract or a development system. By contrast, the main Overpass API instance has 32 GB of main memory. Actual memory requirements also highly depend on the expected maximum number of concurrent users.
  • For a full planet with meta and attic data (=all data since license change in September 2012), about 200 GB - 300 GB disk space are required when using a compressed database (since 0.7.54). Without compression at least double the amount is needed.
  • Use of fast SSDs instead of slower hard disks is highly recommended!


It is required that you have the following resources:

  • Access to Expat and a C++ compiler
  • An OSM file in XML format compressed in bzip2 format (Geofabrik is an excellent resource for this. Another good resource is the Planet.osm page.)
  • Alternatively, you can also use an extract or planet file in PBF Format along with osmconvert (requires --out-osm parameter for osmconvert, as Overpass API doesn't support PBF natively)
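The PBF conversion mentioned above can be sketched as a pipeline (the file names are placeholders, and osmconvert must be installed separately):

```shell
# Convert a PBF extract to bzip2-compressed OSM XML, which Overpass API can read.
# "extract.osm.pbf" and "extract.osm.bz2" are placeholder names.
osmconvert extract.osm.pbf --out-osm | bzip2 > extract.osm.bz2
```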

NOTE: You do not need a database engine (e.g. MySQL or PostgreSQL); the database back-end is included in the OSM3S package.

You will need to identify or create:

  • $EXEC_DIR: The root directory in which executable files should be installed (/bin/ suffix removed). (~100 MB). For example, the public server has this on /opt/osm-3s/v0.7.54/
  • $DB_DIR: a directory to store the database
  • $REPLICATE_DIR: a directory to store minutely (or otherwise) diffs (only necessary if you decide to configure minutely updates below)
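As a sketch, these locations could be defined once in your shell profile; the paths below are examples only, not required values (the public server uses /opt/osm-3s/v0.7.54 for $EXEC_DIR):

```shell
# Example locations only; adjust to your own system.
export EXEC_DIR=/tmp/osm-3s/v0.7.54    # executables, ~100 MB
export DB_DIR=/tmp/osm3s/db            # the database itself
export REPLICATE_DIR=/tmp/osm3s/diffs  # minutely (or other) diffs
mkdir -p "$DB_DIR" "$REPLICATE_DIR"
```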


Ubuntu or Debian 6.0 (squeeze) or Debian 7.0 (wheezy)

1. Install the following packages: g++, make, expat, libexpat1-dev and zlib1g-dev.

sudo apt-get update
sudo apt-get install g++ make expat libexpat1-dev zlib1g-dev

Option 1: Installation via tarball

2. Download the latest tarball, prepared with GNU autoconf. For example:


3. Unpack the tarball:

tar -zxvf osm-3s_v*.tar.gz

4. Compile the OSM3S package:

cd osm-3s_v*
./configure CXXFLAGS="-O2" --prefix=$EXEC_DIR
make install

Option 2: Installation via bleeding edge dev version (expert use)

2. Alternatively, if you want the bleeding-edge latest dev version, you can get it from GitHub:

sudo apt-get install git libtool autoconf automake
git clone
cd Overpass-API
git checkout minor_issues

Depending on your Ubuntu version you may need to explicitly tell apt-get to install version 1.11:

sudo apt-get install automake1.11

3. Update build system

When using the latest dev version from GitHub, the build system has to be updated first. The following steps were successfully tested on Ubuntu 14.04 and Debian 7.0:

cd ./src/
automake --add-missing

4. Compile the OSM3S package:

cd ../build/
../src/configure CXXFLAGS="-Wall -O2" --prefix=$EXEC_DIR
make install

NOTE: If you encounter a message like configure: error: cannot find install-sh in "../src" "../src/.." "../src/../..", even though you already have automake on your computer (sudo apt-get install automake), it may indicate that the symbolic link(s) in the "../src/" directory are broken. Before you can continue, you will need to delete and recreate the links to your system's proper files, for example:

ln -s /usr/share/automake-1.11/missing ./missing
ln -s /usr/share/automake-1.11/install-sh ./install-sh
ln -s /usr/share/automake-1.11/depcomp ./depcomp

or if you receive a 'Link already exists' error you can try using the absolute paths:

sudo rm -r /root/osm-3s_v0.7.50/src/missing
sudo rm -r /root/osm-3s_v0.7.50/src/install-sh
sudo rm -r /root/osm-3s_v0.7.50/src/depcomp

sudo ln -s /usr/share/automake-1.11/missing /root/osm-3s_v0.7.50/src/missing
sudo ln -s /usr/share/automake-1.11/install-sh /root/osm-3s_v0.7.50/src/install-sh
sudo ln -s /usr/share/automake-1.11/depcomp /root/osm-3s_v0.7.50/src/depcomp

NOTE: If you encounter an error of this format during compiling: make: *** [...] Error 1, something unexpected occurred, and this is an opportunity to help make the OSM3S package more robust. To help, capture the compile-time output and email it to the package's current maintainer, Roland Olbricht. For example, the following command captures the output and puts it in a file called error.log:

make install >&error.log

Option 3: AWS Marketplace AMI

A paid pre-built AMI based on these instructions exists on the AWS Marketplace.

  • The image includes a snapshot of the database on the day the image was built (See version number)
  • By default it exposes the API on HTTP only.
  • Minutely updates are enabled
  • NOTE: It does not include areas and is cloned with meta=no (i.e. it does not include meta=yes or meta=attic)
    • Generally speaking, burstable EC2 instances are not suitable for building areas

Populating the DB

The recommended way to populate the database is via cloning from the dev server:

./ --db-dir=database_dir --source= --meta=no

This is fastest and needs the least space. If you need metadata (i.e. object version numbers, editing users and timestamps), then put --meta=yes instead of --meta=no. If you even want museum data (all the old versions since the license change in 2012), then replace the parameter with --meta=attic.

You can also populate the Overpass database from a planet file. For this, you need to download a planet file:


Populate the database with:

nohup ../src/bin/ planet-latest.osm.bz2 $DB_DIR $EXEC_DIR &
tail -f nohup.out

NOTE: If you want to query your server with JOSM, you'll need metadata. Add the --meta parameter:

nohup ../src/bin/ planet-latest.osm.bz2 $DB_DIR $EXEC_DIR --meta &
tail -f nohup.out

It is not possible to get museum data this way, because the planet file does not contain that data.

The nohup together with & detaches the process from your console, so you can log off without accidentally stopping it. tail -f nohup.out allows you to read the output of the process (which is written into nohup.out).

NOTE: This step can take a very long time to complete. For smaller OSM extract files it takes less than an hour, but for a full planet file this step can take on the order of 24 hours or more, depending on available memory and processor resources. When the process has finished successfully, the file nohup.out will indicate this with "Update complete" at the very end.

(As a side note, this also works for applying OSC files onto an existing database. Thus you can make daily updates by applying these diffs with a cronjob. This method takes fewer disk loads than minute updates, and the data is still pretty timely.)
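The daily cron-based update mentioned above could be sketched as a crontab entry. The script name apply_daily_diff.sh is hypothetical: it stands for a wrapper you would write yourself around the update tooling described on this page, which fetches the daily OSC file and applies it to the database; it is not shipped with Overpass API.

```shell
# Hypothetical crontab entry: apply yesterday's daily diff at 03:00 every day.
# "apply_daily_diff.sh" is a placeholder for your own wrapper script.
0 3 * * * /home/overpass/bin/apply_daily_diff.sh >> /home/overpass/logs/daily_update.log 2>&1
```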

Populating the DB with attic data

Since Overpass API v0.7.50, it is possible to also retain previous object versions, the so-called attic versions, in the database. Previous object versions are accessible via [date:...], [diff:...] and [adiff:...], as well as some filters like (changed:...).

For the main Overpass API instance, the database was initially built using the first available ODbL compliant planet dump file (September 2012). If you don't require all the history back to 2012, it is also possible to start with any later planet dump and apply any subsequent update via the daily/hourly/minutely update process.

Any subsequent changes can be automatically stored in the database, if the following two prerequisites are met:

  • The dispatcher needs to be run with attic support enabled
  • The update script also needs to run with attic support enabled

Relevant settings for both the dispatcher and the update script are described further down on this page.

To populate the database with attic data (needed e.g. for augmented diffs), use --keep-attic instead of --meta.


  • At this time it is not possible to use a full history dump to initialize the database (see User Page).
  • Using extracts instead of a planet file along with attic mode is currently being discussed on the developers' list; it is likely not to work either.

Static Usage

OSM3S is now ready to answer queries. To run a query, run

$EXEC_DIR/bin/osm3s_query --db-dir=$DB_DIR

and enter your query on the standard input. If typing directly into the console, you need to press Ctrl+D in the end to signal the end of input. Answers will appear on standard output.

If you've imported the entire planet, try the example query:

<query type="node"><bbox-query n="51.0" s="50.9" w="6.9" e="7.0"/><has-kv k="amenity" v="pub"/></query><print/>

This one returns all pubs in Cologne (the city with the best beer in Germany :) ).
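The same query can be run non-interactively by feeding standard input from a here-document, which avoids the Ctrl+D step ($EXEC_DIR and $DB_DIR as defined earlier):

```shell
# Run a query without interactive input; the here-document replaces typing
# the query on the console and pressing Ctrl+D.
"$EXEC_DIR"/bin/osm3s_query --db-dir="$DB_DIR" <<'EOF'
<query type="node"><bbox-query n="51.0" s="50.9" w="6.9" e="7.0"/><has-kv k="amenity" v="pub"/></query><print/>
EOF
```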

Check the full introduction to OSM3S query language on the Web or at $EXEC_DIR/html/index.html (installed as part of OSM3S) for more information.

Lastly, if you're using the dispatcher daemon, osm3s_query can connect to it and find $DB_DIR by itself:


If you can make conversion requests to osm3s_query without specifying the db dir, then the dispatcher daemon is running correctly.

Starting the dispatcher daemon

If you wish to automatically apply diff updates or run the Web API, you need to start the dispatcher daemon (this is otherwise optional). Like all other processes, it should be started by a single, standard user. Do not run anything with root privileges; that would be an unnecessary security risk. The tools set the necessary file permissions to allow writing for the user that created the database and reading for everybody else.

nohup $EXEC_DIR/bin/dispatcher --osm-base --db-dir=$DB_DIR &

For meta data you need to add a parameter:

nohup $EXEC_DIR/bin/dispatcher --osm-base --meta --db-dir=$DB_DIR  &

When serving attic data you need to run the dispatcher with the following parameters:

nohup $EXEC_DIR/bin/dispatcher --osm-base --attic --db-dir=$DB_DIR &

Systemd, Upstart

Short answer: systemd is not designed to run a DBMS, in particular not Overpass API. I explain the details in a blog post.

Some reminders:

  • Never start any component of Overpass API as root. The whole system is designed to work without root, and you can run into really weird bugs if some parts of the system run as root.
  • Do not automatically remove any of the lock files, socket files or shared memory files. They work as canaries, i.e. hitting existing files is almost always an indicator for bigger trouble elsewhere. Please ask back in those cases.


Overpass includes a launch script in ${EXEC_DIR}/bin/ which may be used with the crontab @reboot option. It requires some editing before deployment and should be run as the overpass user, not root.

Applying minutely (or hourly, or daily) diffs

Reminder: this page is not the primary source. Unless you both know why you need a non-standard setup and are familiar with the implementation details of the Planet server replication, please use the clone-based approach.

Note: The dispatcher daemon must be running for diff application to work.

First, decide the maximum tolerable lag for your DB and choose the corresponding replication diffs (minutely, hourly or daily).

From these, you need to find the replication sequence number, which will become $FIRST_MINDIFF_ID in the instructions below. To find it:

  1. Browse through the replication directory hierarchy and find the diff that has a date before the starting point of the planet dump. The planet dump starts at 00:00 UTC; because the server shows local time, this is equivalent to 01:00 BST during summer and 00:00 BST during winter in the file listing.
  2. Verify you have the right file by checking the respective *.state.txt file. The timestamp should show a date (here always UTC) slightly before midnight. The sequenceNumber in this file (also present in the filename) is your replication sequence number, $FIRST_MINDIFF_ID.
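As a sketch, the sequence number can be extracted from a downloaded state.txt file like this (the sample file contents below are illustrative, not real replication data):

```shell
# Create a sample state.txt as served in the replication directory
# (values are illustrative only).
cat > state.txt <<'EOF'
#Fri Sep 28 00:00:02 UTC 2012
sequenceNumber=1234567
timestamp=2012-09-28T00\:00\:00Z
EOF

# Extract the number after "sequenceNumber="; this becomes $FIRST_MINDIFF_ID.
FIRST_MINDIFF_ID=$(grep '^sequenceNumber=' state.txt | cut -d= -f2)
echo "$FIRST_MINDIFF_ID"
```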

From $EXEC_DIR/bin, run:


This starts a daemon that downloads all diffs from $FIRST_MINDIFF_ID to the present into your replicate directory. If it is kept running, it will download new diffs automatically as they become available. If you get diffs in another way, you can omit this command.

Next, apply changes to your DB:

nohup ./ $REPLICATE_DIR/ $FIRST_MINDIFF_ID --meta=no &

This starts the daemon that keeps the database up to date. Latest versions require an additional parameter augmented_diffs:

nohup ./ $REPLICATE_DIR/ $FIRST_MINDIFF_ID --augmented_diffs=no &

To add metadata, you must add a parameter to the second command. Instead of the above, run:


To update your database containing attic data, you need to use the following command:

nohup ./ $REPLICATE_DIR/ $FIRST_MINDIFF_ID --meta=attic &

To see what's going on, watch these log files:

  • $DB_DIR/transactions.log
  • $DB_DIR/apply_osc_to_db.log
  • $REPLICATE_DIR/fetch_osc.log

Setting up the Web API

Note: The dispatcher daemon must be running for the Web API to work.

This section describes one way to setup a basic read-only HTTP based API with OSM3S.

1. Install Apache2 (with CGI support)

sudo apt-get install apache2
sudo a2enmod cgi

2. Configure Apache2

cd /etc/apache2/sites-available
nano default

Note: use the correct name of the default file for your apache installation.

Make your default file look something like this:

<VirtualHost *:80>
	ServerAdmin webmaster@localhost
	ExtFilterDefine gzip mode=output cmd=/bin/gzip
	DocumentRoot [YOUR_HTML_ROOT_DIR]

	# This directive indicates that whenever someone requests a URL under /api/,
	# Apache2 should refer to what is in the local directory [YOUR_EXEC_DIR]/cgi-bin/
	ScriptAlias /api/ [YOUR_EXEC_DIR]/cgi-bin/

	# This specifies some directives specific to the directory: [YOUR_EXEC_DIR]/cgi-bin/
	<Directory "[YOUR_EXEC_DIR]/cgi-bin/">
		AllowOverride None
		Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
		# For Apache 2.2:
		#  Order allow,deny
		# For Apache >= 2.4:
		Require all granted
		#SetOutputFilter gzip
		#Header set Content-Encoding gzip
	</Directory>

	ErrorLog /var/log/apache2/error.log

	# Possible values include: debug, info, notice, warn, error, crit, alert, emerg
	LogLevel warn

	CustomLog /var/log/apache2/access.log combined
</VirtualHost>

3. Restart Apache2:

sudo /etc/init.d/apache2 restart

NOTE: If, when you restart Apache, you receive an error message such as "ExtFilterDefine invalid command", you need to tell Apache to enable the required filter module:

a2enmod ext_filter

4. As the overpass user, start the dispatcher process and point it to your database directory:

nohup $EXEC_DIR/bin/dispatcher --osm-base --db-dir=$DB_DIR &

With meta data:

nohup $EXEC_DIR/bin/dispatcher --osm-base --db-dir=$DB_DIR --meta &

Note: to convert this process to a service that starts up when your system boots do this (... in progress)

5. Test your Web-API by sending it the following command:

wget --output-document=test.xml http://[your_domain_or_IP_address]/api/interpreter?data=%3Cprint%20mode=%22body%22/%3E

The xml output document should look something like this:

<?xml version="1.0" encoding="UTF-8"?>
    The data included in this document is from OpenStreetMap. It has been collected there 
    by a large group of contributors. For individual attribution of each item please refer to [node|way|relation]/#id/history 
  <meta osm_base=""/>


If the output from the Web API is something else (trash or large binary data), or if the Web API was not found, make sure you enabled CGI in Apache:

sudo a2enmod cgi

Then restart apache:

sudo service apache2 restart

Area creation

This section was taken over from another page and may need some revision. Please also check the discussion page and add those details which are worth mentioning here.

To use areas with Overpass API, you essentially need another permanent running process that generates the current areas from the existing data in batch runs.

First, you need to copy the rules directory into a subdirectory of the database directory:

cp -pR "../rules" $DB_DIR

Hint: If you use an early tarball (ca. 2015) the rules subfolder is missing. It may be found here if you need it:

The next step is to start a second dispatcher that coordinates read and write operations for the area-related files in the database:

nohup $EXEC_DIR/bin/dispatcher --areas --db-dir=$DB_DIR &

chmod 666 "../db/osm3s_v0.7.*_areas"

The dispatcher has been successfully started if you find a line "Dispatcher just started." in the file transactions.log in the database directory, with the correct date (in UTC).

The third step then is to start the rule batch processor as a daemon:

nohup $EXEC_DIR/bin/ $DB_DIR &

Now we don't want this process to impede the real business of the server. Therefore, I strongly suggest lowering its priority. To do this, run

ps -ef | grep rules

to find the PIDs belonging to the rules batch process and to ./osm3s_query --progress --rules. For each of the two PIDs, run the commands:

renice -n 19 -p PID
ionice -c 2 -n 7 -p PID

The second command is not available on FreeBSD. This is not a big problem, because the rescheduling just gives hints to the operating system.

When the batch process has completed its first cycle, all areas become accessible via the database at once. This may take up to 24 hours.


Troubleshooting

runtime error: open64: 2 /osm3s_v0.6.91_osm_base Dispatcher_Client

Note: if you get an output doc that looks more like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "">
<html xmlns="" xml:lang="en" lang="en">
  <meta http-equiv="content-type" content="text/html; charset=utf-8" lang="en"/>
  <title>OSM3S Response</title>

   The data included in this document is from It has there been collected
   by a large group of contributors. For individual attribution of each item please refer to[node|way|relation]/#id/history 

<p><strong style="color:#FF0000">Error</strong>: runtime error: open64: 2 /osm3s_v0.6.91_osm_base Dispatcher_Client::1 </p>


Then it may indicate that the dispatcher process is not running or not configured correctly.

runtime error: open64: 2 No such file or directory /osm3s_v0.7.51_osm_base Dispatcher_Client::1

Make sure that the first dispatcher is running.

File_Error Address already in use 98 /srv/osm3s/db_dir//osm3s_v0.7.3_osm_base Dispatcher_Server::4

Check for stale lock files in the following two locations before restarting a crashed/killed dispatcher

  • /dev/shm
  • your db directory (a file named osm3s_v*_osm_base).

To clean up these lock files automatically you can try running:

$EXEC_DIR/bin/dispatcher --terminate

File_Error 17 /osm3s_v0.6.94_osm_base Dispatcher_Server::1

If you killed (or crashed) the dispatcher daemon and wish to restart it, you might encounter this error (unless you reboot): there is a lock file, /dev/shm/osm3s_v0.6.94_osm_base, that prevents other dispatchers from running while one is already running. Remove that file (after checking that no dispatcher is running) and restart the dispatcher.

To remove this lock file (and others), try running:

$EXEC_DIR/bin/dispatcher --terminate
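If --terminate does not help, a manual cleanup could look like the sketch below. The lock file name /dev/shm/osm3s_v0.6.94_osm_base is taken from the error message above; always check first that no dispatcher process is still running.

```shell
# Manual lock-file cleanup sketch; file name taken from the error message.
LOCK=/dev/shm/osm3s_v0.6.94_osm_base
if pgrep -f 'bin/dispatcher' > /dev/null; then
    echo "A dispatcher is still running; do not delete the lock file." >&2
else
    rm -f "$LOCK"   # safe only when no dispatcher is running
fi
```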

No such file or directory /srv/osm-3s_v0.7.52/db/areas.bin File_Blocks::File_Blocks::1


This error might happen if file permissions for area files are wrong. Try

chown -R www-data:www-data $DB_DIR/area*

This change would imply write access for www-data to the database files, which is ill-advised. Area creation will usually run as non www-data user. To handle queries, read-only access to area files is definitively sufficient. Mmd (talk) 20:49, 12 September 2015 (UTC)
Good point! This was really just a quick trial-and-error solution, which might be wrong. Any detailed step-by-step instruction would be really helpful: I got the described error message while following the above official install instructions, so there's definitely some missing part in the documentation! free_as_a_bird (talk) 23:18, 12 September 2015 (UTC)
Yes, that's not really recommended. Usually you would run both dispatcher and the script as a dedicated (non www-data) user. OTOH www-data still needs to have read access to the database files, as the 'interpreter' is run as www-data user via CGI. I think it is best to discuss this all on the Overpass Developer list (see Info Ambox on this page) and also get some feedback from Roland. Mmd (talk) 11:41, 13 September 2015 (UTC)

Database population problem

(Found in version 0.7.5) If you receive an out of memory error while populating the database:

Out of memory in UB 25187: OOM killed process 21091 (update_database)

Try adding --flush-size=1 as a parameter when calling update_database; in most cases this means adding the parameter to the last line of the script.

Area batch run out of memory error

When generating an area run, you may receive the following:

Query run out of memory in "recurse" at line 255 using about 1157 MB

(Assuming you have enough free physical memory; 4 GB worked for me.) Try removing all the "area" files from your database directory and increasing the element-limit (in your $DB_DIR/rules/rules.osm3s file) to "2073741824".
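As a sketch, the element-limit can be raised with sed. The rules.osm3s stand-in below is a minimal illustration, not the real file layout; run the sed command against your actual $DB_DIR/rules/rules.osm3s.

```shell
# Minimal stand-in for $DB_DIR/rules/rules.osm3s (the real file differs).
cat > rules.osm3s <<'EOF'
<osm-script name="areas" element-limit="1073741824" timeout="86400">
</osm-script>
EOF

# Raise the element-limit to 2073741824 in place.
sed -i 's/element-limit="[0-9]*"/element-limit="2073741824"/' rules.osm3s
grep -o 'element-limit="[0-9]*"' rules.osm3s
```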

Apache config fails

If you encounter some message like this one when (re)starting the apache server:

# apache2ctl graceful 
Syntax error on line 12 of /etc/apache2/httpd.conf:
Invalid command 'Header', perhaps misspelled or defined by a module not included in the server configuration
Action 'graceful' failed.

then Apache doesn't use mod_headers. You can activate mod_headers by running:

# a2enmod headers
Enabling module headers.
To activate the new configuration, you need to run:
service apache2 restart
# apache2ctl graceful 

After this, apache should start up correctly.

Apache: HTTP 403 Forbidden errors

If you run into 403 Forbidden errors on Apache, double-check your configuration in /etc/apache2/apache2.conf to see whether your directory is explicitly allowed.

Contributors corner

WebAPI using NGINX (Ubuntu)

You may want to set up the Web API using NGINX instead of Apache. You can follow the instructions below to enable NGINX to serve the CGI interpreter.

Installing dependencies

First of all, get NGINX and install it:

$ sudo apt-get install nginx

NGINX does not support direct communication via the CGI interface. We must wrap this communication in the FastCGI protocol. To do so, we need to install fcgiwrap:

$ sudo apt-get install fcgiwrap

After this, both the NGINX and fcgiwrap services should be started and enabled on startup, but just to be sure, we can enable them manually:

$ sudo systemctl enable nginx
$ sudo systemctl enable fcgiwrap

$ sudo systemctl restart fcgiwrap
$ sudo systemctl restart nginx

fcgiwrap will create a socket file; as of the most recent version today (28 Oct 2019), it is /var/run/fcgiwrap.socket. One should be able to find it in this directory:

$ ls -la /var/run/

With the socket in place, we can go further (if the location of the socket is different, modify the following config accordingly).

Configuring NGINX

Create and edit a separate server definition in the nginx sites-available directory:

$ sudo nano /etc/nginx/sites-available/overpass.conf

Paste and adjust following configuration:

server {
    listen 80;
    location /api/ {
        alias [path-to-exec-dir]/cgi-bin/;
        #gzip on;
        #gzip_types application/json application/osm3s+xml;
        # set the minimum length that will be compressed
        #gzip_min_length 1000; # in bytes
        # Fastcgi socket
        fastcgi_pass  unix:/var/run/fcgiwrap.socket;
        # Fastcgi parameters, include the standard ones
        include /etc/nginx/fastcgi_params;
        # Adjust non standard fcgi parameters
        fastcgi_param SCRIPT_FILENAME  $request_filename;
    }
}
To enable gzip compression, uncomment the gzip on, gzip_types and gzip_min_length lines. This configuration assumes HTTP traffic (port 80) and no domain name, so API requests should look like this:


Enabling configuration

Last, we have to enable the created configuration.

Delete default enabled nginx configuration:

$ sudo rm /etc/nginx/sites-enabled/default

Then, symlink the newly created configuration into the sites-enabled directory:

$ cd /etc/nginx/sites-enabled/
$ sudo ln -s /etc/nginx/sites-available/overpass.conf

Finally, we will reload the nginx configuration. First check that the config is syntactically correct:

$ sudo nginx -t

If output reports no errors, reload nginx service:

$ sudo systemctl reload nginx

After successful reloading, the Overpass API will be available under the following address (assuming you started the dispatcher and completed all required previous steps):



Via a Docker image

There exists a non-official Docker image. This image supports database initialisation, metadata, minute diffs, area creation and API access via HTTP.

The installation is very simple; however, it requires Docker to be installed on the host computer. Then, you have to:

Download and unzip the archive.

Edit the file to your liking.

Then, you simply have to type make to compile and launch the image. The complete image creation from scratch takes about 40 hours. Afterwards, the API is accessible at http://localhost:5001/api (you may want to change this port in the file).

CentOS or RHEL 7

1. Install dependencies

$ sudo yum install tar make gcc-c++ expat expat-devel zlib-devel bzip2 rpmbuild gcc ruby-devel rpm-install
$ sudo gem install fpm

2. Get source

$ wget
$ tar -xvzf osm-3s_v0.7.52.tar.gz

3. Compile the software

cd osm*
./configure CXXFLAGS="-O3" --prefix=/usr/local/osm3s
make install

4. Systemd unit file

$ vim overpass-api.service
[Unit]
Description=Overpass API dispatcher daemon

[Service]
ExecStart=/usr/local/osm3s/bin/dispatcher --osm-base --db-dir=/var/lib/osm3s/data/db
ExecStop=/usr/local/osm3s/bin/dispatcher --terminate

5. Create rpm package with some post install/remove scripts

post-install script

$ vim post-install
mv /usr/local/osm3s/bin/overpass-api.service /etc/systemd/system
mkdir -p /var/lib/osm3s/data/db

post-remove script

$ vim post-remove
rm /etc/systemd/system/overpass-api.service

create rpm package using fpm

/usr/local/bin/fpm -s dir -t rpm -n overpass-api -v 0.7 --iteration 52 --exclude bin/.dirstamp --exclude bin/.libs --after-install post-install --after-remove post-remove --prefix /usr/local/osm3s bin/

The package is available online, together with a proof of concept.

This package seems to be incomplete: it doesn't include the cgi-bin directory, nor does it match the overall structure of the official installation. Moved to the experimental section for the time being. Better put it on your own user page until it is ready. Mmd (talk) 14:24, 30 December 2015 (UTC)