This post shows how to deploy OpenStreetMap-related services on a separate Ubuntu 26.04 server.
The OSM service domain used in this setup is:
osm.maksonlee.com
This server will provide:
Nominatim
OSRM car
OSRM foot
NGINX reverse proxy
HTTPS certificate using Cloudflare DNS-01 challenge
Incremental Nominatim updates
Scheduled OSRM graph rebuild
The final service layout looks like this:
https://osm.maksonlee.com
├── /nominatim/
├── /osrm-car/
└── /osrm-foot/
These OSM services can be used by another backend application. Client applications should normally call the application backend, not the OSM services directly.
Previous Posts
This post assumes Docker and PostgreSQL 18 with PostGIS are already installed.
This setup builds on earlier posts covering those installations.
In this post, I will not repeat the full Docker and PostgreSQL/PostGIS installation steps.
What This Server Does
The OSM server provides helper services:
Nominatim:
Place search and geocoding
OSRM foot:
Walking distance and walking time
OSRM car:
Road travel time fallback
A typical architecture can look like this:
Client App
↓
Application Backend
↓
https://osm.maksonlee.com
├── /nominatim/
├── /osrm-car/
└── /osrm-foot/
The application backend should own the application-specific logic. The OSM server only provides geocoding and routing helper APIs.
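Under this layout, an application backend only needs the base URL and the three path prefixes. A minimal sketch of backend-side URL helpers (the function names are hypothetical; only the base URL and path prefixes come from this setup):

```shell
#!/usr/bin/env bash
# Hypothetical URL helpers a backend script might use.
# Only the base URL and path prefixes come from this setup.
OSM_BASE="https://osm.maksonlee.com"

nominatim_url() { printf '%s/nominatim/%s\n' "$OSM_BASE" "$1"; }
osrm_car_url()  { printf '%s/osrm-car/%s\n'  "$OSM_BASE" "$1"; }
osrm_foot_url() { printf '%s/osrm-foot/%s\n' "$OSM_BASE" "$1"; }

nominatim_url "status"
osrm_foot_url "route/v1/foot/121.5170,25.0478;121.5200,25.0470"
```

Keeping the base URL in one place makes it easy to point the backend at a staging OSM server later.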
Lab Context
In this setup:
OS: Ubuntu 26.04
OSM service domain: osm.maksonlee.com
DNS provider: Cloudflare
Web server: NGINX
Certificate tool: Certbot
Certificate validation: DNS-01 challenge
OSM data: Taiwan only
PostgreSQL: PostgreSQL 18
PostGIS: Already installed
Docker: Already installed
The server used in this setup has:
CPU: 6 vCPU
RAM: 10 GB
Disk: 96 GB root disk
Why Nominatim and OSRM Need Separate Data
Nominatim and OSRM can use the same source file:
taiwan-latest.osm.pbf
But they do not share the same database.
Nominatim imports OSM data into PostgreSQL/PostGIS and builds a geocoding database. It is used for place search, such as:
台北車站 → latitude / longitude
OSRM builds routing graph files. It is used for routing, such as:
A coordinate → B coordinate
So the relationship is:
taiwan-latest.osm.pbf
├── Nominatim database
├── OSRM car graph
└── OSRM foot graph
They come from the same OSM extract, but they produce different data structures.
Why Separate OSRM Car and Foot
OSRM profiles are decided during the build stage.
That means car and foot are not just different API paths. They are different routing graphs built with different rules.
For example:
OSRM car:
Built with car.lua
Used for vehicle routing
OSRM foot:
Built with foot.lua
Used for walking routing
For walking time, the backend should use OSRM foot:
User location → destination
User location → nearby stop or point of interest
For road travel fallback time, the backend can use OSRM car:
Point A → Point B by road
If only car is used, walking routes may be wrong because cars cannot use sidewalks, pedestrian paths, underpasses, or footbridges.
If only foot is used, road travel fallback time would be wrong because vehicles travel on roads, not walking paths.
So this setup uses both:
OSRM foot:
Walking time
OSRM car:
Road travel time fallback
- Check Prerequisites
Docker should already be installed.
Check Docker:
docker --version
sudo docker run hello-world
PostgreSQL 18 with PostGIS should also already be installed.
Check PostgreSQL clusters:
pg_lsclusters
Check PostGIS packages:
dpkg -l | grep postgis
PostgreSQL/PostGIS is used by Nominatim on this OSM server.
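These checks can also be wrapped into a small report that does not abort on the first missing tool (a sketch; adjust the command list to your environment):

```shell
#!/usr/bin/env bash
# Report whether each prerequisite command is on PATH,
# without exiting on the first one that is missing.
for cmd in docker pg_lsclusters psql nginx; do
  if command -v "$cmd" > /dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```

This is handy when rebuilding the server from scratch: one run shows every missing prerequisite at once.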
- Install Additional Packages
Install the additional packages needed by Nominatim, Certbot, NGINX, and the OSM service setup:
sudo apt update
sudo apt install -y \
pkg-config \
libicu-dev \
osm2pgsql \
virtualenv \
nginx \
certbot \
python3-certbot-dns-cloudflare \
jq
- Download Taiwan OSM Data
Create a shared OSM data directory:
sudo mkdir -p /srv/osm-data
sudo chown -R $USER:$USER /srv/osm-data
Download the Taiwan extract:
cd /srv/osm-data
wget -O taiwan-latest.osm.pbf \
https://download.geofabrik.de/asia/taiwan-latest.osm.pbf
Check the file:
ls -lh /srv/osm-data/taiwan-latest.osm.pbf
In this setup, the downloaded Taiwan PBF file was about 310 MB.
- Create the Nominatim User
Create a dedicated system user:
sudo useradd -d /srv/nominatim -s /bin/bash -m nominatim
sudo chmod a+x /srv/nominatim
Create PostgreSQL roles:
sudo -u postgres createuser -s nominatim
sudo -u postgres createuser www-data || true
If www-data already exists, the error can be ignored.
- Tune PostgreSQL for Nominatim
Find the PostgreSQL config file:
PGCONF=$(sudo -u postgres psql -tAc "SHOW config_file;" | xargs)
echo "$PGCONF"
Back it up:
sudo cp "$PGCONF" "$PGCONF.bak.$(date +%Y%m%d-%H%M%S)"
Edit it:
sudo vi "$PGCONF"
For a 10 GB RAM server, I used conservative settings:
shared_buffers = 2GB
maintenance_work_mem = 2GB
work_mem = 32MB
effective_cache_size = 6GB
synchronous_commit = off
max_wal_size = 4GB
checkpoint_timeout = 30min
checkpoint_completion_target = 0.9
random_page_cost = 1.0
autovacuum_max_workers = 1
Restart PostgreSQL:
sudo systemctl restart postgresql
sudo systemctl status postgresql --no-pager
- Install Nominatim
Switch to the nominatim user:
sudo -u nominatim bash
Set environment variables:
export USERNAME=nominatim
export USERHOME=/srv/nominatim
Create a Python virtual environment:
cd /srv/nominatim
virtualenv nominatim-venv
Install Python packages:
/srv/nominatim/nominatim-venv/bin/pip install --upgrade pip wheel setuptools
/srv/nominatim/nominatim-venv/bin/pip install psycopg[binary]
/srv/nominatim/nominatim-venv/bin/pip install nominatim-db
/srv/nominatim/nominatim-venv/bin/pip install \
nominatim-api falcon uvicorn gunicorn
Activate the virtual environment:
. /srv/nominatim/nominatim-venv/bin/activate
Check the version:
nominatim --version
- Import Taiwan Data into Nominatim
Create the project directory:
mkdir -p /srv/nominatim/nominatim-project
cd /srv/nominatim/nominatim-project
export PROJECT_DIR=/srv/nominatim/nominatim-project
Because this setup uses true incremental Nominatim updates, do not use --no-updates.
Import the Taiwan OSM extract:
nominatim import \
--osm-file /srv/osm-data/taiwan-latest.osm.pbf \
2>&1 | tee setup.logIf the import was interrupted before, the PostgreSQL database may already exist. In that case, the next import may fail with:
database "nominatim" already exists
For a fresh test server, remove the old incomplete database:
exit
sudo -u postgres dropdb nominatim
If there are active connections, terminate them first:
sudo -u postgres psql -d postgres -c "
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'nominatim';
"
sudo -u postgres dropdb nominatim
Then switch back to the nominatim user and rerun the import:
sudo -u nominatim bash
cd /srv/nominatim
. /srv/nominatim/nominatim-venv/bin/activate
cd /srv/nominatim/nominatim-project
export PROJECT_DIR=/srv/nominatim/nominatim-project
nominatim import \
--osm-file /srv/osm-data/taiwan-latest.osm.pbf \
2>&1 | tee setup.log
In an earlier Taiwan-only test, the import completed in about 5030 seconds, roughly 84 minutes. The final log line was: Import completed successfully in 5030.09 seconds.
After the import finishes, check the database:
cd /srv/nominatim/nominatim-project
nominatim admin --check-database
Test a search:
nominatim search --query "台北車站"
Exit the nominatim user shell:
exit
- Configure Incremental Nominatim Updates
For true incremental updates, configure the Taiwan update feed in the Nominatim project directory.
Switch to the nominatim user:
sudo -u nominatim bash
cd /srv/nominatim/nominatim-project
Create .env:
cat > .env << 'EOF'
NOMINATIM_REPLICATION_URL="https://download.geofabrik.de/asia/taiwan-updates"
NOMINATIM_REPLICATION_UPDATE_INTERVAL=86400
NOMINATIM_REPLICATION_RECHECK_INTERVAL=900
EOF
Install osmium inside the Nominatim virtual environment.
The safest way is to call Python from the virtual environment directly:
/srv/nominatim/nominatim-venv/bin/python -m pip install -U osmium
Check it:
/srv/nominatim/nominatim-venv/bin/python -m pip show osmium
Then initialize replication:
cd /srv/nominatim/nominatim-project
. /srv/nominatim/nominatim-venv/bin/activate
nominatim replication --init
Run one update manually:
nominatim replication --once
In my test, one incremental update completed much faster than the initial import:
Update completed.
Import: 0:01:14
Indexing: 0:00:06
Total: 0:01:21
Remaining backlog: 19:51:26
This means the update run completed in about 1 minute and 21 seconds, but the database still had about 19 hours and 51 minutes of replication backlog to catch up. After the initial backlog is caught up, the systemd timer runs the update once per day.
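The durations in this log use H:MM:SS format. If you want to track update times or backlog in a monitoring script, converting them to seconds is a one-liner (a sketch using awk):

```shell
#!/usr/bin/env bash
# Convert an H:MM:SS duration from the replication log to seconds.
to_seconds() {
  echo "$1" | awk -F: '{ print $1 * 3600 + $2 * 60 + $3 }'
}

to_seconds "0:01:21"    # → 81
to_seconds "19:51:26"   # → 71486
```

A monitoring check could then alert when the remaining backlog exceeds some threshold, say 48 hours.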
Exit the nominatim user shell:
exit
- Create a systemd Timer for Nominatim Updates
Create a systemd service that runs one replication update:
sudo tee /etc/systemd/system/nominatim-updates.service > /dev/null << 'EOF'
[Unit]
Description=Single Nominatim replication update
After=network-online.target postgresql.service
Wants=network-online.target
[Service]
Type=simple
User=nominatim
Group=nominatim
WorkingDirectory=/srv/nominatim/nominatim-project
ExecStart=/srv/nominatim/nominatim-venv/bin/nominatim replication --once
StandardOutput=journal
StandardError=inherit
EOF
Create the timer:
sudo tee /etc/systemd/system/nominatim-updates.timer > /dev/null << 'EOF'
[Unit]
Description=Run Nominatim replication updates
[Timer]
OnCalendar=*-*-* 04:00
Persistent=true
Unit=nominatim-updates.service
[Install]
WantedBy=timers.target
EOF
Enable the timer:
sudo systemctl daemon-reload
sudo systemctl enable --now nominatim-updates.timer
Check the timer:
systemctl list-timers | grep nominatim
systemctl status nominatim-updates.timer --no-pager
Check the update service logs:
journalctl -u nominatim-updates.service -n 100 --no-pager
If the update service is currently running, the timer's Trigger field may temporarily show n/a. After the service finishes, the next trigger time should appear normally.
To follow the update log live:
journalctl -u nominatim-updates.service -f
- Create the Nominatim systemd Service
Create a socket unit:
sudo tee /etc/systemd/system/nominatim.socket > /dev/null << 'EOF'
[Unit]
Description=Gunicorn socket for Nominatim
[Socket]
ListenStream=/run/nominatim.sock
SocketUser=www-data
[Install]
WantedBy=multi-user.target
EOF
Create the service unit:
sudo tee /etc/systemd/system/nominatim.service > /dev/null << 'EOF'
[Unit]
Description=Nominatim running as a gunicorn application
After=network.target postgresql.service
Requires=nominatim.socket
[Service]
Type=simple
User=www-data
Group=www-data
WorkingDirectory=/srv/nominatim/nominatim-project
ExecStart=/srv/nominatim/nominatim-venv/bin/gunicorn -b unix:/run/nominatim.sock -w 4 -k uvicorn.workers.UvicornWorker "nominatim_api.server.falcon.server:run_wsgi()"
ExecReload=/bin/kill -s HUP $MAINPID
PrivateTmp=true
TimeoutStopSec=5
KillMode=mixed
[Install]
WantedBy=multi-user.target
EOF
Enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable --now nominatim.socket
sudo systemctl enable --now nominatim.service
Check the service:
systemctl status nominatim.socket --no-pager
systemctl status nominatim.service --no-pager
- Build OSRM Car Graph
Create a directory for the car profile:
sudo mkdir -p /srv/osrm/car
sudo chown -R $USER:$USER /srv/osrm
cp /srv/osm-data/taiwan-latest.osm.pbf /srv/osrm/car/
Download the OSRM Docker image:
sudo docker pull osrm/osrm-backend
Build the car graph:
cd /srv/osrm/car
sudo docker run --rm -t \
-v /srv/osrm/car:/data \
osrm/osrm-backend \
osrm-extract -p /opt/car.lua /data/taiwan-latest.osm.pbf
sudo docker run --rm -t \
-v /srv/osrm/car:/data \
osrm/osrm-backend \
osrm-partition /data/taiwan-latest.osrm
sudo docker run --rm -t \
-v /srv/osrm/car:/data \
osrm/osrm-backend \
osrm-customize /data/taiwan-latest.osrm
In my test, OSRM car graph generation was much faster than the Nominatim import. The log showed osrm-extract, osrm-partition, and osrm-customize each completed successfully, processing the Taiwan PBF and producing the MLD routing graph.
This is expected. Nominatim builds a geocoding/search database, text indexes, ranking data, and address-related structures in PostgreSQL. OSRM only extracts the routable road graph for the selected profile. It does not build a place search database.
The OSRM car build log also showed peak RAM usage around this range:
osrm-extract peak RAM: about 1.65 GB
osrm-partition peak RAM: about 0.81 GB
osrm-customize peak RAM: about 1.18 GB
So for Taiwan-only data, the OSRM build was much lighter than the Nominatim import on this server.
- Build OSRM Foot Graph
Create a directory for the foot profile:
sudo mkdir -p /srv/osrm/foot
sudo chown -R $USER:$USER /srv/osrm/foot
cp /srv/osm-data/taiwan-latest.osm.pbf /srv/osrm/foot/
Build the foot graph:
cd /srv/osrm/foot
sudo docker run --rm -t \
-v /srv/osrm/foot:/data \
osrm/osrm-backend \
osrm-extract -p /opt/foot.lua /data/taiwan-latest.osm.pbf
sudo docker run --rm -t \
-v /srv/osrm/foot:/data \
osrm/osrm-backend \
osrm-partition /data/taiwan-latest.osrm
sudo docker run --rm -t \
-v /srv/osrm/foot:/data \
osrm/osrm-backend \
osrm-customize /data/taiwan-latest.osrm
- Create OSRM systemd Services
Create the car service:
sudo tee /etc/systemd/system/osrm-car.service > /dev/null << 'EOF'
[Unit]
Description=OSRM Car Routing Service
After=docker.service
Requires=docker.service
[Service]
Restart=always
RestartSec=5
ExecStartPre=-/usr/bin/docker rm -f osrm-car
ExecStart=/usr/bin/docker run --name osrm-car --rm \
-p 127.0.0.1:5000:5000 \
-v /srv/osrm/car:/data \
osrm/osrm-backend \
osrm-routed --algorithm mld /data/taiwan-latest.osrm
ExecStop=/usr/bin/docker stop osrm-car
[Install]
WantedBy=multi-user.target
EOF
Create the foot service:
sudo tee /etc/systemd/system/osrm-foot.service > /dev/null << 'EOF'
[Unit]
Description=OSRM Foot Routing Service
After=docker.service
Requires=docker.service
[Service]
Restart=always
RestartSec=5
ExecStartPre=-/usr/bin/docker rm -f osrm-foot
ExecStart=/usr/bin/docker run --name osrm-foot --rm \
-p 127.0.0.1:5001:5000 \
-v /srv/osrm/foot:/data \
osrm/osrm-backend \
osrm-routed --algorithm mld /data/taiwan-latest.osrm
ExecStop=/usr/bin/docker stop osrm-foot
[Install]
WantedBy=multi-user.target
EOF
Enable both services:
sudo systemctl daemon-reload
sudo systemctl enable --now osrm-car
sudo systemctl enable --now osrm-foot
Check them:
systemctl status osrm-car --no-pager
systemctl status osrm-foot --no-pager
sudo docker ps
- Create the OSRM Graph Rebuild Script
Nominatim can use incremental updates, but OSRM does not automatically refresh its routing graph from the OSM update feed in this setup.
The OSRM graph is built from the Taiwan PBF file. Normal routing requests call osrm-routed against the existing graph. To refresh the road data, I need to download a newer Taiwan PBF file and rebuild both the car and foot graphs.
This script builds the new OSRM graphs in a temporary directory first. It only replaces the active graph directories after both car and foot graphs are built successfully. If the new OSRM services fail to start after the swap, the script restores the previous graph directories and starts the old services again.
Create the rebuild script:
sudo tee /usr/local/bin/rebuild-osrm-taiwan.sh > /dev/null << 'EOF'
#!/usr/bin/env bash
set -euo pipefail
DATE="$(date +%Y%m%d-%H%M%S)"
BASE="/srv/osrm-build-$DATE"
PBF="/srv/osm-data/taiwan-latest.osm.pbf"
IMAGE="osrm/osrm-backend"
rollback() {
echo "[ROLLBACK] Restoring previous OSRM graph directories"
systemctl stop osrm-car || true
systemctl stop osrm-foot || true
rm -rf /srv/osrm/car /srv/osrm/foot
if [ -d /srv/osrm/car.old ]; then
mv /srv/osrm/car.old /srv/osrm/car
fi
if [ -d /srv/osrm/foot.old ]; then
mv /srv/osrm/foot.old /srv/osrm/foot
fi
systemctl start osrm-car || true
systemctl start osrm-foot || true
}
mkdir -p "$BASE/car" "$BASE/foot"
echo "[1/9] Download latest Taiwan PBF"
wget -O "$PBF.tmp" https://download.geofabrik.de/asia/taiwan-latest.osm.pbf
mv "$PBF.tmp" "$PBF"
echo "[2/9] Prepare build directories"
cp "$PBF" "$BASE/car/"
cp "$PBF" "$BASE/foot/"
echo "[3/9] Build car graph"
docker run --rm -t -v "$BASE/car:/data" "$IMAGE" osrm-extract -p /opt/car.lua /data/taiwan-latest.osm.pbf
docker run --rm -t -v "$BASE/car:/data" "$IMAGE" osrm-partition /data/taiwan-latest.osrm
docker run --rm -t -v "$BASE/car:/data" "$IMAGE" osrm-customize /data/taiwan-latest.osrm
echo "[4/9] Build foot graph"
docker run --rm -t -v "$BASE/foot:/data" "$IMAGE" osrm-extract -p /opt/foot.lua /data/taiwan-latest.osm.pbf
docker run --rm -t -v "$BASE/foot:/data" "$IMAGE" osrm-partition /data/taiwan-latest.osrm
docker run --rm -t -v "$BASE/foot:/data" "$IMAGE" osrm-customize /data/taiwan-latest.osrm
echo "[5/9] Stop OSRM services"
systemctl stop osrm-car
systemctl stop osrm-foot
echo "[6/9] Backup current graph directories"
rm -rf /srv/osrm/car.old /srv/osrm/foot.old
if [ -d /srv/osrm/car ]; then
mv /srv/osrm/car /srv/osrm/car.old
fi
if [ -d /srv/osrm/foot ]; then
mv /srv/osrm/foot /srv/osrm/foot.old
fi
echo "[7/9] Install new graph directories"
mkdir -p /srv/osrm
mv "$BASE/car" /srv/osrm/car
mv "$BASE/foot" /srv/osrm/foot
echo "[8/9] Start OSRM services"
if ! systemctl start osrm-car; then
rollback
exit 1
fi
if ! systemctl start osrm-foot; then
rollback
exit 1
fi
echo "[9/9] Verify OSRM services"
if ! systemctl is-active --quiet osrm-car; then
rollback
exit 1
fi
if ! systemctl is-active --quiet osrm-foot; then
rollback
exit 1
fi
rm -rf "$BASE"
rm -rf /srv/osrm/car.old /srv/osrm/foot.old
echo "Done."
EOF
sudo chmod +x /usr/local/bin/rebuild-osrm-taiwan.sh
Run it once manually:
sudo /usr/local/bin/rebuild-osrm-taiwan.sh
Check the services:
systemctl status osrm-car --no-pager
systemctl status osrm-foot --no-pager
- Create a systemd Timer for OSRM Graph Rebuild
To keep OSRM road data reasonably fresh, create a systemd service and timer for the rebuild script.
Create the service:
sudo tee /etc/systemd/system/osrm-rebuild.service > /dev/null << 'EOF'
[Unit]
Description=Rebuild OSRM Taiwan car and foot graphs
After=network-online.target docker.service
Wants=network-online.target
Requires=docker.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/rebuild-osrm-taiwan.sh
StandardOutput=journal
StandardError=inherit
EOF
Create the timer:
sudo tee /etc/systemd/system/osrm-rebuild.timer > /dev/null << 'EOF'
[Unit]
Description=Run OSRM Taiwan graph rebuild
[Timer]
OnCalendar=Sun 03:30
Persistent=true
Unit=osrm-rebuild.service
[Install]
WantedBy=timers.target
EOF
Enable the timer:
sudo systemctl daemon-reload
sudo systemctl enable --now osrm-rebuild.timer
Check the timer:
systemctl list-timers | grep osrm
systemctl status osrm-rebuild.timer --no-pager
Check rebuild logs:
journalctl -u osrm-rebuild.service -n 100 --no-pager
To follow a rebuild live:
journalctl -u osrm-rebuild.service -f
- Create a Cloudflare API Token
In Cloudflare, create an API token for DNS validation.
The token should have access to the maksonlee.com zone.
Recommended permissions:
Zone → DNS → Edit
Zone → Zone → Read
Do not use the Global API Key unless you have to. A restricted API token is safer.
- Store the Cloudflare API Token
Create the credentials file:
mkdir -p ~/.secrets/certbot
nano ~/.secrets/certbot/cloudflare.ini
Add the token:
dns_cloudflare_api_token = YOUR_CLOUDFLARE_API_TOKEN
Set strict permissions:
chmod 600 ~/.secrets/certbot/cloudflare.ini
This file contains a sensitive API token, so it should not be readable by other users.
- Request the Certificate
Request the certificate using the Cloudflare DNS plugin:
sudo certbot certonly \
--dns-cloudflare \
--dns-cloudflare-credentials /home/administrator/.secrets/certbot/cloudflare.ini \
--dns-cloudflare-propagation-seconds 60 \
-d osm.maksonlee.com
I use the full path here instead of ~ because the command runs with sudo.
If successful, the certificate files will be created here:
/etc/letsencrypt/live/osm.maksonlee.com/fullchain.pem
/etc/letsencrypt/live/osm.maksonlee.com/privkey.pem
- Configure NGINX
Create the NGINX site config:
sudo tee /etc/nginx/sites-available/osm-services > /dev/null << 'EOF'
server {
listen 80;
listen [::]:80;
server_name osm.maksonlee.com;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name osm.maksonlee.com;
ssl_certificate /etc/letsencrypt/live/osm.maksonlee.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/osm.maksonlee.com/privkey.pem;
location /nominatim/ {
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
proxy_pass http://unix:/run/nominatim.sock:/;
}
location /osrm-car/ {
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:5000/;
}
location /osrm-foot/ {
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:5001/;
}
}
EOF
Enable the site:
sudo ln -sf /etc/nginx/sites-available/osm-services /etc/nginx/sites-enabled/osm-services
Test and reload NGINX:
sudo nginx -t
sudo systemctl reload nginx
- Test Nominatim
From a machine that can access osm.maksonlee.com, test:
curl "https://osm.maksonlee.com/nominatim/status"Expected result:
OK
Test place search:
curl -G "https://osm.maksonlee.com/nominatim/search" \
--data-urlencode "q=台北車站" \
--data-urlencode "format=json" \
--data-urlencode "limit=3" | jq
When testing Chinese search terms with curl, use --data-urlencode. Sending raw Chinese characters directly inside the URL may cause a 400 Bad Request.
For example, this may fail:
curl "https://osm.maksonlee.com/nominatim/search?q=台北車站&format=json&limit=3" | jq
The error may look like this:
Invalid HTTP request received.
Using curl -G --data-urlencode avoids this issue.
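To see what --data-urlencode actually sends, you can percent-encode the query yourself. The snippet below uses python3 from the shell, since the UTF-8 encoding rules are easier to get right there than in raw shell:

```shell
# Percent-encode a UTF-8 query string, the same transformation
# curl's --data-urlencode applies before sending it.
python3 -c 'from urllib.parse import quote; print(quote("台北車站"))'
# → %E5%8F%B0%E5%8C%97%E8%BB%8A%E7%AB%99
```

Each Chinese character becomes three percent-encoded UTF-8 bytes, which is why hand-typing the raw characters into a URL often fails.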
If a broad query like 台北車站 returns nearby shops, restaurants, or bus stops before the main station object, that does not necessarily mean Nominatim is broken. It means the local OSM data and search ranking may match nearby POIs around the station area.
For application search, the backend should still apply its own filtering, ranking, and domain-specific logic.
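For example, a backend that only wants railway stations could filter the Nominatim results by class and type. The sample response below is made up, but the class, type, and display_name fields match Nominatim's JSON output:

```shell
#!/usr/bin/env bash
# Illustrative /search response (values are invented; field names are real).
cat > /tmp/nominatim-sample.json << 'EOF'
[
  {"class": "railway", "type": "station",  "display_name": "台北車站"},
  {"class": "amenity", "type": "cafe",     "display_name": "Cafe near the station"},
  {"class": "highway", "type": "bus_stop", "display_name": "Bus stop"}
]
EOF

# Keep only railway stations, as a backend-side filter might.
python3 - << 'EOF'
import json

with open("/tmp/nominatim-sample.json") as f:
    results = json.load(f)

stations = [r for r in results
            if r["class"] == "railway" and r["type"] == "station"]
for s in stations:
    print(s["display_name"])
EOF
```

In a real backend this filtering would live in application code, but the idea is the same: rank and filter on class/type rather than trusting raw result order.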
- Test OSRM Foot
OSRM uses this coordinate order:
longitude,latitude
Not:
latitude,longitude
Test walking routing:
curl "https://osm.maksonlee.com/osrm-foot/route/v1/foot/121.5170,25.0478;121.5200,25.0470?overview=false" | jq
- Test OSRM Car
Test car routing:
curl "https://osm.maksonlee.com/osrm-car/route/v1/driving/121.5170,25.0478;121.4642,25.0143?overview=false" | jq
Test the OSRM table API:
curl "https://osm.maksonlee.com/osrm-car/table/v1/driving/121.5170,25.0478;121.4642,25.0143;121.5430,25.0330?annotations=duration,distance" | jq
The table API is useful for calculating many road travel durations in a single request.
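Since most non-OSM data sources hand out coordinates in latitude,longitude order, a tiny helper that flips each pair into the longitude,latitude order OSRM expects can prevent subtle routing bugs (a sketch; the function name is hypothetical):

```shell
#!/usr/bin/env bash
# Build the "lon,lat;lon,lat" coordinate string OSRM expects
# from arguments given in the more common "lat,lon" order.
osrm_coords() {
  local out="" pair lat lon
  for pair in "$@"; do
    lat="${pair%%,*}"
    lon="${pair#*,}"
    out="${out:+$out;}${lon},${lat}"
  done
  printf '%s\n' "$out"
}

osrm_coords "25.0478,121.5170" "25.0143,121.4642"
# → 121.5170,25.0478;121.4642,25.0143
```

The output can be dropped straight into a route or table URL, so the lat/lon swap happens in exactly one place.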