Securely Exposing Academic Servers with Cloudflare Tunnel
Background
When running Elasticsearch (full-text search) or Cantaloupe (IIIF image delivery) on an academic research server, you typically need to open ports to the outside world. However, opening ports introduces the risk of attacks exploiting vulnerabilities.
With Cloudflare Tunnel, you can securely expose services to the public without opening any inbound ports on your server.
What Is Cloudflare Tunnel?
In a conventional server setup, the server opens ports and listens for incoming connections (inbound connections). Cloudflare Tunnel reverses this model.
[Conventional]
External → (port 80/443) → Server
* Server opens ports and listens for connections
[Cloudflare Tunnel]
External → Cloudflare ← Server (cloudflared)
* Server initiates the connection to Cloudflare (outbound)
* No inbound ports required
An agent called cloudflared runs on the server and maintains an outbound connection to Cloudflare. Incoming requests are received by Cloudflare and forwarded to the server through this tunnel.
Benefits
- No port forwarding needed: All inbound ports can be closed
- WAF and DDoS protection: Cloudflare automatically absorbs attacks
- Automatic SSL: No need for Let’s Encrypt configuration or reverse proxies (Traefik, etc.)
- Free: Tunnel is available on the free plan
Architecture
Cloudflare
├── iiif-cf.example.jp → Cantaloupe (8182)
└── es-cf.example.jp → Elasticsearch (9200)
│
│ Tunnel (encrypted)
│
Server (Docker)
├── cloudflared (Tunnel agent)
├── elasticsearch (full-text search)
└── cantaloupe (IIIF image delivery)
Steps
1. Register Your Domain with Cloudflare
Add your domain in the Cloudflare dashboard and update your registrar’s nameservers to point to Cloudflare.
2. Create a Tunnel
Create a Tunnel from the Cloudflare dashboard (Zero Trust → Networks → Tunnels) and obtain a token.
You can also create one via CLI:
cloudflared tunnel create my-tunnel
3. Docker Compose
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: cloudflared
    restart: always
    command: tunnel --protocol http2 run
    environment:
      TUNNEL_TOKEN: <your-token>
    extra_hosts:
      - host.docker.internal:host-gateway
    networks:
      - tunnel-network

  elasticsearch:
    image: elasticsearch:8.17.0
    container_name: elasticsearch
    restart: always
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    volumes:
      - es-data:/usr/share/elasticsearch/data
    networks:
      - tunnel-network

  cantaloupe:
    image: mitlibraries/cantaloupe:latest
    container_name: cantaloupe
    restart: always
    volumes:
      - cantaloupe-images:/imageroot
    networks:
      - tunnel-network

volumes:
  es-data:
  cantaloupe-images:

networks:
  tunnel-network:
Key points:
- No service exposes ports externally (no ports directive)
- cloudflared accesses each service through Docker's internal network
- --protocol http2 is needed in environments with UDP restrictions
4. Configure DNS Routing
cloudflared tunnel route dns my-tunnel iiif-cf.example.jp
cloudflared tunnel route dns my-tunnel es-cf.example.jp
5. Configure Ingress Rules
Use the Cloudflare API to map hostnames to services:
curl -X PUT "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/cfd_tunnel/<TUNNEL_ID>/configurations" \
  -H "X-Auth-Email: <EMAIL>" \
  -H "X-Auth-Key: <API_KEY>" \
  -H "Content-Type: application/json" \
  --data '{
    "config": {
      "ingress": [
        {"hostname": "iiif-cf.example.jp", "service": "http://cantaloupe:8182"},
        {"hostname": "es-cf.example.jp", "service": "http://elasticsearch:9200"},
        {"service": "http_status:404"}
      ]
    }
  }'
The final http_status:404 is a required catch-all rule.
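The ingress array can also be assembled programmatically before the API call. A minimal TypeScript sketch (the function names are illustrative, not part of any SDK) that always appends the mandatory catch-all:

```typescript
// Build a Tunnel ingress array, always ending with the required catch-all.
type IngressRule = { hostname?: string; service: string };

function buildIngress(routes: Record<string, string>): IngressRule[] {
  const rules: IngressRule[] = Object.entries(routes).map(
    ([hostname, service]) => ({ hostname, service })
  );
  // Cloudflare rejects a config whose last rule is not a catch-all
  rules.push({ service: "http_status:404" });
  return rules;
}

const ingress = buildIngress({
  "iiif-cf.example.jp": "http://cantaloupe:8182",
  "es-cf.example.jp": "http://elasticsearch:9200",
});
```

The resulting array matches the "ingress" payload in the curl call above, with the catch-all guaranteed to come last.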
6. Start and Verify
docker compose up -d
curl https://es-cf.example.jp
# → Returns Elasticsearch response
curl https://iiif-cf.example.jp
# → Returns Cantaloupe admin page
7. Close the Firewall
Once you have confirmed that the Tunnel is working properly, close all inbound ports on the server. Since cloudflared only uses outbound connections, no inbound ports are needed at all.
Exposure Policy by Service
Not every service needs to be publicly exposed through the Tunnel. Choose the appropriate exposure level based on the nature of each service.
Services That Can Be Public
Services like IIIF image delivery (Cantaloupe), which are accessed by a wide range of clients, should be exposed through the Tunnel with CDN caching enabled.
Services That Should Not Be Public
Internal services like Elasticsearch should generally not be publicly exposed. Exposing them without authentication creates risks of data leakage and tampering.
There are several approaches to handle this:
Option 1: Exclude from Ingress
Don’t include Elasticsearch in the tunnel ingress, and connect only when needed via SSH port forwarding.
# Port forward via Zero Trust SSH
ssh -L 9200:elasticsearch:9200 my-server-cf
This makes Elasticsearch available at localhost:9200, and you can connect from a development environment like Next.js with the following configuration:
ELASTICSEARCH_URL=http://localhost:9200
Option 2: Protect with Cloudflare Access
Expose through the Tunnel but require Cloudflare Access authentication. Similar to SSH, only authenticated users are allowed access. For API access, issue a Service Token.
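For programmatic access with a Service Token, the client presents the token in the CF-Access-Client-Id and CF-Access-Client-Secret request headers. A hedged TypeScript sketch; the hostname, path, and environment variable names are placeholders:

```typescript
// Attach Cloudflare Access service-token headers to an outgoing request.
function accessHeaders(clientId: string, clientSecret: string): Record<string, string> {
  return {
    "CF-Access-Client-Id": clientId,
    "CF-Access-Client-Secret": clientSecret,
  };
}

// Usage (not executed here): query an Access-protected Elasticsearch endpoint
// await fetch("https://es-cf.example.jp/_cluster/health", {
//   headers: accessHeaders(process.env.CF_ACCESS_ID!, process.env.CF_ACCESS_SECRET!),
// });
```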
Option 3: Connect Internally Within the Tunnel for Production Apps
When a production app (e.g., Cloudflare Workers) needs to connect to Elasticsearch, use Tunnel-internal networking (such as Service Binding) to connect without public exposure.
Recommended Architecture
Publicly exposed via Tunnel
└── Cantaloupe (IIIF image delivery) → CDN cache enabled
Private (not included in Ingress)
└── Elasticsearch (internal service)
Development access
└── Zero Trust SSH port forwarding → localhost:9200
Removing Elasticsearch from Public Access
Remove the entry from the ingress rules and delete the DNS record.
# Remove from Ingress (exclude the ES entry)
curl -X PUT "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/cfd_tunnel/<TUNNEL_ID>/configurations" \
  -H "X-Auth-Email: <EMAIL>" \
  -H "X-Auth-Key: <API_KEY>" \
  -H "Content-Type: application/json" \
  --data '{
    "config": {
      "ingress": [
        {"hostname": "iiif-cf.example.jp", "service": "http://cantaloupe:8182"},
        {"service": "http_status:404"}
      ]
    }
  }'
# Also delete the DNS record
# Delete the corresponding CNAME via Cloudflare API or dashboard
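The CNAME cleanup can be scripted against the Cloudflare v4 DNS records API (a GET to find the record's id, then a DELETE). A sketch of the URL construction, assuming you have the zone ID at hand:

```typescript
// Build the Cloudflare v4 API URLs for finding and deleting a CNAME record.
const CF_API = "https://api.cloudflare.com/client/v4";

function listRecordsUrl(zoneId: string, name: string): string {
  // GET here (with the usual auth headers) returns matching records, each with an "id"
  return `${CF_API}/zones/${zoneId}/dns_records?type=CNAME&name=${encodeURIComponent(name)}`;
}

function deleteRecordUrl(zoneId: string, recordId: string): string {
  // Send DELETE to this URL to remove the record
  return `${CF_API}/zones/${zoneId}/dns_records/${recordId}`;
}
```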
Connecting to Elasticsearch During Local Development
Use port forwarding via Zero Trust SSH.
# Start SSH port forward (specify ES container IP)
ssh -L 9200:<ES-container-IP>:9200 my-server-cf
You can find the ES container IP with:
ssh my-server-cf "docker inspect elasticsearch --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'"
While the port forward is active, you can access ES at localhost:9200.
Next.js .env.local:
ELASTICSEARCH_URL=http://localhost:9200
Example API route (app/api/search/route.ts):
import { NextRequest, NextResponse } from "next/server";

const ES_URL = process.env.ELASTICSEARCH_URL || "http://localhost:9200";

export async function GET(request: NextRequest) {
  const q = request.nextUrl.searchParams.get("q") || "";
  const category = request.nextUrl.searchParams.get("category") || "";

  const must = [];
  if (q) {
    must.push({ multi_match: { query: q, fields: ["title", "description"] } });
  }
  if (category) {
    must.push({ term: { category } });
  }

  const res = await fetch(`${ES_URL}/documents/_search`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: must.length > 0 ? { bool: { must } } : { match_all: {} },
      aggs: { categories: { terms: { field: "category" } } },
    }),
  });
  const data = await res.json();

  return NextResponse.json({
    hits: data.hits.hits.map((h: any) => h._source),
    facets: data.aggregations.categories.buckets,
    total: data.hits.total.value,
  });
}
With this setup, Elasticsearch is never exposed to the internet, and during development, it is only accessed via SSH port forwarding.
SSL Certificate Considerations
Cloudflare’s free plan automatically issues a wildcard certificate for *.example.jp (one level). However, two-level subdomains like es.cf.example.jp are not covered.
| Subdomain | Certificate |
|---|---|
| es-cf.example.jp (one level) | Covered |
| es.cf.example.jp (two levels) | Not covered |
If you need two-level subdomains, Advanced Certificate Manager ($10/month) is required.
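The one-level rule is easy to check mechanically. A small TypeScript helper (hypothetical, for illustration only) that tests whether a hostname falls under the free *.example.jp wildcard:

```typescript
// True if the hostname is covered by a one-level wildcard certificate for the zone.
function coveredByFreeWildcard(hostname: string, zone = "example.jp"): boolean {
  if (hostname === zone) return true; // the apex itself is covered
  if (!hostname.endsWith("." + zone)) return false;
  const prefix = hostname.slice(0, hostname.length - zone.length - 1);
  return !prefix.includes("."); // exactly one subdomain label
}
```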
Comparison with Conventional Setup (Traefik)
Architectural Differences
[Traefik Setup]
Internet
→ Server (ports 80/443 open)
└── Traefik (reverse proxy)
├── Cantaloupe
├── Elasticsearch
└── ModSecurity (if WAF is needed)
[Cloudflare Tunnel Setup]
Internet
→ Cloudflare (WAF, DDoS protection, SSL)
→ Tunnel (encrypted)
→ Server (no open ports)
└── cloudflared
├── Cantaloupe
└── Elasticsearch
Feature Comparison
| Feature | Traefik + Let’s Encrypt | Cloudflare Tunnel |
|---|---|---|
| Open ports | 80, 443 required | Not needed |
| SSL management | Let’s Encrypt auto-renewal | Fully managed by Cloudflare |
| WAF | None (requires separate setup) | Included (free) |
| DDoS protection | None | Included (free) |
| Cost | Server costs only | Free |
| Configuration | Traefik config + labels | Docker Compose + API |
Implementing WAF with Traefik
Traefik does not have built-in WAF capabilities. If WAF is needed, you must run a separate ModSecurity container and integrate it as Traefik middleware.
# Traefik + ModSecurity setup (reference)
services:
  traefik:
    image: traefik:latest
    ports:
      - "80:80"
      - "443:443"
  modsecurity:
    image: owasp/modsecurity-crs:apache
    environment:
      - PARANOIA=1
  cantaloupe:
    labels:
      - "traefik.http.routers.cantaloupe.middlewares=waf@docker"
This setup has several challenges:
- ModSecurity configuration and tuning is complex
- False positives need to be addressed
- CRS (Core Rule Set) updates must be managed
- Performance impact
With Cloudflare Tunnel, all of this is managed on Cloudflare’s side, significantly reducing operational overhead.
Ease of Migration
Migrating from Traefik to Cloudflare Tunnel is relatively straightforward:
- Remove Traefik label configurations
- Add the cloudflared container to docker-compose.yml
- Remove external port mappings
- Configure Ingress rules on the Cloudflare side
Existing service containers (Elasticsearch, Cantaloupe, etc.) can be used as-is.
CDN Cache Configuration
Services exposed through Cloudflare Tunnel do not have CDN caching enabled by default (cf-cache-status: DYNAMIC). For cases where the same response is requested repeatedly, such as IIIF image tiles, enabling CDN caching can significantly reduce the load on the origin server.
Why Caching Is Not Enabled by Default
Cloudflare automatically caches static file extensions like .jpg and .png, but responses served through the Tunnel may be classified as DYNAMIC (not eligible for caching) based on the origin’s Cache-Control headers or dynamic content detection. Explicitly enabling caching through Cache Rules ensures that CDN caching works reliably.
Configuring Cache Rules
Set up hostname-based cache rules via the Cloudflare API:
curl -X POST "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/rulesets" \
  -H "X-Auth-Email: <EMAIL>" \
  -H "X-Auth-Key: <API_KEY>" \
  -H "Content-Type: application/json" \
  --data '{
    "name": "IIIF Cache Rules",
    "kind": "zone",
    "phase": "http_request_cache_settings",
    "rules": [
      {
        "expression": "(http.host eq \"iiif-cf.example.jp\")",
        "description": "Cache IIIF responses",
        "action": "set_cache_settings",
        "action_parameters": {
          "cache": true,
          "edge_ttl": {
            "mode": "override_origin",
            "default": 86400
          },
          "browser_ttl": {
            "mode": "override_origin",
            "default": 86400
          }
        }
      }
    ]
  }'
Response Headers After Configuration
cf-cache-status: HIT ← Served from CDN cache
age: 24 ← Seconds since cached
cf-ray: xxxxx-NRT ← Served from Narita edge server
| Header Value | Meaning |
|---|---|
| MISS | Not cached. Fetched from origin and stored in cache |
| HIT | Served from cache. No request to origin |
| DYNAMIC | Not eligible for cache rules |
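A quick way to verify caching from a script is to read cf-cache-status off the response headers. A TypeScript sketch using the standard Headers API (Node 18+ or browsers); the URL in the usage comment is this article's example hostname:

```typescript
// Read and interpret the cf-cache-status response header.
function cacheStatus(headers: Headers): string | null {
  return headers.get("cf-cache-status");
}

function servedFromEdge(headers: Headers): boolean {
  return cacheStatus(headers) === "HIT";
}

// Usage (not executed here):
// const res = await fetch("https://iiif-cf.example.jp/iiif/2/<id>/info.json");
// cacheStatus(res.headers) // MISS on the first request, HIT on repeats
```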
Cost
CDN caching on the free plan has no limits on storage or bandwidth. There are no restrictions on IIIF image tile delivery.
How Far Can You Go on the Free Tier?
All Cloudflare features used in this setup are available within the free plan.
| Feature | Use Case | Free Tier |
|---|---|---|
| DNS | Domain nameserver | Unlimited |
| Tunnel | Secure connection to server | Up to 50 |
| CDN | Image tile caching | Unlimited storage and bandwidth |
| WAF | Attack protection | Basic rules included |
| SSL | Automatic certificate issuance and renewal | Unlimited |
| Zero Trust Access | SSH authentication, etc. | Up to 50 users |
It is virtually impossible to exceed the free tier for research purposes. The only costs incurred are for the server itself (VPS, mdx, etc.).
Why Are All These Features Free?
Cloudflare’s business model is based on acquiring a large user base through its free plan and monetizing through Enterprise and Pro plans for large organizations. Individuals, researchers, and small-scale projects are on the benefiting side of the free plan.
What This Setup Achieves
Here is a summary of what changed compared to the conventional setup:
[Before (Traefik + Direct VPS Exposure)]
- Open ports required → Attack risk
- SSL certificate management required
- No WAF → Framework vulnerabilities directly exposed
- No CDN → Load concentrated on origin
- High operational burden on server administrators
[Cloudflare Tunnel + mdx]
- All inbound ports closed → No attack surface
- SSL/WAF/CDN/DDoS protection all automatic
- SSH also protected by Zero Trust
- Internal services (ES, etc.) remain private while still accessible for development
- Zero cost on the Cloudflare side
Practical Example: Next.js + Elasticsearch + IIIF Demo App
A practical example of building a web application with search and image viewing capabilities using the Cloudflare Tunnel setup.
Overall Architecture
Browser
│
├── app-cf.example.jp → Next.js (search UI)
│ Exposed via Tunnel
│
└── iiif-cf.example.jp → Cantaloupe (IIIF image delivery)
Exposed via Tunnel + CDN cache
Server (inside Docker network)
├── cloudflared (Tunnel agent)
├── app (Next.js)
│ → Connects directly to elasticsearch:9200 (Docker internal, <1ms)
├── elasticsearch (full-text search) ← Not publicly exposed
└── cantaloupe (IIIF image delivery)
→ Fetches images from S3-compatible storage
Key points:
- Next.js and Elasticsearch are on the same Docker network, connecting directly via Docker DNS resolution (elasticsearch:9200). No public exposure needed.
- Cantaloupe image delivery leverages CDN caching, so repeated requests for the same tile are served from Cloudflare's edge.
- Browsers access only through the Tunnel. All inbound ports on the server are closed.
docker-compose.yml
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    restart: always
    command: tunnel --protocol http2 run
    environment:
      TUNNEL_TOKEN: <token>
    extra_hosts:
      - host.docker.internal:host-gateway
    networks:
      - tunnel-network

  elasticsearch:
    image: elasticsearch:8.17.0
    restart: always
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    volumes:
      - es-data:/usr/share/elasticsearch/data
    networks:
      - tunnel-network

  cantaloupe:
    image: islandora/cantaloupe:6.3.12
    restart: always
    environment:
      CANTALOUPE_SOURCE_STATIC: S3Source
      CANTALOUPE_S3SOURCE_ENDPOINT: https://s3.example.jp
      CANTALOUPE_S3SOURCE_REGION: us-east-1
      AWS_ACCESS_KEY_ID: <access-key>
      AWS_SECRET_ACCESS_KEY: <secret-key>
      CANTALOUPE_S3SOURCE_BASICLOOKUPSTRATEGY_BUCKET_NAME: my-bucket
      CANTALOUPE_S3SOURCE_LOOKUP_STRATEGY: BasicLookupStrategy
      CANTALOUPE_CACHE_SERVER_DERIVATIVE_ENABLED: "true"
      CANTALOUPE_CACHE_SERVER_DERIVATIVE: FilesystemCache
    volumes:
      - cantaloupe_cache:/data
    networks:
      - tunnel-network

  app:
    build: ./es-search
    restart: always
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    networks:
      - tunnel-network

volumes:
  es-data:
  cantaloupe_cache:

networks:
  tunnel-network:
No port mapping (ports) is configured for ES. The app container accesses it only through Docker’s internal network.
Tunnel Ingress Configuration
curl -X PUT "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/cfd_tunnel/<TUNNEL_ID>/configurations" \
  -H "X-Auth-Email: <EMAIL>" \
  -H "X-Auth-Key: <API_KEY>" \
  -H "Content-Type: application/json" \
  --data '{
    "config": {
      "ingress": [
        {"hostname": "app-cf.example.jp", "service": "http://app:3000"},
        {"hostname": "iiif-cf.example.jp", "service": "http://cantaloupe:8182"},
        {"service": "http_status:404"}
      ]
    }
  }'
Elasticsearch is not included in the ingress.
Connecting Next.js to Elasticsearch
The API route (app/api/search/route.ts) accesses ES and appends IIIF image URLs to the search results:
import { NextRequest, NextResponse } from "next/server";

const ES_URL = process.env.ELASTICSEARCH_URL; // http://elasticsearch:9200
const IIIF_BASE = "https://iiif-cf.example.jp";

export async function GET(request: NextRequest) {
  const q = request.nextUrl.searchParams.get("q") || "";
  const category = request.nextUrl.searchParams.get("category") || "";

  // Send search query to Elasticsearch
  const res = await fetch(`${ES_URL}/documents/_search`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: ...,
      aggs: { categories: { terms: { field: "category" } } },
    }),
  });
  const data = await res.json();

  // Append IIIF image URLs and return
  return NextResponse.json({
    hits: data.hits.hits.map((h: any) => {
      const source = h._source;
      if (source.iiif_id) {
        source.iiif_info = `${IIIF_BASE}/iiif/2/${encodeURIComponent(source.iiif_id)}/info.json`;
        source.iiif_thumbnail = `${IIIF_BASE}/iiif/2/${encodeURIComponent(source.iiif_id)}/full/200,/0/default.jpg`;
      }
      return source;
    }),
    facets: data.aggregations.categories.buckets,
    total: data.hits.total.value,
  });
}
ELASTICSEARCH_URL uses Docker’s internal DNS resolution (http://elasticsearch:9200), so there is no need to expose it externally.
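The URL construction in the route above can be factored into a small helper; the same IIIF Image API 2.x patterns (info.json and full/200,/0/default.jpg) are used, with the helper name being illustrative:

```typescript
// Build IIIF Image API 2.x URLs for an identifier, matching the route above.
const IIIF_BASE = "https://iiif-cf.example.jp";

function iiifUrls(id: string, base: string = IIIF_BASE) {
  const enc = encodeURIComponent(id); // slashes in identifiers must be percent-encoded
  return {
    info: `${base}/iiif/2/${enc}/info.json`,
    thumbnail: `${base}/iiif/2/${enc}/full/200,/0/default.jpg`,
  };
}
```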
IIIF Image Viewer with OpenSeadragon
Clicking an IIIF image in the search results opens a high-resolution viewer using OpenSeadragon (OSD). OSD runs on the client side and fetches tiles from iiif-cf.example.jp. Tiles are cached by Cloudflare’s CDN, so re-displaying the same region is fast.
// OSD only works on the client side, so use dynamic import
useEffect(() => {
  let viewer: any;
  import("openseadragon").then((OSD) => {
    viewer = OSD.default({
      element: viewerRef.current,
      tileSources: [infoUrl], // IIIF info.json URL
    });
  });
  return () => viewer?.destroy(); // destroy the viewer on unmount / URL change
}, [infoUrl]);
S3-Compatible Storage Considerations
When Cantaloupe connects to S3-compatible storage (non-AWS), the AWS_REGION environment variable must be set. Without it, the AWS SDK v2 will repeatedly retry region resolution, causing tile fetches to time out. The actual region value does not matter, so a dummy value like us-east-1 works fine.
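In compose terms this is a single extra line in the cantaloupe service's environment (shown here as a fragment of the docker-compose.yml above; any region string satisfies the SDK):

```yaml
cantaloupe:
  environment:
    # Dummy region: prevents AWS SDK v2 region-resolution retries with non-AWS S3
    AWS_REGION: us-east-1
```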
Conclusion
Using Cloudflare Tunnel, you can securely expose services while keeping all inbound ports on the server completely closed. The reverse proxy (Traefik, etc.) and SSL certificate management that were previously required are no longer needed, and WAF, DDoS protection, and CDN caching are all available at no additional cost.
By placing your Docker-based applications (Next.js, etc.) and databases (Elasticsearch, etc.) on the same network, internal services can communicate at high speed without being publicly exposed, and during development, you can connect from your local machine via SSH port forwarding.