
Backing Up Synology to Cloudflare R2: A Practical Guide for Developers


If you run a Synology NAS, local RAID is not enough. It helps with disk failure, but not with accidental deletion, ransomware, or site incidents. You still need an off-site backup.

I have been testing Synology Hyper Backup with Cloudflare R2, and it is one of the most practical cloud backup setups I have used recently. Configuration is straightforward, pricing is easier to reason about, and S3 compatibility keeps the tooling familiar.

This post walks through why R2 is a good destination, how to configure it in Synology, and the trade-offs you should understand before rolling it out.


Why Cloudflare R2 is a strong target for Synology backups

From a developer and operator perspective, R2 has several concrete benefits:

  - No egress fees, so pulling data back out for a restore or a restore test costs nothing.
  - An S3-compatible API, so Hyper Backup, the aws CLI, and existing S3 tooling work unchanged.
  - Simple pricing based on storage and operations, which is easy to reason about.

I especially like the no-egress angle. Restore tests become a technical exercise, not a finance negotiation.

[Image: Cloudflare R2 bucket settings page showing the S3 API endpoint and bucket details]


Prerequisites

Before configuration, prepare these items:

  1. A Synology NAS with Hyper Backup installed.
  2. A Cloudflare account with R2 enabled.
  3. A private R2 bucket (for example, synology-backup-prod).
  4. R2 API credentials (Access Key + Secret Key) scoped to the bucket.
  5. A retention decision (versions, rotation, and recovery objectives).
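If you prefer the CLI for step 3, the bucket can be created with Cloudflare's wrangler tool. A minimal sketch, assuming wrangler is installed and already authenticated (`wrangler login`); the bucket name matches the example above:

```shell
#!/bin/sh
# Create the private backup bucket from the CLI instead of the dashboard.
# Assumes wrangler is installed and authenticated with your Cloudflare account.
set -eu
BUCKET="synology-backup-prod"

if command -v wrangler >/dev/null 2>&1; then
  # R2 buckets are private by default; no extra ACL step is needed.
  wrangler r2 bucket create "$BUCKET" || echo "create failed (already exists, or not logged in?)"
else
  echo "wrangler not found; create the bucket in the Cloudflare dashboard instead"
fi
```

Either way works; the dashboard route is equally fine for a one-off bucket.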

Step-by-step setup in Hyper Backup with R2

In Synology DSM, open Hyper Backup and create a new task.

Start by selecting the backup type. For this setup, choose a folder-level backup task rather than an entire-system image; it keeps restores granular and the destination layout simple.

[Image: Hyper Backup wizard showing backup type options]

Then configure destination settings using S3-compatible values: select S3 Storage with a custom server URL, set the server address to <accountid>.r2.cloudflarestorage.com, choose Signature Version v4, and enter the R2 access key, secret key, and bucket name from the prerequisites.

[Image: Hyper Backup destination form with custom S3 server fields]

Then enable the important safety switches: client-side encryption (store the key and the exported key file somewhere other than the NAS), scheduled backup integrity checks, and task notifications so silent failures surface.

A quick endpoint validation from a workstation can help before large first syncs:

export AWS_ACCESS_KEY_ID="<r2_access_key>"
export AWS_SECRET_ACCESS_KEY="<r2_secret_key>"
export AWS_DEFAULT_REGION="auto"

aws s3 ls s3://synology-backup-prod \
  --endpoint-url https://<accountid>.r2.cloudflarestorage.com

If this command returns cleanly, your endpoint and credentials are correct.
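Beyond listing the bucket, a tiny round trip proves that both writes and reads work before Hyper Backup pushes gigabytes. A sketch with the aws CLI, assuming the same credentials are still exported; `<accountid>` is a placeholder for your Cloudflare account ID:

```shell
#!/bin/sh
# Round-trip smoke test: push a small canary file to R2 and pull it back.
set -eu
ENDPOINT="https://<accountid>.r2.cloudflarestorage.com"
BUCKET="synology-backup-prod"

canary=$(mktemp)
echo "r2-canary-$$" > "$canary"

if printf '%s' "$ENDPOINT" | grep -q '<accountid>'; then
  # Placeholder still present: remind the reader instead of calling aws.
  echo "fill in your Cloudflare account ID before running"
else
  aws s3 cp "$canary" "s3://$BUCKET/hyper-backup-canary" --endpoint-url "$ENDPOINT"
  aws s3 cp "s3://$BUCKET/hyper-backup-canary" "$canary.back" --endpoint-url "$ENDPOINT"
  # The path is only proven if the bytes survive the round trip.
  if cmp -s "$canary" "$canary.back"; then
    echo "round-trip OK"
  else
    echo "round-trip FAILED: bytes differ"
    exit 1
  fi
fi
```

Delete the canary object afterwards if you want the bucket to hold only Hyper Backup data.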


Retention, rotation, and lifecycle strategy

The default mistake is keeping too few versions.

In Hyper Backup, enable rotation and use a policy that reflects your real recovery window.

[Image: Hyper Backup rotation settings with retention controls]

A typical starting point is Hyper Backup's Smart Recycle rotation with a version cap (for example, 30 versions): recent backups stay dense, older ones thin out to daily and weekly points, and growth stays bounded.

On the R2 side, lifecycle rules help with long-term cost control.

Example lifecycle rule (S3-compatible API) to expire a specific prefix after 365 days:

{
  "Rules": [
    {
      "ID": "expire-old-archives",
      "Status": "Enabled",
      "Filter": { "Prefix": "archive/" },
      "Expiration": { "Days": 365 }
    }
  ]
}

Apply with:

aws s3api put-bucket-lifecycle-configuration \
  --bucket synology-backup-prod \
  --lifecycle-configuration file://lifecycle.json \
  --endpoint-url https://<accountid>.r2.cloudflarestorage.com
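Reading the configuration back confirms the rule actually took effect; the same `<accountid>` placeholder caveat applies:

```shell
#!/bin/sh
# Read back the lifecycle configuration to confirm the rule was applied.
set -eu
ENDPOINT="https://<accountid>.r2.cloudflarestorage.com"
BUCKET="synology-backup-prod"

if printf '%s' "$ENDPOINT" | grep -q '<accountid>'; then
  echo "fill in your Cloudflare account ID before running"
else
  # Should print the expire-old-archives rule from lifecycle.json.
  aws s3api get-bucket-lifecycle-configuration \
    --bucket "$BUCKET" \
    --endpoint-url "$ENDPOINT"
fi
```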

Security and operational notes

A few recommendations that make a big difference:

  - Scope the R2 API token to this one bucket, with only the permissions Hyper Backup needs.
  - Enable client-side encryption in Hyper Backup and keep the key somewhere other than the NAS.
  - Schedule periodic integrity checks and run a real restore test before you need one.

Trade-offs to keep in mind:

  - Hyper Backup stores data in its own versioned format, so restores go through Hyper Backup or Hyper Backup Explorer rather than plain object reads.
  - R2 has no deep-archive tier, so very cold multi-year archives may be cheaper elsewhere.
  - Egress is free, but storage and operations are still billed, so rotation and lifecycle rules matter.

In my view, those trade-offs are acceptable for most home labs and small teams.


Why this matters

Most homelab and SMB backup failures are process failures. People configure backup once, never test restore, and assume RAID equals resilience.

Using Synology + R2 encourages a healthier model: scheduled off-site copies, versioned retention you actually chose, and restore tests cheap enough to run routinely.

For developers, this aligns with how we build systems: design for failure, not for perfect hardware.


Final thoughts

If you already run Synology, Cloudflare R2 is a very sensible off-site target. Setup is quick, S3 compatibility keeps tooling familiar, and no egress fees remove one of the usual barriers to proper recovery testing.

My recommendation is simple: start with one critical folder set, run your first backup, then perform a real restore test in the same week. Once that succeeds, scale gradually across the rest of your NAS workloads.
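A concrete way to make that restore test objective is to checksum-compare the restored tree against the source. A minimal POSIX sh sketch; the paths in the example call are illustrative:

```shell
#!/bin/sh
# verify_restore ORIGINAL RESTORED
# Flags every file that is missing or different in the restored copy,
# then prints a summary; returns nonzero if anything mismatched.
verify_restore() {
  orig="$1"; rest="$2"
  bad=0
  # Walk every regular file under the original tree (newline-delimited).
  while IFS= read -r f; do
    [ -n "$f" ] || continue
    if ! cmp -s "$orig/$f" "$rest/$f" 2>/dev/null; then
      echo "MISMATCH: $f"
      bad=$((bad + 1))
    fi
  done <<EOF
$(cd "$orig" && find . -type f)
EOF
  if [ "$bad" -eq 0 ]; then
    echo "restore verified: all files match"
  else
    echo "restore FAILED: $bad file(s) missing or different"
    return 1
  fi
}

# Example (illustrative paths):
# verify_restore /volume1/docs /volume1/restore-test/docs
```

If this prints anything other than a clean verification, the restore test failed and is worth investigating before you trust the task.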

A backup is only real when restore is boring. This setup gets you much closer to that outcome.

