A developer posted on Reddit claiming that Bunny Storage repeatedly lost production files over a period of approximately 15 months. The claim describes files that appear to upload correctly, remain accessible briefly, and then disappear without any recorded deletion.

According to the post, files uploaded successfully through Bunny’s API later returned 404 errors — even though the poster says they never deleted the files and Bunny’s logs showed no deletion event. The post also claims that Bunny support acknowledged an unusual condition: some files appeared in replication regions but not in the main region.
Bunny promotes its storage product around replication, geographical distribution, and CDN integration. Its documentation states that Bunny Storage uses a main region with replication regions, and that recently uploaded files should remain available even before replication fully completes.
This article is based on a publicly available Reddit post describing the issue. The claims have not been independently verified.
The Reddit post describes a long-running pattern, not a one-time outage. According to the timeline shared by the developer, the support ticket reportedly began on January 13, 2025, after missing files and repeated 404 errors appeared in backend logs. On January 14, support escalated the case and confirmed that the missing files existed in some replication regions but not in the main region.
The timeline shows that the issue did not stop after the initial escalation. Additional missing files are listed on April 8, April 24, and April 29, 2025, with more than 200 affected cases recorded within a single week at one point. In a follow-up on March 24, 2026, Bunny reportedly responded that its storage team had still not reached a conclusion. Two days later, on March 26, 2026, a file uploaded that morning had disappeared by the end of the same day.
The developer states that the setup was not unusual and that roughly 10 million files were stored on Bunny at the time. That figure comes solely from the Reddit post and has not been verified.
Storage failures are bad enough when writes fail immediately. Silent disappearance is worse: the upload appears successful, the file is briefly available, and it vanishes later without a recorded deletion event.
The Reddit post says files existed in replication regions but not in the main region. If accurate, that conflicts with the expected behavior of a storage system built around a primary region and replication flow. It would point to a reliability failure inside the storage path itself, rather than a simple cache delay or user-side deletion.
The Reddit post includes a detailed timeline and alleges that Bunny support acknowledged a mismatch between replication regions and the main region. The developer states files were recorded as sent to storage, were briefly accessible, and then disappeared hours later without a recorded deletion.
The root cause and full scope remain publicly unconfirmed. This should be treated as a serious allegation backed by a detailed timeline — not as proof that every Bunny Storage deployment is affected.
Why This Storage Issue Should Concern Developers
Regardless of how this specific case is ultimately explained, it highlights a broader infrastructure lesson: replication is not backup. CDN-linked storage is not disaster recovery. Teams still need versioned backups, integrity checks, restore testing, and a migration plan.
The Reddit post also highlights a support problem. The developer states that the issue was not only the disappearing files, but also the long delays, repeated ticket replies, and lack of direct escalation. When production files go missing, teams need recovery options and answers quickly.
How to Protect Your Data on Bunny Storage
If your stack depends on Bunny Storage, the following steps are worth taking regardless of this incident.
1. Verify files after upload
Do not assume a successful upload response means the file is safe. Add a post-write check that confirms the file is accessible, and run a second check after a short delay. Automated integrity checks on production media and user uploads will catch silent failures before they affect end users.
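The post-write check described above can be sketched as follows. This is a minimal illustration, not Bunny's API: the `fetch` callable and function names are hypothetical, and in production `fetch` would be an HTTP GET against your storage or CDN URL. The second, delayed check is what catches the "briefly available, then gone" pattern the Reddit post describes.

```python
import hashlib
import time


def verify_upload(url, expected_sha256, fetch, delay_seconds=0):
    """Fetch the object back and compare its SHA-256 to the local copy.

    `fetch` is any callable that takes a URL and returns the object's
    bytes (raising on a 404) -- injected here so the check is testable.
    """
    if delay_seconds:
        time.sleep(delay_seconds)
    try:
        body = fetch(url)
    except Exception:
        return False
    return hashlib.sha256(body).hexdigest() == expected_sha256


def upload_is_durable(url, local_bytes, fetch, recheck_delay=30):
    """Run the check twice: immediately after upload, then again after
    a short delay, to catch files that are visible at first and
    silently vanish later."""
    digest = hashlib.sha256(local_bytes).hexdigest()
    if not verify_upload(url, digest, fetch):
        return False
    return verify_upload(url, digest, fetch, delay_seconds=recheck_delay)
```

In practice the re-check delay would be minutes or hours (e.g. a queued job), not seconds; the point is that no upload is marked "safe" in your database until both checks pass.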
2. Keep backups outside your delivery layer
Storage attached to a CDN is not a backup system. Keep separate backups with version history stored in an independent provider (such as Backblaze B2, AWS S3, or equivalent), and test restores on a regular schedule. If you do not currently have this in place, it is the highest-priority action.
3. Monitor for silent 404 growth
Track missing-file rates over time. Even a low percentage of missing files represents a serious reliability risk at scale. A dashboard alert on 404 rates for storage-backed assets will surface problems early.
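A simple way to implement this alert is a sliding-window counter over recent storage requests. The sketch below is illustrative; the window size and threshold are placeholder values you would tune to your own traffic.

```python
from collections import deque


class NotFoundRateMonitor:
    """Track the fraction of recent storage requests returning 404 over
    a sliding window, and flag when the rate crosses a threshold.
    Window size and threshold here are illustrative defaults."""

    def __init__(self, window=1000, threshold=0.001):
        self.results = deque(maxlen=window)  # True = request was a 404
        self.threshold = threshold

    def record(self, status_code):
        self.results.append(status_code == 404)

    @property
    def rate(self):
        if not self.results:
            return 0.0
        return sum(self.results) / len(self.results)

    def should_alert(self):
        return self.rate >= self.threshold
```

Feeding this from your access logs (or a middleware hook on storage-backed asset routes) surfaces a creeping 404 rate long before users report broken files.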
4. Prepare a migration path
If your application depends on provider-specific APIs, document the work needed to move to a different provider. The original poster noted that migration was not immediate because the storage layer sat deep inside production workflows. That is exactly why teams should plan exits before they are needed.
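One way to keep that exit cheap is to code against a small storage interface of your own and confine provider-specific API calls to adapters behind it. The sketch below is an assumption-laden illustration (the `ObjectStore` protocol and adapter names are invented for this example, not part of any provider's SDK).

```python
from typing import Protocol


class ObjectStore(Protocol):
    """Minimal storage boundary the application codes against.
    Provider-specific details (Bunny, S3, B2, ...) live in adapters
    implementing this protocol, so switching providers means writing
    one new adapter instead of rewriting production workflows."""

    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...
    def delete(self, key: str) -> None: ...


class InMemoryStore:
    """Trivial adapter, useful in tests and as a reference implementation."""

    def __init__(self):
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

    def delete(self, key: str) -> None:
        del self._objects[key]
```

With this boundary in place, a migration is an adapter swap plus a bulk copy job, rather than a rewrite of every code path that touches storage.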
The Reddit post does not confirm a root cause, but the described behavior suggests a potential inconsistency between the main storage region and replication layers. In distributed storage systems, this can happen due to failed writes, replication delays, indexing issues, or internal cleanup processes. Without official confirmation, the exact cause remains unclear.
