Real-time Notifications with Webhooks
Setting up webhook callbacks, understanding event types and payloads, handling retries, and building reliable event-driven integrations.
Why event-driven is essential for media workflows
Media teams in eCommerce, events, attractions, and real estate rely on fast asset delivery. Polling the API to check if an asset has been processed, or scanning for new uploads, wastes resources and adds latency—and in high-volume environments, it simply doesn't scale. Webhooks flip the model: instead of your application asking FileSpin "anything new?", FileSpin tells your application the moment something happens. This is essential for retail teams launching seasonal campaigns, event organizers managing thousands of submissions under deadline, attractions maximizing preview-to-purchase conversion, and real estate teams publishing listings to multiple portals instantly.
This guide covers setting up webhooks, understanding the event types and payloads, and building reliable, event-driven integrations.
Why webhooks matter
Without webhooks, a typical integration polls repeatedly:
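For example, a polling loop might look like this sketch. It is illustrative only: `poll_until_ready` and `get_status` are not part of any FileSpin SDK, and `get_status` stands in for a GET to the Asset API that returns the asset's status string.

```python
import time

def poll_until_ready(get_status, interval=5.0, timeout=60.0):
    """Repeatedly call get_status() until it returns "OK" or we time out.

    get_status is assumed to wrap a GET to the Asset API; here it is any
    callable returning the asset's current status string.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "OK":
            return True
        time.sleep(interval)  # wasted requests and added latency on every miss
    return False
```

Every iteration that does not hit "OK" is a wasted round trip, and the `interval` is a built-in floor on how stale your view of the asset can be.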
With webhooks, FileSpin pushes events to your app the moment they happen:
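Conceptually, each delivery is a plain HTTP POST from FileSpin to your configured URL. The sketch below is an illustration of that push model, not FileSpin's actual code; `push_event` and its arguments are invented names.

```python
import json
from urllib import request

def push_event(callback_url, asset_data):
    """POST an event payload as JSON to a webhook URL (illustrative)."""
    req = request.Request(
        callback_url,
        data=json.dumps(asset_data).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        # FileSpin retries unless the endpoint answers with a 20x code
        return resp.status
```

Your application's only job is to stand up an endpoint that accepts this POST, which is what the rest of this guide covers.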
Setting up webhooks
Webhooks are configured in your FileSpin account settings. You specify one or more callback URLs to receive POST notifications when asset events occur.
Configure via Dashboard
- Navigate to Settings in your FileSpin Dashboard
- In the Webhooks section, add your callback URL (e.g., https://your-app.com/webhooks/filespin)
- Save your settings
You can configure multiple webhook URLs. FileSpin will POST to all configured URLs whenever an asset event occurs.
Webhook events
FileSpin sends webhook callbacks for the following asset lifecycle events:
| Event | When it fires | Common use case |
|---|---|---|
| `file-saved` | File is stored in your storage after upload (File Picker or Upload API) | Insert new asset into your database |
| `file-processed` | Image and video conversions are processed (upload workflow or Conversion API) | Mark asset as ready, cache CDN URLs, enable in UI |
| `file-data-updated` | Custom data is attached via FileSpin.update or Update File Data API | Sync metadata changes to your application |
| `file-deleted` | Conversions, transcodes, or the original file is deleted via Delete API | Soft-delete or archive in your application |
| `file-undeleted` | Original file is undeleted via Undelete API | Restore references in your application |
| `addon-processed` | An addon has completed processing | React to addon-specific results (face recognition, background removal, etc.) |
Event flow for a typical upload
Upload ──> file-saved ──> file-processed
- `file-saved` fires when the original file is stored. At this point, the asset has core metadata (name, size, content type, dimensions) but no conversions or transcodes yet.
- `file-processed` fires when all automatic processing is complete. The asset now has its generated conversions, transcodes, and addon results.
For metadata updates:
Update data ──> file-data-updated
For deletion and restoration:
Delete ──> file-deleted ──> Undelete ──> file-undeleted
Webhook payload
Standard events (file-saved, file-processed, file-data-updated, addon-processed)
For these events, the webhook payload is the Asset Data Format — the same JSON structure returned by the Asset API. This includes:
{
  "id": "99d819953914402babbdeb68337ea6a3",
  "status": "OK",
  "name": "product-photo.jpg",
  "size": 2456789,
  "checksum": "d41d8cd98f00b204e9800998ecf8427e",
  "content_type": "image/jpeg",
  "creator_id": 42,
  "upload_time": "2026-02-17T10:30:00Z",
  "update_time": "2026-02-17T10:30:05Z",
  "metadata": {
    "width": 4000,
    "height": 3000
  },
  "data": {
    "product_name_txt": "Summer Straw Hat",
    "sku_s": "HAT-STR-001"
  },
  "conversions": {
    "720p-video": {
      "width": 1280,
      "height": 720,
      "size": 716000,
      "public": true
    }
  },
  "addons_info": {
    "ON_DEMAND_IMAGE": {
      "available": true
    }
  }
}
On-demand image (ODI) is available for assets after they are processed the first time. Availability is indicated by the ON_DEMAND_IMAGE key in addons_info.
Key fields to check:
| Field | Purpose |
|---|---|
| `id` | The asset ID — use to match with your database records |
| `status` | Current asset status (`OK`, `NOT_READY`, `ERROR`, `ARCHIVED`) |
| `data` | Custom metadata (if a schema is assigned) |
| `conversions` | Available conversions and transcodes |
| `addons_info` | Which addons have been processed |
Deletion events (file-deleted, file-undeleted)
These events use a different payload format:
{
  "event": "file-deleted",
  "id": "99d819953914402babbdeb68337ea6a3",
  "keys": ["deepzoom"],
  "status": "OK",
  "message": ""
}
| Field | Type | Description |
|---|---|---|
| `id` | string | File ID |
| `event` | string | `file-deleted` or `file-undeleted` |
| `keys` | JSON | List of keys sent in the original deletion request |
| `status` | string | `OK` if successful, `MAYBE` if uncertain (check `message`), `ERROR` if failed (see `errors`) |
| `message` | string | Additional details about the status |
| `errors` | JSON | Keys that failed and their error messages (present when status is `ERROR`) |
Example file-deleted with error:
{
  "event": "file-deleted",
  "id": "99d819953914402babbdeb68337ea6a3",
  "keys": ["deepzoom"],
  "status": "ERROR",
  "message": "S3 Permission denied",
  "errors": {
    "deepzoom": "S3 Permission denied"
  }
}
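A handler can turn the per-key `errors` object into actionable log lines. The helper below is an illustrative sketch, not part of any SDK; `report_deletion_errors` is an invented name.

```python
def report_deletion_errors(payload):
    """Return one human-readable line per key that failed to delete.

    Expects a file-deleted webhook payload; returns [] unless status is ERROR.
    """
    if payload.get("status") != "ERROR":
        return []
    return [
        f"{payload['id']}: key '{key}' failed: {msg}"
        for key, msg in payload.get("errors", {}).items()
    ]
```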
Building a webhook handler
Here's a practical webhook handler that processes different events:
Python (Flask)
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhooks/filespin", methods=["POST"])
def handle_webhook():
    payload = request.get_json()

    # Deletion events include an "event" field
    event = payload.get("event")
    if event in ("file-deleted", "file-undeleted"):
        handle_deletion_event(payload)
    else:
        handle_asset_event(payload)

    return jsonify({"status": "ok"}), 200

def handle_asset_event(asset_data):
    """Handle file-saved, file-processed, file-data-updated, addon-processed."""
    asset_id = asset_data["id"]
    status = asset_data.get("status")
    name = asset_data.get("name")
    content_type = asset_data.get("content_type")

    if status == "NOT_READY":
        # file-saved: asset uploaded but not yet processed
        print(f"New asset saved: {name} ({content_type})")
        # db.assets.insert(asset_id=asset_id, name=name, status="processing")
    elif status == "OK":
        # file-processed: all conversions complete
        conversions = asset_data.get("conversions", {})
        urls = asset_data.get("urls", {})
        available_formats = list(conversions.keys())
        print(f"Asset processed: {asset_id}, formats: {available_formats}")
        # db.assets.update(asset_id, status="ready", formats=available_formats, urls=urls)
    elif status == "ERROR":
        errors = asset_data.get("errors", {})
        print(f"Processing failed for {asset_id}: {errors}")
        # db.assets.update(asset_id, status="error", errors=errors)

def handle_deletion_event(payload):
    """Handle file-deleted and file-undeleted."""
    asset_id = payload["id"]
    event = payload["event"]
    status = payload.get("status")
    keys = payload.get("keys", [])

    if event == "file-deleted":
        print(f"Asset deleted: {asset_id}, keys: {keys}, status: {status}")
        # db.assets.mark_deleted(asset_id)
    elif event == "file-undeleted":
        print(f"Asset restored: {asset_id}, keys: {keys}")
        # db.assets.mark_active(asset_id)

if __name__ == "__main__":
    app.run(port=5000)
Node.js (Express)
const express = require("express");
const app = express();
app.use(express.json());

app.post("/webhooks/filespin", (req, res) => {
  const payload = req.body;
  const event = payload.event;

  if (event === "file-deleted" || event === "file-undeleted") {
    handleDeletionEvent(payload);
  } else {
    handleAssetEvent(payload);
  }

  res.status(200).json({ status: "ok" });
});

function handleAssetEvent(assetData) {
  const { id, status, name, content_type } = assetData;

  if (status === "NOT_READY") {
    console.log(`New asset saved: ${name} (${content_type})`);
    // Insert into database
  } else if (status === "OK") {
    const formats = Object.keys(assetData.conversions || {});
    console.log(`Asset processed: ${id}, formats: ${formats}`);
    // Update database with URLs and formats
  } else if (status === "ERROR") {
    console.error(`Processing failed: ${id}`, assetData.errors);
    // Log error, alert ops
  }
}

function handleDeletionEvent(payload) {
  const { id, event, keys, status } = payload;

  if (event === "file-deleted") {
    console.log(`Asset deleted: ${id}, keys: ${keys}`);
    // Soft delete from your system
  } else if (event === "file-undeleted") {
    console.log(`Asset restored: ${id}`);
    // Restore in your system
  }
}

app.listen(5000);
Retry behavior and delivery guarantees
- At-least-once delivery — Webhook callbacks are attempted up to 8 times if your endpoint does not respond with an HTTP 20x code (200, 201, or 202).
- Exponential backoff — Retries follow a back-off algorithm that spreads attempts over an 8-hour period.
- Idempotency — Your endpoint may receive the same webhook more than once. Use the asset `id` as a key to detect and handle duplicate deliveries.
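One way to implement the dedupe check, as a minimal in-memory sketch. The names `seen_deliveries` and `is_duplicate` are illustrative, and a real deployment would persist keys in a database unique constraint or a cache entry with a TTL rather than a process-local set.

```python
# In-memory store for illustration only; lost on restart.
seen_deliveries = set()

def is_duplicate(payload):
    """Dedupe on the asset id plus an event indicator.

    Standard events carry a "status" field; deletion events carry "event",
    so the pair distinguishes the different deliveries for one asset.
    """
    key = (payload.get("id"), payload.get("event") or payload.get("status"))
    if key in seen_deliveries:
        return True
    seen_deliveries.add(key)
    return False
```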
Reissuing callbacks
If your endpoint was down or missed callbacks, you can reissue them for a specific time range.
Via Dashboard
Navigate to Dashboard > Tools and use the reissue option.
Via API
Send a POST request with an ASSET_ADMIN role API key:
curl -X POST "https://app.filespin.io/api/v1/callbacks/reissue" \
  -H "X-FileSpin-Api-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "start_date": "2026-02-01T00:00:00Z",
    "end_date": "2026-02-07T23:59:59Z"
  }'
Constraints:
- Time range is limited to a maximum of 7 days
- Only assets belonging to the authenticated user are reissued
- Returns `202 Accepted` — the operation is asynchronous
Real-world pattern: syncing assets to your application database
Here's a common pattern that you can use to keep your application's database in sync with FileSpin using webhooks.
Python
import json

# Assumes `db` is your application's database handle exposing execute().

def sync_asset(payload):
    event = payload.get("event")

    if event in ("file-deleted", "file-undeleted"):
        asset_id = payload["id"]
        if event == "file-deleted":
            db.execute(
                "UPDATE assets SET status='deleted', deleted_at=NOW() WHERE id=%s",
                (asset_id,)
            )
        elif event == "file-undeleted":
            db.execute(
                "UPDATE assets SET status='active', deleted_at=NULL WHERE id=%s",
                (asset_id,)
            )
        return

    asset_id = payload.get("id")
    status = payload.get("status")

    if status == "NOT_READY":
        # file-saved
        db.execute(
            "INSERT INTO assets (id, name, content_type, status, created_at) "
            "VALUES (%s, %s, %s, 'processing', NOW()) "
            "ON DUPLICATE KEY UPDATE name=%s",
            (asset_id, payload["name"], payload["content_type"], payload["name"])
        )
    elif status == "OK":
        # file-processed
        conversions = payload.get("conversions", {})
        urls = payload.get("urls", {})
        addons_info = payload.get("addons_info", {})
        db.execute(
            "UPDATE assets SET status='ready', conversions=%s, "
            "urls=%s, addons_info=%s, updated_at=NOW() WHERE id=%s",
            (
                json.dumps(conversions),
                json.dumps(urls),
                json.dumps(addons_info),
                asset_id
            )
        )
Testing webhooks locally
During development, your webhook endpoint runs on localhost, which FileSpin can't reach. Use a tunneling tool to expose your local server:
Using ngrok
# Start your local webhook handler
python webhook_handler.py # Runs on port 5000
# In another terminal, create a tunnel
ngrok http 5000
ngrok gives you a public URL like https://abc123.ngrok.io. Configure this as your webhook URL in FileSpin settings.
Using cloudflared
cloudflared tunnel --url http://localhost:5000
Remember to update your webhook URL to your production endpoint before going live. Development tunnel URLs are temporary.
Best practices
- Return 200 quickly. Process webhook payloads asynchronously: accept the webhook, queue the work, and return 200 immediately. If your handler takes too long, the connection may time out and trigger unnecessary retries.
- Handle duplicate deliveries. Network issues can cause the same webhook to be delivered more than once. Use the `id` field to detect and skip duplicates.
- Log webhook payloads. Store the raw payload for debugging. When something goes wrong, having the original webhook data is invaluable.
- Monitor your endpoint. Track webhook delivery success rates. If your endpoint starts failing, you'll miss events and trigger retries.
- Use the reissue API for recovery. If your endpoint was down, reissue callbacks for the affected time range rather than polling for every asset.
- Differentiate by status. For standard events, use the `status` field (`NOT_READY`, `OK`, `ERROR`) to determine the event type rather than relying on separate event names in the payload.
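The "return 200 quickly" practice can be sketched with a simple in-process queue. The names here (`accept_webhook`, `process_payload`, `synced`) are illustrative, and the daemon thread is a stand-in for a real task queue such as Celery or RQ.

```python
import queue
import threading

work_queue = queue.Queue()
synced = []  # stand-in for your database writes

def process_payload(payload):
    """Placeholder for the slow work: DB writes, CDN warm-up, etc."""
    synced.append(payload["id"])

def worker():
    """Drain the queue off the HTTP request path."""
    while True:
        payload = work_queue.get()
        if payload is None:  # shutdown sentinel
            work_queue.task_done()
            break
        process_payload(payload)
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def accept_webhook(payload):
    """Called from the HTTP route: enqueue and acknowledge immediately."""
    work_queue.put(payload)
    return 200  # FileSpin sees a fast 20x and does not retry
```

The HTTP handler does nothing but enqueue, so the response goes out in microseconds regardless of how long the downstream processing takes.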