# Storage

`client.storage` handles file upload, download URLs, and deletion.
## Upload a file

```ts
const file = input.files[0]; // File | Blob

const record = await client.storage.upload(file, {
  bucket: "avatars",   // optional — target bucket (default: "default")
  isPublic: true,
  collection: "posts", // optional — link to a collection
  recordId: post._id,  // optional — link to a specific record
  onProgress: (p) => console.log(`${Math.round(p * 100)}%`), // browser/RN only
});
```
```ts
console.log(record.id);        // file ID
console.log(record.filename);  // original filename, e.g. "photo.jpg"
console.log(record.mime_type); // "image/jpeg"
console.log(record.size);      // bytes
console.log(record.is_public); // true
```

## Download URL
Pass `record.filename` to include the original file extension in the URL. This helps browsers and CDNs identify the file type without relying on the `Content-Type` header alone.
```ts
// Without filename — works but URL has no extension
const url = client.storage.downloadUrl(record.id);
// "https://your-server.com/api/v1/storage/01JKX..."
```

```ts
// With filename — extension-aware URL
const url = client.storage.downloadUrl(record.id, record.filename);
// "https://your-server.com/api/v1/storage/01JKX.../photo.jpg"
```

Use directly in markup:
```tsx
<img src={client.storage.downloadUrl(post.image_id, post.image_filename)} alt="Cover" />
```

> **Private files:** Browsers cannot send `Authorization` headers via `<img src>`. For private files, generate a signed URL or proxy through your own endpoint.
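One way to handle the private case is a tiny proxy route: your server fetches the file with its own credentials and streams the bytes back, so the browser never needs to attach a header. The sketch below is not part of the SDK; the server URL, the bearer token, and the injectable fetcher are assumptions for illustration.

```typescript
// Hypothetical proxy helper: fetch a private file server-side with a bearer
// token, then return a Response the browser can consume from a plain <img src>.
// The fetcher parameter exists only so the helper is easy to unit-test with a stub.
async function proxyPrivateFile(
  fileId: string,
  token: string,
  fetcher: typeof fetch = fetch,
): Promise<Response> {
  const upstream = await fetcher(
    `https://your-server.com/api/v1/storage/${encodeURIComponent(fileId)}`,
    { headers: { Authorization: `Bearer ${token}` } },
  );
  if (!upstream.ok) return new Response("Not found", { status: 404 });
  return new Response(upstream.body, {
    status: 200,
    headers: {
      "Content-Type": upstream.headers.get("Content-Type") ?? "application/octet-stream",
      // Private content: keep shared caches from storing it
      "Cache-Control": "private, max-age=0",
    },
  });
}
```

Wire the helper into whatever route handler your framework uses, keyed by the file ID in the path.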
## Image transforms

Append query parameters to any image download URL to resize, crop, or convert on the fly. BunBase processes the transform server-side and caches the result — subsequent requests for the same transform are served from cache.
```ts
const base = client.storage.downloadUrl(record.id, record.filename);

// Resize to 400px wide, keep aspect ratio
const thumb = `${base}?w=400`;

// Crop to exact 200×200 square thumbnail, convert to WebP at quality 80
const avatar = `${base}?w=200&h=200&fit=cover&format=webp&q=80`;

// Convert to WebP without resizing
const webp = `${base}?format=webp`;
```

In JSX:
```tsx
// Responsive image — 800px wide, original format
<img src={`${client.storage.downloadUrl(post.cover_id, post.cover_filename)}?w=800`} alt="Cover" />

// Avatar — 64×64 crop
<img src={`${client.storage.downloadUrl(user.avatar_id)}?w=64&h=64&fit=cover&format=webp`} alt="Avatar" />
```

### Transform parameters
Section titled “Transform parameters”| Parameter | Description | Example |
|---|---|---|
w | Output width in pixels | ?w=400 |
h | Output height in pixels | ?h=300 |
fit | cover (crop to exact size) or contain (letterbox) | ?w=400&h=300&fit=cover |
format | Convert to webp, jpeg, or png | ?format=webp |
q | Quality for JPEG/WebP, 1–100 (default 85) | ?q=75 |
Only image files are transformed (`image/jpeg`, `image/png`, `image/webp`, `image/gif`, `image/avif`). Non-image files are served as-is.
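When URLs are built generically, it can help to append transform parameters only for MIME types the server will actually transform. The helper below is a sketch, not part of the SDK; it simply mirrors the parameter table above.

```typescript
// Hypothetical helper (not part of the SDK): append transform parameters only
// for MIME types the server transforms; everything else passes through untouched.
const TRANSFORMABLE = new Set([
  "image/jpeg", "image/png", "image/webp", "image/gif", "image/avif",
]);

interface TransformParams {
  w?: number;
  h?: number;
  fit?: "cover" | "contain";
  format?: "webp" | "jpeg" | "png";
  q?: number; // 1–100
}

function transformUrl(url: string, mimeType: string, params: TransformParams): string {
  if (!TRANSFORMABLE.has(mimeType)) return url; // e.g. PDFs are served as-is
  const qs = Object.entries(params)
    .filter(([, v]) => v !== undefined)
    .map(([k, v]) => `${k}=${v}`)
    .join("&");
  return qs ? `${url}?${qs}` : url;
}

transformUrl("/storage/01JKX/photo.jpg", "image/jpeg", { w: 400, format: "webp" });
// → "/storage/01JKX/photo.jpg?w=400&format=webp"
```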
## File metadata

Every uploaded file returns a `FileRecord` with the following fields:

```ts
interface FileRecord {
  id: string;                // unique file ID used in all API calls
  key: string;               // storage path (e.g. "avatars/01JKX....jpg")
  bucket: string;            // bucket name
  filename: string | null;   // original filename from the upload
  mime_type: string;         // detected MIME type (e.g. "image/jpeg")
  size: number;              // file size in bytes
  is_public: boolean;        // whether the file is accessible without auth
  collection: string | null; // linked collection, if set at upload time
  record_id: string | null;  // linked record ID, if set at upload time
  created_at: number;        // Unix timestamp (ms)
}
```

Display helpers:
```ts
// Human-readable file size
function formatSize(bytes: number): string {
  if (bytes < 1024) return `${bytes} B`;
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
  return `${(bytes / 1024 / 1024).toFixed(2)} MB`;
}

console.log(formatSize(record.size)); // "2.34 MB"

// Check if transformable
const isImage = record.mime_type.startsWith("image/");
```

## Signed upload (direct to backend)
For large files, skip the BunBase proxy and upload directly to the storage backend. This works for both local and S3 providers — the SDK handles the difference automatically.
```ts
// One-call convenience method (recommended)
const record = await client.storage.signedUploadFile(file, {
  bucket: "avatars",
  isPublic: true,
  expiresIn: 300,
});

console.log(record.id); // registered FileRecord
```

Manual flow, if you need more control:
```ts
// 1. Get the presigned PUT URL
const { url, key } = await client.storage.signedUpload("photo.jpg", {
  bucket: "avatars",
  isPublic: true,
  contentType: file.type,
  expiresIn: 300,
});

// 2. PUT the file directly to the storage backend (local or S3)
const res = await fetch(url, {
  method: "PUT",
  body: file,
  headers: { "Content-Type": file.type },
});

// 3a. Local provider: PUT returns FileRecord directly (201)
if (res.status === 201) {
  const record = await res.json();
}

// 3b. S3 provider: confirm the upload with BunBase
if (res.ok && res.status !== 201) {
  const record = await client.storage.confirmUpload({
    key,
    bucket: "avatars",
    filename: "photo.jpg",
    mimeType: file.type,
    size: file.size,
    isPublic: true,
  });
}
```

## List files
```ts
const files = await client.storage.list();
// FileRecord[] — files owned by the current user
```

## Delete a file
```ts
await client.storage.delete(record.id);
```

## API surface
Section titled “API surface”| Method | Description |
|---|---|
upload(file, options?) | Upload via BunBase (multipart, works for any file size) |
signedUploadFile(file, options?) | Full signed upload flow in one call (local + S3) |
signedUpload(filename, options?) | Get a pre-signed PUT URL (manual flow) |
confirmUpload(options) | Register metadata after an S3 presigned PUT |
downloadUrl(id, filename?) | Build the download URL |
list() | List files owned by current user |
delete(id) | Delete a file |
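The methods above compose. For example, `list()` and `delete(id)` can be combined to remove every file linked to a record before deleting the record itself. The helper below is hypothetical; the storage client is passed in as a narrow `StorageLike` interface so the logic stays easy to test.

```typescript
// Minimal slice of the storage client this helper needs.
interface StorageLike {
  list(): Promise<{ id: string; record_id: string | null }[]>;
  delete(id: string): Promise<void>;
}

// Hypothetical helper: delete every file linked to a record and return
// how many were removed. Deletions run concurrently via Promise.all.
async function deleteRecordFiles(storage: StorageLike, recordId: string): Promise<number> {
  const files = await storage.list();
  const linked = files.filter((f) => f.record_id === recordId);
  await Promise.all(linked.map((f) => storage.delete(f.id)));
  return linked.length;
}

// Usage: await deleteRecordFiles(client.storage, post._id);
```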