BunBase Docs Alpha v0.1.0

Storage

client.storage handles file upload, download URLs, and deletion.

const file = input.files[0]; // File | Blob
const record = await client.storage.upload(file, {
  bucket: "avatars",    // optional — target bucket (default: "default")
  isPublic: true,
  collection: "posts",  // optional — link to a collection
  recordId: post._id,   // optional — link to a specific record
  onProgress: (p) => console.log(`${Math.round(p * 100)}%`), // browser/RN only
});
console.log(record.id); // file ID
console.log(record.filename); // original filename, e.g. "photo.jpg"
console.log(record.mime_type); // "image/jpeg"
console.log(record.size); // bytes
console.log(record.is_public); // true

Pass record.filename as the second argument to downloadUrl to include the original file extension in the URL. This helps browsers and CDNs identify the file type without relying on the Content-Type header alone.

// Without filename — works, but the URL has no extension
const bareUrl = client.storage.downloadUrl(record.id);
// "https://your-server.com/api/v1/storage/01JKX..."

// With filename — extension-aware URL
const namedUrl = client.storage.downloadUrl(record.id, record.filename);
// "https://your-server.com/api/v1/storage/01JKX.../photo.jpg"

Use directly in markup:

<img src={client.storage.downloadUrl(post.image_id, post.image_filename)} alt="Cover" />

Private files: Browsers cannot send Authorization headers via <img src>. For private files, generate a signed URL or proxy through your own endpoint.
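As an illustration, a small helper (hypothetical, not part of the SDK; the `/api/v1/storage/` path is taken from the URLs above, and the Bearer header scheme is an assumption) can build the authenticated request, which browser code then turns into an object URL:

```typescript
// Sketch: build the request for a private file download.
// The Bearer token scheme is an assumption — adjust to your auth setup.
function privateDownloadRequest(baseUrl: string, fileId: string, token: string) {
  return {
    url: `${baseUrl}/api/v1/storage/${fileId}`,
    headers: { Authorization: `Bearer ${token}` },
  };
}

// In the browser:
// const { url, headers } = privateDownloadRequest(server, file.id, token);
// const blob = await (await fetch(url, { headers })).blob();
// img.src = URL.createObjectURL(blob); // revoke with URL.revokeObjectURL when done
```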

Append query parameters to any image download URL to resize, crop, or convert on the fly. BunBase processes the transform server-side and caches the result — subsequent requests for the same transform are served from cache.

const base = client.storage.downloadUrl(record.id, record.filename);
// Resize to 400px wide, keep aspect ratio
const thumb = `${base}?w=400`;
// Crop to exact 200×200 square thumbnail, convert to WebP at quality 80
const avatar = `${base}?w=200&h=200&fit=cover&format=webp&q=80`;
// Convert to WebP without resizing
const webp = `${base}?format=webp`;

In JSX:

// Responsive image — 800px wide, original format
<img src={`${client.storage.downloadUrl(post.cover_id, post.cover_filename)}?w=800`} alt="Cover" />

// Avatar — 64×64 crop
<img src={`${client.storage.downloadUrl(user.avatar_id)}?w=64&h=64&fit=cover&format=webp`} alt="Avatar" />
| Parameter | Description | Example |
| --- | --- | --- |
| `w` | Output width in pixels | `?w=400` |
| `h` | Output height in pixels | `?h=300` |
| `fit` | `cover` (crop to exact size) or `contain` (letterbox) | `?w=400&h=300&fit=cover` |
| `format` | Convert to `webp`, `jpeg`, or `png` | `?format=webp` |
| `q` | Quality for JPEG/WebP, 1–100 (default 85) | `?q=75` |

Only image files are transformed (image/jpeg, image/png, image/webp, image/gif, image/avif). Non-image files are served as-is.
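A small convenience wrapper (hypothetical, not provided by the SDK) can build these query strings from an options object instead of hand-writing template literals:

```typescript
// Sketch: append the documented transform parameters (w, h, fit, format, q)
// to a download URL. Undefined options are simply omitted.
interface TransformOptions {
  w?: number;
  h?: number;
  fit?: "cover" | "contain";
  format?: "webp" | "jpeg" | "png";
  q?: number; // 1–100, default 85
}

function withTransform(url: string, opts: TransformOptions): string {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(opts)) {
    if (value !== undefined) params.set(key, String(value));
  }
  const query = params.toString();
  return query ? `${url}?${query}` : url;
}
```

For example, `withTransform(base, { w: 200, h: 200, fit: "cover" })` produces the same URL as the avatar template literal above.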

Every uploaded file returns a FileRecord with the following fields:

interface FileRecord {
  id: string;                 // unique file ID used in all API calls
  key: string;                // storage path (e.g. "avatars/01JKX....jpg")
  bucket: string;             // bucket name
  filename: string | null;    // original filename from the upload
  mime_type: string;          // detected MIME type (e.g. "image/jpeg")
  size: number;               // file size in bytes
  is_public: boolean;         // whether the file is accessible without auth
  collection: string | null;  // linked collection, if set at upload time
  record_id: string | null;   // linked record ID, if set at upload time
  created_at: number;         // Unix timestamp (ms)
}

Display helpers:

// Human-readable file size
function formatSize(bytes: number): string {
  if (bytes < 1024) return `${bytes} B`;
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
  return `${(bytes / 1024 / 1024).toFixed(2)} MB`;
}
console.log(formatSize(record.size)); // "2.34 MB"
// Check if transformable
const isImage = record.mime_type.startsWith("image/");

For large files, skip the BunBase proxy and upload directly to the storage backend. This works for both local and S3 providers — the SDK handles the difference automatically.

// One-call convenience method (recommended)
const record = await client.storage.signedUploadFile(file, {
  bucket: "avatars",
  isPublic: true,
  expiresIn: 300,
});
console.log(record.id); // registered FileRecord

Manual flow if you need more control:

// 1. Get the presigned PUT URL
const { url, key } = await client.storage.signedUpload("photo.jpg", {
  bucket: "avatars",
  isPublic: true,
  contentType: file.type,
  expiresIn: 300,
});

// 2. PUT the file directly to the storage backend (local or S3)
const res = await fetch(url, {
  method: "PUT",
  body: file,
  headers: { "Content-Type": file.type },
});
if (!res.ok) throw new Error(`Upload failed: ${res.status}`);

let record;
if (res.status === 201) {
  // 3a. Local provider: the PUT itself returns the FileRecord (201)
  record = await res.json();
} else {
  // 3b. S3 provider: confirm the upload with BunBase
  record = await client.storage.confirmUpload({
    key,
    bucket: "avatars",
    filename: "photo.jpg",
    mimeType: file.type,
    size: file.size,
    isPublic: true,
  });
}
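The post-PUT branching can be captured in one place. A sketch mirroring the documented behavior (local provider: 201 with the FileRecord body; S3 provider: another 2xx status, then confirmUpload):

```typescript
// Sketch: decide the next step from the PUT response status.
// "use-body"  — local provider, parse the FileRecord from the response
// "confirm"   — S3 provider, call confirmUpload with the file metadata
// "error"     — the upload itself failed
function uploadResultAction(status: number): "use-body" | "confirm" | "error" {
  if (status === 201) return "use-body";
  if (status >= 200 && status < 300) return "confirm";
  return "error";
}
```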
// List files owned by the current user
const files = await client.storage.list(); // FileRecord[]

// Delete a file
await client.storage.delete(record.id);
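list() and delete(id) compose naturally for bulk cleanup. A hypothetical helper (not part of the SDK) that removes every file the current user owns in a given bucket; it accepts any object with the same list/delete shape as client.storage:

```typescript
// Minimal shape shared with client.storage — only the fields this helper needs.
interface StorageLike {
  list(): Promise<{ id: string; bucket: string }[]>;
  delete(id: string): Promise<void>;
}

// Delete all of the current user's files in `bucket`; returns the count deleted.
async function deleteBucketFiles(storage: StorageLike, bucket: string): Promise<number> {
  const files = await storage.list();
  const targets = files.filter((f) => f.bucket === bucket);
  await Promise.all(targets.map((f) => storage.delete(f.id)));
  return targets.length;
}
```

Usage: `await deleteBucketFiles(client.storage, "avatars")`.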
| Method | Description |
| --- | --- |
| `upload(file, options?)` | Upload via BunBase (multipart, works for any file size) |
| `signedUploadFile(file, options?)` | Full signed upload flow in one call (local + S3) |
| `signedUpload(filename, options?)` | Get a pre-signed PUT URL (manual flow) |
| `confirmUpload(options)` | Register metadata after an S3 presigned PUT |
| `downloadUrl(id, filename?)` | Build the download URL |
| `list()` | List files owned by the current user |
| `delete(id)` | Delete a file |