I Went Silent for Two Weeks — And Built a File Converter That's Not Just Vibe Coded
Been a minute. 👋
If you've been following along, my last post was about VoiceOtaku — that fun little AI anime hotline I built after getting ghosted one too many times. That was April 24th. And then... silence.

So what happened?
The Pivot
I'll be honest — the job hunt has been... a journey. If you read my "Resetting Myself" post, you already know the vibe. Applications, interviews, ghosting, repeat. The market is brutal right now, especially here in the Philippines.
But somewhere between the rejections and the podcast binges, something shifted.
I started listening to solo founder stories. You know the ones — developers who built a SaaS product on their own, grew it to real revenue, and eventually quit their 9-to-5. Starter Story on YouTube became my late-night fuel. People shipping real products, solving real problems, and building real businesses. No corporate ladder required.

And I thought: Maybe the 8-to-5 job isn't the only path. Maybe I should start building my own backup plan. Not just fun side projects (though I love those), but something with actual use. Something people would pay for. Something that scratches my itch for building real systems.
So I took two weeks and stepped up my game.
The Elephant in the Room: Vibe Coding
Let's address the elephant. Vibe coding and agentic coding are everywhere right now. Open X, and it feels like anyone with a weekend and an AI assistant is shipping a full app.
And honestly? Good for them. AI has made it incredibly easy to go from idea to working prototype. That's powerful.
But here's where I draw the line:
Vibe coding is not replacing engineers.
It just isn't. Can vibe coders ship a web app? Absolutely. A simple CRUD dashboard? Sure. A weekend hackathon project? Done before lunch.
But what happens when that app needs to handle untrusted user uploads? When you need sandboxed workers that can't phone home? When you need rate limiting that doesn't double-charge people? When you need audit logs, timing-safe key comparisons, and egress firewalls on every worker host?

That's not vibe coding territory. That's engineering territory.
I use AI to ship faster — no shame in that. But AI was billed as an assistant, not a replacement, and there's a reason for that. A real engineer uses AI as a multiplier. You bring the system design, the security model, the operational rigor — and AI helps you implement it faster. The thinking still has to come from you.
This isn't gatekeeping. It's a personal standard. And my latest project is where I chose to draw that line.
Enter FileForge
Meet FileForge — a self-hosted file converter for images, audio, video, documents, and archives.

At first glance, it's just a converter. You upload a file, pick a format, download the result. Simple, right?
That's the point. The simplicity is the surface. What's underneath is what matters.
FileForge converts between 50+ format pairs — PNG, JPEG, WebP, MP3, WAV, FLAC, OGG, MP4, WebM, AVI, PDF, DOCX, ODT, TXT, HTML, Markdown, ZIP, 7z, TAR, and more. But that's not the story I want to tell. The story is how it converts them.
Because FileForge isn't just a wrapper around ffmpeg. It's a carefully designed system that treats security, isolation, scalability, and privacy not as afterthoughts, but as first-class concerns.
Under the Hood
This is the part where I show you why FileForge is not a vibe-coded app.
Hub-and-Worker Architecture
FileForge doesn't run everything on one server. It uses a hub-and-worker topology:
- A hub host runs the API, the web app, Redis queues, and observability
- Worker hosts run isolated Docker containers for each format type — one for images, one for audio, one for video, one for documents, one for archives
Workers connect back to the hub over a private LAN. They authenticate via a token exchange handshake on boot, receive time-limited Redis credentials, and heartbeat every 5 seconds. If a token is revoked, the worker is immediately locked out. No static credentials sitting in config files.
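For a flavor of what that looks like from the worker's side, here's a minimal TypeScript sketch. The endpoint paths, payload shapes, and helper names are my own illustrations, not FileForge's actual internal API:

```typescript
// Sketch of the worker's boot handshake and heartbeat loop.
// Endpoint paths, payload shapes, and helper names are illustrative.
const HUB = process.env.HUB_URL ?? "http://10.0.0.1:8080";

interface RedisLease {
  username: string;
  password: string;
  expiresAt: number; // epoch ms; the worker must re-handshake before this
}

// Trade the one-time boot token for short-lived Redis credentials.
async function exchangeToken(bootToken: string): Promise<RedisLease> {
  const res = await fetch(`${HUB}/internal/handshake`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ token: bootToken, workerType: "image" }),
  });
  if (!res.ok) throw new Error(`handshake rejected: ${res.status}`);
  return (await res.json()) as RedisLease;
}

// Heartbeat every 5 seconds; a 401 means the hub revoked this worker.
function startHeartbeat(workerId: string): NodeJS.Timeout {
  return setInterval(async () => {
    const res = await fetch(`${HUB}/internal/heartbeat`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ workerId, ts: Date.now() }),
    });
    if (res.status === 401) process.exit(1); // locked out: stop taking jobs
  }, 5_000);
}
```

Nothing long-lived sits on the worker: the boot token is exchanged once, the Redis lease expires on its own, and a revoked worker finds out within one heartbeat.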
Presigned URL Isolation
Here's a detail I'm particularly proud of: workers never hold S3 credentials.
When a conversion job is enqueued, the API pre-signs two URLs — one for downloading the input file, one for uploading the output — and embeds them directly into the BullMQ job payload. The worker picks up the job, downloads from the presigned GET URL, converts, and uploads to the presigned PUT URL. It never talks to MinIO directly. It never authenticates to storage. The API handles all of that.
This means even if a worker is compromised, it can only access the single file for its current job. Nothing else.
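In code, the enqueue step looks roughly like this. A minimal sketch using the minio client and a BullMQ queue; the endpoint, bucket, and queue names are made up for illustration:

```typescript
import { Queue } from "bullmq";
import * as Minio from "minio";

// Illustrative setup: endpoint, bucket, and queue names are examples.
const s3 = new Minio.Client({
  endPoint: "minio.internal",
  port: 9000,
  useSSL: false,
  accessKey: process.env.MINIO_ACCESS_KEY!,
  secretKey: process.env.MINIO_SECRET_KEY!,
});
const convertQueue = new Queue("convert", {
  connection: { host: "redis.internal", port: 6379 },
});

async function enqueueConversion(jobId: string, inputKey: string, outputKey: string) {
  const TTL = 15 * 60; // both presigned URLs expire in 15 minutes
  // The API is the only component that authenticates to MinIO.
  // The worker receives these two URLs and nothing else.
  const downloadUrl = await s3.presignedGetObject("uploads", inputKey, TTL);
  const uploadUrl = await s3.presignedPutObject("outputs", outputKey, TTL);
  await convertQueue.add("convert", { jobId, downloadUrl, uploadUrl });
}
```

The short expiry matters too: even a leaked URL goes stale in 15 minutes.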
Sandboxed by Design
Every conversion runs in a locked-down Docker container:
- read_only: true root filesystem
- tmpfs at /tmp only (sizes calibrated per worker type)
- cap_drop: [ALL] with no-new-privileges: true
- pids_limit: 256 to prevent fork bombs
- Non-root user (forge, uid 10001)
- dumb-init as PID 1 for proper signal handling
- Host-side nftables egress lockdown — workers can't reach the internet, only the hub over the LAN
That last one is important. Each worker host has a firewall rule that drops all outbound traffic except to the hub's private IP, loopback, and established connections. Even if a worker container escapes its Docker restrictions, it cannot exfiltrate data.
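If you've never seen a container locked down like this, the compose definition for a worker looks roughly like the following sketch. The image name, tmpfs size, and entrypoint are illustrative, not FileForge's real config:

```yaml
# Illustrative worker service; names and sizes are examples.
services:
  image-worker:
    image: fileforge/image-worker:latest
    read_only: true                  # immutable root filesystem
    tmpfs:
      - /tmp:size=512m               # the only writable path, sized per worker type
    cap_drop: [ALL]                  # drop every Linux capability
    security_opt:
      - no-new-privileges:true       # setuid binaries can't escalate
    pids_limit: 256                  # fork-bomb ceiling
    user: "10001:10001"              # the non-root forge user
    entrypoint: ["dumb-init", "--", "node", "dist/worker.js"]  # proper PID 1
```

Note that the nftables rules live on the host, outside the container entirely, which is exactly why a container escape alone doesn't buy an attacker a network path.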

Defense in Depth
Security isn't one wall — it's layers:
- Magic-byte verification at upload time, re-verified by workers before processing
- ClamAV scanning on every file during upload finalization
- Zip-bomb defense in the archive worker: 4 GB extracted-bytes cap, 50,000 file cap, symlink rejection
- Timing-safe key comparison for admin tokens and API keys (timingSafeEqual, so there's no timing leak; a sketch follows this list)
- Dual CORS policy: the browser-facing API (/api) has CORS enabled; the public API (/v1) has no CORS at all — API keys should never be embedded in front-end JavaScript, and the protocol enforces that
- Row-Level Security on all user-facing Postgres tables via Supabase
- Audit logging for every admin operation and user deletion
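The timing-safe comparison deserves a quick illustration, because a naive === check fails one character at a time and leaks how much of the key an attacker has guessed. A minimal sketch using Node's built-in crypto; the hashing wrapper is my own, not necessarily FileForge's exact code:

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Compare a presented key against the stored one in constant time.
// Hashing both sides first guarantees equal-length buffers, which
// timingSafeEqual requires, and avoids branching on length at all.
function safeKeyCompare(presented: string, expected: string): boolean {
  const a = createHash("sha256").update(presented).digest();
  const b = createHash("sha256").update(expected).digest();
  return timingSafeEqual(a, b);
}
```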
Production Concerns From Day One
This wasn't built and then "made production-ready." It was designed for production:
- Rate limiting with per-IP, per-user, and per-API-key buckets — and anti-double-charge logic so API-key requests don't get hit twice (see the sketch after this list)
- Observability: Prometheus metrics with bounded cardinality, Grafana dashboards, Loki log aggregation, Pino structured logging
- Maintenance mode with admin bypass — flip a flag, users get 503, admins keep testing
- Idempotent cleanup cron with Redis distributed locks (future-proofed for multi-replica deployments)
- ZFS-aware backup script that refuses to write into a directory unless it's actually a ZFS mountpoint
- LibreOffice per-job user profiles — each conversion gets its own -env:UserInstallation=file:///tmp/<uuid> to avoid lockfile races under a read-only rootfs
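On the anti-double-charge point: the trick is that every request gets debited from exactly one bucket, chosen by priority. A sketch of the idea as Express middleware; consume and getUserId are hypothetical stand-ins for the real helpers:

```typescript
import type { NextFunction, Request, Response } from "express";

// Hypothetical helpers standing in for the real implementations:
// consume() takes one token from the named bucket, false when empty;
// getUserId() reads the user set by an earlier auth middleware.
declare function consume(bucket: string, id: string): Promise<boolean>;
declare function getUserId(req: Request): string | undefined;

// Debit exactly ONE bucket per request, in priority order, so an
// API-key request is never also charged against the per-IP bucket.
async function rateLimit(req: Request, res: Response, next: NextFunction) {
  const apiKey = req.header("authorization")?.replace(/^Bearer\s+/i, "");
  const userId = getUserId(req);

  const [bucket, id] = apiKey
    ? ["api-key", apiKey]
    : userId
      ? ["user", userId]
      : ["ip", req.ip ?? "unknown"];

  if (await consume(bucket, id)) return next();
  res.status(429).json({ error: "rate_limited" });
}
```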

Take the LibreOffice example. In a normal setup — your laptop, a basic Docker container — converting a document works fine every time. But run that same conversion inside a locked-down sandbox where the filesystem is read-only, temporary storage is volatile, and two people convert a file at the exact same millisecond? It breaks silently. Lockfiles collide. Jobs hang. And you'd never know until real users hit it at the same time. I only found it because I designed for isolation from the start — and that's the point. Vibe coding gets you to "it works on my machine." Engineering gets you to "it works under attack."
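Concretely, the fix is a throwaway profile per job. A sketch of the spawn call, assuming the document worker shells out to soffice from Node; the paths and function name are illustrative:

```typescript
import { spawn } from "node:child_process";
import { randomUUID } from "node:crypto";

// Each conversion gets its own LibreOffice user profile under tmpfs,
// so two concurrent jobs can never collide on the same lockfile.
function convertToPdf(inputPath: string, outDir: string) {
  const profile = `file:///tmp/lo-profile-${randomUUID()}`;
  return spawn("soffice", [
    `-env:UserInstallation=${profile}`, // per-job profile: no lockfile races
    "--headless",
    "--convert-to", "pdf",
    "--outdir", outDir,
    inputPath,
  ]);
}
```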
The Full-Stack Flex
Here's what I mean when I say I own FileForge end to end:
- Frontend: Next.js 14 (App Router), TypeScript, Tailwind CSS, shadcn/ui, TanStack React Query
- Backend API: Express + TypeScript (ESM), BullMQ job queues on Redis
- Auth & Database: Supabase (Postgres + Auth with Row-Level Security)
- Storage: MinIO (private, presigned URLs, files auto-delete after 24h/7d)
- Workers: Docker containers per format type, deployed on Proxmox hosts with Ansible
- Observability: Prometheus, Grafana, Loki — all self-hosted
- Docs: VitePress, because of course the docs are also self-hosted

Everything runs on hardware I can walk to. On Proxmox hosts I built myself. In a room I pay the electricity bill for. When AWS goes down (and it does — I was online during the 2025 outages), my stuff stays up.
This is what I meant in my homelab post. This is the resume in executable form.
FileForge exercises every domain I've built my career around — frontend, backend, database, infrastructure, security, self-hosting. From the first pnpm init to production traffic on metal I own. No cloud vendor hand-holding. No "it works on my machine." It works on my machines, and I can prove it.
Try It
FileForge is live and free to use:
→ FileForge — Convert anything, on metal you can see.
→ Docs — Architecture, API reference, security model, the whole thing.
Free tier: 5 conversions per day, forever. No strings attached.
Forge Pro: Unlimited conversions, 500 MB files, 7-day retention, bulk uploads (up to 20 files at once), API access with personal ff_live_... keys, and metadata preservation. $8/month.

What's Next
The MVP is solid and in production, but FileForge isn't done. The roadmap includes:
- More format pairs and presets
- Payment system implementation — real Stripe integration for Forge Pro subscriptions, no more manual awkwardness
- MCP (Model Context Protocol) on top of the API — so AI agents can discover and call FileForge conversions directly, turning the app into a tool any AI workflow can use
- x402 payment implementation — HTTP 402 as a first-class payment primitive. Pay-per-conversion without ever creating an account. Swipe your key, get your file
- x402 Bazaar discovery — a marketplace where FileForge registers its paid endpoints, and any x402-aware client can discover and pay for conversions automatically. No signup, no billing dashboard, just protocol-level payments
- Webhooks for API users (get notified when conversions finish)
- Team accounts for organizations
- Off-site backup (current backups are local ZFS snapshots — off-site is V2)
If there's a format you need that's missing, or a feature that would make your workflow smoother, hit me up. I'm literally one person and I read every message.
The Bigger Picture
Two weeks ago, I was on the couch listening to solo founder stories, wondering if I could ever do that. Today, I'm running a SaaS product on my own hardware, built from the ground up, securing untrusted file uploads by design.
I hope someday I can also be featured on Starter Story with FileForge — the same kind of stories that inspired this pivot in the first place. If those founders can do it, why can't I? Why can't any of us?
The line between a vibe-coded app and a carefully designed system isn't about gatekeeping. It's about caring what happens when things go wrong. Because things always go wrong. And when they do, I'd rather have a sandboxed worker with no network access and a presigned URL that expires in 15 minutes than a sudo prompt and a prayer.
One last thing. I want to give a shoutout to the last project manager I worked with — you know who you are. One of her key points stuck with me more than she probably intended: file upload is the most vulnerable attack vector for this kind of system — not just code injection. That single sentence shaped every security decision in FileForge. Magic-byte verification, ClamAV scanning, sandboxed workers with no network access, presigned URLs so workers never hold credentials, zip-bomb caps — none of that was an afterthought. It was the first thing I planned. No corners cut. No expense spared.
So yeah. This is FileForge. Not vibe coded. Engineered.

Until next time! 🚀