KILN

Studio KILN started the way a lot of real projects start, with a clear taste for what the thing should feel like, and almost no certainty about how to make it real. There was no software or web development background at the beginning, and there definitely was not a clean season of “learning first” followed by a clean season of “building later.” Most of what exists now got learned in motion, while shipping AI-enabled software at Rust Automation and Controls, and while trying to build a custom studio site that refused to fit inside a normal template.
Early on, the stack looked like a wall. Frontend, backend, deploy, DNS, SSL, email, CMS, storage, AWS, all the words were familiar enough to be intimidating, and vague enough to feel like separate universes. The pivot was not some sudden mastery moment, it was a smaller, more useful realization, a person does not need to pre-understand everything to start, because understanding shows up fastest when the project forces it.
AI did not remove the complexity in that wall, and it did not magically turn the work into a straight line. What it changed was the dead time, the long stretches where being stuck turns into quitting, or into hours of wandering through docs without knowing what to look for. AI turned “I’m stuck” into “I have a next move,” and after enough repetitions, those next moves add up to a working system.
This post is a field report. It is not a tutorial for one perfect stack, and it is not a pitch that says anyone can build anything instantly. It is a record of the actual choices, the actual tradeoffs, the parts that broke, the parts that held, and the way the stack stopped being abstract once it started having consequences.
The thesis is simple, and it is meant to be repeatable. Even when the stack seems daunting, a person can still just do things, learning can happen while building, and AI makes that path more accessible than most people assume.
Vibe coding in practice
“Vibe coding” is a slippery phrase, because it can mean anything from “use AI to write everything” to “wing it until something works.” In this build it ended up meaning something tighter, and honestly more demanding.
The builder still decided what the product is, what “good” looks like, what the design language should feel like, and what counts as finished. AI helped, sometimes a lot, but the work never became hands-off, because the system still needed to run, ship, and survive real usage, and that responsibility cannot be outsourced.
The loop was simple and kind of brutal: build a small slice, break it, ask for help in context, patch the slice, then repeat until the system starts to feel readable. A “small slice” mattered because vague goals create vague code. A small slice is concrete, a route that fetches content, a page section that animates, a form that posts to an endpoint, a build that has to succeed on deploy. Once a slice exists, it can be tested, it can fail, and it can teach.
The surprise was that fear did not disappear first. Fear stayed, especially when touching infrastructure, or when pushing a deploy that could break the live site, or when trying to set up email correctly and realizing deliverability has rules you cannot negotiate with. Competence showed up later, and it showed up the way it usually does, as repetition, as pattern recognition, as “oh, this error is that kind of problem,” as a growing sense that the stack has handles.
Why Studio KILN had to be custom
Studio KILN was never meant to be “just a site.” It needed to function as a studio presence, a portfolio layer, a publishing layer, a storefront for digital products, and a long-term container that can expand without a rebuild every time something new is added.
That scope matters, but the bigger pressure came from one non-negotiable requirement: the site could not feel like another SaaS website.
Templates can get a person online quickly, and that is often the right move, but templates also come with a design language that leaks into everything, even after a new logo and new colors get added. For Studio KILN, the goal was authored motion, atmosphere, and a visual identity that feels specific, like it was built on purpose rather than selected from a menu.
One concrete example shows how “taste” turns into “tech.” The site needed a slow, controlled scroll rhythm that feels more like moving through a space than moving down a page, with sections that arrive on timing rather than on default browser momentum, and with transitions that hold for half a beat so the eye can actually register what changed. That kind of experience is hard to fake with template widgets and stock animations. It pushes the build toward deliberate motion tooling, smooth-scroll coordination, and an actual system for sequencing, because otherwise everything collapses into the same generic bounce and fade.
That requirement forces better tooling and better taste preservation. It also forces the builder into the stack, because control has a price. The price is learning the edges, build systems, deployment, DNS, SSL, email, a CMS that feeds a custom frontend, storage that does not live in the repo, and the basic reality that the internet is a machine with rules.
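To make the taste-to-tech link concrete, here is a minimal sketch of one authored section reveal, assuming GSAP with ScrollTrigger and a placeholder `.section` selector; the timings and distances are illustrative, not the site's real values.

```typescript
import gsap from "gsap";
import { ScrollTrigger } from "gsap/ScrollTrigger";

gsap.registerPlugin(ScrollTrigger);

// Each section arrives on its own timing: a slow settle with a long
// ease, triggered as the section enters the viewport rather than on
// default browser momentum.
document.querySelectorAll<HTMLElement>(".section").forEach((el) => {
  gsap.from(el, {
    opacity: 0,
    y: 48,
    duration: 1.2, // slower than the default UI-demo snap
    ease: "power2.out",
    scrollTrigger: {
      trigger: el,
      start: "top 80%", // begin while the section is still arriving
      toggleActions: "play none none reverse",
    },
  });
});
```

The half-beat hold described above would live in the same place, as a per-section delay or a paused timeline, which is exactly why this ends up being a sequencing system rather than a widget.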
Choosing the right model for the right layer
One of the easiest ways to get discouraged with AI-assisted building is to treat “AI” like one tool, and then ask the same tool to do everything, frontend, backend, infrastructure, animation, copy, refactors, deployment issues. When the output feels generic or brittle, it is tempting to blame AI as a whole, or to assume the builder is the problem.
What actually mattered here was routing. Backend is constraint management, correctness, structure, edge cases, request flow clarity, predictable refactors. Frontend is taste under constraint, composition, motion timing, rhythm, visual specificity, restraint. Different models do different parts better, and once that is accepted, AI stops being a vague promise and turns into a set of tools that can be used deliberately.
In practice, backend logic, request flows, and refactors went to Opus 4.6 via Cursor, because it stayed coherent under complexity. Frontend motion, layout feel, and authored timing went to Kimi k2.5 in the terminal with screenshots and short videos, because visual context is the difference between “rendered” and “authored.” Infrastructure confusion, DNS, SSL, SES, Cloudflare rules, mostly used AI for diagnosis and next steps, then relied on docs for final truth, because infra is the place where one wrong assumption wastes hours.
For backend work, Opus 4.6 inside Cursor ended up being the most reliable for the kind of complexity that makes projects feel fragile. It handled API flows and request/response logic in a way that stayed consistent as features expanded, it was good at edge cases and data handling, and it did better at refactors that keep the codebase coherent, where the goal is not “make it work once,” but “make it work, and keep it working when the next feature lands.”
The way it got used mattered as much as the model. The work stayed sane when the relevant files were provided instead of the whole repo, when outcomes and constraints were stated clearly, and when the request was for the smallest correct change, not a total rewrite. This is roughly the prompt shape that kept the model grounded.
Goal: Add X without breaking Y.
Context:
- File A: (paste)
- File B: (paste)
Constraints:
- Do not change public API routes
- Keep types strict
- No new dependencies
Request:
- Propose the smallest correct change
- Return a diff and explain the risk points

Frontend is where most models fall into “default internet.” They can generate React and Tailwind that renders, but the motion reads like a UI demo, the spacing feels like a template, and the overall vibe slides toward the median. That is not because the builder lacks taste, it is often because the model is aiming for the safest average output.
The biggest frontend shift came from using Kimi k2.5 in the terminal and feeding it screenshots and short videos. That context changes everything, because now the conversation is not abstract. The model can see what feels wrong, what feels too stiff, what spacing is off, what motion reads as “template,” what timing needs to breathe. A simple “art director mode” prompt pattern helped because it forced critique before code.
Here is a screenshot/video of the current section.
Target feeling:
- slower, heavier rhythm
- less UI-demo motion
- more authored spacing
Constraints:
- keep GSAP + ScrollTrigger
- keep the current typography scale
Request:
- critique what reads generic
- propose 3 specific changes
- provide the exact code edits

The takeaway is blunt. AI coding is a toolkit, not a monolith. Output ceilings depend on routing. If the frontend looks generic, it might be the model choice and the input context, not the builder.
The stack, fully disclosed
Studio KILN’s stack is not presented as the best stack, it is just the stack that got chosen, learned, and shipped.
The foundation is Next.js 14 with the App Router, strict TypeScript, Tailwind CSS, Ghost as a content backend with MDX in the pipeline, and a static export build (SSG) to keep the frontend runtime simple and predictable.
The experience layer is React Three Fiber for 3D moments, GSAP with ScrollTrigger for authored motion and scroll timing, Zustand for lightweight state, and Lenis for smooth scrolling that matches the site’s rhythm, with Averia Serif and Inter carrying the typography.
On the infrastructure side, the frontend ships on Vercel, Ghost runs on AWS EC2, email runs through AWS SES with Mautic for automation, Stripe handles payments, the domain is studiokiln.io, and Hostinger is part of the hosting picture. Cloudflare sits at the edge as CDN and traffic reality, Let’s Encrypt covers SSL, and AWS S3 holds assets that should not live in the repo. Analytics are intentionally absent.
The AWS footprint that had to be learned directly was EC2, SES, IAM, and S3, which is basically enough to force the mental model.
Here is the part most stack write-ups skip: why these tools exist in the system, and what got traded away to use them.
Next.js 14 was chosen because it supports modern React patterns cleanly, and because it makes it easier to grow from “site” into “system” without switching frameworks, even though the tradeoff is learning the App Router’s opinions. Static export was chosen to keep the frontend runtime simpler, cheaper, and harder to break, even though anything truly dynamic then has to be designed around. Strict TypeScript was chosen to reduce silent failure as the codebase grows, even though it adds early friction and forces clarity before shortcuts.
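For reference, the static-export choice itself is a small piece of configuration; a minimal next.config.js sketch, using Next.js 14's option names, with everything project-specific omitted.

```javascript
// next.config.js — minimal static-export setup (a sketch, not the
// site's actual config).
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: "export", // build to static HTML in `out/` instead of a server
  images: { unoptimized: true }, // next/image optimization needs a server
};

module.exports = nextConfig;
```

The `images.unoptimized` line is one example of the "anything truly dynamic has to be designed around" tradeoff: image optimization is a server feature, so static export gives it up.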
Tailwind was chosen because it keeps styling close to the component and makes a design system easier to enforce, even though it can look like noise until it becomes a language. Ghost was chosen because publishing needs a real backend with drafts, tags, and workflow, without being trapped inside a closed platform, even though hosting a CMS brings updates, backups, and security into the job. MDX alongside Ghost was chosen because it lets writing stay close to writing while still allowing bespoke components when needed, even though the pipeline has to stay consistent or things get messy.
React Three Fiber was chosen because certain atmospheres are easier to express in 3D than in flat UI, especially when the goal is “place” rather than “landing page,” even though 3D brings performance and complexity. GSAP and ScrollTrigger were chosen because authored motion needs reliable sequencing and predictable control across browsers, even though motion becomes a real system that has to be debugged. Lenis was chosen because scroll feel is part of the design language, even though smooth scrolling can fight scroll triggers unless it is wired correctly. Zustand was chosen because it stays lightweight and doesn’t force heavy architecture, even though simple tools still require discipline.
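The “wired correctly” part is worth spelling out, because it is the documented Lenis-plus-GSAP pattern: Lenis reports its virtual scroll position to ScrollTrigger, and GSAP's ticker drives Lenis so both animate on one clock. A minimal sketch, assuming the standalone lenis package:

```typescript
import Lenis from "lenis";
import gsap from "gsap";
import { ScrollTrigger } from "gsap/ScrollTrigger";

gsap.registerPlugin(ScrollTrigger);

const lenis = new Lenis();

// Keep ScrollTrigger in sync with Lenis's virtual scroll position.
lenis.on("scroll", ScrollTrigger.update);

// Drive Lenis from GSAP's ticker so smooth scroll and motion share
// one clock instead of fighting each other.
gsap.ticker.add((time) => {
  lenis.raf(time * 1000); // GSAP time is in seconds, Lenis expects ms
});
gsap.ticker.lagSmoothing(0);
```

Without this handshake, scroll-triggered animations fire against the browser's native scroll position while the page visibly moves on Lenis's smoothed one, which is exactly the “smooth scrolling can fight scroll triggers” failure.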
Vercel was chosen to remove early deployment friction and keep shipping momentum, even though platform dependence becomes a real long-term question. Ghost on EC2 was chosen to keep the publishing backend owned and configurable, even though uptime and maintenance are now responsibilities, not features. SES and Mautic were chosen because deliverability is real infrastructure and automation should not require SaaS lock-in, even though setup and ongoing care are unavoidable. S3 was chosen because durable storage keeps assets out of the repo and out of deploy churn, even though IAM and permissions add complexity. Cloudflare and Let’s Encrypt were chosen because edge behavior and HTTPS are basic web reliability, even though caching and misconfiguration can hide reality and waste time.
If this list still feels like “too much,” that is fair. The point is that it was not learned all at once. It was learned in the order the project demanded, and that order is what made it survivable.
Why Vercel is in the stack
There is a type of internet argument where managed hosting gets treated like a moral failure. That mindset is usually more about identity than outcomes.
Vercel is in this stack for the honest reason that it is easy, the free plan is generous, and it removes early friction while the other layers are being learned and stabilized.
The principle underneath that choice is simple. Ownership is a spectrum. Hosting is constraints and cost, not ideology. Starting simple is scope control, and scope control is often what makes shipping possible.
A fully self-hosted path like Coolify was evaluated, and it is real, but it adds complexity that was not needed right now. Choosing Vercel today is picking the fights that matter.
Where publishing fits inside a custom site
Publishing is the part of a site that should feel easy, because publishing is a habit, and habits break when the workflow is annoying.
A CMS existed here for a basic reason, publishing should not require rebuilding and redeploying the whole site every time a new post goes up. The goal was a Substack-style writing flow, without being locked into Substack.
Ghost made sense as a publishing backend because it is built for writing and content management, and because it plays cleanly with a custom frontend. That matters, because the site’s design language stays intact, instead of being forced into whatever the CMS theme system wants.
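As one hedged sketch of what “plays cleanly with a custom frontend” means in practice: the site can pull posts over Ghost's public Content API at build time, so the design layer never touches Ghost's theme system. The base URL and key here are placeholders.

```typescript
// A minimal sketch of fetching posts from Ghost's Content API at
// build time. The base URL and key are placeholders supplied by the
// Ghost admin panel (Settings → Integrations).
type Post = { title: string; slug: string; html: string };

// Ghost exposes public posts at /ghost/api/content/posts/ with the
// content key passed as a query parameter.
function postsUrl(base: string, key: string): string {
  const url = new URL("/ghost/api/content/posts/", base);
  url.searchParams.set("key", key);
  url.searchParams.set("limit", "all");
  return url.toString();
}

async function fetchPosts(base: string, key: string): Promise<Post[]> {
  const res = await fetch(postsUrl(base, key));
  if (!res.ok) throw new Error(`Ghost Content API returned ${res.status}`);
  const body = (await res.json()) as { posts: Post[] };
  return body.posts;
}
```

With static export, a call like this runs during the build, so publishing a post only requires triggering a rebuild, not touching the frontend code.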
MDX matters because it keeps the writing process close to writing, while still allowing components when needed. It keeps publishing consistent with the site’s identity, because the content layer is not separate from the design layer.
Why email gets real fast
Email is where a lot of builders learn, quickly, that the internet has rules.
It is easy to assume email is “just another API,” a simple integration, a box to check. In reality, email is deliverability, reputation, authentication, and policies that do not care how hard someone tries.
AWS SES was chosen because it is serious sending infrastructure, and because it forces learning the real constraints, SPF, DKIM, DMARC, warm-up, reputation, and the practical reality that sending email reliably is a system. Mautic was chosen because it offers automation and campaigns without a SaaS lock-in story, which fits the broader direction of owning the system, even though ownership includes upkeep.
If there is one checklist that would have saved time early, it is the boring deliverability basics. Authentication has to be real, SPF and DKIM configured correctly, and DMARC set with a real policy once things stabilize. SES setup has to be complete, domain verification, moving from sandbox to production, understanding sending limits. Reputation has to be respected, warm-up is real, especially for new domains, and reputation is fragile early. Bounces and complaints have to be tracked and handled, because high complaint rates punish deliverability. Consistency matters more than people expect, because sending from a stable domain identity builds trust and random sender changes read like spam.
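For concreteness, the authentication records tend to look like the zone-file sketch below. The domain, DKIM selector, and DMARC mailbox are placeholders, and SES generates the real DKIM values during domain verification.

```
; Illustrative records only — SES provides the actual DKIM selectors.
example.com.                      TXT    "v=spf1 include:amazonses.com ~all"
selector1._domainkey.example.com. CNAME  selector1.dkim.amazonses.com.
_dmarc.example.com.               TXT    "v=DMARC1; p=none; rua=mailto:dmarc@example.com"
```

Starting DMARC at `p=none` is the usual warm-up posture: it reports without rejecting, and the policy tightens once the reports look clean.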
Email feels like software right up until it starts behaving like a trust system, and then it becomes obvious why “just another API” was never true.
Storage and edge reality
S3 was chosen because assets should not live in the repo, and because learning one durable cloud primitive early makes later architecture cleaner. It forces clarity, what is static, what is content, what is data, what belongs in build output, what belongs in storage.
Cloudflare exists in this stack because edge reality matters. Performance, caching, routing behavior, basic protection, those are not fancy extras, they are part of what makes a site feel solid.
SSL is table stakes, and it is also one of the first “infra is real” moments. Let’s Encrypt makes certs accessible, but a builder still needs the mental model, what is being validated, where HTTPS terminates, how renewal works, and what breaks when it does not.
The unsexy part that makes it real
Static export (SSG) was a strategy choice. It keeps the frontend runtime simpler, reduces moving parts, and makes shipping easier, especially early.
Even with that simplicity, operations still arrives. Environment variables are not optional. “Works locally” is not a milestone. Production has different assumptions, and it will punish hidden dependencies. When things break, the builder lives in logs and config, and that is when the whole concept of “deploy” stops being a button and starts being a system.
A few predictable failures showed up, the same ones that show up for most people once a project becomes real. A feature would work locally, then fail on deploy; the root cause would be a missing environment variable, or one that existed locally under a different name in production, and the fix was treating env vars like code, documented, named consistently, validated early. Content updates would sometimes not appear, or the site would show an older version; the root cause was caching, at Cloudflare, in the browser, or in build artifacts that were never invalidated, and the fix was making cache behavior explicit and learning where “freshness” actually comes from. A “simple” infrastructure change would eat an entire evening; the root cause was DNS propagation, SSL renewal timing, or a misread record type, and the fix was slowing down, verifying records, and accepting that the internet has its own clock.
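One cheap guard for the env-var failure mode is a startup check that reports every missing variable at once, so a broken deploy fails loudly instead of a page failing quietly. A minimal sketch, with placeholder variable names:

```typescript
// Placeholder names — substitute the project's real variables.
const REQUIRED = ["GHOST_URL", "GHOST_CONTENT_KEY", "STRIPE_SECRET_KEY"];

// A name counts as missing when it is absent or blank. Returning the
// whole list means one failed deploy reports everything, instead of
// one variable per attempt.
function missingEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED.filter((name) => !env[name]?.trim());
}

// At startup (e.g. required from next.config.js):
//   const missing = missingEnv(process.env);
//   if (missing.length > 0) {
//     throw new Error(`Missing env vars: ${missing.join(", ")}`);
//   }
```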
A mindset shift happens somewhere in that pain. Shipping is not the end. Maintenance is the cost of ownership. The good news is that the cost gets smaller as understanding grows, because the same categories of problems show up again and again, and repetition builds instinct.
No analytics, on purpose
The default instinct is to install analytics immediately, track everything, build the funnel, measure every click.
Studio KILN intentionally chose none.
The site was meant to feel like a place, not like a dashboard. Adding analytics early changes how a builder thinks, because it quietly turns presence into performance. Instrumentation can always be added later. It is harder to remove the mentality once it becomes the default.
That does not mean flying blind. A project can still pay attention in human ways, direct feedback, email engagement, Stripe dashboards, and the simple truth of whether people come back and share the work.
The deeper point is that not everything has to be counted to be real. Some things should stay human scale, especially a studio site that is built around atmosphere and authored presence.
The repeatable method
The real story is not complicated. The builder started without web dev experience. Learning happened while shipping. Day-job work building AI-enabled software accelerated pattern recognition, because real environments teach fast, and mistakes have consequences.
The method is copyable, and it is less mystical than it sounds. Keep scope small. Ask AI for the next step, not the whole universe. Keep the system running. Deepen complexity only when the project demands it.
Underneath that method is a mindset shift that makes the whole thing possible. A person does not need to pre-understand the stack. A person needs to be willing to touch it, iterate, and learn the next constraint when it shows up. AI makes the touching cheaper, but willingness still drives the process.
Practical guidance without copying this exact stack
Readers do not need to copy Studio KILN’s stack to copy the method. The win is not the exact tools, it is knowing how to choose tools when taste and constraints collide.
There are three paths that tend to cover most people’s reality. A convenience-first path is for building momentum, managed hosting, minimal infrastructure, ship fast, learn by contact. A hybrid ownership path, which is the direction Studio KILN took, is for protecting taste and flexibility while outsourcing commodity friction, keep shipping velocity high, but own the parts that carry identity. A full self-host path is for when control is a requirement, not a flex, and the price is time plus ongoing maintenance.
Across all three paths, the single rule holds: do not pre-study everything. Build until the next real constraint appears. Learn that constraint, then continue.
If a simple order of operations helps, start with the domain and DNS so the site has an address, then make HTTPS real early because it touches everything. Deploy a boring first version as soon as possible, because a blank but live site beats a perfect local one. Decide how content enters the system and make that path easy, because publishing should not feel like friction. Decide where assets live and keep them out of the repo once they grow. Add email only when it matters, then do it correctly because deliverability punishes shortcuts. Add payments when the product needs it, not as decoration.
Complexity should be earned. A real CMS becomes worth it when publishing becomes frequent and the workflow starts to matter. Stripe becomes necessary the moment money is on the line. Email becomes a system when it becomes a channel. Authored motion becomes a requirement when the site’s identity depends on timing and feel, because templates rarely carry taste. Durable storage becomes necessary when media grows, because repos are not warehouses. Platform choices like Vercel vs self-host become worth revisiting when costs climb or constraints bite, not because someone on the internet says it is “more pure.”
Also, model literacy matters. Backend and frontend need different strengths. If work is routed poorly, the builder will assume they are the problem, when the real issue is tool mismatch.
The site is the proof
This is not theoretical. The system is live. The receipts are visible on purpose.
Studio KILN lives at studiokiln.io, and the clearest disclosure point is /system, because it shows the stack choices directly instead of talking around them.
The project did not start with a developer. It started with being willing to build while learning, and willing to keep touching the system until it became readable.
If that is the real requirement, and most of the time it is, then the invitation is straightforward. Build one small slice. Let it break. Fix it. Repeat.