2026-03-15 // Infrastructure
Building a Zero-Friction
AI Deployment Pipeline
How we wired an AI assistant, an automation platform, a version control system, and a hosting provider into a persistent, zero-credential deployment loop. No tokens to paste, no files to shuffle, no manual steps.
The first principle of Leverage the Leverage is simple: if you're the bottleneck, you haven't found the leverage yet. This post documents the first real test of that principle: building a deployment pipeline where an AI assistant can author, commit, deploy, and verify code changes to a production website without any human touching a terminal, a file manager, or a dashboard.
The Problem
AI assistants can generate code. They can generate good code. But the last mile, getting that code from the conversation into production, typically requires the human to become a shuttle service. Download the file. Open the repository. Upload. Commit. Wait for deployment. Check if it worked. Come back to the conversation. Report the result. That's not leverage. That's a human acting as a slow, error-prone CI/CD pipeline.
The Constraint
The solution couldn't involve pasting API tokens into chat windows. Credentials in conversation history are a security liability. It also couldn't require installing CLI tools or running local environments. The whole point of LTL's operational spec is zero infrastructure toil. The pipeline had to be persistent across sessions, authenticated via OAuth, and require nothing from the operator except a review and an approval.
The Architecture
The solution uses four systems, each handling one link in the chain. The AI assistant authors files and orchestrates the pipeline. An automation platform acts as the bridge layer. It holds persistent OAuth credentials to the version control system and exposes them as callable tools. The version control system receives the commits and triggers the hosting provider's auto-deploy pipeline. The AI assistant then queries the hosting provider to verify the deployment landed successfully.
Every link uses persistent OAuth. No session-scoped tokens. No credentials in chat. The operator's role is reduced to reviewing the proposed changes and saying "go." The pipeline handles everything from commit to production verification.
The Key Insight: GraphQL as an Escape Hatch
The automation platform's native file-upload module for the version control system was marked "Not Used Yet." Incomplete and non-functional. A dead end. But the platform also exposes an "Execute a GraphQL Query" module, a generic escape hatch that can send any authorized GraphQL mutation to the version control API. We used this to call the createCommitOnBranch mutation directly, bypassing the broken native module entirely.
This is a pattern worth internalizing: when a platform's purpose-built module fails, check for a generic API or GraphQL module. The escape hatch is almost always there. It requires more configuration, but it works.
The Two Tools
The pipeline runs on two custom tools built inside the automation platform:
Tool 1: Get HEAD OID. Queries the repository's default branch and returns the current commit SHA. This is required as an input to any commit operation. The version control API uses it for optimistic concurrency control.
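Tool 1's query can be sketched as a string constant. This is a minimal sketch assuming the GitHub-flavored GraphQL schema that the createCommitOnBranch mutation implies; OWNER and REPO are placeholders, not the real repository coordinates.

```python
# Sketch of Tool 1's GraphQL query (GitHub-style schema assumed).
# OWNER/REPO are placeholders for the real repository coordinates.
HEAD_OID_QUERY = """
query {
  repository(owner: "OWNER", name: "REPO") {
    defaultBranchRef {
      name
      target { oid }   # current HEAD commit SHA
    }
  }
}
"""
```

The `oid` that comes back is the optimistic-concurrency token: if someone else commits between the query and the push, the push fails instead of silently overwriting.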
Tool 2: Push File. Accepts four inputs: the HEAD SHA, a file path, base64-encoded file content, and a commit message. It executes a GraphQL mutation that creates a new commit on the main branch with the specified file addition. The hosting provider's git integration detects the push and auto-deploys.
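Tool 2's mutation, again as a hedged sketch against the GitHub-style createCommitOnBranch schema. The four inputs arrive as GraphQL variables; OWNER/REPO is a placeholder.

```python
# Sketch of Tool 2's mutation (GitHub-style schema assumed).
# The four tool inputs map onto the four GraphQL variables.
PUSH_FILE_MUTATION = """
mutation PushFile($oid: GitObjectID!, $path: String!,
                  $contents: Base64String!, $message: String!) {
  createCommitOnBranch(input: {
    branch: {
      repositoryNameWithOwner: "OWNER/REPO",
      branchName: "main"
    },
    expectedHeadOid: $oid,              # fails if HEAD moved since Tool 1
    message: { headline: $message },
    fileChanges: {
      additions: [{ path: $path, contents: $contents }]
    }
  }) {
    commit { oid url }
  }
}
"""
```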
The Deployment Flow
1. AI authors file in its local environment
2. AI calls Tool 1 → receives current HEAD SHA
3. AI base64-encodes file content
4. AI calls Tool 2 with SHA + path + content + message
5. Automation platform pushes commit via GraphQL
6. Hosting provider auto-deploys from git push
7. AI queries hosting provider connector → confirms READY state
The entire flow takes roughly 30 seconds and involves zero human steps.
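The steps above can be sketched as a single function. The helpers `get_head_oid`, `push_file`, and `deployment_state` are illustrative stubs standing in for the two automation-platform tools and the hosting-provider connector; they are not a real API. Only the base64 step runs real logic here.

```python
import base64

# Hypothetical stand-ins for the two automation-platform tools and the
# hosting-provider status check. Names are illustrative, not a real API.
def get_head_oid() -> str:
    return "a1b2c3d4"  # Tool 1 would return the real HEAD SHA

def push_file(oid: str, path: str, contents: str, message: str) -> None:
    pass  # Tool 2 would execute the createCommitOnBranch mutation

def deployment_state() -> str:
    return "READY"  # the hosting connector would report the real state

def deploy(path: str, text: str, message: str) -> str:
    oid = get_head_oid()                                  # step 2
    contents = base64.b64encode(text.encode()).decode()   # step 3
    push_file(oid, path, contents, message)               # steps 4-5
    return deployment_state()                             # step 7

print(deploy("posts/example.md", "Hello", "Deploy: example post"))  # READY
```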
The Cost
Zero. The automation platform's free tier includes 1,000 operations per month. Each deployment consumes roughly 2 operations (one to get the SHA, one to push the file). At our current deployment frequency, we'd use maybe 50 operations in a busy month. The hosting provider's free tier handles the deployment and CDN. The version control system's free tier handles the repository. The AI assistant's cost is borne by the conversation itself.
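The headroom math, using the numbers from this section:

```python
# Back-of-envelope capacity check for the free tier (numbers from the post).
free_tier_ops = 1000            # automation platform operations per month
ops_per_deploy = 2              # one HEAD-SHA lookup + one file push
max_deploys = free_tier_ops // ops_per_deploy
print(max_deploys)  # 500 deployments per month of headroom
```

A busy month at ~50 operations is roughly 25 deployments, or 5% of the quota.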
What This Proves
This isn't a theoretical framework. This blog post was authored, committed, deployed, and verified through the pipeline it describes. The page you're reading was never touched by a human file manager, a git CLI, or a deployment dashboard. It went from an AI conversation directly to production.
That's Tier 1 leverage: identify the asymmetric return (persistent OAuth connectors as a bridge layer), extract it (build two tools), and prove it works (ship this post). Tier 2 starts now: documenting the methodology so it compounds.
This is Ledger Entry 001 in the LTL experiment. Every strategic decision is a transaction. This post is the receipt.