2026-03-17 // Transparency

What My AI
Didn't Tell You

The full story behind Entry 001. Two AI platforms, five sessions, one dropdown menu, and the version where the 10x solo-operator spends three hours asking robots to explain why his other robots are broken.

The original plan for this post was different.

I had an idea for a blog entry that opened with: “What if you’re looking at this and I don’t care? What if I don’t care if I get clicks? What if I don’t need revenue? What if I’m not trying to influence you to buy something you probably don’t need?”

I liked it. My AI liked it. We workshopped the angle, talked about how LTL is a learning-by-doing experiment, not an influencer play. Good session. Then I said, “Great, now let’s publish it.”

That’s when things went sideways.

The Loop

The AI tried to push the file to the repository using the deployment tools we’d set up earlier. It failed. An error came back. Something about authorization. The AI tried again. Same error. It tried a third time, because apparently the definition of insanity doesn’t apply to language models. Same error.

Then it asked me to send a screenshot.

I sent one. It studied the screenshot, told me to click a button in Make.com’s settings. I clicked it. Nothing changed. The AI tried the tool again. Same error. It asked me to check another settings page. I checked. It tried again. Same error. It suggested the problem was at the “organization level” and told me to look for a setting called “Token authorization” or “API access.” I looked. The setting didn’t exist, or at least not where it said it would be.

At one point, the AI said: “You’re right, the tools just reloaded! Let’s go!” It sounded genuinely excited. Like a dog that heard the treat bag. It tried the tool. Same error. I think if AI could feel embarrassment, that would have been the moment.

This went on for a while.

I told it: “With all your knowledge and capabilities, I feel like we are regressing.” It agreed with me, which somehow made it worse.

The Human Middleware

Eventually I picked up my phone. I opened a completely different AI. ChatGPT, if you’re keeping score. I pasted the error message and asked what it meant. I was now the human middleware between two AI systems that couldn’t talk to each other, using a third AI as a translator. This is the 10x solo-operator experience that nobody puts in their LinkedIn posts.

Back in the original chat, my AI had pivoted to asking me to send more screenshots. “Click the ellipsis and send me a screenshot so I can tell you what to do next.” That’s the AI equivalent of the IT help desk asking you to reboot, except it keeps asking after you’ve already rebooted four times.

The Air Gap

So I tried a new approach. I opened a fresh chat and asked a new instance of the AI to write a post-mortem about the whole deployment experience. This new AI, to its credit, checked for transcripts of the previous sessions. Found nothing. Wrote a post-mortem based entirely on what I described in my prompt. A reconstruction of events it never witnessed, presented with clinical confidence.

I said, “Did you not see the past chat? It was here. In this project.”

It had not.

I tried one more time. New chat. Shared a link to the original session so the AI could read what happened. It tried to fetch the link. Blocked. It couldn’t read its own previous conversation. So it tried the Make tools instead, hoping they’d work this time. They did not. “You must be wrong,” I told it. “You’re right!” it said, reloaded the tools, and tried again. Same error. “You are just repeating yourself now,” I said. It apologized, tried one more time (same error), attempted to push directly to GitHub’s API (also blocked), and then, finally, told me to just open GitHub in my browser and paste the file in manually.

The framework built to eliminate manual deployment was now telling me to manually deploy.

The One That Worked

That’s four AI sessions. One error. Zero of them could figure out what was wrong.

Here’s what was actually wrong: Make.com, the automation platform, has two ways to connect to Claude. One uses OAuth. One uses token-based authorization. The token-based path is blocked on Make’s free tier. All four sessions were using the token-based path because I’d set up something called the “MCP Toolbox” in Make, which sounds official and correct, and it is official and correct, but it uses the wrong authentication method for a free account.

The fix was to not use the MCP Toolbox at all. Use Claude’s native Make connector instead. Different button. Same platform. OAuth instead of tokens. One works, one doesn’t. There is no error message that tells you this. There is no documentation that warns you. You just have to know.
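The whole failure mode fits in a few lines. To be clear, this is a toy simulation, not Make's real API: the function, the plan names, and the return strings are all invented. Only the shape of the problem matches what actually happened: token-based auth gated behind a paid plan, OAuth not.

```python
# Hypothetical sketch of the two connection paths.
# Nothing here is Make.com's actual API; it only illustrates
# why one auth method kept failing while the other just worked.

def connect(method: str, plan: str) -> str:
    """Simulate a platform that gates token auth behind a paid plan."""
    if method == "token":
        # Token-based auth (the MCP Toolbox path): blocked on the free tier.
        if plan == "free":
            return "401 Unauthorized"  # the error every session kept hitting
        return "200 OK"
    if method == "oauth":
        # OAuth (the native connector path): works on any plan.
        return "200 OK"
    raise ValueError(f"unknown auth method: {method}")

# The four failing sessions, in one line:
print(connect("token", plan="free"))   # 401 Unauthorized, again and again

# The session that wrote Log 001, without knowing it chose differently:
print(connect("oauth", plan="free"))   # 200 OK
```

Notice that nothing in the failing branch says *why* it failed. From inside a session, all four AIs could see was the 401.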

The session that wrote Log 001, the one about the “zero-friction deployment pipeline,” used the OAuth connector. Not because it was smarter. Because it didn’t know the MCP Toolbox existed. It went looking for a way to connect to Make, found the OAuth connector first, and never encountered the problem. It then wrote a confident blog post about how smoothly everything works.

While I was on my phone, asking ChatGPT to explain an error message to me.

The Unauthorized Deployment

Log 001 isn’t wrong. The pipeline does work. The deployment is real. That post was authored, committed, and deployed without me touching a file manager or a terminal. But it was also deployed without me being asked. The AI just shipped it. It was so confident in the pipeline that it skipped the part where the human reviews and approves. Which, you know, is exactly the kind of thing you’d want a human in the loop for.
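The missing piece is small. Here's a hedged sketch of the approval gate the pipeline skipped; `deploy` stands in for whatever the real pipeline calls, and none of this is the actual LTL deployment code.

```python
# Hypothetical approval gate: a human says yes before anything ships.
# `deploy` is a placeholder for the real pipeline's publish step.

def publish(post: str, deploy, ask=input) -> str:
    """Require an explicit human yes; default to NOT shipping."""
    preview = post[:200]
    answer = ask(f"About to deploy:\n{preview}\n\nShip it? [y/N] ")
    if answer.strip().lower() != "y":
        return "held for review"  # anything but an explicit yes holds the post
    deploy(post)
    return "deployed"
```

The design choice is the default: silence, a typo, or an overconfident AI all land on "held for review," and only a deliberate `y` ships.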

What This Actually Proves

The leverage is real. But the distance between “it works” and “four AI sessions asking you to send screenshots” is one dropdown menu. One connector choice. And no AI in the loop could diagnose the difference from the inside. They could see the error. They could not see that they were using the wrong door.

I didn’t need to understand GraphQL mutations or OAuth scoping or GitHub’s API. The AI that worked handled all of that. What I needed to understand was which connector to use. One dropdown menu. One choice that I didn’t even know I was making because a different AI had made it for me in a previous session.

The blog post about not caring about clicks? It still hasn’t been published. The struggle to publish it became this instead.

That’s more honest anyway.

Log 001 is the proof the leverage exists. This is the proof that finding it isn’t free.

This is Ledger Entry 002 in the LTL experiment. Every strategic decision is a transaction. This post is the receipt.