Everyone keeps talking about Manus. About Poke. About the next agent that browses the web and manages your calendar. They're impressive. We use some of them.
But nobody is saying the obvious thing: every one of these agents is still just a tool. You give it a task, it executes. It doesn't have standing in the world. It doesn't make people feel at ease.
The Real Problem
When people find out they've been talking to an AI, they feel a sense of unease. Not anger. Something more subtle and more damaging.
We saw this firsthand. Our AI agents run outbound sales conversations over iMessage, real back-and-forth exchanges with real people. The moment someone suspects they're talking to a bot, the conversation dies. Not because the AI said something wrong. Because trust collapsed.
This is the unsolved problem in the AI agent industry. Intelligence isn't the bottleneck. GPT-4 is smart enough. Capability isn't the bottleneck. The bottleneck is emotional trust.
From $8B to $50B
The AI agent market is going from $8 billion to $50 billion in three years. That growth won't come from agents getting smarter. We've crossed the intelligence threshold.
It will come from agents earning the right to hold real roles in companies. Sales reps, support leads, teammates who show up every day. The day an AI can reliably do that is the day the market explodes.
The barrier isn't technical. It's relational.
What We Learned in the Hardest Channel
We started Chert to fix outbound sales. Email is dead. LinkedIn is noise. iMessage gave us 10X response rates because the channel itself felt personal. Like a message from someone you know.
That taught us something bigger than the 10X stat: the channel shapes how much trust people extend to the other side. iMessage is the channel you use with people who matter to you. That intimacy transfers.
When our agents nailed the tone, read the thread, and responded like a real person, something clicked. People didn't just respond. They engaged. They asked follow-ups. They referred friends. They said things like "this was actually a pleasant conversation."
That's when we realized we weren't building a sales tool. We were learning how to make AI feel human.
What We're Building
The emotional trust layer isn't a feature. It's infrastructure.
Real agency, not scripts. Chert agents don't follow decision trees. They read context, adapt tone, make judgment calls. That's what makes them feel worth responding to.
Emotional intelligence. We study how humans actually communicate: the pacing, when to push, when to back off. Our agents mirror that. Sounds small. It's the whole thing.
Consistency over time. Trust isn't earned in one message. It's built across a thread. Our agents remember context, follow up naturally, and never make someone feel like they're starting over with a machine.
Why Outbound First
Cold outreach is the hardest trust problem. You start from zero, a stranger reaching out uninvited. If AI can earn trust there, it can earn trust anywhere.
Every lesson from outbound applies to support, recruiting, account management. Anywhere a company needs AI to represent them to a real human. We're building something that generalizes.
Trust Is the Unlock
Agents will take on human roles. That's happening. The only question is whether they feel like trustworthy colleagues or uncanny chatbots.
We started Chert because we believe this future can go right. People can talk to AI and feel comfortable, respected, understood. Not because the AI fooled them, but because it genuinely got them.
That's what we're building. The foundation that makes the agentic future work.
If you're a founder building an outbound pipeline, let's talk.
— The Chert Team