Human alignment is harder than AI alignment
The bottleneck in AI-assisted work isn't aligning the model—it's aligning the humans.
Everyone's worried about AI alignment. But in practice, the harder problem is human alignment: getting teams coordinated around shared goals before the AI writes any code.
The friction comes from humans
AI models follow instructions, explain their reasoning, and course-correct when you push back. The friction in AI-assisted work rarely comes from the model.
It comes from unclear requirements, conflicting priorities, and teams that don't share context. I've watched teams spend hours debugging AI outputs that were technically correct—but wrong for the unstated context. The AI did exactly what it was asked. No one had agreed on what to ask for.
Research on human-AI teaming points the same way: misunderstandings among the humans erode team trust and performance far more than the AI's limitations do.
Specs force human alignment first
This is why spec-driven development works. Writing a spec forces alignment among humans before the AI writes any code. The spec becomes the shared truth that everyone—human and AI—can reference.
Specs matter even more once you add AI to a team, because the coordination surface explodes: you need to align humans with humans, humans with AI, AI outputs with expectations, and multiple AI sessions with each other. Organizations that skip human alignment and jump straight to prompt engineering drown in inconsistent outputs.
Trust comes first
Team trust mediates almost everything in human-AI collaboration. High-trust teams recover from AI mistakes faster. Low-trust teams second-guess correct outputs and create friction where none existed.
But trust between humans is the prerequisite. If your team doesn't trust each other's judgment, they won't trust each other's AI workflows either. The organizations seeing real productivity gains from AI aren't the ones with the best prompts—they're the ones where humans already work well together.
Four diagnostic questions
If you're struggling with AI-assisted work, the problem probably isn't the AI:
- Do we have shared context? Is there a single source of truth for requirements, conventions, and decisions?
- Are priorities explicit? When trade-offs arise, does the team know what to optimize for?
- Is there psychological safety? Can people admit when AI outputs are wrong?
- Are handoffs clean? When one person's AI work flows to another, is the context preserved?
These are human problems. Solve them first, and the AI side largely takes care of itself.
Two different problems
AI alignment research—ensuring models are helpful, honest, and safe—matters enormously. But for most organizations, the models are already aligned enough to be useful. The question is whether the humans can coordinate well enough to use them.
AI alignment is a technical research problem. Human alignment is an organizational one. If you're struggling with AI-assisted work right now, the human side is probably where you should look first.