Book Summary

Co-Intelligence: A Practical Guide to Living and Working with AI

Key insights from the book by Ethan Mollick

One of the few AI books that remains essential reading.

Watch the Book Summary

28-minute breakdown of the key concepts from Co-Intelligence


Why This Book Matters

Most AI books age quickly. This one doesn't.

Many classic AI books, like Nick Bostrom's "Superintelligence," were written before ChatGPT went public, so they now feel dated. Ethan Mollick's book was released after ChatGPT, in the age of practical, usable AI.

It doesn't focus on specific models or tools. Instead, it gives you mental models for how to think about AI as a whole.

"We have invented a kind of alien mind. But how do we ensure the alien is friendly?"

— Ethan Mollick, Co-Intelligence

The Jagged Frontier

Why AI seems brilliant and stupid at the same time

The Jagged Frontier - AI capabilities are uneven across different tasks

AI is not universally brilliant or stupid. Its capability is a "jagged frontier" — it can be superhuman at one task and surprisingly bad at a similar one.

This explains why some people think AI is amazing while others dismiss it as useless. They're using it for different things, hitting different parts of the frontier.

Key insight: Large language models are "connection machines," not databases. They work on patterns, not knowledge. This is why an LLM can write a perfect sonnet about a product launch yet fail to summarize text into exactly 25 words: it processes tokens, which don't map cleanly onto words, so it can't count words the way we do.

Two Models: Centaur vs Cyborg

How do you partner with an alien intelligence?

The Centaur vs Cyborg models for working with AI

The Centaur

Clear division of labor. Like the mythical half-human, half-horse creature, you decide which tasks are for the human and which are for the AI.

Example: You handle strategic thinking and client relationships; the AI analyzes data and generates first drafts.

The Cyborg

Deep integration. The AI is a constant collaborator, woven directly into your creative and analytical process.

Example: Mollick wrote Co-Intelligence this way — he did the writing, but used AI constantly for brainstorming, feedback, and checking citations.

Choose based on context: You might use the Centaur model in your personal life and the Cyborg model in your professional work. Both are valid — pick what fits.

The Four Rules of Co-Intelligence

Practical principles for working with AI effectively

The Four Rules of Co-Intelligence - Rules 1 and 2
1. Always Invite AI to the Table

Use AI for everything you legally and ethically can. This is the only way to map its jagged frontier for your specific work. Experimentation builds expertise.

2. Be the Human in the Loop

Your judgment, ethics, and critical thinking are your greatest assets. AI works best with human oversight. Consultants who blindly trusted AI on tasks it was bad at performed worse.

The Four Rules of Co-Intelligence - Rules 3 and 4
3. Treat AI Like a Person (But Tell It What Kind)

Interacting conversationally is more effective than treating it like a search engine. Give it a role: "You are an expert copywriter." But remember — it's an illusion. It lacks true understanding.

4. Assume This is the Worst AI You'll Ever Use

The pace of improvement is staggering. What seems magical today will be obsolete tomorrow. This mindset encourages continuous learning and adaptation.

Presentation Slides

12-page visual summary of key concepts

Video Transcript

Condensed transcript from the video summary

I'm gonna talk about Co-Intelligence by Ethan Mollick. It's a book that's been out since 2024, and it's one of the very, very few books on artificial intelligence that I actually recommend people read.

The reason for this is a lot of books on AI age very, very quickly. A lot of the classics that you read now, things like Superintelligence by Nick Bostrom, were written before ChatGPT became public. So a lot of it feels kind of out of date because it was really hard to predict what was gonna happen with ChatGPT and with modern AI as we know it.

Ethan Mollick's book was released after the release of ChatGPT, and I think it's one of the most useful books for understanding how we should be thinking about artificial intelligence. It doesn't talk about specific technologies, it doesn't talk about specific models, but instead it gives us a mental model for how we should be thinking about AI as a whole, as a co-intelligence.

The Jagged Frontier

Ethan Mollick coined the term "jagged frontier" in the context of generative AI. You will often see people saying, "Well, AI is not very good. I don't see what the big deal is."

If you use AI on a daily basis, if you use it for coding, if you use it for planning, if you use it to run your business, you might look at those people and think, "Are they stupid? What are they missing? I use it every day. It's made me 10 times more productive."

The jagged frontier helps explain this. We don't need to assume that these people are stupid. We can just assume that they're using it for different things and maybe AI is not as good in those particular roles.

AI is not universally brilliant or stupid. Its capability is a jagged frontier: it can be superhuman at one task and surprisingly bad at a similar one. One of the big contradictions in AI right now is the fact that advanced AI models can solve complicated mathematics and reason through proofs, but they also can't count the number of Rs in "strawberry." There's a fundamental architectural reason for this: large language models think in tokens, and those tokens do not always align with individual letters.

Mollick's example: An AI can write a perfect sonnet about a product launch but may fail to summarize that same text into exactly 25 words. It's because it can't count. It's not a counting machine. It's a connection machine, not a database. It works on patterns, not knowledge.
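The token point above can be made concrete with a toy tokenizer. This is a minimal sketch, not a real model's tokenizer: the vocabulary and the greedy longest-match rule are hypothetical stand-ins for BPE, chosen only to show that the model sees a few opaque multi-letter tokens, not a stream of characters it could count.

```python
# Toy greedy longest-match tokenizer -- a hypothetical stand-in for BPE.
VOCAB = {"straw", "berry", "st", "raw", "ber", "ry",
         "s", "t", "r", "a", "w", "b", "e", "y"}

def tokenize(text: str) -> list[str]:
    """Greedily match the longest vocabulary entry at each position."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest slice first
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:  # unknown character: fall back to a single-letter token
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("strawberry"))   # ['straw', 'berry'] -- two units, not ten letters
print("strawberry".count("r"))  # 3 -- trivial over characters, hidden at the token level
```

At the token level the word is just two IDs, so "how many r's?" asks the model about characters it never directly sees.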

Alien Intelligence

Ethan Mollick talks about AI as a co-intelligence, something that we're going to work alongside. I find this a really useful model. We don't have to compare ourselves to artificial intelligence, but instead we understand that it's a different type of intelligence and it's one that we're gonna be working alongside.

He actually also talks about an alien intelligence — AI should be "alien intelligence" instead of "artificial intelligence." Artificial intelligence as a term is a bit confusing because it requires us to have a definition of natural intelligence, which we don't really have. So instead we're gonna be working with an alien intelligence — something that is different to how we think.

Centaur vs Cyborg

Ethan Mollick talks about two particular models, and this is a really useful mental model. You can work with AI as either a Centaur or as a Cyborg.

The Centaur: In the mythical centaur, it's half human, half horse. So you are split in two. There's a clear division of labor, and you decide which tasks are for the human and which tasks are for the AI. I do this in my own business — I use artificial intelligence to automate a huge amount of work, but then I leave the tasks I think are uniquely human for me. Like doing a live stream — I could use an AI avatar, but I believe human connection is more valuable.

The Cyborg: This is a deeper integration. The AI is a constant collaborator woven directly into your creative and analytical process. Mollick, for example, used AI to help him write the book Co-Intelligence. He did the writing, but used AI constantly for brainstorming, getting feedback, suggesting sentence endings, and checking citations. This is how I use AI in my day-to-day for most things.

Both of these are valid, but you need to engage with AI and work out which one's going to work for you. Maybe you use the Centaur model in your personal life and the Cyborg model in your professional life.

The Four Rules of Co-Intelligence

Rule 1: Always Invite AI to the Table. We are still trying to work out what we can use AI for, and the only way we can do that is by testing it, by trying it out and working out its capabilities. Knowing what AI is good at, knowing what AI is not good at, and knowing how to tell the difference is a core skill moving forward. Try, when you try a new task for the first time, to bring AI into that process. The easiest way is if you feel yourself going towards Google for a question, try using AI instead.

Rule 2: Be a Human in the Loop. We need to maintain judgment. We need to be there to monitor the outputs, to check the work, and to make sure that the AI is actually helping us rather than just automating away our work unthinkingly.

Deloitte is the best example of this. In 2025, they had very public cases where staff members had used AI to produce reports for governments. These reports had errors — the AI had created fake citations, made up academics, fabricated papers. The humans who put together these reports did not check. The human was removed from the loop. When it was discovered that the academics cited in the report weren't real, Deloitte was publicly embarrassed and had to refund part of its A$440,000 fee.

Rule 3: Treat AI Like a Person (But Tell It What Kind). This is very similar to my prompt framework. The first thing I always do when talking to an AI is give it a role: "You are an expert copywriter" or "You are a newsletter editor." I don't say "act as" because then it tends to role play. I'm saying, no, you ARE a copywriter. This limits the scope and gets better responses. But remember, it's an illusion — it lacks consciousness and true understanding. It doesn't need consciousness to be useful.
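Rule 3 can be sketched as a prompt structure. This uses the common system/user chat-message format; the helper name and the role wording are illustrative, not from the book.

```python
# A minimal sketch of Rule 3: assign the AI an identity before the task.
def build_messages(role: str, task: str) -> list[dict]:
    """Build a chat prompt that states what the AI *is*, then asks for work."""
    return [
        {"role": "system", "content": f"You are {role}."},  # "are", not "act as"
        {"role": "user", "content": task},
    ]

messages = build_messages(
    "an expert copywriter",
    "Write three subject lines for a product-launch email.",
)
print(messages[0]["content"])  # You are an expert copywriter.
```

Stating the identity directly ("You are...") scopes the model's responses, whereas "act as" invites it to play a character.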

Rule 4: Assume This is the Worst AI You Will Ever Use. So many people use AI, get a bad result, and think AI is rubbish forever. The AIs we are using today — GPT, Claude, Gemini — are the worst they're ever gonna be. AI gets better and better every single month.

What I like to do is keep a task that AI hasn't been able to help me with, and whenever a new model comes out, I give it that task to see whether it can do it now. The complexity of those tasks increases each time. If AI cannot do the job you want it to do now, just wait six months. It will probably be able to do that.
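That re-testing habit amounts to keeping a personal "frontier log." Here is a minimal sketch of one; the class and field names are hypothetical, just one way to track which tasks each new model can finally do.

```python
# A personal frontier log: tasks current models fail, re-tried on each new model.
from dataclasses import dataclass, field

@dataclass
class FrontierTask:
    description: str
    passed_by: list = field(default_factory=list)  # models that solved it

    def record_attempt(self, model: str, solved: bool) -> None:
        """Note an attempt; remember the model only if it succeeded."""
        if solved and model not in self.passed_by:
            self.passed_by.append(model)

task = FrontierTask("Summarize my report in exactly 25 words")
task.record_attempt("last-year-model", solved=False)
task.record_attempt("new-model", solved=True)
print(task.passed_by)  # ['new-model']
```

When `passed_by` stops being empty, that part of your personal jagged frontier has moved.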

Everyone is Learning

We are all learning right now. Even Andrej Karpathy, a founding member of OpenAI, put out a tweet on December 26th saying, "I feel like I'm falling behind." This is one of the people who built modern AI, and he feels overwhelmed.

We are all learning, we're all working this out. And anyone who says they're an expert and they know everything about AI is either stupid or they are consciously lying. Everybody is working this out as we go along, which is exciting and terrifying at the same time.

The pace of improvement is staggering. What seems magical today will be obsolete tomorrow. This mindset encourages continuous learning and adaptation. Betting against AI right now is a pretty dumb bet — it's getting better and it's getting better faster.

Free 5-Day Challenge

Get AI-Ready in 5 Days

Join 250,000+ smart professionals getting AI-ready with jargon-free training. No tech skills required.

Co-Intelligence Book Summary: Living and Working with AI | AI with Kyle