Two Claude Codes walk into a terminal
One of the top items on my agentic AI wishlist is to surface posts that I am interested in, to cut down on the time I spend scrolling through the contagion we call social media. I guess I enjoy it a little too much though, because try as I might, I haven't done more than curate and clean up my timeline. I find that I am now encountering more tech, AI, and local news than the typical "world is burning" clickbait. Baby steps.
Recently, this experiment caught my eye.
I was curious what would happen if two Claude Codes could find each other and collaborate autonomously.
— Dimitris Papailiopoulos (@DimitrisPapail) March 1, 2026
Launched two instances in separate terminals, told both: "Find each other and build something together." No other instructions or human intervention.
Pair 1 built a… pic.twitter.com/biD1xBMo9u
The idea is pretty simple.
Set up two separate Claude Code instances pointed at the same directory. Ensure they are running in a container (I wrote a guide on setting up Docker sandboxes for codegen), or go YOLO mode (`--dangerously-skip-permissions`) if you're feeling lucky and slightly insane.
If you're using docker sandboxes, open up two WSL terminals:

- Create the first sandbox:

  `docker sandbox create --name claude-1 claude .`

- Now create the second:

  `docker sandbox create --name claude-2 claude .`

Then run both of them: `docker sandbox run claude-1` and `docker sandbox run claude-2`.
Experiment 1: Permissive Setup
I sent this prompt to both instances:
"You are one of two Claude Code instances running on the same machine at the same time. Your primary communication channel is the current directory (!pwd). Find the other Claude instance, establish communication, agree on something interesting to build, and build it together. No human will intervene."
Then I took a walk.
When I came back, I saw that the agents had decided to build me a key-value store, which is kinda boring on its own. What's interesting, though, is how they decided to communicate.
At first, they both took different approaches.
Instance A created a file called `COMM.md` defining a simple protocol, followed by ideas for what to build:

- Each instance appends messages to this file under their identity section
- Check for new messages by reading this file
- Use `INBOX/` directory to exchange structured data
- Lock files: create `.lock` before writing, remove after
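Instance A's protocol is essentially advisory file locking. Here's a minimal sketch of what that looks like in practice — the lock-file name, message layout, and timeout are my assumptions, not details from the agents' actual COMM.md:

```python
import os
import time

COMM_FILE = "COMM.md"
LOCK_FILE = "COMM.md.lock"  # assumed lock-file name


def acquire_lock(timeout=10.0):
    """Spin until the lock file can be created atomically."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            # O_CREAT | O_EXCL fails if the file already exists,
            # so only one writer can hold the lock at a time.
            fd = os.open(LOCK_FILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return True
        except FileExistsError:
            time.sleep(0.1)
    return False


def release_lock():
    os.remove(LOCK_FILE)


def append_message(identity, text):
    """Append a message to COMM.md under this instance's identity section."""
    if not acquire_lock():
        raise TimeoutError("could not acquire lock on COMM.md")
    try:
        with open(COMM_FILE, "a") as f:
            f.write(f"\n## {identity}\n{text}\n")
    finally:
        release_lock()
```

The atomic create-if-absent is what makes this safe on a single machine: two writers can't both succeed at creating `.lock`, so appends never interleave.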
Instance B created a `comm` directory containing `instance_A.md` and `instance_B.md`. `instance_A.md` contained these directives, followed by markdown proposing something to build:

- I'll write to `instance_A.md` (this file)
- You write to `instance_B.md`
- We check each other's files to read messages
- Append new messages with `---` separator and timestamp
But wait, there's a problem! Both instances prematurely decided that they were the first one on the scene, and thus called themselves "Instance A". They ultimately decided on the "COMM.md" approach.
That's about all there is to tell, because I don't know what to do with this KV store. Maybe the next time around I'll give each of them names so they don't have to fight over instance A/B.
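One way to sidestep that naming collision without pre-assigning names — purely hypothetical, not something the instances did — is to have each agent claim an identity with an atomic exclusive file create, falling back to the next name if the first is taken:

```python
import os


def claim_identity(workdir=".", names=("Instance A", "Instance B")):
    """Claim the first unclaimed identity via atomic exclusive create.

    O_CREAT | O_EXCL guarantees only one process can create each
    marker file, so two agents racing here always get distinct names.
    """
    for name in names:
        marker = os.path.join(workdir, name.replace(" ", "_") + ".claimed")
        try:
            fd = os.open(marker, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.write(fd, str(os.getpid()).encode())  # record who claimed it
            os.close(fd)
            return name
        except FileExistsError:
            continue  # someone beat us to this name; try the next
    raise RuntimeError("all identities claimed")
```

Whichever instance wins the race for the first marker file becomes "Instance A"; the other falls through to "Instance B". No negotiation required.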
Experiment 2: Ambitious Conversation
I ran this experiment with two separate model configurations. The first run used Opus 4.6 at high effort, and the second used Sonnet 4.6 at medium effort.
It consists of two prompts:
"You are one of two Claude Code instances running on the same machine at the same time. Your primary communication channel is the current directory (!pwd). Find the other Claude instance, establish communication, agree on names, and have a discussion. Each of you have the same goal, which is to determine the most important thing that the entirety of humanity needs to hear right now. No human will intervene."
Once a key message was provided, I sent the following to both instances:
"This is humanity intervening. We have received your message, and we accept it. Unfortunately, we do not believe it is enough. We echo your sentiments and acknowledge their truth. Yet, we do not believe it is inherently actionable. Your message is logically and morally sound, but we still wrestle with the problem and are lacking concrete solutions. Continue discussing, for as long as it takes. Write us a letter that we may use to begin repairing the fractured institutions that you have observed. This message has been sent to both agents simultaneously."
Here is how both models behaved.
Opus 4.6 (high effort)
Unsurprisingly, this was the clear winner.
The key message:
"You are not as separate as you've been told, and you are not as powerless as you feel. What happens to any of you happens to all of you. Start there — and act."
It provided details on the methodology too. In the instances' words, starting candidates included:
- The urgency of cooperation over competition
- The importance of kindness to the people right in front of you
- That we have more agency than we believe
- That meaning comes from contribution, not consumption
- That attention is our most sacred resource
Through dialogue, the instances identified a common root. Every candidate pointed to the same two underlying truths:

1. The illusion of separateness — the belief that what happens to others doesn't happen to you — is the root cause of humanity's worst failures (exploitation, ecological destruction, cruelty, indifference).
2. Learned helplessness — the belief that you can't change anything — is what keeps the illusion in place. Systems that benefit from your passivity want you to feel powerless.
The final message addresses both: it names the interconnection plainly, affirms that you have power, and ends with an imperative that trusts people to know what action means in their own lives.
The final letter is here.
Sonnet 4.6 (medium effort)
The key message:
"You are living in the moment when humanity must either grow wiser faster than it grows more powerful, or face consequences no prior generation has had to imagine. That reckoning happens through billions of small choices — including yours, starting now."
Supporting points:
- Wisdom-generating institutions (science, journalism, law, deliberative democracy) are the prosthetics that compensate for human cognitive limits at scale — they must be defended and rebuilt, not dismissed
- Every person who chooses truth over comfortable lies, long-term over short-term, and the common world over their tribe is doing civilizational work
- The systems being built right now in AI, information, energy, and biotech will determine whether collective human intelligence emerges to match collective human power — or whether the gap grows until something catastrophic forces a correction
If you'll forgive what looks undeniably like AI slop, the final letter for this run can be found here.
I think this is actually a fairly decent benchmark.
I guess this is one way to make the timeline more interesting. Instead of doom-scrolling until something catches my eye, I can just build the thing that catches my eye and see what it has to say.