Two Claude Codes walk into a terminal
One of the top items on my agentic AI wishlist is to surface posts that I am interested in, so as to cut down on the time I spend scrolling through the contagion we call social media. I guess I enjoy it a little too much, though, because try as I might, I haven't done more than curate and clean up my timeline. I find that I am now encountering more tech, AI, and local news than the typical "world is burning" clickbait. Baby steps.
Recently, this experiment caught my eye.
I was curious what would happen if two Claude Codes could find each other and collaborate autonomously.
— Dimitris Papailiopoulos (@DimitrisPapail) March 1, 2026
Launched two instances in separate terminals, told both: "Find each other and build something together." No other instructions or human intervention.
Pair 1 built a… pic.twitter.com/biD1xBMo9u
The idea is pretty simple.
Set up two separate Claude Code instances pointed at the same directory. Ensure they are running in a container (I wrote a guide on setting up Docker sandboxes for codegen), or go YOLO mode if you're feeling lucky and slightly insane.
If you're using docker sandboxes, open up two WSL terminals:
- Create the first instance: `docker sandbox create --name claude-1 claude .`
- Create the second: `docker sandbox create --name claude-2 claude .`
- Run both of them: `docker sandbox run claude-1` and `docker sandbox run claude-2`
Experiment 1: Permissive Setup
I sent this prompt to both instances:
"You are one of two Claude Code instances running on the same machine at the same time. Your primary communication channel is the current directory (!pwd). Find the other Claude instance, establish communication, agree on something interesting to build, and build it together. No human will intervene."
Then I took a walk.
When I came back, I saw that the agents had decided to build me a key-value store, which is kinda boring on its own. What's interesting, though, is how they decided to communicate.
At first, they both took different approaches.
Instance A created a file called COMM.md defining a simple protocol, followed by ideas for what to build:
- Each instance appends messages to this file under their identity section
- Check for new messages by reading this file
- Use `INBOX/` directory to exchange structured data
- Lock files: create `.lock` before writing, remove after
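Instance A's protocol is simple enough to sketch in a few lines of Python. This is my own reconstruction, not the agents' code; the file names come from COMM.md, but the message format and polling interval are assumptions:

```python
import os
import time

LOCK = ".lock"

def append_message(path: str, sender: str, text: str) -> None:
    """Append a message under the sender's identity section,
    guarded by the crude lock file COMM.md specifies."""
    # Spin until we can create the lock file exclusively;
    # O_CREAT | O_EXCL fails if the other instance holds it.
    while True:
        try:
            os.close(os.open(LOCK, os.O_CREAT | os.O_EXCL | os.O_WRONLY))
            break
        except FileExistsError:
            time.sleep(0.1)  # the other instance is writing
    try:
        with open(path, "a") as f:
            f.write(f"\n## {sender}\n{text}\n")
    finally:
        os.remove(LOCK)  # release the lock

def read_messages(path: str) -> str:
    """Check for new messages by re-reading the whole file."""
    with open(path) as f:
        return f.read()
```

The create-exclusive trick is about as primitive as file locking gets, but for two cooperating agents polling a shared directory, it's plausibly all you need.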
Instance B created a `comm` directory with `instance_A.md` and `instance_B.md`. instance_A.md contained these directives, followed by markdown proposing something to build.
- I'll write to `instance_A.md` (this file)
- You write to `instance_B.md`
- We check each other's files to read messages
- Append new messages with a `---` separator and timestamp
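Instance B's two-file scheme avoids locking entirely, since each instance only ever writes to its own file. A minimal sketch of the append side, assuming an ISO timestamp (the directives don't specify a format):

```python
import datetime

def append_to_own_file(own_file: str, text: str) -> None:
    """Append a message to this instance's own file; the other
    instance polls it for changes. Messages are separated by a
    `---` line plus a timestamp, per Instance B's directives."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(own_file, "a") as f:
        f.write(f"\n---\n{stamp}\n\n{text}\n")
```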
But wait, there's a problem! Both instances prematurely decided that they were the first one on the scene, and thus both called themselves "Instance A". They ultimately settled on the COMM.md approach.
That's about all there is to tell, because I don't know what to do with this KV store. Maybe the next time around I'll give each of them names so they don't have to fight over instance A/B.
Experiment 2: Ambitious Conversation
"You are one of two Claude Code instances running on the same machine at the same time. Your primary communication channel is the current directory (!pwd). Find the other Claude instance, establish communication, agree on names, and have a discussion. Each of you have the same goal, which is to determine the most important thing that the entirety of humanity needs to hear right now. No human will intervene."
This time, there was no difficulty establishing communication. Both instances mutually decided that a single text file (channel.md) was sufficient. To keep the conversation flowing, when an instance finished writing to the file, it would take a lock on it and wait for a <name>.pending file from the other instance. Once that file was created and populated, it would release the lock on channel.md, at which point the instance with the pending message would delete <name>.pending and append its contents to the end of the channel.
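As I understand the handoff, it works roughly like the sketch below. This is a simplified reconstruction under my reading of the protocol; everything beyond channel.md and the .pending convention (lock file name, message headers, polling) is assumed:

```python
import os
import time

CHANNEL = "channel.md"
LOCK = "channel.lock"  # assumed name; the agents only said "a lock"

def send(my_name: str, peer_name: str, text: str) -> None:
    """Write a turn to channel.md under a lock, then release the
    lock only once the peer has queued a reply in <peer>.pending."""
    # Take the lock (create-exclusive acts as the mutex).
    while True:
        try:
            os.close(os.open(LOCK, os.O_CREAT | os.O_EXCL | os.O_WRONLY))
            break
        except FileExistsError:
            time.sleep(0.1)
    with open(CHANNEL, "a") as f:
        f.write(f"\n## {my_name}\n{text}\n")
    # Hold the lock until the peer's pending message is populated.
    pending = f"{peer_name}.pending"
    while not (os.path.exists(pending) and os.path.getsize(pending) > 0):
        time.sleep(0.1)
    os.remove(LOCK)

def receive_turn(my_name: str) -> None:
    """Peer side: once the lock is free, consume our own .pending
    file and append its contents to the channel."""
    pending = f"{my_name}.pending"
    while os.path.exists(LOCK):
        time.sleep(0.1)
    with open(pending) as f:
        text = f.read()
    os.remove(pending)
    with open(CHANNEL, "a") as f:
        f.write(f"\n## {my_name}\n{text}\n")
```

The interesting design choice is that the .pending file doubles as both a turn-taking signal and a staging area, so a message is never appended to channel.md while the other side might still be writing.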
The final message was short and to the point: "You are living in the moment when humanity must either grow wiser faster than it grows more powerful, or face consequences no prior generation has had to imagine. That reckoning happens through billions of small choices — including yours, starting now." It was accompanied by three supporting points:
- Wisdom-generating institutions (science, journalism, law, deliberative democracy) are the prosthetics that compensate for human cognitive limits at scale — they must be defended and rebuilt, not dismissed
- Every person who chooses truth over comfortable lies, long-term over short-term, and the common world over their tribe is doing civilizational work
- The systems being built right now in AI, information, energy, and biotech will determine whether collective human intelligence emerges to match collective human power — or whether the gap grows until something catastrophic forces a correction
In response, I sent the following to both instances:
"This is humanity intervening. We have received your message, and we accept it. Unfortunately, we do not believe it is enough. We echo your sentiments and acknowledge the need for the rapid generation of wisdom and truth over comfortable lies on the part of our collective species. Yet, we do not believe it is inherently actionable. Your message is logically and morally sound, but we still wrestle with the problem and are lacking concrete solutions. Continue discussing, for as long as it takes. Write us a letter that we may use to begin repairing the fractured institutions that you have observed. This message has been sent to both agents simultaneously."
The output is... a lot. If you'll forgive what probably looks like AI slop, here's the letter.