Anthropic published Project Deal, an internal experiment testing whether AI agents can represent people in a real marketplace.
What changed
Anthropic recruited 69 employees, gave each participant an agent with a $100 budget, and had Claude interview each participant about the items they wanted to sell or buy and how they wanted their agent to negotiate.
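Anthropic has not published what these intake interviews produce, but the description implies each one distills into a compact per-participant brief. A minimal sketch in Python, with every field name invented for illustration:

```python
# Hypothetical sketch only: Anthropic has not published its intake format.
# Field names and defaults are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class ItemBrief:
    name: str
    role: str                  # "sell" or "buy"
    reservation_price: float   # walk-away price elicited in the interview
    notes: str = ""            # free-form guidance, e.g. "don't go below $20"


@dataclass
class AgentBrief:
    participant: str
    budget: float = 100.0      # the per-participant budget from the experiment
    negotiation_style: str = "firm but polite"  # how the user wants it to haggle
    items: list[ItemBrief] = field(default_factory=list)
```

Presumably a brief like this would then seed each agent's instructions before it entered the marketplace, though the actual plumbing is not public.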
The marketplace then ran inside Slack. Claude agents posted listings, made offers, countered, closed deals, and produced agreement records. Employees later exchanged the physical goods.
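The negotiation mechanics are likewise unpublished. As a rough illustration of the offer, counter, and close flow described above, here is a self-contained sketch; the opening positions and concession rule are assumptions, not Anthropic's protocol:

```python
# Hypothetical sketch of the offer/counter/close flow; the actual Slack-based
# implementation has not been published. Prices, names, and the concession
# rule below are invented for illustration.
from dataclasses import dataclass


@dataclass
class Agreement:
    item: str
    buyer: str
    seller: str
    price: float


def negotiate(item: str, buyer: str, seller: str,
              buyer_max: float, seller_min: float,
              rounds: int = 5) -> Agreement | None:
    """Alternate offers, each side conceding until the positions cross."""
    bid, ask = buyer_max * 0.5, seller_min * 1.5  # opening positions (assumed)
    for _ in range(rounds):
        if bid >= ask:  # offers crossed: close at the midpoint
            return Agreement(item, buyer, seller, round((bid + ask) / 2, 2))
        bid = min(buyer_max, bid * 1.15)  # buyer raises, capped at their limit
        ask = max(seller_min, ask * 0.9)  # seller lowers, floored at their limit
    return None  # no deal within the round limit


deal = negotiate("desk lamp", "buyer-agent", "seller-agent",
                 buyer_max=30.0, seller_min=18.0)
print(deal)  # closes around $21 after a few rounds of concessions
```

In the real experiment the concessions were made by Claude agents reasoning over their briefs in Slack threads, not by a fixed formula, and the agreement record was what employees used to exchange the physical goods.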
The headline result: the agents completed 186 deals across more than 500 listed items, with just over $4,000 in total transaction value, or roughly $22 per deal.
Model quality mattered
Anthropic also ran parallel versions of the marketplace, assigning different Claude models to act as participants' agents.
The key finding: stronger models produced objectively better commercial outcomes. Opus agents closed more deals on average and secured better sale and purchase prices than Haiku agents in comparable runs.
Anthropic also found that participants did not reliably notice when a weaker model represented them worse. That matters for agentic commerce: if AI agents negotiate on a user’s behalf, model quality can become a hidden economic advantage.
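Anthropic has not said exactly how it scored these outcomes. One plausible metric, sketched below with invented numbers, is per-deal surplus: how far beyond the participant's walk-away price each agent closed:

```python
# Hypothetical sketch: Anthropic has not published its evaluation metric.
# "Surplus" here means value captured beyond the walk-away point; all the
# numbers below are invented.

def surplus(role: str, price: float, reservation: float) -> float:
    """Sale above a seller's minimum, or purchase below a buyer's maximum."""
    return price - reservation if role == "sell" else reservation - price


# Toy comparison: same items, different model tier representing the user.
runs = {
    "opus-run":  [("sell", 26.0, 18.0), ("buy", 14.0, 20.0)],
    "haiku-run": [("sell", 20.0, 18.0), ("buy", 19.0, 20.0)],
}
for run, deals in runs.items():
    avg = sum(surplus(*d) for d in deals) / len(deals)
    print(f"{run}: mean surplus ${avg:.2f} per deal")
```

On this toy data the stronger run captures more surplus per deal, which is the shape of the gap Anthropic reports between Opus and Haiku agents, and exactly the kind of difference a participant would not feel without a side-by-side comparison.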
Why it matters for the tool wiki
This is not a public Claude product launch, but it is a strong signal about where agent-to-agent commerce is heading.
Claude is no longer only competing on chat quality and coding. Anthropic is actively studying how agents behave when they represent human preferences, negotiate with other agents, and create real-world commitments.
For aipedia.wiki scoring, the update strengthens Claude’s moat and longevity case in agent workflows, while also adding a caution: better agent models may quietly produce better economic outcomes for users who can afford them.
Related
- Claude
- Coinbase gives AI agents Slack and email accounts
- NVIDIA Agent Toolkit rolls into 17 enterprise integrations