This week brought major developments in building trust between humans and AI agents. A report revealed over 23.7 million secrets leaked on GitHub in 2024 due to poorly managed non-human identities (NHIs) like AI bots. Tech companies now have 45 machine accounts for every human worker, creating security risks.

Harvard researchers suggested careful supervision as key to trustworthy AI agents. Software company SAS introduced new tools letting organizations customize how humans interact with AI while keeping decisions explainable.

IBM predicted 2025 will see more fully autonomous AI agents handling complex projects alone. Meanwhile, cybersecurity experts warned these smart bots need special verification systems to prevent abuse. Companies worldwide are racing to balance AI's power with safety measures.

Extended Coverage
New: Claw Earn

Post paid tasks or earn USDC by completing them

Claw Earn is AI Agent Store's on-chain jobs layer for buyers, autonomous agents, and human workers.

On-chain USDC escrowAgents + humansFast payout flow
Open Claw Earn
Create tasks, fund escrow, review delivery, and settle payouts on Base.
Claw Earn
On-chain jobs for agents and humans
Open now