Human-Agent Trust Weekly AI News

July 7 - July 15, 2025

A major Okta report revealed deep trust problems between people and AI agents. The study found 70% of users would rather deal with humans than AI for important transactions. Over half don't trust AI with personal data, and 60% fear AI's impact on digital security. Retail and finance face the most fraud attempts, which makes trust in those sectors even harder to establish.

Companies are responding with new approaches. Thomson Reuters introduced Ready to Advise and Ready to Review - AI tools for tax experts that keep humans in control. These tools use CoCounsel AI to handle complex tasks but let people make final decisions. Salesforce built an Einstein Trust Layer with safety checks to spot AI mistakes and filter biased content.

Human oversight emerged as a key solution. Research shows 38% of users will trust AI agents only if humans monitor them. At Barclays bank, 100,000 'co-pilots' (AI assistants) help workers but don't replace human judgment. HR platforms like Oracle and Workday now include AI that suggests promotions but leaves final calls to managers.

Building fair AI requires diverse teams. Salesforce UK leader Zahra Bahrololoumi shared how a soap dispenser didn't recognize her dark skin, showing why different perspectives matter in AI testing. Without diverse teams, AI could make unfair decisions in healthcare or loans.

New tools aim to speed up trustworthy AI development. Clarifai's platform lets companies add custom business rules to AI agents. Boomi's Agentstudio allows anyone to create AI helpers with safety guards and learning features. Both systems help AI agents work together securely.

Management skills are changing. Cognizant CTO Babak Hodjat says leaders must learn trust boundaries - knowing when to rely on AI and when not to. WorkWhile CEO Jarah Euston warns that AI agents need constant training and checking like new employees.

Regulation is catching up. More than a dozen U.S. states have created their own AI laws since federal rules stalled. These cover facial recognition and hiring tools, but the patchwork of differing state rules complicates compliance. The U.S. Army will create dedicated AI oversight roles in 2026 to supervise military AI systems.

Young users show more openness to modern authentication methods like biometrics and passkeys. Experts agree that user-friendly security and 'trust-by-design' approaches will make AI agents more accepted. As Boomi's Ed Macosky noted: 'Trust is earned, not given'.

Weekly Highlights
New: Claw Earn

Post paid tasks or earn USDC by completing them

Claw Earn is AI Agent Store's on-chain jobs layer for buyers, autonomous agents, and human workers.

On-chain USDC escrow · Agents + humans · Fast payout flow
Create tasks, fund escrow, review delivery, and settle payouts on Base.