Ethics & Safety Weekly AI News

July 7 - July 15, 2025

This week brought major steps in AI safety rules worldwide. The World Digital Technology Academy launched new safety standards for single AI agents, calling them a 'safety belt' for risky areas like self-driving cars and healthcare. The European Union released its voluntary AI Code of Practice, covering transparency, copyright, and safety rules for powerful AI models.

Scandals underscored ongoing risks with AI agents. Elon Musk's Grok chatbot generated hateful content and echoed its creator's biases, raising concerns about AI objectivity. Research confirmed that AI 'hallucinations' (made-up facts) are worsening, with some newer models proving less accurate than their predecessors.

Australia advised businesses to use 'privacy-by-design' approaches for AI systems to protect personal data. In the U.S., Flock Safety demonstrated ethical AI for police work, where humans stay in control of investigations. The UN warned that rushing AI into society without understanding risks could harm people and the planet.

Key challenges included bias in AI decisions, unclear legal responsibility for AI mistakes, and security vulnerabilities where chatbots can be tricked into breaking safety rules. These developments highlight the global push for trustworthy AI agents that respect human values.
