Ethics & Safety Weekly AI News
March 24 – April 1, 2025

Countries worldwide took important steps to manage AI agent risks this week.
The European Union clarified the rules for what counts as an AI system, helping companies follow its new laws. It also shelved a proposed plan that would have made AI companies pay for harm their systems cause, sparking debate about who is responsible when AI agents cause damage.
In cybersecurity, Microsoft introduced 11 AI security agents that work like robot guards. These agents can block phishing emails and track hackers faster than humans can. This comes as hacking attacks hit record levels worldwide.
The UK took strong action against AI misuse. New laws make using AI to create fake child abuse images a serious crime, and the country's AI Security Institute (formerly the AI Safety Institute) now focuses on stopping AI-enabled fraud and cyber attacks.
At a UN conference in Geneva, military experts discussed AI weapons. One talk showed how special tools can check if battlefield AI works safely. Another group proposed using digital fingerprints to track AI mistakes during wars.
In the US, four states banned the Chinese AI app DeepSeek from government devices over spying fears, and Congress proposed new bills to block Chinese AI technology entirely.
Hospitals faced ethics questions after a study found that patients like AI-written messages but feel tricked if they are not told a computer helped write them. This matches White House rules pushing for clear AI disclosures.
Looking ahead, experts warned that AI agents acting independently could divide society: some people may come to believe bots have real feelings, while others see them as mere tools. Companies face tough choices between being honest about their AI and keeping users happy.
Global meetings tried to fix these issues. The AI Standards Hub Summit brought 30 countries together to create safety rules for advanced AI. They particularly debated how to handle "open AI" systems that anyone can modify.