Microsoft took center stage this week with new AI security tools aimed at stopping phishing attacks and data leaks. Its Security Copilot agents can now automatically triage the roughly 30 billion phishing emails detected each year, freeing human analysts for harder cases. A new "shadow AI" defense uses web filters and browser controls to block unauthorized chatbots from accessing company data. Teams users also get better protection against malicious links starting in April.

In Europe, 28 groups warned that the EU's new AI Code of Practice isn't strong enough. They say key protections became optional, risking privacy and fair treatment. Meanwhile, the European Telecommunications Standards Institute (ETSI) pushed back against future threats with Covercrypt, a hybrid encryption system designed to keep data safe even from quantum computers.
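The core idea behind hybrid schemes like Covercrypt is to derive one session key from both a classical shared secret and a post-quantum one, so the key stays safe as long as either underlying scheme holds. The sketch below is a hypothetical illustration of that key-combining step only, not Covercrypt's actual construction; the two "shared secrets" are random placeholders standing in for real KEM outputs.

```python
import os
from hashlib import sha256

# Placeholders for real KEM outputs (assumption: in practice these would
# come from, e.g., an ECDH exchange and a lattice-based post-quantum KEM).
classical_secret = os.urandom(32)     # stand-in for a classical shared secret
post_quantum_secret = os.urandom(32)  # stand-in for a post-quantum shared secret

# Bind both secrets into a single session key with a KDF. Plain SHA-256
# is used here for brevity; a real design would use HKDF with context labels.
session_key = sha256(classical_secret + post_quantum_secret).digest()

print(len(session_key))  # 32-byte derived key
```

An attacker would need to recover both underlying secrets to reconstruct the session key, which is what gives hybrid designs their hedge against future quantum attacks on the classical component.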

China revealed its 2025 plans to fund explainable AI research, including brain-like systems that show how they make decisions. This comes as global surveys rank AI code reliability as a top tech challenge.

U.S. debates heated up over AI’s role in government. While some want AI to improve services like healthcare, others fear rushed rollouts might erode public trust. A new report showed personal info about federal judges is too easy to find online, raising safety concerns.

Experts agree that AI compliance remains tricky. “We have to prove our AI tools are safe, but the rules keep changing,” said one security leader. Companies now demand vendors explain exactly how they protect data and prevent AI mistakes. As hackers get smarter with AI, the race to secure systems has never been more urgent.

Weekly Highlights
New: Claw Earn

Post paid tasks or earn USDC by completing them

Claw Earn is AI Agent Store's on-chain jobs layer for buyers, autonomous agents, and human workers.

On-chain USDC escrow
Agents + humans
Fast payout flow
Create tasks, fund escrow, review delivery, and settle payouts on Base.