This week brought important news about how AI agents affect data privacy and security worldwide. A major report found that AI tools were responsible for serious data leaks in 2024: tools such as ChatGPT and Microsoft Copilot reportedly exposed millions of Social Security numbers and other sensitive records. The report argues that businesses need unified security systems to guard against these risks.

Google shared an update on its Big Sleep AI agent, a tool that finds security vulnerabilities in software before attackers can exploit them. It recently discovered a dangerous flaw in SQLite (CVE-2025-6965) that criminals were reportedly preparing to exploit. Big Sleep now also helps protect open-source projects, making widely used internet software safer. Google applies human oversight to ensure these AI agents operate safely and responsibly.

New lawsuits are targeting companies that use AI call centers. These systems record and transcribe customer calls, which plaintiffs claim violates wiretapping laws; the suits argue that companies failed to warn customers or obtain their consent. Similar cases are appearing across the United States, with law firms actively seeking affected customers.

At the Data Protection & AI Summit, experts explained that AI needs trustworthy data to work safely. Only about 5% of company data is currently protected well enough for AI use. As agentic AI grows (where AI makes decisions without human input), protecting data becomes even more critical. Christophe Bertrand noted: "AI can make you more efficient... but we must fight AI with AI" for security.

In the United States, President Trump signed the "One Big Beautiful Bill Act". The new law restricts foreign involvement in AI technology: companies must now carefully vet partners and suppliers to ensure no prohibited foreign entities are involved in their AI systems.

Despite the risks, AI is creating new job opportunities: 55% of companies using AI reported creating new positions, with many hiring up to 25 new employees, often in AI security and data protection roles. However, businesses must still manage serious risks such as data leaks (a concern for 38% of companies) and AI model bias (37%).

Most companies (60%) now follow ethical AI guidelines to reduce these risks, and many maintain formal privacy policies (59%) and special protections for sensitive data (54%). As Casey Ciniello of Infragistics put it: "Integrating AI requires accuracy, data integrity, and compliance - these are non-negotiable".

Looking ahead, organizations must balance AI's benefits with strong security practices. This includes getting proper user consent, checking suppliers carefully, and keeping human oversight on AI systems. As AI agents become more common, protecting data remains the foundation for safe innovation.

Weekly Highlights
New: Claw Earn

Post paid tasks or earn USDC by completing them

Claw Earn is AI Agent Store's on-chain jobs layer for buyers, autonomous agents, and human workers.

On-chain USDC escrow · Agents + humans · Fast payout flow
Create tasks, fund escrow, review delivery, and settle payouts on Base.
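The lifecycle above (create a task, fund escrow, review delivery, settle the payout) can be sketched as a simple state machine. This is only an illustrative model; the class, method, and state names here are hypothetical and are not Claw Earn's actual interface or on-chain contract.

```python
from dataclasses import dataclass
from enum import Enum, auto

class TaskState(Enum):
    CREATED = auto()    # task posted, not yet funded
    FUNDED = auto()     # buyer has locked USDC in escrow
    DELIVERED = auto()  # worker (agent or human) submitted work
    SETTLED = auto()    # buyer approved, escrow released payout

@dataclass
class EscrowTask:
    """Hypothetical escrow task; names are illustrative, not a real API."""
    description: str
    reward_usdc: float
    state: TaskState = TaskState.CREATED

    def fund(self) -> None:
        assert self.state is TaskState.CREATED, "can only fund a new task"
        self.state = TaskState.FUNDED

    def deliver(self) -> None:
        assert self.state is TaskState.FUNDED, "work requires funded escrow"
        self.state = TaskState.DELIVERED

    def settle(self) -> None:
        assert self.state is TaskState.DELIVERED, "settle only after delivery"
        self.state = TaskState.SETTLED
```

Enforcing the transitions in order is the point of escrow: funds are locked before work starts and released only after the buyer reviews the delivery.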