Ethics & Safety Weekly AI News
May 26 - June 3, 2025

The RSA 2025 conference highlighted major advances in Agentic AI for cybersecurity. Companies like Google and SentinelOne unveiled AI tools that act like human security experts, automatically investigating threats and managing risks. For example, ArmorCode’s Anya AI helps teams fix app security issues faster by sorting through alerts. While these tools save time, experts note that trust in AI is still low, mirroring early doubts about cloud computing and automation.
New AI laws in Europe and California are forcing companies to be more open. The EU’s AI Act and California’s Transparency Act require clear explanations of how AI agents make decisions, especially in hiring or law enforcement. Companies must now document every step their AI takes, making it easier to spot errors or bias. Dr. Vivian Lyon, a cybersecurity expert, warns that without these rules, AI could accidentally harm people by making rushed decisions.
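To make "document every step" concrete, here is a minimal sketch of a decision audit trail. The `DecisionAuditLog` class, its method names, and the hiring example are hypothetical illustrations, not any regulator's or vendor's actual API; the idea is simply an append-only record of each agent decision that can later be reviewed for errors or bias.

```python
import json
from datetime import datetime, timezone

class DecisionAuditLog:
    """Illustrative append-only audit trail for agent decisions (hypothetical API)."""

    def __init__(self):
        self.records = []

    def record(self, step, inputs, decision, rationale):
        # Capture what the agent saw, what it decided, and why, with a timestamp.
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "step": step,
            "inputs": inputs,
            "decision": decision,
            "rationale": rationale,
        }
        self.records.append(entry)
        return entry

    def export(self):
        # One JSON line per decision, suitable for later audit or bias review.
        return "\n".join(json.dumps(r, sort_keys=True) for r in self.records)

log = DecisionAuditLog()
log.record(step="screen_resume",
           inputs={"applicant_id": "A123"},
           decision="advance",
           rationale="meets required skills list")
```

Because each record names the inputs and rationale alongside the decision, an auditor can trace a biased outcome back to the exact step that produced it.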
Bias in AI remains a big concern. Recent studies show some Agentic AI systems favor certain groups when approving loans or medical treatments. To fix this, developers are using “adversarial debiasing” – a technique that teaches AI to ignore unfair patterns in data. California’s regulators are auditing AI hiring tools to ensure they treat all job applicants equally.
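The adversarial debiasing idea can be sketched in a few dozen lines. This is a toy NumPy illustration under assumed data (a protected attribute that leaks into one feature), not any production system: a logistic predictor is trained against a small adversary that tries to recover the protected attribute from the predictor's score, and gradient reversal penalizes whatever helps the adversary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data (illustrative): feature 0 leaks group membership A.
rng = np.random.default_rng(0)
n, d = 2000, 5
A = rng.integers(0, 2, n).astype(float)        # protected attribute (two groups)
X = rng.normal(size=(n, d))
X[:, 0] += 1.5 * A                             # feature 0 correlates with A
y = (X[:, 0] + X[:, 1] > 0.75).astype(float)   # label partly driven by leaky feature

w, b = np.zeros(d), 0.0                        # predictor parameters
u, c = 0.0, 0.0                                # adversary parameters
lr, lam = 0.1, 1.0                             # learning rate, debiasing strength

for _ in range(500):
    s = X @ w + b                              # predictor logit
    p = sigmoid(s)                             # predicted P(y = 1)
    a_hat = sigmoid(u * s + c)                 # adversary's guess of A from the logit

    # Adversary descends its own loss: it tries to read A out of the score.
    g_adv = a_hat - A
    u -= lr * float(np.mean(g_adv * s))
    c -= lr * float(np.mean(g_adv))

    # Predictor descends task loss MINUS lam * adversary loss (gradient
    # reversal): anything that helps the adversary predict A is penalized,
    # pushing the score toward carrying no information about the group.
    g_pred = (p - y) - lam * g_adv * u
    w -= lr * (X.T @ g_pred) / n
    b -= lr * float(np.mean(g_pred))

acc = float(np.mean((p > 0.5) == (y == 1)))
```

The debiasing strength `lam` controls the accuracy-fairness trade-off: larger values suppress the leaky feature more aggressively at some cost to raw accuracy.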
Cybersecurity teams face a double-edged sword with Agentic AI. While it helps block attacks in real time, hackers are now creating “poisoned” AI models that trick systems into opening backdoors. Jason Elrod of CybersecurityTribe suggests using AI to monitor other AI, creating a safety net against these threats. Meanwhile, banks using fraud-detection systems like Noureen Njorage’s deploy AI to catch fraud, but they must balance speed with careful review to avoid false accusations.
As Agentic AI use grows (25% of companies plan trials this year), ethical guidelines struggle to keep up. Legal experts worry about who’s responsible when AI makes a mistake – the developer, user, or the AI itself. Multi-agent systems, where AIs talk to each other, add more complexity since they can learn unpredictable behaviors. Northeastern University’s report urges governments and companies to jointly create safety standards before AI agents become too widespread.