Human-Agent Trust Weekly AI News
May 26 - June 3, 2025

The security risks of AI agent sprawl took center stage this week. GitGuardian's report found 23.7 million secrets exposed in public GitHub repositories in 2024, and repositories using AI coding assistants like GitHub Copilot leaked secrets 40% more often. With companies now managing roughly 45 machine identities per employee, experts warn this non-human identity (NHI) crisis could hand attackers an ever-growing supply of entry points.
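The "exposed secrets" problem boils down to credentials hardcoded in source files. As a rough illustration (not GitGuardian's actual rule set, which is far larger and vendor-specific), a minimal pattern-based scanner might look like this:

```python
import re

# Hypothetical patterns for two common secret formats; real scanners
# maintain hundreds of vendor-specific rules plus entropy checks.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs for likely hardcoded secrets."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"'
print(find_secrets(sample))  # → [('aws_access_key', 'AKIAIOSFODNN7EXAMPLE')]
```

Pattern matching alone produces false positives and misses randomized tokens, which is one reason production scanners combine regexes with entropy analysis and validity checks.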
Harvard Business Review proposed audit trails and bounded autonomy to keep AI agents honest, comparing AI oversight to managing new employees: supervision comes before full trust. SAS answered this call by launching customizable AI agents with built-in explanation features. Its SAS Viya platform lets companies adjust how much independence to grant an agent while logging every decision.
IBM's researchers shared mixed news about autonomous AI progress. While 99% of surveyed developers are now exploring or building AI agents, most systems still need human checks. The survey found agents can plan projects but struggle when conditions change unexpectedly.
Human Security CEO Stu Solomon highlighted a new challenge: verifying that AI agents are legitimate. His company is adapting its bot-detection technology to spot fake or compromised AI assistants. "We need digital ID cards for bots," he told CRN, comparing the check to verifying a delivery driver's credentials.
Globally, companies face pressure to deploy AI agents quickly while avoiding leaks and misuse. The week's developments show industries moving toward explainable AI and stricter access controls. From Florida's SAS Innovate conference to IBM's worldwide developer survey, the message is clear: future AI systems need both capability and accountability.