It’s been just three years since ChatGPT launched to huge popular acclaim. But already, the industry is looking ahead to the next big wave of innovation: agentic AI. True to form, OpenAI is at the cutting edge again with its new ChatGPT agent offering, which promises “to handle tasks from start to finish” on behalf of its users.

Unfortunately, with greater autonomy comes greater risk. The challenge for corporate IT and security teams will be to empower their users to harness the technology without exposing the business to a new breed of threats. Fortunately, they already have a ready-made approach to help them do so: Zero Trust.

### Huge Gains but Emerging Risks

Agentic AI systems represent a major leap forward from generative AI (GenAI) chatbots. While traditional GenAI chatbots create and summarize content reactively based on prompts, agentic AI is designed to proactively plan, reason, and act autonomously to complete complex, multi-stage tasks. It can even adjust its plans on the fly when presented with new information.

The potential productivity, efficiency, and cost benefits are significant. Gartner predicts that by 2029, agentic AI will “autonomously resolve 80% of common customer service issues without human intervention,” leading to a 30% reduction in operational costs.

However, the same capabilities that make agentic AI exciting for businesses also introduce new risks. With less human oversight, malicious actors could manipulate an AI agent’s actions without raising red flags for users. Because agentic AI can make decisions with irreversible consequences—such as deleting files or sending emails to the wrong recipient—the potential damage is greater unless safeguards are in place.

For example, an attacker might embed a malicious prompt in a webpage the agent visits. Given that agents deeply integrate with broader digital ecosystems, the risk of breaching highly sensitive accounts and information increases significantly. Moreover, because agents can acquire detailed knowledge of their users’ behavior, there are also substantial privacy concerns.

### Why Access Controls Matter

Addressing these challenges starts with identity and access management (IAM). As organizations create a virtual digital workforce made up of AI agents, they must manage the identities, credentials, and permissions these agents require to perform their tasks safely.

Most agents today are generalists rather than specialists. Take the ChatGPT agent, for instance: it can schedule meetings, send emails, interact with websites, and more. This flexibility makes the tool powerful but complicates the application of traditional access control models, which were originally designed around human roles with clearly defined responsibilities.

This means we need to rethink access controls for the agentic AI era. The solution lies in embracing the Zero Trust mantra: **“never trust, always verify.”**

### Zero Trust Reimagined for Agentic AI

What does Zero Trust look like in an agentic AI environment?

– **Assume Unintended Actions:** Agents may perform actions that are unintended or difficult to predict—a reality even OpenAI acknowledges.

– **Treat Agents as Separate Identities:** Instead of thinking of AI agents as extensions of existing user accounts, treat them as distinct identities with their own credentials and permissions.

– **Enforce Access Management at Multiple Levels:** Access control should be applied not only at the agent level but also at the tool level, governing which resources agents can access.

This approach enables more fine-grained controls, ensuring permissions align closely with each specific task. Think of it as “segmentation,” but not in the traditional network sense. Instead, it means restricting agent permissions strictly to the systems and data necessary for their job—and nothing more.

In some cases, applying time-bound permissions can enhance security by limiting how long an agent can access certain resources.
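To make this concrete, here is a minimal sketch of what per-agent, least-privilege, time-bound permissions could look like. It is illustrative only: the `AgentPolicy` class, the tool and resource names, and the one-hour grant are assumptions for the example, not part of any specific product or identity platform.

```python
# A minimal sketch of per-agent, least-privilege, time-bound permissions.
# All names here (AgentPolicy, the example tools and scopes) are illustrative
# assumptions, not part of any specific product or framework.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class AgentPolicy:
    agent_id: str                # the agent's own identity, separate from any human account
    allowed_tools: set[str]      # tools the agent may invoke
    allowed_resources: set[str]  # systems or data scopes the agent may touch
    expires_at: datetime         # time-bound grant: access lapses automatically

    def is_allowed(self, tool: str, resource: str) -> bool:
        """Deny by default; permit only in-scope, unexpired requests."""
        if datetime.now(timezone.utc) >= self.expires_at:
            return False
        return tool in self.allowed_tools and resource in self.allowed_resources


# Example: a scheduling agent gets calendar access for one hour and nothing more.
policy = AgentPolicy(
    agent_id="agent-scheduler-01",
    allowed_tools={"calendar.read", "calendar.create_event"},
    allowed_resources={"calendar:team-meetings"},
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)

print(policy.is_allowed("calendar.read", "calendar:team-meetings"))  # True: in scope
print(policy.is_allowed("email.send", "mailbox:finance"))            # False: out of scope
```

The design choice is simply the Zero Trust default applied to agents: nothing is permitted unless it is explicitly in scope and the grant has not expired.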

### Rethinking Multifactor Authentication (MFA)

Traditional MFA methods don’t easily translate to AI agents. Asking an agent for a second authentication factor offers little security if the agent is compromised.

Instead, human oversight should serve as a second layer of verification, especially for high-risk actions. However, organizations must balance this approach against the risk of consent fatigue—if agents request too many approvals, users may start approving actions reflexively, undermining security.
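The sketch below shows one way that balance could be struck: only actions classified as high-risk pause for human approval, while routine actions proceed unattended. The risk list and the approval flow are assumptions for illustration; a real deployment would route approvals through the organization's own workflow tooling rather than a console prompt.

```python
# A minimal sketch of human-in-the-loop verification for high-risk agent actions.
# The risk classification and console-based approval are illustrative assumptions.
HIGH_RISK_ACTIONS = {"file.delete", "email.send_external", "payment.initiate"}


def execute_with_oversight(action: str, details: str, perform) -> str:
    """Run low-risk actions directly; pause high-risk ones for human approval."""
    if action in HIGH_RISK_ACTIONS:
        # Only high-risk actions prompt the user, which helps limit consent fatigue.
        answer = input(f"Agent wants to {action}: {details}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked: human approval denied"
    return perform()


# Example: sending mail outside the organization requires explicit sign-off.
result = execute_with_oversight(
    "email.send_external",
    "send quarterly report to partner@example.com",
    perform=lambda: "email sent",
)
print(result)
```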

### Visibility and Monitoring

Organizations also need comprehensive visibility into agent activities. Implementing logging systems to record agent actions and monitoring for unusual behavior is essential. This practice supports not only security but also accountability, aligning with core Zero Trust principles.
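As a rough sketch, agent activity can be captured as structured audit records using nothing beyond the standard library. The field names here are assumptions chosen for the example; in practice these records would feed whatever SIEM or log pipeline the organization already runs.

```python
# A minimal sketch of structured audit logging for agent activity.
# Field names are illustrative assumptions; records would normally feed a SIEM.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("agent.audit")


def log_agent_action(agent_id: str, tool: str, resource: str, outcome: str) -> None:
    """Emit one structured record per agent action so behavior can be reviewed later."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "tool": tool,
        "resource": resource,
        "outcome": outcome,
    }))


# Example: record both routine and denied actions for later anomaly review.
log_agent_action("agent-scheduler-01", "calendar.read", "calendar:team-meetings", "allowed")
log_agent_action("agent-scheduler-01", "email.send_external", "mailbox:finance", "denied")
```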

### Looking Ahead

Agentic AI is still in its early days. But for organizations eager to leverage AI that can act autonomously with minimal oversight, managing risk effectively is paramount. The best way to do this is by adopting a Zero Trust mindset—never trusting anything by default.

**Interested in strengthening your cybersecurity knowledge?** Check out the [best online cybersecurity courses](#) to stay ahead in protecting your organization in this new era of AI-driven innovation.