As artificial intelligence continues to redefine our workplaces, we are entering an era where Governance, Risk, and Compliance (GRC) must evolve rapidly to keep pace. At ZeroTrusted.ai, we are seeing firsthand how the growing, and often unchecked, use of Large Language Models (LLMs) is creating serious security and privacy risks for organizations.
Over the past several months, our team has received numerous reports from companies across industries sharing a common theme: employees are bypassing security protocols and uploading sensitive, often proprietary, information into public or unauthorized LLMs. While many are doing this with good intentions—trying to work faster or smarter—these actions are inadvertently exposing businesses to substantial risks.
The Rise of Unregulated AI Usage
A particularly concerning trend is the use of LLMs on personal devices, outside of corporate governance. Once exposed to sensitive data, these models can retain and generate content based on that input—without oversight, control, or accountability. The implications are massive: internal communications, customer data, and trade secrets are potentially being used to train AI systems that exist entirely outside the enterprise’s control.
Jailbroken LLMs and the Escalating Threat Landscape
Even more alarming is the increase in jailbroken LLMs. These versions operate without ethical guardrails and have been exploited to bypass cybersecurity controls, generate harmful or offensive content, and even assist in unethical research. We’ve observed disturbing cases where LLMs were used to generate fake employee imagery, craft scam messages, and even simulate dangerous research related to weapons and chemical compounds.
Governance, Risk, and Compliance Must Catch Up
This is not just a technical issue—it’s a GRC crisis. Organizations must adopt and enforce frameworks that address AI use directly. Unfortunately, too many businesses remain unaware or unprepared. Proper AI governance means aligning with well-established standards and regulations, such as:
- NIST AI RMF (NIST AI 100-1, NIST AI 600-1)
- OWASP Top 10 for LLMs
- ISO/IEC 27001, 42001
- PCI DSS, HIPAA, GDPR, CCPA
- The DHS CVE Program
- The U.S. Blueprint for an AI Bill of Rights & the EU AI Act
Ignoring these frameworks can result in significant reputational damage, legal liabilities, and breaches of customer trust.
Moving Forward: Zero Trust for AI
At ZeroTrusted.ai, we believe the future of secure AI lies in zero trust principles applied to all AI components. This means real-time monitoring, role-based access, policy enforcement, and rigorous model evaluation—whether internal or third-party, open-source or proprietary.
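To make "role-based access and policy enforcement" concrete, here is a minimal sketch of a deny-by-default authorization check at the AI-tool boundary. All tool names, roles, and classification levels are hypothetical illustrations, not a real product API:

```python
from dataclasses import dataclass

# Hypothetical policy table: which roles may use which AI tools,
# and the highest data classification each tool is cleared to receive.
POLICY = {
    "approved-internal-llm": {"roles": {"engineer", "analyst"}, "max_classification": "confidential"},
    "public-llm": {"roles": {"engineer", "analyst", "marketing"}, "max_classification": "public"},
}

# Ordering of classification levels, lowest to highest sensitivity.
CLASSIFICATION_RANK = {"public": 0, "internal": 1, "confidential": 2}

@dataclass
class Request:
    user_role: str
    tool: str
    data_classification: str

def authorize(req: Request) -> bool:
    """Zero-trust style check: deny by default, allow only on explicit policy match."""
    policy = POLICY.get(req.tool)
    if policy is None:
        return False  # unknown or unapproved tool: deny
    if req.user_role not in policy["roles"]:
        return False  # role not cleared for this tool
    # Data may flow only if its classification is within the tool's clearance.
    return CLASSIFICATION_RANK[req.data_classification] <= CLASSIFICATION_RANK[policy["max_classification"]]
```

For example, an engineer sending confidential data to the approved internal LLM is allowed, while the same data sent to a public LLM is denied. The point of the deny-by-default shape is that a tool or role absent from the policy table is automatically blocked rather than silently permitted.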
We urge companies to:
- Establish clear AI usage policies, and enforce them!
- Educate employees on the risks of unsanctioned LLM use. Public LLMs may train on anything you upload, so make sure your staff know the risks.
- Deploy monitoring tools that detect AI-related data exfiltration and block or anonymize sensitive data
- Vet and approve all AI tools before deployment. Read what the LLM provider plans to do with your data, and understand which settings you need to configure to protect it.
- Engage in continuous compliance assessments and audits covering both your AI systems and how they are used in your organization. The practice isn't new, but don't leave AI systems and components out of your assessments. (Incoming sales pitch: we offer real-time and over-time AI compliance assessment and audit tools, so reach out to us at ZeroTrusted.ai.)
- Enforce your AI security, privacy, and ethics via third-party guardrails. Don't trust the AI or the humans to do the right thing, and if you do trust them, verify anyway.
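The monitoring step above can be sketched as a simple outbound-prompt scrubber: detect likely sensitive spans and redact them before anything leaves the organization. The patterns here are deliberately simplistic placeholders; a real DLP or guardrail product would use far richer detectors:

```python
import re

# Hypothetical detection patterns for illustration only.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def anonymize(prompt: str) -> tuple[str, list[str]]:
    """Replace detected sensitive spans with placeholders and report what was found."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings
```

A gateway built around a function like this can either block the request outright when findings are non-empty, or forward the redacted version, depending on policy, and log the findings for audit.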
A Call for Responsible Innovation
AI can be a transformative force for good, but only if it’s developed and deployed responsibly. As business leaders, IT professionals, and security practitioners, we have a responsibility to build a future where innovation doesn’t come at the cost of privacy, ethics, or compliance.
Let’s not wait for the breach to happen—let’s take proactive steps to secure the AI-powered workplace of tomorrow. If you have already had an AI breach, use it to train your staff and help them understand the actual and potential implications.
Stay safe, stay secure, and if the Terminator calls, tell him I’m not home.
Waylon Krush
CEO, ZeroTrusted.ai
✉️ waylon@zerotrusted.ai