Crafting an AI Policy That Safeguards Data Without Stifling Productivity

Chief information security officers (CISOs) are facing a sea change in the threat landscape. Once, they protected companies against people who operated machines, but with recent advances in AI, the machines themselves have become adversaries.

With concerns such as bias, lack of transparency, security gaps, and data privacy more pressing than ever, CISOs must know how to address them as the technology rapidly gains adoption across markets. 

The knee-jerk reaction to unsafe employee use of AI would, of course, be to ban it, and some companies are taking this route. But the benefits that AI brings to the workplace are undeniable and substantial. AI can fuel creativity, improve efficiency, and automate tasks, freeing employees for higher-value work. Every day another company ships a feature leveraging AI, which makes it inevitable that we will all be interacting with it in the near future. So how can a security team enable employees to embrace the benefits of AI while protecting the company, its customers, and its data against the risks? The answer is a thorough corporate AI policy that acknowledges AI's utility while setting clear boundaries to curb unsafe use. 

The Internal Threat That AI Poses

In addition to AI features being added to many productivity tools, the rise of large language models (LLMs) that serve as the foundation for generative AI (GenAI) technology has significantly impacted business risk. Developers, salespeople, and even executive leadership are often tempted to leverage tools like ChatGPT to source creative graphics and visuals for presentations or more quickly generate code for important projects. Although this may seem innocuous, employees can inadvertently submit intellectual property, source code, or, worse, regulated customer data to these chatbots, potentially exposing proprietary information. 

With the exception of emerging enterprise-specific licenses, LLMs generally reserve the right to store user inputs as training data. An executive may not think much of giving an LLM proprietary or financial data to fast-track a presentation summarizing a business unit's quarterly performance, but if the LLM stores this data, it poses grave concerns for your compliance with data handling laws. As the theft of tens of thousands of ChatGPT credentials can attest, data stored by an LLM is not necessarily safe. 

One of the more concerning uses of GenAI involves software development. Once an LLM has ingested code, it could resurface in results generated from others’ prompts. One way to mitigate this is using enterprise-focused GenAI licenses (like ChatGPT Enterprise) that do not ingest inputs as training data. That said, this will only work if employees do not use LLMs on their personal devices. Shadow IT has been a longstanding problem, but the implications are especially dire for employees submitting sensitive code through non-enterprise LLM licenses. 
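One way to operationalize the enterprise-versus-consumer distinction is at the network edge. Below is a minimal, hypothetical sketch of the allow/block/review logic a forward proxy or secure web gateway might apply to outbound LLM traffic. The host names, the llm-gateway.example.com endpoint, and the three-way policy are illustrative assumptions, not any specific product's behavior.

```python
from urllib.parse import urlparse

# Hypothetical policy: sanctioned AI traffic goes through one enterprise
# gateway; well-known consumer chatbot hosts are treated as unsanctioned.
APPROVED_HOSTS = {"llm-gateway.example.com"}  # enterprise endpoint (assumed)
BLOCKED_HOSTS = {
    "chat.openai.com", "chatgpt.com",         # consumer front ends
    "gemini.google.com", "claude.ai",
}

def classify_llm_request(url: str) -> str:
    """Return 'allow', 'block', or 'review' for an outbound request."""
    host = (urlparse(url).hostname or "").lower()
    if host in APPROVED_HOSTS:
        return "allow"   # enterprise tier: inputs are not used for training
    if host in BLOCKED_HOSTS:
        return "block"   # consumer tier: prompts may be retained as training data
    return "review"      # unknown AI endpoint: route to security for evaluation

if __name__ == "__main__":
    for url in (
        "https://llm-gateway.example.com/v1/chat",
        "https://chat.openai.com/backend-api/conversation",
        "https://new-ai-tool.example.net/api",
    ):
        print(url, "->", classify_llm_request(url))
```

In practice this logic would live in an existing proxy or CASB policy rather than standalone code, and it cannot reach personal devices off the corporate network, which is exactly why the policy work described below still matters.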

Three Steps to Creating an AI Policy (and a Fourth to Make It Stick)

Despite the risks, AI's benefits and growing prevalence cannot be overstated. AI is here to stay, and the introduction of enterprise-oriented GenAI licenses will surely cause adoption among businesses to grow even faster. Because CISOs aim to protect their organizations while also creating pathways for employee and business success, defining a formal AI policy that makes AI usage safer is a far more effective approach than an outright ban. 

Successfully creating an AI policy that outlines appropriate use while providing necessary guardrails that minimize risk requires several considerations: 

  • Make policy development a companywide effort

AI will eventually impact every business area, so any policy exclusively written by the CISO (or their office) is doomed from the start. Policy drafting must be a collaborative process with key stakeholders across the business. This is the best way to identify the potential for risk, the company’s risk tolerance, and the middle ground between the two where an AI policy should live. 

  • Establish general ground rules

Many AI use cases have nuance, but a few behaviors are clearly responsible or irresponsible, and those should serve as the baseline around which to build a policy. Universally undesirable practices include uploading source code or sensitive data (whether from a customer or your company) to an LLM that reserves the right to train on that data, and failing to validate outputs before including them in your work. Universally good practices include using enterprise licenses to keep data secure, inputting only innocuous or public data into non-enterprise LLMs, validating outputs with thorough testing, and ensuring that company IP is not jeopardized. For a minimal sketch of how such ground rules might be checked in code, see the example after this list. 

  • Create an ongoing process to make case-by-case decisions

I’ve previously written about the value of building an affirmational security culture of “yes” that focuses on helping employees accomplish what they want with a “yes, and …” approach. No AI policy will be sufficiently comprehensive to cover every possible employee request. It is essential to create a straightforward process by which employees can submit use cases for evaluation and approval (or a discussion of what modifications are necessary for approval). 

  • Once a policy is in place, champion success stories

Given that much of the conversation about AI involves passionate speculation around hypotheticals, focusing on concrete use cases and real-life wins can go a long way toward making employees take an AI policy seriously. This is where company visionaries who find ways to accelerate their work with AI safely can play an impactful role. Highlight them, have them present their strategies to the rest of the company, and make clear that you’re excited to hear and share more success stories. The efficient use of AI is its own reward in this case, so extrinsic rewards are likely unnecessary. 
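To make the general ground rules above concrete (as referenced in that step), here is a minimal sketch of a pre-submission check a security team might publish as a shared utility: it scans a prompt for patterns suggesting secrets, source code, or regulated data before anything is sent to a non-enterprise LLM. The rule names and regexes are illustrative assumptions; a real deployment would lean on a vetted secret-scanning or DLP tool with a far broader ruleset.

```python
import re

# Illustrative patterns only -- a production check would use a maintained
# secret-scanning/DLP ruleset, not four hand-written regexes.
SUSPICIOUS_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "likely source code": re.compile(r"(?:\bdef |\bclass |\bimport |#include|\bfunction\s*\()"),
    "US SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of every rule the prompt trips; empty means OK to send."""
    return [name for name, pattern in SUSPICIOUS_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    risky = "def rotate_keys():\n    aws_key = 'AKIAABCDEFGHIJKLMNOP'"
    findings = check_prompt(risky)
    if findings:
        print("Blocked before submission:", ", ".join(findings))
    else:
        print("No obvious sensitive content detected.")
```

A check like this embodies the baseline rules rather than replacing judgment: it catches the obvious mistakes automatically, while the case-by-case review process handles everything the patterns cannot.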

Given how rapidly the field is evolving, new AI challenges will inevitably emerge, requiring updates to the company policy. But a policy with clearly outlined best practices, sourced from throughout the company, with a straightforward process for incorporating changes, will provide the necessary flexibility that empowers employees to capitalize on the benefits of AI while minimizing the company’s risk exposure.
