
Adapting Security to Protect AI/ML Systems


Artificial intelligence (AI) isn’t just the latest buzzword in business; it’s rapidly reshaping industries and redefining business processes. Yet as companies race to integrate AI and machine learning (ML) into every facet of their operations, they are also introducing new security and risk challenges. As organizations focus on agile development practices to gain a competitive advantage, security takes a backseat. This was the case in the early days of the World Wide Web and mobile applications, and we’re seeing it again in the sprint to AI.

The way AI and ML systems are built, trained, and operated differs significantly from the development pipeline of traditional IT systems, websites, or apps. While some of the risks that apply in traditional IT security remain relevant in AI/ML, there are several significant and challenging differences. Unlike a Web application that relies on a database, AI applications are powered by ML models. Building a model involves collecting, sanitizing, and refining data; training ML models on that data; and then running those models at scale to make inferences and iterating based on what they learn.

There are four main areas where traditional software and AI/ML development diverge: changed states versus dynamic states, rules and terms versus use and input, proxy environments versus live systems, and version control versus provenance changes.

Open source AI/ML tools, such as MLflow and Ray, provide convenient frameworks for building models. But many of these open source software (OSS) tools and frameworks have shipped with out-of-the-box vulnerabilities that could lead to serious exploitation and harm. Beyond individual flaws, AI/ML libraries themselves create a much larger attack surface, since they hold massive amounts of data and models that are only as safe as the AI/ML tool they’re saved in. If these tools are compromised, attackers can access multiple databases’ worth of confidential information, modify models, and plant malware.

Security by Design for AI/ML

Traditional IT security lacks several key capabilities for protecting AI/ML systems. First is the ability to scan the tools data scientists use to develop the building blocks of AI/ML systems, such as Jupyter Notebooks and other components of the AI/ML supply chain, for security vulnerabilities.
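As a rough illustration of what such scanning could look like, the sketch below (written for this article, not taken from any vendor's tool) walks a directory of .ipynb files and flags code cells that call functions commonly abused for command execution or unsafe deserialization. The directory name and the pattern list are assumptions chosen for the example.

    import json
    import re
    from pathlib import Path

    # Patterns that often warrant review in shared notebooks (illustrative, not exhaustive).
    SUSPICIOUS = [
        r"\bos\.system\(",
        r"\bsubprocess\.",
        r"\beval\(",
        r"\bexec\(",
        r"\bpickle\.loads?\(",
    ]

    def scan_notebook(path: Path) -> list:
        """Return findings for code cells that match a suspicious pattern."""
        findings = []
        nb = json.loads(path.read_text(encoding="utf-8"))
        for index, cell in enumerate(nb.get("cells", [])):
            if cell.get("cell_type") != "code":
                continue
            source = cell.get("source", "")
            if isinstance(source, list):
                source = "".join(source)
            for pattern in SUSPICIOUS:
                if re.search(pattern, source):
                    findings.append(f"{path.name}, cell {index}: matches {pattern}")
        return findings

    if __name__ == "__main__":
        # "notebooks/" is a placeholder directory for this example.
        for nb_path in Path("notebooks").rglob("*.ipynb"):
            for finding in scan_notebook(nb_path):
                print(finding)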

While data protection is a central component of IT security, in AI/ML it takes on added importance, since live data is constantly being used to train a model. This leaves the door open for attackers to manipulate AI/ML data, which can corrupt models so they no longer perform their intended functions.

In AI/ML environments, data protection requires the creation of an immutable record that links data to the model. If the data is modified or altered in any way, a user who wants to retrain the model would see that the hash values (used to verify the integrity of the data) no longer match. This audit trail creates a record of when the data file was edited and where that data is stored, to determine whether there was a breach.
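A minimal sketch of that idea, assuming a single training data file and a simple JSON Lines record, might look like the following: it stores a SHA-256 digest of the data alongside the model artifact and verifies it before retraining. The file names and record format are illustrative, not a prescribed standard.

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def record_lineage(data_path: Path, model_path: Path, record_path: Path) -> None:
        """Append an entry linking the model to the exact data it was trained on."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "data_file": str(data_path),
            "data_sha256": sha256_of(data_path),
            "model_file": str(model_path),
        }
        with record_path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    def verify_before_retrain(data_path: Path, record_path: Path) -> bool:
        """Return True only if the current data hash matches the last recorded one."""
        entries = [json.loads(line) for line in record_path.read_text().splitlines()]
        return sha256_of(data_path) == entries[-1]["data_sha256"]

    # Example usage (paths are placeholders):
    # record_lineage(Path("train.csv"), Path("model.pkl"), Path("lineage.jsonl"))
    # assert verify_before_retrain(Path("train.csv"), Path("lineage.jsonl"))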

Additionally, scanning AI/ML models is required to detect security threats such as command injection. A model is an asset that lives in memory, but when it is saved to disk (for example, to share with co-workers), code can be injected into the serialized format. The model will continue to run exactly as it did before, yet it will also execute arbitrary code.
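Python's pickle format, one common way models are saved to disk, makes the risk concrete: a pickle can instruct the loader to import and call functions such as os.system. The sketch below walks a pickle file's opcodes and flags imports from modules that can run commands; real model scanners go much further, so treat this as an illustration only.

    import pickletools
    from pathlib import Path

    # Module names that rarely belong in a benign model pickle (illustrative list).
    RISKY_MODULES = {"os", "posix", "nt", "subprocess", "builtins"}

    def flag_risky_imports(model_path: Path) -> list:
        """Flag pickle opcodes that import from modules capable of running commands.

        Older protocols use GLOBAL with a "module name" argument; newer protocols
        push the module name as a string before STACK_GLOBAL, so string opcodes
        are checked for exact module-name matches as well.
        """
        findings = []
        data = model_path.read_bytes()
        for opcode, arg, pos in pickletools.genops(data):
            if opcode.name == "GLOBAL" and str(arg).split()[0] in RISKY_MODULES:
                findings.append(f"{model_path.name} @ {pos}: GLOBAL {arg}")
            elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE") and arg in RISKY_MODULES:
                findings.append(f"{model_path.name} @ {pos}: string '{arg}' (possible STACK_GLOBAL import)")
        return findings

    if __name__ == "__main__":
        # "models/" is a placeholder directory for this example.
        for path in Path("models").rglob("*.pkl"):
            for finding in flag_risky_imports(path):
                print(finding)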

Given these unique challenges, here are a few useful best practices to consider:

  • Find dependencies and their vulnerabilities: Contextualized visibility and strong query tools can generate a wide-ranging view of all ML systems in real time. It should span all vendors, cloud providers, and supply chain resources involved in AI/ML development to provide a view of all dependencies and threats. A dynamic ML bill of materials (ML BOM) can list all components and dependencies, giving the organization full provenance for every AI/ML system in the network (a minimal BOM sketch follows this list).

  • Secure cloud permissions: Cloud containers that leak data can be a fatal flaw in AI security, given the model’s reliance on that data for learning. Scanning cloud permissions is a priority for preventing data loss (see the permissions-check sketch after this list).

  • Prioritize data storage security: Implement integrated security checks, policies, and gates that automatically report and alert on policy violations, enforcing model security.

  • Scan development tools: Just as development operations evolved into development security operations, AI/ML development needs to build security into the development process, scanning development environments and tools such as MLflow, along with their dependencies, all AI/ML models, and data inputs, for vulnerabilities.

  • Audit regularly: Automated tools can provide the necessary immutable ledgers that serve as timestamped versions of the AI/ML environment. These support forensic analysis after a breach, showing who may have violated policy, where, and when. Regular audits also help keep protections current as the threat landscape evolves.
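As a starting point for the ML BOM idea above, the following sketch records the Python packages installed in a training environment, with their versions, into a simple JSON document. The output format and file name are assumptions for illustration; a production ML BOM would follow an established standard and also capture datasets and model artifacts.

    import json
    import platform
    from datetime import datetime, timezone
    from importlib import metadata

    def build_ml_bom(output_path: str = "ml_bom.json") -> dict:
        """Write a minimal bill of materials for the current training environment."""
        packages = {dist.metadata["Name"]: dist.version for dist in metadata.distributions()}
        bom = {
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "python_version": platform.python_version(),
            "packages": sorted(packages.items()),
        }
        with open(output_path, "w", encoding="utf-8") as f:
            json.dump(bom, f, indent=2)
        return bom

    if __name__ == "__main__":
        build_ml_bom()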
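And for the cloud permissions check, here is a sketch that assumes AWS S3 and the boto3 library (other providers expose analogous APIs). It flags buckets whose public-access block is missing or incomplete; it is a starting point, not a complete posture assessment.

    import boto3
    from botocore.exceptions import ClientError

    def find_potentially_public_buckets() -> list:
        """Return bucket names whose public-access block is missing or incomplete."""
        s3 = boto3.client("s3")
        flagged = []
        for bucket in s3.list_buckets()["Buckets"]:
            name = bucket["Name"]
            try:
                config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
                if not all(config.values()):
                    flagged.append(name)
            except ClientError:
                # No public-access block configured at all: worth reviewing.
                flagged.append(name)
        return flagged

    if __name__ == "__main__":
        for name in find_potentially_public_buckets():
            print(f"Review bucket permissions: {name}")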

To tap AI’s potential while addressing its inherent security risks, organizations should adopt the best practices listed above and begin putting MLSecOps into practice.
