AWS CISO: Pay Attention to How AI Uses Your Data

Enterprises are increasingly adopting generative AI to automate IT processes, detect security threats, and take over front-line customer service functions. An IBM survey in 2023 found that 42% of large enterprises were actively using AI, and another 40% were exploring or experimenting with AI.

At the inevitable intersection of AI and the cloud, enterprises need to think about how to secure AI tools. One person who’s thought a lot about this is Chris Betz, who became the CISO at Amazon Web Services last August.

Before AWS, Betz was executive vice president and CISO of Capital One. Betz also worked as senior vice president and chief security officer at Lumen Technologies and in security roles at Apple, Microsoft, and CBS.

Dark Reading recently talked with Betz about the security of AI workloads in the cloud. An edited version of that conversation follows.

Dark Reading: What are some of the big challenges with securing AI workloads in the cloud?

Chris Betz: When I’m talking with a lot of our customers about generative AI, those conversations often start with, “I’ve got this really sensitive data, and I’m looking to deliver a capability to my customers. How do I do that in a safe and secure way?” I really appreciate that conversation because it is so important that our customers focus on the result that they’re trying to achieve.

Dark Reading: What are customers most worried about?

Betz: The conversation needs to start with the concept that “your data is your data.” We have a great advantage in that I get to build on top of IT infrastructure that does a really good job of keeping that data where it is. So the first advice I give is: Understand where your data is. How is it being protected? How is it being used in the generative AI model?
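
To make that advice concrete, here is a minimal sketch of checking how data is protected at rest, assuming the data lives in Amazon S3 and using boto3. The bucket name is a hypothetical placeholder, and this is one illustrative check rather than a complete answer to "where is my data?"

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical bucket that might feed a generative AI workload.
BUCKET = "example-genai-training-data"

s3 = boto3.client("s3")

try:
    # Ask S3 how objects in this bucket are encrypted at rest.
    config = s3.get_bucket_encryption(Bucket=BUCKET)
    for rule in config["ServerSideEncryptionConfiguration"]["Rules"]:
        sse = rule["ApplyServerSideEncryptionByDefault"]
        print(f"{BUCKET}: default encryption = {sse['SSEAlgorithm']}")
except ClientError as err:
    if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
        print(f"{BUCKET}: no default encryption configured")
    else:
        raise
```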

The second thing we talk about is that a company’s interactions with a generative AI model often use some of its customers’ most sensitive data. When you ask a generative AI model about a specific transaction, you’re going to use information about the people involved in that transaction.

Dark Reading: Are enterprises worried both about what the AI does with their internal company data and with customer data?

Betz: Customers most want to use generative AI in their interactions with their own customers, and in mining the massive amount of data they hold internally and making it work for either internal employees or for their customers. It is so important that companies manage that incredibly sensitive data in a safe and secure way, because it is the lifeblood of their businesses.

Companies need to think about where their data is and about how it’s protected when they’re giving the AI prompts and when they’re getting responses back.
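
One illustrative way to act on that advice is to redact obviously sensitive fields before a prompt leaves your trust boundary. This is a sketch under stated assumptions, not an AWS-prescribed method: the regex patterns are placeholders, and a production system would use a dedicated PII-detection service instead.

```python
import re

# Placeholder patterns; real deployments need far more robust detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings before the prompt is sent to a model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Why was the payment from jane@example.com on card 4111 1111 1111 1111 declined?"
print(redact(prompt))
# -> Why was the payment from [EMAIL] on card [CARD] declined?
```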

Dark Reading: Are the quality of responses and the security of the data related?

Betz: AI users always need to think about whether they’re getting quality responses. Security exists so that people can trust their computer systems. If you’re putting together a complex system that uses a generative AI model to deliver something to the customer, you need the customer to trust that the AI is giving them the right information to act on and that it’s protecting their information.

Dark Reading: Are there specific ways that AWS can share about how it’s protecting against attacks on AI in the cloud? I’m thinking about prompt injection, poisoning attacks, adversarial attacks, that kind of thing.

Betz: AWS has been working with AI for years, and with strong foundations already in place, we were well prepared to step up to this challenge. We have a large number of internal AI solutions and a number of services we offer directly to our customers, and security has been a major consideration in how we develop these solutions. It’s what our customers ask about, and it’s what they expect.

As one of the largest-scale cloud providers, we have broad visibility into evolving security needs across the globe. The threat intelligence we capture is aggregated and used to develop actionable insights that are used within customer tools and services such as GuardDuty. In addition, our threat intelligence is used to generate automated security actions on behalf of customers to keep their data secure.
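
For teams that want to work with that output programmatically, GuardDuty findings can be pulled with boto3. A brief sketch, assuming GuardDuty is already enabled (so at least one detector exists) in the account and Region:

```python
import boto3

guardduty = boto3.client("guardduty")

# Assumes GuardDuty is already enabled, so at least one detector exists.
for detector_id in guardduty.list_detectors()["DetectorIds"]:
    # Fetch the IDs of current findings, then hydrate them into full records.
    finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
    if not finding_ids:
        continue
    findings = guardduty.get_findings(
        DetectorId=detector_id,
        FindingIds=finding_ids[:50],  # get_findings accepts at most 50 IDs per call
    )["Findings"]
    for finding in findings:
        print(finding["Severity"], finding["Type"], finding["Title"])
```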

Dark Reading: We’ve heard a lot about cybersecurity vendors using AI and machine learning to detect threats by looking for unusual behavior on their systems. What are other ways companies are using AI to help secure themselves?

Betz: I’ve seen customers do some amazing things with generative AI. We’ve seen them take advantage of CodeWhisperer [AWS’ AI-powered code generator] to rapidly prototype and develop technologies. I’ve seen teams use CodeWhisperer to help them build secure code and close gaps in their code.

We also built generative AI solutions that connect to some of our internal security systems. As you can imagine, many security teams deal with massive amounts of information. Generative AI synthesizes that data into something very usable by both builders and security teams, so they can understand what’s going on in their systems, ask better questions, and pull that data together.
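
As a sketch of that synthesis pattern, a batch of security findings can be handed to a model for summarization, for example through Amazon Bedrock’s Converse API. The model ID and the sample findings below are illustrative assumptions, not a description of AWS’s internal tooling.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

# Illustrative input: findings as they might come from a tool like GuardDuty.
findings = [
    {"Severity": 8.0, "Type": "UnauthorizedAccess:EC2/SSHBruteForce",
     "Title": "SSH brute force attempts against i-0abc123"},
    {"Severity": 5.0, "Type": "Recon:EC2/PortProbeUnprotectedPort",
     "Title": "Unprotected port on i-0def456 is being probed"},
]

prompt = (
    "Summarize these security findings for an on-call engineer. "
    "Group related items and call out the highest-risk issue first:\n"
    + json.dumps(findings, indent=2)
)

# Model ID is an assumption; use whichever model is enabled in your account.
response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```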

And when I think about the cybersecurity talent shortage, generative AI is today not only improving the speed of software development and secure coding but also helping to aggregate data. It’s going to continue to help us because it amplifies our human abilities. AI helps us bring together information to solve complex problems and brings the data to security engineers and analysts so they can start asking better questions.

Dark Reading: Do you see any security threats that are specific to AI and the cloud?

Betz: I’ve spent a lot of time with security researchers looking at cutting-edge generative AI attacks and at how attackers approach the technology. There are two classes of things that I think about in this space. The first is that we see malicious actors starting to use generative AI to get faster and better at what they already do. Social engineering content is an example of this.

Attackers are also using AI technology to help write code faster. That’s very similar to where the defense is at. Part of the power of this technology is it makes a class of activities easier, and that’s true for attackers, but that’s also very true for defenders.

The other area that I’m seeing researchers start to look at more is the fact that these generative AI models are code. Like other code, they’re susceptible to having weaknesses. It’s important that we understand how to secure them and make sure that they exist in an environment that has defenses.
