Plato Data Intelligence.
Vertical Search & AI.

Prompt Security Launches With AI Protection for the Enterprise

Prompt Security launched out of stealth today with a solution that it says uses AI to secure a company’s AI products against prompt injection and jailbreaks — and that also keeps employees from accidentally feeding sensitive data to tools like ChatGPT.

Organizations are pursuing the benefits of generative AI (GenAI), but they are also worried about the effects these new tools could have on the security of their environments. In a recent Dark Reading survey, respondents saw many potential risks of fully adopting GenAI, including the opaqueness of third-party tools (46%), a lack of consensus on GenAI guidelines and policies (43%), and data governance concerns (39%).

To address these risks, Prompt Security says it safeguards every interaction with GenAI in the organization, including internal AI tools and commercial products with AI features, by inspecting each prompt and model response to prevent exposure of sensitive data, block harmful content, and stop other GenAI-specific attacks. By inspecting semantic data, Prompt Security can ward off threats such as prompt injection, jailbreaking, and data extraction. Contextual LLM-based models also detect and redact sensitive data, protecting customer and employee information, as well as intellectual property, from accidental exposure.
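Prompt Security has not published its implementation, but the inspect-and-redact flow described above can be sketched minimally. This is an illustration, not the vendor's method: the product reportedly uses contextual LLM-based models, whereas this sketch substitutes static regex patterns, and every pattern and function name here is hypothetical.

```python
import re

# Illustrative patterns only; a real system would use contextual LLM-based
# classifiers rather than static regexes like these.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.I),
    re.compile(r"disregard your system prompt", re.I),
]
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def inspect_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, sanitized_prompt).

    Suspected prompt injection is blocked outright; otherwise sensitive
    data is redacted before the prompt is forwarded to the model.
    """
    if any(p.search(prompt) for p in INJECTION_PATTERNS):
        return False, ""
    sanitized = prompt
    for label, pattern in PII_PATTERNS.items():
        sanitized = pattern.sub(f"[REDACTED {label}]", sanitized)
    return True, sanitized
```

The same check would run in the opposite direction on model responses, so that a jailbroken or poisoned model cannot leak redacted data back to the user.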

The solution also catalogs the array of AI tools used within an organization. The security team can see how people are using these tools and define access policies by application and user group.
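Per-application, per-group access policies of the kind described above can be modeled as a simple allow-list lookup. The application names, group names, and default-deny behavior below are assumptions for illustration, not details of Prompt Security's product.

```python
# Hypothetical policy table: which user groups may use which GenAI apps.
POLICIES: dict[str, set[str]] = {
    "chatgpt.com": {"engineering", "marketing"},
    "internal-copilot": {"engineering"},
}

def is_allowed(app: str, user_group: str) -> bool:
    # Default-deny: apps not yet cataloged by the security team are blocked.
    return user_group in POLICIES.get(app, set())
```

A default-deny posture pairs naturally with the cataloging feature: newly discovered AI tools stay blocked until the security team reviews them and adds an explicit policy.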

Prompt Security’s co-founders came from Orca Security: CEO Itamar Golan was previously Orca’s head of ML and AI, and CTO Lior Drihem was its head of innovation. The company’s $5 million seed round was led by Hetz Ventures, with participation from Four Rivers and multiple angel investors.
