Navigating the AI Data Security Landscape
With enterprise AI adoption accelerating, our panelists explored critical aspects of AI data security and shared practical strategies for effective governance and risk management.
Moderator: Steve Zalewski, former CISO, Levi Strauss & Co.
Panelists:
- Malcolm Harkins, Chief Security and Trust Officer, HiddenLayer
- Danny Portman, PhD, Head of Generative AI and VP Data Science, Zeta Global
- Ganesh Kirti, CEO and Founder, TrustLogix
Below are three key moments from the webinar speakers. You can access the full recording of “Navigating the AI Data Security Landscape” above.
Malcolm Harkins
Chief Security and Trust Officer, HiddenLayer
There might be some headline saying somebody manipulated AI, and it gave an inappropriate answer. You could argue unintended consequences, or people didn't set up the AI right with its boundary conditions. Having said that, I do know of, and HiddenLayer has worked with, some organizations (that I won't mention) where there have been real attacks from organized crime and nation-state actors to subvert GenAI models. They're just not in the headlines yet.
Danny Portman, PhD
Head of Generative AI and VP Data Science, Zeta Global
With GenAI, you’re taking your regular users and actually giving them tools to become super users or power users…so if you're not careful there, and if you don't set up your permissions correctly, you essentially have non-technical users with the ability to read from data that they're not supposed to read from, or, worst-case scenario, overwrite data that they're not supposed to touch, or even nightmare scenarios like a DROP TABLE in SQL. So as GenAI is already empowering users, we anticipate seeing more and more guardrails that need to be put in place so that users don't unintentionally abuse this new quote-unquote superpower.
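Portman's point about permissions can be sketched as a pre-execution check on LLM-generated SQL. The helper below is a hypothetical illustration, not anything the panelists described: it allows only read-style statements and rejects anything containing data-modifying keywords, so a non-technical user's generated query can never reach `DROP TABLE`. Real deployments would enforce this at the database level with least-privilege roles, not just in application code.

```python
import re

# Statement prefixes an LLM-generated query is allowed to use (read-only).
ALLOWED_PREFIXES = ("select", "with")

# Keywords that modify or destroy data and should never reach the database
# from a non-privileged GenAI user.
FORBIDDEN_KEYWORDS = {"insert", "update", "delete", "drop", "alter",
                      "truncate", "grant", "create"}

def is_read_only(sql: str) -> bool:
    """Return True only if the statement looks like a read-only query."""
    normalized = re.sub(r"\s+", " ", sql).strip().lower()
    if not normalized.startswith(ALLOWED_PREFIXES):
        return False
    # Reject if any forbidden keyword appears anywhere in the statement,
    # which also catches stacked queries like "SELECT ...; DROP TABLE ...".
    tokens = set(re.findall(r"[a-z_]+", normalized))
    return tokens.isdisjoint(FORBIDDEN_KEYWORDS)
```

A guard like this is only one layer; pairing it with a read-only database role means even a query that slips past the filter cannot overwrite data.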
Ganesh Kirti
Founder and CEO, TrustLogix
The challenge is, there's no agreement among all these stakeholders, you know, security people and data people. And so there is a data governance issue that’s really important and has kind of evolved in the [AI] discussions.
AI deployments and adoption are steadily increasing. We see that in the education, financial, and healthcare verticals. AI is a great enabler, but it comes with security baggage. I can’t think of an LLM app that’s not vulnerable. It's really all about modifying the prompt: if you can play with it, you can get the system to do something it's not supposed to do. And that’s only one example, the prompts…there's also other data leakage happening. So every company really has to take a risk-centric approach.
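The prompt manipulation Kirti describes is commonly called prompt injection. A minimal, illustrative input filter is sketched below; the patterns and the `looks_like_injection` helper are assumptions for the sake of example, and keyword matching alone is far from a complete defense, which also needs model-side guardrails, output filtering, and least-privilege data access.

```python
import re

# Phrases commonly seen in prompt-injection attempts. Illustrative only:
# real attacks paraphrase freely, so this list will never be exhaustive.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal (your )?(system prompt|instructions)",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag input that matches known prompt-injection phrasings."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

Flagged inputs can be blocked or routed for review before they reach the model, which addresses the "play with the prompt" risk but not the broader data-leakage paths Kirti also mentions.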