ExtraHop, a leader in cloud-native network detection and response (NDR), today released a capability that gives organizations visibility into employees’ use of AI as a service (AIaaS) and generative AI tools, such as OpenAI’s ChatGPT, to help them better understand their risk exposure and gain insight into their adherence to policy.
As generative AI and AIaaS are adopted in enterprise settings, there is growing concern that proprietary data and other sensitive information are being shared with these services.
While AIaaS offers productivity improvements across a range of industries, organizations must be able to audit employee use – and potential misuse – of these tools to protect against insider threats, whether intentional or not.
To help determine whether proprietary or other sensitive data may be at risk, ExtraHop offers customers visibility into devices and users on their networks that are connecting to external AIaaS domains, the amount of data employees are sharing with these services, and in some cases, the type of data and individual files that are being shared.
“Customers have expressed a real concern about employees sending proprietary data and other sensitive information into AI services, and until today, there has been no good way to assess the scope of this problem,” said Patrick Dennis, CEO, ExtraHop.
“Amid the proliferation of AIaaS, it’s extremely important that we give customers the tools they need to see what is happening across the network, what data is being shared, and what could be at risk. With this new capability, our goal is to ensure that they can reap the wide-ranging benefits of generative AI tools while still ensuring their data is protected,” Dennis said.
Learn more at www.extrahop.com.