top of page
Writer's pictureRavish Ailinani

Investing to Secure Gen AI

Updated: Oct 28

Last year, we wrote a detailed blog post about our thoughts on Generative AI (GenAI) and Dallas Venture Capital’s areas of investment focus in the category.


We have been pleasantly surprised by the rapid pace of development since then. We often find ourselves playing catch-up as paradigms evolve and innovative developments impact our thinking.




In this blog, we cover security for GenAI, an important area of focus for emerging and existing enterprises and for us at DVC as investors. We believe that addressing security will be critical to mainstream adoption of GenAI.




We acknowledge that GenAI Security is in the early stages of being defined, understood, and categorized, and we don’t presume to have all the answers. However, we hope to summarize recent developments in this category and the exciting work done to address these risks.


GenAI has expanded the attack surface and amplified existing threats while simultaneously introducing new risks through vulnerable deployments and new risky user behaviors. The recent Crescendo jailbreak technique example by Microsoft Research showcases how easy it is to manipulate most state-of-the-art LLMs.


We believe that the breadth and velocity of threats and vulnerabilities will accelerate through the widespread adoption of GenAI, increasing the TAM of cybersecurity vendors and service providers while simultaneously becoming more critical to their customers. GenAI can also help augment existing cybersecurity systems such as threat detection and response, email security, security operations, identity and access management, third-party supply chain management, etc.


In their recent earnings presentation, Palo Alto Networks confirms these viewpoints and estimates that they have already surpassed $100M in AI-first ARR. According to one market research, the GenAI in security market size is expected to be around ~$2.6 billion by 2032 from ~$530 million in 2022, growing at a CAGR of 17.9% from 2023 to 2032. Crowdstrike estimates a cybersecurity GenAI TAM of around $3 billion in CY24.


According to Gartner, 34% of organizations are either already using or implementing artificial intelligence application security tools to mitigate the accompanying risks of GenAI.


We often find that in discussions on AI Security with founders, CISOs, and other investors, certain terms, features, and capabilities across the following three broad categories are often used interchangeably.


In our earlier blog, we wrote about why we think AI Governance is critical to enterprise adoption of AI tools, and are excited about our investment in Holistic AI, and they have written extensively on this topic.


We are also excited about the emerging category of AI Observability, where companies like Fiddler, Arize, and WhyLabs are doing exciting work. We intend to cover this in a subsequent blog post.


This blog post highlights the exciting developments in AI Security, where companies are creating solutions to safeguard the deployment and consumption of LLMs and GenAI applications.


Over time, we anticipate the emergence of end-to-end platforms and partnerships between various vendors while several category-leading companies continue to have some overlap in functionality.

The deployment lifecycle of GenAI encompasses various stages, each introducing potential privacy and security risks. These stages include data preparation, training, fine-tuning, RAG optimization, and prompt engineering. Post-deployment, many additional risks and vulnerabilities are introduced, such as inappropriate user access, prompt injections, endpoint security, insecure API calls, and transmission of confidential information.


Several institutes, big tech companies, and smaller startups provide comprehensive guides on navigating vulnerabilities introduced from the deployment of GenAI. 



The OWASP LLM AI Security and Governance Checklist lists various LLM threat categories and highlights Top 10 critical vulnerabilities in LLMs.


There are several popular open-source tools and frameworks that can be utilized to build security for AI and GenAI. Some interesting open-source security tools/frameworks and companies that are building on these tools are:



(GitHub interaction metrics that use a combination of Stars, Forks, Pushes, Pull Requests, and Top Contributors are the most popular way to categorize the success and adoption of open-source software.)



Using LLMs from closed-source foundation-model providers (like OpenAI, Anthropic, etc.) may be more expensive and offer less control, but customers can expect greater security measures and more rigorous testing, vulnerability patching, dedicated security teams, and infrastructure designed to protect sensitive data. On the other hand, open-source models offer greater flexibility but may carry unknown vulnerabilities and may not have undergone the same level of security scrutiny as those from major providers. One way to protect against this vulnerability is to scan these models before subjecting them to further processes. ModelScan by ProtectAI is an open-source repository that does precisely that.


The Databricks AI Security Framework highlighted below provides a comprehensive list of AI system components and identifies 55 security vulnerabilities across all stages.



The development and deployment of GenAI applications will require rethinking and modifying existing DevOps and DevSecOps practices to ensure a robust approach to GenAI security.


We attempt to identify vulnerabilities of generative models by going through broad categories of production.



Model Training

 

The majority of attacks during model training can be classified as poisoning attacks (data poisoning or model poisoning) and supply chain attacks (LLMs often rely on external datasets, pre-trained models, and code libraries. Compromises in these third-party resources can introduce vulnerabilities during the LLM training process).



Data

 

We hypothesize that the existing suite of products for data governance, quality, and privacy can expand their capabilities (both products and processes) to serve enterprises in ensuring the right kind of data is being used for training. There may be few opportunities for new companies to address security at this stage of the process.


While existing toolsets offer some protection, enterprises will increasingly need more sophisticated solutions to manage these risks. One promising avenue is synthetic data.  As foundation models exhaust available real-world data, synthetic data offers advantages: it can be generated with greater comprehensiveness and frequency while customizing to specific needs. This allows enterprises to tailor datasets for improved model performance while also enhancing security by avoiding the risks inherent in real-world data.



Fine-tuning and Retrieval-Augmented Generation (RAG)

 

Fine-tuning and Retrieval-Augmented Generation (RAG) are powerful techniques used to enhance the capabilities of LLMs. Fine-tuning is adapting a pre-trained large language model to a specific downstream task by further training it on a smaller, task-specific dataset. This adjusts the model’s weights to align better with the desired domain or application. Risks specific to fine-tuning can be classified as –


  • Input Privacy Breaches: When proprietary or sensitive data is exposed to third-party AI platforms during fine-tuning, it can lead to data leaks and privacy violations.

  • Output Privacy Risks / Model Memorization: Large language models can inadvertently memorize and store private training data, which can then be extracted through targeted prompts. This was seen in the Samsung data leak case.


The RAG process starts by indexing a knowledge source and creating vector embeddings – these turn text into numerical representations that capture the meaning within. This database then becomes a searchable resource. When a user asks a question, the retriever uses semantic search and approximate nearest neighbor (ANN) algorithms to find the most relevant pieces of information from the database.  This retrieved context is then fed into a generator LLM, which crafts a response using its language abilities alongside the extracted knowledge for a more comprehensive and informative answer. Some risks specific to Retrieval-Augmented Generation (RAG) are LLM log leaks, and RAG poisoning.



Post Deployment / Inference


Once a generative model is deployed, it’s not immune to attack. Post-deployment or inference attacks target models in production to disrupt their output or extract sensitive information. Output disruption examples include prompt injection/leaking, goal hijacking adversarial examples, and membership inference while extracting sensitive information, which entails model extraction and data extraction (e.g., Personally Identifiable Information) attacks.


As LLMs are deployed, most applications will interact with them through APIs. Several vulnerabilities that we have discussed in this blog are exploitable through APIs and a shift-left approach of API Security Testing through Pynt.io (our portfolio company) will be a critical tool in reducing risks.


There are several ways to safeguard against inference-time attacks. Continuous red teaming is one way to bolster a company’s cybersecurity defenses – by constantly simulating real-world cyberattacks on the organization’s systems. Another is maintaining robust firewalls and using GenAI to combat these attacks.


An important factor to consider while deploying any security solution is that it doesn’t become too restrictive and reduces the ROI of deploying GenAI. Innovative companies will understand context and prevent malicious and risky behavior while ensuring that customers and employees still enjoy the benefits of deploying GenAI.


Role and permission-based access to LLM results will be critical to ensure rapid adoption instead of a one-size-fits-all LLM. Deploying numerous LLMs trained on various sets of data is unrealistic. Dynamically filtering results with fast response rates in a role-appropriate manner while still being useful to the user is a challenging problem.


We are very excited about the opportunities to address the security challenges in GenAI. If you are building anything interesting in this space, we would love to talk with you and learn more!



References:




 

Comments


Commenting has been turned off.
bottom of page