May 4, 2023 | 3 min | Technology

RAG to riches: Unleashing the power of retrieval augmented generation (RAG) in LLMs


In the era of artificial intelligence, off-the-shelf large language models (LLMs) have emerged as a powerful tool for generating human-like responses in a wide range of applications. However, most existing knowledge-grounded conversation models rely on the static, often out-of-date material they were trained on, which limits their ability to generate diverse, knowledgeable responses involving proprietary or domain-specific data.

Retrieval augmented generation (RAG) defined

To overcome this challenge, the concept of retrieval augmented generation (RAG) has been introduced, which combines the strengths of LLMs with the ability to retrieve information from multiple documents. RAG not only enables LLMs to generate more knowledgeable, diverse, and relevant responses but also offers a more efficient approach to fine-tuning these models. By using RAG to determine what to respond with and fine-tuning to guide how to respond, LLMs can deliver a more engaging and informative conversational experience.
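The split of responsibilities described above, where retrieval decides *what* the model can draw on and the language model decides *how* to phrase the answer, can be sketched in a few lines. The retriever below uses naive keyword overlap purely for illustration; the function names and prompt format are hypothetical, not a real Cohesity or LLM-vendor API.

```python
# Minimal RAG sketch: retrieve relevant documents, then package them
# with the original query as context for a language model.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Package retrieved context with the original query for the LLM."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

docs = [
    "Cohesity SpanFS is a distributed file system.",
    "RAG retrieves documents to ground LLM responses.",
    "Backups are stored with role-based access control.",
]
prompt = build_prompt(
    "How does RAG ground LLM responses?",
    retrieve("How does RAG ground responses?", docs),
)
```

In a production system the keyword ranker would be replaced by an embedding index, and `prompt` would be sent to the language model; the shape of the flow stays the same.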

Cohesity innovation and retrieval augmented generation (RAG)

At Cohesity, we are building a ground-breaking approach to provide robust and domain-specific context to RAG-driven AI systems. This is made possible through Cohesity’s patented SnapTree and SpanFS architectures. By leveraging these robust file systems, Cohesity can make its platform ‘AI Ready’ for RAG-assisted large language models (LLMs) through an on-demand index of embeddings that is provided just-in-time to the AI application requesting the data. Additionally, the data is secured through Cohesity’s role-based access control (RBAC) models.

The retrieval-augmented response generation platform under development by Cohesity accepts a user- or machine-driven input, such as a question or query. That input is tokenized and keywords are extracted, which are used to narrow the petabytes of an enterprise’s backup data down to a smaller subset. The platform then selects the representations within those documents or objects that are most relevant to the query. That result is packaged with the original query and sent to a language model (such as GPT-4) to produce a context-aware answer. This innovative approach ensures that the generated responses are not only knowledgeable but also diverse and relevant to the enterprise’s domain-specific content.
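The two-stage narrowing described above, where a coarse keyword filter reduces a large corpus before a finer relevance ranking selects the best passages, can be sketched as follows. This is an illustrative toy, not the Cohesity implementation: the function names are hypothetical, and the bag-of-words cosine ranking stands in for a real embedding index.

```python
# Stage 1: coarse keyword filter over many objects.
# Stage 2: finer relevance ranking within the surviving subset.
from collections import Counter
import math

def keyword_filter(query: str, corpus: dict[str, str]) -> dict[str, str]:
    """Keep only objects sharing at least one query keyword (length > 3)."""
    keywords = {t for t in query.lower().split() if len(t) > 3}
    return {name: text for name, text in corpus.items()
            if keywords & set(text.lower().split())}

def rank_passages(query: str, subset: dict[str, str], top_k: int = 1) -> list[str]:
    """Rank surviving passages by cosine similarity of bag-of-words vectors."""
    def vec(text: str) -> Counter:
        return Counter(text.lower().split())
    def cos(a: Counter, b: Counter) -> float:
        num = sum(a[t] * b[t] for t in a)
        denom = (math.sqrt(sum(v * v for v in a.values()))
                 * math.sqrt(sum(v * v for v in b.values()))) or 1.0
        return num / denom
    q = vec(query)
    ranked = sorted(subset.values(), key=lambda t: cos(q, vec(t)), reverse=True)
    return ranked[:top_k]

def answer(query: str, corpus: dict[str, str]) -> str:
    subset = keyword_filter(query, corpus)
    context = rank_passages(query, subset)
    # A real system would send this packaged result to an LLM such as GPT-4.
    return f"Context: {context[0]}\nQuery: {query}"
```

The point of the two stages is scale: the cheap filter discards most of the corpus so the more expensive relevance scoring only runs over a small candidate set.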

By using RAG on top of an enterprise’s own dataset, a customer does not need to perform costly fine-tuning or initial training to teach the language model ‘what’ to say. This saves time and money and reduces environmental impact. And because an enterprise’s dataset is constantly changing and evolving, leveraging RAG always provides the most recent and relevant context for any query.

Both automatic and human evaluation results on a large-scale dataset show that Cohesity’s retrieval-augmented response generation platform can generate more knowledgeable, diverse, and relevant responses than off-the-shelf LLMs, without duplicating or massively increasing data storage requirements. This breakthrough has significant implications for the future of Enterprise Conversational Q&A and Search & Discovery applications across industries.

For technology and business executives, the introduction of RAG-driven AI systems by Cohesity presents a unique opportunity to leverage the power of data-driven insights and enhance the quality of conversations across various platforms. By harnessing the power of Cohesity’s Data Protection and Data Management enhanced by Artificial Intelligence, organizations can unlock new levels of efficiency, innovation, and growth.

Learn more about retrieval augmented generation (RAG) in the video below:


Capitalizing on the potential of RAG-driven AI systems

The development of retrieval-augmented response generation models by Cohesity represents a significant leap forward in the realm of knowledge-grounded conversations. By using the power of multiple documents and incorporating both the topic and local context of a conversation, these models can generate more knowledgeable, diverse, and relevant responses than ever before. As a result, businesses and technology executives can capitalize on the potential of RAG-driven AI systems to transform the way they engage with customers, partners, and employees, driving innovation and growth in the process. With the Cohesity Data Cloud, our robust and secure platform, the future of AI-driven conversations is not only promising but also within reach.

This blog is part of our “Road to Catalyst” series. Check back every week for new data security and AI content, and register today to join us at Cohesity Catalyst, our data security and management virtual summit.

Written by


Greg Statton

Office of the CTO - Data & AI


