Retrieval Augmented Generation (RAG)

One of the most powerful applications enabled by Large Language Models (LLMs) is the sophisticated question-answering (Q&A) chatbot. These chatbots answer questions based on specific source information, offering more relevant and accurate responses.

To achieve this, these applications use a technique known as Retrieval Augmented Generation (RAG). RAG enhances the model's ability to generate responses by retrieving relevant information from a database or document, allowing the chatbot to answer questions with greater precision and context.

What is RAG?

RAG is a technique used to augment the knowledge of Large Language Models (LLMs) by providing additional, relevant data.

While LLMs can reason about a wide range of topics, their knowledge is limited to the public data available up to the point at which they were trained. This means they may lack up-to-date information or knowledge of private or specialized data. To build AI applications that can reason about new or private information, it is essential to augment the model's knowledge with the specific information required.

This process of incorporating external data into the model's prompt, allowing it to generate more informed and accurate responses, is known as Retrieval Augmented Generation (RAG).
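
As a concrete illustration, here is a minimal sketch of that flow in Python: retrieve the most relevant snippets from a small in-memory document store, prepend them to the prompt, and only then call the model. The keyword-overlap retriever and the `call_llm` placeholder are simplifications for illustration and are not part of any Rivalz or LangChain API.

```python
# Minimal RAG flow: retrieve relevant text, then augment the prompt with it.

def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real LLM client (OpenAI, a local model, etc.)."""
    return f"[model answer for a prompt of {len(prompt)} characters]"

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap and return the top k."""
    terms = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(question: str, documents: list[str]) -> str:
    """Build an augmented prompt from retrieved context and query the model."""
    context = "\n".join(retrieve(question, documents))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

docs = [
    "Rivalz OCY provides AI data storage and RAG services.",
    "The Rivalz Developer Console manages billing, profiles, and upload history.",
]
print(answer("What does OCY provide?", docs))
```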

LangChain offers a variety of tools and components for building Q&A and general RAG applications. It handles retrieving relevant data and feeding it into the LLM, which makes it a popular framework for integrating external information into AI models.
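
For example, the following sketch wires a retriever, a prompt, and a chat model together with LangChain's expression language. It assumes the `langchain-openai`, `langchain-community`, and `faiss-cpu` packages and an `OPENAI_API_KEY` in the environment; the exact imports and the model name vary between LangChain versions, so treat it as a sketch rather than a canonical recipe.

```python
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Index a few example documents in an in-memory FAISS vector store.
docs = [
    "Rivalz OCY provides AI data storage and RAG services.",
    "Knowledge bases are built from documents uploaded through the Rivalz API.",
]
vectorstore = FAISS.from_texts(docs, embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption

def format_docs(documents):
    # Flatten retrieved Document objects into a single context string.
    return "\n\n".join(doc.page_content for doc in documents)

# Retrieve context and pass the question through, fill the prompt, call the LLM.
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(rag_chain.invoke("What does Rivalz OCY provide?"))
```

The dictionary at the head of the chain runs retrieval and passes the raw question through in parallel; the retrieved documents are flattened into the prompt's context slot before the model is called.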

Rivalz RAG

Rivalz streamlines the process of building Retrieval Augmented Generation (RAG) applications by offering an easy-to-use API. The reference pages below cover the Python and Node.js clients, and a usage sketch follows them.

  • RAG APIs for Python
  • RAG APIs for Node.js
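
For orientation, here is a hypothetical end-to-end sketch of how such a client is typically used: upload a document to create a knowledge base, then chat against it. The class and method names (`RivalzClient`, `create_rag_knowledge_base`, `create_chat_session`), the import path, and the authentication scheme are assumptions for illustration only; the SDK pages linked above document the actual interface.

```python
# Hypothetical usage sketch -- every name below is an illustrative placeholder,
# not the documented Rivalz SDK surface; see the Python/Node.js API pages.
import os

from rivalz_client import RivalzClient  # assumed package and import path

client = RivalzClient(os.environ["RIVALZ_SECRET_TOKEN"])  # assumed auth scheme

# 1. Create a knowledge base from a local document (assumed method name).
knowledge_base = client.create_rag_knowledge_base("whitepaper.pdf", "rivalz-docs")

# 2. Ask a question against that knowledge base (assumed method name).
reply = client.create_chat_session(knowledge_base["id"], "What is OCY?")
print(reply)
```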