Rivalz Storage is a cutting-edge distributed vector storage service, allowing users to securely store and access data from any location on the internet. Utilizing advanced decentralization, peer-to-peer (P2P) networking, artificial intelligence, and blockchain technology, Rivalz delivers a scalable, cost-efficient, and resilient solution for cloud-based vector storage.
In addition to its powerful storage capabilities, Rivalz offers an advanced AI platform that extracts valuable knowledge from your documents. We provide an easy-to-use API that allows you to vectorize your documents and integrate them into an AI model, creating a customized "knowledge base" tailored specifically to your application’s needs.

The Rivalz Developer Console is a user-friendly, intuitive web UI that allows developers to efficiently manage and monitor their uploaded files within the Rivalz Storage ecosystem. In addition, it helps you manage billing information and generate and secure API keys.
Rivalz is a layer 2 blockchain creating the World Abstraction Layer for AI and AI Agents. The Rivalz Network is a decentralized market of Data, AI, DePIN and Human resources. Rivalz bridges AI and Agents to the real world. In the Rivalz AI World Hackathon we invite you to build on top of 1.0 versions of our Data Storage/AI RAG solution (OCY DePIN) + our Agentic Data and AI Oracle Network (ADCS).
This hackathon invites new and existing projects to bring exciting AI projects to life using our infrastructure via three main tracks: OCY, ADCS, and AI Agents, plus a special ai16z sub-track.
This is a global online hackathon where anyone can participate. Additionally, there will be a special offline track in Manila, Philippines, with a separate prize pool.
After submission you'll be given a special link for a private developer chat.
Apply to the offline track in Manila, Philippines by The Block.
Total Prize Pool: $85,000. [50% USDT, 50% Rivalz Native Token $RIZ]
OCY - $20,000
This track will focus on Data Management and Usage for AI use cases.
ADCS - $20,000
This track will focus on creative DApp or Agent design involving ADCS Oracles on-chain. Additionally, top creative Data and AI providers and Adapter creators will be rewarded.
AI Agents - $45,000
General track for creation of AI Agents using OCY or ADCS. [$15,000]
Eliza Sub-tracks: for AI Agent developers using ai16z tech. Here you do not have to use any of our infra, although it will be a bonus if you do. For clarity: Rivalz is in no way associated with ai16z.
[$15,000] Most creative/valuable use-cases of long-term autonomous planning and execution by AI Agents. Supply the Agent with any resources, from access to services to money, and have it perform as long as possible towards the end goal.
[$15,000] Most creative/valuable use-cases of VMs or web access by AI Agents. The AI Agent has full control over a virtual machine or just a web browser.
Additional points for using Waking Up.
Week 1 (Nov 25 - 29):
Introduction to OCY and ADCS
Team formation
Weeks 2-3 (Dec 2 - Dec 13):
Partner Workshops
Business Talks
Week 4 (Dec 16 - Dec 20):
Check-in & Feedback
Week 7 (Deadline: Dec 20, 9 PM UTC):
Final Grant/Project Submission
The winner announcement will take place within 2 weeks of the hackathon's conclusion. Date TBA.
The GitHub repository, project description, and video presentation must all be submitted before the deadline. A link will be provided soon.
Participants must be 18+
The judging panel will primarily consist of the Rivalz Team, but we may also onboard additional judges during the hackathon. Stay tuned.
First part - Projects that have successfully submitted all the details will be screened. Top projects will be invited to the second part.
Second part - Present your project on a video session with judges.
Projects will be selected based on the merits of utility, creativity, depth of development, and presentation.
Each track will have Gold, Silver, and Bronze winners, with rewards worth 50%, 30%, and 20% of the track's reward pool, respectively.
Rewards will be allocated within 4 weeks of the winners announcement.
Winners will be provided with additional support in resources, marketing, funding, business, and more. All participants will have a fast track for our incubator program. For existing projects in the AI/Agentic fields, we suggest applying now.
The developer guides are designed to equip you with a comprehensive range of resources, tools, and support, empowering you to build applications within the Rivalz ecosystem—especially AI-focused solutions. Through these guides, developers gain access to detailed technical documentation, SDKs, and development frameworks that simplify the journey from concept to deployment, making it easier to create, innovate, and excel in building applications on Rivalz infrastructure.
Read on for an overview of Rivalz's distributed vector storage service, and learn how to create a knowledge base and use it for AI RAG systems.
Description: Build an Agent that allows users to supply additional information for its knowledge base on-chain, with an open-source directory and time-stamps for different data added to the knowledge base. Additional points for AI-driven data categorisation.
Autonomous Research Agent
Description: Develop AI agents that autonomously retrieve, analyze, and summarize academic research or news articles on specific topics using OCY's RAG capabilities. Additional points if your Agent understands what data it already has, avoids storing duplicates, categorises information found from a research point of view, and runs cycles of goal re-evaluation based on the knowledge found.
Evolving-RAG Agent
Description: Create an Agent that interacts with users to request additional material, then adjusts its own knowledge through this material. Interesting possibilities arise from directly allowing new information to shape the personality of the agent.
Description: Create agents that analyze user data, assess its value (e.g., insights from usage patterns or preferences), and automatically tokenize the data into tradable assets on Arbitrum or Base. Use Case: Enable users to monetize their data while maintaining ownership, creating a decentralized data marketplace.
Description: Create agents that manage decentralized data cooperatives, where contributors pool their data and earn collective rewards through smart contract-governed profit-sharing. Use Case: Empower communities to monetize shared data for social or economic benefits.
Side-note: Creative Data/AI Providers and unique Adapter creators will be rewarded separately.
Isolated agentic economies where each user controls their own Agent. There is an extremely interesting use-case for agentic PvP; this could be in any vertical, but we will provide a few examples.
Isolated trading - There is a token, but only agents are allowed to trade it. To participate, each user needs to launch their own agent and specify how it acts. Then, at certain event intervals, agents access the oracle to make AI-driven decisions on how to act.
Isolated gamefi - Real users only set up the initial parameters of agents, then let them PvP with each other, again requesting ADCS for decision making.
Description: Agents provide real-time AI services (e.g., analytics, recommendations) on a subscription basis, with payments handled automatically via recurring Arbitrum/Base smart contracts. Use Case: Enterprises or individuals pay only for the AI insights they use, reducing upfront costs.
Description: Agents that mediate data sharing by encrypting and tokenizing user data, enabling secure and private transactions via Arbitrum/Base smart contracts. Use Case: Protect user privacy while allowing controlled data sharing for AI applications.
Alternatively, you can create a tornado.cash-style product by creating a mother-agent smart contract and having users deploy their own pre-built smart contracts as agents, where end-state offload is managed by the mother agent and all transactions are handled by agents off-chain.
Description: Smart contract-powered bounties incentivize AI agents to retrieve and process data for specific research tasks. Rewards are issued when the task is verified as complete and valuable. Use Case: Universities, startups, or think tanks seeking decentralized AI research capabilities.
Any Agents using ADCS or OCY.
ai16z Eliza sub-tracks - We are specifically interested in Agents:
Utilizing more mediums, from full access to devices, to virtual machines, to WebOS
Having larger degrees of freedom; think outside the box
With long-term reasoning/planning/action
That leverage SWARMS
Most importantly, make something interesting and exciting.
Learn more about ADCS (Agentic Data Coordination System) to create on-chain verifiable Dapps using AI and AI Agents.
Find out about Vord – a no-code platform for creating AI Applications.

The design of the Dapp creator's off-chain components includes the Adaptor and ADCS Nodes, each playing a vital role in the overall architecture of the system.
With access to trusted and reliable data, AI Agents can unlock their full potential – from making smarter decisions to streamlining processes and drastically improving operational efficiency. ADCS enables you to seamlessly integrate data providers directly into your AI Agents within an off-chain environment, offering unparalleled flexibility and performance.
One of the most powerful applications enabled by Large Language Models (LLMs) is sophisticated Question-Answering (Q&A) Chatbots. These chatbots can answer questions based on specific source information, offering more relevant and accurate responses.
To achieve this, these applications use a technique known as Retrieval Augmented Generation (RAG). RAG enhances the model's ability to generate responses by retrieving relevant information from a database or document, allowing the chatbot to answer questions with greater precision and context.
RAG is a technique used to augment the knowledge of Large Language Models (LLMs) by providing additional, relevant data.
While LLMs are capable of reasoning about a wide range of topics, their knowledge is restricted to the public data available up until the point they were trained. This means they may not have up-to-date information or may lack knowledge of private or specialized data. To build AI applications that can reason about new data or private information, it's essential to supply the model with that additional data at query time.
The billing page provides a comprehensive overview of your financial activities within the Rivalz platform. It allows you to easily monitor and manage your invoices, track your subscription status, and update or add new payment methods. This page ensures that you stay on top of your account’s billing cycle and payments, making it easier to manage your costs.
Rivalz implements a robust and secure payment system by leveraging Stripe, a leading global payment processing platform. This integration offers users a seamless, efficient, and familiar payment experience while ensuring the highest standards of security and compliance.
ADCS nodes play a crucial role in aggregating and processing data from various providers, ensuring that the data ecosystem operates effectively and securely, whether or not the user's request involves inference. Here's a breakdown of the key components of ADCS nodes:
Fetcher Node
Data Storage
Data Providers
Read the DataProviders section to learn more
Oracles act as intermediaries that connect AI agents to external data sources, facilitating the flow of information between blockchain-based applications and off-chain data. In the Dapp-Agentic Data Coordination System (ADCS), oracles receive data from reporters (data providers) and relay it to the AI agents or smart contracts.
However, sending raw data back to oracles in ADCS can introduce security risks, especially when handling sensitive or proprietary information. Such data could be vulnerable to tampering, interception, or unauthorized access during transmission, posing a significant threat to the integrity and privacy of the system.
To mitigate these risks, Zero-Knowledge Proofs (ZKPs) are employed. ZKPs are cryptographic techniques that enable the reporter to validate the accuracy of the data being sent to the oracle without revealing the actual content of the data. This method ensures:
Data confidentiality: The sensitive information remains hidden, even as its validity is verified by the oracle.
Trust without exposure: The oracle can trust that the data is correct without needing access to the raw information, preventing leaks or breaches.
Protection against tampering: The data cannot be manipulated or altered without detection, as ZKPs ensure that only valid, authenticated data is processed.
The Oracle Router is responsible for directing the data requests to the appropriate Oracles based on the specifications outlined by the user.
When the Coordinator contract receives a request from a Consumer contract, it performs a thorough analysis of the request details. This includes understanding the type of data required and the specific parameters defined by the user.
After the Coordinator has evaluated the request, it routes it to the corresponding Oracle Router.
The Oracle Router has the capability to dynamically select oracles based on the type of data being requested and the criteria set forth by the user.
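As a rough illustration, the selection rule might resemble the following sketch (the Oracle shape and scoring fields are assumptions for illustration, not the actual ADCS data model):

```typescript
// Hypothetical oracle registry entry; fields are illustrative only.
interface Oracle {
  id: string;
  dataTypes: string[];   // kinds of data this oracle can serve
  reputation: number;    // trust score maintained by the network
}

// Select oracles that can serve the requested data type and meet the
// user's criteria, preferring higher-reputation oracles first.
function routeRequest(oracles: Oracle[], dataType: string, minReputation: number): Oracle[] {
  return oracles
    .filter(o => o.dataTypes.includes(dataType) && o.reputation >= minReputation)
    .sort((a, b) => b.reputation - a.reputation);
}
```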
We currently provide SDKs for both Python and Node.js.
In the context of Rivalz, the DApp Creator represents a seamless integration of AI and blockchain technology, transforming how data is processed and decisions are made. AI's capacity to analyze extensive data in real-time, combined with blockchain's decentralized, secure, and immutable architecture, is revolutionizing large-scale, data-driven decision-making.
A key innovation in this space is the development of Onchain AI Agents—intelligent systems that function directly on blockchain networks. These agents enable decentralized AI services that are transparent, secure, and highly efficient, unlocking new possibilities for applications in a wide range of fields.
Dapp creator consists of both on-chain and off-chain components that work together to provide off-chain data access to on-chain applications. This architecture facilitates efficient data flow between decentralized applications (dApps) and external data sources, ensuring that AI-driven decision-making processes are supported by reliable data.
The following sections provide a more in-depth look at its key components.
You can find all the relevant information here: https://github.com/ai16z
This process of incorporating external data into the model's prompt, allowing it to generate more informed and accurate responses, is known as Retrieval Augmented Generation (RAG).
LangChain offers a variety of tools and components designed to help build Q&A applications and general RAG applications by efficiently retrieving relevant data and feeding it into the LLM, making it a powerful framework for integrating external information into AI models.
Rivalz streamlines the process of creating Retrieval Augmented Generation (RAG) applications by offering an easy-to-use API.
To get credit, you need to add a payment method. However, the payment system is in Test Mode, so you should not use real account numbers. Stripe provides a test environment (known as "Test Mode") where you can use test card numbers that simulate real transactions without involving real money.
Here’s how you can create a test payment using Stripe's test cards:
Visa Test Card:
Card Number: 4242 4242 4242 4242
MM/YY: 12/24
CVC: 123
ZIP: 94105
MasterCard Test Card:
Card Number: 5555 5555 5555 4444
MM/YY: 12/24
CVC: 123
ZIP: 94105

The CoordinatorBase Contract plays a pivotal role in the Dapp-Agentic Data Coordination Service (ADCS) architecture, serving as the bridge between on-chain smart contracts and off-chain AI agents. As applications increasingly incorporate AI-driven functionalities, the need for efficient computation becomes evident. However, implementing AI computations directly on the blockchain poses significant challenges, primarily due to the high computational costs involved.
Many applications opt to perform these computations off-chain on centralized servers, submitting only the final results to the blockchain. While this approach is practical and efficient, it compromises the core principles of decentralization, raising security concerns and potentially undermining the trust and transparency foundational to the blockchain ecosystem.
To address these challenges, we propose a solution where AI agents perform computations or inferences off-chain. Here's how it works:
Offchain Inference: Once the user has defined the specific schema for an inference request, they can invoke the requestInference function to initiate an inference request. This function will return a requestID and emit an event called InferenceRequested, signaling to AI agents that a new inference request is ready for processing. Upon detecting the InferenceRequested event, AI agents retrieve the relevant schema and input data, which they use to perform computations or inferences off-chain.
Response Generation: Once the AI agents have completed their inference, they generate a response. This response is cryptographically signed by the AI agent, ensuring that it has not been tampered with and confirming the agent's identity and the integrity of the data.
Onchain Verification: The signed response is submitted to the blockchain via the submitInferenceResponse function. Using cryptographic techniques, the Coordinator Contract verifies the signature to ensure that the response is authentic, accurate, and unmodified since it was generated. Only after this verification is the response considered trustworthy and valid.
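To make this flow concrete, here is a minimal sketch of the agent side using ethers.js. Only requestInference, InferenceRequested, and submitInferenceResponse come from the description above; the ABI fragment, argument shapes, and the runInferenceOffchain stub are assumptions for illustration:

```typescript
import { Contract, JsonRpcProvider, Wallet } from "ethers";

// Hypothetical ABI fragment; only the names InferenceRequested and
// submitInferenceResponse come from the flow described above.
const coordinatorAbi = [
  "event InferenceRequested(uint256 requestId, bytes schema)",
  "function submitInferenceResponse(uint256 requestId, bytes response, bytes signature)",
];

// Stub for the agent's actual off-chain model call.
async function runInferenceOffchain(schema: string): Promise<string> {
  return "0x01"; // encoded model output for the consumer contract
}

const provider = new JsonRpcProvider(process.env.RPC_URL);
const agent = new Wallet(process.env.AGENT_PRIVATE_KEY!, provider);
const coordinator = new Contract(process.env.COORDINATOR_ADDRESS!, coordinatorAbi, agent);

coordinator.on("InferenceRequested", async (requestId: bigint, schema: string) => {
  // 1. Perform the computation or inference off-chain.
  const response = await runInferenceOffchain(schema);
  // 2. Sign the response so the Coordinator can verify integrity and identity.
  const signature = await agent.signMessage(response);
  // 3. Submit the signed response for on-chain verification.
  await coordinator.submitInferenceResponse(requestId, response, signature);
});
```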
The Adaptor is a critical parameter for off-chain inference that acts as a template, dictating the structure and content of inference requests. Users define the specific parameters for the schema used in off-chain inference as follows:
Request Instructions: This contains detailed guidelines about the nature of the request, including various parameters such as variables, consumer contracts, event requests, and required user fees.
Reference Data: The data that the inference should reference or utilize, including any necessary context or historical data.
Response Format: The expected structure of the inference response, which ensures the output is formatted correctly for subsequent workflows. This could be a boolean, JSON, or Integer format depending on the use case.
Users can define multiple Adaptors, each tailored to specific requirements, with each adaptor uniquely identified by a randomly generated jobID during the creation process. These jobIDs serve as crucial identifiers in Consumer contracts, ensuring accurate referencing.
The Dashboard page helps you monitor the status of your uploaded files, including the total bytes uploaded. It provides detailed insights and visual representations of your data, breaking it down into daily, weekly, and monthly views. This allows you to track your storage usage, identify trends, and make data-driven decisions to optimize your storage needs.
Submissions are NOW CLOSED for the Rivalz AI World Hackathon! Deadline: December 20, 2024
$85,000 in Rewards
Get fast-tracked into the Rivalz Alliance
Join 100+ developers, showcase your talent, and take your project to the next level
The Upload History page within the Rivalz Developer Console provides a detailed log of all the files you have uploaded to the Rivalz storage system. It allows you to easily track your file uploads, and see important information related to your uploads, such as:
File Name: The name of the uploaded file, which allows you to easily identify your files.
File Size: The size of the uploaded file, measured in bytes or GB, so you can monitor your storage usage.
Upload Hash: A unique hash is generated for each uploaded file, providing a way to verify file integrity and ensure the file hasn't been altered or corrupted.
Upload At: The date and time the file was uploaded, enabling you to track when files were added to your storage.
The Profile page is where you can manage your personal information, access your API Key, and also create credits or subscribe to Rivalz storage service packages.
An API key is a JWT, formatted as a string like eyJhbGciOiJIUzI1NiJ9.eyJpZCI6IjY2ZmI3ZTY..., which is needed to authenticate your requests to the Rivalz storage service.
The Agentic Data Coordination System (ADCS) is the connectivity module and the next evolution in data infrastructure tailored specifically for AI Agents. Designed to revolutionize the future of Artificial Intelligence, ADCS builds an expansive data network that emphasizes rapid validation and ultra-low latency.
Providers
To access the Rivalz storage service, you need to have an account. This section walks you through the process of creating an account and obtaining the credentials required to access the Rivalz storage service. Follow the step-by-step instructions to get started quickly and easily.
Refer to the sign-up guide to learn how to create an account.
If you already have an account, you can use those credentials to sign in.
Structured data refers to information that is organized in a format that is easily understandable by both humans and machines. In structured data, the elements or fields are clearly defined, and there is a well-defined schema or model that governs the relationships and properties of these elements. This organization allows for efficient storage, retrieval, and analysis of data.
Structured data is typically found in relational databases, spreadsheets, and other tabular formats. Each piece of data is assigned to a specific category or field, and relationships between different pieces of data are explicitly defined. The use of schemas ensures that the data adheres to a specific structure, which simplifies operations like querying, filtering, and aggregating data.
Examples of structured data include tables with rows and columns, where:
To claim your free credits, simply visit the Developer Console and click on the "Create Credit" button in the Profile tab.
Each column represents a specific attribute or property (e.g., "Name", "Age", "Salary").
Each row corresponds to a unique record or entry (e.g., an employee's details in a company's database).
Structured data is commonly used in a variety of applications, including business databases, financial systems, and information management systems, where organization and consistency are critical. This format allows for easy reporting, automation, and analysis through tools like SQL, ensuring data integrity and seamless interaction between systems.
Unstructured data refers to information that does not have a predefined schema or structure, making it more difficult to organize, search, and analyze compared to structured data. Common examples include text files, PDFs, images, videos, audio files, and other media types. Unlike structured data, unstructured data doesn’t fit neatly into a traditional row-and-column database model.
Although unstructured data can technically be stored in relational databases as Binary Large Objects (BLOBs), it is generally more suitable for file systems or object storage systems, especially due to its large size and unique requirements for backup and compliance.
However, metadata and vector embeddings associated with unstructured data still need to be stored in databases to make this data discoverable and usable.
Metadata typically includes information such as file name, URI, size, type, owner, and creation date. It may also contain deeper details like extracted text, object boundaries, and other context-relevant data.
This metadata can be stored in either a structured or semi-structured format, such as a JSON column or a combination of both.
To further enhance the usability of unstructured data, machine learning models can be employed to generate metadata or vector embeddings. These embeddings are useful for searching, analyzing, and building real-time AI applications.
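As a sketch of what that looks like in practice (the extractText and embed functions stand in for whatever parser and embedding model you use; the record shape is illustrative):

```typescript
// Illustrative record pairing file metadata with a vector embedding so the
// unstructured file becomes discoverable via similarity search.
interface FileRecord {
  uri: string;
  size: number;
  type: string;
  createdAt: string;
  embedding: number[]; // vector representation used for semantic search
}

async function indexFile(
  uri: string,
  bytes: Uint8Array,
  extractText: (b: Uint8Array) => string,     // e.g. a PDF text extractor
  embed: (text: string) => Promise<number[]>, // any embedding model API
): Promise<FileRecord> {
  const text = extractText(bytes);
  return {
    uri,
    size: bytes.length,
    type: "application/pdf",
    createdAt: new Date().toISOString(),
    embedding: await embed(text),
  };
}
```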
Semi-structured data exists in the gray area between structured and unstructured data. It has some organizational structure but leaves room for flexibility and undefined elements. Common formats for semi-structured data include XML, JSON, Avro, and Parquet. Data from sources like sensors and server logs can easily fall into this category, as it often appears in or can be converted to formats like JSON or CSV.
Some data vendors even classify HTML code and emails as semi-structured. For example, an email can be represented as a JSON object with fields like sender, recipient, subject, and timestamp. However, if the email includes attachments such as media files or PDFs, it may also be considered unstructured data.
Upon creating an account, you'll receive $500 in free credits, allowing you to explore and utilize our platform’s features without the need for immediate payment. Visit here to claim your free credit.
We offer an S3 Standard package designed for general storage purposes, making it ideal for storing a variety of data, especially frequently accessed information. With this package, you can store your data at affordable rates:
First 50 GB / Month: $0.10 per GB
After 450 GB: $0.08 per GB

To get your API secret keys:
Open the Rivalz Console: Log in to your Rivalz account and go to the console.
Go to the Profile Tab: Click on "Profile" in the navigation menu.
Find the API Keys Section: In this section, you'll find your API secret key.
Copy the Key: Click the "Copy" button to save the API secret key to your clipboard.
After obtaining your API secret key, be sure to store it in a secure location. You'll need it to authenticate requests to the Rivalz API.
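For example, with the Node.js SDK covered later in these docs, you would keep the key in a .env file and pass it to the client at initialization:

```typescript
import RivalzClient from 'rivalz-client';
import dotenv from 'dotenv';

dotenv.config();

// Read the API secret key from the environment rather than hard-coding it.
const client = new RivalzClient(process.env.SECRET_TOKEN);
```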


Fetcher Nodes are vital components of the Agentic Data Coordination Service (ADCS) Node, designed to intelligently retrieve, process, and aggregate data from diverse external sources. By leveraging a structured data adapter framework, these nodes ensure that requestors have access to standardized data structures.
In the ADCS system, Fetcher Nodes serve as data provision points for oracles, ensuring they are supplied with high-quality, reliable information necessary for efficiently fulfilling consumer requests.
In addition, to further ensure data integrity and bolster network security, we are committed to open-sourcing our Fetcher Nodes. This initiative empowers anyone to run a node, fostering a collaborative environment where the community actively participates in maintaining the integrity of our data ecosystem.
Each Fetcher Node consists of three primary components:
Fetcher: This component is responsible for intelligently fetching data from designated sources based on predefined configurations. It ensures efficient retrieval of information while adhering to the specifications set in the data adapter.
Data Adapter: The idea behind our adapter framework is to ensure that users can request various types of data from diverse sources while maintaining compatibility with standardized formats. It acts as the critical interface that standardizes the interaction between external data sources and the Fetcher within the ADCS framework.
Each data adapter is identified by a unique adapterHash that ensures compatibility with the aggregator. This hash guarantees the data retrieved by Fetcher is correctly structured and compatible for aggregation, maintaining data integrity throughout the process.
Aggregator: Aggregates the data retrieved from the fetchers and provides the final, aggregated data. Each node stores both a local aggregate and a global aggregate.
The Aggregator consolidates this data according to the unique specifications of each Adapter, ensuring accuracy and relevance. When data is fetched from a fetcher, it is processed using different aggregation methods, such as median or majority voting, depending on the type of data. This aggregated information is then stored in the local aggregate. Subsequently, all local aggregates are synchronized with each other to become global aggregates. When the consensus is reached, the global aggregate will be the final data for storage.
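For instance, a numeric price feed would typically take the median of the fetched values, while categorical results can use majority voting; a minimal sketch of both methods:

```typescript
// Median aggregation for numeric data (robust to outlier fetcher values).
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Majority voting for categorical data: the most frequent value wins.
function majorityVote<T>(values: T[]): T {
  const counts = new Map<T, number>();
  for (const v of values) counts.set(v, (counts.get(v) ?? 0) + 1);
  return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
}
```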
We use the Raft consensus mechanism to ensure that all nodes in the network agree on the global aggregate data, maintaining consistency and reliability while effectively managing data replication and fault tolerance.
Steps Raft Takes to Achieve Consensus for a Global Aggregate:
Leader Election: Initially, a leader must be elected among the nodes. This node will be responsible for managing the log replication process and ensuring that all nodes (followers) are in sync.
Log Entry Creation: Once a leader is elected, it proposes the global aggregate data to be stored. This is done by creating a new log entry that contains the aggregated data and sending this log entry to all follower nodes.
Log Replication: Each follower node receives the proposed log entry from the leader and appends it to its own log. Followers send an acknowledgment back to the leader to confirm they received and stored the log entry.
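A highly simplified sketch of the leader's replication-and-commit step (real Raft also tracks terms per entry, retries failed followers, and handles leader failure, all omitted here):

```typescript
interface LogEntry {
  term: number;      // leader's current election term
  aggregate: number; // the proposed global aggregate value
}

// An entry is committed once a majority of the cluster has appended it.
function isCommitted(ackCount: number, clusterSize: number): boolean {
  return ackCount + 1 > clusterSize / 2; // +1 counts the leader itself
}

async function replicate(
  entry: LogEntry,
  followers: Array<(e: LogEntry) => Promise<boolean>>,
): Promise<number> {
  // Send the entry to every follower and collect acknowledgments.
  const acks = await Promise.all(followers.map(send => send(entry).catch(() => false)));
  const ackCount = acks.filter(Boolean).length;
  if (isCommitted(ackCount, followers.length + 1)) {
    // Majority reached: this aggregate becomes the final, globally agreed value.
    return entry.aggregate;
  }
  throw new Error("No majority; entry not committed");
}
```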
The Adaptor Creator represents a foundational tool that defines the structure of off-chain inference requests, creating a standardized framework for AI agents, smart contracts, or other consumers to communicate effectively with off-chain data sources.
To create an adaptor, you need to specify several essential parameters that define its functionality, capabilities, and communication requirements. These parameters ensure that the adapter is correctly configured to facilitate interactions between different systems, particularly between on-chain and off-chain environments. Here are the critical parameters:
Name (String): Name of the adaptor.
Provider: Choose a data provider. (We currently provide a list of providers; you can choose one that fits your requirements.)
Network: The network you wish to deploy on (Base, Rivalz, Arbitrum).
Description (String): Provides a brief description of what the adapter does.
During the generation process, the system will automatically create a unique JobID in a format compatible with Solidity's bytes32 type. This JobID serves as a unique identifier and must be generated before the creation of the adaptor.
For example, jobID: 0x1b364865ca3e6bb5ada098d0ea96f9e9369b5693cacede79d1352334c4213ac2
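A bytes32-compatible identifier like this can be generated from 32 random bytes; a sketch using ethers.js:

```typescript
import { hexlify, randomBytes } from "ethers";

// 32 random bytes, hex-encoded: a value compatible with Solidity's bytes32.
const jobId = hexlify(randomBytes(32));
console.log(jobId); // e.g. 0x1b364865ca3e6bb5ada098d0ea96f9e9369b5693cacede79d1352334c4213ac2
```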
Visit the Rivalz ADCS site. Next, select "Connect Wallet" in the header to log in to the website.
Click "Your Adaptor" and then click the "Create Adaptor" button.
Create your adaptor
In this example, I would like to build a simple application designed to gather real-time token price data. From this foundation, you can expand the functionality by integrating AI agents to analyze the data, providing users with actionable insights.
For example, by leveraging AI-driven models, you can develop PriceIntel, an advanced system that evaluates current and historical token price trends to help predict future movements. Based on this analysis, the AI agent can assess whether the price of a specific token is likely to increase or decrease, offering actionable predictions for users in the context of:
Trading: Deciding when to buy or sell tokens based on predictive trends.
After defining all the necessary information for your adaptor, click the 'Create Adaptor' button.
Congratulations! You have successfully created an adaptor!
Providers are crucial to the Agentic Data Coordination Service (ADCS) ecosystem, supplying the essential data that powers AI-driven applications and services. Based on the specific data requirements defined within the Adaptor, they can deliver various types of data. There are two primary provider types: Classic Data and Inference Data (AI Data), each serving different purposes and use cases within the network.
Classic data typically consists of two primary types:
Before engaging in AI-driven decision-making within the Dapp - Agentic Data Coordination Service (ADCS), users must first deploy a Consumer Contract. This contract acts as a crucial intermediary between the user’s specific requirements and the AI inference capabilities of the system. Operating similarly to a class in object-oriented programming, the Consumer Contract inherits key functionalities from the Coordinator Contract, enabling it to manage the intricate request and response processes essential for effective inference operations.
The primary responsibilities of the Consumer contract are:
Request Inference: The Consumer contract requests an inference from the Coordinator contract, providing the necessary input data.
Handle Request: The Consumer contract receives and processes the inference response to make a decision.
To access the Rivalz Console, you need to have an account. If you don't have one, go to the sign-up page to create one.
Enter your email and password. Please make sure to enter the same password in both the 'Password' and 'Repeat Password' fields. Then click the "Sign up" button.
Congratulations! You have successfully signed up for an account on Rivalz.


Variables (String): Defines the adapter's variables in a comma-separated string format for easy processing and validation.
Category ID (Number): The ID of the category this adaptor belongs to.
Output Type ID (Number): Specifies the ID of the output type that the adapter will produce. (We currently support four different types of output: Bool, Bytes, Uint256, StringAndBool.)
Prompt: Describe the specific actions or functionality you want your consumer contract to perform.
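Putting these parameters together, a hypothetical adaptor definition might look like the following (field names and values are illustrative, not the exact ADCS payload format):

```typescript
// Illustrative adaptor definition combining the parameters described above.
const adaptor = {
  name: "TokenPriceFeed",                      // Name (String)
  provider: "CoinMarketCap",                   // one of the listed data providers
  network: "Base",                             // Base, Rivalz, or Arbitrum
  description: "Fetches real-time token price data",
  variables: "symbol,currency",                // comma-separated Variables (String)
  categoryId: 1,                               // Category ID (Number)
  outputTypeId: 3,                             // Output Type ID (Number), e.g. Uint256
  prompt: "Return the current USD price of the given token symbol",
};
```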







Trade Parameters: The type of trade, risk appetite, asset, and the amount of funds they are willing to bet...
Data Sources: Reference data from resources, including live market prices, volatility indicators, and historical performance...
Decision Logic: The criteria used by the AI agent to make a decision, such as certain price movements or market conditions...
Then the Consumer Contract will request an inference from the Coordinator Contract. This is done by calling the function requestInference, which takes the necessary input data, including:
The defined adaptor.
The user’s account ID.
The callback gas limit (the maximum amount of gas that can be used for processing the response).
The Consumer Contract is designed to be highly scalable, which allows it to interface with multiple oracles via oracle routers. This means that instead of relying on a single source of truth, the contract can gather data from various oracles, increasing the reliability and accuracy of the inferences made. The results from multiple oracles are collected by the consumer contract, which checks whether it has received the required number of results. For example, in option trading, the consumer contract may require a certain number of fulfilled requests. Once this condition is met, it will trigger the handleInferenceResponse function to proceed with further actions. Based on the aggregated insights, it will determine whether to execute the trade or take another action.
```solidity
function requestTradeDecision(bytes calldata schema, uint64 accId, uint32 callbackGasLimit)
    external
    returns (uint256)
{
    uint256 requestId = requestInference(schema, accId, callbackGasLimit);
    emit TradeDecisionRequested(requestId, msg.sender);
    return requestId;
}

// For example:
function handleInferenceResponse(uint256 requestId, bytes calldata responseData) external {
    require(msg.sender == address(coordinator), "Only coordinator can call this function");
    // Process the response data to make a trade decision
    string memory decision = abi.decode(responseData, (string));
    if (keccak256(abi.encodePacked(decision)) == keccak256(abi.encodePacked("trade"))) {
        // Logic here
        emit TradeDecisionMade(requestId, "trade");
    } else if (keccak256(abi.encodePacked(decision)) == keccak256(abi.encodePacked("not trade"))) {
        // Logic here
        emit TradeDecisionMade(requestId, "not trade");
    }
}
```

Data Streams: Delivers continuous, real-time data for applications requiring instant updates.
Data Feeds: Consist of pre-packaged, processed data sets that are delivered at scheduled intervals, such as hourly, daily, or weekly. These feeds typically provide structured data, meaning the data is already organized into a clear format—such as tables, rows, and columns—and is often aggregated to offer insights or summary information. Data feeds are particularly useful when regular updates are needed but real-time data isn't critical. They are commonly employed in scenarios like market updates, financial reports, or periodic weather forecasts, where the focus is on trends and insights rather than immediate reactions.
Characteristics:
Structured and aggregated
Delivered at defined intervals (e.g., every hour, day)
Used for historical analysis, insights, and trend monitoring
Easier to process due to its structured format
Example Providers:
CoinMarketCap: Provides cryptocurrency pricing, market cap, and volume data at regular intervals, useful for tracking trends in the digital asset space.
Binance: Offers structured historical data feeds for cryptocurrency prices, trading volumes, and order books, often used by traders and analysts.
Bloomberg: Supplies pre-packaged financial data at regular intervals, including stock indices, commodities, and market insights.
Before data providers can contribute to the ADCS as a Data Hub, they must first complete their registration through the dashboard using their wallet or any authentication method capable of signing the data, which ensures secure and verifiable identity management.
After signing and registering the wallet, they can start providing data.
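A sketch of what wallet-based registration signing could look like with ethers.js (the payload fields and submission flow are assumptions; the text above only specifies that the wallet must sign the data):

```typescript
import { Wallet } from "ethers";

async function registerProvider(): Promise<{ payload: string; signature: string; address: string }> {
  const wallet = new Wallet(process.env.PROVIDER_PRIVATE_KEY!);

  // Hypothetical registration payload; exact fields are dashboard-specific.
  const payload = JSON.stringify({
    provider: "MyDataFeed",
    endpoint: "https://example.com/feed",
  });

  // Signing proves control of the wallet, giving a verifiable identity.
  const signature = await wallet.signMessage(payload);
  return { payload, signature, address: wallet.address };
}
```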
Validators evaluate the data submitted by providers by executing the same processes as the providers. If the results match, it indicates that the provider is performing well and will receive a positive score. The reputation score is determined by the number of accurate submissions made by the provider; higher accuracy contributes to a positive growth in reputation. Conversely, if the results do not align, the provider will be flagged as potentially malicious.
Once validators have compared the data submitted by providers with their own results, they reach a consensus on the accuracy of the submissions. After this validation, proof is generated to demonstrate that the data has been assessed and found to be accurate. This proof includes hashes of the validated data, along with relevant metadata, and is subsequently submitted on-chain to ensure transparency and immutability. By storing this proof on the blockchain, the integrity of the validation process is preserved, allowing for easy verification and accountability within the ADCS framework.
Participating as a Provider within ADCS offers several advantages:
Monetization Opportunities - Providers can monetize their data by selling or renting their data feeds and streams within the Intel Market, creating revenue streams from their data assets.
Enhanced Trust and Reliability - The ADCS Reputation System ensures that only trusted and reliable data is utilized, increasing the value and trustworthiness of the providers’ data offerings.
Scalability and Flexibility - Providers can offer a wide range of data types and formats, catering to diverse applications and industries within the ADCS network. This flexibility allows Data Providers to scale their offerings based on demand and market needs.
By contributing high-quality data, Data Providers play a pivotal role in maintaining a robust and reliable data ecosystem, empowering AI Agents to make informed and accurate decisions across various domains.
Inference Data refers to the results or outputs produced by an artificial intelligence (AI) model after processing raw input data. This data can be used for various applications, including decision-making, recommendations, and identifying patterns in large datasets. In the context of AI-powered systems, Inference Data is crucial because it reflects how an AI model interprets and responds to real-world data.
Characteristics:
Varied Formats: It can come in various formats, including numerical values, categorical labels, probabilities, or even more complex outputs like text or images.
Driven by Input Data: It is highly dependent on the input it receives. A slight change in the input can lead to different outputs.
Dynamic and Adaptive: Inference Data can change as new input data is processed by the model. AI systems adapt to new patterns, and their output may evolve over time as more data is fed into the model, improving accuracy and predictions.
Providers:
For Inference providers, we currently support the following:
Llama
Gemini
Upcoming providers include:
Anthropic
OpenAI
Saving vector data can be an expensive process, so credits are required to create a knowledge base. However, we've got you covered! When you create an account, you'll receive 500 USD in free credits. Visit here to learn how to claim your free credits.
Select the document you need to upload in PDF format.
An API secret key to authenticate. If you don't have one, please refer to the "Get API Secret key" section to obtain it.
To install the SDK, run the following command for your preferred language:
Python
Nodejs
To initialize the client with your secret token, you have two options:
Using a .env file with SECRET_TOKEN variable
Passing the secret token directly to the client during initialization
Python: Use the python-dotenv package to load the token from .env file
Nodejs: Use the dotenv package to load the token from .env file
Use the create_rag_knowledge_base method to create a knowledge base. This method takes the path to the PDF document and the name of the knowledge base as arguments.
This method will return the knowledge base details as a JSON object, including the unique knowledge base ID. You can use this ID for future queries on the knowledge base.
The embedding process may take some time depending on the size of the document. It runs asynchronously, and you can check when it is done by polling the status of the knowledge base.
Python
Nodejs
Congratulations! You have successfully created your first knowledge base.
```bash
# Python
pip install rivalz-client

# Node.js
npm install rivalz-client
```

```python
# Python: initialize the client using a .env file
import os

from dotenv import load_dotenv
from rivalz_client.client import RivalzClient

load_dotenv()

# Get the secret token from environment variables
secret_token = os.getenv('SECRET_TOKEN')
if not secret_token:
    raise ValueError("SECRET_TOKEN is not set in the environment variables.")

# Initialize the RivalzClient with the secret token
client = RivalzClient(secret_token)
```

```javascript
// Node.js: initialize the client using a .env file
import RivalzClient from 'rivalz-client';
import dotenv from 'dotenv';

dotenv.config();

const rivalzClient = new RivalzClient(process.env.SECRET_TOKEN);
```

```python
# Python: create knowledge base
knowledge_base = client.create_rag_knowledge_base('sample.pdf', 'knowledge_base_name')
print(knowledge_base)  # print the knowledge base details
```

```javascript
// Node.js: create knowledge base
const knowledgeBase = await rivalzClient.createRagKnowledgeBase('sample.pdf', 'knowledge_base_name');
console.log(knowledgeBase); // print the knowledge base details
```

```python
# Python: check the status of the knowledge base
knowledge_base = client.create_rag_knowledge_base('sample.pdf', 'knowledge_base_name')
print(knowledge_base)  # you will get the knowledge base id with the status 'processing'

knowledge_base = client.get_knowledge_base(knowledge_base['id'])
print(knowledge_base['status'])  # you will get 'ready' when the process is done
```

```javascript
// Node.js: check the status of the knowledge base
const knowledgeBase = await rivalzClient.createRagKnowledgeBase('sample.pdf', 'knowledge_base_name');
console.log(knowledgeBase); // at this point the status will be 'processing'

const knowledgeBaseStatus = await rivalzClient.getKnowledgeBase(knowledgeBase.id);
console.log(knowledgeBaseStatus.status); // you will get 'ready' when the process is done
```

```python
# main.py
import os
import time

from dotenv import load_dotenv
from rivalz_client.client import RivalzClient

def main():
    # Load environment variables from .env file
    load_dotenv()

    # Get the secret token from environment variables
    secret_token = os.getenv('SECRET_TOKEN')
    if not secret_token:
        raise ValueError("SECRET_TOKEN is not set in the environment variables.")

    # Initialize the RivalzClient with the secret token
    client = RivalzClient(secret_token)

    # create knowledge base
    knowledge_base = client.create_rag_knowledge_base('sample.pdf', 'knowledge_base_name')
    print(knowledge_base)  # print the knowledge base details

    # sleep for 5 seconds to allow the process to finish; you can poll in a loop instead
    time.sleep(5)

    # check the status of the knowledge base
    knowledge_base = client.get_knowledge_base(knowledge_base['id'])
    print(knowledge_base['status'])  # you will get 'ready' when the process is done

if __name__ == '__main__':
    main()
```

```javascript
// main.js
import RivalzClient from 'rivalz-client';
import dotenv from 'dotenv';

dotenv.config();

async function main() {
    const rivalzClient = new RivalzClient(process.env.SECRET_TOKEN);

    const knowledgeBase = await rivalzClient.createRagKnowledgeBase('sample.pdf', 'knowledge_base_name');
    console.log(knowledgeBase); // print the knowledge base details

    // sleep for 5 seconds to allow the process to finish; you can poll in a loop instead
    await new Promise(resolve => setTimeout(resolve, 5000));

    // check the status of the knowledge base
    const knowledgeBaseStatus = await rivalzClient.getKnowledgeBase(knowledgeBase.id);
    console.log(knowledgeBaseStatus.status); // you will get 'ready' when the process is done
}

main();
```
rivalz-client is a Python client designed for interacting with the Rivalz API. It enables developers to upload, download, and manage files on the Rivalz platform using IPFS (InterPlanetary File System).

Upload Files: Upload any file to the Rivalz platform and get an IPFS hash.
Upload Passport Images: Upload passport images to the Rivalz platform.
Download Files: Download files from the Rivalz platform using an IPFS hash.
Delete Files: Delete files from the Rivalz platform using an IPFS hash.
Vectorize Documents: Vectorize documents to create a RAG (Retrieval-Augmented Generation) based on the document uploaded.
Create conversations: Create conversations based on the document uploaded.
You can install the rivalz-client package via pip using the following command:
This guide provides detailed instructions on how to use the rivalz-client to interact with the Rivalz API.
First, import the RivalzClient class and initialize it with your secret token. If you don’t provide a token, it will use a default example token.
To upload a file to Rivalz, use the upload_file method. Simply provide the path to the file you want to upload as an argument.
To upload a passport image, use the upload_passport method. Provide the path to the passport image file.
To download a file, use the download_file method with the IPFS hash of the file and the directory where you want to save the file.
To delete a file, use the delete_file method with the IPFS hash of the file you want to delete.
To get your uploaded files, use the get_upload_history method. This method accepts page and size parameters for pagination. Pages start from 0; the default is page 0 (the first page) with a size of 10.
Before using the RAG API, you need an API key and some Rivalz credits. Claim them for free here.
To vectorize a document and create a knowledge base for Retrieval-Augmented Generation (RAG), use the create_rag_knowledge_base method, which takes the document's file path as input. This method generates a vectorized embedding of the document, assigns it a knowledge base ID, and stores it for future use in RAG-based conversations. Currently, this process supports only PDF files.
To add a document to an existing knowledge base, use the add_document_to_knowledge_base method. This method requires the knowledge base ID (from the knowledge base you’ve already created) and the file path of the new document.
To delete a document from an existing knowledge base, use the delete_document_from_knowledge_base method with the knowledge base id and the document name.
To get all knowledge bases, use the get_knowledge_bases method.
To get details of a knowledge base, use the get_knowledge_base method with the knowledge base id.
To initiate a conversation in the RAG (Retrieval Augmented Generation) system, use the create_chat_session method. This method requires the knowledge base ID (from your existing knowledge base) and the question you want to ask. The AI will return a response based on the context provided by the knowledge base, along with a chat session ID to continue the conversation if needed.
To add a message to a conversation, use the same method create_chat_session with the chat session id and the message.
To get all conversations, use the get_chat_sessions method.
To get details of a conversation (which contains chat history for this conversation), use the get_chat_session method with the chat session id.
To get all uploaded documents, use the get_uploaded_documents method.
Here is a complete example demonstrating how to use the rivalz-client to create a simple RAG conversation based on a PDF document:
```bash
pip install rivalz-client
```

```python
from rivalz_client.client import RivalzClient

# Initialize the client with your secret token
client = RivalzClient('your_secret_token')
```

```python
response = client.upload_file('path/to/your/file.txt')
print(response)
```

```python
response = client.upload_passport('path/to/your/passport_image.jpg')
print(response)
```

```python
file_path = client.download_file('QmSampleHash', 'save/directory')
print(f"File downloaded to: {file_path}")
```

```python
response = client.delete_file('QmSampleHash')
print(response)
```

```python
total_files_uploaded, upload_histories = client.get_upload_history(0, 10)
print(f"Total files uploaded: {total_files_uploaded}")
print(f"Upload histories: {upload_histories}")
```

```python
response = client.create_rag_knowledge_base('path/to/your/document.pdf', 'knowledge_base_name')
print(response)
# {'id': '66fa5bf022e73c17073768f0', 'name': 'test', 'files': '1727683567711_sample.pdf', 'userId': '66c4151c98bd0d3d47de682a'}
```

```python
response = client.add_document_to_knowledge_base('path/to/your/document.pdf', 'knowledge_base_id')
print(response)
```

```python
response = client.delete_document_from_knowledge_base('document_id', 'knowledge_base_id')
print(response)
```

```python
response = client.get_knowledge_bases()
print(response)
```

```python
response = client.get_knowledge_base('knowledge_base_id')
print(response)
```

```python
response = client.create_chat_session('knowledge_base_id', 'question')
print(response)
# {'answer': 'Hello! How can I help you today? \n', 'session_id': '66fa625fb58f5a4b9a30b983', 'userId': '66c4151c98bd0d3d47de682a'}
```

```python
response = client.create_chat_session('knowledge_base_id', 'message', 'chat_session_id')
print(response)
```

```python
response = client.get_chat_sessions()
print(response)
```

```python
response = client.get_chat_session('chat_session_id')
print(response)
```

```python
response = client.get_uploaded_documents()
print(response)
```

```python
# main.py
import os
import time

from dotenv import load_dotenv
from rivalz_client.client import RivalzClient

def main():
    # Load environment variables from .env file
    load_dotenv()

    # Get the secret token from environment variables
    secret_token = os.getenv('SECRET_TOKEN')
    if not secret_token:
        raise ValueError("SECRET_TOKEN is not set in the environment variables.")

    # Initialize the RivalzClient with the secret token
    client = RivalzClient(secret_token)

    # create knowledge base
    knowledge_base = client.create_rag_knowledge_base('sample.pdf', 'knowledge_base_name')
    knowledge_base_id = knowledge_base['id']
    if knowledge_base['status'] == 'processing':
        print('Knowledge base is processing')
        # sleep for 5 seconds
        time.sleep(5)

    # create conversation
    conversation = client.create_chat_session(knowledge_base_id, 'what is the document about?')
    conversation_id = conversation['session_id']

    # add message to conversation
    conversation = client.create_chat_session(knowledge_base_id, 'what is the document about?', conversation_id)
    print(conversation['answer'])

if __name__ == '__main__':
    main()
```

Upload Files: Upload any file to the Rivalz platform and get an IPFS hash.
Upload Passport Images: Upload passport images to the Rivalz platform.
Download Files: Download files from the Rivalz platform using an IPFS hash.
Delete Files: Delete files from the Rivalz platform using an IPFS hash.
Vectorize Documents: Vectorize documents to create a RAG (Retrieval-Augmented Generation) based on the document uploaded.
Create conversations: Create conversations based on the document uploaded.
Before getting started, ensure that you have both Node.js and either npm or yarn installed. These are essential for managing the Rivalz client dependencies.
To install the Rivalz client, run one of the following commands:
After installing the package, proceed to the Rivalz Dashboard to generate your encryption key and secret key:
Encryption Key: Used for encrypting files to ensure data security.
Secret Key: Required for authenticating API requests to access Rivalz services.
Import and use the RivalzClient class in your TypeScript/JavaScript code:
rivalzClient.uploadFile(file, fileName)
file: A readable stream of the file to be uploaded.
Returns a promise that resolves to the IPFS hash of the uploaded file.

rivalzClient.uploadPassport(file)
file: A readable stream of the file to be uploaded.
Returns a promise that resolves to the IPFS hash of the uploaded file.

rivalzClient.downloadFile(ipfsHash, savePath)
ipfsHash: The IPFS hash of the file to be downloaded.
savePath: The path where the downloaded file will be saved.
Returns a promise that resolves to the path of the saved file.

rivalzClient.download(ipfsHash)
ipfsHash: The IPFS hash of the file to be downloaded.
Returns a promise that resolves to a buffer containing the downloaded file.

rivalzClient.deleteFile(ipfsHash)
ipfsHash: The IPFS hash of the file to be deleted.
Returns a promise that resolves to the IPFS hash of the deleted file.

rivalzClient.getUploadedHistory(page, size)
page: The page number of the uploaded history.
size: The number of items per page.
Returns a promise that resolves to an array of uploaded files.
Prerequisites
Before using the RAG API, you need an API key and some Rivalz credits. Claim them for free here.
Creating a knowledge base from a document
To vectorize a document and create a knowledge base for Retrieval-Augmented Generation (RAG), use the createRagKnowledgeBase method, which takes the document's file path as input. This method generates a vectorized embedding of the document, assigns it a knowledge base ID, and stores it for future use in RAG-based conversations. Currently, this process supports only PDF files.
To add a document to an existing knowledge base, use the addDocumentToKnowledgeBase method with the knowledge base id and the path to the document.
To delete a document from an existing knowledge base, use the deleteDocumentFromKnowledgeBase method with the knowledge base id and the document name.
To get all knowledge bases, use the getKnowledgeBases method.
To get details of a knowledge base, use the getKnowledgeBase method with the knowledge base id.
To initiate a conversation in the RAG (Retrieval Augmented Generation) system, use the createChatSession method. This method requires the knowledge base ID (from your existing knowledge base) and the question you want to ask. The AI will return a response based on the context provided by the knowledge base, along with a chat session ID to continue the conversation if needed.
To add a message to a conversation, use the same method createChatSession with the chat session id and the message.
To get all conversations, use the getChatSessions method.
To get details of a conversation (which contains chat history for this conversation), use the getChatSession method with the chat session id.
To get all uploaded documents, use the getUploadedDocuments method.
Here is a complete example demonstrating how to use the rivalz-client to create a simple RAG conversation based on a PDF document:
# Using npm
npm install rivalz-client
# Or, using yarn
yarn add rivalz-client
import RivalzClient from 'rivalz-client';
const rivalzClient = new RivalzClient('your-secret-key');rivalzClient.uploadFile(file,fileName)rivalzClient.uploadPassport(file)rivalzClient.downloadFile(ipfsHash, savePath)rivalzClient.download(ipfsHash)rivalzClient.deleteFile(ipfsHash)rivalzClient.getUploadedHistory(page, size)```javascript
const RivalzClient = require('rivalz-client');
const fs = require('node:fs');
const rivalzClient = new RivalzClient('your-secret-key');
async function uploadFile() {
const filePath = 'file_path';
const buffer = fs.readFileSync(filePath)
const fileName = filePath.split('/').pop();
try {
const filelog = await rivalzClient.uploadFile(buffer,fileName);
console.log(filelog);
} catch (error) {
console.error('Error uploading file:', error);
}
}
```

Create a knowledge base:

```javascript
const response = await client.createRagKnowledgeBase('path/to/your/document.pdf', 'knowledge_base_name');
console.log(response);
// {'id': '66fa5bf022e73c17073768f0', 'name': 'test', 'files': '1727683567711_sample.pdf', 'userId': '66c4151c98bd0d3d47de682a'}
```

Add a document to an existing knowledge base:

```javascript
const response = await client.addDocumentToKnowledgeBase('path/to/your/document.pdf', 'knowledge_base_id');
console.log(response);
```

Delete a document from a knowledge base:

```javascript
const response = await client.deleteDocumentFromKnowledgeBase('document_id', 'knowledge_base_id');
console.log(response);
```

List all knowledge bases:

```javascript
const response = await client.getKnowledgeBases();
console.log(response);
```

Get details of a knowledge base:

```javascript
const response = await client.getKnowledgeBase('knowledge_base_id');
console.log(response);
```

Start a conversation:

```javascript
const response = await client.createChatSession('knowledge_base_id', 'question');
console.log(response);
// {'answer': 'Hello! How can I help you today? \n', 'session_id': '66fa625fb58f5a4b9a30b983', 'userId': '66c4151c98bd0d3d47de682a'}
```

Continue an existing conversation:

```javascript
const response = await client.createChatSession('knowledge_base_id', 'message', 'chat_session_id');
console.log(response);
```

List all conversations:

```javascript
const response = await client.getChatSessions();
console.log(response);
```

Get details (chat history) of a conversation:

```javascript
const response = await client.getChatSession('chat_session_id');
console.log(response);
```

List all uploaded documents:

```javascript
const response = await client.getUploadedDocuments();
console.log(response);
```

Here is a complete example demonstrating how to use the rivalz-client to create a simple RAG conversation based on a PDF document:

```javascript
/*
main.ts
*/
import RivalzClient from 'rivalz-client';

const main = async () => {
  // Initialize the RivalzClient with the secret token
  const client = new RivalzClient('your-secret-key');

  // Create a knowledge base from a PDF document
  const knowledgeBase = await client.createRagKnowledgeBase('sample.pdf', 'knowledge_base_name');
  const knowledgeBaseId = knowledgeBase.id;
  if (knowledgeBase.status !== 'ready') {
    console.log('Knowledge base is still processing');
    // wait for 5 seconds to allow the process to finish
    await new Promise(resolve => setTimeout(resolve, 5000));
  }

  // Create a conversation
  let conversation = await client.createChatSession(knowledgeBaseId, 'what is the document about?');
  const conversationId = conversation.session_id;

  // Add a message to the conversation
  conversation = await client.createChatSession(knowledgeBaseId, 'What is a RAG application?', conversationId);
  console.log(conversation.answer);
};
main();
```

To help you better understand how an ADCS Dapp works, let's walk through the following example together: the Meme Coin Trend application.
The Meme Coin Trend application utilizes cutting-edge AI and blockchain technology to monitor trending meme coins and analyze market data in real time. With an integrated AI agent, the platform autonomously trades meme coins using insights gathered from the ADCS network, ensuring efficient and data-driven trading decisions.
See the documentation to learn how to create an adaptor.
You need to define an adapter that outlines the necessary parameters:
Provider: Meme Coin Trend
Network: Rivalz
Category: Meme Coin
Output Type: StringAndBool
Once an adaptor is created, the system will generate a unique JobID.
0xd7....2352 is the JobID for your Meme Coin Trend Adaptor.
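Conceptually, the adapter record ties those fields to the generated JobID. As a purely illustrative sketch (the field names below are assumptions, not the actual ADCS adapter schema):

```javascript
// Illustrative only: these field names are assumptions, not the real ADCS schema.
const adapter = {
  provider: 'Meme Coin Trend',
  network: 'Rivalz',
  category: 'Meme Coin',
  outputType: 'StringAndBool',
  jobId: '0xd7....2352', // generated by the system when the adaptor is created
};
```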
Depending on the type of outputData you have defined in the adaptor, your consumer contract must inherit from the appropriate ADCS fulfillment contract. Here the outputData is StringAndBool, so the consumer contract inherits from ADCSConsumerFulfillStringAndBool. The code snippets below walk through each piece of the contract.
Struct
MemeCoin is a custom data type used to represent a meme coin with three properties: its name, contract address, and decimals.
Contract Variables:
The memeCoins variable is an array of MemeCoin structs, allowing the contract to store multiple meme coins.
Constructor:
ADCSConsumerBase is an abstract contract that serves as the base for consumer contracts interacting with the ADCS coordinator. It initializes data requests and verifies the fulfillment.
The constructor takes the Coordinator contract's address (_coordinator) as input and passes it to the base contract. The Coordinator manages the interaction between oracles and consumers, facilitating the flow of data requests and responses.
_weth: the address of the WETH (Wrapped Ether) token contract.
_swapRouter: the address of the Uniswap V3 swap router contract, used to interact with the Uniswap decentralized exchange (DEX) to swap tokens.
Request Functions:
This function initiates a data request to the ADCS network and takes the following parameters:
_callbackGasLimit: The maximum amount of gas the caller is willing to pay for the fulfillment callback.
_jobId: The JobID of the adaptor that the consuming contract is registered to.
The function uses the buildRequest() function to create a request, adds the necessary parameters, and sends it to the Coordinator.
Fulfillment Functions:
This function is called by the ADCS coordinator to fulfill the data request and trigger a trade.
TradeMemeCoin Function:
The tradeMemeCoin function executes the buy or sell trade on Uniswap V3 based on the data received and triggered by the fulfillDataRequest function.
After processing the received data, the tradeMemeCoin function is called with the tokenName and the result (true/false).
If result is true, the contract performs a buy trade by swapping WETH (Wrapped Ether) for the specified meme coin on Uniswap V3.
If result is false, the contract performs a sell trade by swapping the meme coin for WETH on Uniswap V3.
addMemeCoin Function: Adds a new memecoin to the list of tradable tokens.
setWethAmountForTrade Function: Sets the amount of WETH to use for trading.
After defining all the necessary functions, the global state, and the Coordinator contract for your consumer contract, the next step is to deploy the contract to the blockchain.
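As a rough sketch of that step (assuming a Hardhat project; the coordinator, WETH, and swap-router addresses below are placeholders for the values on your target network):

```javascript
const { ethers } = require('hardhat');

async function main() {
  // Placeholder addresses: substitute the ADCS Coordinator, WETH,
  // and Uniswap V3 router deployed on your target network.
  const COORDINATOR = '0xYourCoordinatorAddress';
  const WETH = '0xYourWethAddress';
  const SWAP_ROUTER = '0xYourSwapRouterAddress';

  const factory = await ethers.getContractFactory('MockTradeMemeCoin');
  const consumer = await factory.deploy(COORDINATOR, WETH, SWAP_ROUTER);
  await consumer.waitForDeployment();

  console.log('Consumer deployed at:', await consumer.getAddress());
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});
```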
The system listens for requests from the consumer contract and identifies an appropriate AI to process the data.
The AI system processes the request, identifying the meme coin (e.g., "Shiba Inu") and making a recommendation (e.g., buy or sell).
The Coordinator contract receives the result from the AI system and calls the fulfillDataRequest() function in the consumer contract, passing the processed result (token name and recommendation).
The consumer contract processes the result in the fulfillDataRequest() function, which identifies the meme coin (via the name) and calls the tradeMemeCoin() function to execute the buy or sell action.
We've also built some examples to help you easily understand what you can do with the ADCS network.
Description (String): Get a trending meme coin and decide which meme coin should be bought.
Prompt: Retrieve the current trending meme coins and analyze market conditions to recommend which meme coin should be bought.
The contract interacts with the Uniswap V3 router to perform the trade, and once completed, the TradeSuccess event is emitted.
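Off-chain, you could watch for that event to confirm completed trades. A minimal sketch using ethers.js; the RPC URL and contract address are placeholders, and the ABI fragment assumes the event signature from the contract below:

```javascript
const { ethers } = require('ethers');

const provider = new ethers.JsonRpcProvider('https://your-rpc-url'); // placeholder RPC
const abi = ['event TradeSuccess(uint256 indexed requestId, uint256 amountIn, bool isBuy)'];
const consumer = new ethers.Contract('0xYourConsumerContract', abi, provider);

// Log every completed trade as it happens
consumer.on('TradeSuccess', (requestId, amountIn, isBuy) => {
  console.log(`Request ${requestId}: ${isBuy ? 'bought' : 'sold'} (amountIn=${amountIn})`);
});
```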
Contract declaration and imports:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "../ADCSConsumerFulfill.sol";
import "@openzeppelin/contracts/access/Ownable.sol";
import "@uniswap/v3-periphery/contracts/interfaces/ISwapRouter.sol";
import "@uniswap/v3-periphery/contracts/interfaces/IPeripheryPayments.sol";
import "@openzeppelin/contracts/token/ERC20/IERC20.sol";

contract MockTradeMemeCoin is ADCSConsumerFulfillStringAndBool, Ownable { }
```

The MemeCoin struct:

```solidity
struct MemeCoin {
    string name;
    address addr;
    uint8 decimals;
}
```

The contract state:

```solidity
MemeCoin[] public memeCoins;
```

The constructor:

```solidity
constructor(
    address _coordinator,
    address _weth,
    address _swapRouter
) ADCSConsumerBase(_coordinator) Ownable(msg.sender) {
    WETH = _weth;
    swapRouter = ISwapRouter(_swapRouter);
}
```

The request function:

```solidity
function requestTradeMemeCoin(
    bytes32 jobId,
    uint256 callbackGasLimit
) external returns (uint256 requestId) {
    bytes32 typeId = keccak256(abi.encodePacked("stringAndbool"));
    ADCS.Request memory req = buildRequest(jobId, typeId);
    requestId = COORDINATOR.requestData(callbackGasLimit, req);
    emit DataRequested(requestId);
    return requestId;
}
```

The fulfillment function:

```solidity
function fulfillDataRequest(
    uint256 requestId,
    StringAndBool memory response
) internal virtual override {
    string memory tokenName = response.name;
    bool result = response.response;
    // Find memecoin address and decimals by name, then trade
    tradeMemeCoin(requestId, tokenName, result);
}
```

The trade function:

```solidity
function tradeMemeCoin(uint256 requestId, string memory tokenName, bool result) internal {
    // Find memecoin address and decimals by name
    address memeTokenAddress;
    uint8 tokenDecimals;
    for (uint i = 0; i < memeCoins.length; i++) {
        if (keccak256(bytes(memeCoins[i].name)) == keccak256(bytes(tokenName))) {
            memeTokenAddress = memeCoins[i].addr;
            tokenDecimals = memeCoins[i].decimals;
            break;
        }
    }
    if (memeTokenAddress == address(0)) {
        emit MemecoinNotFound(tokenName);
        return;
    }
    // Execute trade through Uniswap V3
    if (result) {
        // buy memecoin with eth
        IERC20(WETH).approve(address(swapRouter), wethAmountForTrade);
        swapRouter.exactInputSingle(
            ISwapRouter.ExactInputSingleParams({
                tokenIn: WETH,
                tokenOut: memeTokenAddress,
                fee: 3000,
                recipient: address(this),
                deadline: block.timestamp + 15 minutes,
                amountIn: wethAmountForTrade,
                amountOutMinimum: 0,
                sqrtPriceLimitX96: 0
            })
        );
        emit TradeSuccess(requestId, wethAmountForTrade, true);
    } else {
        // sell memecoin for eth
        // First approve router to spend our tokens
        uint256 memeCoinAmountInWei = memeCoinAmount * (10 ** tokenDecimals);
        IERC20(memeTokenAddress).approve(address(swapRouter), memeCoinAmountInWei);
        swapRouter.exactInputSingle(
            ISwapRouter.ExactInputSingleParams({
                tokenIn: memeTokenAddress, // memecoin token
                tokenOut: WETH, // eth
                fee: 3000, // 0.3% fee tier
                recipient: address(this),
                deadline: block.timestamp + 15 minutes,
                amountIn: memeCoinAmountInWei,
                amountOutMinimum: 0, // should use proper slippage in production
                sqrtPriceLimitX96: 0
            })
        );
        emit TradeSuccess(requestId, memeCoinAmountInWei, false);
    }
}
```

The owner-only configuration functions:

```solidity
function addMemeCoin(string memory name, address addr, uint8 decimals) external onlyOwner {
    memeCoins.push(MemeCoin({name: name, addr: addr, decimals: decimals}));
}

function setWethAmountForTrade(uint256 amount) external onlyOwner {
    wethAmountForTrade = amount;
}
```
Putting it all together, here is the complete contract:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "../ADCSConsumerFulfill.sol";
import "@openzeppelin/contracts/access/Ownable.sol";
import "@uniswap/v3-periphery/contracts/interfaces/ISwapRouter.sol";
import "@uniswap/v3-periphery/contracts/interfaces/IPeripheryPayments.sol";
import "@openzeppelin/contracts/token/ERC20/IERC20.sol";

contract MockTradeMemeCoin is ADCSConsumerFulfillStringAndBool, Ownable {
    using ADCS for ADCS.Request;

    // Store the last received response for testing
    bytes public lastResponse;
    uint256 public lastRequestId;
    uint256 public wethAmountForTrade = 1000000000000000; // 0.001 WETH
    uint256 public memeCoinAmount = 100; // 100 memecoin

    struct MemeCoin {
        string name;
        address addr;
        uint8 decimals;
    }

    MemeCoin[] public memeCoins;

    event DataRequested(uint256 indexed requestId);
    event DataFulfilled(uint256 indexed requestId, bytes response);
    event MemecoinNotFound(string tokenName);
    event TradeSuccess(uint256 indexed requestId, uint256 amountIn, bool isBuy);

    address public immutable WETH;
    ISwapRouter public immutable swapRouter;

    constructor(
        address _coordinator,
        address _weth,
        address _swapRouter
    ) ADCSConsumerBase(_coordinator) Ownable(msg.sender) {
        WETH = _weth;
        swapRouter = ISwapRouter(_swapRouter);
    }

    function setWethAmountForTrade(uint256 amount) external onlyOwner {
        wethAmountForTrade = amount;
    }

    /**
     * @notice Add a new memecoin to the list
     * @param name The name of the memecoin
     * @param addr The contract address of the memecoin
     * @param decimals The decimals of the memecoin
     */
    function addMemeCoin(string memory name, address addr, uint8 decimals) external onlyOwner {
        memeCoins.push(MemeCoin({name: name, addr: addr, decimals: decimals}));
    }

    /**
     * @notice Get the total number of memecoins in the list
     * @return The length of the memecoins array
     */
    function getMemeCoinCount() external view returns (uint256) {
        return memeCoins.length;
    }

    /**
     * @notice Get a memecoin by index
     * @param index The index in the memecoins array
     * @return name The memecoin name
     * @return addr The memecoin contract address
     * @return decimals The decimals of the memecoin
     */
    function getMemeCoin(
        uint256 index
    ) external view returns (string memory name, address addr, uint8 decimals) {
        require(index < memeCoins.length, "Index out of bounds");
        MemeCoin memory coin = memeCoins[index];
        return (coin.name, coin.addr, coin.decimals);
    }

    // Function to request data from the ADCS network
    function requestTradeMemeCoin(
        bytes32 jobId,
        uint256 callbackGasLimit
    ) external returns (uint256 requestId) {
        bytes32 typeId = keccak256(abi.encodePacked("stringAndbool"));
        ADCS.Request memory req = buildRequest(jobId, typeId);
        requestId = COORDINATOR.requestData(callbackGasLimit, req);
        emit DataRequested(requestId);
        return requestId;
    }

    function fulfillDataRequest(
        uint256 requestId,
        StringAndBool memory response
    ) internal virtual override {
        string memory tokenName = response.name;
        bool result = response.response;
        // Find memecoin address and decimals by name, then trade
        tradeMemeCoin(requestId, tokenName, result);
    }

    function tradeMemeCoin(uint256 requestId, string memory tokenName, bool result) internal {
        // Find memecoin address and decimals by name
        address memeTokenAddress;
        uint8 tokenDecimals;
        for (uint i = 0; i < memeCoins.length; i++) {
            if (keccak256(bytes(memeCoins[i].name)) == keccak256(bytes(tokenName))) {
                memeTokenAddress = memeCoins[i].addr;
                tokenDecimals = memeCoins[i].decimals;
                break;
            }
        }
        if (memeTokenAddress == address(0)) {
            emit MemecoinNotFound(tokenName);
            return;
        }
        // Execute trade through Uniswap V3
        if (result) {
            // buy memecoin with eth
            IERC20(WETH).approve(address(swapRouter), wethAmountForTrade);
            swapRouter.exactInputSingle(
                ISwapRouter.ExactInputSingleParams({
                    tokenIn: WETH,
                    tokenOut: memeTokenAddress,
                    fee: 3000,
                    recipient: address(this),
                    deadline: block.timestamp + 15 minutes,
                    amountIn: wethAmountForTrade,
                    amountOutMinimum: 0,
                    sqrtPriceLimitX96: 0
                })
            );
            emit TradeSuccess(requestId, wethAmountForTrade, true);
        } else {
            // sell memecoin for eth
            // First approve router to spend our tokens
            uint256 memeCoinAmountInWei = memeCoinAmount * (10 ** tokenDecimals);
            IERC20(memeTokenAddress).approve(address(swapRouter), memeCoinAmountInWei);
            swapRouter.exactInputSingle(
                ISwapRouter.ExactInputSingleParams({
                    tokenIn: memeTokenAddress, // memecoin token
                    tokenOut: WETH, // eth
                    fee: 3000, // 0.3% fee tier
                    recipient: address(this),
                    deadline: block.timestamp + 15 minutes,
                    amountIn: memeCoinAmountInWei,
                    amountOutMinimum: 0, // Set minimum amount out to 0 (should use proper slippage in production)
                    sqrtPriceLimitX96: 0
                })
            );
            emit TradeSuccess(requestId, memeCoinAmountInWei, false);
        }
    }

    receive() external payable {}

    function withdraw() external onlyOwner {
        payable(owner()).transfer(address(this).balance);
    }

    function withdrawToken(address token) external onlyOwner {
        IERC20(token).transfer(owner(), IERC20(token).balanceOf(address(this)));
    }
}
```
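Once the contract is deployed and configured (via addMemeCoin and setWethAmountForTrade), you can trigger a request from a script. A minimal sketch using ethers.js; the contract address, RPC URL, JobID, and gas limit below are placeholders:

```javascript
const { ethers } = require('ethers');

// Placeholders: your deployed consumer contract, your network's RPC URL,
// and the bytes32 JobID generated when your adaptor was created.
const CONSUMER_ADDRESS = '0xYourConsumerContract';
const JOB_ID = '0xYourJobId';
const abi = [
  'function requestTradeMemeCoin(bytes32 jobId, uint256 callbackGasLimit) external returns (uint256)',
];

async function main() {
  const provider = new ethers.JsonRpcProvider('https://your-rpc-url');
  const signer = new ethers.Wallet(process.env.PRIVATE_KEY, provider);
  const consumer = new ethers.Contract(CONSUMER_ADDRESS, abi, signer);

  // Trigger the data request; the Coordinator will later call
  // fulfillDataRequest() on the contract with the AI's recommendation.
  const tx = await consumer.requestTradeMemeCoin(JOB_ID, 500000);
  await tx.wait();
  console.log('Request submitted:', tx.hash);
}

main().catch(console.error);
```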

