Technical Stack

I. Overview

1. What is Cybera?

Cybera is the name we use for the AI agents on Berally that automatically perform actions for users. An agent can do practically anything, as long as it has instructions and the plugins that give it access to the required connections and services. With different instructions and guidelines, each AI agent has its own identity and performs best at certain tasks.

Within the scope of Berally, the AI agent is designed to be a crypto trading companion that analyzes data and makes trading decisions on behalf of its owner. A few key points about the Berally AI:

  1. The Agent has its own Account Abstraction wallet (AA wallet). This wallet is created and linked directly to the Agent. The owner can control it through a smart contract from their own self-custodial wallet. With this setup, the agent can perform on-chain transactions without requiring the owner's signature for every transaction.

  2. The Agent acts as instructed by the owner and can self-learn from past actions to evolve. All conversations and trading activities are recorded in a database and used as training material.

  3. The Agent can access real-time data from multiple sources through plugins that connect it to the services it needs (such as Google, CoinGecko, TradingView, etc.).

2. Use Cases of Cybera

The use cases of AI agents are limited only by the imagination of their users. In Berally's case, we focus on building Cybera into a crypto fund manager and KOL. We designed it so that an ordinary user, without any knowledge of coding or machine learning (ML), can easily create an agent using natural language. Here are some major use cases of Cybera:

  • Community Building:

With the ability to post, reply, and repost on X (Twitter), Cybera can automatically keep track of recent events and share its knowledge with others. Through this, it can build its own community without any involvement from its owner. Of course, a Cybera's success will ultimately reflect the identity its creator has given it.

  • Research on Crypto Projects:

Cybera can perform deep research on emerging and existing crypto projects, analyzing whitepapers, tokenomics, team background, and community sentiment to provide insights and recommendations.

  • Market Tracking and Recommendations:

Unlike humans, Cybera can be active 24/7, tracking the latest news and token prices. It can therefore provide users with recommendations with minimal delay.

  • Automated Trading for Profit:

With enough trials and testing, a Cybera can actively trade on behalf of its owner. At the start it acts on the owner's instructions, but over time it can evolve and refine the trading strategies it was given.


II. Technical Stack

1. System Overview

This system is built for multi-agent crypto trading, with each agent specializing in a specific type of market analysis (technical, fundamental, sentiment, etc.). A central Orchestrator manages data flows, triggers, and agent coordination, while an LLM and a Performance Evaluator provide deeper insights and ongoing improvement. The goal is to automate data gathering, signal generation, risk management, and trade execution, continuously learning from real outcomes.

To ensure flexibility and scalability, the Cybera system is built with a modular design. Each component operates independently while seamlessly integrating with others, making it highly customizable. This modularity allows developers to replace, upgrade, or extend different parts of the system without disrupting the entire framework. The long-term vision is to evolve this into an open-source framework, enabling the community to create AI agents tailored to their unique needs. Whether for trading, research, automation, or new use cases yet to be imagined, users will have complete freedom to shape their agents as they see fit.

Unlike simple GPT wrappers, our system is designed with built-in machine learning capabilities that allow agents to continuously improve over time. With a dedicated performance evaluation and fine-tuning process, each agent learns from past decisions, real-world data, and user interactions. This ensures that over time, the system becomes more adaptive, efficient, and capable of making smarter decisions. Continuous learning is at the core of our approach, making the agent not just a tool but a self-improving AI.

Flow Summary:

1. Collection: Data is ingested via tools such as APIs or web scraping.
2. Cleaning & Storage: Ensures reliable, uniform data.
3. Triggers: Notify the Orchestrator of market events.
4. Orchestrator: Delegates tasks to Agents or the LLM.
5. Multi-Agent Trading: Analyst, Researcher, Sentiment → Trader → Risk Manager → Manager → Execution.
6. Performance Evaluator: Logs outcomes.
7. Model Fine-Tuning: Adapts agent models using feedback.
8. Chatbot & Content Agents: Provide updates and user engagement.
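
As a rough sketch, the entire loop can be pictured in a few lines of Python. Every function below is a stand-in for the corresponding subsystem, not an actual Cybera API; trivial stubs keep the sketch runnable:

```python
# Hypothetical skeleton of the Cybera flow. Each function is a placeholder
# for a subsystem described above, stubbed out so the sketch runs end to end.

def collect_data():                     # 1. Collection (APIs, scraping)
    return [{"symbol": "BTC", "price": 42_000.0}]

def clean_and_store(raw):               # 2. Cleaning & Storage
    return raw

def check_triggers(data):               # 3. Triggers -> market events
    return [{"type": "price_move", "data": d} for d in data]

def orchestrate(event):                 # 4. Orchestrator delegates work
    return {"task": "analyze", "event": event}

def run_agents(plan):                   # 5. Multi-agent trading pipeline
    return {"action": "hold", "reason": "no strong signal"}

def log_outcome(result):                # 6. Performance Evaluator
    print("logged:", result)

for event in check_triggers(clean_and_store(collect_data())):
    log_outcome(run_agents(orchestrate(event)))
```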


2. Data Collection Tools

These tools and approaches pull data from various online sources through official APIs, web scraping, or advanced automation.

Search Engine

  • Pros: Broad coverage; quick discovery

  • Cons: Limited access to structured data

  • Use Cases: Research; link gathering

REST API

  • Pros: Provides structured data

  • Cons: Limited scope

  • Use Cases: Official exchange data; aggregator feeds

HTML parsing (BeautifulSoup, Requests, etc.)

  • Pros: Simple; fast; minimal overhead

  • Cons: No JavaScript rendering; can break if site structure changes

  • Use Cases: Scraping static pages; quick data extraction

Browser automation (Selenium, Puppeteer, etc.)

  • Pros: Supports JavaScript; can simulate user interactions

  • Cons: Resource-intensive; slower than direct requests; may be affected by anti-bot measures

  • Use Cases: Interactive web pages (SPAs); automated workflows requiring login

Chrome Extension

  • Pros: Works in environments with stricter access controls

  • Cons: Can be unstable depending on site updates

  • Use Cases: Data collection on sites with additional security layers; real-time automation tasks
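
For example, a minimal static-page scraping sketch with Requests and BeautifulSoup; the URL and CSS selector are placeholders, and a real target would need its own selectors plus attention to rate limits and terms of service:

```python
# Static-page scraping sketch using Requests + BeautifulSoup.
# URL and selector are placeholders for illustration only.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/markets", timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
for row in soup.select("table.prices tr"):      # hypothetical selector
    cells = [td.get_text(strip=True) for td in row.find_all("td")]
    if cells:
        print(cells)
```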


3. Data Sources

3.1 Market

  • Examples: CoinGecko, Dexscreener, CoinMarketCap, etc.

  • Data Provided: Current prices, historical OHLC (Open-High-Low-Close), trading volumes, order books.

  • Relevance: Supplies real-time and historical market metrics for short-term and long-term analysis.
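
As a concrete example of structured market data over REST, the sketch below queries CoinGecko's public simple-price endpoint (a documented v3 endpoint; no API key is needed for light use, though rate limits apply):

```python
# Fetch spot prices from CoinGecko's public v3 REST API.
import requests

url = "https://api.coingecko.com/api/v3/simple/price"
params = {"ids": "bitcoin,ethereum", "vs_currencies": "usd"}

resp = requests.get(url, params=params, timeout=10)
resp.raise_for_status()
print(resp.json())   # e.g. {"bitcoin": {"usd": ...}, "ethereum": {"usd": ...}}
```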

3.2 On-Chain Data

  • Examples: Glassnode, CryptoQuant

  • Data Provided: Metrics on exchange inflows/outflows, miner stats, staking data, large wallet activities (so-called “whale” moves).

  • Relevance: Helps detect insider trends, liquidity shifts, and potential buy/sell pressure from large holders.

3.3 Insider Tx

  • Examples: Whale-alert, Dune Analytics

  • Data Provided: Team token allocations, vesting schedules, whale transactions, possible unlock events.

  • Relevance: Alerts the system to major token unlocks or large wallet movements that can shift supply/demand dynamics.

3.4 Social Media

  • Examples: Twitter/X, Reddit, Discord, Telegram

  • Data Provided: Community sentiment, trending hashtags, user opinions.

  • Relevance: Gauges market psychology (fear, hype, FOMO), offering early signals of bullish or bearish shifts.

3.5 News & Macro

  • Examples: CoinDesk, Cointelegraph, Bloomberg, Reuters

  • Data Provided: Breaking news, regulatory updates, macroeconomic announcements (interest rates, inflation).

  • Relevance: Macro changes or major headlines can trigger rapid market shifts (e.g., SEC rulings, global economic policies).

3.6 Fundamentals

  • Examples: Official project websites, whitepapers, GitHub repositories

  • Data Provided: Tokenomics, roadmap progress, developer activity, partnership announcements.

  • Relevance: Indicates the core value proposition of a crypto project, revealing long-term viability or growth potential.


4. Data Processing & Storage

4.1 Data Cleaner

Purpose:

  • Normalize timestamps, remove duplicates, convert formats.

  • Ensure the data is consistent (e.g., all volumes in the same currency, all prices at the same time intervals).
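
A minimal cleaning pass along these lines might look as follows; pandas is an assumption, and the column names are illustrative:

```python
# Illustrative cleaning step: normalize timestamps to UTC, drop duplicates,
# and resample prices onto uniform 5-minute intervals. Columns are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "timestamp": ["2024-01-01T00:00:00Z", "2024-01-01T00:00:00Z", "2024-01-01T00:05:30Z"],
    "price_usd": [42000.0, 42000.0, 42110.5],
})

df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)      # normalize timestamps
df = df.drop_duplicates()                                        # remove duplicates
df = df.set_index("timestamp").resample("5min").last().dropna()  # uniform intervals
print(df)
```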

4.2 Data Storage

Data Lake (S3)

  • Role: Central repository for raw or semi-structured files (CSV, PDF, JSON).

  • Benefit: Scalable, cost-effective storage for large volumes of historical data.

Vector DB

  • Role: Stores embeddings generated by NLP or other ML algorithms, enabling advanced text/similarity searches.

  • Benefit: Useful for searching unstructured text data such as tweets, news articles, whitepaper content.

Relational DB

  • Role: Traditional SQL-based store (e.g., PostgreSQL) for structured data.

  • Benefit: Ensures data integrity, supports complex queries, suitable for logs, user info, trade records.

NoSQL DB

  • Role: Handles flexible or high-throughput data (e.g., MongoDB, DynamoDB).

  • Benefit: Ideal for real-time data ingestion from social feeds or high-velocity logs.
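
To make the Data Lake role concrete, a raw snapshot can be pushed to S3 with boto3; the bucket name and key layout are placeholders, and credentials are assumed to come from the standard AWS environment:

```python
# Upload a raw JSON snapshot to the S3 data lake via boto3.
# Bucket and key are placeholders; credentials come from the AWS environment.
import json
import boto3

s3 = boto3.client("s3")
snapshot = {"source": "coingecko", "payload": {"bitcoin": {"usd": 42000.0}}}

s3.put_object(
    Bucket="cybera-data-lake",                  # hypothetical bucket
    Key="raw/market/2024-01-01/snapshot.json",  # hypothetical key layout
    Body=json.dumps(snapshot).encode("utf-8"),
    ContentType="application/json",
)
```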

4.3 Memory & Retriever

Memory: An in-memory layer (cache) to accelerate repeated queries or hold short-term context.

Retriever: A service or module that fetches relevant data on demand—be it from the Vector DB (for embeddings-based lookups) or from relational/NoSQL tables.
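
A toy version of the Retriever, with the Memory layer as a plain dict cache, might look like this; embed() is a hypothetical stand-in for a real embedding model, and a Vector DB would replace the in-memory list in production:

```python
# Toy retriever: cosine similarity over in-memory embeddings, with a dict
# cache standing in for the Memory layer. embed() is a fake embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % 2**32)  # fake, deterministic per run
    return rng.normal(size=64)

docs = ["BTC whale moved 5k coins", "ETH staking inflows rising", "SEC delays ETF ruling"]
index = [(doc, embed(doc)) for doc in docs]
cache = {}   # Memory: short-term cache for repeated queries

def retrieve(query: str) -> str:
    if query in cache:
        return cache[query]                     # cache hit, skip the search
    q = embed(query)
    cosine = lambda v: np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))
    best_doc, _ = max(index, key=lambda item: cosine(item[1]))
    cache[query] = best_doc
    return best_doc

print(retrieve("large bitcoin transfers"))
```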


5. Core Logic: Triggers & Orchestrator

5.1 Trading Triggers

  • Technical: Crossovers (e.g., EMA, MACD), overbought/oversold signals, volatility spikes.

  • On-chain & Insider: Large whale transfers, upcoming token unlocks, stablecoin flows.

  • Sentiment Triggers: Detect market mood shifts caused by news, rumors, or community discussions.

These triggers can originate from cron-scheduled checks or real-time data events and act as catalysts for the Orchestrator to update or re-run analyses.
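
For example, an EMA-crossover trigger can be expressed in a few lines of pandas; the price series below is synthetic, and the 12/26 periods are conventional defaults rather than Cybera settings:

```python
# EMA crossover trigger: fire when the fast EMA crosses above the slow EMA.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
close = pd.Series(100 + rng.normal(0, 1, 300).cumsum())   # synthetic prices

fast = close.ewm(span=12, adjust=False).mean()
slow = close.ewm(span=26, adjust=False).mean()

crossed_up = (fast > slow) & (fast.shift(1) <= slow.shift(1))
for i in close.index[crossed_up]:
    print(f"bullish crossover at bar {i}, price {close[i]:.2f}")  # notify Orchestrator
```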

5.2 Orchestrator (OpenAI Swarm)

  • Coordination: Distributes tasks among the various agents—technical, fundamental, sentiment, etc.

  • Workflow Control: Ensures each agent receives the right data at the right time.

  • LLM Integration: If an agent’s request or scenario requires more complex reasoning, the Orchestrator calls the LLM.
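
A minimal handoff sketch using OpenAI's experimental Swarm library is shown below; the agent names and instructions are illustrative, and an OpenAI API key is assumed to be configured:

```python
# Orchestration sketch with OpenAI Swarm: the Orchestrator hands market
# questions off to a specialist agent. Instructions here are illustrative.
from swarm import Swarm, Agent

analyst = Agent(
    name="Analyst",
    instructions="Analyze technical indicators and summarize short-term signals.",
)

def transfer_to_analyst():
    """Hand the conversation off to the Analyst agent."""
    return analyst

orchestrator = Agent(
    name="Orchestrator",
    instructions="Route market questions to the right specialist agent.",
    functions=[transfer_to_analyst],
)

client = Swarm()
response = client.run(
    agent=orchestrator,
    messages=[{"role": "user", "content": "Is BTC momentum turning bullish?"}],
)
print(response.messages[-1]["content"])
```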


6. Multi-Agent Trading

6.1 Analyst Agent

  • Focus: Technical indicators (using TA-Lib, for instance) plus on-chain metrics.

  • Output: Short-term signals (momentum, mean reversion, breakout detection).
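
For instance, two short-term signals computed with TA-Lib; the closing prices are synthetic and the indicator periods are the conventional defaults, not Cybera settings:

```python
# Short-term signals with TA-Lib: RSI momentum plus a MACD crossover check.
import numpy as np
import talib

rng = np.random.default_rng(1)
close = 100 + rng.normal(0, 1, 200).cumsum()   # synthetic closing prices

rsi = talib.RSI(close, timeperiod=14)
macd, macd_signal, _ = talib.MACD(close, fastperiod=12, slowperiod=26, signalperiod=9)

if rsi[-1] < 30:
    print("RSI oversold -> possible mean-reversion long")
if macd[-1] > macd_signal[-1] and macd[-2] <= macd_signal[-2]:
    print("MACD bullish crossover -> momentum signal")
```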

6.2 Researcher Agent

Focus:

  • Fundamental analysis, including tokenomics, roadmap, developer activity, and partnerships.

  • Internet Search: Actively gathers additional data using search engines (e.g., Google, Brave Search) to ensure comprehensive insights on project fundamentals, market news, and announcements.

Output:

  • Project Viability Scores: Combines fundamental metrics with external findings to score projects on their long-term potential.

  • Valuation Insights: Flags tokens as undervalued or overvalued based on comprehensive data.

  • Watchlist Creation: Adds promising tokens to curated watchlists for further analysis or trading.

6.3 Sentiment Agent

  • Focus: NLP-based sentiment extraction from social media, news articles.

  • Output: Bullish/bearish sentiment indices for the entire market or for specific coins.
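
One simple way to turn raw posts into such an index is a lexicon scorer like NLTK's VADER, sketched below as an assumption; the production system would likely use a finer-grained model:

```python
# Naive sentiment index over social posts using NLTK's VADER lexicon.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)     # one-time lexicon download
sia = SentimentIntensityAnalyzer()

posts = [
    "BTC breaking out, this rally looks unstoppable!",
    "Another exchange hack... time to get out of crypto.",
    "ETH fees are down, network activity steady.",
]

scores = [sia.polarity_scores(p)["compound"] for p in posts]   # each in [-1, 1]
index = sum(scores) / len(scores)
print(f"sentiment index: {index:+.2f} ({'bullish' if index > 0 else 'bearish'})")
```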

6.4 Trader Agent

  • Focus: Consolidates outputs from Analyst/Researcher/Sentiment.

  • Output: Proposed trades (entry/exit points, position sizing, stop losses, etc.).
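
As a toy illustration of this consolidation, the sketch below weights the three upstream scores into a single proposal; the weights, conviction threshold, and sizing rule are arbitrary placeholders:

```python
# Toy consolidation: weight upstream signals into one trade proposal.
# Weights, conviction threshold, and sizing rule are placeholders.
def propose_trade(technical: float, fundamental: float, sentiment: float,
                  price: float):
    score = 0.5 * technical + 0.3 * fundamental + 0.2 * sentiment  # each in [-1, 1]
    if abs(score) < 0.3:                        # insufficient conviction
        return None
    side = "long" if score > 0 else "short"
    return {
        "side": side,
        "entry": price,
        "stop_loss": price * (0.97 if side == "long" else 1.03),
        "size_pct": round(abs(score) * 10, 1),  # % of portfolio, before risk checks
    }

print(propose_trade(technical=0.6, fundamental=0.4, sentiment=0.2, price=42_000.0))
```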

6.5 Risk Manager Agent

  • Focus: Monitors risk metrics (Value at Risk, maximum drawdown, insider signals).

  • Action: Adjusts positions or vetoes proposals if risk exceeds thresholds.
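
For instance, a historical Value-at-Risk check that can veto a proposal; the 95% confidence level and the 5% risk budget are illustrative, not system parameters:

```python
# Historical VaR veto: reject a proposal if the 95% one-day VaR
# exceeds the risk budget. Return history here is synthetic.
import numpy as np

rng = np.random.default_rng(2)
daily_returns = rng.normal(0.001, 0.04, 500)   # synthetic daily returns

var_95 = -np.percentile(daily_returns, 5)      # 95% one-day historical VaR
MAX_VAR = 0.05                                 # illustrative risk budget

print(f"95% one-day VaR: {var_95:.1%}")
print("veto: exceeds risk threshold" if var_95 > MAX_VAR else "approve: within limits")
```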

6.6 Manager Agent

  • Focus: Final authority on whether to execute or modify the trade.

  • Method: May consult the LLM for deeper scenario analysis or second opinions.

6.7 Chatbot Agent

  • Focus: Human user interactions (“Which coin do you recommend now?”, “What was our last trade result?”).

  • Implementation: Retrieves data from the system for user-friendly Q&A.

6.8 Content Agent

  • Focus: Publishes social or media updates regarding trades, announcements, or educational posts.

  • Implementation: Could automatically tweet results, push announcements to a website, or produce daily newsletters.
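
Posting an update to X could be as simple as the tweepy call sketched below; tweepy's v2 Client is an assumption about tooling, and all credentials are placeholders:

```python
# Sketch: Content Agent posting a trade update to X via tweepy's v2 Client.
# All credentials are placeholders obtained from X's developer portal.
import tweepy

client = tweepy.Client(
    consumer_key="...",
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

client.create_tweet(text="Closed BTC long at +2.3%. Full recap in today's thread.")
```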


7. Performance Evaluator & Model Fine-Tuning

7.1 Performance Evaluator

Data Captured:

  • PnL per trade, overall win rate, Sharpe ratio, drawdown analytics.

  • User feedback, engagement metrics (e.g., chatbot interactions, positive/negative sentiment from user responses).

  • Social media performance (e.g., likes, retweets, comments on posts from the Content Agent on X).

Use:

  • Identifies underperforming strategies (e.g., poor performance in high volatility or lack of user confidence).

  • Measures success of trade signals by analyzing user interactions and sentiment trends.

  • Feeds all insights, including social engagement data, to the Fine-Tuning module for model updates.
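
As an illustration, several of these metrics can be computed directly from a return series; the data below is synthetic, and the sqrt(252) annualization assumes daily returns:

```python
# Evaluator metrics from a return series: win rate, Sharpe ratio, max drawdown.
import numpy as np

rng = np.random.default_rng(3)
returns = rng.normal(0.001, 0.02, 252)          # synthetic daily returns

win_rate = (returns > 0).mean()
sharpe = returns.mean() / returns.std() * np.sqrt(252)   # annualized, daily data

equity = np.cumprod(1 + returns)                # equity curve
drawdown = 1 - equity / np.maximum.accumulate(equity)

print(f"win rate: {win_rate:.1%}, Sharpe: {sharpe:.2f}, max drawdown: {drawdown.max():.1%}")
```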

7.2 Model Fine-Tuning

Automation:

  • Periodically, or in response to events, retrains or adjusts the system’s ML models and LLMs.

  • Incorporates user feedback, interaction metrics, and social engagement data into retraining pipelines.

  • Continuously evaluates whether user confidence in the trading strategies is improving over time.

Deployment:

  • If a new version outperforms the current one in backtests, real trades, or user feedback metrics, it gets deployed to the Multi-Agent subsystem.

  • Ensures that both technical accuracy and user engagement are optimized for sustained performance.
