Run Agent Cloud on your own

Get started with Community and upgrade for enterprise-ready features and professional support.
Free

Community

$0
per month
GitHub install guide
Main features
Connect to local LLM and Embedding models
Connect 300+ data sources
AGPL 3.0 License
Community Support (Join our Discord)
For enterprise

Enterprise

Custom License
Talk to us
All Community features plus
Professional support and SLAs
Custom SSO
Create multiple teams + user permissions
External Secrets Management
External file storage (AWS/GCP/Azure)
Infrastructure-as-code (Pulumi) deployment
Kubernetes scaling
Upload files to build RAG chat apps + multi agent apps

Free

$0
per month
Join waitlist
Main features
RAG chat apps with files (chunk, embed, store, retrieve)
Multi agent apps
BYO OpenAI key
Unlimited chats
Build end-to-end RAG chat apps + multi agent apps

Pro

$99 per month
Join waitlist
All Free features plus
RAG over 300+ data sources (chunk, embed, store, retrieve)
Sync hourly
Add team members
Starts with 500 MB of indexed storage; talk to support for upgrades
For companies with custom requirements

Enterprise

Custom
Talk to us
All Pro features plus
Create multiple teams
Custom SSO
Professional support + SLAs
Custom file/vector storage space
Custom key management
Custom device clients for iOS/Android/macOS/Windows/Linux

FAQs

Got questions? We have some answers below
What is your software license?

AGPL 3.0. It is a copyleft license; the full text can be found on our GitHub page.

What hardware requirements do I need to run Agent Cloud locally?

If running via Docker, we strongly recommend a machine with at least 16 GB of RAM.

E.g. a base MacBook Air M1/M2 with 8 GB RAM will not suffice, as Airbyte requires more resources.

If you are also running Ollama or LM Studio locally, you will need additional RAM for agents to run inference against a local LLM.

If you are running without Docker, 8 GB of RAM may suffice, but it is harder to get started.

What local OS do you support?

Currently we have a Docker install.sh script for Mac/Linux users. For Windows we recommend using WSL.

How can I access the ELT platform (Airbyte) locally?

You can access the Airbyte instance directly at http://localhost:8000 with the username airbyte and password password.
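As a quick sanity check, you can hit Airbyte's health endpoint with those default credentials. A minimal stdlib-only sketch (the /api/v1/health path is Airbyte's standard health check, assumed here for a default local install):

```python
import base64
import json
import urllib.request

def basic_auth_header(user: str, password: str) -> dict:
    """Build an HTTP Basic auth header from a username/password pair."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def airbyte_health(base_url: str = "http://localhost:8000") -> dict:
    """Call Airbyte's health endpoint; requires a running local instance."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/health",
        headers=basic_auth_header("airbyte", "password"),
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# airbyte_health() returns the parsed JSON status once Airbyte is up.
```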

How can I access the Vector DB locally?

The vector DB is Qdrant, and its dashboard is located at http://localhost:6333/dashboard#/collections. For more information about Qdrant, see their docs here.
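Qdrant also exposes a REST API on the same port, so listing your collections is one GET away. A stdlib-only sketch (assumes a default local Qdrant with no API key):

```python
import json
import urllib.request

def collection_names(payload: dict) -> list:
    """Extract collection names from Qdrant's GET /collections response."""
    return [c["name"] for c in payload["result"]["collections"]]

def list_collections(base_url: str = "http://localhost:6333") -> list:
    """Query a running local Qdrant instance for its collection names."""
    with urllib.request.urlopen(f"{base_url}/collections") as resp:
        return collection_names(json.load(resp))
```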

How can I contribute to the repository?

If you want an initial idea of how your code would fit into the repo, please raise a feature request. Otherwise, if you're very keen, write the code and raise a pull request.
You can also chat with us on Discord.

What are your goals?

We aim to be the leading open source platform enabling companies to deploy private and secure AI apps on their infrastructure.

Can I use a local Large Language Model?

Yes. We support any local LLM that exposes an OpenAI-compatible endpoint (i.e. it responds the same way OpenAI does).
This means you can use LM Studio or, more recently, Ollama with our app for local inference.

For cloud inference we currently support OpenAI.
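"OpenAI-compatible" just means the server accepts the same /v1/chat/completions request shape. A stdlib-only sketch of that request (the base URL http://localhost:11434/v1 is Ollama's default and the model name is illustrative; swap in LM Studio's port and model if you use that instead):

```python
import json
import urllib.request

def chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def local_chat(prompt: str, model: str = "llama3",
               base_url: str = "http://localhost:11434/v1") -> str:
    """POST to a local OpenAI-compatible server (e.g. Ollama) and return the reply."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```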

Which Cloud models do you support?

Currently we support OpenAI and Azure OpenAI. Our long-term vision is to support all cloud providers by leveraging the litellm library within our Python backend. Contributors welcome!
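litellm normalizes provider differences behind a single completion() call, routing by a provider prefix on the model string. A hedged sketch of that call shape (requires pip install litellm plus provider credentials in the environment; the helper names are illustrative, not our backend's actual code):

```python
def provider_model(provider: str, model: str) -> str:
    """litellm routes by prefix, e.g. 'azure/my-gpt4-deployment'."""
    return f"{provider}/{model}" if provider else model

def ask(model: str, prompt: str) -> str:
    """One call shape for any provider litellm supports.

    The import is kept lazy because it needs `pip install litellm`
    and provider credentials (e.g. AZURE_API_KEY) to actually run.
    """
    from litellm import completion
    resp = completion(model=model, messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

# ask(provider_model("azure", "my-gpt4-deployment"), "Hello") routes to Azure OpenAI.
```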

Can I use local Embedding models?

Yes. Our app can embed locally via fastembed.
You don't need to do anything to get this working: just go to the Models screen > Fastembed > select a model.

For cloud embedding inference, we support OpenAI's text-embedding-ada models for now.
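Whichever embedding model produces the vectors, retrieval then boils down to ranking stored vectors by similarity to the query vector. A plain-Python cosine similarity sketch of that core step (the top_k helper is illustrative, not our retrieval code):

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query: list, stored: dict, k: int = 2) -> list:
    """Rank stored vectors (name -> vector) by similarity to the query."""
    ranked = sorted(stored,
                    key=lambda name: cosine_similarity(query, stored[name]),
                    reverse=True)
    return ranked[:k]
```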

What splitting/chunking methods do you support?

For files
At present we support two methods:
Basic: Character splitting (e.g. on \n)
Advanced: Semantic chunking, which leverages an embedding model to group sentences that are semantically similar. You can read about semantic chunking here.
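The basic method is just string work (semantic chunking additionally needs an embedding model). A minimal sketch; the fixed-size variant with overlap is an illustrative extra, not necessarily our exact implementation:

```python
def split_by_newline(text: str) -> list:
    """Basic chunking: split on newlines, dropping empty chunks."""
    return [line.strip() for line in text.split("\n") if line.strip()]

def split_by_size(text: str, size: int = 500, overlap: int = 50) -> list:
    """Fixed-size character chunks, with overlap between neighbours."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]
```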

For data sources (not files)
At present, data sources other than files are automatically chunked by the message that comes through RabbitMQ. For example, when leveraging BigQuery as the data source, one message equals one row, so it chunks row by row. We are adding support long term to enable users to select fields to embed and fields to store as metadata. For now, any field selected will be embedded AND used as metadata.

We intend to enable an API endpoint for vector upserts to allow more custom chunking strategies. Alternatively, feel free to contribute to the vector-proxy app in the repo if you are comfortable coding in Rust!
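Row-by-row chunking means each queue message becomes one point in the vector store. A simplified sketch of mapping a BigQuery-style row to embed-text plus metadata under the current "selected fields are embedded AND stored" behaviour (the field names are hypothetical):

```python
def row_to_point(row: dict, selected: list) -> tuple:
    """Map one data-source row to (text_to_embed, metadata).

    Under current behaviour, every selected field is both embedded
    and kept as metadata.
    """
    chosen = {k: row[k] for k in selected if k in row}
    text = " ".join(str(v) for v in chosen.values())
    return text, chosen

# Example: one BigQuery row becomes one chunk.
text, meta = row_to_point(
    {"title": "Q3 report", "body": "Revenue up", "id": 7},
    selected=["title", "body"],
)
```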

Can I control which fields get synced when I sync a data source?

Yes, you can select both the table and the fields that are synced. This differs for structured vs. unstructured data and conforms to the Airbyte stream settings for the source.

Can I control which fields get embedded and which fields get stored as metadata?

We are adding support long term to enable users to select fields to embed and fields to store as metadata. For now, any field selected when connecting a data source will be embedded AND used as metadata. For files, only the document text is embedded; any other metadata available in the file (e.g. page number, document name) is extracted where possible and stored as metadata.

Can I control the sync frequency?

Yes, you can select basic or advanced scheduling, giving you hourly, daily, or cron-expression syncs.

To edit, go to:
Data Sources > Select Data Source > Schedule tab > Edit Schedule

What file upload formats are supported?

For local file uploads we support:
PDF, DOCX, TXT, CSV, XLSX
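A minimal sketch of the kind of extension allowlist check an upload handler might perform (the function name is illustrative, not our actual code):

```python
from pathlib import Path

# Supported local upload formats.
SUPPORTED = {".pdf", ".docx", ".txt", ".csv", ".xlsx"}

def is_supported_upload(filename: str) -> bool:
    """Check a filename's extension against the supported upload formats."""
    return Path(filename).suffix.lower() in SUPPORTED
```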

How do you price your platform?

Check out our pricing page for current plans and prices.

What is agent cloud?

A single place where companies can build and deploy AI apps, including single-agent chat apps, multi-agent chat apps, and knowledge retrieval apps. The platform enables developers and engineers to build apps both for themselves and for the teams they work with in sales, HR, operations, etc.

How can my data retrieval be truly private?

If you want it to be truly private, don't use our managed cloud product (or anyone else's, for that matter). Instead, we recommend deploying our open source app to your own infrastructure and signing an enterprise self-managed license.