Agent Cloud enables you to split/chunk, embed, vector store and sync data from The Guardian API, providing a production RAG pipeline.
AGPL 3.0 - it is a copyleft license, which can be found on our GitHub page.
If running via Docker, we strongly recommend a machine with at least 16 GB of RAM.
E.g. a base MacBook Air M1/M2 with 8 GB RAM will not suffice, as Airbyte requires more resources.
If you are also running Ollama or LM Studio locally, you will need the additional RAM if you want agents to run inference against a local LLM.
If you are running without Docker, 8 GB of RAM may suffice, but it is harder to get started.
Currently we have a Docker install.sh script for Mac/Linux users. For Windows we recommend using WSL.
You can access the Airbyte instance directly by going to http://localhost:8000 with the username airbyte and password password.
The vector DB is Qdrant, and it is located at http://localhost:6333/dashboard#/collections. For more information about Qdrant, see their docs here.
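As a quick sanity check, you can also inspect the local Qdrant instance from Python with the qdrant-client package. This is a minimal sketch; the collections you see will depend on which datasources you have synced:

```python
from qdrant_client import QdrantClient

# Connect to the local Qdrant instance started by the Docker setup
client = QdrantClient(url="http://localhost:6333")

# List the collections created so far and how many points each holds
for collection in client.get_collections().collections:
    info = client.get_collection(collection.name)
    print(collection.name, info.points_count)
```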
If you want an initial idea of how your code would fit into the repo, please raise a feature request. Otherwise, if you're very keen, write the code and raise a pull request.
You can also chat with us on Discord.
We aim to be the leading open source platform enabling companies to deploy private and secure AI apps on their infrastructure.
Yes. We support any local LLM which has an OpenAI-compatible endpoint (i.e. it responds the same way the OpenAI API does).
This means you can use LM Studio or, more recently, Ollama with our app for local inference.
For cloud inference we currently support OpenAI.
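To illustrate what "OpenAI compatible" means, the standard OpenAI Python client can talk to a local server simply by changing the base URL. This is a sketch, not Agent Cloud code; the base URLs are the defaults for Ollama and LM Studio, and the model name is whatever you have pulled locally:

```python
from openai import OpenAI

# Ollama exposes an OpenAI-compatible endpoint at /v1 by default;
# LM Studio's local server defaults to http://localhost:1234/v1 instead.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="llama3",  # assumption: a model you have already pulled in Ollama
    messages=[{"role": "user", "content": "Summarise RAG in one sentence."}],
)
print(response.choices[0].message.content)
```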
Currently we support OpenAI and Azure OpenAI. We have a long-term vision to support all cloud providers by leveraging the litellm library within our Python backend. Contributors welcome!
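For reference, litellm exposes a single completion call that routes to different providers by model name, which is roughly how multi-provider support works. This is a sketch rather than our backend code; the Azure deployment name and endpoint are placeholders, and credentials are read from environment variables (OPENAI_API_KEY, AZURE_API_KEY):

```python
from litellm import completion

messages = [{"role": "user", "content": "Hello from Agent Cloud"}]

# OpenAI: reads OPENAI_API_KEY from the environment
openai_reply = completion(model="gpt-4o-mini", messages=messages)

# Azure OpenAI: the "azure/" prefix routes to Azure using your deployment name
azure_reply = completion(
    model="azure/my-gpt4-deployment",                  # placeholder deployment name
    api_base="https://my-resource.openai.azure.com",   # placeholder endpoint
    api_version="2024-02-01",
    messages=messages,
)
print(openai_reply.choices[0].message.content)
```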
Yes. Our app can embed locally via fastembed.
You don't need to do anything special to get this working: just go to the Models screen > Fastembed > select a model.
For cloud embedding inference, we support OpenAI text-ada models for now.
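Under the hood, fastembed runs a small ONNX embedding model on your own machine, so no API key is needed. A minimal sketch of local embedding (the model name here is one of fastembed's supported defaults; your selection on the Models screen may differ):

```python
from fastembed import TextEmbedding

# Downloads and runs a small ONNX model locally; no API key required
model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")

chunks = ["Agent Cloud syncs data into Qdrant.", "Embeddings are computed locally."]
vectors = list(model.embed(chunks))
print(len(vectors), len(vectors[0]))  # 2 vectors, 384 dimensions each for this model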
For files
At present we support two methods:
Basic: character splitting (e.g. on "\n") - see the sketch below.
Advanced: semantic chunking, which leverages an embedding model to group sentences that are semantically similar. You can read about semantic chunking here.
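To make the basic method concrete, character splitting just breaks the text on a separator and packs the pieces into fixed-size chunks. This is an illustrative sketch; the separator and chunk size are arbitrary, not the values Agent Cloud uses:

```python
def basic_character_split(text: str, separator: str = "\n", chunk_size: int = 1000) -> list[str]:
    """Greedily pack separator-delimited pieces into chunks of at most chunk_size characters."""
    chunks, current = [], ""
    for piece in text.split(separator):
        if current and len(current) + len(separator) + len(piece) > chunk_size:
            chunks.append(current)
            current = piece
        else:
            current = f"{current}{separator}{piece}" if current else piece
    if current:
        chunks.append(current)
    return chunks
```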
For data sources (not files)
At present, data sources other than files will automatically chunk by the message which comes through RabbitMQ. For example, when leveraging BigQuery as the data source, a message is equal to one row, so it will chunk row by row. Longer term, we are adding support for users to select fields to embed and fields to store as metadata. For now, any field selected will be embedded AND used as metadata.
We intend to expose an API endpoint for vector upsert to enable more custom chunking strategies; alternatively, feel free to contribute to the vector-proxy app in the repo if you are comfortable coding in Rust!
Yes, you can select both the table and the fields that are being synced. This differs for structured vs unstructured data and will conform to the Airbyte stream settings for the source.
Yes, we support selecting the fields to embed and the fields to store as metadata. For files, only the document text is embedded; any other document metadata available in the file (e.g. page number, document name) will be extracted and stored as metadata where possible.
Yes, you can select basic or advanced scheduling, giving you hourly, daily or cron expression syncs. Note: for cloud this depends on your plan.
To edit, go to:
Data Sources > Select Data Source > Select Schedule Tab [edit schedule]
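For example, a standard five-field cron expression such as 0 2 * * * describes a sync once per day at 02:00; check the schedule editor for the exact cron dialect it accepts.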
For local file uploads we support:
PDF, DOCX, TXT, CSV, XLSX
Check out our pricing page for full details.
A single place where companies can build and deploy AI apps. This includes single-agent chat apps, multi-agent chat apps and knowledge retrieval apps. The platform enables developers and engineers to build apps both for themselves and for the teams they work with in sales, HR, operations, etc.
If you want it to be truly private, don't use our managed cloud product (or anyone else's, for that matter). Instead, we recommend deploying our open source app to your own infrastructure, running an LLM on-prem or in your own cloud, and signing an enterprise self-managed license.