Security

Ensure LLM chat apps don't share more data than they should

How does it work?

Agent Cloud comes with a number of security considerations out of the box.

For ultra-secure environments, we recommend self-hosting Agent Cloud on your own compute and serving agents via your own privately hosted LLM.

Our repository is designed so that it does not require any third-party cloud services to operate.

Out of the box, we offer users the ability to:

  • Self-host the entire repository on your own compute
  • Use self-hosted LLM endpoints so that LLM providers can't train on your data
  • Organise users into teams, ensuring each user can only access data provided to their team
  • Grant data access per AI agent, ensuring agents can only access the data they are granted
  • Grant function access per AI agent, ensuring agents can only call the custom functions they are granted
  • Grant LLM access per AI agent, ensuring agents can only use the models they are granted (see the access-check sketch below)
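Under the hood, the per-agent controls above amount to checking an agent's grants before any data source, function, or model is used. Agent Cloud's actual schema isn't shown on this page, so the following is a minimal illustrative sketch: the `Agent` data model and `check_access` helper are hypothetical names, not the real implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Hypothetical data model: Agent Cloud's real schema will differ.
    name: str
    allowed_datasources: set[str] = field(default_factory=set)
    allowed_functions: set[str] = field(default_factory=set)
    allowed_models: set[str] = field(default_factory=set)

def check_access(agent: Agent, kind: str, resource: str) -> None:
    """Raise unless the agent was explicitly granted this resource."""
    grants = {
        "datasource": agent.allowed_datasources,
        "function": agent.allowed_functions,
        "model": agent.allowed_models,
    }[kind]
    if resource not in grants:
        raise PermissionError(f"{agent.name} may not use {kind} {resource!r}")

support_bot = Agent(
    name="support-bot",
    allowed_datasources={"confluence-helpdesk"},
    allowed_models={"gpt-4o"},
)
check_access(support_bot, "datasource", "confluence-helpdesk")  # ok
check_access(support_bot, "datasource", "hr-records")           # raises PermissionError
```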

Get started with Security

  • Grant data access at an agent level, so AI agents can't access excessive data
  • Connect Agent Cloud to your own LLM to eliminate the risk of training by AI providers (see the client sketch below)
  • Add users with role-based access controls
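Connecting to your own LLM typically means pointing an OpenAI-compatible client at a privately hosted endpoint. Agent Cloud's configuration screens aren't reproduced here; this sketch assumes a self-hosted server (for example vLLM or Ollama) exposing the OpenAI-compatible API, and the base URL and model name are placeholders for illustration.

```python
from openai import OpenAI

# Point the standard OpenAI client at a privately hosted,
# OpenAI-compatible endpoint instead of api.openai.com.
# base_url and model are assumptions, not Agent Cloud defaults.
client = OpenAI(
    base_url="http://llm.internal.example.com:8000/v1",
    api_key="not-needed-for-local",  # many self-hosted servers ignore this
)

response = client.chat.completions.create(
    model="llama-3-8b-instruct",
    messages=[{"role": "user", "content": "Summarise our security policy."}],
)
print(response.choices[0].message.content)
```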

The end-to-end RAG pipeline

Select your connector

Use our collection of data sources to sync data from other systems like Confluence, or upload your own PDF, DOCX, TXT, or CSV files.
When connecting to databases (Postgres, Snowflake, BigQuery), you can select the tables and even the columns to ingest.
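Column-level selection means a sync only ever pulls the fields you whitelist. Agent Cloud's connector UI handles this for you; as a rough mental model only, the selection could be described like the hypothetical configuration below (the names and structure are illustrative, not Agent Cloud's actual schema).

```python
# Hypothetical connector selection: sync two Postgres tables,
# restricted to the listed columns only.
postgres_source = {
    "connector": "postgres",
    "host": "db.internal.example.com",
    "database": "crm",
    "streams": [
        {"table": "customers", "columns": ["id", "name", "plan"]},
        {"table": "tickets",   "columns": ["id", "subject", "status"]},
    ],
    # Columns left unselected (e.g. credit_card_number) are never ingested.
}
```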

Prep your data

For files, you can provide instructions on how to split and chunk your data. Use OpenAI's latest text-embedding-3 models for embedding, or select from open-source models like BGE-base.
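To make the split-then-embed step concrete, here is a minimal sketch using the OpenAI Python SDK with text-embedding-3-small. The fixed-size chunker and the file name are simplifications for illustration; the platform's splitting options are richer.

```python
from openai import OpenAI

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Naive fixed-size character chunking with overlap; the simplest
    possible stand-in for the platform's configurable splitting."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

openai_client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

chunks = chunk(open("handbook.txt").read())  # hypothetical uploaded file
embeddings = openai_client.embeddings.create(
    model="text-embedding-3-small",
    input=chunks,
).data  # one embedding vector per chunk, in input order
```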

Vector store your data

Once your data has been embedded, the platform stores it in a vector database.
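This page doesn't name the specific vector database, so the sketch below uses Qdrant purely as an example of what storing the embedded chunks looks like, continuing from the embedding snippet above; the collection name and payload shape are assumptions.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

qdrant = QdrantClient(url="http://localhost:6333")

# text-embedding-3-small produces 1536-dimensional vectors.
qdrant.create_collection(
    collection_name="handbook",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)

qdrant.upsert(
    collection_name="handbook",
    points=[
        PointStruct(id=i, vector=e.embedding, payload={"text": chunks[i]})
        for i, e in enumerate(embeddings)
    ],
)
```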

Keep data fresh

Select how frequently you would like to sync data from the source: manually, on a schedule, or via a cron expression. Users can then query fresh data and see how recently the source was updated.
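If you schedule with a cron expression, the expression determines each upcoming sync time. As an illustration of how such an expression is interpreted (using the croniter library, not Agent Cloud's internal scheduler):

```python
from datetime import datetime
from croniter import croniter

# "Every day at 02:00" as a cron expression, starting from noon on Jan 1.
schedule = croniter("0 2 * * *", datetime(2024, 1, 1, 12, 0))

print(schedule.get_next(datetime))  # 2024-01-02 02:00:00
print(schedule.get_next(datetime))  # 2024-01-03 02:00:00
```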

Start chatting with your data!

Now that data is synced, simply create an agent with your choice of LLM and start a session to talk to your data.
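Under the hood, a chat session like this typically retrieves the most relevant chunks from the vector store and hands them to the LLM as context. Here is a minimal retrieval-augmented sketch reusing the `openai_client` and `qdrant` objects from the snippets above; the prompt wording is an assumption, not Agent Cloud's actual prompt.

```python
question = "What is our refund policy?"

# Embed the question and fetch the closest chunks from the vector store.
q_vec = openai_client.embeddings.create(
    model="text-embedding-3-small",
    input=[question],
).data[0].embedding

hits = qdrant.search(collection_name="handbook", query_vector=q_vec, limit=3)
context = "\n\n".join(hit.payload["text"] for hit in hits)

# Ask the LLM to answer using only the retrieved context.
answer = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": f"Answer using this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```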