Ingest your own data
allowing the LLM to act and respond based on your own data (documents, codebase, etc.); see the sketches after this list
Autonomous agents
delegating tasks (defined on the fly) to the LLM, which will strive to complete them
Prompt templates
helping you achieve the highest possible quality of LLM responses
Memory
providing the context (including your current and past conversations) to the LLM
Structured outputs
for receiving responses from the LLM with a desired structure as Java POJOs
"AI Services"
for declaratively defining complex AI behavior behind a simple API
Chains
to reduce the need for extensive boilerplate code in common use-cases
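Ingesting your own data boils down to splitting documents into segments, embedding them, and storing the embeddings for later retrieval. Below is a minimal sketch using LangChain4j's EmbeddingStoreIngestor and the in-memory embedding store; the OpenAI embedding model, the sample text, and the exact splitter parameters are assumptions, and class names may differ slightly between LangChain4j versions.

```java
import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.document.splitter.DocumentSplitters;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.openai.OpenAiEmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

public class IngestExample {

    public static void main(String[] args) {
        // Placeholder content; in practice a document loader
        // (e.g. FileSystemDocumentLoader) would read your files or codebase
        Document document = Document.from("Our service uses hexagonal architecture ...");

        // Any EmbeddingModel implementation works; OpenAI is just one option (assumed here)
        EmbeddingModel embeddingModel = OpenAiEmbeddingModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        InMemoryEmbeddingStore<TextSegment> embeddingStore = new InMemoryEmbeddingStore<>();

        // Split the document into segments, embed each segment, and store the embeddings
        EmbeddingStoreIngestor ingestor = EmbeddingStoreIngestor.builder()
                .documentSplitter(DocumentSplitters.recursive(300, 30))
                .embeddingModel(embeddingModel)
                .embeddingStore(embeddingStore)
                .build();

        ingestor.ingest(document);
        // The store can now back retrieval-augmented prompts for the LLM
    }
}
```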
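For autonomous agents, the LLM is given tools (plain Java methods) and decides on the fly when to call them to complete a task. A minimal sketch follows; the Calculator tool, the Agent interface, and the OpenAI model setup are hypothetical, and builder method names may vary across LangChain4j releases.

```java
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

public class AgentExample {

    // Hypothetical tool class; the LLM decides when to call these methods
    static class Calculator {
        @Tool("Adds two numbers")
        double add(double a, double b) {
            return a + b;
        }
    }

    // Hypothetical agent interface
    interface Agent {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        Agent agent = AiServices.builder(Agent.class)
                .chatLanguageModel(model)
                .tools(new Calculator())
                .chatMemory(MessageWindowChatMemory.withMaxMessages(10))
                .build();

        // The model plans the steps itself: it calls add(...) and then phrases the answer
        System.out.println(agent.chat("What is 2.5 plus 3.7?"));
    }
}
```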
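Prompt templates keep the prompt text and its variables separate, which makes it easier to iterate toward higher-quality responses. A minimal sketch with LangChain4j's PromptTemplate follows; the template wording and variable names are assumptions.

```java
import java.util.Map;

import dev.langchain4j.model.input.Prompt;
import dev.langchain4j.model.input.PromptTemplate;

public class PromptTemplateExample {

    public static void main(String[] args) {
        // {{...}} placeholders are filled in when the template is applied
        PromptTemplate template = PromptTemplate.from(
                "Explain {{concept}} to a {{audience}} in at most three sentences.");

        Prompt prompt = template.apply(Map.of(
                "concept", "dependency injection",
                "audience", "junior developer"));

        // prompt.text() is the filled-in prompt, ready to be sent to a chat model
        System.out.println(prompt.text());
    }
}
```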
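Memory is what carries the current and past conversation to the LLM on each call. The sketch below uses a message-window memory that keeps only the most recent messages; the sample messages are assumptions, and in practice the memory is usually attached to an AI Service or chain rather than used directly.

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;

public class MemoryExample {

    public static void main(String[] args) {
        // Keep only the 10 most recent messages; older ones are evicted
        ChatMemory memory = MessageWindowChatMemory.withMaxMessages(10);

        memory.add(UserMessage.from("My name is Klaus."));
        memory.add(AiMessage.from("Nice to meet you, Klaus!"));

        // These messages are the conversational context the LLM will see
        memory.messages().forEach(System.out::println);
    }
}
```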
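Structured outputs let the LLM's answer come back as a Java POJO instead of raw text. A minimal sketch follows, using an AI Service interface whose method returns a POJO; the Person class, the extractor interface, the example text, and the OpenAI model setup are all assumptions.

```java
import java.time.LocalDate;

import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.UserMessage;

public class StructuredOutputExample {

    // Hypothetical POJO the response is mapped into
    static class Person {
        String firstName;
        String lastName;
        LocalDate birthDate;

        @Override
        public String toString() {
            return firstName + " " + lastName + ", born " + birthDate;
        }
    }

    // Hypothetical extractor interface; {{it}} is replaced with the method argument
    interface PersonExtractor {
        @UserMessage("Extract information about a person from the following text: {{it}}")
        Person extractPerson(String text);
    }

    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        PersonExtractor extractor = AiServices.create(PersonExtractor.class, model);

        Person person = extractor.extractPerson(
                "A child named John Doe was born on the 1st of July 1968.");

        System.out.println(person); // e.g. John Doe, born 1968-07-01
    }
}
```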
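With AI Services, the desired AI behavior is declared as a plain Java interface and LangChain4j provides the implementation behind it. A minimal sketch follows; the Assistant interface, the system message, and the OpenAI model setup are assumptions.

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;

public class AiServiceExample {

    // The desired AI behavior is declared as an ordinary interface
    interface Assistant {
        @SystemMessage("You are a concise, polite assistant.")
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        // LangChain4j generates the implementation behind the simple API
        Assistant assistant = AiServices.create(Assistant.class, model);

        System.out.println(assistant.chat("What is the capital of Germany?"));
    }
}
```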
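Chains wire together the common pieces (model, memory, retrieval) so you do not have to write that plumbing yourself. A minimal sketch with a conversational chain follows; the OpenAI model setup and the sample messages are assumptions, and chain classes may be superseded by AI Services in newer LangChain4j versions.

```java
import dev.langchain4j.chain.ConversationalChain;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class ChainExample {

    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        // The chain combines the model and the memory, so every call
        // automatically carries the conversation history
        ConversationalChain chain = ConversationalChain.builder()
                .chatLanguageModel(model)
                .chatMemory(MessageWindowChatMemory.withMaxMessages(10))
                .build();

        System.out.println(chain.execute("Hello, my name is Klaus."));
        System.out.println(chain.execute("What is my name?")); // remembers "Klaus"
    }
}
```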