llm.chat
The `llm.chat:` automation command interfaces with Large Language Model (LLM) providers for single-turn chat completions without transcripts, memory, or tools.
(Added in 11.1.3)
Authentication and API calls are automatically handled by the command. You simply provide a `system_prompt:` with instructions, one or more `messages:`, and the LLM provider configuration. The final message must be a user turn.

This is useful for text classification, summarization, and other single-turn AI tasks.
```
start:
  llm.chat:
    output: results
    inputs:
      llm:
        anthropic:
          model: claude-3-5-haiku-latest
          authentication: cerb:connected_account:anthropic
      system_prompt@text:
        You are a helpful AI assistant that classifies customer messages
        as positive, negative, or neutral. Return only the classification.
      messages:
        0:
          role: user
          content: Thank you for the quick response! This solved my problem perfectly.
    on_success:
      return:
        classification@key: results:messages:0:content
```
Syntax
inputs:
| Key | Type | Notes |
|---|---|---|
| `llm:` | list | The LLM provider and model to use. |
| `messages:` | list | The messages to send. |
| `system_prompt:` | text | The optional instructions for the LLM. |
llm:
The LLM provider is one of:
```
llm:
  anthropic:
    model: claude-3-5-haiku-latest
    authentication: cerb:connected_account:anthropic
  aws_bedrock:
    model: anthropic.claude-3-5-haiku-20241022-v1:0
    api_endpoint_url: https://bedrock-runtime.us-east-1.amazonaws.com
    authentication: cerb:connected_account:aws
  docker:
    api_endpoint_url: http://model-runner.docker.internal/
    model: ai/llama3.2
  gemini:
    model: gemini-2.0-flash
    authentication: cerb:connected_account:gemini
  groq:
    model: gemma2-9b-it
    authentication: cerb:connected_account:groq
  huggingface:
    model: google/gemma-2-2b-it
    authentication: cerb:connected_account:huggingface
  ollama:
    api_endpoint_url: http://host.docker.internal:11434
    model: llama3.2
  openai:
    model: gpt-4o
    authentication: cerb:connected_account:openai
  together:
    model: meta-llama/Llama-3.3-70B-Instruct-Turbo
    authentication: cerb:connected_account:together-ai
```
The `model:` key is the name of the model to use. This must be a chat model.

The `authentication:` key is a connected account in URI format (e.g. `cerb:connected_account:name`) for API authentication. This may be omitted for local models like Ollama.

The optional `api_endpoint_url:` key overrides the default endpoint. For instance, this can be used with the `openai:` provider for any compatible API (e.g. SambaNova), or a locally hosted Ollama server.
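As a sketch of that override, the `openai:` provider could point at a locally hosted OpenAI-compatible server (the endpoint URL, model name, and connected account name below are placeholders, not defaults):

```
llm:
  openai:
    model: llama3.2
    api_endpoint_url: http://host.docker.internal:11434/v1
    authentication: cerb:connected_account:my-compatible-api
```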
system_prompt:
```
system_prompt@text:
  You are a helpful AI assistant that classifies customer support messages.
  Classify each message as positive, negative, or neutral.
  Return only the classification word.
```
messages:
The messages to send to the LLM. Each message has `role:` and `content:` keys. The final message must have a `role:` of `user`.
For a simple single-turn completion:
```
messages:
  0:
    role: user
    content: Thank you so much for your help! The issue is now resolved.
```
For few-shot prompting with examples:
```
messages:
  0:
    role: user
    content: "Great service! Very helpful staff."
  1:
    role: assistant
    content: positive
  2:
    role: user
    content: "This product is terrible and doesn't work."
  3:
    role: assistant
    content: negative
  4:
    role: user
    content: "Thank you for the quick response! This solved my problem perfectly."
```
output:
The key specified in `output:` is set to a dictionary with the following structure:

| Key | Description |
|---|---|
| `messages` | An array of response messages from the LLM. |

Each message in the `messages` array has the following structure:

| Key | Description |
|---|---|
| `content` | The text response from the LLM. |
| `type` | Currently only `text` is supported. |
```
output:
  messages:
    0:
      type: text
      content: positive
```
Examples
Text classification
```
start:
  llm.chat:
    output: classification_result
    inputs:
      llm:
        anthropic:
          model: claude-3-5-haiku-latest
          authentication: cerb:connected_account:anthropic
      system_prompt@text:
        Classify customer messages as: positive, negative, or neutral.
        Return only the classification.
      messages:
        0:
          role: user
          content: {{ticket_latest_message_content}}
    on_success:
      return:
        sentiment@key: classification_result:messages:0:content
```
Text summarization
```
start:
  llm.chat:
    output: summary_result
    inputs:
      llm:
        openai:
          model: gpt-4o-mini
          authentication: cerb:connected_account:openai
      system_prompt@text:
        Summarize the following text in 2-3 sentences.
        Focus on the key points and main outcomes.
      messages:
        0:
          role: user
          content@text:
            Please summarize this conversation:
            {{ticket_conversation_history}}
    on_success:
      return:
        summary@key: summary_result:messages:0:content
```
Content generation
```
start:
  llm.chat:
    output: response_result
    inputs:
      llm:
        gemini:
          model: gemini-2.0-flash
          authentication: cerb:connected_account:gemini
      system_prompt@text:
        You are a helpful customer support agent. Generate a professional
        and friendly response to the customer's question. Keep it concise
        and actionable.
      messages:
        0:
          role: user
          content: Customer asked: "How do I reset my password?"
    on_success:
      return:
        suggested_response@key: response_result:messages:0:content
```