AI Agents

An AI Agent is the central composition object in Turing ES's GenAI system. It combines a specific LLM Instance, a selected set of tools, and a set of MCP Servers into a single, named, deployable assistant.

Each agent has its own personality, capability set, and visual identity. In the Chat interface, every configured agent appears as a separate tab — users choose which agent to interact with based on its name and description.

AI Agents are configured in Administration → AI Agents.


Agent Composition

An AI Agent is built from four layers: Identity, Brain, Capabilities, and Output.

(Diagram: AI Agent — Composition)


Configuration Form

The agent form is organized into four tabs accessible from the agent detail page.

Settings

| Field | Required | Description |
| --- | --- | --- |
| Name | Yes | Display name shown as the tab label in the Chat interface |
| Avatar | – | Profile image representing the agent in the chat UI; supports upload and removal |
| Description | – | Brief explanation of the agent's purpose and specialization |
| System Prompt | – | Instructions sent as a system message before every conversation. Defines persona, behaviour, and language rules. |
| Enabled | – | Toggle to activate or deactivate the agent. Disabled agents do not appear in the Chat interface. |
Default system prompt

If the system prompt is left blank, the agent uses a built-in default:

"You are an AI assistant. Answer the user's questions using the tools available to you. If you have access to MCP server tools, use them when relevant to fulfill the user's request. If the user asks in a specific language, respond in that same language."

LLM

Select one or more LLM Instances that this agent can use for inference. The list shows each instance's title, description, vendor, and model name. At chat time, the user (or frontend) specifies which instance to use from the agent's allowed set.

See LLM Instances for configuration details.

Tools

Select which of the 27 native tools (across 7 categories) are available to this agent. Tools are displayed grouped by service — each group has a select-all checkbox for quick configuration.

| Category | Examples |
| --- | --- |
| Semantic Navigation | list_sites, search_site, get_document_details, find_similar_documents, search_by_date_range |
| RAG / Knowledge Base | search_knowledge_base, list_knowledge_base_files, get_file_content, knowledge_base_stats |
| Web Crawler | fetch_webpage, extract_links |
| Finance | get_stock_quote, search_ticker |
| Weather | get_weather |
| Image Search | search_images |
| Date / Time | get_current_time |
| Code Interpreter | execute_python |

A lean tool list reduces prompt length and helps the LLM make more precise tool choices.

See Tool Calling for the full tool reference.

MCP Servers

Select which external MCP servers this agent can call. The list shows each server's title, description, and connection type badge:

| Badge | Transport | Description |
| --- | --- | --- |
| HTTP (blue) | SSE over HTTP | Web-based MCP servers |
| COMMAND (amber) | stdio | Local process-based MCP servers |

See MCP Servers for configuration details.


Composing Agents for Specific Roles

Because each agent independently selects its LLM Instance, tools, and MCP servers, it is straightforward to build purpose-specific assistants.

Enterprise Search Agent

An agent that helps users find and explore indexed content across the organization.

| Field | Value |
| --- | --- |
| LLM Instance | Anthropic Claude Sonnet |
| Tools | list_sites, search_site, get_document_details, find_similar_documents, search_by_date_range |
| MCP Servers | – |

Data Research Agent

A multi-purpose agent that can browse the web, query financial data, and run data analysis scripts.

| Field | Value |
| --- | --- |
| LLM Instance | OpenAI GPT-4o |
| Tools | fetch_webpage, extract_links, get_stock_quote, get_weather, execute_python, search_knowledge_base |
| MCP Servers | Internal data API (HTTP MCP) |

IT Operations Agent

A local agent for internal IT queries — runs fully on-premise using a local LLM.

| Field | Value |
| --- | --- |
| LLM Instance | Ollama (local Llama 3) |
| Tools | execute_python, get_current_time, search_knowledge_base |
| MCP Servers | Internal ticketing system (stdio MCP) |

How an Agent Executes

When a user sends a message to an AI Agent, the following loop runs:

(Diagram: AI Agent — Execution Flow)

  1. User Input — the user sends a message (text, file attachments, or follow-up) via the agent's Chat tab.
  2. Prompt Construction — Turing ES builds the prompt from the agent's system prompt, tool definitions (native + MCP), and the full message history.
  3. LLM Inference — the LLM Instance processes the prompt and decides whether to respond directly or call tools.
  4. Tool Execution — if tools are needed, the LLM requests a tool call (name + arguments). Turing ES executes it (native tool or MCP server) and returns the result.
  5. Multi-step Reasoning — the LLM analyses the tool results and may request additional tool calls in a reasoning chain, looping back to step 4 until satisfied.
  6. Final Response — the LLM generates the final answer, grounded in the tool results and conversation context.
  7. Chat Rendering — the response is streamed to the user via SSE with full rich content rendering (Markdown, code blocks, D2 diagrams, HTML, download links).
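
The loop above can be sketched in a few lines of Python. The LLM and tool here are stubs standing in for a real LLM Instance and a native tool; Turing ES's actual server-side implementation differs:

```python
def stub_llm(messages):
    # Pretend LLM: requests one tool call, then answers from its result.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_current_time", "arguments": {}}}
    return {"content": "It is 12:00."}

# Stand-in for the agent's selected native tools.
NATIVE_TOOLS = {"get_current_time": lambda: "12:00"}

def run_agent(system_prompt, user_message, llm=stub_llm, max_steps=5):
    # Step 2: prompt construction from system prompt + message history.
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]
    for _ in range(max_steps):                  # step 5: reasoning loop
        reply = llm(messages)                   # step 3: LLM inference
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]             # step 6: final response
        result = NATIVE_TOOLS[call["name"]](**call["arguments"])  # step 4
        messages.append({"role": "tool", "name": call["name"], "content": result})
    raise RuntimeError("agent did not finish within max_steps")
```

The `max_steps` cap is the usual safeguard against a model that keeps requesting tools without converging on an answer.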

All tool invocations are wrapped with logging that records the tool name, input, execution time, and response length for debugging.


REST API

Agent Management

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/ai-agent | List all agents (ordered by title) |
| GET | /api/ai-agent/structure | Get an empty structure template for a new agent |
| GET | /api/ai-agent/{id} | Get a specific agent |
| POST | /api/ai-agent | Create a new agent |
| PUT | /api/ai-agent/{id} | Update an existing agent |
| DELETE | /api/ai-agent/{id} | Delete an agent |
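
A create request could be assembled as below. The JSON field names here are assumptions inferred from the Settings tab; fetch /api/ai-agent/structure for the authoritative empty template before relying on them:

```python
import json

# Hypothetical agent payload; field names mirror the Settings tab and may
# differ from the real schema returned by /api/ai-agent/structure.
new_agent = {
    "name": "Enterprise Search Agent",
    "description": "Helps users find and explore indexed content.",
    "systemPrompt": "",   # blank -> server falls back to the default prompt
    "enabled": True,
}

# POST this body to /api/ai-agent with Content-Type: application/json.
body = json.dumps(new_agent)
```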

Agent Chat

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /api/v2/ai-agent/{agentId}/chat | Stream a chat response over SSE. Request body: { llmInstanceId, messages[] } |
| GET | /api/v2/ai-agent/{agentId}/chat/context-info | Get the LLM context window size. Query param: llmInstanceId |
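
A minimal client-side sketch of the chat call. The {role, content} message shape is an assumption (this page documents only the top-level { llmInstanceId, messages[] } body), and the SSE parsing shown covers only plain `data:` lines:

```python
import json

def build_chat_request(agent_id, llm_instance_id, history):
    """Return (url, json_body) for the streaming chat endpoint."""
    url = f"/api/v2/ai-agent/{agent_id}/chat"
    body = json.dumps({
        "llmInstanceId": llm_instance_id,  # must be in the agent's allowed set
        "messages": history,               # assumed {role, content} objects
    })
    return url, body

def parse_sse_line(line):
    """Extract the payload from one 'data: ...' SSE line, else None."""
    if line.startswith("data: "):
        return line[len("data: "):]
    return None
```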

Native Tools

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/native-tool | List all available tool groups with tool names and descriptions |

Caching

Agent data is cached at the repository layer to avoid repeated database reads:

  • turAIAgentfindAll — caches the full list of agents
  • turAIAgentfindById — caches individual agent lookups

Cache entries are invalidated automatically on create, update, or delete.
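
The pattern can be sketched with a plain dict-backed cache. This is only an illustration of the behaviour described above; the real implementation is server-side, and `turAIAgentfindAll` / `turAIAgentfindById` name its cache regions, not this class:

```python
class AgentRepository:
    """Sketch of a repository with find_all/find_by_id caching and
    invalidation on write."""

    def __init__(self, db):
        self.db = db        # stand-in for the database (id -> agent dict)
        self._by_id = {}    # mirrors the turAIAgentfindById cache
        self._all = None    # mirrors the turAIAgentfindAll cache

    def find_all(self):
        if self._all is None:               # miss -> read all agents once
            self._all = list(self.db.values())
        return self._all

    def find_by_id(self, agent_id):
        if agent_id not in self._by_id:     # miss -> single database read
            self._by_id[agent_id] = self.db[agent_id]
        return self._by_id[agent_id]

    def save(self, agent_id, data):
        self.db[agent_id] = data
        self._by_id.pop(agent_id, None)     # invalidate on create/update
        self._all = None
```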


Related Pages

| Page | Description |
| --- | --- |
| LLM Instances | Configure the LLM providers available as agent backends |
| Tool Calling | Full reference of all 27 native tools |
| MCP Servers | Connect agents to external tools via MCP |
| Chat | Front-end where agents are used (AI Agents tab) |
| GenAI & LLM Configuration | RAG architecture overview |