55 changes: 34 additions & 21 deletions modules/ROOT/pages/af-agent-networks.adoc
@@ -60,6 +60,39 @@ These servers will be published to Exchange.
The asset might also be published to Exchange.
. An asset that has been added to this agent network project from Exchange.

[[llm-support]]
== Large Language Models

Agent network brokers support these LLM providers.

* Azure OpenAI
* OpenAI API
* Gemini API

The following table details the requirements and recommended models.

|===
|*Model Provider* |*Required Endpoint* |*Required Capabilities* |*Suggested Models*

|OpenAI |`/responses` a|
* Reasoning
* Native structured output
* Function and custom tool calling
a|
* For lower latency: GPT-5-mini
* For complex reasoning: GPT-5.1; evaluate models to find the right balance for your needs

|Gemini |`/generateContent` (native API) a|
* Native thinking (via `thinkingConfig`)
* Native structured output (`responseSchema`)
* Function and custom tool calling
a|
* For lower latency: Gemini 2.5 Flash, Gemini 2.5 Flash-Lite
* For complex reasoning: Gemini 3 Pro (Deep Think capabilities)
|===

Agent network supports text-based prompts and responses. Image and binary message types aren't supported.
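To make the provider options above concrete, the following is an illustrative sketch only: the key names (`llmProviders`, `provider`, `model`, `connection`, `apiKey`) are assumptions, not the documented `agent-network.yaml` schema, so check the project file reference for the real property names.

```yaml
# Hypothetical sketch - key names are illustrative, not the documented schema.
llmProviders:
  openAiProvider:
    provider: OPENAI          # uses the /responses endpoint
    model: gpt-5-mini         # lower-latency option from the table above
    connection:
      apiKey: ${openai.api.key}
  geminiProvider:
    provider: GEMINI          # uses the /generateContent native API
    model: gemini-2.5-flash   # lower-latency option from the table above
    connection:
      apiKey: ${gemini.api.key}
```

Defining providers once and referencing them lets several brokers share one LLM configuration, or use different models suited to their tasks.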

== A2A Protocol

The Agent2Agent (A2A) Protocol governs agent-to-agent communication. This protocol powers orchestration, observability, and governance features in agent networks. MuleSoft supports v0.3.0 of the A2A Protocol Specification.
@@ -89,28 +122,8 @@ Consider a network with a client and two brokers (1 and 2).

For more information, see https://a2a-protocol.org/v0.3.0/topics/life-of-a-task/#group-related-interactions[Life of a Task - Group Related Interactions].
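As an illustration of how related interactions are grouped, the sketch below renders an A2A `message/send` request in YAML for readability; the actual wire format is JSON-RPC. The field names follow the A2A v0.3.0 specification as best understood here (verify against the spec linked above), and the identifier values are invented for the example.

```yaml
# YAML rendering of an A2A v0.3.0 JSON-RPC request; the wire format is JSON.
jsonrpc: "2.0"
id: 1
method: message/send
params:
  message:
    role: user
    parts:
      - kind: text
        text: "Summarize the open orders for customer 42."
    messageId: msg-001   # client-generated identifier for this message
    contextId: ctx-abc   # server-assigned; groups related interactions
    taskId: task-123     # continues an existing task, if any
```

Reusing the server-assigned `contextId` (and, where applicable, `taskId`) on follow-up messages is what ties a multi-turn exchange into one logical task.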


== Considerations

Agent networks have these considerations.
NOTE: Agent network doesn't support streaming with Server-Sent Events (SSE).

* An agent network project supports only one domain (one YAML file per project). For multiple domains, create a separate project for each.
* These authentication types are supported.
** Basic
** Client Credentials
** Anypoint Client Credentials
* All LLM models in your agent network project must support structured responses in JSON format. For example, a `gpt4.1` model is supported; a `gpt4` model isn't.
* When you add governance policies to your agent network project YAML file, those policies persist between deployments. Governance policies added at runtime don't persist: the next time you deploy your agent network, only the policies in the `agent-network.yaml` file are applied.
* These deployment targets are supported.
** CloudHub 2.0
** Runtime Fabric (limited availability)
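The governance-policy consideration above can be sketched as follows. The `policies` key and its children are hypothetical illustrations, not the documented schema; the point is only that policies declared in `agent-network.yaml` survive redeployment, while policies added at runtime don't.

```yaml
# Hypothetical sketch - key and policy names are illustrative only.
# Policies declared here persist across deployments of the agent network.
policies:
  - name: rate-limiting          # hypothetical policy name
    configuration:
      maxRequestsPerMinute: 60
# Policies added at runtime (outside this file) are lost on the next deployment.
```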

== See Also

21 changes: 12 additions & 9 deletions modules/ROOT/pages/af-project-files.adoc
@@ -231,14 +231,7 @@ The `spec` element has these properties.

The value of this section is a reference to one of the LLMs defined in Anypoint Exchange or in the `llmProviders` section of `agent-network.yaml`. Because it's a reference, you can choose to share the same LLM across all the brokers in your agent network. Or, you can have different brokers use different LLMs to better suit their tasks.

For more information on supported LLMs, see xref:af-agent-networks.adoc#llm-support[].

The `llm` element has these properties.

@@ -264,7 +257,17 @@ a|
|`configuration.openAI.topP` |Nucleus sampling parameter. Requires GPT-5.1 and `reasoningEffort` set to `NONE`. |Number |Any number |No
|`configuration.openAI.topLogprobs` |Number of most likely tokens to return at each position. Requires GPT-5.1 and `reasoningEffort` set to `NONE`. |Integer |Any integer |No
|`configuration.openAI.maxOutputTokens` |Maximum number of tokens to generate. |Integer |Any integer |No
|`configuration.gemini` |Gemini-specific configuration settings. |Object |Object with Gemini settings |No
|`configuration.gemini.thinkingConfig.thinkingBudget` |Sets a token budget for the reasoning phase. (Applies only to Gemini 2.5 series.) |Integer a|
* `0` (Disabled)
* `1024` to `32768` |No
|`configuration.gemini.thinkingConfig.thinkingLevel` |Controls the depth of the reasoning process. (Applies only to Gemini 3 series.) |String a|
* `High` (Default)
* `Low` |No
|`configuration.gemini.temperature` |Controls randomness. |Number |For Gemini 3 and Gemini 2.5, Google recommends keeping this at `1.0` (default) to avoid breaking the reasoning chain. |No
|`configuration.gemini.topP` |Nucleus sampling parameter. |Number |Any number |No
|`configuration.gemini.responseLogprobs` |Whether to return log probabilities. |Boolean |true or false |No
|===
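Putting the Gemini properties above together, a broker's `llm` element might look like this sketch. The `configuration.gemini` property paths come from the table; the surrounding structure and the `ref` value are illustrative assumptions, not the documented schema.

```yaml
# Sketch only - the configuration.gemini.* paths match the table above;
# the llm/ref structure and provider name are hypothetical.
llm:
  ref: geminiProvider          # hypothetical reference to a defined LLM provider
  configuration:
    gemini:
      thinkingConfig:
        thinkingLevel: High    # Gemini 3 series only; depth of reasoning (High/Low)
      temperature: 1.0         # recommended default to preserve the reasoning chain
      topP: 0.95
      responseLogprobs: false
```

Note that `thinkingLevel` applies only to Gemini 3 series models; for Gemini 2.5 series, use `thinkingConfig.thinkingBudget` instead.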

[[instructions-section]]
==== Instructions Section