From e7d182c1d4f42d4a35be8b87d83de3b04237813a Mon Sep 17 00:00:00 2001 From: Isaac Eldridge Date: Mon, 5 Jan 2026 10:17:22 -0800 Subject: [PATCH 1/5] Add support for additional large language models in agent networks documentation Updated the af-agent-networks.adoc to include support for Azure OpenAI, OpenAI API, and Gemini API. Added a table detailing requirements and suggested models for each LLM. Revised af-project-files.adoc to reference the new section on LLMs for better clarity on model usage and configuration settings. --- modules/ROOT/pages/af-agent-networks.adoc | 30 +++++++++++++++++++++++ modules/ROOT/pages/af-project-files.adoc | 21 +++++++++------- 2 files changed, 42 insertions(+), 9 deletions(-) diff --git a/modules/ROOT/pages/af-agent-networks.adoc b/modules/ROOT/pages/af-agent-networks.adoc index 1a2c12506..7fb11575c 100644 --- a/modules/ROOT/pages/af-agent-networks.adoc +++ b/modules/ROOT/pages/af-agent-networks.adoc @@ -60,6 +60,36 @@ These servers will be published to Exchange. The asset might also be published to Exchange. . An asset that has been added to this agent network project from Exchange. +== Large Language Models + +Agent network brokers support these LLMs. + +* Azure OpenAI +* OpenAI API +* Gemini API + +The following table details the requirements and suggested models. + +|=== +|*Model Provider* |*Required Endpoint* |*Required Capabilities* |*Suggested Models* + +|OpenAI |`/responses` a| +* Reasoning +* Native structured output +* Function and custom tool calling +a| +* For lower latency: GPT-5-mini +* For complex reasoning: GPT-5.1 + +|Gemini |`/generateContent` (Native API) a| +* Native Thinking (via thinkingConfig) +* Native structured output (responseSchema) +* Function and custom tool calling +a| +* For lower latency: Gemini 2.5 Flash, Gemini 2.5 Flash-Lite +* For complex reasoning: Gemini 3 Pro (Deep Think capabilities) +|=== + == Considerations Agent networks have these considerations. 
diff --git a/modules/ROOT/pages/af-project-files.adoc b/modules/ROOT/pages/af-project-files.adoc index c1b97a017..6027d70d2 100644 --- a/modules/ROOT/pages/af-project-files.adoc +++ b/modules/ROOT/pages/af-project-files.adoc @@ -231,14 +231,7 @@ The `spec` element has these properties. The value of this section is a reference to one of the LLMs defined in Anypoint Exchange or in the `llmProviders` section of `agent-network.yaml`. Because it's a reference, you can choose to share the same LLM across all the brokers in your agent network. Or, you can have different brokers use different LLMs to better suit their tasks. -Agent network brokers support OpenAI models. The model you use must support: - -* The `/responses` endpoint -* Reasoning -* Native structured output -* Function and Custom tool calling - -GPT-5-mini and GPT-5.1 meet these requirements. GPT-5-mini and GPT-5.1 meet these requirements. Evaluate the different models to find the right balance for your needs. For lower latency, consider smaller models like GPT-5-mini. +For more information on supported LLMs, see xref:af-agent-networks.adoc#large-language-models[]. The `llm` element has these properties. @@ -264,7 +257,17 @@ a| |`configuration.openAI.topP` |Nucleus sampling parameter. Requires GPT-5.1 and `reasoningEffort` set to `NONE`. |Number |Any number |No |`configuration.openAI.topLogprobs` |Number of most likely tokens to return at each position. Requires GPT-5.1 and `reasoningEffort` set to `NONE`. |Integer |Any integer |No |`configuration.openAI.maxOutputTokens` |Maximum number of tokens to generate. |Integer |Any integer |No -|=== +|`configuration.gemini` |Gemini specific configuration settings. |Object |Object with Gemini settings |No +|`configuration.gemini.thinkingConfig.thinkingBudget` |Sets a token budget for the reasoning phase. (Applies only to Gemini 2.5 series.) 
|Integer a| +* `0` (Disabled) +* `1024` to `32768` |No +|`configuration.gemini.thinkingConfig.thinkingLevel` |Controls the depth of the reasoning process. (Applies only to Gemini 3 series.) |String a| +* `High` (Default) +* `Low` |No +|`configuration.gemini.temperature` |Controls randomness. |Number |For Gemini 3 and Gemini 2.5, Google recommends keeping this at `1.0` (default) to avoid breaking the reasoning chain. |No +|`configuration.gemini.topP` |Nucleus sampling parameter. |Number |Any number |No +|`configuration.gemini.responseLogprobs` |Whether to return log probabilities. |Boolean |true or false |No +|=== [[instructions-section]] ==== Instructions Section From fae714b59e5dc1f49f7bd621e4cb9b588ab86ee8 Mon Sep 17 00:00:00 2001 From: Isaac Eldridge Date: Tue, 13 Jan 2026 10:58:55 -0800 Subject: [PATCH 2/5] Update cross-references in agent network documentation to reflect the new section for Large Language Models, improving navigation and clarity. --- modules/ROOT/pages/af-agent-networks.adoc | 1 + modules/ROOT/pages/af-project-files.adoc | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/ROOT/pages/af-agent-networks.adoc b/modules/ROOT/pages/af-agent-networks.adoc index 84e921ba1..183ad9a2f 100644 --- a/modules/ROOT/pages/af-agent-networks.adoc +++ b/modules/ROOT/pages/af-agent-networks.adoc @@ -60,6 +60,7 @@ These servers will be published to Exchange. The asset might also be published to Exchange. . An asset that has been added to this agent network project from Exchange. +[[llm-support]] == Large Language Models Agent network brokers support these LLMs. diff --git a/modules/ROOT/pages/af-project-files.adoc b/modules/ROOT/pages/af-project-files.adoc index 1b0e062b8..f456272ab 100644 --- a/modules/ROOT/pages/af-project-files.adoc +++ b/modules/ROOT/pages/af-project-files.adoc @@ -231,7 +231,7 @@ The `spec` element has these properties. 
The value of this section is a reference to one of the LLMs defined in Anypoint Exchange or in the `llmProviders` section of `agent-network.yaml`. Because it's a reference, you can choose to share the same LLM across all the brokers in your agent network. Or, you can have different brokers use different LLMs to better suit their tasks. -For more information on supported LLMs, see xref:af-agent-networks.adoc#large-language-models[]. +For more information on supported LLMs, see xref:af-agent-networks.adoc#llm-support[]. The `llm` element has these properties. From 63ca411c7ea64f8f210fdd18e6c4e6225cdedf16 Mon Sep 17 00:00:00 2001 From: Isaac Eldridge Date: Tue, 13 Jan 2026 13:46:46 -0800 Subject: [PATCH 3/5] Refactor agent network documentation by removing outdated sections on deployment targets and authentication types, and clarifying non-supported A2A protocol features for improved accuracy and readability. --- modules/ROOT/pages/af-agent-networks.adoc | 17 +---------------- 1 file changed, 1 insertion(+), 16 deletions(-) diff --git a/modules/ROOT/pages/af-agent-networks.adoc b/modules/ROOT/pages/af-agent-networks.adoc index 183ad9a2f..775c8cb63 100644 --- a/modules/ROOT/pages/af-agent-networks.adoc +++ b/modules/ROOT/pages/af-agent-networks.adoc @@ -122,22 +122,7 @@ Consider a network with a client and two brokers (1 and 2). For more information, see https://a2a-protocol.org/v0.3.0/topics/life-of-a-task/#group-related-interactions[Life of a Task - Group Related Interactions]. -== Deployment Targets - -These deployment targets are supported. - -* CloudHub 2.0 -* Runtime Fabric (limited availability) - -== Authentication Types - -These authentication types are supported. - -* Basic -* Client Credentials -* Anypoint Client Credentials - -== Non-Supported Features +== Non-Supported A2A Protocol Features Agent network doesn't support streaming with Server-Sent Events (SSE). 
From 1df9c11c5d4bc0dd9258a2c78ec80fee1a7ee81d Mon Sep 17 00:00:00 2001 From: Isaac Eldridge Date: Tue, 13 Jan 2026 13:48:11 -0800 Subject: [PATCH 4/5] Update agent network documentation to clarify that streaming with Server-Sent Events (SSE) is not supported, enhancing accuracy and user understanding. --- modules/ROOT/pages/af-agent-networks.adoc | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/modules/ROOT/pages/af-agent-networks.adoc b/modules/ROOT/pages/af-agent-networks.adoc index 775c8cb63..ebf4b5a43 100644 --- a/modules/ROOT/pages/af-agent-networks.adoc +++ b/modules/ROOT/pages/af-agent-networks.adoc @@ -122,9 +122,8 @@ Consider a network with a client and two brokers (1 and 2). For more information, see https://a2a-protocol.org/v0.3.0/topics/life-of-a-task/#group-related-interactions[Life of a Task - Group Related Interactions]. -== Non-Supported A2A Protocol Features +NOTE: Agent network doesn't support streaming with Server-Sent Events (SSE). -Agent network doesn't support streaming with Server-Sent Events (SSE). == See Also From 4645e4c44659de16619b26edff2f189dfa5b44b8 Mon Sep 17 00:00:00 2001 From: Isaac Eldridge Date: Wed, 14 Jan 2026 15:49:21 -0800 Subject: [PATCH 5/5] Update agent network and project files documentation to reflect changes in configuration parameters, specifically renaming 'thinkingConfig' to 'thinkingBudget' and 'thinkingLevel', and adding 'maxOutputTokens' for clarity and accuracy. 
--- modules/ROOT/pages/af-agent-networks.adoc | 2 +- modules/ROOT/pages/af-project-files.adoc | 5 ++++- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/modules/ROOT/pages/af-agent-networks.adoc b/modules/ROOT/pages/af-agent-networks.adoc index ebf4b5a43..2891238eb 100644 --- a/modules/ROOT/pages/af-agent-networks.adoc +++ b/modules/ROOT/pages/af-agent-networks.adoc @@ -83,7 +83,7 @@ a| * For complex reasoning: GPT-5.1 |Gemini |`/generateContent` (Native API) a| -* Native Thinking (via thinkingConfig) +* Native Thinking (via thinkingBudget and thinkingLevel) * Native structured output (responseSchema) * Function and custom tool calling a| diff --git a/modules/ROOT/pages/af-project-files.adoc b/modules/ROOT/pages/af-project-files.adoc index f456272ab..e24b4ce64 100644 --- a/modules/ROOT/pages/af-project-files.adoc +++ b/modules/ROOT/pages/af-project-files.adoc @@ -260,13 +260,16 @@ a| |`configuration.gemini` |Gemini specific configuration settings. |Object |Object with Gemini settings |No |`configuration.gemini.thinkingConfig.thinkingBudget` |Sets a token budget for the reasoning phase. (Applies only to Gemini 2.5 series.) |Integer a| * `0` (Disabled) +* `-1` (Automatic) * `1024` to `32768` |No -|`configuration.gemini.thinkingConfig.thinkingLevel` |Controls the depth of the reasoning process. (Applies only to Gemini 3 series.) |String a| +|`configuration.gemini.thinkingLevel` |Controls the depth of the reasoning process. (Applies only to Gemini 3 series.) |String a| * `High` (Default) * `Low` |No |`configuration.gemini.temperature` |Controls randomness. |Number |For Gemini 3 and Gemini 2.5, Google recommends keeping this at `1.0` (default) to avoid breaking the reasoning chain. |No |`configuration.gemini.topP` |Nucleus sampling parameter. |Number |Any number |No |`configuration.gemini.responseLogprobs` |Whether to return log probabilities. 
|Boolean |true or false |No +|`configuration.gemini.maxOutputTokens` |Maximum number of tokens that can be generated in the response. |Integer |Any integer |No + |=== [[instructions-section]]
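Taken together, the `configuration.gemini.*` property paths documented across these patches describe a nested block inside a broker's LLM settings in `agent-network.yaml`. The sketch below is illustrative only: the property paths and allowed values come from the table, but the surrounding `llm` key and the exact nesting outside `configuration` are assumptions, not taken from the patches.

```yaml
# Hypothetical sketch of a broker's Gemini settings in agent-network.yaml.
# Only the configuration.gemini.* paths come from the documented table;
# the llm wrapper and outer nesting are assumptions.
llm:
  configuration:
    gemini:
      thinkingConfig:
        thinkingBudget: 1024   # Gemini 2.5 only: 0 (disabled), -1 (automatic), or 1024-32768
      thinkingLevel: High      # Gemini 3 only: High (default) or Low
      temperature: 1.0         # Google recommends 1.0 to avoid breaking the reasoning chain
      topP: 0.95
      responseLogprobs: false
      maxOutputTokens: 4096
```

Note that after PATCH 5/5, `thinkingBudget` is still documented under `thinkingConfig` while `thinkingLevel` moves directly under `gemini`; the sketch mirrors that final state of the table.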