Parallel agents
The ParallelAgent
The ParallelAgent is a workflow agent that executes its sub-agents concurrently. This dramatically speeds up workflows where tasks can be performed independently.
Use ParallelAgent for scenarios that prioritize speed and involve independent, resource-intensive tasks; it facilitates efficient parallel execution. When sub-agents operate without dependencies, their tasks can be performed concurrently, significantly reducing overall processing time.
As with other workflow agents, the ParallelAgent is not powered by an LLM and is thus deterministic in how it executes. That said, workflow agents are concerned only with their execution (i.e., running sub-agents in parallel), not with their internal logic; the tools or sub-agents of a workflow agent may or may not utilize LLMs.
Example
This approach is particularly beneficial for operations like multi-source data retrieval or heavy computations, where parallelization yields substantial performance gains. Importantly, this strategy assumes no inherent need for shared state or direct information exchange between the concurrently executing agents.
How it works
When the ParallelAgent's run_async() method is called:
- Concurrent Execution: It initiates the run_async() method of each sub-agent present in the sub_agents list concurrently. This means all the agents start running at (approximately) the same time.
- Independent Branches: Each sub-agent operates in its own execution branch. There is no automatic sharing of conversation history or state between these branches during execution.
- Result Collection: The ParallelAgent manages the parallel execution and, typically, provides a way to access the results from each sub-agent after they have completed (e.g., through a list of results or events), as sketched below. The order of results may not be deterministic.
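The interleaving of branches is easiest to observe by driving a ParallelAgent through a runner and inspecting the events as they arrive. Below is a minimal sketch, assuming the google-adk InMemoryRunner API; the sub-agent names, instructions, and IDs are illustrative only and are not part of the research example later on this page.
import asyncio

from google.adk.agents import LlmAgent, ParallelAgent
from google.adk.runners import InMemoryRunner
from google.genai import types

# Two trivially independent sub-agents (hypothetical names and instructions).
poet = LlmAgent(name="Poet", model="gemini-2.0-flash",
                instruction="Write a two-line poem about the user's topic.")
lister = LlmAgent(name="FactLister", model="gemini-2.0-flash",
                  instruction="List two facts about the user's topic.")

demo_parallel = ParallelAgent(name="DemoParallel", sub_agents=[poet, lister])

async def main():
    runner = InMemoryRunner(agent=demo_parallel, app_name="demo_app")
    # create_session is async in recent ADK releases.
    await runner.session_service.create_session(
        app_name="demo_app", user_id="u1", session_id="s1")
    message = types.Content(role="user", parts=[types.Part(text="autumn")])
    async for event in runner.run_async(user_id="u1", session_id="s1",
                                        new_message=message):
        # Events from the two branches may interleave; `author` identifies the
        # sub-agent that produced each event, and the order is not guaranteed.
        if event.content and event.content.parts and event.content.parts[0].text:
            print(f"[{event.author}] {event.content.parts[0].text.strip()}")

asyncio.run(main())
Running the sketch requires Gemini credentials configured for ADK (for example, a GOOGLE_API_KEY environment variable).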
Independent Execution and State Management
It's crucial to understand that sub-agents within a ParallelAgent run independently. If you need communication or data sharing between these agents, you must implement it explicitly. Possible approaches include:
- Shared InvocationContext: You could pass a shared InvocationContext object to each sub-agent. This object could act as a shared data store. However, you'd need to manage concurrent access to this shared context carefully (e.g., using locks) to avoid race conditions.
- External State Management: Use an external database, message queue, or other mechanism to manage shared state and facilitate communication between agents.
- Post-Processing: Collect results from each branch, and then implement logic to coordinate data afterwards (see the sketch after this list).
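The post-processing approach pairs naturally with ADK's output_key mechanism: each parallel branch writes its final response into session state under its own key, and a downstream agent run after the ParallelAgent (for example via a SequentialAgent) reads those keys back. The sketch below assumes that fan-out/gather pattern; the agent names, state keys, and instructions are illustrative.
from google.adk.agents import LlmAgent, ParallelAgent, SequentialAgent

# Each branch writes its result to a distinct state key, so the branches never
# contend for the same state entry while running concurrently.
writer_a = LlmAgent(name="WriterA", model="gemini-2.0-flash",
                    instruction="Summarize topic A in one sentence.",
                    output_key="topic_a_summary")
writer_b = LlmAgent(name="WriterB", model="gemini-2.0-flash",
                    instruction="Summarize topic B in one sentence.",
                    output_key="topic_b_summary")

gather = ParallelAgent(name="GatherInParallel", sub_agents=[writer_a, writer_b])

# The merger runs only after both branches have finished; the {key} placeholders
# in its instruction are filled from session state, so it sees both results.
merger = LlmAgent(
    name="Merger",
    model="gemini-2.0-flash",
    instruction="Combine these into one short report:\n"
                "A: {topic_a_summary}\n"
                "B: {topic_b_summary}",
)

# The SequentialAgent coordinates the branches' data after the parallel step completes.
pipeline = SequentialAgent(name="GatherThenMerge", sub_agents=[gather, merger])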
Full Example: Parallel Web Research
Imagine researching multiple topics simultaneously:
- Researcher Agent 1: An LlmAgent that researches "renewable energy sources."
- Researcher Agent 2: An LlmAgent that researches "electric vehicle technology."
- Researcher Agent 3: An LlmAgent that researches "carbon capture methods."
These research tasks are independent. Using a ParallelAgent allows them to run concurrently, potentially reducing the total research time significantly compared to running them sequentially. The results from each agent would be collected separately after they finish.
Code
from google.adk.agents import LlmAgent, ParallelAgent
from google.adk.tools import google_search

GEMINI_MODEL = "gemini-2.0-flash"  # Any available Gemini model name works here.

# --- Define Researcher Sub-Agents ---
# Researcher 1: Renewable Energy
researcher_agent_1 = LlmAgent(
name="RenewableEnergyResearcher",
model=GEMINI_MODEL,
instruction="""You are an AI Research Assistant specializing in energy.
Research the latest advancements in 'renewable energy sources'.
Use the Google Search tool provided.
Summarize your key findings concisely (1-2 sentences).
Output *only* the summary.
""",
description="Researches renewable energy sources.",
tools=[google_search], # Provide the search tool
output_key="renewable_energy_result"
)
# Researcher 2: Electric Vehicles
researcher_agent_2 = LlmAgent(
name="EVResearcher",
model=GEMINI_MODEL,
instruction="""You are an AI Research Assistant specializing in transportation.
Research the latest developments in 'electric vehicle technology'.
Use the Google Search tool provided.
Summarize your key findings concisely (1-2 sentences).
Output *only* the summary.
""",
description="Researches electric vehicle technology.",
tools=[google_search], # Provide the search tool
output_key="ev_technology_result"
)
# Researcher 3: Carbon Capture
researcher_agent_3 = LlmAgent(
name="CarbonCaptureResearcher",
model=GEMINI_MODEL,
instruction="""You are an AI Research Assistant specializing in climate solutions.
Research the current state of 'carbon capture methods'.
Use the Google Search tool provided.
Summarize your key findings concisely (1-2 sentences).
Output *only* the summary.
""",
description="Researches carbon capture methods.",
tools=[google_search], # Provide the search tool
output_key="carbon_capture_result"
)
# --- Create the ParallelAgent ---
# This agent orchestrates the concurrent execution of the researchers.
parallel_research_agent = ParallelAgent(
    name="ParallelWebResearchAgent",
    sub_agents=[researcher_agent_1, researcher_agent_2, researcher_agent_3],
    description="Runs multiple research agents in parallel to gather information."
)

# For running with ADK CLI tools (adk web, adk run, adk api_server),
# the module-level agent variable MUST be named `root_agent`.
root_agent = parallel_research_agent
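To drive this example programmatically and collect the three summaries afterwards, one option is to run the agent through an InMemoryRunner and then read the output_key values back from session state. The following is a minimal sketch, assuming the agents defined above and the google-adk runner/session API; the app, user, and session IDs are arbitrary, and the run requires Gemini and Google Search tool access to be configured.
import asyncio

from google.adk.runners import InMemoryRunner
from google.genai import types

async def run_research():
    runner = InMemoryRunner(agent=root_agent, app_name="research_app")
    # create_session / get_session are async in recent ADK releases.
    await runner.session_service.create_session(
        app_name="research_app", user_id="u1", session_id="s1")
    prompt = types.Content(role="user", parts=[types.Part(text="Run the research.")])

    # Drain the event stream; events from the three branches arrive in
    # nondeterministic order.
    async for _event in runner.run_async(user_id="u1", session_id="s1",
                                         new_message=prompt):
        pass

    # Each researcher stored its summary in session state under its output_key.
    session = await runner.session_service.get_session(
        app_name="research_app", user_id="u1", session_id="s1")
    for key in ("renewable_energy_result", "ev_technology_result",
                "carbon_capture_result"):
        print(f"{key}: {session.state.get(key)}")

asyncio.run(run_research())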
Note (Runnable Code Sample): To run this specific parallel research example yourself, you can find a complete, runnable Python file here.