The parameters for creating an invocation context.
Optional activeStreamingTools
The running streaming tools of this invocation.

agent
The current agent of this invocation context.

Optional Readonly artifactService

Optional branch
The branch of the invocation context. The format is like agent_1.agent_2.agent_3, where agent_1 is the parent of agent_2, and agent_2 is the parent of agent_3. Branch is used when multiple sub-agents shouldn't see their peer agents' conversation history.
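The dotted branch format described above can be sketched as a simple parent-to-child path extension. This is a hypothetical illustration, not the ADK implementation; the function name is an assumption.

```typescript
// Hypothetical sketch: how a child agent's branch could be derived from
// its parent's branch. An undefined parent branch means the agent is at
// the root of the invocation.
function childBranch(parentBranch: string | undefined, agentName: string): string {
  return parentBranch ? `${parentBranch}.${agentName}` : agentName;
}
```

For example, `childBranch("agent_1.agent_2", "agent_3")` yields `"agent_1.agent_2.agent_3"`, matching the format shown above.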
Optional Readonly credentialService

endInvocation
Whether to end this invocation. Set to true in callbacks or tools to terminate this invocation.

Readonly invocationId
The id of this invocation context.

Optional liveRequestQueue
The queue to receive live requests.

Optional Readonly memoryService

pluginManager
The manager for keeping track of plugins in this invocation.
Optional runConfig
Configurations for live agents under this invocation.

Readonly session
The current session of this invocation context.

Optional Readonly sessionService

Optional transcriptionCache
Caches necessary data, audio or contents, that are needed by transcription.

Optional Readonly userContent
The user content that started this invocation.
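To show how a tool or callback might use the fields above, here is a minimal sketch. The interface and helper names are assumptions for illustration, not the ADK API; only the field semantics (a readonly id, a mutable end-invocation flag) come from the documentation above.

```typescript
// Hypothetical sketch: a guard that reads the invocation context and
// terminates the invocation early by setting endInvocation.
interface InvocationContextLike {
  readonly invocationId: string;
  branch?: string;
  endInvocation: boolean;
}

function stopIfOverBudget(
  ctx: InvocationContextLike,
  llmCallsMade: number,
  maxLlmCalls: number,
): boolean {
  if (llmCallsMade >= maxLlmCalls) {
    // Setting endInvocation to true in a callback or tool terminates the invocation.
    ctx.endInvocation = true;
  }
  return ctx.endInvocation;
}
```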
An invocation context represents the data of a single invocation of an agent.
An invocation:
1. Starts with a user message and ends with a final response.
2. Can contain one or multiple agent calls.
3. Is handled by runner.runAsync().
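The invocation lifecycle above can be sketched as consuming the event stream that a single `runner.runAsync()` call produces. The event shape here is an assumption for illustration; only "an invocation ends with a final response" comes from the text.

```typescript
// Hypothetical sketch: consume one invocation's event stream until the
// final response event, returning the number of events seen.
interface EventLike {
  isFinalResponse(): boolean;
}

async function consumeInvocation(events: AsyncIterable<EventLike>): Promise<number> {
  let count = 0;
  for await (const event of events) {
    count++;
    if (event.isFinalResponse()) break; // the invocation ends with a final response
  }
  return count;
}
```

In practice the iterable would come from something like `runner.runAsync({...})`; a stub stream works the same way for testing.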
An invocation runs an agent until it does not request to transfer to another agent.
An agent call:
1. Is handled by agent.runAsync().
2. Ends when agent.runAsync() ends.
An LLM agent call is a call to an agent with a BaseLLMFlow. An LLM agent call can contain one or multiple steps.
An LLM agent runs steps in a loop until:
1. A final response is generated.
2. The agent transfers to another agent.
3. The end_invocation is set to true by any callbacks or tools.
A step:
1. Calls the LLM only once and yields its response.
2. Calls the tools and yields their responses if requested.
The summarization of the function response is considered another step, since it is another LLM call. A step ends when it's done calling the LLM and tools, or if the end_invocation is set to true at any time.
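The step loop described above can be sketched as follows. This is a simplified illustration, not the ADK implementation; the interfaces and names are assumptions, and only the loop structure (one LLM call per step, tool calls feeding the next step, three exit conditions) comes from the text.

```typescript
// Hypothetical sketch of an LLM agent's step loop: each step calls the
// LLM once, then calls tools if requested; the loop stops on a final
// response, a transfer to another agent, or end_invocation being set.
interface StepResult {
  finalResponse?: string;
  transferTo?: string;
  toolCalls: string[];
}

interface StepContext {
  endInvocation: boolean;
}

function runSteps(
  ctx: StepContext,
  callLlm: () => StepResult,
  callTool: (name: string) => void,
): string | undefined {
  while (true) {
    const result = callLlm(); // one LLM call per step
    if (result.finalResponse !== undefined) return result.finalResponse;
    if (result.transferTo !== undefined) return undefined; // transfer ends this agent's loop
    for (const name of result.toolCalls) callTool(name); // tool responses feed the next step
    if (ctx.endInvocation) return undefined; // a tool may end the invocation at any time
  }
}
```

Note that the summarization of a tool's response happens in the next iteration of this loop, which is why the text counts it as another step.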