actions: The actions taken by the agent.
author (optional): "user" or the name of the agent, indicating who appended the event to the session.
branch (optional): The branch of the event, in the format agent_1.agent_2.agent_3, where agent_1 is the parent of agent_2 and agent_2 is the parent of agent_3.
The branch is used when multiple sub-agents should not see their peer agents' conversation history.
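The branch semantics above can be sketched as a prefix check: an event is visible to an agent when the event's branch is the agent's own branch or one of its ancestors, so peers on sibling branches never see each other's history. The helper below is hypothetical (not part of the API) and assumes the dot-separated format described above:

```typescript
// Hypothetical helper: decides whether an event on `eventBranch` should be
// visible to an agent running on `agentBranch`. Visible when the event's
// branch is empty (shared), equal to the agent's branch, or an ancestor of it.
function isVisibleOnBranch(
  eventBranch: string | undefined,
  agentBranch: string,
): boolean {
  if (!eventBranch) return true; // events without a branch are shared
  return (
    agentBranch === eventBranch ||
    agentBranch.startsWith(eventBranch + ".")
  );
}
```

For example, an event on `agent_1` is visible to `agent_1.agent_2`, but an event on `agent_1.agent_2` is not visible to its peer `agent_1.agent_3`.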
content (optional): The content of the response.
customMetadata (optional): The custom metadata of the LlmResponse; an optional key-value pair used to label an LlmResponse. NOTE: the entire object must be JSON-serializable.
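Because the metadata must be JSON-serializable, a round-trip check can catch offending values (such as circular references) before the event is appended. This guard is an illustrative sketch, not part of the API:

```typescript
// Hypothetical guard: returns true when `metadata` survives a JSON
// round-trip with an identical serialized form, i.e. it is safe to
// store as JSON-serializable custom metadata.
function isJsonSerializable(metadata: Record<string, unknown>): boolean {
  try {
    const roundTripped = JSON.parse(JSON.stringify(metadata));
    return JSON.stringify(roundTripped) === JSON.stringify(metadata);
  } catch {
    // e.g. circular references make JSON.stringify throw a TypeError
    return false;
  }
}
```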
errorCode (optional): Error code if the response is an error. The code varies by model.
errorMessage (optional): Error message if the response is an error.
finishReason (optional): The finish reason of the response.
groundingMetadata (optional): The grounding metadata of the response.
id: The unique identifier of the event. Do not assign the ID yourself; it is assigned by the session.
inputTranscription (optional): Audio transcription of the user input.
interrupted (optional): Flag indicating that the LLM was interrupted while generating the content, usually due to a user interruption during bidirectional streaming.
invocationId: The invocation ID of the event. Must be non-empty before the event is appended to a session.
liveSessionResumptionUpdate (optional): The session resumption update of the LlmResponse.
longRunningToolIds (optional): The set of IDs of the long-running function calls. The agent client learns from this field which function calls are long-running. Only valid for function call events.
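A client can consult this set to decide whether to wait for an immediate function response. The sketch below assumes the field shape described above (a set of call IDs); the interface and helper names are illustrative, not the SDK's own types:

```typescript
// Illustrative subset of an event carrying function calls; the field name
// is assumed from the description above.
interface FunctionCallEvent {
  longRunningToolIds?: Set<string>;
}

// Returns true when the call with `callId` was marked long-running, so the
// client knows not to block waiting for its response.
function isLongRunningCall(event: FunctionCallEvent, callId: string): boolean {
  return event.longRunningToolIds?.has(callId) ?? false;
}
```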
outputTranscription (optional): Audio transcription of the model output.
partial (optional): Indicates whether the text content is part of an unfinished text stream. Only used in streaming mode, and only when the content is plain text.
timestamp: The timestamp of the event.
turnComplete (optional): Indicates whether the response from the model is complete. Only used in streaming mode.
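In streaming mode the partial-text flag and the turn-completion flag work together: a client accumulates partial chunks until an event signals the turn is complete. A minimal sketch, with field names assumed from the descriptions above and the content simplified to a plain string:

```typescript
// Simplified stand-in for a streaming event (illustrative, not the SDK type).
interface StreamEvent {
  text?: string;          // plain-text content of this chunk
  partial?: boolean;      // true while the text stream is unfinished
  turnComplete?: boolean; // true once the model's turn is done
}

// Accumulates partial text chunks until the turn completes, then returns
// the full text of the turn.
function collectTurn(events: StreamEvent[]): string {
  let buffer = "";
  for (const event of events) {
    if (event.text) buffer += event.text;
    if (event.turnComplete) break; // turn finished; stop accumulating
  }
  return buffer;
}
```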
usageMetadata (optional): The usage metadata of the LlmResponse.
Represents an event in a conversation between agents and users.
An event stores the content of the conversation, as well as the actions taken by the agents, such as function calls.
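Taken together, the fields above suggest an event shape roughly like the following. This is an illustrative sketch inferred from the field descriptions, not the SDK's actual type definition:

```typescript
// Illustrative event shape; names and types are assumed from the
// descriptions above, not copied from the SDK.
interface ConversationEvent {
  id: string;             // assigned by the session, never by the caller
  invocationId: string;   // must be non-empty before appending to a session
  timestamp: number;
  author?: string;        // "user" or the name of an agent
  branch?: string;        // e.g. "agent_1.agent_2.agent_3"
  content?: unknown;
  customMetadata?: Record<string, unknown>; // must be JSON-serializable
  errorCode?: string;
  errorMessage?: string;
  partial?: boolean;      // streaming: unfinished text stream
  turnComplete?: boolean; // streaming: model turn finished
  interrupted?: boolean;
  longRunningToolIds?: Set<string>;
}

// A minimal example event, as a user message appended to a session.
const example: ConversationEvent = {
  id: "evt_1",
  invocationId: "inv_1",
  timestamp: Date.now(),
  author: "user",
};
```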