Closes the LLM server connection.
Receives the model response using the LLM server connection.
Yields a stream of LlmResponse objects.
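Because the responses arrive as an async generator, a caller iterates over them with `async for`. The sketch below assumes that contract; `LlmResponse` and `StubConnection` here are minimal stand-ins for the real types, not the actual implementation.

```python
import asyncio
from dataclasses import dataclass


# Hypothetical stand-in for the real LlmResponse type.
@dataclass
class LlmResponse:
    text: str


# Stub connection whose receive() yields responses, mirroring the
# documented async-generator contract.
class StubConnection:
    async def receive(self):
        for chunk in ("Hello", ", ", "world"):
            yield LlmResponse(text=chunk)


async def main():
    connection = StubConnection()
    parts = []
    # Consume responses as they stream in from the connection.
    async for response in connection.receive():
        parts.append(response.text)
    return "".join(parts)


print(asyncio.run(main()))  # Hello, world
```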
Sends the content to the model.
The model responds immediately upon receiving the content. If you send function responses, every part in the content must be a function response.
The content to send to the model.
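The "all parts must be function responses" rule can be illustrated with a stub that enforces it on send. The `Part`/`Content` shapes and the `get_weather` payload below are assumptions for illustration, not the library's actual types.

```python
import asyncio
from dataclasses import dataclass, field
from typing import Optional


# Hypothetical minimal stand-ins for the real Content/Part types.
@dataclass
class Part:
    function_response: Optional[dict] = None
    text: Optional[str] = None


@dataclass
class Content:
    role: str
    parts: list = field(default_factory=list)


class StubConnection:
    def __init__(self):
        self.sent = []

    async def send_content(self, content: Content):
        # Enforce the documented rule: if any part is a function
        # response, all parts must be function responses.
        has_fr = any(p.function_response is not None for p in content.parts)
        if has_fr and not all(p.function_response is not None for p in content.parts):
            raise ValueError("mixed function-response and non-function-response parts")
        self.sent.append(content)


async def main():
    conn = StubConnection()
    # All parts are function responses, as the contract requires.
    content = Content(role="user", parts=[
        Part(function_response={"name": "get_weather", "response": {"temp_c": 21}}),
    ])
    await conn.send_content(content)
    return len(conn.sent)


print(asyncio.run(main()))  # 1
```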
Sends the conversation history to the model.
Call this method right after setting up the model connection. The model will respond if the last content is from the user; otherwise it will wait for new user input before responding.
The conversation history to send to the model.
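The respond-or-wait decision described above depends only on who authored the last turn. A small sketch of that check, using assumed `Content`/`Part` stand-ins and the role names "user" and "model":

```python
from dataclasses import dataclass, field


# Hypothetical minimal stand-ins for the real Content/Part types.
@dataclass
class Part:
    text: str


@dataclass
class Content:
    role: str
    parts: list = field(default_factory=list)


def last_turn_triggers_response(history: list) -> bool:
    # Documented behavior: the model responds only when the last
    # content in the history came from the user; otherwise it waits.
    return bool(history) and history[-1].role == "user"


history = [
    Content(role="user", parts=[Part(text="Hi")]),
    Content(role="model", parts=[Part(text="Hello! How can I help?")]),
    Content(role="user", parts=[Part(text="What's the weather like?")]),
]
print(last_turn_triggers_response(history))       # True: last turn is the user's
print(last_turn_triggers_response(history[:2]))   # False: last turn is the model's
```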
Sends a chunk of audio or a frame of video to the model in realtime.
The model may not respond immediately upon receiving the blob; it performs voice activity detection and decides when to respond.
The blob to send to the model.
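Realtime audio is typically framed into small fixed-size chunks before being sent as blobs. The chunk size below (100 ms of 16 kHz, 16-bit mono PCM) is an assumption for illustration, not a requirement of the API:

```python
# Hypothetical framing of raw PCM audio into fixed-size chunks for
# realtime streaming; the size is an assumption, not an API requirement.
CHUNK_BYTES = 3200  # 100 ms of 16 kHz, 16-bit mono PCM


def chunk_audio(pcm: bytes, chunk_bytes: int = CHUNK_BYTES) -> list:
    # Split the buffer into consecutive chunks; the last one may be short.
    return [pcm[i:i + chunk_bytes] for i in range(0, len(pcm), chunk_bytes)]


audio = bytes(8000)  # a quarter second of silence at the rate above
chunks = chunk_audio(audio)
print(len(chunks))  # 3 chunks: 3200 + 3200 + 1600 bytes
```

Each chunk would then be wrapped in a blob and passed to the realtime send method as it is captured.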
The base class for a live model connection.
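Taken together, the method summaries above describe an interface that could be sketched as an abstract base class. The names and signatures below are inferred from the descriptions in this section, not taken from any specific library:

```python
from abc import ABC, abstractmethod
from typing import AsyncGenerator


class BaseLlmConnection(ABC):
    """The base class for a live model connection (sketch)."""

    @abstractmethod
    async def send_history(self, history: list) -> None:
        """Send the conversation history to the model."""

    @abstractmethod
    async def send_content(self, content) -> None:
        """Send user content or function responses to the model."""

    @abstractmethod
    async def send_realtime(self, blob) -> None:
        """Send an audio chunk or video frame in realtime."""

    @abstractmethod
    def receive(self) -> AsyncGenerator:
        """Yield LlmResponse objects from the server connection."""

    @abstractmethod
    async def close(self) -> None:
        """Close the LLM server connection."""
```

A concrete connection would implement each method against a specific live model protocol; the abstract base only fixes the contract described in this section.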