ADK for TypeScript: API Reference

    Interface LlmResponse

    LLM response interface that carries the first candidate response from the model when one is available; otherwise it carries an error code and message.

    interface LlmResponse {
        content?: Content;
        customMetadata?: { [key: string]: any };
        errorCode?: string;
        errorMessage?: string;
        finishReason?: FinishReason;
        groundingMetadata?: GroundingMetadata;
        inputTranscription?: Transcription;
        interrupted?: boolean;
        liveSessionResumptionUpdate?: LiveServerSessionResumptionUpdate;
        outputTranscription?: Transcription;
        partial?: boolean;
        turnComplete?: boolean;
        usageMetadata?: GenerateContentResponseUsageMetadata;
    }
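
    Example: a minimal sketch of reading the text out of an LlmResponse. It assumes Content follows the GenAI shape, where text parts expose an optional text field; the helper name is illustrative, not part of the API.

        // Concatenate the text of all text parts; non-text parts contribute nothing.
        function responseText(response: LlmResponse): string {
            return (
                response.content?.parts
                    ?.map((part) => part.text ?? '')
                    .join('') ?? ''
            );
        }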

    Properties

    content?: Content

    The content of the response.

    customMetadata?: { [key: string]: any }

    The custom metadata of the LlmResponse. An optional key-value map for labeling an LlmResponse. NOTE: the entire object must be JSON-serializable.
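
    For example, a post-processing step might label a response as in the sketch below; the field names inside customMetadata are illustrative, not part of the API.

        // Attach JSON-serializable labels to a response without mutating it.
        function labelResponse(response: LlmResponse): LlmResponse {
            return {
                ...response,
                customMetadata: {
                    requestId: 'req-1234',              // hypothetical correlation id
                    latencyMs: 182,                     // plain numbers serialize cleanly
                    tags: ['billing', 'high-priority'], // arrays of primitives are fine
                },
            };
        }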

    errorCode?: string

    Error code if the response is an error. Code varies by model.

    errorMessage?: string

    Error message if the response is an error.
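
    A minimal error-handling sketch built only on these two fields; the helper name is illustrative.

        // Throw if the model reported an error; otherwise pass the response through.
        function ensureSuccess(response: LlmResponse): LlmResponse {
            if (response.errorCode) {
                throw new Error(
                    `LLM error ${response.errorCode}: ${response.errorMessage ?? 'unknown error'}`,
                );
            }
            return response;
        }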

    finishReason?: FinishReason

    The finish reason of the response.

    groundingMetadata?: GroundingMetadata

    The grounding metadata of the response.

    inputTranscription?: Transcription

    Audio transcription of user input.

    interrupted?: boolean

    Flag indicating that the LLM was interrupted while generating the content, usually due to a user interruption during bidirectional (bidi) streaming.

    liveSessionResumptionUpdate?: LiveServerSessionResumptionUpdate

    The session resumption update of the LlmResponse.

    outputTranscription?: Transcription

    Audio transcription of model output.

    partial?: boolean

    Indicates whether the text content is part of an unfinished text stream. Only used in streaming mode and when the content is plain text.

    turnComplete?: boolean

    Indicates whether the response from the model is complete. Only used in streaming mode.
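
    The two streaming flags are typically consumed together, as in this sketch. It assumes an async iterable of LlmResponse chunks (the iteration source is hypothetical) and that partial chunks carry incremental text deltas; verify the framework's actual aggregation behavior before relying on this.

        // Accumulate partial text until the model signals the end of its turn.
        async function collectStreamedText(
            chunks: AsyncIterable<LlmResponse>,
        ): Promise<string> {
            let accumulated = '';
            for await (const chunk of chunks) {
                const text =
                    chunk.content?.parts?.map((p) => p.text ?? '').join('') ?? '';
                if (chunk.partial) {
                    accumulated += text; // unfinished slice of the text stream
                } else if (text) {
                    accumulated = text;  // a non-partial chunk carries the full text
                }
                if (chunk.turnComplete) {
                    break; // the model has finished its turn
                }
            }
            return accumulated;
        }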

    usageMetadata?: GenerateContentResponseUsageMetadata

    The usage metadata of the LlmResponse.
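
    A sketch of logging token counts, assuming the GenAI SDK's usage metadata field names (promptTokenCount, candidatesTokenCount, totalTokenCount); the helper name is illustrative.

        // Log token usage when the model reports it; all fields are optional.
        function logUsage(response: LlmResponse): void {
            const usage = response.usageMetadata;
            if (!usage) return;
            console.log(
                `prompt=${usage.promptTokenCount ?? 0} ` +
                    `output=${usage.candidatesTokenCount ?? 0} ` +
                    `total=${usage.totalTokenCount ?? 0}`,
            );
        }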