async fn make_llm_request(
    chat_request: LLMRequest,
    endpoint: &Url,
    api_key: &str,
) -> Result<LLMCompletionResponse>

Makes a non-streaming request to an LLM and returns the completion response.