Pack llmpl -- prolog/llm.pl

This module exposes the predicates llm/2 and llm/3, which post a user prompt to an HTTP-based large language model (LLM) API and unify the model's response text with the output argument.

Configuration is split between a one-time config predicate and an API key environment variable. Optional per-call settings let you override the model name and timeout.

  • config/2 – set the LLM endpoint and default model name.
  • LLM_API_KEY – secret used to build a bearer token.
  • model/1 – (optional) model identifier, defaults to the configured model.
  • timeout/1 – (optional) request timeout in seconds, defaults to 60.
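Putting these together, a session might be configured as below. This is a sketch only: the endpoint URL and model name are placeholders, and the argument order config(Endpoint, DefaultModel) is an assumption — check prolog/llm.pl for the actual signature.

```prolog
% Sketch: assumes config(Endpoint, DefaultModel) argument order.
:- use_module(library(llm)).

setup_llm :-
    % Both values are illustrative; substitute your own endpoint and model.
    config('https://api.openai.com/v1/chat/completions', 'gpt-4o-mini'),
    % The API key is taken from the environment, e.g.
    %   export LLM_API_KEY=sk-...
    (   getenv('LLM_API_KEY', _)
    ->  true
    ;   print_message(warning, format('LLM_API_KEY is not set', []))
    ).
```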

The library assumes an OpenAI-compatible request payload and response format. To target a different API, adjust llm_request_body/2 or llm_extract_text/2.
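As a sketch of what "OpenAI-compatible" means here, the request built by llm_request_body/2 and the response consumed by llm_extract_text/2 are assumed to have roughly the following JSON shapes; the exact terms are defined in prolog/llm.pl.

```prolog
% Assumed request body produced by llm_request_body/2 (illustrative):
%   {"model": "gpt-4o-mini",
%    "messages": [{"role": "user", "content": "<Input>"}]}
%
% Assumed response walked by llm_extract_text/2 (illustrative):
%   {"choices": [{"message": {"role": "assistant",
%                             "content": "<Output>"}}]}
```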

 llm(+Input, -Output) is det
Send Input as a prompt to the configured LLM endpoint and unify Output with the assistant's response text.
 llm(+Input, -Output, +Options) is det
Options may include model(Model) and timeout(Seconds).
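For example (the model name below is illustrative, not a default of this pack):

```prolog
?- llm("Summarise SWI-Prolog in one sentence.", Reply).

?- llm("Summarise SWI-Prolog in one sentence.", Reply,
       [model('gpt-4o'), timeout(30)]).
```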

Undocumented predicates

The following predicates are exported, but not or incorrectly documented.

 config(Arg1, Arg2)