Pack llmpl -- prolog/llm.pl
This module exposes the predicate llm/2, which posts a user prompt to an HTTP-based large language model (LLM) API and unifies the model's response with the second argument.
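A minimal call might look like this at the SWI-Prolog toplevel (a sketch: the response text shown is illustrative, not actual model output):

```prolog
?- use_module(library(llm)).
true.

?- llm("Explain unification in one sentence.", Response).
Response = "Unification makes two terms identical by binding variables.".
```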
Configuration is split between a one-time config predicate and an API key environment variable. Optional per-call settings let you override the model name and timeout.
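Concretely, setup might look like the following. Note that `llm_config/1` and `OPENAI_API_KEY` are hypothetical names used only for illustration; the pack's README gives the actual predicate and variable names.

```prolog
% Hypothetical configuration predicate and key name, for illustration only.
% Export the API key in your shell before starting Prolog:
%   export OPENAI_API_KEY=...
%
% Then run the one-time configuration, e.g. from an init file:
:- llm_config([ endpoint('https://api.openai.com/v1/chat/completions'),
                model('gpt-4o-mini')
              ]).
```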
The library assumes an OpenAI-compatible payload and response format. To target a different API, adjust llm_request_body/2 or llm_extract_text/2.
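For example, the two hooks might be redefined along these lines for an OpenAI-style chat endpoint. The exact dict shapes here are assumptions, so match them to your target API's schema:

```prolog
% Build the JSON payload for the prompt (OpenAI chat-completions shape assumed).
llm_request_body(Prompt, _{ model: "gpt-4o-mini",
                            messages: [ _{ role: "user", content: Prompt } ]
                          }).

% Extract the reply text from the parsed JSON response dict
% (SWI-Prolog dict functional notation).
llm_extract_text(Response, Text) :-
    [Choice|_] = Response.choices,
    Text = Choice.message.content.
```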
The per-call options are model(Model) and timeout(Seconds).

The following predicates are exported, but not or incorrectly documented.
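Assuming the options are passed as a list in an extra argument (an llm/3 variant -- this calling convention is an assumption; the predicate documented above is llm/2), a call overriding both settings might look like:

```prolog
% Sketch only: llm/3 with an option list is an assumed calling convention.
?- llm("Translate 'hello' to French.", Response,
       [ model('gpt-4o'), timeout(60) ]).
```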