
llmpl

Use LLMs inside Prolog!

llmpl is a minimal SWI-Prolog helper that exposes llm/2 (and llm/3 for per-call options). The predicate posts a prompt to an HTTP LLM endpoint and unifies the model's response text with the second argument.

The library currently supports any OpenAI-compatible chat/completions endpoint.
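
The core of such a call is a single HTTP POST. The sketch below is not the pack's actual source; apart from the chat/completions request shape, the predicate name and defaults are assumptions. It shows how a call like llm/2 can be made with SWI-Prolog's standard HTTP and JSON libraries against an OpenAI-compatible endpoint:

:- use_module(library(http/http_open)).
:- use_module(library(http/http_json)).   % lets http_open/3 POST json(Dict)
:- use_module(library(http/json)).        % json_read_dict/2

% llm_sketch(+Prompt, -Output): illustrative stand-in for llm/2.
llm_sketch(Prompt, Output) :-
    URL = "https://api.openai.com/v1/chat/completions",
    getenv('LLM_API_KEY', Key),
    format(string(Auth), "Bearer ~w", [Key]),
    Payload = _{model: "gpt-4o-mini",
                messages: [_{role: "user", content: Prompt}]},
    setup_call_cleanup(
        http_open(URL, In,
                  [ post(json(Payload)),
                    request_header('Authorization'=Auth)
                  ]),
        json_read_dict(In, Reply),
        close(In)),
    % Take the text of the first choice as the answer.
    get_dict(choices, Reply, [Choice|_]),
    get_dict(message, Choice, Message),
    get_dict(content, Message, Output).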

Installation

?- pack_install(llmpl).

Configuration

Some services require an API key for authentication. Set the LLM_API_KEY environment variable to your key, for example from your shell before starting SWI-Prolog:

echo LLM_API_KEY="sk-..." >> .env
set -a && source .env && set +a

Configure the endpoint and default model before calling llm/2 or llm/3:

?- config("https://api.openai.com/v1/chat/completions", "gpt-4o-mini").

You can override the configured model per call with llm/3 options.
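
For example, a single call can name a different model while other calls keep using the configured default (the model string below is purely illustrative):

?- llm("Say hello in French.", Output, [model("gpt-4o")]).
Output = "Bonjour !".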

Usage

# Fill in .env with your settings
set -a && souce .env && set +a
swipl
?- [prolog/llm].
?- llm("Say hello in French.", Output).
Output = "Bonjour !".

?- llm("Say hello in French.", Output, [model("gpt-4o-mini"), timeout(30)]).
Output = "Bonjour !".

?- llm(Prompt, "Dog").
Prompt = "What animal is man's best friend?",
...

Providers

This library expects an OpenAI-compatible chat/completions endpoint. Below are common providers and endpoints you can try.

OpenAI

Endpoint: https://api.openai.com/v1/chat/completions
Example: ?- config("https://api.openai.com/v1/chat/completions", "gpt-4o-mini").

Ollama (local)

Endpoint: http://localhost:11434/v1/chat/completions
Example: ?- config("http://localhost:11434/v1/chat/completions", "llama3.1").
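
After pointing config/2 at a local Ollama server as in the example above, the same llm/2 queries work unchanged; the only assumption is that the named model has already been pulled into Ollama:

?- config("http://localhost:11434/v1/chat/completions", "llama3.1").
true.

?- llm("Say hello in French.", Output).
Output = "Bonjour !".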

Reverse prompts

If you call llm/2 with an unbound first argument and a concrete response, the library first asks the LLM to suggest a prompt that would (ideally) produce that response, binds it to your variable, and then sends a second request that wraps the suggested prompt in a hard constraint ("answer only with ..."). This costs two API calls and is still best-effort; the model may ignore the constraint, in which case the predicate simply fails.
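
A minimal sketch of that flow, assuming a working llm/2 as above (the wording of the two meta-prompts is illustrative, not the library's exact text):

% reverse_llm(-Prompt, +Response): illustrative two-call reverse mode.
reverse_llm(Prompt, Response) :-
    var(Prompt),
    % Call 1: ask the model for a prompt that would yield the known response.
    format(string(Ask),
           "Suggest a short prompt whose ideal answer is exactly: ~w",
           [Response]),
    llm(Ask, Prompt),
    % Call 2: re-run the suggested prompt under a hard output constraint.
    % If the model ignores it, unification with Response fails and so does
    % the whole call.
    format(string(Constrained), "~w Answer only with: ~w",
           [Prompt, Response]),
    llm(Constrained, Response).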