LLM calls as pure functions

Tags: python, ai, llm
Author: Jaime Ruiz Serra
Published: August 15, 2025

It is useful to frame LLM calls as pure (i.e., stateless) functions.¹

Consider the following prompt:

What sounds does a chicken make?

We would like the LLM to respond. But we could convert this into a callable function with an input argument, so we can reuse it for other animals, e.g.

What sounds does a {{ animal }} make?
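Under the hood this is just string templating. A minimal sketch using jinja2 (which the `{{ }}` placeholder syntax suggests):

```python
from jinja2 import Template

template = Template("What sounds does a {{ animal }} make?")
print(template.render(animal="chicken"))
# What sounds does a chicken make?
```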

We would also like to get our responses in a reliable format, i.e., following a predefined schema. We can define this schema as a pydantic BaseModel subclass:

import pydantic

class AnimalSound(pydantic.BaseModel):
    animal: str
    sounds: list[str]
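The schema serves two roles: its JSON Schema export is what is typically sent to the LLM API to constrain the output, and the same class validates the raw response. A minimal sketch, assuming pydantic v2 (the class is repeated here for self-containment):

```python
import pydantic

class AnimalSound(pydantic.BaseModel):
    animal: str
    sounds: list[str]

# JSON Schema that would be passed to the LLM API to constrain its output
schema = AnimalSound.model_json_schema()

# Raw JSON from the model parses straight into a validated object
parsed = AnimalSound.model_validate_json('{"animal": "chicken", "sounds": ["cluck"]}')
```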

And we can define our callable LLM function as:

@my_llm_client_wrapper.llm_function(output_schema=AnimalSound)
def foo_bar(animal='cow'):
    '''
    What sounds does a {{ animal }} make?
    '''

Note that there is no actual code in the function body: only the signature, a docstring containing the prompt with (jinja2-compliant) variable placeholders, and a decorator declaring the desired output format.

This function can be called:

response_obj = foo_bar(animal='pig')

And the results obtained:

response_obj.animal
# 'pig'
response_obj.sounds
# ['oink', 'sniff']
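One payoff of the pure-function framing is that standard function tooling applies directly, since each call depends only on its arguments. A minimal sketch, with a hypothetical `fake_llm` standing in for a real call, memoising repeated prompts with `functools.cache`:

```python
import functools

@functools.cache
def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in for an actual LLM call
    return f"response to: {prompt}"

fake_llm("What sounds does a pig make?")  # computed
fake_llm("What sounds does a pig make?")  # served from the cache
print(fake_llm.cache_info().hits)
# 1
```

Whether caching is appropriate depends on whether you treat sampling variation as part of the function's contract, but the composability comes for free once the call is framed this way.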

This requires a wrapper class that implements the decorator,

my_llm_client_wrapper = MyLLMClientWrapper(
    api_key,
    llm_endpoint,
    # ...
)

which may be something like

import functools
import textwrap

from jinja2 import Template

class MyLLMClientWrapper:
    
    # ...
    
    def llm_function(
            self,
            model="gpt-oss-120b",
            output_schema=None,
            # ...
        ):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(**kwargs):
                prompt_template = func.__doc__
                if not prompt_template:
                    raise ValueError("The decorated function must have a docstring.")
                
                # Construct prompt from template and arguments
                dedented_template = textwrap.dedent(prompt_template).strip()
                template = Template(dedented_template)
                prompt = template.render(**kwargs)

                # LLM call
                llm_call_kwargs = dict(
                    messages=[{"role": "user", "content": prompt}],
                    function_name=func.__name__,
                    text_format=output_schema,
                    model=model,
                    #...
                )
                response = self.invoke(**llm_call_kwargs)
                
                response_content = (
                    response.output_parsed
                    if output_schema is not None
                    else response.output_text
                )
                return response_content
            return wrapper
        return decorator
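The mechanics can be exercised end to end without network access by stubbing the backend. In this sketch, `EchoResponse` and `EchoWrapper` are hypothetical names invented for illustration; `invoke` just echoes the rendered prompt instead of calling an LLM:

```python
import functools
import textwrap

from jinja2 import Template

class EchoResponse:
    def __init__(self, output_text):
        self.output_text = output_text

class EchoWrapper:
    def llm_function(self, output_schema=None, **_):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(**kwargs):
                # Same docstring-to-prompt pipeline as the real wrapper
                prompt = Template(textwrap.dedent(func.__doc__).strip()).render(**kwargs)
                return self.invoke(prompt).output_text
            return wrapper
        return decorator

    def invoke(self, prompt):
        # A real implementation would call the LLM API here; we just echo
        return EchoResponse(f"(echo) {prompt}")

echo = EchoWrapper()

@echo.llm_function()
def animal_sounds(animal="cow"):
    """
    What sounds does a {{ animal }} make?
    """

print(animal_sounds(animal="pig"))
# (echo) What sounds does a pig make?
```

The same pattern makes the decorator easy to unit-test: swap the stub for the real client in production.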

Footnotes

  1. This framing was inspired by the BAML project, but is much more lightweight and runs on vanilla Python.