
Generative Functions & Callable Return Type (OpenHosta 4.1+)

OpenHosta 4.1 introduces the ability to return executable Python code from an emulate call. This allows you to generate dynamic logic, algorithms, or transformation functions directly from natural language.


I. Overview

When you annotate a return type as typing.Callable, OpenHosta uses the GuardedCode type to handle the response. Instead of returning a static value, the LLM provides Python source code, which is then compiled and returned as a standard Python function object.

Key Features

  • Code Compilation: Automatically parses and compiles LLM-generated source code.
  • Closure Support: Can capture types and context to ensure generated code remains type-compliant.
  • Safety: Executes generated code via exec() in an isolated scope; this is not a true sandbox (see the security note below).

II. Basic Usage

You can use Callable in your function signature just like any other type.

```python
from typing import Callable
from OpenHosta import emulate

def get_math_operation(name: str) -> Callable[[int, int], int]:
    """
    Returns a function that performs the named math operation.
    Only implement simple operations like 'add', 'multiply', etc.
    """
    return emulate()

# usage
adder = get_math_operation("add")
result = adder(5, 7)
print(result)  # 12

multiplier = get_math_operation("multiply")
print(multiplier(3, 4))  # 12
```

III. How it works: GuardedCode

The GuardedCode implementation (see OpenHosta.guarded.subclassablecallables) follows the standard OpenHosta pipeline:

  1. Native (STRICT): If the LLM already returned a function or a callable object, it is validated immediately.
  2. Heuristic (PRECISE): If the LLM returns a string, GuardedCode:
     • strips Markdown formatting (e.g., ```python fences);
     • performs syntax validation via ast.parse;
     • compiles the code using exec() into an isolated local scope;
     • extracts the first function definition found in that scope and returns it.
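The heuristic path above can be sketched in a few lines. Note that compile_llm_code is a hypothetical helper written for illustration, not OpenHosta's actual API; the real GuardedCode implementation may differ in detail:

```python
import ast
from typing import Callable

def compile_llm_code(source: str) -> Callable:
    """Illustrative sketch: strip Markdown fences, validate the syntax,
    compile in an isolated scope, and return the first function defined."""
    # 1. Strip Markdown formatting such as ```python ... ``` fences.
    text = source.strip()
    if text.startswith("```"):
        lines = text.splitlines()[1:]          # drop opening fence
        if lines and lines[-1].strip() == "```":
            lines = lines[:-1]                  # drop closing fence
        text = "\n".join(lines)

    # 2. Syntax validation: raises SyntaxError on invalid code.
    tree = ast.parse(text)

    # 3. Compile and execute into an isolated scope.
    scope: dict = {}
    exec(compile(tree, "<llm>", "exec"), scope)

    # 4. Return the first function definition found in that scope.
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            return scope[node.name]
    raise ValueError("no function definition found in generated code")

llm_reply = """```python
def add(a: int, b: int) -> int:
    return a + b
```"""
adder = compile_llm_code(llm_reply)
print(adder(5, 7))  # 12
```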

IV. Security Considerations ⚠️

> [!WARNING]
> This feature uses exec() to run code generated by an AI model. This means the LLM can generate and execute arbitrary Python code.

Never use Callable return types with untrusted models, or in environments where malicious code execution could lead to critical data loss or security breaches, without proper sandboxing.
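To make the warning concrete: restricting __builtins__ when calling exec() narrows what generated code can reach, but it is a mitigation, not a sandbox. A minimal sketch (not OpenHosta's actual mechanism):

```python
# Sketch: limiting builtins when exec()-ing generated code.
# This narrows the attack surface but is NOT a real sandbox; untrusted
# code still needs OS-level isolation (containers, seccomp, etc.).
generated = "def double(x):\n    return x * 2"

scope = {"__builtins__": {}}  # no open(), __import__(), eval(), ...
exec(generated, scope)
print(scope["double"](21))  # 42

# With no __import__ available, even a plain import statement fails:
try:
    exec("import os", {"__builtins__": {}})
except ImportError as err:
    print("blocked:", err)
```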


V. Advanced: Subscripted Callables

You can provide more context to the LLM by using subscripted Callable syntax. TypeResolver will extract these types to help the LLM generate code that respects your data structures.

```python
from typing import Callable
from OpenHosta import emulate, guarded_dataclass

@guarded_dataclass
class User:
    name: str
    age: int

def create_user_filter(min_age: int) -> Callable[[User], bool]:
    """Returns a filter function that checks if a User's age is >= min_age."""
    return emulate()

# The LLM will know about the User dataclass and its fields (name, age)
# and generate the correct lambda or function.
```
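For illustration, the code generated for create_user_filter(18) might resemble the function below; a plain dataclass stands in for the guarded one here, and the exact output naturally depends on the model:

```python
from dataclasses import dataclass

# Plain dataclass standing in for @guarded_dataclass in this illustration.
@dataclass
class User:
    name: str
    age: int

# Roughly the shape of function the Callable[[User], bool] annotation
# steers the LLM toward when min_age == 18:
def user_filter(user: User) -> bool:
    return user.age >= 18

adults = [u for u in (User("Ada", 36), User("Kim", 12)) if user_filter(u)]
print([u.name for u in adults])  # ['Ada']
```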

VI. Implementation Status

This feature is available starting from OpenHosta v4.1.

| Type | Support | Notes |
| --- | --- | --- |
| `typing.Callable` | ✅ Full | Resolves to GuardedCode |
| `collections.abc.Callable` | ✅ Full | Supported on Python 3.9+ |
| Subscripted `Callable[[...], ...]` | ✅ Full | Injects type context into generation |