OpenAI Integration

The Glacis OpenAI integration wraps the official OpenAI client to automatically create cryptographic attestations for every chat completion. Your data is hashed locally and never leaves your environment — only hashes and metadata are sent to the Glacis transparency log.

```shell
pip install glacis[openai]
```

```python
from glacis.integrations.openai import attested_openai, get_last_receipt

client = attested_openai(
    glacis_api_key="glsk_live_...",
    openai_api_key="sk-...",
)

# Make a normal OpenAI call -- attestation happens automatically
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

# Get the attestation receipt
receipt = get_last_receipt()
print(f"Attested: {receipt.id}")
print(f"Status: {receipt.witness_status}")
```

For each chat completion, Glacis captures:

| Field | Treatment | Details |
| --- | --- | --- |
| Request messages | Hashed | SHA-256, never sent to Glacis |
| Response content | Hashed | SHA-256, never sent to Glacis |
| System prompt | Hashed | SHA-256 hash included in control plane record |
| Model name | Metadata | Sent as-is |
| Temperature | Metadata | Included in control plane record |
| Token counts | Metadata | Prompt, completion, and total tokens |
| Provider | Metadata | Always "openai" |
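The "Hashed" treatment means only a digest of your content is transmitted. A minimal sketch of how such a hash can be computed locally with the standard library (this illustrates the general technique; the exact canonicalization Glacis uses may differ):

```python
import hashlib
import json

def hash_payload(payload: dict) -> str:
    """Canonicalize a payload as sorted JSON and return its SHA-256 hex digest."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

messages = [{"role": "user", "content": "Hello!"}]
digest = hash_payload({"messages": messages})
print(digest)  # 64 hex characters; the raw messages never leave the process
```

Because the digest is deterministic, the same request always produces the same hash, which is what makes it verifiable later without revealing the content.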
```shell
export OPENAI_API_KEY=sk-...
```

```python
from glacis.integrations.openai import attested_openai

# OpenAI key read from OPENAI_API_KEY env var automatically.
# Glacis API key must be passed explicitly.
client = attested_openai(glacis_api_key="glsk_live_...")
```

Use get_last_receipt() to retrieve the attestation from the most recent API call. Receipts are stored in thread-local storage, so each thread maintains its own last receipt independently.

```python
from glacis.integrations.openai import get_last_receipt

receipt = get_last_receipt()
if receipt:
    print(f"ID: {receipt.id}")
    print(f"Evidence hash: {receipt.evidence_hash}")
    print(f"Status: {receipt.witness_status}")  # "WITNESSED" or "UNVERIFIED"
    print(f"Service: {receipt.service_id}")
```
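The per-thread isolation described above follows Python's standard thread-local pattern. A self-contained sketch of the mechanism (using `threading.local` directly, independent of the Glacis API):

```python
import threading

# Each thread sees its own attribute values on a threading.local instance;
# this is the pattern behind per-thread "last receipt" storage.
_store = threading.local()

def set_last(value):
    _store.last = value

def get_last():
    return getattr(_store, "last", None)

results = {}

def worker(name):
    set_last(f"receipt-{name}")
    results[name] = get_last()

threads = [threading.Thread(target=worker, args=(n,)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # each thread saw only its own value
print(get_last())  # None: the main thread never set one
```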

Offline mode creates locally-signed attestations without connecting to the Glacis server. This is useful for development, air-gapped environments, or when you want to defer attestation submission.

Offline mode requires a signing_seed — a 32-byte Ed25519 seed used for local signing:

```python
import os

from glacis.integrations.openai import attested_openai, get_last_receipt

client = attested_openai(
    offline=True,
    signing_seed=os.urandom(32),
    openai_api_key="sk-...",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

receipt = get_last_receipt()
print(f"Status: {receipt.witness_status}")  # "UNVERIFIED"
```

Controls let you scan inputs and outputs for PII, jailbreak attempts, banned words, and more. Configure them via a glacis.yaml file:

```python
from glacis.integrations.openai import attested_openai, GlacisBlockedError

client = attested_openai(
    config="glacis.yaml",
    openai_api_key="sk-...",
)

try:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
except GlacisBlockedError as e:
    print(f"Blocked by {e.control_type} (score={e.score})")
```

You can also pass custom controls programmatically via the input_controls and output_controls parameters. See the BaseControl interface in glacis.controls.base for details on implementing custom controls.
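As a rough illustration of what a custom control might look like, here is a standalone banned-words check. The real interface lives in `glacis.controls.base`; the class shape below (a `check()` method returning a result with `blocked`, `score`, and `control_type` fields) is an illustrative assumption, not the library's actual signature:

```python
# Hypothetical sketch of a banned-words control; method and field names
# are assumptions -- consult glacis.controls.base for the real interface.
from dataclasses import dataclass

@dataclass
class ControlResult:
    blocked: bool
    score: float
    control_type: str

class BannedWordsControl:
    def __init__(self, words):
        self.words = {w.lower() for w in words}

    def check(self, text: str) -> ControlResult:
        hits = [w for w in self.words if w in text.lower()]
        score = len(hits) / max(len(self.words), 1)
        return ControlResult(
            blocked=bool(hits),
            score=score,
            control_type="banned_words",
        )

control = BannedWordsControl(["secret", "password"])
print(control.check("What is the admin password?"))
```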

Evidence includes the full input, output, and control plane results that were attested. Evidence is stored locally and never sent to Glacis servers.

```python
from glacis.integrations.openai import get_last_receipt, get_evidence

receipt = get_last_receipt()
if receipt:
    evidence = get_evidence(receipt.id)
    if evidence:
        print(evidence["input"])   # Original request (model, messages)
        print(evidence["output"])  # Full response (choices, usage)
```

get_evidence() accepts optional storage_backend and storage_path parameters to override the default storage location:

```python
evidence = get_evidence(
    receipt.id,
    storage_backend="sqlite",
    storage_path="/path/to/evidence.db",
)
```
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| glacis_api_key | Optional[str] | None | Glacis API key. Required for online mode. Must be passed explicitly (no env var fallback). |
| openai_api_key | Optional[str] | None | OpenAI API key. Falls back to OPENAI_API_KEY env var. |
| glacis_base_url | str | "https://api.glacis.io" | Glacis API base URL. |
| service_id | str | "openai" | Service identifier for attestations. |
| debug | bool | False | Enable debug logging. |
| offline | Optional[bool] | None | Enable offline mode. If None, inferred from config or presence of glacis_api_key. |
| signing_seed | Optional[bytes] | None | 32-byte Ed25519 signing seed. Required when offline=True. |
| policy_key | Optional[bytes] | None | 32-byte HMAC key for sampling decisions. Falls back to signing_seed if not provided. |
| config | Optional[str] | None | Path to glacis.yaml config file for controls, sampling, and policy settings. |
| input_controls | Optional[list[BaseControl]] | None | Custom controls to run on input text before the LLM call. |
| output_controls | Optional[list[BaseControl]] | None | Custom controls to run on output text after the LLM call. |
| **openai_kwargs | Any | | Additional keyword arguments passed directly to the OpenAI() client constructor. |

Returns: A wrapped OpenAI client. The client.chat.completions.create() method is intercepted to perform attestation automatically.

Raises: GlacisBlockedError if a control blocks the request.

```python
#!/usr/bin/env python3
"""Complete example: OpenAI chat with Glacis attestation."""
import os

from glacis.integrations.openai import attested_openai, get_evidence, get_last_receipt


def main():
    # Create attested client (online mode; requires GLACIS_API_KEY)
    client = attested_openai(
        glacis_api_key=os.environ["GLACIS_API_KEY"],
    )

    # Have a conversation
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        temperature=0.7,
    )
    print("Response:", response.choices[0].message.content)
    print()

    # Get the attestation receipt
    receipt = get_last_receipt()
    if receipt:
        print("Attestation Details:")
        print(f"  ID: {receipt.id}")
        print(f"  Evidence hash: {receipt.evidence_hash}")
        print(f"  Status: {receipt.witness_status}")
        print(f"  Service: {receipt.service_id}")
        print()

        # Retrieve full evidence
        evidence = get_evidence(receipt.id)
        if evidence:
            print("Evidence stored locally:")
            print(f"  Input model: {evidence['input']['model']}")
            print(f"  Output tokens: {(evidence['output'].get('usage') or {}).get('total_tokens')}")


if __name__ == "__main__":
    main()
```