# OpenAI Integration
The Glacis OpenAI integration wraps the official OpenAI client to automatically create cryptographic attestations for every chat completion. Your data is hashed locally and never leaves your environment — only hashes and metadata are sent to the Glacis transparency log.
## Installation

```bash
pip install glacis[openai]
```

## Quick Start
```python
from glacis.integrations.openai import attested_openai, get_last_receipt

client = attested_openai(
    glacis_api_key="glsk_live_...",
    openai_api_key="sk-...",
)

# Make a normal OpenAI call -- attestation happens automatically
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)

# Get the attestation receipt
receipt = get_last_receipt()
print(f"Attested: {receipt.id}")
print(f"Status: {receipt.witness_status}")
```

## What Gets Attested
For each chat completion, Glacis captures:
| Field | Treatment | Details |
|---|---|---|
| Request messages | Hashed | SHA-256, never sent to Glacis |
| Response content | Hashed | SHA-256, never sent to Glacis |
| System prompt | Hashed | SHA-256 hash included in control plane record |
| Model name | Metadata | Sent as-is |
| Temperature | Metadata | Included in control plane record |
| Token counts | Metadata | prompt, completion, and total tokens |
| Provider | Metadata | Always "openai" |
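As an illustration of the "Hashed" rows above, here is a minimal sketch of local content hashing, assuming SHA-256 over a canonical JSON serialization (Glacis's actual canonicalization may differ):

```python
import hashlib
import json

def content_hash(payload) -> str:
    # Deterministic serialization, then SHA-256 -- only this digest
    # leaves your environment, never the payload itself
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

messages = [{"role": "user", "content": "Hello!"}]
print(content_hash(messages))  # 64-character hex digest
```

Because the serialization is deterministic, anyone holding the original messages can recompute the digest and compare it against the attested hash.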
## Environment Variables

The OpenAI key can be read from the environment:

```bash
export OPENAI_API_KEY=sk-...
```

```python
from glacis.integrations.openai import attested_openai

# OpenAI key read from OPENAI_API_KEY env var automatically
# Glacis API key must be passed explicitly
client = attested_openai(glacis_api_key="glsk_live_...")
```

In offline mode, no Glacis API key is needed:

```bash
export OPENAI_API_KEY=sk-...
```

```python
import os

from glacis.integrations.openai import attested_openai

# No Glacis API key needed for offline mode
client = attested_openai(
    offline=True,
    signing_seed=os.urandom(32),
)
```

Both keys can also be passed explicitly:

```python
from glacis.integrations.openai import attested_openai

client = attested_openai(
    glacis_api_key="glsk_live_...",
    openai_api_key="sk-...",
)
```

## Accessing Receipts
Use `get_last_receipt()` to retrieve the attestation from the most recent API call. Receipts are stored in thread-local storage, so each thread maintains its own last receipt independently.
```python
from glacis.integrations.openai import get_last_receipt

receipt = get_last_receipt()
if receipt:
    print(f"ID: {receipt.id}")
    print(f"Evidence hash: {receipt.evidence_hash}")
    print(f"Status: {receipt.witness_status}")  # "WITNESSED" or "UNVERIFIED"
    print(f"Service: {receipt.service_id}")
```

## Offline Mode
Offline mode creates locally signed attestations without connecting to the Glacis server. This is useful for development, air-gapped environments, or when you want to defer attestation submission.
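One practical note: a seed from `os.urandom(32)` changes on every run, so signatures from different runs cannot be tied to one stable identity. A minimal sketch for persisting the seed (the path and helper are illustrative, not part of the Glacis API):

```python
import os
import tempfile
from pathlib import Path

def load_or_create_seed(path: Path) -> bytes:
    # Reuse one Ed25519 seed across runs so offline receipts
    # remain verifiable against a stable public key
    if path.exists():
        return path.read_bytes()
    seed = os.urandom(32)
    path.write_bytes(seed)
    path.chmod(0o600)  # the seed is a private key -- restrict access
    return seed

seed_path = Path(tempfile.gettempdir()) / "glacis_seed.bin"
seed = load_or_create_seed(seed_path)
assert load_or_create_seed(seed_path) == seed  # stable across calls
```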
Offline mode requires a signing_seed — a 32-byte Ed25519 seed used for local signing:
```python
import os

from glacis.integrations.openai import attested_openai, get_last_receipt

client = attested_openai(
    offline=True,
    signing_seed=os.urandom(32),
    openai_api_key="sk-...",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

receipt = get_last_receipt()
print(f"Status: {receipt.witness_status}")  # "UNVERIFIED"
```

## Using Controls
Controls let you scan inputs and outputs for PII, jailbreak attempts, banned words, and more. Configure them via a `glacis.yaml` file:
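As a starting point, a hypothetical `glacis.yaml` is shown below; the field names are illustrative only, so check the Glacis configuration reference for the real schema:

```yaml
# Hypothetical config -- field names are illustrative, not the actual schema
controls:
  input:
    - type: pii
      action: block
    - type: banned_words
      words: ["secret-project"]
      action: block
  output:
    - type: jailbreak
      action: block
```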
```python
from glacis.integrations.openai import attested_openai, GlacisBlockedError

client = attested_openai(
    config="glacis.yaml",
    openai_api_key="sk-...",
)

try:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
except GlacisBlockedError as e:
    print(f"Blocked by {e.control_type} (score={e.score})")
```

You can also pass custom controls programmatically via the `input_controls` and `output_controls` parameters. See the `BaseControl` interface in `glacis.controls.base` for details on implementing custom controls.
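For a rough idea of what a custom control could look like, here is a hypothetical banned-words check. The class and method names are assumptions, so consult `glacis.controls.base` for the real `BaseControl` interface:

```python
# Hypothetical control -- the real BaseControl interface may differ
class BannedWordsControl:
    control_type = "banned_words"

    def __init__(self, words):
        self.words = [w.lower() for w in words]

    def check(self, text: str):
        # Return (blocked, score): score is the fraction of banned words hit
        hits = [w for w in self.words if w in text.lower()]
        return (len(hits) > 0, len(hits) / len(self.words))

control = BannedWordsControl(["secret", "password"])
blocked, score = control.check("what is the admin password?")
print(blocked, score)  # True 0.5
```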
## Retrieving Evidence

Evidence includes the full input, output, and control plane results that were attested. Evidence is stored locally and never sent to Glacis servers.
```python
from glacis.integrations.openai import get_last_receipt, get_evidence

receipt = get_last_receipt()
if receipt:
    evidence = get_evidence(receipt.id)
    if evidence:
        print(evidence["input"])   # Original request (model, messages)
        print(evidence["output"])  # Full response (choices, usage)
```

`get_evidence()` accepts optional `storage_backend` and `storage_path` parameters to override the default storage location:
```python
evidence = get_evidence(
    receipt.id,
    storage_backend="sqlite",
    storage_path="/path/to/evidence.db",
)
```

## `attested_openai()` Reference
| Parameter | Type | Default | Description |
|---|---|---|---|
| `glacis_api_key` | `Optional[str]` | `None` | Glacis API key. Required for online mode. Must be passed explicitly (no env var fallback). |
| `openai_api_key` | `Optional[str]` | `None` | OpenAI API key. Falls back to the `OPENAI_API_KEY` env var. |
| `glacis_base_url` | `str` | `"https://api.glacis.io"` | Glacis API base URL. |
| `service_id` | `str` | `"openai"` | Service identifier for attestations. |
| `debug` | `bool` | `False` | Enable debug logging. |
| `offline` | `Optional[bool]` | `None` | Enable offline mode. If `None`, inferred from config or the presence of `glacis_api_key`. |
| `signing_seed` | `Optional[bytes]` | `None` | 32-byte Ed25519 signing seed. Required when `offline=True`. |
| `policy_key` | `Optional[bytes]` | `None` | 32-byte HMAC key for sampling decisions. Falls back to `signing_seed` if not provided. |
| `config` | `Optional[str]` | `None` | Path to a `glacis.yaml` config file for controls, sampling, and policy settings. |
| `input_controls` | `Optional[list[BaseControl]]` | `None` | Custom controls to run on input text before the LLM call. |
| `output_controls` | `Optional[list[BaseControl]]` | `None` | Custom controls to run on output text after the LLM call. |
| `**openai_kwargs` | `Any` | — | Additional keyword arguments passed directly to the `OpenAI()` client constructor. |
Returns: A wrapped OpenAI client. The `client.chat.completions.create()` method is intercepted to perform attestation automatically.
Raises: `GlacisBlockedError` if a control blocks the request.
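The `policy_key` parameter enables reproducible sampling decisions. The general technique (not necessarily Glacis's exact algorithm) maps an HMAC of the record identifier into [0, 1), so the same key, id, and rate always yield the same, auditable decision:

```python
import hashlib
import hmac

def should_sample(policy_key: bytes, record_id: str, rate: float) -> bool:
    # HMAC the record id and map the digest to [0, 1); deterministic,
    # so an auditor holding the key can re-derive every decision
    digest = hmac.new(policy_key, record_id.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < rate

key = b"\x01" * 32
print(should_sample(key, "receipt-123", 1.0))  # True: rate 1.0 samples everything
```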
## Full Example

```python
#!/usr/bin/env python3
"""Complete example: OpenAI chat with Glacis attestation."""
import os

from glacis.integrations.openai import attested_openai, get_last_receipt, get_evidence


def main():
    # Create attested client (online mode -- requires GLACIS_API_KEY)
    client = attested_openai(
        glacis_api_key=os.environ["GLACIS_API_KEY"],
    )

    # Have a conversation
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ]

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        temperature=0.7,
    )

    print("Response:", response.choices[0].message.content)
    print()

    # Get attestation receipt
    receipt = get_last_receipt()
    if receipt:
        print("Attestation Details:")
        print(f"  ID: {receipt.id}")
        print(f"  Evidence hash: {receipt.evidence_hash}")
        print(f"  Status: {receipt.witness_status}")
        print(f"  Service: {receipt.service_id}")
        print()

        # Retrieve full evidence
        evidence = get_evidence(receipt.id)
        if evidence:
            print("Evidence stored locally:")
            print(f"  Input model: {evidence['input']['model']}")
            print(f"  Output tokens: {(evidence['output'].get('usage') or {}).get('total_tokens')}")


if __name__ == "__main__":
    main()
```