Lilypad: Open-Source AI Engineering Platform
Enable seamless collaboration between developers, business users, and domain experts while maintaining quality and reproducibility in your AI applications.
import os

import lilypad
from openai import OpenAI

os.environ["LILYPAD_PROJECT_ID"] = "..."
os.environ["LILYPAD_API_KEY"] = "..."
os.environ["OPENAI_API_KEY"] = "..."

lilypad.configure()  # Automatically trace LLM API calls
client = OpenAI()


@lilypad.function()  # Automatically version & trace LLM functions
def answer_question(question: str) -> str | None:
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": f"Answer this question: {question}"},
        ],
    )
    return completion.choices[0].message.content


answer = answer_question("What is the capital of France?")
print(answer)
# > The capital of France is Paris.
All the tools you need for the entire LLM development lifecycle
Complete Visibility
Monitor every aspect of your LLM applications with a single lilypad.configure() call and no other code changes. Track results, costs, and performance across all major LLM providers in a unified dashboard.
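For instance, once lilypad.configure() from the example above is active, even a provider call with no decorator should show up as a trace. The sketch below assumes the same OpenAI client and environment variables as the opening example.

import lilypad
from openai import OpenAI

# Assumes LILYPAD_PROJECT_ID, LILYPAD_API_KEY, and OPENAI_API_KEY are already
# set, as in the opening example.
lilypad.configure()  # automatically traces LLM API calls
client = OpenAI()

# No decorator here: the call below is still captured as a trace, so the
# dashboard can report its result, latency, token usage, and cost.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize Lilypad in one sentence."}],
)
print(completion.choices[0].message.content)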
Version Control for the LLM Era
Traditional version control breaks with non-deterministic LLM outputs. Lilypad captures and versions all your prompts and LLM code into reproducible units, ensuring consistent results across environments.
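As a minimal sketch of what that looks like in practice, assuming the @lilypad.function() decorator from the opening example snapshots the decorated function's code and prompt, editing the prompt is all it takes to produce a new tracked version.

import lilypad
from openai import OpenAI

lilypad.configure()
client = OpenAI()


@lilypad.function()  # versions and traces this function whenever it changes
def answer_question(question: str) -> str | None:
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            # Editing this prompt (say, adding "Answer in one sentence.") and
            # rerunning would be captured as a new, reproducible version.
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": f"Answer this question: {question}"},
        ],
    )
    return completion.choices[0].message.content

Because each version is a reproducible unit, a result observed in one environment can be rerun against the exact prompt and code that produced it.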
Outcome-Based Quality Metrics
Define and measure metrics that align LLM outputs with your business objectives. Seamlessly transition from human feedback to automated evaluations you can trust.
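One common way to bridge human feedback and automated checks is an LLM-as-judge pass/fail metric layered on top of the traced function. The sketch below is a generic pattern rather than a Lilypad API; the judge prompt and the yes/no criterion are assumptions chosen for illustration.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set, as in the opening example


def passes_accuracy_check(question: str, answer: str) -> bool:
    # Generic LLM-as-judge sketch: returns True when the judge replies "yes".
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": (
                    f"Question: {question}\n"
                    f"Answer: {answer}\n"
                    "Is the answer factually correct? Reply 'yes' or 'no'."
                ),
            },
        ],
    )
    reply = completion.choices[0].message.content or ""
    return reply.strip().lower().startswith("yes")


print(passes_accuracy_check("What is the capital of France?", "Paris"))
# > True (expected, since the judge should confirm the answer)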