Quickstart
Integrate your LLM application with Lilypad to get automatic versioning and tracing.
Setup
Install uv and create a new project:
curl -LsSf https://astral.sh/uv/install.sh | sh
uv init quickstart-project
Install Lilypad, specifying the provider(s) you intend to use, and set your API key:
uv add "python-lilypad[openai]"
export OPENAI_API_KEY=XXXXX
Create a new project
- Navigate to the Lilypad App.
- Create an account by signing in with your GitHub account.
- Navigate to your Organization Settings to create a project and an API key.
- Copy the project ID and API key and place them in your environment:
export LILYPAD_PROJECT_ID=XXXXX
export LILYPAD_API_KEY=XXXXX
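If you'd rather keep these values in a .env file than export them in your shell, you can load them into the environment at startup. This is a minimal sketch using python-dotenv (a separate package, installed with uv add python-dotenv; it is not part of Lilypad):
import lilypad
from dotenv import load_dotenv  # third-party helper, not part of Lilypad

load_dotenv()        # reads LILYPAD_PROJECT_ID, LILYPAD_API_KEY, etc. from a local .env file
lilypad.configure()  # Lilypad then picks up its settings from the environment, as in the quickstart below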
You are now ready to start tracing your generations.
Run a generation
Copy-paste this generation into a new Python file (main.py):
import lilypad
from openai import OpenAI

client = OpenAI()


@lilypad.generation()
def answer_question(question: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return str(completion.choices[0].message.content)


if __name__ == "__main__":
    lilypad.configure()
    answer = answer_question("What is the meaning of life?")
    print(answer)
Run the Python file:
uv run main.py
Follow the link output by the command to view the trace in the Lilypad app.
The @lilypad.generation decorator will automatically version the answer_question function and collect a trace against that version. This ensures that you can always trace back to the exact code that generated a particular output.
This means that you can code as you always have, and Lilypad will take care of the rest.
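For example, you might split your logic across several decorated functions and compose them like ordinary Python. The sketch below assumes the same setup as main.py; the function names summarize and answer_briefly are illustrative, not part of Lilypad:
import lilypad
from openai import OpenAI

client = OpenAI()


@lilypad.generation()
def summarize(text: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize this in one sentence: {text}"}],
    )
    return str(completion.choices[0].message.content)


@lilypad.generation()
def answer_briefly(question: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    # Plain function composition; each decorated function is versioned and traced.
    return summarize(str(completion.choices[0].message.content))


if __name__ == "__main__":
    lilypad.configure()
    print(answer_briefly("What is the meaning of life?"))
Because versioning happens per decorated function, you can refactor freely and still trace any output back to the exact code that produced it.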