
Lilypad

ℹ️ Lilypad is currently in a closed beta.

You can install and use the library locally today, but we are working with a few select teams to ensure that the library is ready for broader, production use.

This means that things like the user interface and database schemas are still subject to change.

If you’re interested in joining the beta, join our community and send a message to William Bakst about your team and why you think you would be a good fit.

Lilypad is an open-source prompt engineering framework built on these principles:

  • Prompt engineering is an optimization process, which requires…
  • Versioning and tracing of every call to the LLM for…
  • Building a data flywheel, so you can…
  • Properly evaluate and iterate on your prompts and…
  • Ship confidently.

The most common team structure we’ve seen is a mix of engineers and (likely non-technical) domain experts. The principles outlined above therefore rely on effective collaboration between technical and non-technical team members.

We’ve purpose-built Lilypad around the necessity of such collaboration, helping to automate the annoying parts so you can focus on what really matters: making your LLM calls measurably good.

30-Second Quickstart

Integrate your LLM application with Lilypad to get automatic versioning and tracing.

Setup

Install uv and create a new project:

curl -LsSf https://astral.sh/uv/install.sh | sh 
uv init my-first-lily

Install Lilypad, specifying the provider(s) you intend to use, and set your API key:

uv add "python-lilypad[openai]"
export OPENAI_API_KEY=XXXXX

Create a new project

  1. Navigate to the Lilypad App.

  2. Create an account by signing in with GitHub.

  3. Navigate to Your Organization Settings to create a project and API key.

  4. Copy the project ID and API key and place them in your environment:

LILYPAD_PROJECT_ID=XXXXX
LILYPAD_API_KEY=XXXXX
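
For example, in a POSIX shell you could export them the same way as the OpenAI API key earlier (or place them in whatever configuration your environment loads):

export LILYPAD_PROJECT_ID=XXXXX
export LILYPAD_API_KEY=XXXXX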

You are now ready to start tracing your generations.

Run a generation

Copy-paste this generation into a new Python file (main.py):

import lilypad
from openai import OpenAI

client = OpenAI()

@lilypad.generation()  # versions answer_question and traces every call to it
def answer_question(question: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return str(completion.choices[0].message.content)

if __name__ == "__main__":
    lilypad.configure()  # picks up LILYPAD_PROJECT_ID and LILYPAD_API_KEY from the environment
    answer = answer_question("What is the meaning of life?")
    print(answer)

Run the Python file

uv run main.py

Follow the link output by the command to view the trace in the Lilypad app.

The @lilypad.generation decorator will automatically version the answer_question function and collect a trace against that version. This ensures that you can always trace back to the exact code that generated a particular output.

This means that you can code as you always have, and Lilypad will take care of the rest.
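
For example, continuing from main.py above: if you later edit the decorated function, say by tweaking the prompt as in the hypothetical sketch below, that change should be captured as a new version on the next run (assuming versioning tracks the function's code, as the description above implies), while earlier traces remain tied to the version that produced them.

@lilypad.generation()
def answer_question(question: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        # Hypothetical prompt tweak: under the versioning described above,
        # the edited function body would be recorded as a new version.
        messages=[{"role": "user", "content": f"Answer briefly: {question}"}],
    )
    return str(completion.choices[0].message.content)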