Lilypad

Note

Lilypad is currently in a closed beta.

You can install and use the library locally today, but we are working with a few select teams to ensure that the library is ready for broader, production use.

This means that details like the user interface and database schemas are still subject to change.

If you're interested in joining the beta, join our community and send William Bakst a message about your team and why you'd be a good fit.

Lilypad is an open-source prompt engineering framework built on these principles:

  • Prompt engineering is an optimization process, which requires…
  • Versioning and tracing of every call to the LLM for…
  • Building a data flywheel, so you can…
  • Properly evaluate and iterate on your prompts and…
  • Ship confidently.

The most common team structure we've seen is a mix of engineers and (often non-technical) domain experts. The principles outlined above thus rely on effective collaboration between technical and non-technical team members.

We’ve purpose-built Lilypad around the necessity of such collaboration, helping to automate the annoying parts so you can focus on what really matters: making your LLM calls measurably good.

Automatic Versioning

Use @lilypad.generation to decorate your LLM functions. This will automatically version your generations based on the function closure.
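For example, a minimal sketch (the function name and prompt text here are illustrative, not part of the library):

import lilypad
from openai import OpenAI

client = OpenAI()


@lilypad.generation()
def recommend_book(genre: str) -> str:
    # The function closure (prompt text, model name, helpers) determines the version.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Recommend a {genre} book."}],
    )
    return str(completion.choices[0].message.content)

Because versioning is based on the function closure, changing anything the function depends on, such as the prompt text or the model name, should register as a new version without any manual bookkeeping.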

Learn more about versioning your LLM functions.

Trace

Trace your generations and prompts to gain insight into your LLM calls.

Add lilypad.configure() to your application to enable tracing.

import lilypad

lilypad.configure()  # enables tracing for decorated functions

Any function decorated with @lilypad.generation or @lilypad.prompt will be traced.
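For instance, a minimal sketch of a traced call (the summarize function below is illustrative):

import lilypad
from openai import OpenAI

client = OpenAI()


@lilypad.generation()
def summarize(text: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize in one sentence: {text}"}],
    )
    return str(completion.choices[0].message.content)


lilypad.configure()  # enable tracing before calling decorated functions
print(summarize("Lilypad versions and traces every call to the LLM."))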

[Screenshot: view traces in real time]

Prompt Playground

Iterate on and test your prompts in the prompt playground. Lilypad Web lets you quickly run a prompt and see the results.

[Screenshot: the prompt playground]

Update prompt and model configurations to find your best prompt.

View your results.

[Screenshot: markdown-supported output from your prompt]

Managed Prompts

Collaborate with non-technical team members using managed prompts. Pull a managed prompt into your generations with the @lilypad.prompt decorator.

Learn more about managed prompts.

import lilypad
from openai import OpenAI
 
client = OpenAI()
 
 
# The prompt template is managed in Lilypad, so the function body is intentionally empty.
@lilypad.prompt()
def answer_question_prompt(question: str): ...
 
# Versioned automatically from the function closure and traced on every call.
@lilypad.generation()
def answer_question(question: str) -> str:
    prompt = answer_question_prompt(question)
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=prompt.messages("openai"),  # render the managed prompt as OpenAI-format messages
    )
    return str(completion.choices[0].message.content)
 
 
if __name__ == "__main__":
    lilypad.configure()
    answer = answer_question("What is the meaning of life?")
    print(answer)

Because the prompt template is managed in Lilypad rather than hard-coded, you can swap between different prompts safely without redeploying your code.