TacoSkill LAB

The full-lifecycle AI skills platform.

promptfoo-evaluation

by majiayu000

137 Favorites · 59 Upvotes · 0 Downvotes

Configures and runs LLM evaluation using the Promptfoo framework. Use when setting up prompt testing, creating evaluation configs (promptfooconfig.yaml), writing Python custom assertions, implementing llm-rubric for LLM-as-judge grading, or managing few-shot examples in prompts. Triggers on keywords like "promptfoo", "eval", "LLM evaluation", "prompt testing", or "model comparison".
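
For illustration, here is a minimal sketch of the kind of Python custom assertion the description refers to. It assumes promptfoo's documented get_assert(output, context) hook, where the return value may be a boolean, a float score, or a grading-result dict; the file name custom_assert.py and the expected_keywords test variable are hypothetical, so check the promptfoo docs for the exact interface of the version you run.

    # custom_assert.py - hypothetical promptfoo Python assertion.
    # Referenced from promptfooconfig.yaml roughly as:
    #   assert:
    #     - type: python
    #       value: file://custom_assert.py
    from typing import Any, Dict

    def get_assert(output: str, context: Dict[str, Any]) -> Dict[str, Any]:
        """Pass if the model output mentions every expected keyword."""
        # Test variables come from the test case in promptfooconfig.yaml;
        # 'expected_keywords' is an illustrative variable name.
        expected = context.get("vars", {}).get("expected_keywords", [])
        missing = [kw for kw in expected if kw.lower() not in output.lower()]
        score = 1.0 if not expected else 1.0 - len(missing) / len(expected)
        return {
            "pass": not missing,
            "score": score,
            "reason": "all keywords present" if not missing
                      else "missing: " + ", ".join(missing),
        }

The llm-rubric path mentioned above needs no Python at all: it is an assertion type configured directly in promptfooconfig.yaml with a natural-language rubric, and promptfoo sends the model output to a judge model for scoring.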

Tag: evaluation
Rating: 1.3
Installs: 0
Category: AI & LLM

Quick Review

No summary available.

LLM Signals

Description coverage: -
Task knowledge: -
Structure: -
Novelty: -

GitHub Signals

49 · 7 · 1 · 1
Last commit: 0 days ago

Publisher

majiayu000 (Skill Author)

Related Skills

  • prompt-engineer by Jeffallan (7.0)
  • mcp-developer by Jeffallan (6.4)
  • rag-architect by Jeffallan (7.0)
  • fine-tuning-expert by Jeffallan (6.4)