TacoSkill LAB

The full-lifecycle AI skills platform.



code-change-verification

by majiayu000

  • Rating: 5.5
  • Favorites: 181
  • Upvotes: 71
  • Downvotes: 0

Run the mandatory verification stack when changes affect runtime code, tests, or build/test behavior in the OpenAI Agents Python repository.

Tag: verification

  • Installs: 0
  • Category: Testing & Quality

Quick Review

A well-structured skill that clearly documents when and how to run code verification checks in the OpenAI Agents Python repository. The description adequately conveys the skill's purpose, and the documentation provides both quick-start scripts and manual workflow steps with explicit ordering requirements. The skill references platform-specific scripts (run.sh, run.ps1) that implement the verification stack, and its structure is clean with good separation of concerns. Novelty is moderate, however: the skill essentially wraps a standard test/lint/format sequence that a CLI agent could execute on its own with moderate token usage. The enforced ordering and fail-fast semantics still add value by preventing common mistakes.
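
The enforced ordering and fail-fast behavior the review highlights can be pictured as a small wrapper script. The sketch below is illustrative only, not the skill's actual run.sh: the four commands are assumed placeholders for a typical Python format/lint/type-check/test sequence, and the repository's real targets may differ.

  #!/usr/bin/env bash
  # Illustrative fail-fast verification stack (NOT the skill's actual run.sh).
  # The four commands are assumed placeholders for a typical Python repo;
  # substitute the repository's real format/lint/type-check/test targets.
  set -euo pipefail   # exit immediately when any step fails

  steps=(
    "ruff format --check ."   # 1. formatting gate runs first
    "ruff check ."            # 2. lint
    "mypy ."                  # 3. static type check
    "pytest"                  # 4. tests run last, only if everything above passed
  )

  for step in "${steps[@]}"; do
    echo ">>> ${step}"
    ${step}   # under set -e, a non-zero exit here aborts the whole run
  done
  echo "All verification steps passed."

Running the steps in a fixed order with an early abort prevents the common mistake of running the full test suite against code that would have failed the cheaper formatting or lint checks anyway.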

LLM Signals

  • Description coverage: 8
  • Task knowledge: 8
  • Structure: 8
  • Novelty: 5

GitHub Signals

49 · 7 · 1 · 1 · Last commit: 0 days ago

Publisher

majiayu000 (Skill Author)

Related Skills

  • code-reviewer · Jeffallan · 6.4
  • debugging-wizard · Jeffallan · 6.4
  • test-master · Jeffallan · 6.4
  • playwright-expert · Jeffallan · 6.4