Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LLM applications. Use when: building RAG, vector search, embeddings, semantic search, document retrieval.
Rating: 6.9 · Installs: 0 · Category: AI & LLM
Well-structured RAG skill with clear architectural guidance covering semantic chunking, hierarchical retrieval, and hybrid search patterns. The description adequately conveys when to invoke the skill. Task knowledge is solid with concrete patterns and anti-patterns, though implementation details are conceptual rather than executable. Structure is excellent with organized sections and a comprehensive Sharp Edges table. Novelty is moderate: while RAG expertise is valuable, most content provides conceptual guidance that an LLM could generate given sufficient context. The skill would benefit from more implementation-specific code, tooling recommendations (specific vector DBs, embedding models), or complex orchestration logic that would genuinely reduce token costs for repeated RAG tasks.
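The review notes the skill covers hybrid search patterns only conceptually. As an illustration of the kind of executable detail it suggests adding, here is a minimal sketch of reciprocal rank fusion (RRF), a common technique for merging keyword and vector result lists without having to normalize their incompatible score scales. The function name, `k` default, and document IDs below are illustrative assumptions, not taken from the skill itself.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked lists of doc IDs (best-first) into one hybrid ranking.

    Each document scores 1 / (k + rank) per list it appears in;
    k (60 is a widely used default) dampens top-rank outliers.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: fuse a keyword (BM25-style) ranking with a vector ranking.
keyword_hits = ["doc_a", "doc_c", "doc_b"]
vector_hits = ["doc_b", "doc_a", "doc_d"]
fused = reciprocal_rank_fusion([keyword_hits, vector_hits])
print(fused)  # → ['doc_a', 'doc_b', 'doc_c', 'doc_d']
```

Documents appearing high in both lists (`doc_a`, `doc_b`) outrank those found by only one retriever, which is the behavior hybrid search is meant to provide.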
