DeepEval vs Galileo
An LLM evaluation framework with 14+ metrics versus real-time LLM evaluation with sub-200ms guardrail models.
Choose DeepEval when…
- You want a pytest-style framework for LLM testing (see the sketch after this list)
- Unit-test-like evals for LLM outputs fit your workflow
- You need RAG-specific metrics like faithfulness and relevancy
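
A minimal sketch of that workflow, based on DeepEval's documented pytest-style API. The prompt, output, metric choice, and threshold are illustrative; DeepEval's built-in metrics use an LLM judge by default, so an OpenAI key or other configured model is assumed:

```python
from deepeval import assert_test
from deepeval.test_case import LLMTestCase
from deepeval.metrics import AnswerRelevancyMetric

def test_answer_relevancy():
    # Each test case pairs a prompt with the model output under test.
    test_case = LLMTestCase(
        input="What does DeepEval do?",
        actual_output="DeepEval is an open-source framework for evaluating LLM outputs.",
    )
    # Fails the test run if the relevancy score falls below the threshold.
    metric = AnswerRelevancyMetric(threshold=0.7)
    assert_test(test_case, [metric])
```

Run it like any other test suite, e.g. with `deepeval test run test_relevancy.py` or plain `pytest`.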
Choose Galileo when…
- You need real-time LLM guardrails in your production pipeline (a generic inline-check pattern is sketched after this list)
- You want eval models fast enough (<200ms) to run inline with inference
- You need hallucination and RAG quality scoring at production latency
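
The sketch below is not Galileo's SDK; it is a generic illustration of the inline-guardrail pattern these bullets describe, with a hypothetical `score_hallucination` stub standing in for a sub-200ms eval-model call:

```python
import time

def score_hallucination(prompt: str, response: str) -> float:
    """Hypothetical stand-in for a low-latency eval-model call.

    A real pipeline would call a hosted guardrail model here; this stub
    returns a fixed score so the sketch runs end to end.
    """
    return 0.1  # assumed scale: 0.0 = grounded, 1.0 = likely hallucination

def guarded_generate(prompt: str, generate, threshold: float = 0.5) -> str:
    """Generate a response, then run an inline guardrail check before returning it."""
    response = generate(prompt)
    start = time.perf_counter()
    risk = score_hallucination(prompt, response)
    elapsed_ms = (time.perf_counter() - start) * 1_000
    print(f"guardrail check: risk={risk:.2f} in {elapsed_ms:.1f} ms")
    if risk >= threshold:
        # Block the response rather than ship a likely hallucination.
        return "I can't answer that confidently."
    return response

print(guarded_generate("What is Galileo?", lambda p: "Galileo evaluates LLM outputs."))
```

The point of the pattern is the latency budget: the check sits on the critical path between inference and the user, so it only works if the eval model answers in tens of milliseconds rather than seconds.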
Side-by-side comparison
| Field | DeepEval | Galileo |
| --- | --- | --- |
| Category | Prompt & Eval | Prompt & Eval |
| Type | Open Source | Commercial |
| Free Tier | ✓ Yes | ✓ Yes |
| Pricing Plans | — | Free: $0; Pro: usage-based |
| GitHub Stars | ⭐ 5,500 | — |
| Health | ● 80 (Active) | — |
DeepEval
Open-source evaluation framework with 14+ metrics including faithfulness, relevancy, and hallucination detection. Integrates with CI/CD.
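
For the RAG-specific metrics, a faithfulness check scores the generated answer against the retrieved context. A sketch using DeepEval's documented `evaluate` entry point follows; the test contents and threshold are illustrative, and exact signatures may vary across versions:

```python
from deepeval import evaluate
from deepeval.test_case import LLMTestCase
from deepeval.metrics import FaithfulnessMetric

# Faithfulness compares the generated answer against the retrieved chunks,
# flagging claims the retrieval context does not support.
test_case = LLMTestCase(
    input="When was the telescope patented?",
    actual_output="Hans Lippershey applied for a telescope patent in 1608.",
    retrieval_context=[
        "In 1608, Hans Lippershey applied for a patent on an early telescope."
    ],
)

evaluate(test_cases=[test_case], metrics=[FaithfulnessMetric(threshold=0.8)])
```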
Galileo
LLM evaluation platform with evaluation models that run in under 200ms — fast enough to use as production guardrails, not just offline eval. Covers hallucination detection, RAG quality, and safety scoring. Distinct from Galileo AI (the UI design tool).
Shared Connections: both tools share 2 integrations.
Only DeepEval (5)
Langfuse, RAGAS, TruLens, Inspect, Galileo
Only Galileo (3)
DeepEval, Humanloop, LangChain
Explore the full AI landscape
See how DeepEval and Galileo fit into the bigger picture — 207 tools, 452 relationships, all mapped.