AIchitect
207 tools · 25 stacks

AI tools are all over the place. This is the full landscape: 207 tools across 17 categories, mapped and connected.

Stack Layers

  • What are you building and how is it defined?
  • How do you write and ship code?
  • How does your AI think and act?
  • Which models and infrastructure power it?
  • How do you build, observe, and extend it?
These tools compete with each other:

DeepEval vs Inspect

Choose DeepEval when…

  • You want a pytest-style framework for LLM testing (see the sketch after this list)
  • Unit-test-like evals for LLM outputs fit your workflow
  • You need RAG-specific metrics like faithfulness and relevancy

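What the pytest-style bullet means in practice: a minimal sketch based on DeepEval's documented API (assert_test, LLMTestCase, and the metric classes come from its docs; the strings and thresholds are placeholders, and the metrics call an LLM judge under the hood, so a model API key is required to run this):

    # test_llm.py: run with `deepeval test run test_llm.py` or plain pytest
    from deepeval import assert_test
    from deepeval.metrics import AnswerRelevancyMetric, FaithfulnessMetric
    from deepeval.test_case import LLMTestCase

    def test_refund_answer():
        # Pair the user input with the model's actual output and the
        # retrieved context that the RAG metrics grade against.
        test_case = LLMTestCase(
            input="What if these shoes don't fit?",
            actual_output="We offer a 30-day full refund at no extra cost.",
            retrieval_context=[
                "All customers are eligible for a 30-day full refund."
            ],
        )
        # Fails like a normal assert if either score drops below 0.7.
        assert_test(test_case, [
            AnswerRelevancyMetric(threshold=0.7),
            FaithfulnessMetric(threshold=0.7),
        ])

Because these are ordinary test functions, wiring them into CI/CD is the same as wiring in any pytest suite.
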
Choose Inspect when…

  • You're running capability and safety evaluations on LLMs
  • You're building custom benchmarks for model comparison (see the sketch after this list)
  • You need a government-backed evaluation methodology
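And the custom-benchmark bullet, as a minimal sketch of an Inspect task: it composes a dataset, a solver, and a scorer, the same pieces named in the description further down (names follow the inspect_ai package docs; the sample data is illustrative):

    # benchmark.py: run with `inspect eval benchmark.py --model openai/gpt-4o`
    from inspect_ai import Task, task
    from inspect_ai.dataset import Sample
    from inspect_ai.scorer import exact
    from inspect_ai.solver import generate

    @task
    def tiny_math():
        # A Task bundles what to ask (dataset), how the model answers
        # (solver), and how answers are graded (scorer) into one eval.
        return Task(
            dataset=[
                Sample(input="What is 2 + 2? Answer with the number only.",
                       target="4"),
                Sample(input="What is 7 * 6? Answer with the number only.",
                       target="42"),
            ],
            solver=generate(),
            scorer=exact(),
        )

Swapping the solver (for example, for Inspect's chain_of_thought or self_critique) is how the same harness scales from toy benchmarks to capability and safety evaluations.
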
Field        DeepEval              Inspect
Category     Prompt & Eval         Prompt & Eval
Type         OSS                   OSS
Free Tier    ✓ Yes                 ✓ Yes
Plans        —                     Open Source: Free
Stars        ⭐ 5,500               ⭐ 1,800
Health       ● 80 — Active         ● 75 — Active
Trajectory   — not enough data     — not enough data
Synced       today                 today

DeepEval

DeepEval is an open-source evaluation framework with 14+ metrics, including faithfulness, relevancy, and hallucination detection, and it integrates with CI/CD.

Inspect

Inspect is an open-source framework for building LLM evaluations, developed by the UK AI Safety Institute. It provides task composition, built-in datasets, scorers, and solvers for systematic benchmarking of LLM capabilities, safety, and alignment properties.
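
A usage note on that systematic benchmarking: given a task like the tiny_math sketch above, Inspect's documented Python entry point can run it directly and record a log for its viewer (the model name is a placeholder):

    from inspect_ai import eval
    from benchmark import tiny_math  # the hypothetical task file sketched above

    # Runs the eval end-to-end against the named model and writes a log.
    eval(tiny_math(), model="openai/gpt-4o")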

DeepEval: Website ↗ · GitHub ↗
Inspect: Website ↗ · GitHub ↗

Competitors listed only for DeepEval (7): Langfuse, RAGAS, PromptFoo, OpenAI API, TruLens, Inspect, Galileo

Competitors listed only for Inspect (1): DeepEval