AIchitect
207 tools · 25 stacks

AI tools are all over the place. This is the full landscape — 207 tools across 17 categories, mapped and connected. Ready to narrow it down? Build your stack →

Stack Layers
  • What are you building and how is it defined?
  • How do you write and ship code?
  • How does your AI think and act?
  • Which models and infrastructure power it?
  • How do you build, observe, and extend it?
These tools compete with each other

Galileo vs DeepEval

Choose Galileo when…

  • You need real-time LLM guardrails in your production pipeline
  • You want eval models fast enough (<200ms) to run inline with inference
  • You need hallucination and RAG quality scoring at production latency

Choose DeepEval when…

  • You want a pytest-style framework for LLM testing
  • Unit-test-like evals for LLM outputs fit your workflow
  • You need RAG-specific metrics like faithfulness and relevancy
Field      | Galileo                     | DeepEval
-----------|-----------------------------|------------------
Category   | Prompt & Eval               | Prompt & Eval
Type       | SaaS                        | OSS
Free Tier  | ✓ Yes                       | ✓ Yes
Plans      | Free: $0 · Pro: Usage-based | —
Stars      | —                           | ⭐ 5,500
Health     | —                           | ● 80 — Active
Trajectory | — not enough data           | — not enough data
Synced     | —                           | today

Galileo

LLM evaluation platform with evaluation models that run in under 200ms — fast enough to use as production guardrails, not just offline eval. Covers hallucination detection, RAG quality, and safety scoring. Distinct from Galileo AI (the UI design tool).
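To make the inline-guardrail idea concrete, here is a rough Python sketch of the pattern: score each answer with a fast eval model before returning it, and fall back when the score is poor. The `score_hallucination` helper and the threshold are placeholders for illustration, not Galileo's actual SDK.

```python
# Hypothetical inline guardrail pattern (placeholder names, not Galileo's real API).
# The idea: an eval model fast enough (<200ms) to sit between the LLM and the user.

HALLUCINATION_THRESHOLD = 0.3  # assumed scale: higher score = more likely hallucinated

def score_hallucination(question: str, answer: str, context: list[str]) -> float:
    """Placeholder for a sub-200ms evaluation-model call (e.g. a hosted eval endpoint)."""
    raise NotImplementedError("wire this to your eval provider")

def guarded_answer(question: str, context: list[str], llm_call) -> str:
    """Generate an answer, then gate it on the hallucination score before returning it."""
    answer = llm_call(question, context)
    if score_hallucination(question, answer, context) > HALLUCINATION_THRESHOLD:
        # Refuse or fall back rather than ship a likely-hallucinated answer.
        return "I can't answer that confidently from the available sources."
    return answer
```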

DeepEval

Open-source evaluation framework with 14+ metrics including faithfulness, relevancy, and hallucination detection. Integrates with CI/CD.
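A minimal example of that pytest-style workflow, following DeepEval's documented `LLMTestCase` / `assert_test` pattern (the example strings and thresholds are made up, and exact metric names can vary between versions):

```python
# test_rag_quality.py — run with pytest or `deepeval test run`
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric, FaithfulnessMetric
from deepeval.test_case import LLMTestCase

def test_refund_policy_answer():
    test_case = LLMTestCase(
        input="What is your refund policy?",                       # illustrative prompt
        actual_output="You can request a refund within 30 days.",  # illustrative model output
        retrieval_context=["Refunds are available for 30 days after purchase."],
    )
    # Faithfulness scores the answer against the retrieved context;
    # answer relevancy scores it against the question. Thresholds are illustrative.
    assert_test(test_case, [
        FaithfulnessMetric(threshold=0.7),
        AnswerRelevancyMetric(threshold=0.7),
    ])
```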

Galileo: Website ↗
DeepEval: Website ↗ · GitHub ↗

Shared Connections (2)

PromptFoo · OpenAI API

Only Galileo (3)

DeepEval · Humanloop · LangChain

Only DeepEval (5)

Langfuse · RAGAS · TruLens · Inspect · Galileo
See full comparison in Explore →