AIchitect
207 tools · 25 stacks

AI tools are all over the place. This is the full landscape — 207 tools across 17 categories, mapped and connected. Ready to narrow it down? Build your stack →


DeepEval vs Langfuse

Choose DeepEval when…

  • You want a pytest-style framework for LLM testing
  • Unit-test-like evals for LLM outputs fit your workflow
  • You need RAG-specific metrics like faithfulness and relevancy

Choose Langfuse when…

  • You want open-source LLM observability
  • Self-hosting your tracing stack is important
  • You need cost tracking across models and users
| Field      | DeepEval          | Langfuse           |
| ---------- | ----------------- | ------------------ |
| Category   | Prompt & Eval     | LLM Infrastructure |
| Type       | OSS               | OSS                |
| Free Tier  | ✓ Yes             | ✓ Yes              |
| Plans      | —                 | Cloud: $59/mo      |
| Stars      | ⭐ 5,500           | ⭐ 7,000            |
| Health     | ● 80 — Active     | ● 80 — Active      |
| Trajectory | not enough data   | not enough data    |
| Synced     | today             | today              |

DeepEval

Open-source evaluation framework with 14+ metrics including faithfulness, relevancy, and hallucination detection. Integrates with CI/CD.
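To make "pytest-style" concrete, here is a minimal, self-contained sketch of the pattern: a test case plus a scored metric that fails like a unit test below a threshold. The `LLMTestCase` shape is loosely modeled on DeepEval's class of the same name, but the keyword-overlap metric and `assert_relevant` helper are illustrative stand-ins, not DeepEval's real API (which uses LLM-as-judge metrics such as faithfulness and relevancy).

```python
# Illustrative sketch of pytest-style LLM evals (not DeepEval's actual API).
from dataclasses import dataclass, field

@dataclass
class LLMTestCase:
    input: str
    actual_output: str
    retrieval_context: list = field(default_factory=list)

def relevancy_score(case: LLMTestCase) -> float:
    """Toy relevancy metric: fraction of input keywords echoed in the output.
    Real eval frameworks typically use an LLM judge instead of word overlap."""
    keywords = {w.lower().strip(".,?") for w in case.input.split() if len(w) > 3}
    if not keywords:
        return 1.0
    answered = {w.lower().strip(".,?") for w in case.actual_output.split()}
    return len(keywords & answered) / len(keywords)

def assert_relevant(case: LLMTestCase, threshold: float = 0.5) -> None:
    """Fails like a unit test when the score dips below the threshold,
    so evals drop straight into pytest runs and CI/CD pipelines."""
    score = relevancy_score(case)
    assert score >= threshold, f"relevancy {score:.2f} < {threshold}"

case = LLMTestCase(
    input="What port does Redis listen on?",
    actual_output="Redis does listen on port 6379 by default.",
)
assert_relevant(case)  # passes: the answer echoes the question's keywords
```

The point of the pattern is that each eval is just an assertion, so a failing metric breaks the build the same way a failing unit test would.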

Langfuse

Open-source platform for tracing, evaluations, and prompt management. Self-hostable alternative to LangSmith with clean UX.
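To show what "tracing with cost tracking" means in practice, here is a small, self-contained sketch of the idea: each traced LLM call records token counts and latency, and costs roll up per user. The `Span`/`Trace` classes and the price table are illustrative assumptions, not Langfuse's SDK; Langfuse provides this via its own client and decorators.

```python
# Illustrative sketch of LLM tracing with cost tracking (not Langfuse's SDK).
from dataclasses import dataclass

# Assumed per-1K-token price; real backends maintain a per-model price table.
PRICE_PER_1K = {"gpt-4o-mini": 0.00015}

@dataclass
class Span:
    name: str
    model: str
    input_tokens: int
    output_tokens: int
    latency_s: float

    @property
    def cost_usd(self) -> float:
        total = self.input_tokens + self.output_tokens
        return total / 1000 * PRICE_PER_1K[self.model]

class Trace:
    """Collects spans for one request, the way a tracing SDK would."""
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.spans: list[Span] = []

    def record(self, name, model, input_tokens, output_tokens, latency_s):
        self.spans.append(Span(name, model, input_tokens, output_tokens, latency_s))

    def total_cost(self) -> float:
        return sum(s.cost_usd for s in self.spans)

trace = Trace(user_id="user-42")
trace.record("retrieve+generate", "gpt-4o-mini", 800, 200, 1.2)
print(f"{trace.user_id}: ${trace.total_cost():.6f}")  # cost aggregates per user
```

An observability backend then aggregates these spans per model and per user, which is what the "cost tracking across models and users" bullet above refers to.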

DeepEval: Website ↗ · GitHub ↗
Langfuse: Website ↗ · GitHub ↗

Shared Connections (3)

RAGAS · PromptFoo · OpenAI API

Only DeepEval (4)

Langfuse · TruLens · Inspect · Galileo

Only Langfuse (25)

Cursor · CrewAI · AutoGen · LangGraph · LangChain · LlamaIndex · Dify · Mastra
See full comparison in Explore →