AIchitect
207 tools · 25 stacks

AI tools are all over the place. This is the full landscape — 207 tools across 17 categories, mapped and connected. Ready to narrow it down? Build your stack →

Stack Layers
  • What are you building and how is it defined?
  • How do you write and ship code?
  • How does your AI think and act?
  • Which models and infrastructure power it?
  • How do you build, observe, and extend it?
These tools compete with each other: PromptFoo vs Galileo

Choose PromptFoo when…

  • You want CLI-first, config-driven LLM evals
  • You want to run eval suites in CI/CD pipelines
  • You need red-teaming and safety testing built in
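
The "CLI-first, config-driven" workflow above can be sketched with a minimal config. This follows PromptFoo's documented YAML shape (prompts, providers, tests), but the specific prompt, provider ID, and assertion values here are illustrative, not a recommendation:

```yaml
# promptfooconfig.yaml: a minimal sketch. Field names follow PromptFoo's
# config format; the concrete values are illustrative only.
prompts:
  - "Summarize in one sentence: {{text}}"
providers:
  - openai:gpt-4o-mini
tests:
  - vars:
      text: "PromptFoo runs eval suites from a YAML config."
    assert:
      - type: contains
        value: "eval"
```

A config like this is run with `promptfoo eval`, which is what makes wiring the suite into a CI/CD job straightforward.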

Choose Galileo when…

  • You need real-time LLM guardrails in your production pipeline
  • You want eval models fast enough (<200ms) to run inline with inference
  • You need hallucination and RAG quality scoring at production latency
Field         PromptFoo        Galileo
Category      Prompt & Eval    Prompt & Eval
Type          OSS              SaaS
Free Tier     ✓ Yes            ✓ Yes
Plans         —                Free: $0 · Pro: usage-based
Stars         ⭐ 5,000          —
Health        ● 80 (Active)    —
Trajectory    not enough data  not enough data
Synced        8 days ago       —

PromptFoo

Test and compare prompts across models. Built-in red-teaming, regression testing, and side-by-side model comparison.

Galileo

LLM evaluation platform with evaluation models that run in under 200ms — fast enough to use as production guardrails, not just offline eval. Covers hallucination detection, RAG quality, and safety scoring. Distinct from Galileo AI (the UI design tool).
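
The "production guardrail" idea above can be illustrated with a toy sketch. Everything here is hypothetical: `score_hallucination` is a stand-in heuristic, not Galileo's SDK. The point is only the shape of an inline check, a scorer fast enough to sit between model output and the user inside a sub-200ms budget:

```python
import time

def score_hallucination(response: str) -> float:
    """Stand-in for a fast eval model; returns a risk score in [0, 1].

    Toy heuristic for illustration only; a real guardrail would call
    a trained evaluation model here.
    """
    return 0.9 if "guaranteed" in response.lower() else 0.1

def guarded_reply(response: str, threshold: float = 0.5) -> str:
    """Pass or withhold a model response based on the guardrail score."""
    start = time.perf_counter()
    risk = score_hallucination(response)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # The whole check must fit the inline latency budget to be usable
    # as a production guardrail rather than an offline eval.
    assert elapsed_ms < 200, "guardrail exceeded the latency budget"
    if risk >= threshold:
        return "[withheld: response failed the hallucination check]"
    return response

print(guarded_reply("This investment is guaranteed to double."))
print(guarded_reply("Past returns do not predict future results."))
```

The design point is that the scorer runs on the request path, so its latency adds directly to the user's, which is why sub-200ms evaluation models matter.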

PromptFoo: Website ↗ · GitHub ↗
Galileo: Website ↗

Shared Connections (2)

DeepEval, OpenAI API

Only PromptFoo (4)

Langfuse, Vellum, Agenta, Galileo

Only Galileo (3)

PromptFoo, Humanloop, LangChain
See full comparison in Explore →