Galileo
LLM evaluation platform with evaluation models that respond in under 200ms, fast enough to run as production guardrails rather than only in offline evals. Covers hallucination detection, RAG quality, and safety scoring. Distinct from Galileo AI (the UI design tool).
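The sub-200ms latency matters because the eval call sits inline in the request path. Below is a minimal sketch of that guardrail pattern; every name in it (score_hallucination, EvalResult, HALLUCINATION_THRESHOLD) is a hypothetical placeholder rather than Galileo's actual SDK, and the stub scorer just returns a fixed value so the example runs on its own.

```python
# Guardrail pattern sketch. All names here are hypothetical placeholders,
# not Galileo's real API surface.
from dataclasses import dataclass

HALLUCINATION_THRESHOLD = 0.3   # assumed score in [0, 1]; higher = more likely hallucinated
FALLBACK = "I'm not confident in that answer; escalating to a human."

@dataclass
class EvalResult:
    hallucination_score: float
    latency_ms: float

def score_hallucination(question: str, answer: str, context: str) -> EvalResult:
    # Stub standing in for a sub-200ms call to a hosted evaluation model.
    # A fixed score keeps the sketch runnable; swap in your provider's scorer.
    return EvalResult(hallucination_score=0.1, latency_ms=42.0)

def guarded_answer(question: str, answer: str, context: str) -> str:
    """Gate the LLM's answer on an inline eval before it reaches the user."""
    result = score_hallucination(question, answer, context)
    if result.hallucination_score > HALLUCINATION_THRESHOLD:
        return FALLBACK          # block, retry, or route to a safer fallback
    return answer

if __name__ == "__main__":
    print(guarded_answer("Who wrote Dune?", "Frank Herbert wrote Dune.",
                         "Dune (1965) is a novel by Frank Herbert."))
```

The design choice is simply that the scorer is cheap enough to call on every request and gate the response, instead of batching scores into an offline report after the fact.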
Humanloop
Humanloop is a platform for managing prompts, running experiments, and evaluating LLM outputs in production. It provides a prompt editor, version history, A/B testing across models, and both human and automated eval workflows, keeping your prompts in sync with your code.
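To make the "prompts in sync with code" idea concrete, here is a small sketch of the pattern such a platform enables: fetch a versioned prompt at runtime instead of hardcoding it, and split traffic between two versions for an A/B test. PromptRegistry, its contents, and the version names are hypothetical stand-ins, not Humanloop's SDK.

```python
# Hypothetical sketch of runtime prompt management with an A/B split.
# PromptRegistry and everything in it is illustrative, not Humanloop's API.
import random

class PromptRegistry:
    """Stand-in for a hosted prompt store with version history."""
    def __init__(self) -> None:
        self._versions = {
            ("support-reply", "v1"): "Answer the customer politely:\n{ticket}",
            ("support-reply", "v2"): "You are a support agent. Resolve this ticket concisely:\n{ticket}",
        }

    def get(self, name: str, version: str) -> str:
        return self._versions[(name, version)]

def choose_version(split: float = 0.5) -> str:
    """Route a fraction of traffic to the candidate version for an A/B test."""
    return "v2" if random.random() < split else "v1"

def build_prompt(registry: PromptRegistry, ticket: str) -> tuple[str, str]:
    version = choose_version()
    template = registry.get("support-reply", version)
    # Log `version` alongside downstream eval scores so the experiment is attributable.
    return version, template.format(ticket=ticket)

if __name__ == "__main__":
    version, prompt = build_prompt(PromptRegistry(), "My invoice is wrong.")
    print(version)
    print(prompt)
```

The point of the pattern is that prompt text lives in a versioned store the application reads at runtime, so edits, rollbacks, and experiments happen without a code deploy while each response stays traceable to the prompt version that produced it.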