AIchitect
207 tools · 25 stacks

AI tools are all over the place. This is the full landscape — 207 tools across 17 categories, mapped and connected. Ready to narrow it down? Build your stack →


Stack Layers

  • What are you building and how is it defined?
  • How do you write and ship code?
  • How does your AI think and act?
  • Which models and infrastructure power it?
  • How do you build, observe, and extend it?
vLLM vs LlamaFactory

Choose vLLM when…

  • You're serving LLMs at high throughput in production
  • You need continuous batching and PagedAttention
  • You're running your own GPU inference cluster

Choose LlamaFactory when…

  • You need DPO, RLHF, or reward modeling in addition to SFT
  • You want a no-code web UI for training runs
  • You're working across many different model families
Field      | vLLM               | LlamaFactory
-----------|--------------------|------------------
Category   | LLM Infrastructure | Fine-tuning
Type       | OSS                | OSS
Free Tier  | ✓ Yes              | ✓ Yes
Plans      | —                  | —
Stars      | ⭐ 32,000          | ⭐ 42,000
Health     | ● 75 — Active      | —
Trajectory | — not enough data  | — not enough data
Synced     | today              | —

vLLM

Production-grade LLM inference server. PagedAttention enables high throughput and efficient KV cache memory management.
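In practice, a vLLM inference server exposes an OpenAI-compatible HTTP API. The sketch below builds a chat-completion request for such an endpoint; the localhost URL, port, and model name are illustrative assumptions, not details from this page.

```python
import json
from urllib import request

# Assumed endpoint of a locally running vLLM server, e.g. one started with
# `vllm serve <model>`. URL and model name below are placeholders.
VLLM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str,
                       model: str = "meta-llama/Llama-3.1-8B-Instruct") -> dict:
    """Build an OpenAI-compatible chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
        "temperature": 0.7,
    }

def send(payload: dict) -> dict:
    """POST the payload to the server (requires a running vLLM instance)."""
    req = request.Request(
        VLLM_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("Summarize PagedAttention in one sentence.")
```

The payload construction runs anywhere; `send` only succeeds against a live server.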

LlamaFactory

Supports full fine-tuning, LoRA, QLoRA, DPO, RLHF, and reward modeling across 100+ models. Web UI (LlamaBoard) for no-code training. The most feature-complete OSS fine-tuning framework.
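LlamaFactory training runs are typically driven by a config file passed to its CLI. The sketch below assembles a minimal LoRA SFT config as a Python dict and serializes it; the model name, dataset, template, and output path are placeholder assumptions for illustration.

```python
import json
from pathlib import Path

# Minimal LoRA supervised fine-tuning config sketch. Key names follow
# LlamaFactory's config style; model, dataset, and paths are placeholders.
config = {
    "stage": "sft",                      # other stages include dpo, rm, ppo
    "do_train": True,
    "model_name_or_path": "Qwen/Qwen2.5-7B-Instruct",  # placeholder model
    "finetuning_type": "lora",
    "lora_target": "all",
    "dataset": "alpaca_en_demo",         # placeholder dataset name
    "template": "qwen",
    "output_dir": "saves/qwen-lora-sft", # placeholder output path
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 8,
    "learning_rate": 1e-4,
    "num_train_epochs": 3.0,
}

path = Path("sft_config.json")
path.write_text(json.dumps(config, indent=2))
# Then, assuming LlamaFactory is installed:
#   llamafactory-cli train sft_config.json
```

Swapping `stage` and the related hyperparameters is how the same framework covers DPO, RLHF, and reward modeling.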

vLLM: Website ↗ · GitHub ↗
LlamaFactory: Website ↗ · GitHub ↗

Shared Connections (2)

Axolotl · Unsloth

Only vLLM (11)

LiteLLM · Ollama · Together AI · LlamaIndex · Modal · RunPod · LlamaFactory · Torchtune

Only LlamaFactory (1)

vLLM
See full comparison in Explore →