AIchitect
207 tools · 25 stacks

AI tools are all over the place. This is the full landscape — 207 tools across 17 categories, mapped and connected. Ready to narrow it down? Build your stack →

Stack Layers

  • What are you building and how is it defined?
  • How do you write and ship code?
  • How does your AI think and act?
  • Which models and infrastructure power it?
  • How do you build, observe, and extend it?
Axolotl
vs
vLLM

Choose Axolotl when…

  • You want a config-driven OSS fine-tuning pipeline
  • You need support for LoRA, QLoRA, and FSDP in one tool
  • You prefer HuggingFace-native workflows
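The LoRA and QLoRA support mentioned above is what makes fine-tuning feasible on modest hardware: instead of updating a full weight matrix, LoRA trains two small low-rank factors. A back-of-the-envelope sketch of the savings (the dimensions and rank here are illustrative, not Axolotl defaults):

```python
# Rough parameter-count comparison: full fine-tuning vs. LoRA.
# Dimensions are illustrative (a 4096-wide projection, rank 16),
# not defaults from Axolotl or any particular model.
d_in, d_out = 4096, 4096   # hypothetical weight matrix shape
r = 16                     # LoRA rank

full_params = d_in * d_out          # updating the whole matrix
lora_params = r * (d_in + d_out)    # factors A (r x d_in) and B (d_out x r)

ratio = lora_params / full_params
print(f"full: {full_params:,}  lora: {lora_params:,}  ratio: {ratio:.2%}")
# → full: 16,777,216  lora: 131,072  ratio: 0.78%
```

Under these assumptions, LoRA trains well under 1% of the parameters of that layer; QLoRA additionally quantizes the frozen base weights to 4-bit to cut memory further.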

Choose vLLM when…

  • You're serving LLMs at high throughput in production
  • You need continuous batching and PagedAttention
  • You're running your own GPU inference cluster
Field       Axolotl        vLLM
Category    Fine-tuning    LLM Infrastructure
Type        OSS            OSS
Free Tier   ✓ Yes          ✓ Yes
Plans       —              —
Stars       ⭐ 9,800        ⭐ 32,000
Health      —              ● 75 — Active

Axolotl

OSS fine-tuning framework built on HuggingFace Transformers. Supports LoRA, QLoRA, full fine-tuning, and FSDP. Config-driven — define your training run in a YAML file.
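Since the whole run lives in one YAML file, a config sketch makes the workflow concrete. The key names below follow the style of Axolotl's published example configs, but the values are placeholders, not recommendations — check the project's examples for your model:

```yaml
# Illustrative Axolotl-style QLoRA config sketch.
# Key names follow Axolotl's examples; values are placeholders.
base_model: NousResearch/Llama-2-7b-hf
load_in_4bit: true
adapter: qlora

lora_r: 16
lora_alpha: 32
lora_dropout: 0.05

datasets:
  - path: tatsu-lab/alpaca
    type: alpaca

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
output_dir: ./outputs/qlora-run
```

The run is then launched through Axolotl's CLI against this file, so swapping models, adapters, or datasets is a config edit rather than a code change.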

vLLM

Production-grade LLM inference server. PagedAttention enables high throughput and efficient KV cache memory management.
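To see why KV cache management dominates serving memory, a back-of-the-envelope calculation helps. The model dimensions below assume a Llama-7B-class architecture in fp16 — they are assumptions for illustration, not vLLM internals:

```python
# Back-of-the-envelope KV cache sizing for one sequence.
# Assumed dimensions: Llama-7B-class model (32 layers, 32 KV heads,
# head_dim 128) in fp16 (2 bytes) -- illustrative, not vLLM internals.
layers, kv_heads, head_dim, dtype_bytes = 32, 32, 128, 2

# Each token stores a key AND a value vector per layer.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * dtype_bytes
seq_len = 2048
seq_bytes = kv_bytes_per_token * seq_len

print(f"{kv_bytes_per_token / 1024:.0f} KiB per token, "
      f"{seq_bytes / 2**30:.1f} GiB for a {seq_len}-token sequence")
# → 512 KiB per token, 1.0 GiB for a 2048-token sequence
```

At roughly half a megabyte per token under these assumptions, naively pre-allocating the maximum sequence length per request wastes most of the GPU. Paging the cache in fixed-size blocks is what lets vLLM pack many concurrent requests into the same memory.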

Axolotl: Website ↗ · GitHub ↗
vLLM: Website ↗ · GitHub ↗

Shared Connections (2)

Unsloth · LlamaFactory

Only Axolotl (1)

vLLM

Only vLLM (11)

LiteLLM · Ollama · Together AI · LlamaIndex · Modal · RunPod · Axolotl · Torchtune
See full comparison in Explore →