AIchitect
207 tools · 25 stacks

AI tools are all over the place. This is the full landscape — 207 tools across 17 categories, mapped and connected. Ready to narrow it down? Build your stack →

Stack Layers
  • What are you building and how is it defined?
  • How do you write and ship code?
  • How does your AI think and act?
  • Which models and infrastructure power it?
  • How do you build, observe, and extend it?
LLaVA vs Ollama

Choose LLaVA when…

  • You want an open-source multimodal model for self-hosted deployment
  • You're doing research on vision-language instruction following
  • You need a well-documented baseline for multimodal tasks

Choose Ollama when…

  • You want to run LLMs locally on your machine
  • Privacy or offline use cases require local models
  • You're testing open-source models without API costs
Field        LLaVA         Ollama
Category     Multimodal    LLM Infrastructure
Type         OSS           OSS
Free Tier    ✓ Yes         ✓ Yes
Plans        —             —
Stars        ⭐ 22,000      ⭐ 90,000
Health       —             ● 80 (Active)

LLaVA

Large Language and Vision Assistant — connects a vision encoder to an LLM for instruction-following with images. OSS research model widely used as a multimodal base. Runs via Ollama.
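Because LLaVA runs via Ollama, you can query it locally over Ollama's REST API with an image attached. A minimal sketch, assuming Ollama is serving on its default port 11434, the llava model has already been pulled, and photo.png is a placeholder path:

```python
import base64
import json
import urllib.request

# Placeholder image path - replace with your own file.
with open("photo.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Ollama's /api/generate endpoint accepts base64-encoded images
# for multimodal models such as llava.
payload = {
    "model": "llava",
    "prompt": "Describe this image in one sentence.",
    "images": [image_b64],
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```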

Ollama

Dead-simple local LLM serving. Pull and run models like Docker images. Compatible with the OpenAI API format.
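Since Ollama speaks the OpenAI API format, existing OpenAI client code can simply be pointed at the local server. A minimal sketch, assuming Ollama is running locally and a model such as llama3 has already been pulled (model name and prompt are illustrative):

```python
from openai import OpenAI

# Point the OpenAI client at Ollama's local OpenAI-compatible endpoint.
# An api_key is required by the client but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3",  # any model previously pulled with `ollama pull`
    messages=[{"role": "user", "content": "Summarize what Ollama does in one line."}],
)
print(response.choices[0].message.content)
```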

LLaVA: Website ↗ · GitHub ↗
Ollama: Website ↗ · GitHub ↗

Shared Connections (1)

Moondream

Only LLaVA (2)

InternVL2 · Ollama

Only Ollama (6)

Continue · LlamaIndex · LiteLLM · vLLM · llama.cpp · LLaVA
See full comparison in Explore →