AIchitect
207 tools · 25 stacks

AI tools are all over the place. This is the full landscape — 207 tools across 17 categories, mapped and connected. Ready to narrow it down? Build your stack →

Stack Layers
What are you building and how is it defined?
How do you write and ship code?
How does your AI think and act?
Which models and infrastructure power it?
How do you build, observe, and extend it?
LLaVA vs Moondream

Choose LLaVA when…

  • You want an open-source multimodal model for self-hosted deployment
  • You're doing research on vision-language instruction following
  • You need a well-documented baseline for multimodal tasks

Choose Moondream when…

  • You need a vision model that runs on a single GPU or edge device
  • You want a compact model for image captioning and visual QA
  • Low memory footprint is a hard constraint

Field        LLaVA        Moondream
Category     Multimodal   Multimodal
Type         OSS          OSS
Free Tier    ✓ Yes        ✓ Yes
Plans        —            —
Stars        ⭐ 22,000     ⭐ 11,000
Health       —            —

LLaVA

Large Language and Vision Assistant — connects a vision encoder to an LLM for instruction-following with images. OSS research model widely used as a multimodal base. Runs via Ollama.
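A minimal sketch of the "runs via Ollama" path, assuming a local Ollama server on its default port with the llava model already pulled (`ollama pull llava`); the file name photo.jpg is a placeholder:

```python
import base64
import requests

# Assumes a local Ollama server on its default port with the llava model pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"

def describe_image(path: str, prompt: str = "Describe this image.") -> str:
    # Ollama's /api/generate endpoint accepts images as base64-encoded strings.
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = requests.post(
        OLLAMA_URL,
        json={
            "model": "llava",
            "prompt": prompt,
            "images": [image_b64],
            "stream": False,  # return one JSON object instead of a token stream
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(describe_image("photo.jpg"))  # placeholder image path
```

The same request works against a remote Ollama host by changing the URL; only the model name ties it to LLaVA.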

Moondream

2B parameter vision-language model optimized to run on edge devices and single GPUs. Supports image captioning, visual QA, and object detection. Runs via Ollama or directly with Python.
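For the "directly with Python" path, a sketch based on the vikhyatk/moondream2 Hugging Face model card as published; the helper methods (encode_image, answer_question) are custom code loaded via trust_remote_code and may differ between revisions, and photo.jpg is again a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

# Assumption: helper methods below come from the moondream2 model card's
# custom code (trust_remote_code) and may change across model revisions.
model_id = "vikhyatk/moondream2"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

image = Image.open("photo.jpg")           # placeholder path
encoded = model.encode_image(image)       # encode once, reuse for several questions
print(model.answer_question(encoded, "What is in this image?", tokenizer))
```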

LLaVA: Website ↗ · GitHub ↗
Moondream: Website ↗ · GitHub ↗

Shared Connections (1)

Ollama

Only LLaVA (2)

Moondream
InternVL2

Only Moondream (1)

LLaVA
See full comparison in Explore →