
Choosing a Model

Every agent in Nebula runs on an AI model. You can choose from over 300 models across providers like OpenAI, Anthropic, Google, Meta, DeepSeek, xAI, and more. The right model depends on what your agent does.

How to change an agent's model

Open the agent's detail page and click the model badge at the top (e.g., "Claude Sonnet 4.6"). This opens the model selector where you can browse and pick from all available models. The change takes effect immediately — the agent keeps its system prompt, tools, and channels.

To reset an agent back to auto-routing, click the reset button in the model selector. This removes the pinned model and lets Nebula choose automatically based on task complexity.

You can also ask Nebula to change it for you in chat:

You

Switch my research agent to use GPT-4.1

Nebula

Done! Your research agent now uses GPT-4.1. All existing channels and instructions are unchanged.

When to choose what

Fast, low-cost models are best for high-volume tasks like monitoring, triage, and routine summaries.

Google Gemini Flash — Great default for most tasks. Fast, capable, cheap.
GPT-4o Mini — Lightweight OpenAI model. Good for simple tasks.
DeepSeek V3 — Strong general-purpose model at low cost.

Balanced models are good for most work: research, writing, data analysis, and multi-step tasks.

GPT-4.1 — OpenAI's latest. Strong at following complex instructions.
GPT-4o — Reliable all-rounder from OpenAI.
Google Gemini Pro — Google's capable mid-tier model.

Use the most capable models when quality matters most: complex reasoning, nuanced writing, and detailed code review. They cost more and may be slower.

Claude Sonnet — Anthropic's model. Excellent at coding and detailed analysis.
DeepSeek R1 — Reasoning-focused model. Good for multi-step logic.

Auto-routing

By default, Nebula uses auto-routing — it picks the best model for each task based on what's being asked:

Simple lookups and single-step tasks — routed to a fast, efficient model to keep things snappy and cost-effective.
Standard multi-step tasks — routed to a balanced model that handles most workflows well.
Complex reasoning and deep analysis — routed to a more powerful model capable of nuanced, multi-step thinking.
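Conceptually, auto-routing behaves like a dispatcher that maps an estimate of task complexity to a model tier. The sketch below is illustrative only: the tier names, heuristics, and model labels are assumptions for the example, not Nebula's actual routing logic.

```python
# Illustrative auto-routing sketch. The complexity heuristic, tier names,
# and model labels below are hypothetical, not Nebula's implementation.

def estimate_complexity(task: str) -> str:
    """Very rough heuristic: count markers that suggest multiple steps."""
    step_markers = ("then", "after that", "analyze", "compare")
    steps = sum(task.lower().count(m) for m in step_markers)
    if steps == 0:
        return "simple"    # single-step lookup
    if steps <= 2:
        return "standard"  # typical multi-step workflow
    return "complex"       # deep reasoning or analysis

ROUTES = {
    "simple": "fast-model",        # e.g. a Gemini Flash-class model
    "standard": "balanced-model",  # e.g. a GPT-4o-class model
    "complex": "capable-model",    # e.g. a Claude Sonnet-class model
}

def route(task, pinned=None):
    """A model pinned on the agent always wins; otherwise route by complexity."""
    return pinned or ROUTES[estimate_complexity(task)]

print(route("Look up today's date"))                        # fast-model
print(route("Fetch the report, then analyze and compare"))  # capable-model
```

Pinning a model on an agent corresponds to the `pinned` argument here: when it is set, routing is bypassed entirely.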

This happens automatically. You don't have to think about it for day-to-day use. If you want a specific agent to always use a particular model, you can pin one from the agent's detail page by clicking the model badge.

Model selection in automations

When building automated workflows, each step can optionally specify a model tier. This lets you use a fast model for a step that just fetches data and a more capable model for a step that analyzes and summarizes it. This is configured per-step in the recipe, not at the agent level.
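As a concrete illustration, a recipe with per-step tiers might be modeled as the structure below. The field names (`steps`, `action`, `tier`) and tier labels are hypothetical placeholders, not Nebula's actual recipe schema.

```python
# Hypothetical recipe structure: each step names its own model tier.
# Field names and tier labels are illustrative, not Nebula's schema.
recipe = {
    "name": "daily-news-digest",
    "steps": [
        {"action": "fetch_feeds", "tier": "fast"},               # cheap data fetch
        {"action": "deduplicate", "tier": "fast"},               # routine filtering
        {"action": "analyze_and_summarize", "tier": "capable"},  # quality matters here
    ],
}

# Tally which tiers the recipe uses, e.g. for a rough cost estimate.
tiers = [step["tier"] for step in recipe["steps"]]
print(tiers)  # ['fast', 'fast', 'capable']
```

The point of the pattern: only the step that actually needs deep analysis pays for a capable model, while the fetch-and-filter steps stay on a fast tier.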

Not sure which model to pick? Ask Nebula — it can recommend a model based on what your agent does.
