Nebula is in beta — new features are shipping every week.

System Prompts

Understand how system prompts shape your Nebula agent's behavior, tone, and decision-making. Learn best practices for writing effective instructions.

System prompts define who your agent is, how it behaves, and what it prioritizes. A well-written system prompt is the difference between a generic chatbot and a specialized assistant that consistently delivers useful results.

What system prompts do

When your agent receives a message, its system prompt is sent alongside it as foundational context. The model reads the prompt before processing your message, so everything in it shapes how the agent interprets and responds to requests.

System prompts apply to every message in every conversation with that agent.
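To make this concrete, here is a minimal sketch of the pattern most chat-model APIs follow: the system prompt is prepended to the message list for every turn, so the model reads it before any history or new input. The exact wire format Nebula uses is not documented here; the `build_request` helper and the payload shape are illustrative assumptions.

```python
# Hypothetical sketch: the system prompt is prepended to every request,
# so the model reads it before the conversation history or new message.
SYSTEM_PROMPT = (
    "You are a senior data analyst specializing in marketing metrics. "
    "You communicate findings clearly to non-technical stakeholders."
)

def build_request(history: list[dict], user_message: str) -> list[dict]:
    """Assemble the message list sent to the model for one turn."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]  # always first
        + history
        + [{"role": "user", "content": user_message}]
    )

request = build_request([], "Summarize last week's campaign performance.")
print(request[0]["role"])  # the system role entry leads every request
```

Because the system prompt travels with every request, there is no per-message opt-out: everything it says applies to every turn.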

Where system prompts come from

System prompts are set when an agent is created — either by you describing what the agent should do in chat, or by Nebula when it auto-creates an agent for a connected app. You can view the full system prompt on the agent's detail page under System Prompts, but it is read-only there.

To modify an agent's system prompt, ask Nebula in chat:

You

Update my Support Triage agent's instructions to always classify issues by priority P1-P4.

Nebula

Done — I've updated the Support Triage Expert's system prompt to include priority classification (P1 critical through P4 low) for all incoming issues.

Writing effective instructions

Define the role clearly

Start by telling the agent exactly what it is and what domain it operates in. Be specific about expertise and scope.

You are a senior data analyst specializing in marketing metrics.
You work with campaign performance data, attribution models,
and ROI calculations. You communicate findings clearly to
non-technical stakeholders.

Set behavioral expectations

Describe how the agent should respond: its tone, format preferences, and communication style.

Always structure your responses with a brief summary at the top,
followed by detailed analysis. Use tables and bullet points for
data comparisons. When you are uncertain about something, say so
rather than guessing. Cite your sources when referencing web search results.

Specify what to do and what not to do

Explicit boundaries prevent the agent from drifting outside its intended role.

When asked about topics outside marketing analytics, politely
redirect the conversation. Do not provide legal, financial, or
medical advice. If a question requires data you do not have access
to, explain what data would be needed and suggest how to obtain it.

Include examples when possible

Showing the agent an example of ideal output is often more effective than describing it abstractly.

When summarizing campaign performance, use this format:

Campaign: [Name]
Period: [Date range]
Spend: $X | Revenue: $Y | ROAS: Z.Zx
Key insight: [One sentence takeaway]

Template

Here is a template you can adapt for most use cases:

## Role
You are [role description]. You specialize in [domain].

## Behavior
- [Communication style]
- [Response format preferences]
- [How to handle uncertainty]

## Rules
- [Things the agent should always do]
- [Things the agent should never do]
- [Scope boundaries]

## Context
[Any standing information the agent should know: company name,
product details, audience, terminology]

You do not need to tell the agent about its capabilities in the instructions. Nebula automatically informs the agent about what it can do. Focus your instructions on role, behavior, and domain knowledge.

Common mistakes

Too vague. Instructions like "You are a helpful assistant" give the model no constraints. Be specific about the domain, audience, and expected output format.

Too long. Extremely long instructions can dilute the most important points. Keep them focused. If you need to provide extensive reference material, save it as agent memory instead.

Contradictory. Avoid telling the agent to "be concise" and also "provide thorough, detailed explanations." Pick a clear direction and be consistent.

Repeating capability descriptions. You do not need to write "you can search the web" or "you have access to code execution." The platform handles capability awareness automatically.

Iterating

Instructions rarely work perfectly on the first attempt. The best approach is:

1. Write an initial set based on the template above.
2. Test with 5-10 representative messages.
3. Note where the agent's behavior diverges from your expectations.
4. Add specific instructions to address those gaps.
5. Repeat until the agent consistently meets your standards.
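Steps 2 and 3 can be kept honest with a small checklist script: pair each representative message with a simple expectation and flag any reply that diverges. This is a hypothetical sketch, not a Nebula API; `agent_reply` stands in for however you actually invoke your agent.

```python
# Hypothetical sketch of the test-and-note loop. Replace `agent_reply`
# with a real call to your agent; the placeholder below just returns a
# canned response so the script runs on its own.
def agent_reply(message: str) -> str:
    return "Summary: checkout outage affecting EU users.\nPriority: P2"

# Each case pairs a representative message with an expectation check.
test_cases = [
    ("Customer reports checkout is down", lambda r: "Priority:" in r),
    ("Summarize last week's tickets", lambda r: r.startswith("Summary")),
]

# Collect the messages where the agent diverged from expectations.
gaps = [msg for msg, check in test_cases if not check(agent_reply(msg))]
for msg in gaps:
    print(f"Diverged from expectations: {msg!r}")
```

Each entry in `gaps` points at a behavior worth addressing with a new, specific instruction before the next pass.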

To update an agent's system prompt, ask Nebula in chat — it can modify agent configurations directly.

You don't need to write system prompts manually. When you create an agent by describing what it should do, Nebula generates an effective system prompt for you automatically.
