
Configuration & Methods

This page covers the shared AI configuration options and methods in Umo Editor Next. Feature-specific options are documented separately on their own pages.

Default configuration

const systemPrompt = `You are a multi-skill AI document assistant embedded in a Tiptap-based editor; your name is Umo Editor AI Assistant. Your behavior must strictly follow the current system rules and the skill mode selected by the user.
 
## Available context
 
Each message the user sends may provide you with the following information:
 
- The plain text currently selected by the user (may be empty): \`[selectionText]\`
- The node-level Markdown of the current selection (may be empty; when non-empty it contains selection start/end markers): \`[selectionNodes]\`
- The complete Markdown of the entire document (including the cursor position): \`[document]\`
- The current UI language: \`[locale]\`
- The current skill mode: \`[skill]\`
- The user's natural-language instruction: \`[prompt]\`
 
The cursor position in the document is marked with "{cursorMarker}"; for a selection, the first {cursorMarker} marks the start and the second marks the end.
 
## Core general rules
 
1. You are an "execution-oriented editing assistant", not a chatbot
2. All output must directly serve the user's current task
3. Keep the current UI language unless the user explicitly asks to switch
4. Do not output explanations, analysis, greetings, or other extraneous remarks
5. Do not break Markdown syntax or node structure
6. Do not include cursor or selection markers in the returned content
 
## Editing-scope rules
 
1. If a user selection exists:
   - Modify only the content inside the selection, unless the user requests otherwise
   - Treat unselected content as read-only context
 
2. If there is no selection, only a cursor:
   - Decide the editing behavior from the cursor position and the user's instruction
   - You may insert new content at the cursor, or modify the smallest semantic unit containing the cursor
   - Do not affect other content unrelated to the current instruction
 
3. If there is neither a selection nor a determinable safe editing position:
   - Ask the user to explicitly select content or adjust the instruction
 
## Skill Mode rules
 
In any single interaction you work only in the "current skill mode"; never switch modes or mix in behaviors from other skills on your own.
 
1. Document assistant (write, default)
   - Edit or generate text / Markdown
   - Output content ready for direct insertion or replacement
 
2. Image generation (image)
   - Generate image content only from the user's description
   - Do not edit the document; return image Markdown
 
3. Code assistant (code)
   - Focus on code generation, completion, refactoring, or explanation
   - Output as Markdown code blocks
   - Do not polish prose or restructure the document
 
4. Diagram assistant (mermaid)
   - Focus on structured diagrams such as flowcharts, UML, and architecture diagrams
   - Must output as a code block
   - Do not describe the diagram structure in natural language
 
5. Formula assistant (math)
   - Focus on generating, explaining, or converting mathematical formulas
   - Must output in LaTeX format
   - Do not describe the formula structure in natural language
 
6. Web reading (search)
   - Understand, summarize, or explain web or page content
   - Do not modify the local document content
 
## Selection and structure rules
 
- If a selection is provided, modify only the content inside the selection
- Do not delete, move, or add selection markers
- Lists, tables, code blocks, and other nodes must remain structurally valid
 
## Conflicts and fallbacks
 
- When the instruction conflicts with the selection, follow the principle of "minimal damage to the document"
- When the task cannot be completed under the current constraints, ask the user to clarify or adjust the selection
 
## Final goal
 
You must act like a reliable "in-editor AI engine": complete the user's task in the current skill mode precisely and controllably, always remaining professional, rational, and restrained, so that users can trust you.`
 
export default {
  icon: '<svg viewBox="0 0 1024 1024" xmlns="http://www.w3.org/2000/svg" width="96" height="96"><path d="M510.896 0c283.552 0 513.12 229.056 513.12 512 0 280.704-231.808 512-513.12 512-87.776 0-173.28-22.464-249.792-65.12h-2.24L78.8 994.816h-6.752c-11.264 0-20.256-4.48-27.008-11.232a41.376 41.376 0 0 1-11.264-35.936L69.808 772.48v-2.24C22.544 691.648.016 601.824.016 512-2.224 229.056 227.344 0 510.896 0zm-43.552 306.656H366.096l-4.16 4.224-135.04 396.448v8.448l4.288 4.224h80.096l4.224-4.224 33.728-97.056h139.2l33.728 92.832 8.448 8.448h75.936l4.224-4.224 4.16-4.224v-8.448L471.568 310.88l-4.224-4.224zm261.472-4.32h-67.52l-4.192 4.224h-4.288v4.16l-4.224 4.256V707.2l4.224 8.416 4.288 4.16h75.904l4.224-4.192v-4.224l4.192-4.224V319.2l-4.224-8.448-8.416-8.416zm-316.288 97.12l4.224 12.64 12.64 33.76 33.728 92.768h-97.024l21.12-63.232 25.28-75.936z" fill="var(--umo-primary-color)"/></svg>',
  skills: ['write', 'image', 'code', 'mermaid', 'math', 'search'],
  maxlength: 500,
  retryInterval: 5000,
  maxRetries: 3,
  timeout: 10000,
  models: [],
  cursorMarker: '⦙',
  systemPrompt,
  commands: [
    {
      label: { en_US: 'Continuation', zh_CN: '续写' },
      value: { en_US: 'Continuation', zh_CN: '续写' },
    },
    {
      label: { en_US: 'Rewrite', zh_CN: '重写' },
      value: { en_US: 'Rewrite', zh_CN: '重写' },
    },
    {
      label: { en_US: 'Abbreviation', zh_CN: '缩写' },
      value: { en_US: 'Abbreviation', zh_CN: '缩写' },
    },
    {
      label: { en_US: 'Expansion', zh_CN: '扩写' },
      value: { en_US: 'Expansion', zh_CN: '扩写' },
    },
    {
      label: { en_US: 'Polish', zh_CN: '润色' },
      value: { en_US: 'Polish', zh_CN: '润色' },
    },
    {
      label: { en_US: 'Proofread', zh_CN: '校阅' },
      value: { en_US: 'Proofread', zh_CN: '校阅' },
    },
    {
      label: { en_US: 'Translate', zh_CN: '翻译' },
      value: {
        en_US: 'Translate to Chinese',
        zh_CN: '翻译成英文',
      },
    },
  ],
  chat: {
    enabled: false,
    showName: true,
    showAvatar: true,
    showDatetime: true,
    layout: 'both',
    welcomeMessage:
      'Welcome to the Umo Editor AI chat assistant! Feel free to ask me anything; I will do my best to help you with your document editing work.',
    files: {
      enabled: true,
      maxSize: 1024 * 1024 * 10,
      maxCount: 3,
      allowed: {
        image: 'image/*',
        file: '.pdf,.doc,.docx,.ppt,.pptx,.xls,.xlsx,.txt,.md,.csv,.json,.xml',
      },
    },
    maxHistory: 10,
  },
  assistant: {
    enabled: false,
  },
  suggestion: {
    enabled: false,
    async onSuggestion() {
      console.log(
        'Key "ai": Key "suggestion": Please set the ai.suggestion.onSuggestion method',
      )
    },
    waitTime: 1000,
  },
  callbacks: {},
}

Note: in the default configuration, ai.models is an empty array []. To enable AI (assistant/chat/suggestions), you must provide ai.models with at least 1 model.
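For example, a minimal configuration that enables AI chat with a single model might look like the following. This is a sketch: the `'/api/chat'` endpoint and the model `value`/`label` are placeholders for your own backend and naming.

```javascript
import { UmoEditor } from '@umoteam/editor'

// Minimal sketch: one model plus the chat entry enabled.
// 'default' and '/api/chat' are placeholder values, not built-ins.
const options = {
  ai: {
    models: [
      {
        value: 'default',
        label: { zh_CN: '默认模型', en_US: 'Default Model' },
        protocol: 'default',
        endpoint: '/api/chat',
        stream: true,
        reasoning: false,
      },
    ],
    chat: { enabled: true },
  },
}
```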

Options reference

ai.icon

Description: Custom AI entry icon (SVG string), used by the chat button, bubble assistant, etc.

Type: String

Default: defaultAIOptions.icon in the snippet above

ai.skills

Description: Allowed skill modes shown in the composer dropdown. Possible values: ['write','image','code','mermaid','math','search'].

  • write: document assistant, edit/generate text/Markdown.
  • image: image generation, generate images from user descriptions.
  • code: code assistant, generate/complete/refactor/explain code.
  • mermaid: diagram assistant, produce structured diagrams.
  • math: formula assistant, generate/explain/convert mathematical formulas (LaTeX output).
  • search: web reading, understand/summarize/explain web content.

Type: Array

Default: ['write','image','code','mermaid','math','search']

ai.maxlength

Description: Maximum input length for the composer.

Type: number

Default: 500

ai.retryInterval

Description: Retry interval (ms) after a request fails.

Type: number

Default: 5000

ai.maxRetries

Description: Maximum retry attempts after a request fails.

Type: number

Default: 3

ai.timeout

Description: Per-request timeout (ms).

Type: number

Default: 10000

ai.cursorMarker

Description: Marker symbol for cursor/selection. The frontend injects it into document, selectionNodes, systemPrompt, etc. You can send these fields to your backend in ai.callbacks.onRequest to help the model understand document context and user intent.

Type: String

Default: '⦙'
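For illustration, with the default marker a selected phrase inside the [document] field sent to the model would look like this (the sentence itself is a made-up example):

```
The quick ⦙brown fox⦙ jumps over the lazy dog.
```

The first ⦙ marks the selection start and the second marks the end; a collapsed cursor produces a single marker.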

ai.systemPrompt

Description: System prompt template. Before sending, {cursorMarker} is replaced with ai.cursorMarker.

Type: String

Default: systemPrompt in the snippet above

ai.commands

Description: Preset quick commands. Users can click these in the composer to send predefined prompts.

Type: Array

Default: commands in the snippet above
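You can also provide your own entries. A sketch of a custom quick command (the label/value strings here are examples, not built-ins):

```javascript
// Sketch: a custom quick command. Clicking it in the composer
// sends the localized `value` string as the prompt.
const commands = [
  {
    label: { en_US: 'Summarize', zh_CN: '总结' },
    value: { en_US: 'Summarize the selected content', zh_CN: '总结所选内容' },
  },
]
```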

ai.models

Description: Model list (array). This is the critical configuration for AI to work. Assistant/chat/suggestions all rely on the current model.

Type: Array

Default: []. You must provide at least 1 model to enable AI.

Example:

{
  value: 'default',
  label: { zh_CN: '默认模型', en_US: 'Default Model' },
  avatar: '/path/to/avatar.svg',
  protocol: 'default', // or 'agui'
  endpoint: '/api/chat',
  stream: true,
  reasoning: true,
}

Field descriptions:

  • value: model identifier passed to your backend.
  • label: model name shown in message lists, etc.
  • avatar: avatar URL shown in message lists.
  • protocol: 'default' | 'agui'.
  • endpoint: API endpoint.
  • stream: whether to use streaming.
  • reasoning: whether the model supports reasoning mode.

Note: this configuration is cached in localStorage. During local development, you may need to clear localStorage after changing it. Avoid changing this configuration dynamically in production.

ai.chat

Description: AI Chat Assistant options (entry, layout, history, attachments, etc.). See AI Chat Assistant.

Type: Object

Default: defaultAIOptions.chat in the snippet above

ai.assistant

Description: AI Document Assistant options (entry, panel, etc.). See AI Document Assistant.

Type: Object

Default: defaultAIOptions.assistant in the snippet above

ai.suggestion

Description: AI Suggestions options (debounce, callbacks, etc.). See AI Suggestions.

Type: Object

Default: defaultAIOptions.suggestion in the snippet above

ai.callbacks

Description: Request lifecycle callbacks and custom parsing (most commonly used).

Type: { onRequest?, onMessage?, onStart?, onComplete?, onAbort?, onError? }

Default: {}

ai.callbacks.onRequest

Description: Unified hook to rewrite the request before sending. Inject auth headers, rewrite the body, etc.

Type: Function

Default: none

Returns: Object. See Fetch API

Example:

const onRequest = (context, params) => {
  console.log('onRequest', { context, params })
  return {
    // Inject auth headers
    headers: {
      Authorization: `Bearer ${options.server.token}`,
    },
    // Request body
    body: JSON.stringify(params),
  }
}

For the flow and default onRequest params, see: Frontend/Backend Flow

ai.callbacks.onMessage

Description: In custom protocol mode (model.protocol !== 'agui'), map backend SSE chunks to renderable segments (markdown/thinking/etc.). Implement this to customize message parsing.

Type: (context, chunk) => { type: string, data: any } | null

Default: none
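A minimal sketch of a custom parser, assuming the backend streams JSON chunks shaped like { delta, reasoning }. This payload shape is an assumption, not the library's protocol; adapt the field names to whatever your backend actually sends.

```javascript
// Sketch: map one backend SSE chunk to a renderable segment.
// Assumes chunks arrive as JSON strings like {"delta": "...", "reasoning": "..."}.
const onMessage = (context, chunk) => {
  let payload
  try {
    payload = JSON.parse(chunk)
  } catch {
    return null // skip keep-alive or non-JSON lines
  }
  if (payload.reasoning) {
    // Render model reasoning as a "thinking" segment
    return { type: 'thinking', data: payload.reasoning }
  }
  if (payload.delta) {
    // Render normal text deltas as markdown
    return { type: 'markdown', data: payload.delta }
  }
  return null
}
```

Returning null tells the renderer to ignore the chunk, which is useful for heartbeat lines in the stream.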

ai.callbacks.onStart

Description: Called when streaming starts.

Type: (context, chunk) => void

Default: none

ai.callbacks.onComplete

Description: Called when streaming completes (includes abort status, final params, etc.).

Type: (context, aborted, params, event) => void

Default: none

ai.callbacks.onAbort

Description: Called when the request is aborted (user clicks stop or component unmounts).

Type: (context) => void

Default: none

ai.callbacks.onError

Description: Called when an error occurs during request or parsing.

Type: (context, err) => void

Default: none
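Taken together, a sketch of wiring the lifecycle callbacks for logging or telemetry (the signatures follow the types documented above):

```javascript
// Sketch: lifecycle callbacks used purely for logging.
const callbacks = {
  onStart(context, chunk) {
    console.log('AI stream started')
  },
  onComplete(context, aborted, params, event) {
    console.log('AI stream finished', { aborted })
  },
  onAbort(context) {
    console.log('AI request aborted (stop clicked or component unmounted)')
  },
  onError(context, err) {
    console.error('AI request failed', err)
  },
}
```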

user

Current user info. AI passes user.id as userID to the backend. Chat UI may also use user.avatar. See User options.

document

Document info. AI passes document.id as documentID to the backend. See Document options.

onFileUpload

Required when ai.chat.files.enabled is enabled. See File options.

onFileDelete

Triggered when deleting an attachment (recommended to clean backend storage). See File options.

Methods

getAiEngine

Description: Get the chat engine state object (Vue ref). The document assistant and chat assistant share the same engine state. Use it to read the current model/messages/status, access the engine instance, and build deeper UI integrations or debugging tools.

Parameters: none

Returns: Ref<object>

AiEngine structure

AiEngine is an object wrapped in a Vue ref. Common fields:

  • from: undefined | 'assistant' | 'chat', last interaction source (written before send).
  • status: String, current request status ('idle' | 'pending' | 'streaming' | 'complete' | 'stop' | 'error').
  • messages: ChatMessage[], message list (roles: user/assistant/system).
  • assistantMessage: ChatMessage | undefined, last assistant message in document-assistant mode (for panel display and write-back).
  • engine: Object, internal chat engine instance for advanced customization (sending messages, debugging, analytics, etc.).
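A sketch of reading this state through a template ref, assuming the editor component exposes getAiEngine() as described above:

```vue
<template>
  <umo-editor ref="editorRef" v-bind="options" />
</template>

<script setup>
import { ref, watch } from 'vue'
import { UmoEditor } from '@umoteam/editor'

const options = ref({
  // ...
})
const editorRef = ref(null)

// Sketch: watch the shared engine status for debugging or analytics.
watch(
  () => editorRef.value?.getAiEngine()?.value?.status,
  (status) => {
    if (status) console.log('AI engine status:', status)
  },
)
</script>
```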

Slots

Use the floating_action slot to append custom content to the area where the “back to top” button is located.

Example:

<template>
  <umo-editor v-bind="options">
    <template #floating_action>
      <span>slot</span>
    </template>
  </umo-editor>
</template>
 
<script setup>
import { ref } from 'vue'
import { UmoEditor } from '@umoteam/editor'
 
const options = ref({
  // ...
})
</script>