Adventists - API Doc

    v2.5 Updates

    Overview#

    Version 2.5 introduces a large number of new features, including an extended Functions engine, an OpenAI-compatible API, on-device MCP support, and more. Built on the Anyone parallel-inference architecture, streaming is now the default—and only—response mode.

    Highlights:
    • Massive Functions expansion
    • New OpenAI-compatible /chat/completions endpoint
    • On-device MCP (Model Context Protocol) support
    • Streaming is now the default and only response mode
    • Role-based encapsulation for simpler integration


    Authentication#

    Every request must include the following headers:
    api_key: 32-character string
    org_id: 6-character string
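
    As a minimal sketch, the two headers can be attached to every request with the Python standard library. The base URL here is a placeholder, not from this document; only the two header names are:

    ```python
    import urllib.request

    BASE_URL = "https://api.example.com"  # placeholder host, substitute your actual endpoint

    def build_request(path, body=None):
        """Build a request that carries both required auth headers."""
        req = urllib.request.Request(BASE_URL + path, data=body)
        req.add_header("api_key", "0" * 32)  # 32-character API key
        req.add_header("org_id", "abc123")   # 6-character org ID
        return req
    ```

    The same helper works for every endpoint below, since all of them require the same two headers.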

    Functions Reference#

    v2.5 dramatically expands the Functions engine, giving the AI the ability to call external APIs and perform real-world tasks.

    Available Functions#

    | # | Function | Purpose | Trigger Example |
    |---|----------|---------|-----------------|
    | 1 | get_weather | Real-time weather for the next 5 days | "What's the weather like?" |
    | 2 | get_news | Latest headlines & hot events | "Any news today?" |
    | 3 | get_vision | Image recognition / processing | "What's in this picture?" |
    | 4 | dance | Trigger a character dance (ShuBan only) | "Dance for me!" |
    | 5 | search_internet | Tavily AI-powered web search | "Search for the latest GPU benchmarks" |
    | 6 | search_baidu | Baidu search optimised for Chinese | "Baidu today's stock market" |
    | 7 | read_url | Extract main text from any URL | "Summarise this article" |
    | 8 | book_house_service | Book cleaning, cooking, errands (Aiguo only) | "I need a cleaner tomorrow" |
    | 9 | car_sales | EV model & price lookup (SiYou only) | "Show me Tesla Model Y prices" |
    | 10 | tarot | Tarot card draw & reading | "Draw three cards for my career" |
    | 11 | bazi | Ba-Zi astrology via birth data | "Analyse my birth chart" |
    | 12 | search_memory | Search historical chat context | "What did I ask last Friday?" |
    | 13 | lingtong_action | Ling-tong special action (Ling-tong only) | "Perform action #3" |

    Configuration#

    1. Intro

    {
      "functions": ["search_internet", "tarot", "bazi"],
      "voice_id": "wukong"
    }

    2. Usage

    • Each role can enable multiple functions.
    • The system auto-selects the best function(s) per turn.
    • Functions can be chained.
    • Some require extra parameters (e.g., birth date for bazi).
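
    As an illustrative sketch (not part of the API itself), the enable-list in a role config can be sanity-checked against the v2.5 catalogue before it is sent:

    ```python
    # Function names from the v2.5 catalogue (table above).
    AVAILABLE_FUNCTIONS = {
        "get_weather", "get_news", "get_vision", "dance", "search_internet",
        "search_baidu", "read_url", "book_house_service", "car_sales",
        "tarot", "bazi", "search_memory", "lingtong_action",
    }

    def unknown_functions(config):
        """Return any function names in a role config that are not in the catalogue."""
        return [f for f in config.get("functions", []) if f not in AVAILABLE_FUNCTIONS]
    ```

    Catching a typo client-side avoids a round trip that would otherwise fail with an args error.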

    OpenAI-Compatible API#

    Role-based Design Philosophy#

    Unlike the traditional OpenAI library, we use a role-based wrapper:
    • No messages array needed – just send a prompt string.
    • Auto context management – the system keeps and uses conversation history automatically.
    • Role-driven interaction – pick the character you want to talk to with the model parameter.

    /chat/completions Endpoint#

    POST /chat/completions

    Authentication#

    Uses the same api_key and org_id headers described above.

    Request Body#

    {
        "model": "NPC ID",
        "prompt": "User Input",
        "stream": true,
        "lang": "default",
        "emoji_mode": true,
        "audio_mode": true,
        "emotion_status": false,
        "tools": ["search_internet", "tarot"],
        "tool_call": false,
        "function_id": null
    }

    Parameter Reference#

    | Parameter | Type | Required | Default | Description |
    |-----------|------|----------|---------|-------------|
    | model | string | Yes | - | NPC ID |
    | prompt | string | Yes | - | User input |
    | lang | string | No | "default" | Language |
    | emoji_mode | boolean | No | true | Enable emoji |
    | audio_mode | boolean | No | true | Enable auto break |
    | emotion_status | boolean | No | false | Enable emotion status |
    | tools | array | No | [] | Available tools list |
    | tool_call | boolean | No | false | Whether this is a tool call |
    | function_id | string | No | null | Function ID |

    Response Format#

    Responses follow the standard OpenAI streaming format.
    {
        "id": "chatcmpl-xxx",
        "object": "chat.completion.chunk",
        "created": 1234567890,
        "model": "NPC ID",
        "choices": [
            {
                "index": 0,
                "delta": {"content": "response content"},
                "finish_reason": null
            }
        ]
    }

    Sample#
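
    As a hedged sketch, a call to this endpoint can be assembled from the Parameter Reference above, and each streamed chunk parsed per the Response Format. Only the body fields and the chunk shape come from this document; transport details (host, SSE framing) are assumptions:

    ```python
    import json

    def build_body(npc_id, prompt, tools=None):
        """Request body for POST /chat/completions (see Parameter Reference)."""
        return {
            "model": npc_id,      # NPC ID selects the character
            "prompt": prompt,     # plain user input -- no messages array needed
            "stream": True,       # streaming is the only supported mode
            "lang": "default",
            "emoji_mode": True,
            "audio_mode": True,
            "emotion_status": False,
            "tools": tools or [],
            "tool_call": False,
            "function_id": None,
        }

    def extract_text(chunk_json):
        """Pull the delta content out of one streamed chunk."""
        chunk = json.loads(chunk_json)
        delta = chunk["choices"][0]["delta"]
        return delta.get("content", "")
    ```

    A client would send `build_body(...)` as JSON (with the api_key and org_id headers) and concatenate `extract_text(...)` over the incoming chunks until `finish_reason` is non-null.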

    On-Device MCP Support#

    What is On-Device MCP?#

    On-device MCP (Model Context Protocol) allows you to embed MCP directives directly within the messages you send, working in conjunction with server-side functions.
    Refer to the TCP interface MCP documentation for details.
    Alternatively, you can use the plain HTTP interface and pass MCP capabilities through functions.

    Streaming Response Features#

    Anyone Parallel-Inference Architecture#

    Streaming by default: All responses are streamed and cannot be disabled
    Parallel inference: Multiple inference tasks run concurrently
    Real-time: Results are emitted while still being computed, providing the best user experience

    Response Formats#

    Legacy interface: Server-Sent Events (SSE)
    OpenAI-compatible interface: Standard OpenAI streaming format

    Error Codes#

    | Error Code | Description |
    |------------|-------------|
    | 200 | success |
    | 400 | args error |
    | 401 | auth error |
    | 403 | forbidden |
    | 404 | not found |
    | 429 | too many requests |
    | 500 | internal error |
    | 601 | login ex |
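
    For illustration, the table maps naturally onto a small lookup; the descriptions are taken from the table above (601 kept verbatim) and are not official constants:

    ```python
    # Error-code descriptions from the table above.
    ERROR_CODES = {
        200: "success",
        400: "args error",
        401: "auth error",
        403: "forbidden",
        404: "not found",
        429: "too many requests",
        500: "internal error",
        601: "login ex",
    }

    def describe(code):
        """Human-readable description for an API error code."""
        return ERROR_CODES.get(code, f"unknown error code {code}")
    ```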

    Changelog#

    v2.5 Major Updates:#

    1. Significantly More Functions: Expanded from 4 to 13+
    2. OpenAI-Compatible API: New /chat/completions endpoint
    3. On-Device MCP Support: MCP directives can now be embedded directly in messages
    4. Streaming Response Optimized: Default streaming powered by the parallel-inference architecture
    5. Role-Based Encapsulation: Simplified calls with automatic context management
    6. Legacy Cleanup: Removed fast_mode, route, and related parameters

    Deprecated Features:#

    fast_mode parameter
    route parameter
    /prepare_fast_mode endpoint
    Non-streaming response mode

    Important Notes#

    1. Credential Security: Keep your API KEY and ORG ID safe
    2. Request Limits: Prompt length capped at 1,500 characters
    3. Rate Limiting: All endpoints enforce request-rate limits
    4. Function Permissions: Some functions require extra authorization
    5. Streaming Only: Only streaming responses are supported; they cannot be disabled
    6. Compatibility: New integrations should use the OpenAI-compatible endpoints
    Last modified: 2025-09-12 11:17:32