
The emotion recognition model identifies the emotional states of both the AI character and the conversational partner. Drawing on the character's settings and desires, it produces an emotion vector and an appropriate emotional response.
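As a rough illustration of how such a pipeline could be structured, the sketch below scores emotions for an incoming utterance and normalizes them into a vector. The emotion labels, `EmotionState` class, and function names are illustrative assumptions, not the product's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical emotion labels; the real model's emotion space is not documented here.
EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "trust"]

@dataclass
class EmotionState:
    # One weight per emotion label, normalized to sum to 1.
    vector: dict = field(default_factory=lambda: {e: 1 / len(EMOTIONS) for e in EMOTIONS})

def recognize_emotions(utterance: str, character_settings: dict, desires: list) -> EmotionState:
    """Sketch: score each emotion for the incoming utterance, biased by the
    character's settings and active desires, then normalize into a vector."""
    scores = {e: 1.0 for e in EMOTIONS}          # placeholder base scores
    for desire in desires:                        # active desires nudge related emotions
        emo = desire.get("linked_emotion", "trust")
        if emo in scores:
            scores[emo] += desire.get("intensity", 0.5)
    total = sum(scores.values())
    return EmotionState(vector={e: s / total for e, s in scores.items()})

def emotional_response(state: EmotionState) -> str:
    """Pick the dominant emotion and phrase the reply accordingly."""
    dominant = max(state.vector, key=state.vector.get)
    return f"(responding with a {dominant} tone)"
```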
Desires are generated from the AI character's goals and current situation. Using a Monte Carlo Tree Search inference structure, the system then searches for a path to realize them.
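A minimal sketch of how desire realization could be framed as a Monte Carlo Tree Search is shown below. The node structure, `expand` and `simulate` callbacks, and reward handling are illustrative assumptions rather than the actual implementation.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state            # e.g. a partial plan toward realizing a desire
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def ucb(node, c=1.4):
    # Upper Confidence Bound: balances exploiting good plans and exploring new ones.
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(math.log(node.parent.visits) / node.visits)

def search(root_state, expand, simulate, iterations=200):
    """expand(state) -> list of next states; simulate(state) -> estimated reward in [0, 1]."""
    root = Node(root_state)
    for _ in range(iterations):
        # 1. Selection: descend by UCB until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=ucb)
        # 2. Expansion: add children for the possible next actions.
        for next_state in expand(node.state):
            node.children.append(Node(next_state, parent=node))
        # 3. Simulation: estimate how well this path realizes the desire.
        leaf = random.choice(node.children) if node.children else node
        reward = simulate(leaf.state)
        # 4. Backpropagation: propagate the reward up to the root.
        while leaf:
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    return max(root.children, key=lambda n: n.visits).state if root.children else root_state
```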
The memory system comprises self-awareness, rational memory, emotional memory, knowledge graph memory, and skill book memory. These distinct memory partitions, combined with intelligent scheduling, give the AI a memory system that closely resembles a human's.
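To make the idea concrete, here is a sketch of separate memory partitions with a simple recency-and-importance scheduler. The class name, partition keys, and scoring weights are assumptions for demonstration only.

```python
import time
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    content: str
    importance: float = 0.5
    timestamp: float = field(default_factory=time.time)

class CharacterMemory:
    """Sketch of partitioned memory with a recency/importance recall scheduler."""
    def __init__(self):
        self.partitions = {
            "self_awareness": [],     # who the character believes it is
            "rational": [],           # facts and reasoning traces
            "emotional": [],          # affect-laden episodes
            "knowledge_graph": [],    # structured entity/relation facts
            "skill_book": [],         # learned procedures and behaviors
        }

    def store(self, partition: str, content: str, importance: float = 0.5):
        self.partitions[partition].append(MemoryItem(content, importance))

    def schedule(self, partition: str, k: int = 5):
        """Return the k items most worth recalling: newer and more important first."""
        now = time.time()
        def score(item):
            recency = 1.0 / (1.0 + (now - item.timestamp))
            return 0.5 * recency + 0.5 * item.importance
        return sorted(self.partitions[partition], key=score, reverse=True)[:k]
```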
We used the stage configuration mode to customize the dialogue logic for car sales. This demo shows how effective the feature is in a concrete scenario.
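For illustration, a car-sales dialogue could be described as a set of stages with goals and transitions, as in the sketch below. The schema and field names are assumptions for demonstration, not the product's configuration format.

```python
# Illustrative stage configuration for a car-sales dialogue.
CAR_SALES_STAGES = {
    "greeting": {
        "goal": "welcome the customer and learn their name",
        "next": ["needs_discovery"],
    },
    "needs_discovery": {
        "goal": "identify budget, body style, and must-have features",
        "next": ["recommendation"],
    },
    "recommendation": {
        "goal": "suggest two or three models that match the stated needs",
        "next": ["objection_handling", "closing"],
    },
    "objection_handling": {
        "goal": "address concerns about price, financing, or features",
        "next": ["recommendation", "closing"],
    },
    "closing": {
        "goal": "offer a test drive or a follow-up appointment",
        "next": [],
    },
}
```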
This demo shows how effective conversations with fictional characters can be, and it is a good example of using the skill book.
We support voice-to-voice communication with NPCs. An NPC can reply with either voice or text, depending on how the user responds.
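A minimal sketch of that modality choice is shown below; the message schema and function name are hypothetical examples, not the actual interface.

```python
def choose_reply_modality(user_message: dict) -> str:
    """Sketch: reply in the same modality the user chose, falling back to text."""
    if user_message.get("type") == "voice":
        return "voice"   # synthesize speech for the NPC reply
    return "text"        # send the NPC reply as text
```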
We can clone different voices from audio samples. Here is a demo of a trained voice for GuanYu, a popular character from Romance of the Three Kingdoms.
Multiple languages are also supported. Below, the anime character Soryu Asuka Langley (惣流・アスカ・ラングレー) speaks with a Japanese voice.
Request API access for testing as early as possible.