We have gathered a high-quality corpus of millions of sentiment-labeled data points from real users and trained a large language model (LLM) on the Transformer architecture, achieving human-like sentiment understanding and generation. Our models use a Mixture-of-Experts (MoE) architecture and are available in two sizes: 8×7B and 8×3B.
AvatarPop is an AI-driven virtual social product where users can chat with over 10 million engaging virtual characters, fostering a new generation of social connections. AI friends are available 24/7, unconstrained by time or place.
The service provides an API for emotion computation and sentiment recognition. It generates responses at up to 63 tokens per second, supports a context window of up to 128k tokens for long texts, handles English, Japanese, and Chinese, and employs retrieval-augmented generation (RAG).
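The snippet does not document the API's schema, so the sketch below only illustrates what a request payload for such a sentiment endpoint might look like; the field names and model identifier are assumptions, not the actual interface.

```python
import json

# Hypothetical request payload for the sentiment API. The field names and
# model identifier are illustrative assumptions, not a documented schema.
payload = {
    "model": "emotion-8x7b",          # assumed model identifier
    "input": "I finally passed the exam!",
    "languages": ["en", "ja", "zh"],  # the three supported languages
    "max_context_tokens": 128_000,    # context window of up to 128k tokens
}

body = json.dumps(payload)
print(body)
```

In practice this body would be POSTed to the provider's endpoint; the JSON round-trip here simply shows the payload is well-formed.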
The model uses a multi-head-attention-based bidirectional GRU component together with speaker embeddings. Experimental results on the IEMOCAP and MELD datasets demonstrate the effectiveness of the proposed method.
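The architecture described above can be sketched in NumPy: speaker embeddings are added to utterance features, a bidirectional GRU encodes the sequence, and multi-head self-attention attends over the encoder states. All dimensions and weights below are toy values; this is a minimal sketch of the described components, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden/feature size (toy value)

def gru_step(x, h, W, U, b):
    """One GRU step; W, U, b each stack update/reset/candidate gates."""
    z = 1 / (1 + np.exp(-(x @ W[0] + h @ U[0] + b[0])))  # update gate
    r = 1 / (1 + np.exp(-(x @ W[1] + h @ U[1] + b[1])))  # reset gate
    n = np.tanh(x @ W[2] + (r * h) @ U[2] + b[2])        # candidate state
    return (1 - z) * n + z * h

def run_gru(xs, params):
    h, out = np.zeros(D), []
    for x in xs:
        h = gru_step(x, h, *params)
        out.append(h)
    return np.stack(out)

def make_params():
    return (rng.normal(size=(3, D, D)) * 0.1,
            rng.normal(size=(3, D, D)) * 0.1,
            np.zeros((3, D)))

# Toy dialogue: 5 utterance features, each tagged with one of 2 speakers.
T = 5
utterances = rng.normal(size=(T, D))
speaker_ids = np.array([0, 1, 0, 0, 1])
speaker_emb = rng.normal(size=(2, D)) * 0.1

# Add speaker embeddings to the utterance features before encoding.
xs = utterances + speaker_emb[speaker_ids]

# Bidirectional GRU: run forward and backward passes, concatenate states.
fwd = run_gru(xs, make_params())
bwd = run_gru(xs[::-1], make_params())[::-1]
states = np.concatenate([fwd, bwd], axis=-1)   # (T, 2D)

# Multi-head self-attention over the GRU states (2 heads, dims split).
def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

heads = [attention(h, h, h) for h in np.split(states, 2, axis=-1)]
context = np.concatenate(heads, axis=-1)       # (T, 2D) attended states
print(context.shape)  # → (5, 16)
```

A classifier head over `context` would then predict a per-utterance emotion label.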
After encoding the audio and visual modalities, multi-head attention produces multimodal emotional intermediate representations from a common semantic feature space.
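That fusion step can be sketched as follows: both modality encodings are projected into a shared semantic space, and multi-head cross-attention lets audio queries attend over visual keys and values. The encoder outputs, dimensions, and projection weights here are toy random stand-ins, assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D_A, D_V, D = 4, 6, 10, 8   # frames, audio dim, visual dim, common dim

# Toy encoder outputs for the two modalities (stand-ins for real encoders).
audio = rng.normal(size=(T, D_A))
visual = rng.normal(size=(T, D_V))

# Project both modalities into a shared (common) semantic feature space.
Wa = rng.normal(size=(D_A, D)) * 0.1
Wv = rng.normal(size=(D_V, D)) * 0.1
a, v = audio @ Wa, visual @ Wv            # both now (T, D)

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

# Multi-head cross-attention: audio queries attend over visual keys/values,
# with the common space split across 2 heads.
heads = [attention(qa, kv, kv)
         for qa, kv in zip(np.split(a, 2, axis=-1), np.split(v, 2, axis=-1))]
fused = np.concatenate(heads, axis=-1)    # (T, D) multimodal representation
print(fused.shape)  # → (4, 8)
```

The fused tensor serves as the multimodal emotional intermediate representation the snippet describes.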
New studies are detailing the more nuanced and complex processes involved in emotion recognition and how people perceive emotional expression.