Choi Jung-hoon AI Human Research Project (JANNABI AI Digital Human)

2024 · Digital Human Research · California Institute of the Arts

Overview

Choi Jung-hoon AI Human reconstructs the lead vocalist of the Korean band JANNABI as a generative, interactive AI digital human. The project examines how identity, emotion, and authorship can be algorithmically simulated, transforming a figure of admiration into a computational presence. Through real-time speech, gesture response, and persona-driven interaction, the AI probes the blurred line between replication and authenticity.

Research Context

In an era where fandom, data, and simulation overlap, the project asks: “When emotion becomes code, can affection survive imitation?”

By reconstructing an admired artist, the work reframes parasocial devotion as a research methodology. The digital performer learns, speaks, and responds, turning fandom into feedback and making visible the emotional projection that fuels AI-human intimacy.

Digital Human Construction

High-resolution portrait input → Character Creator 4 Headshot → ZBrush refinement → Maya rigging and UV workflow → Unity real-time rendering.

Voice Cloning (ElevenLabs)

Interview and performance samples were processed through ElevenLabs Instant Voice Cloning, enabling spontaneous speech generation that mimics Choi Jung-hoon’s tone, timbre, and cadence. Real-time lip-sync was achieved through blendshape animation in Unity.
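
A minimal Unity-side sketch of an amplitude-driven approach to that lip-sync: the TTS clip’s audio output is sampled each frame and mapped to a jaw-open blendshape weight. The blendshape index, gain, and smoothing values are illustrative placeholders, not the project’s actual viseme setup.

```csharp
using UnityEngine;

// Minimal amplitude-driven lip-sync sketch. Assumes an AudioSource playing
// the ElevenLabs TTS clip and a SkinnedMeshRenderer whose blendshape at
// `jawOpenIndex` opens the jaw (index and tuning values are illustrative).
public class AmplitudeLipSync : MonoBehaviour
{
    [SerializeField] AudioSource voice;         // TTS playback source
    [SerializeField] SkinnedMeshRenderer face;  // head mesh with blendshapes
    [SerializeField] int jawOpenIndex = 0;      // placeholder blendshape index
    [SerializeField] float gain = 400f;         // maps RMS loudness to 0-100 weight
    [SerializeField] float smoothing = 12f;     // higher = snappier mouth

    readonly float[] samples = new float[256];
    float weight;

    void Update()
    {
        // Sample the most recent audio output and compute RMS loudness.
        voice.GetOutputData(samples, 0);
        float sum = 0f;
        foreach (float s in samples) sum += s * s;
        float rms = Mathf.Sqrt(sum / samples.Length);

        // Smoothly drive the jaw-open blendshape (Unity weights run 0-100).
        float target = Mathf.Clamp(rms * gain, 0f, 100f);
        weight = Mathf.Lerp(weight, target, Time.deltaTime * smoothing);
        face.SetBlendShapeWeight(jawOpenIndex, weight);
    }
}
```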

Conversational AI Integration

A GPT-based persona model was embedded in Unity: audience speech → transcription → GPT response → ElevenLabs TTS → real-time facial animation.
A coroutine-based pipeline, sketched below, keeps this loop running without blocking rendering, ensuring continuous, natural conversational flow.
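
A hedged sketch of what such a coroutine chain can look like, assuming transcription happens upstream. The OpenAI and ElevenLabs endpoints are real, but the keys, voice ID, and JSON helpers are placeholders rather than the project’s actual code.

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Coroutine sketch of the conversation loop: transcribed audience speech goes
// to a GPT persona model, the reply is synthesized by ElevenLabs, and the
// resulting clip plays on the avatar (feeding the lip-sync above). API keys,
// the voice ID, and the JSON helpers are placeholders, not project code.
public class ConversationPipeline : MonoBehaviour
{
    [SerializeField] AudioSource avatarVoice;

    public void OnTranscription(string userText) => StartCoroutine(Respond(userText));

    IEnumerator Respond(string userText)
    {
        // 1. Persona-conditioned chat completion.
        byte[] body = Encoding.UTF8.GetBytes(BuildChatJson(userText));
        using (var req = new UnityWebRequest("https://api.openai.com/v1/chat/completions", "POST"))
        {
            req.uploadHandler = new UploadHandlerRaw(body);
            req.downloadHandler = new DownloadHandlerBuffer();
            req.SetRequestHeader("Content-Type", "application/json");
            req.SetRequestHeader("Authorization", "Bearer OPENAI_KEY"); // placeholder
            yield return req.SendWebRequest();
            if (req.result != UnityWebRequest.Result.Success) yield break;

            string reply = ParseReply(req.downloadHandler.text);
            // 2. Hand the reply to text-to-speech.
            yield return StartCoroutine(Speak(reply));
        }
    }

    IEnumerator Speak(string text)
    {
        string url = "https://api.elevenlabs.io/v1/text-to-speech/VOICE_ID"; // placeholder ID
        byte[] body = Encoding.UTF8.GetBytes("{\"text\":\"" + text + "\"}");
        using (var req = new UnityWebRequest(url, "POST"))
        {
            req.uploadHandler = new UploadHandlerRaw(body);
            req.downloadHandler = new DownloadHandlerAudioClip(url, AudioType.MPEG);
            req.SetRequestHeader("Content-Type", "application/json");
            req.SetRequestHeader("xi-api-key", "ELEVENLABS_KEY"); // placeholder
            yield return req.SendWebRequest();
            if (req.result != UnityWebRequest.Result.Success) yield break;

            // 3. Play the synthesized clip; the lip-sync component reads its output.
            avatarVoice.clip = DownloadHandlerAudioClip.GetContent(req);
            avatarVoice.Play();
        }
    }

    // Placeholder helpers: build the persona-prompted request and parse
    // choices[0].message.content from the response (JSON handling elided).
    string BuildChatJson(string userText) => "{}";
    string ParseReply(string json) => json;
}
```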

Sensor-Based Interaction (Azure Kinect)

Azure Kinect provided body tracking, gesture detection, and depth information, allowing the AI human to react physically to visitors—leaning, nodding, and mirroring engagement. Audience forms were rendered as 3D point clouds, visualizing how AI perceives human presence.
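
A simplified sketch of the leaning and mirroring behavior, assuming a wrapper around the Azure Kinect Body Tracking SDK supplies the nearest visitor’s head position in Unity world space each frame; the bone choice and angles are illustrative, not the project’s tuned values.

```csharp
using UnityEngine;

// Sketch of the mirroring behavior. A wrapper around the Azure Kinect Body
// Tracking SDK (not shown) is assumed to set `VisitorHead` each frame with
// the nearest visitor's head position in Unity world space.
public class VisitorMirror : MonoBehaviour
{
    [SerializeField] Transform avatarSpine;      // bone to lean toward visitors
    [SerializeField] float maxLeanDegrees = 10f; // illustrative limit
    [SerializeField] float responsiveness = 3f;

    public Vector3? VisitorHead; // written by the (assumed) Kinect wrapper

    void LateUpdate()
    {
        if (VisitorHead == null) return;

        // Map the visitor's horizontal offset to a gentle spine roll, so the
        // avatar leans toward whoever is engaging with it.
        Vector3 toVisitor = VisitorHead.Value - avatarSpine.position;
        float side = Mathf.Clamp(toVisitor.x, -1f, 1f);
        Quaternion target = Quaternion.Euler(0f, 0f, -side * maxLeanDegrees);

        avatarSpine.localRotation = Quaternion.Slerp(
            avatarSpine.localRotation, target, Time.deltaTime * responsiveness);
    }
}
```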

Real-Time Unity Integration

All systems—speech, gesture, persona, emotional response, shaders—were unified in Unity’s real-time rendering pipeline, emphasizing digital fragility and emotional realism.
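
Conceptually, the integration reduces to a single controller routing sensor and speech events to the components sketched above; the wiring below is hypothetical but shows how the loop closes inside one Unity scene.

```csharp
using UnityEngine;

// Hypothetical glue showing how the loop closes in one scene: speech events
// enter the conversation pipeline (whose TTS clip feeds the lip-sync), while
// Kinect tracking drives the mirroring behavior. Names match the sketches above.
public class DigitalHumanController : MonoBehaviour
{
    [SerializeField] ConversationPipeline conversation;
    [SerializeField] VisitorMirror mirror;

    // Called by the (assumed) speech-to-text layer when a visitor finishes speaking.
    public void OnVisitorUtterance(string transcript) =>
        conversation.OnTranscription(transcript);

    // Called by the (assumed) Kinect wrapper with the nearest visitor's head.
    public void OnVisitorTracked(Vector3 headWorldPosition) =>
        mirror.VisitorHead = headWorldPosition;
}
```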

Artistic & Theoretical Focus

The project explores AI-mediated affection and the co-construction of identity in machine-human systems. By merging machine learning, motion capture, and emotional projection, the digital Choi Jung-hoon becomes both a subject of empathy and a simulation of it. This work rethinks authorship and digital intimacy in the era of computational performers.

Keywords

#digital-human #ai-persona #voice-cloning #real-time-interaction #cyborg-psychology

Credits

Principal Investigator / Technical Director: Jonghoon Ahn
Institution: California Institute of the Arts
Tools: Unity, Character Creator 4, ZBrush, Maya, ElevenLabs, Azure Kinect, GPT API