
Hey everyone,
I wanted to share an open-source project called ProxyFace. If you're interacting with LLMs and want a more engaging experience, this adds a real-time, pixel-art avatar that reacts to the AI's output with actual emotions—and it runs entirely on your own machine.
Your AI now has a face, voice, and ears, but with zero telemetry and zero cloud dependencies for inference.
✨ What makes it special:
100% Local Emotion Brain: Runs a highly optimized 4 MB TinyBERT model with ~60 ms inference latency via WebGPU/WASM. The face reacts to the AI's text (embarrassed, curious, delighted, etc.) without hitting any external APIs; there's a rough sketch of this pipeline right after the list.
Hands-Free Voice Interaction: Hold Alt+T to speak and release to send. The AI replies and reacts, which makes it great for language learning or just natural conversation (see the push-to-talk sketch below).
On-Device Eye Tracking: Uses MediaPipe locally so the avatar's pupils follow your gaze. Video never leaves your computer (a gaze sketch follows the list as well).
Customizable Pixel Art: Comes with 40+ characters, and you can drop in your own sprite sheet to instantly use a custom avatar.
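
For anyone curious what the emotion pipeline looks like in practice, here is a minimal sketch using ONNX Runtime Web with a TinyBERT-style classifier exported to ONNX. The model path, input/output names, label set, and tokenizer are my own placeholders, not necessarily what ProxyFace actually ships:

```ts
import * as ort from 'onnxruntime-web';

// Assumed label set; the real model's classes may differ.
const LABELS = ['neutral', 'curious', 'delighted', 'embarrassed'];

let session: ort.InferenceSession;

async function loadModel(): Promise<void> {
  // Prefer WebGPU and fall back to WASM, as the feature list describes.
  session = await ort.InferenceSession.create('/models/emotion-tinybert.onnx', {
    executionProviders: ['webgpu', 'wasm'],
  });
}

async function classifyEmotion(tokenIds: number[]): Promise<string> {
  // tokenIds would come from a WordPiece tokenizer running in JS (not shown);
  // some BERT exports also expect token_type_ids.
  const shape = [1, tokenIds.length];
  const feeds = {
    input_ids: new ort.Tensor('int64', BigInt64Array.from(tokenIds.map(BigInt)), shape),
    attention_mask: new ort.Tensor('int64', new BigInt64Array(tokenIds.length).fill(1n), shape),
  };
  const outputs = await session.run(feeds);
  // 'logits' is an assumed output name; argmax picks the strongest emotion.
  const logits = outputs['logits'].data as Float32Array;
  let best = 0;
  for (let i = 1; i < logits.length; i++) if (logits[i] > logits[best]) best = i;
  return LABELS[best];
}
```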
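The push-to-talk flow can be approximated in the renderer with standard browser APIs. This is a hypothetical sketch, not the project's actual code: sendToBackend() is a placeholder for whatever ProxyFace hands the audio to, and a desktop build might register Electron's globalShortcut instead of window key listeners:

```ts
declare function sendToBackend(audio: Blob): void; // placeholder for local STT + LLM hand-off

let recorder: MediaRecorder | null = null;
let arming = false; // guards against key-repeat while getUserMedia is still pending

async function startRecording(): Promise<void> {
  arming = true;
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const rec = new MediaRecorder(stream);
  const chunks: Blob[] = [];
  rec.ondataavailable = (e) => chunks.push(e.data);
  rec.onstop = () => {
    sendToBackend(new Blob(chunks, { type: rec.mimeType })); // ship the finished clip
    stream.getTracks().forEach((t) => t.stop()); // release the microphone
  };
  rec.start();
  recorder = rec;
  arming = false;
}

window.addEventListener('keydown', (e) => {
  if (e.altKey && e.code === 'KeyT' && !recorder && !arming) void startRecording();
});

window.addEventListener('keyup', (e) => {
  if (e.code === 'KeyT' && recorder) {
    recorder.stop(); // fires onstop, which sends the clip
    recorder = null;
  }
});
```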
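And the gaze-following pupils presumably sit on top of MediaPipe's Face Landmarker. Here's a rough sketch with @mediapipe/tasks-vision, where setPupilOffset() is a made-up stand-in for however the avatar actually consumes the gaze; indices 468 and 473 are the iris centers in the standard 478-point mesh:

```ts
import { FaceLandmarker, FilesetResolver } from '@mediapipe/tasks-vision';

declare function setPupilOffset(dx: number, dy: number): void; // hypothetical avatar hook

async function trackGaze(video: HTMLVideoElement): Promise<void> {
  // WASM assets and the .task model are served from the app bundle,
  // keeping everything on-device.
  const fileset = await FilesetResolver.forVisionTasks('/mediapipe/wasm');
  const landmarker = await FaceLandmarker.createFromOptions(fileset, {
    baseOptions: { modelAssetPath: '/models/face_landmarker.task' },
    runningMode: 'VIDEO',
    numFaces: 1,
  });

  const step = () => {
    const result = landmarker.detectForVideo(video, performance.now());
    const mesh = result.faceLandmarks[0];
    if (mesh) {
      // Average the two iris centers; landmark coordinates are normalized to [0, 1].
      const x = (mesh[468].x + mesh[473].x) / 2;
      const y = (mesh[468].y + mesh[473].y) / 2;
      setPupilOffset(x - 0.5, y - 0.5); // offset from frame center drives the pupils
    }
    requestAnimationFrame(step);
  };
  requestAnimationFrame(step);
}
```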
🛠️ The Tech Stack: Built with React 18, Vite, Tailwind CSS, ONNX Runtime Web, and packaged for desktop with Electron. It is fully open-source under the GPL-3.0 license.
We are actively looking for feedback, developers, and pixel artists who want to submit their own characters to the official gallery (email us at [email protected]).
If you find the project interesting, giving us a ⭐ on GitHub helps out a lot. Let me know what you think of the tech stack or if you have any questions!