A wearable communicator that reads your lip movements and turns them into speech. Just mouth the words, Mimiq does the talking.
Our team has experience at
Three simple steps from thought to speech
Simply move your lips as if speaking. No sound is required; Mimiq's sensors detect every subtle movement.
Our ML model processes sensor data in real time, translating lip movements into accurate text.
Text-to-speech with voice cloning produces natural, personalized audio in real time.
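The three steps above can be sketched as a simple capture → decode → synthesize pipeline. This is a hedged illustration only: every name here (`capture_movements`, `decode_movements`, `synthesize`) is a placeholder we invented for the sketch, not Mimiq's actual API, and each stage is stubbed with toy data.

```python
from typing import List

def capture_movements() -> List[float]:
    """Step 1: read a frame of raw lip-movement samples from the
    wearable sensor. Stubbed here with synthetic values."""
    return [0.12, 0.87, 0.45]  # placeholder sensor frame

def decode_movements(frame: List[float]) -> str:
    """Step 2: the ML model would map sensor frames to text.
    Stubbed with a fixed string for illustration."""
    return "hello world"

def synthesize(text: str) -> bytes:
    """Step 3: text-to-speech with voice cloning would render audio.
    Stubbed as UTF-8 bytes standing in for a waveform."""
    return text.encode("utf-8")

frame = capture_movements()
text = decode_movements(frame)
audio = synthesize(text)
print(text)  # "hello world"
```

In the real device the decode step would run a trained sequence model on streaming sensor data rather than return a fixed string; the point of the sketch is only the stage boundaries.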
From accessibility to enterprise, Mimiq adapts to your needs
Empower individuals with speech disabilities to communicate naturally and independently. Give everyone a voice.
Break language barriers instantly. Mouth words in your language, output speech in another. Perfect for globetrotters.
Silent, covert communication for tactical operations. Communicate without revealing your position.
From concept to working prototype
Achieve a functional sensor with reliable signal capture. Begin initial data collection for model training.
Collect ~10,000 samples of paired speech-and-movement data. Ensure a diverse dataset for robust model training.
Experiment with different neural network architectures and training techniques to optimize performance.
Implement transformer architecture for full sentences. Achieve 85%+ accuracy. Integrate text-to-speech with voice cloning.
Functional, wearable demo with complete mouth-to-speech translation. Testing, evaluation, and iteration complete.
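The roadmap's "85%+ accuracy" milestone implies an evaluation metric. As a hedged illustration of one way such a figure could be computed, here is a simple word-level accuracy function; the metric, function name, and examples are our assumptions, not Mimiq's stated evaluation methodology.

```python
def word_accuracy(predicted: str, reference: str) -> float:
    """Fraction of reference words matched position-by-position.
    A toy metric for illustration; real evaluation would likely use
    word error rate over a held-out test set."""
    pred_words = predicted.split()
    ref_words = reference.split()
    correct = sum(p == r for p, r in zip(pred_words, ref_words))
    return correct / max(len(ref_words), 1)

print(word_accuracy("mouth the words", "mouth the words"))  # 1.0
```

An 85% target under this kind of metric would mean at least 85 of every 100 reference words decoded correctly on unseen speakers and sentences.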
Hardware & Embedded Systems
Deep experience in embedded systems and signal processing, with hands-on hardware expertise from roles at Tesla, NVIDIA, and Apple.
Join our waitlist to get exclusive updates, early access to our prototype, and be part of shaping the future of silent communication.