Voiceitt raises $4.7M to scale speech recognition for people with disabilities

Speech recognition technology company Voiceitt announced it has raised $4.7 million in funding. The startup plans to extend the round toward a new target of $10 million.

The round was led by AMIT Technion with participation from Third Culture Capital (3CC) and Cisco Investments.  

The Israeli company said it has raised $20 million since its founding in 2012, including $5 million in non-dilutive funding from grants and competitions.


Voiceitt offers an AI-based speech recognition app that enables individuals with speech impairments to communicate by translating atypical or unintelligible speech in real time.

The company will use the latest funds to scale its platform, accelerate commercialization and expand its proprietary speech database. 

“Thanks to the incredible support of new investors Cisco Investments and Third Culture Capital, the positive feedback for Voiceitt has been overwhelming and led to over-subscribing of the round. This enthusiasm affirms what we already know: enterprises need inclusive voice technology solutions to enhance their commercial landscape. We are excited for the continued momentum to make voice AI more accessible,” Karl Anderson, chairman of the board at Voiceitt and CEO of Viking Maccabee Ventures, wrote to MobiHealthNews in an email.


Voiceitt is an Amazon Alexa Fund portfolio company that participated in the Alexa Accelerator powered by Techstars in Seattle in 2018. In 2020, the company received $10 million in Series A funding to support individuals with speech impairments during the pandemic.

In October, the University of Illinois Urbana-Champaign announced its own initiative to expand speech capabilities for those with disabilities via the Speech Accessibility Project, a collaboration between the university, tech giants Amazon, Microsoft, Meta, Apple and Google, and nonprofit partners.

The project will collect speech samples from individuals with diverse speech patterns, with the university recruiting paid volunteers to contribute recorded voice samples. The collected samples will help train machine learning models to recognize varied speech patterns, with an initial focus on American English.
