[[Image:Amazon Echo Plus 02.jpg|thumb|The Amazon Echo, an example of a voice computer.]]
'''Voice computing''' is the discipline that develops hardware or software to process voice inputs.<ref>Schwoebel, J. (2018). An Introduction to Voice Computing in Python. Boston; Seattle; Atlanta: NeuroLex Laboratories. https://ift.tt/2Pj6QmN</ref>
It spans many fields, including [[human-computer interaction]], [[conversational computing]], [[linguistics]], [[natural language processing]], [[automatic speech recognition]], [[audio engineering]], [[digital signal processing]], [[cloud computing]], [[data science]], [[ethics]], [[law]], and [[information security]].
Voice computing has become increasingly significant with the advent of [[smart speakers]] such as the [[Amazon Echo]], voice assistants such as [[Google Assistant]], a shift towards [[serverless computing]], and improved accuracy of [[speech recognition]] and [[text-to-speech]] models.
==History==
Voice computing has a long history.<ref>Timeline for Speech Recognition. https://ift.tt/2QxlzHb</ref> Inspired by the human vocal tract, [[Wolfgang von Kempelen]] built an acoustic-mechanical speech machine that produced some of the earliest synthetic speech sounds. Thomas Edison later made it possible to record audio with [[dictation machine]]s and play it back in corporate settings. In the 1950s and 1960s, [[Bell Labs]], [[IBM]], and others made early attempts to build automated [[speech recognition]] systems. However, it was not until the 1980s, when [[Hidden Markov model]]s were used to recognize vocabularies of up to 1,000 words, that speech recognition systems became practically useful.
{| class="wikitable"
|-
! Date
! Event
|-
| 1784
| [[Wolfgang von Kempelen]] creates the Acoustic-Mechanical speech machine.
|-
| 1879
| [[Thomas Edison]] invents the first [[dictation machine]].
|-
| 1952
| [[Bell Labs]] releases [[Audrey]], capable of recognizing spoken digits with 90% accuracy.
|-
| 1962
| [[IBM Shoebox]] can recognize up to 16 words.
|-
| 1971
| [[Harpy]] is created, which can understand over 1,000 words.
|-
| 1986
| IBM Tangora uses [[Hidden Markov Models]] to predict phonemes in speech.
|-
| 2006
| [[National Security Agency]] begins research in hotword detection during normal conversations.
|-
| 2008
| [[Google]] launches a voice application, bringing speech recognition to mobile devices.
|-
| 2011
| [[Apple]] releases [[Siri]] on the iPhone.
|-
| 2014
| [[Amazon]] releases the [[Amazon Echo]], bringing voice computing to a broad consumer audience.
|}
Around 2011, [[Siri]] emerged on Apple iPhones as the first voice assistant widely accessible to consumers. This innovation prompted a dramatic shift toward voice-first computing architectures. Sony released the [[PS4]] in North America in 2013 (70+ million devices sold), Amazon released the [[Amazon Echo]] in 2014 (30+ million devices sold), [[Microsoft]] released Cortana in 2015 (400 million Windows 10 users), Google released [[Google Assistant]] in 2016 (2 billion monthly active users on Android phones), and Apple released the HomePod in 2018 (500,000 devices sold, with 1 billion devices running iOS and Siri). These shifts, along with advancements in cloud infrastructure (e.g. [[Amazon Web Services]]) and [[codec]]s, have solidified the voice computing field and made it widely relevant to the public at large.
==Hardware==
A <strong>voice computer</strong> is hardware and software assembled to process voice inputs.
Voice computers do not necessarily need a screen, as in the original [[Amazon Echo]]. In other embodiments, traditional [[laptop computer]]s or [[mobile phone]]s could serve as voice computers. Moreover, the number of interfaces for voice computers has grown with the advent of [[Internet of things|IoT]]-enabled devices, such as those built into cars or televisions.
As of September 2018, over 20,000 types of devices are compatible with Amazon Alexa.<ref>Voicebot.AI. https://ift.tt/2Pj67C5</ref>
==Software==
<strong>Voice computing software</strong> can read and write, record, clean, encrypt and decrypt, play back, transcode, transcribe, compress, publish, featurize, model, and visualize voice files.
Some popular software packages related to voice computing are listed below; a short usage sketch follows the list.
* <strong>[[FFmpeg]]</strong> - for [[transcoding]] audio files from one format to another (e.g. WAV to MP3). <ref>FFmpeg. https://ift.tt/2QxlAed</ref>
* <strong>[[Audacity]]</strong> - for recording and filtering audio. <ref>Audacity. https://ift.tt/2Pj6897</ref>
* <strong>[[SoX]]</strong> - for manipulating audio files and removing environmental noise. <ref>SoX. https://ift.tt/2Qztp3l</ref>
* <strong>Natural Language Toolkit (NLTK)</strong> - for featurizing transcripts with information such as [[parts of speech]]. <ref>NLTK. https://ift.tt/2Pj69db</ref>
* <strong>LibROSA</strong> - for visualizing audio file spectrograms and featurizing audio files. <ref>LibROSA. https://ift.tt/2QxlC5P</ref>
* <strong>[[OpenSMILE]]</strong> - for featurizing audio files with features such as mel-frequency cepstrum coefficients. <ref>OpenSMILE. https://ift.tt/2Pj69Kd</ref>
* <strong>PocketSphinx</strong> - for transcribing speech files into text. <ref>PocketSphinx. https://ift.tt/2QxlDGV</ref>
* <strong>Pyttsx3</strong> - for synthesizing speech from text (text-to-speech). <ref>Pyttsx3. https://ift.tt/2Pj6U61</ref>
* <strong>Pycryptodome</strong> - for encrypting and decrypting audio files. <ref>Pycryptodome. https://ift.tt/2QxlEKZ</ref>
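To illustrate how a few of these packages might be combined, the following sketch (not drawn from any of the cited references) extracts mel-frequency cepstrum coefficients with LibROSA and speaks a short summary aloud with Pyttsx3. It assumes both packages are installed and that a recording named <code>sample.wav</code> exists in the working directory; the file name is purely illustrative.
<syntaxhighlight lang="python">
import librosa   # audio loading and feature extraction
import pyttsx3   # offline text-to-speech engine

# Load the recording; librosa resamples to 22,050 Hz by default.
signal, sample_rate = librosa.load("sample.wav")

# Featurize: 13 mel-frequency cepstrum coefficients per analysis frame.
mfccs = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=13)
print("MFCC matrix shape:", mfccs.shape)  # (13, number_of_frames)

# Speak a short summary of the result aloud.
engine = pyttsx3.init()
engine.say("Extracted {} frames of features.".format(mfccs.shape[1]))
engine.runAndWait()
</syntaxhighlight>
Transcoding the same recording to another format is typically done by invoking FFmpeg from the command line, e.g. <code>ffmpeg -i sample.wav sample.mp3</code>.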
==Applications==
Voice computing applications span many industries, including voice assistants, healthcare, e-commerce, finance, supply chain, agriculture, text-to-speech, security, marketing, customer support, recruiting, cloud computing, microphone design, and podcasting. Voice technology has been projected to grow at a compound annual growth rate (CAGR) of 19-25% through 2025, attracting interest from startups and investors. <ref>Businesswire. https://ift.tt/2Pj6bSl</ref>
{| class="wikitable"
|-
! Use case
! Example Product or Startup
|-
| [[Voice assistants]]
| [[Cortana]]<ref>Cortana. https://ift.tt/2QAxW5q</ref>, [[Amazon Alexa]]<ref>Amazon Alexa. https://ift.tt/2Pj6cpn</ref>, [[Siri]]<ref>Siri. https://ift.tt/2QxlFP3</ref>, [[Google Assistant]]<ref>Google Assistant. https://ift.tt/2Pj6VH7</ref>, [[Apple HomePod]]<ref>HomePod. https://ift.tt/2QubR8p</ref>, [[Jasper]]<ref>Jasper. https://ift.tt/2Pj6dtr</ref>, and Nala<ref>Nala. https://ift.tt/2QyXXlL</ref>
|-
| [[Healthcare]]
| Cardiocube<ref>Cardiocube. https://ift.tt/2Pj6e0t</ref>, Toneboard<ref>Toneboard. https://ift.tt/2QxlHXb</ref>, Suki<ref>Suki. https://ift.tt/2Pj6exv</ref>, Praktice.ai<ref>Praktice.ai. https://ift.tt/2QzaMfM</ref>, Corti<ref>Corti. https://ift.tt/2Pj6XPf</ref>, and Syllable<ref>Syllable. https://ift.tt/2QxlL9n</ref>
|-
| [[e-Commerce]]
| Cerebel<ref>Cerebel. https://ift.tt/2Pj6YTj</ref>, Voysis<ref>Voysis. https://ift.tt/2QAN0jE</ref>, Mindori<ref>Mindori. https://ift.tt/2Pj6fBz</ref>, Twiggle<ref>Twiggle. https://ift.tt/2QtPG2i</ref>, and AddStructure<ref>AddStructure. https://ift.tt/2Pj6ZXn</ref>
|-
| [[Finance]]
| Kasisto<ref>Kasisto. https://ift.tt/2QAlBye</ref>, Personetics<ref>Personetics. https://ift.tt/2Pj711r</ref>, Voxo<ref>Voxo. https://ift.tt/2QxlMKt</ref>, and Active Intelligence<ref>Active Intelligence. https://ift.tt/2Pj6gFD</ref>
|-
| [[Supply Chain]] and [[Manufacturing]]
| Augury<ref>Augury. https://ift.tt/2QxlNhv</ref>, Kextil<ref>Kextil. https://ift.tt/2Pj6hJH</ref>, 3DSignals<ref>3DSignals. https://ift.tt/2QwDBta</ref>, Voxware<ref>Voxware. https://ift.tt/2Pj6jkN</ref>, and Otosense<ref>Otosense. https://ift.tt/2QxlOlz</ref>
|-
| [[Agriculture]]
| Agvoice<ref>Agvoice. https://ift.tt/2Pj74dD</ref>
|-
| [[Text-to-speech]]
| Lyrebird<ref>Lyrebird. https://ift.tt/2QubRoV</ref> and VocalID<ref>VocalID. https://ift.tt/2Pj74KF</ref>
|-
| [[Security]]
| Pindrop Security<ref>Pindrop. https://ift.tt/2QzyrNd</ref> and Aimbrain<ref>Aimbrain. https://ift.tt/2Pj75hH</ref>
|-
| [[Marketing]]
| Convirza<ref>Convirza. https://ift.tt/2QE08Vf</ref>, Dialogtech<ref>Dialogtech. https://ift.tt/2Pj75OJ</ref>, Invoca<ref>Invoca. https://ift.tt/2QE09sh</ref>, and Veritonic<ref>Veritonic. https://ift.tt/2Pj6lsV</ref>
|-
| [[Customer support]]
| Cogito<ref>Cogito. https://ift.tt/2QE0b3n</ref>, Afiniti<ref>Afiniti. https://ift.tt/2Ppjdhh</ref>, Aaron.ai<ref>Aaron.ai. https://ift.tt/2QxlRxL</ref>, Blueworx<ref>Blueworx. https://ift.tt/2Pqf6RV</ref>, Servo.ai<ref>Servo.ai. https://ift.tt/2QE0dbv</ref>, [[SmartAction]], and Chatdesk<ref>Chatdesk. https://ift.tt/2PjsZBv</ref>
|-
| [[Recruiting]]
| SurveyLex<ref>SurveyLex. https://ift.tt/2QxlS4N</ref> and Voice glance<ref>Voice glance. https://ift.tt/2PnGmR9</ref>
|-
| [[Speech-to-text]]
| Voicebase<ref>Voicebase. https://ift.tt/2QE0eMB</ref>, Speechmatics<ref>Speechmatics. https://ift.tt/2PnGrEr</ref>, Capio<ref>Capio. https://ift.tt/2QxlT8R</ref>, [[Nuance]], and Spitch<ref>Spitch. https://ift.tt/2PoQOb6</ref>
|-
| [[Cloud computing]]
| AWS<ref>AWS. https://ift.tt/2QGgFYo</ref>, GCP<ref>GCP. https://ift.tt/2Pl6FaK</ref>, IBM Watson<ref>IBM Watson. https://ift.tt/2QyXHTw</ref>, and Microsoft Azure<ref>Microsoft Azure. https://ift.tt/2PnFWKs</ref>
|-
| [[Microphone]]/[[speaker]] design
| Bose<ref>Bose speakers. https://ift.tt/2QyXIa2</ref> and Audio-Technica<ref>Audio-Technica. https://ift.tt/2PnFXOw</ref>
|-
| [[Podcasting]]
| Anchor<ref>Anchor. https://ift.tt/2Qva4Ag</ref> and iTunes<ref>iTunes. https://ift.tt/2PjPMwQ</ref>
|}
==Legal considerations==
In the United States, [[telephone recording laws]] vary by state: in some states it is legal to record a conversation with the consent of only one party, while in others the consent of all parties is required.
Moreover, the Children's Online Privacy Protection Act ([[COPPA]]) is a significant law protecting minors on the internet. With an increasing number of minors interacting with voice computing devices (e.g. Amazon Alexa devices), the [[Federal Trade Commission]] recently relaxed the COPPA rule so that children can issue voice searches and commands. <ref>Techcrunch. https://ift.tt/2QAxWCs</ref>
Lastly, the [[GDPR]] is a European Union regulation that, among other provisions, governs the [[right to be forgotten]] for EU citizens. The GDPR also requires companies to obtain consent when audio recordings are made and to define the purpose and scope of how the recordings will be used (e.g. for training purposes). The bar for valid consent has been raised under the GDPR: consent must be freely given, specific, informed, and unambiguous; tacit consent is no longer enough. <ref>IAPP. https://ift.tt/2PnFYC4</ref>
Taken together, these laws make it unclear how voice computing technology will be regulated in the future.
==Research Conferences==
There are many research conferences that relate to voice computing. Some of these include:
* [[International Conference on Acoustics, Speech, and Signal Processing]]
* Interspeech <ref>Interspeech 2018. https://ift.tt/2QATlLF</ref>
* AVEC, the Audio/Visual Emotion Challenge and Workshop <ref>AVEC 2018. https://ift.tt/2PnFZG8</ref>
* IEEE International Conference on Automatic Face and Gesture Recognition <ref>FG 2018. https://ift.tt/2QyxCUQ</ref>
* ACII, the International Conference on Affective Computing and Intelligent Interaction <ref>ACII 2019. https://ift.tt/2PnG0Kc</ref>
==Developer community==
As of January 2018, Google Assistant had roughly 2,000 actions. <ref>Voicebot.ai. https://ift.tt/2QxPNtN</ref>
As of September 2018, there were over 50,000 Alexa skills worldwide. <ref>Voicebot.ai. https://ift.tt/2wwsokC</ref>
In June 2017, [[Google]] released AudioSet,<ref>Google AudioSet. https://ift.tt/2QyXY9j</ref> a large-scale collection of human-labeled 10-second sound clips drawn from YouTube videos. It contains 1,010,480 clips labeled as human speech, or 2,793.5 hours in total.<ref>AudioSet data. https://ift.tt/2PnG1he</ref> It was released in conjunction with the IEEE ICASSP 2017 conference.<ref>Gemmeke, J. F., Ellis, D. P. W., Freedman, D., Jansen, A., Lawrence, W., Moore, R. C., Plakal, M., & Ritter, M. (2017, March). Audio Set: An ontology and human-labeled dataset for audio events. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 776-780). IEEE.</ref>
In November 2017, the [[Mozilla Foundation]] released the Common Voice Project, a collection of speech files intended to contribute to the larger open-source machine learning community.<ref>Common Voice Project. https://ift.tt/2Qzlu5V</ref><ref>Common Voice Project. https://ift.tt/2PnG2li</ref> The voicebank is currently 12 GB in size, with more than 500 hours of English-language voice data collected from 112 countries since the project's inception in June 2017.<ref>Mozilla's large repository of voice data will shape the future of machine learning. https://ift.tt/2QyRl6R</ref> This dataset has already contributed to projects such as DeepSpeech, an open-source transcription model (a brief usage sketch is shown below).<ref>DeepSpeech. https://ift.tt/2PnGosf</ref>
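As a rough illustration, transcription with the DeepSpeech engine might look like the sketch below. It assumes the <code>deepspeech</code> Python bindings (version 0.7 or later), a downloaded acoustic model file (here named <code>deepspeech-0.9.3-models.pbmm</code>), and a 16 kHz, 16-bit mono WAV recording; the file names are illustrative and not taken from the sources above.
<syntaxhighlight lang="python">
import wave
import numpy as np
from deepspeech import Model  # pip install deepspeech

# Load the pre-trained acoustic model (file name is an assumption).
model = Model("deepspeech-0.9.3-models.pbmm")

# Read a 16 kHz, 16-bit mono WAV file into an array of PCM samples.
with wave.open("recording_16k_mono.wav", "rb") as wav_file:
    frames = wav_file.readframes(wav_file.getnframes())
audio = np.frombuffer(frames, dtype=np.int16)

# Run speech-to-text and print the transcript.
print(model.stt(audio))
</syntaxhighlight>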
In August 2018, Jim Schwoebel<ref>Jim Schwoebel. https://ift.tt/2QEDSuo</ref> released a book, ''Introduction to Voice Computing in Python'', with an accompanying GitHub repository.<ref>Introduction to Voice Computing in Python. https://ift.tt/2PnG2Sk</ref> The book contains 10 chapters, and the repository contains over 200 starter scripts to help developers begin programming voice computing applications in Python.<ref>Voicebook. https://ift.tt/2QyxDbm</ref>
==See also==
*[[Speech Recognition]]
*[[Natural Language Processing]]
*[[Voice command device]]
*[[Voice user interface]]
*[[Audio codec]]
*[[Ubicomp]]
*[[Hands-free computing]]
==References==
{{Reflist}}
[[Category:Speech recognition]]
[[Category:History of human–computer interaction]]
[[Category:Voice technology]]
[[Category:Natural language processing]]
[[Category:Computational linguistics]]
[[Category:Computational fields of study]]
[[Category:Artificial intelligence]]