MyCroft (voice assistant) demo app
hummlbach last edited by hummlbach
Together with @jonius and @TimSueberkrueb, I had the great pleasure of building a small (about 250 MB) MyCroft demo click package. It's at a very early stage: the SSH server needs to be enabled on the device in order to use the app; it's known to crash on FP2@16.04 and email@example.com :-( (but nonetheless you could try whether it works on your FP2/Nexus 5), and it is kind of working on firstname.lastname@example.org and email@example.com, yippee :-). Details on the status can be found in the issue tracker. Everyone is welcome to test it (obviously)!
A little about how MyCroft works:
- The device listens for the wake word, currently "Hey MyCroft". That detection is done locally on the device.
- Everything said after the wake word is then sent to a speech-to-text web service. By default, Google's is used.
- The returned text is then mapped by the intent parser to a skill, and depending on the skill, different things happen.
- The skill returns text which is synthesized back to speech locally (also called text to speech).
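The four steps above can be sketched roughly like this. Note this is a toy illustration, not MyCroft's actual API: every function name here is made up, and the real system works on audio with a local wake-word engine, a remote STT service, the Adapt intent parser, and a local TTS engine.

```python
# Toy sketch of the wake word -> STT -> intent -> TTS pipeline described above.
# All names and bodies are stand-ins for illustration only.

def heard_wake_word(audio: str) -> bool:
    # Step 1: local wake-word detection. Real systems match an acoustic
    # model against audio frames; here we just check a string prefix.
    return audio.lower().startswith("hey mycroft")

def speech_to_text(audio: str) -> str:
    # Step 2: placeholder for the remote STT call (Google's by default).
    return audio.lower().removeprefix("hey mycroft").strip()

def parse_intent(text: str) -> str:
    # Step 3: toy intent parser mapping an utterance to a skill response.
    if "time" in text:
        return "It is twelve o'clock."  # what a hypothetical time skill might say
    return "Sorry, I don't know that yet."

def text_to_speech(text: str) -> str:
    # Step 4: local synthesis; here we simply return what would be spoken.
    return text

def handle(audio: str):
    if not heard_wake_word(audio):
        return None  # everything before the wake word is ignored
    return text_to_speech(parse_intent(speech_to_text(audio)))

print(handle("Hey MyCroft what time is it"))  # -> "It is twelve o'clock."
```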
A few sentences about privacy concerns and future plans:
- To avoid having the microphone always on (besides the privacy concerns, it has a big impact on the battery, too), it would be nice to wake MyCroft by pressing a hardware button. Whether this will be possible or not depends on how it is integrated into/packaged for Ubuntu Touch.
- Most probably it won't be possible to do the speech-to-text locally on the device. There will be a setting to select the service used (there are others besides Google's), but if you want voice input, you'll have to send your speech to some service, or use a chat bot instead; @TimSueberkrueb is thinking about implementing the latter.
- At the moment it's mostly fun. But my actual motivation for having MyCroft was practical: being able to call mum while driving, or adding milk to the buy list while baking... Therefore we have to interface with the apps (for example), and that should of course happen locally again, so that for the useful things only your verbal request (i.e. "call mum" or "add milk to the buy list") is converted to text online and the rest happens locally...
The issue tracker already more or less reflects these plans. I'm very curious what you think about that and hope you enjoy testing.
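The privacy-friendly split described above, where only the raw utterance goes online for speech-to-text while intent matching and skill dispatch stay on the device, could look roughly like this. The patterns and skill names are invented for illustration and are not MyCroft's actual skill API:

```python
import re

# Toy local intent matcher for the "useful things" mentioned above.
# Only the utterance text (already transcribed) is processed here, locally.
INTENTS = [
    (re.compile(r"call (?P<contact>\w+)"), "phone.call"),          # hypothetical skill
    (re.compile(r"add (?P<item>.+) to the buy list"), "shopping.add"),  # hypothetical skill
]

def match_intent(utterance: str):
    """Return (skill_name, slots) for a recognised utterance, or None."""
    for pattern, skill in INTENTS:
        m = pattern.search(utterance.lower())
        if m:
            return skill, m.groupdict()
    return None

print(match_intent("Call Mum"))                  # -> ('phone.call', {'contact': 'mum'})
print(match_intent("add milk to the buy list"))  # -> ('shopping.add', {'item': 'milk'})
```

The point of the design is that this dictionary of patterns and the skills behind it never need network access; only the speech-to-text step does.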
kugiigi last edited by
I would suggest you publish this in the OpenStore so that you can get more testing from users :)
hummlbach last edited by
@kugiigi Yes, I would really like to do that. The very early tests have shown that I need to make one or two improvements first. And I have to check with the OpenStore team whether they accept such a hacky (and it's really hacky ;-)) click...
Bolly last edited by
Video demo? :P