
Mycroft: The open source answer to natural language platforms

We’re thrilled to be working with Mycroft, the open source answer to proprietary natural language platforms. Mycroft has adopted Ubuntu Core and Snaps to deliver its software to Mycroft hardware, and is using Snaps to enable desktop users to install the software regardless of the Linux distribution they are using. Mycroft CEO Joshua Montgomery explains more in his piece below.

One of the best things about the open source community is that it brings in talent from unexpected places. When people think about tech, they usually think of San Francisco, Tokyo or London, not Kansas. But thanks to the inclusiveness of open source, little Lawrence, Kansas is home to one of our community’s most innovative projects: Mycroft.

Mycroft is the open source answer to proprietary natural language platforms like Apple’s Siri or Amazon’s Alexa. Users can speak to the software naturally and receive a natural response. For example, if a user asks “Mycroft, how is the weather in Seattle?”, the system responds by saying “It is currently 60 degrees and raining. It is always 60 degrees and raining in Seattle.” In addition to voice responses, Mycroft can launch applications and initiate commands, so the platform can be used as a voice interface for almost anything. Developers are currently working to integrate Mycroft into devices ranging from a wireless speaker to an automobile.

Funded through a successful Kickstarter campaign, the team has been developing the software since April 2015 and released Mycroft to the public on May 20. Though the Kickstarter campaign revolved around the Mycroft reference device – a wireless speaker based on Raspberry Pi and Arduino – the software can be run on anything from a tablet to an automobile.

The Mycroft project has now released three packages: Adapt, Mimic and Mycroft Core. Adapt is an intent parser that takes in natural language and uses it to determine the user’s intent. Mimic is a text-to-speech engine based on the voice of Ubuntu community manager Alan Pope; it takes in text and converts it to audio for playback. Mycroft Core is the software that ties everything together and makes it useful. It includes a keyword recognition loop and a framework for deploying skills, which can cover anything from executing a shell command to searching DuckDuckGo. The Mycroft Skills Framework makes it easy for developers to implement new abilities for the platform. Skills are limited only by a developer’s imagination and can include anything from controlling a drone to answering questions about Pokémon.
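To make the Skills Framework more concrete, here is a minimal sketch of a skill, loosely modelled on the early Mycroft Core skill structure (a MycroftSkill subclass plus an Adapt intent). The WeatherSkill class, the WeatherKeyword and Location vocabulary entries and the spoken response are illustrative assumptions; a real skill would also ship vocabulary files defining those keywords, and the exact API may vary between Mycroft Core versions.

```python
# A minimal, illustrative Mycroft skill sketch. Assumes the early
# Mycroft Core skills API (MycroftSkill base class, Adapt's IntentBuilder).
# WeatherSkill, WeatherKeyword and Location are hypothetical names.
from adapt.intent import IntentBuilder
from mycroft.skills.core import MycroftSkill


class WeatherSkill(MycroftSkill):
    def __init__(self):
        super(WeatherSkill, self).__init__(name="WeatherSkill")

    def initialize(self):
        # Build an Adapt intent that fires when the weather keyword
        # (defined in the skill's vocabulary files) is recognised.
        intent = IntentBuilder("WeatherIntent") \
            .require("WeatherKeyword") \
            .optionally("Location") \
            .build()
        self.register_intent(intent, self.handle_weather_intent)

    def handle_weather_intent(self, message):
        # Any matched Location entity arrives in the message data;
        # the reply text is handed to Mimic for speech playback.
        location = message.data.get("Location", "Seattle")
        self.speak("It is currently 60 degrees and raining in %s." % location)

    def stop(self):
        pass


def create_skill():
    # Mycroft Core calls this factory function when loading the skill.
    return WeatherSkill()
```

Once dropped into the skills directory, Mycroft Core loads the skill, Adapt routes matching utterances to the handler, and Mimic speaks the response.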

The idea behind Mycroft is to allow users to voice-enable any type of device: desktops, mobile devices, speakers, robots, anything that can benefit from natural language processing. That is why the company is adopting Ubuntu Core. Deploying Mycroft on Ubuntu Core makes it easy to install and update the software without worrying about the underlying environment, which frees our developers to focus on creating a superb natural user interaction and fantastic skills rather than operating system issues.

Not only are we using Ubuntu Core and Snaps to deliver our software to the Mycroft hardware, but we are also working to use Snaps to enable desktop users to install the software regardless of the Linux distribution they are using. We see Snaps as a fantastic way to ensure users get the best Mycroft experience, by not having to worry about system library version mismatches or old versions of Mycroft in a distribution’s repositories. We are confident that delivering Mycroft using Snappy will provide a positive experience for our users.

The ultimate goal of the Mycroft project is to provide an experience so natural that it is impossible for users to determine if they are talking to a human or a machine. This will enable users to interact with their technology naturally.

Of course, there is a lot of work to be done to achieve this goal. The project’s speech-to-text component, OpenSTT, needs to be completed, the Mimic engine needs support for additional languages and Mycroft Core needs enhancements. Developers interested in trying Mycroft or contributing to the project can find the source code at http://docs.mycroft.ai
