
Hacking Through Machine Learning at the OpenPOWER Developer Congress



By Sumit Gupta, Vice President, IBM Cognitive Systems

Ten years ago, every CEO leaned over to his or her CIO and CTO and said, “We’ve got to figure out big data.” Five years ago, they leaned over and said, “We’ve got to figure out cloud.” This year, every CEO is asking their team to figure out “AI”, or artificial intelligence.

IBM laid out an accelerated computing future several years ago as part of our OpenPOWER initiative. This accelerated computing architecture has now become the foundation of modern AI and machine learning workloads such as deep learning. Deep learning is so compute intensive that a single training run can take days, if not weeks, even with several GPUs in one server.

The OpenPOWER architecture thrives on this kind of compute intensity. The POWER processor has much higher compute density than x86 CPUs (there are up to 192 virtual cores per CPU socket in POWER8). This compute density, combined with high-speed accelerator interfaces like NVLink and CAPI that tightly couple GPUs to the CPU, provides a substantial performance benefit. And the broad OpenPOWER Linux ecosystem, with 300+ members, means that you can run these high-performance POWER-based systems in your existing data center, either on-prem or from your favorite POWER cloud provider, at costs comparable to legacy x86 architectures.
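To get a feel for how such a system presents itself to software, here is a minimal sketch (an illustrative assumption, not something from the Congress materials: it presumes a Linux host with the NVIDIA driver and the standard nvidia-smi utility installed). It prints the logical CPUs the OS sees, which on POWER8 includes the SMT threads behind the “virtual cores” figure above, and the GPU interconnect matrix, where NVLink-attached GPUs show up as NV links rather than plain PCIe:

```python
# Minimal sketch: report the logical CPU count (SMT threads appear as logical
# CPUs on POWER8) and the GPU/CPU interconnect topology. Assumes the NVIDIA
# driver and the standard nvidia-smi utility are installed; output varies by machine.
import os
import subprocess

print(f"Logical CPUs visible to the OS: {os.cpu_count()}")

# "nvidia-smi topo -m" prints an interconnect matrix; NVLink connections are
# labelled NV1, NV2, ... while plain PCIe paths show labels such as PIX or PHB.
try:
    topo = subprocess.run(["nvidia-smi", "topo", "-m"],
                          capture_output=True, text=True, check=True)
    print(topo.stdout)
except (FileNotFoundError, subprocess.CalledProcessError):
    print("nvidia-smi is not available on this machine")
```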

Take a Hack at the Machine Learning Work Group

The recently formed OpenPOWER Machine Learning Work Group gathers experts in the field to focus on the challenges machine learning developers face. Participants identify use cases, define requirements, and collaborate on solution architecture optimisations. By gathering in a tightly focused work group, people from diverse organisations can better understand shared needs and pain points and engineer solutions that address them.

The OpenPOWER Foundation pursues technical solutions built on the POWER architecture through a variety of member-run work groups. The Machine Learning Work Group is a great example of how hardware and software can be leveraged and optimized across solutions that span the OpenPOWER ecosystem.

Accelerate Your Machine Learning Solution at the Developer Congress

This spring, the OpenPOWER Foundation will host the OpenPOWER Developer Congress, a “get your hands dirty” event on May 22-25 in San Francisco. This unique event gives developers the opportunity to create and advance OpenPOWER-based solutions by taking advantage of on-site mentoring, learning from peers, and networking with developers, technical experts, and industry thought leaders. If you are a developer working on machine learning solutions that employ the POWER architecture, this event is for you.

The Congress focuses on full-stack solutions: software, firmware, hardware infrastructure, and tooling. It’s a hands-on opportunity to ideate, learn, and develop solutions in a collaborative and supportive environment. By the end of the Congress, you will have a significant head start on developing new solutions that use OpenPOWER technologies and incorporate OpenPOWER Ready products.

There has never been another event like this one. It’s a developer conference devoted to developing, not sitting through slideware presentations or sales pitches. Industry experts from the top companies that are innovating in deep learning, machine learning, and artificial intelligence will be on hand for networking, mentoring, and providing advice.

A Developer Congress Agenda Specific to Machine Learning

The OpenPOWER Developer Congress agenda addresses a variety of machine learning topics. For example, you can take part in hands-on VisionBrain training, in which you train a new image-classification model on your own family pictures and generate an API for it (a rough sketch of that kind of workflow follows the agenda list below). The current agenda includes:

• VisionBrain: Deep Learning Development Platform for Computer Vision
• GPU Programming Training, including OpenACC and CUDA
• Inference System for Deep Learning
• Intro to Machine Learning / Deep Learning
• Develop / Port / Optimize on Power Systems and GPUs
• Advanced Optimization
• Spark on Power for Data Science
• OpenStack and Database as a Service
• OpenBMC
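
VisionBrain’s own tooling and API are not shown here; as a stand-in for the image-classification exercise described above, a minimal sketch using TensorFlow/Keras (one of the frameworks listed for the Congress) might look like the following. The directory name family_photos and the MobileNetV2 backbone are illustrative assumptions, not part of the VisionBrain platform.

```python
# Hypothetical sketch of the image-classification exercise from the VisionBrain
# session, written against TensorFlow/Keras as a stand-in API. Assumes a local
# directory "family_photos/" with one sub-folder of images per label.
import tensorflow as tf

IMG_SIZE = (224, 224)

# Load the labelled photos straight from disk; labels come from folder names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "family_photos", image_size=IMG_SIZE, batch_size=32)
num_classes = len(train_ds.class_names)

# Reuse a pretrained backbone and train only a small classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    # MobileNetV2 expects pixel values scaled to the [-1, 1] range.
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

The VisionBrain session also covers generating an inference API from the trained model; the sketch above stops at training.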

Bring Your Laptop and Your Best Ideas

The OpenPOWER Developer Congress will take place May 22-25 in San Francisco. The event will provide ISVs with the development, porting, and optimization tools and techniques needed to work with multiple technologies, for example PowerAI, TensorFlow, Chainer, Anaconda, GPUs, FPGAs, CAPI, POWER, and OpenBMC. So bring your laptop and preferred development tools and prepare to get your hands dirty!
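
Before you arrive, it is worth checking that your framework build can actually see the accelerators you plan to use. A minimal sanity check with TensorFlow (one of the frameworks listed above; assumes a recent 2.x build) might be:

```python
# Quick sanity check that this TensorFlow build can see the GPUs on the
# machine you plan to develop on; prints zero GPUs on a CPU-only laptop.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"TensorFlow {tf.__version__} sees {len(gpus)} GPU(s)")
for gpu in gpus:
    print("  ", gpu.name)
```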

About the author

Sumit Gupta is Vice President, IBM Cognitive Systems, where he leads the product and business strategy for HPC, AI, and Analytics. Sumit joined IBM two years ago from NVIDIA, where he led the GPU accelerator business.
