All posts from August 2017


What is machine learning? Software derived from data

You’ve probably encountered the term “machine learning” more than a few times lately. Often used interchangeably with artificial intelligence, machine learning is in fact a subset of AI, both of which can trace their roots to MIT in the late 1950s.

Machine learning is something you probably encounter every day, whether you know it or not. The Siri and Alexa voice assistants, Facebook’s and Microsoft’s facial recognition, Amazon and Netflix recommendations, the technology that keeps self-driving cars from crashing into things – all are a result of advances in machine learning.
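The core idea – software derived from data rather than hand-written rules – is easier to see in code. Below is a minimal sketch (not from the article) using scikit-learn’s bundled digits dataset; the dataset and model choice are purely illustrative.

```python
# A minimal sketch of "software derived from data": instead of hand-coding
# rules for recognizing digits, we fit a model to labeled examples and let it
# predict new cases. Dataset and model choice are illustrative.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                                   # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)                # the "program" is learned, not written
model.fit(X_train, y_train)                              # learn parameters from the training data
print("accuracy on unseen digits:", model.score(X_test, y_test))
```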


Google releases TensorFlow Serving library

Google has just moved to a production release of TensorFlow Serving, its open source library for serving machine-learned models in production environments. A beta version of the technology was released in February.

Part of Google’s TensorFlow machine intelligence project, the TensorFlow Serving 1.0 library is intended to aid the deployment of algorithms and experiments while maintaining the same server architecture and APIs. TensorFlow Serving lets you push out multiple versions of a model over time, as well as roll them back.

The library of course integrates with TensorFlow learning models, but it can also be extended to serve other model types.
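The article doesn’t include code, but the sketch below shows roughly how a client typically queries a running TensorFlow Serving instance over gRPC. The model name, port, input and output keys, and pinned version number are assumptions for illustration; they depend on how the model was exported and how the server was started.

```python
# A minimal sketch of a TensorFlow Serving gRPC client (not from the article).
# Assumes a model named "mnist" is already exported and served on localhost:8500;
# names, keys, and the version number are illustrative.
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "mnist"                    # model name registered with the server
request.model_spec.version.value = 2                 # optionally pin one of the served versions
request.model_spec.signature_name = "serving_default"

batch = np.zeros((1, 28, 28, 1), dtype=np.float32)   # placeholder input batch
request.inputs["images"].CopyFrom(tf.make_tensor_proto(batch, shape=batch.shape))

response = stub.Predict(request, timeout=10.0)       # returns a PredictResponse proto
print(response.outputs["scores"])                    # output key depends on the exported signature
```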


IBM speeds deep learning by using multiple servers

For everyone frustrated by how long it takes to train deep learning models, IBM has some good news: It has unveiled a way to automatically split deep-learning training jobs across multiple physical servers — not just individual GPUs, but whole systems with their own separate sets of GPUs.

Now the bad news: It’s available only in IBM’s PowerAI 4.0 software package, which runs exclusively on IBM’s own OpenPower hardware systems.

Distributed Deep Learning (DDL) doesn’t require developers to learn an entirely new deep learning framework. It repackages several common frameworks for machine learning: TensorFlow, Torch, Caffe, Chainer, and Theano. Deep learning projects that use those frameworks can then run in parallel across multiple hardware nodes.
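IBM’s DDL API itself isn’t shown in the article, and the sketch below doesn’t use it; it only illustrates the same general idea – data-parallel training spread across several worker machines – using TensorFlow’s built-in tf.distribute.MultiWorkerMirroredStrategy as a stand-in. The model is a placeholder, and each node would need a TF_CONFIG environment variable describing the cluster.

```python
# Not IBM's DDL: a minimal sketch of multi-node data-parallel training using
# TensorFlow's MultiWorkerMirroredStrategy. Assumes TF_CONFIG is set on each
# worker to describe the cluster; the model here is a placeholder.
import tensorflow as tf

strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Variables created in this scope are replicated on every worker's GPUs.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Gradients are aggregated across all workers with an all-reduce on each step.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
model.fit(x_train, y_train, epochs=1, batch_size=256)
```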


How Google’s Go language could be improved

To improve development tools for Google’s open source Go language, Go might be getting its own language server, akin to the language servers built on Microsoft and Red Hat’s Language Server Protocol.

The notion came up in a Go language contributors’ discussion group, so it’s not a done deal.

The group has put forward a set of consensus recommendations.
