Lattice Blog


Enabling Machine Learning at the Edge

Posted 05/23/2017 by Juju Joyce in Machine Learning

What excites me about technology is its potential to make human life better. Artificial Intelligence (AI), Machine Learning and Deep Learning hold a lot of promise to do just that, if done in a sensible way.

Machine Learning has been defined as the science of getting computers to learn/act without being explicitly programmed. This capability enables us to make computers/robots do things that are too complex to explicitly write code for. For example, imagine writing code that instructs a robot how to walk, or a car how to drive safely on its own. In such applications there are too many details and scenarios to consider, including scenarios we don’t yet know about, for anyone to write a perfect set of instructions.

[Figure: Deep Learning is a sub-field of Machine Learning, which in turn is a sub-field of Artificial Intelligence.]

The Learning Process

Deep Learning, a sub-field of Machine Learning in which a computer uses a multi-layered Artificial Neural Network (ANN) to learn/act without being explicitly programmed, has shown significant progress in its capability and has become very popular. The concept of an ANN is based on the biological neural network of the human brain, which is not capable of doing much when we are first born. Over a period of time, we learn to walk and talk and read – the ‘training’ phase. Eventually, we become knowledgeable enough to infer a decision when presented with a new situation, by recalling lessons learned from past training and experience. This is the ‘inference’ phase. Similarly, an ANN goes through a ‘training’ phase, where it is taught using a set of training data, and once trained, is able to act (‘infer’) when presented with new data. For example, to create an ANN that can accurately distinguish between a dog and a cat, during the ‘training’ phase it is fed thousands of images of dogs and cats until it can tell the two apart at a high level of accuracy. Once ‘trained’, the ANN is ready for the ‘real world’, where it is able to ‘infer’ (identify) a dog or a cat, even though it has never ‘seen’ an image of that particular dog or cat before.
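To make the two phases concrete, here is a minimal, self-contained sketch (my illustration, not from the original post) of a tiny ANN in plain Python/NumPy. It is ‘trained’ on made-up 2-D points labeled by whether they fall inside the unit circle, then ‘infers’ labels for points it has never seen; the network size, data and learning rate are all arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 2-D points labeled 1 if inside the unit circle, else 0.
X = rng.uniform(-2, 2, size=(1000, 2))
y = (np.sum(X**2, axis=1) < 1.0).astype(float).reshape(-1, 1)

# Randomly initialized 2 -> 8 -> 1 network.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 'Training' phase: plain gradient descent on a squared-error loss.
lr = 1.0
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    p = sigmoid(h @ W2 + b2)                 # predicted probabilities
    dp = (p - y) * p * (1 - p) / len(X)      # output error signal (mean)
    dh = (dp @ W2.T) * (1 - h**2)            # backpropagated to hidden layer
    W2 -= lr * (h.T @ dp); b2 -= lr * dp.sum(axis=0)
    W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)

# 'Inference' phase: classify points the network has never seen before.
X_new = np.array([[0.1, 0.2], [1.8, 1.8]])   # inside / outside the circle
p_new = sigmoid(np.tanh(X_new @ W1 + b1) @ W2 + b2)
print(p_new.round(2))   # close to 1 means 'inside', close to 0 'outside'
```

Note that all the expensive work (the loop of weight updates) happens in training; inference is a single cheap forward pass, which is what makes it practical to deploy on small devices.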

A key advantage of ANN-based Machine Learning over traditional Machine Learning techniques is that the features/parameters needed to successfully distinguish between a dog and a cat are chosen automatically by the ANN during the ‘training’ phase. In traditional Machine Learning techniques, by comparison, such features/parameters have to be selected manually by a human subject matter expert (SME), and if the inference accuracy is not satisfactory, the SME must modify them and try again. This makes for a more laborious, time-consuming and iterative process, and the end result may still not be as good as ANN-based Machine Learning.
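As a hedged illustration of that contrast (toy data invented for this example): in the traditional route, the human must already know which measurement separates the classes and code it by hand, then iterate whenever accuracy falls short.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 'signals' in two classes that differ only in amplitude.
signals = rng.normal(0, 1, (200, 16)) * rng.choice([0.5, 1.5], size=(200, 1))
labels = (signals.std(axis=1) > 1.0).astype(int)   # ground truth

# Traditional ML: the SME hand-picks a feature (here, the standard
# deviation) and puts a simple threshold classifier on top of it.
feature = signals.std(axis=1)          # the expert's chosen measurement
threshold = feature.mean()
accuracy = ((feature > threshold).astype(int) == labels).mean()
print(f"hand-crafted feature accuracy: {accuracy:.2f}")

# If accuracy were poor, the expert would have to invent a different
# feature and repeat. An ANN instead consumes the raw 16 samples and
# discovers an equivalent measurement on its own during training.
```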

Strides in ANN

Although ANN technology has been around for many decades, it only started outperforming traditional algorithms in recent years, for two reasons. The first is the availability and affordability of compute power, which lets us build significantly more complex ANNs with deeper layers, also called Deep Neural Networks (DNNs); this was impractical several decades ago. The second is the availability of massive amounts of data that can be used for ‘training’, such as digital images, videos and sound clips. How well an ANN performs during inferencing typically depends on the quantity and quality of its training data. In the dogs vs. cats example above, this means the number of images, and whether those images are good enough representations of the cat and dog pictures that the ANN will be asked to identify during the ‘inference’ phase.

Training vs. Inferencing

‘Training’ typically happens in the datacenter/cloud, and ‘inferencing’ at the edge of the network (in embedded/mobile systems). ‘Training’ typically runs on high-performance CPUs, GPUs, FPGAs and/or TPUs (Tensor Processing Units), and typically uses floating point math, which is needed to represent a wide range of numbers. The trained ANN can then be optimized into a significantly less complex ANN and ported to a cost- and power-optimized embedded/mobile system for ‘inferencing’. At that point the floating point math is also typically converted to fixed point math, reducing complexity and making more efficient use of compute resources.
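To make that last step concrete, here is a minimal sketch (my illustration, not from the original post) of post-training quantization: trained floating point weights are mapped to 8-bit integers with a single scale factor. The weight values are made up, and real deployment toolchains are considerably more sophisticated.

```python
import numpy as np

weights_fp32 = np.array([0.82, -1.37, 0.05, 2.10, -0.44])  # trained weights

# Choose a scale so the largest magnitude maps onto the int8 range.
scale = np.max(np.abs(weights_fp32)) / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -128, 127).astype(np.int8)

print(weights_int8)          # small integers stored on the edge device
print(weights_int8 * scale)  # dequantized values approximate the originals
# The embedded accelerator then performs integer multiply-accumulates and
# applies the scale once per layer, cutting memory and compute cost.
```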

[Figure: Edge compute platform, with ‘training’ in the cloud/datacenter and ‘inferencing’ at the edge.]

The ability to do ‘inferencing’ at the edge of the network (in embedded/mobile systems) minimizes the latency of decision making and analytics, compared to sending all of the data over the network to a datacenter for analysis. It also reduces network congestion, increases user privacy (since data stays on the local device) and enables ‘inferencing’ even without a network connection.

Enabling Inferencing at the Edge

FPGAs are well suited for ‘inferencing’ at the edge because their parallel-processing architecture can deliver a high number of operations per second (OPS) at low power consumption compared to CPUs and GPUs, and ANN processing can be significantly accelerated by exploiting that parallelism. Lattice FPGAs are optimized for low power consumption, small form factor and low cost, all attributes required for processing at the edge. Additionally, since we are still in the very early phases of the Machine Learning revolution, better ANN architectures are being researched and published on a near-daily basis, so a flexible and programmable hardware architecture, such as the one offered by FPGAs, is needed to enable easy upgrades to new ANN architectures and techniques as they become available. The new Embedded Vision Development Kit (part of Lattice’s embedded vision solutions portfolio) is one such FPGA-based platform, providing flexible connectivity and acceleration for mobile-influenced intelligent applications at the edge, including robotics, drones, Advanced Driver Assistance Systems (ADAS), smart surveillance cameras and AR/VR systems.
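To illustrate why ANN workloads parallelize so naturally (a sketch of the general principle, not of any specific Lattice implementation): each output neuron of a layer is an independent multiply-accumulate (MAC) chain, so hardware that instantiates many MAC units, as an FPGA can, evaluates them all at once. The dimensions below are arbitrary.

```python
import numpy as np

x = np.random.rand(64)        # input activations
W = np.random.rand(16, 64)    # 16 output neurons, 64 weights each

# Sequentially, as a single processor core would compute the layer:
out_seq = np.array([np.dot(W[i], x) for i in range(16)])

# The 16 dot products share no state, so hardware with 16 MAC pipelines
# can compute them simultaneously; this is the same math in one pass:
out_par = W @ x
assert np.allclose(out_seq, out_par)
```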

In Conclusion…

In humanity’s quest to build ‘things’ that make our lives significantly better, Machine Learning (ANN) appears to be a very promising technology for adding human-like intelligence to many ‘things’, thereby significantly improving their capabilities to serve us. At the same time, it would also be prudent to always define a boundary (or a set of rules) that the intelligent ‘thing’ must not cross (or violate), in order to ensure safe and human-friendly operation.
