Lattice Blog

The Evolution of Computing

Posted 12/19/2017 by Darin Billerbeck, President and CEO, Lattice Semiconductor

The first “computer” I ever owned was a slide rule. By today’s standards, you could say my pencil served as my terminal and the slide rule was my computing agent. People laugh about it now, but my trusty slide rule served its purpose. Meanwhile, in the business world, server-based computing was all the rage. Those early systems represented some of the first centralized computing architectures: by connecting a terminal to the network, you could access the server’s computational and storage resources and communicate with other users on the network.

But the popular compute paradigm of the day never stays static for long. By the 1980s and 1990s, centralized architectures were on the way out. The emergence of affordable personal computers let users create their own individual computing environments. The rise of the Internet brought wide-scale connectivity, but most computational tasks remained confined to the individual PC.

Eventually that highly distributed architecture morphed into mobility with the rise of laptops, smartphones, and mobile devices of all types. However, as mobile devices matured, their computational and storage requirements rapidly multiplied. To meet that need, designers began moving tasks to the cloud to take advantage of its virtually unlimited resources, high reliability and low cost. So, once again, computing architectures swung back to a centralized model.

Today, the cloud enables centralized computing, intelligence and storage. It’s where businesses perform their high-level computation and analysis. Take Oracle as an example. Companies run Oracle in the cloud and then use their PCs to interpret and analyze the results. To a certain extent, some form of centralized computing is here to stay. In the intelligent city of the future, for example, some devices must be constantly connected to the cloud. Think of things like traffic lights that manage changing traffic patterns, street lights that turn on only when someone is near, or power and communication grids that adjust as demand increases or decreases.

A new trend is emerging that promises to swing the pendulum back toward a more decentralized architecture once again. As IoT devices on the edge become increasingly intelligent, they will need to respond faster to requests. These new devices will be used in smart homes, smart factories and smart cities, and will employ technologies like voice recognition and facial detection to customize their function as needs change. By applying machine learning and AI techniques, these devices will be able to operate autonomously and alter their operation based on changes in their environment. Take the autonomous car, for example. When it enters a smart city, it won’t go to the cloud to receive input on how to operate. It knows that when it detects a stop sign or a red light, it must stop the car. This ability to make decisions independently will define an entirely new type of edge-intelligent device.
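The latency argument behind the stop-sign example can be made concrete with a toy sketch. This is not any real automotive or Lattice software; the function names and the latency figures are illustrative assumptions, chosen only to show why a time-critical decision must be made on the device rather than over a network round trip.

```python
# Toy comparison of on-device vs. cloud-based decision latency.
# All names and numbers here are hypothetical assumptions for illustration.

CLOUD_ROUND_TRIP_MS = 100  # assumed network round trip to a cloud service
LOCAL_INFERENCE_MS = 5     # assumed on-device model inference time

def decide_locally(detection: str) -> str:
    """Trivial stand-in for an on-device model's decision rule:
    stop for a stop sign or a red light, otherwise proceed."""
    if detection in ("stop_sign", "red_light"):
        return "brake"
    return "proceed"

def decision_latency_ms(use_cloud: bool) -> int:
    """Total time before the vehicle can act under each architecture."""
    return CLOUD_ROUND_TRIP_MS if use_cloud else LOCAL_INFERENCE_MS

print(decide_locally("stop_sign"))          # brake
print(decision_latency_ms(use_cloud=True))  # 100
print(decision_latency_ms(use_cloud=False)) # 5
```

Under these assumed numbers, keeping inference on the device cuts the decision latency by a factor of twenty, which is the essence of the argument for edge intelligence in safety-critical applications.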

What will it take to get there? To accomplish these tasks and operate independently, these new devices will require high levels of processing power, speed and memory; the ability to operate at very low power levels; and the integration of those capabilities into a highly compact footprint. Moreover, they will need these resources on the device to minimize latency; after all, a car can’t wait for a decision on whether to brake or speed up.

Our success in the mobile arena, extensive expertise in low-power, small-form-factor silicon, and the inherent design flexibility of our FPGA fabric place us in a unique position to meet those needs. In addition, our long history of success in basic cloud connectivity in terms of glue logic, I/O expansion, bridging, and embedded video offers a significant advantage over competitors. Look for Lattice to play a leading role in the development of the exciting next generation of edge-intelligent devices.
