The Criticality of Ethical AI
Posted 12/09/2021 by Bob O’Donnell, President and Chief Analyst, TECHnalysis Research
One of the most exciting topics in the tech world is Artificial Intelligence, or AI. From its science-fiction-like promise of intelligent machines to its more practical implementations as additional smarts in connected devices, AI is one of the most powerful technologies now available.
But, as the well-known Peter Parker principle dictates, “With great power comes great responsibility,” and so it is with AI. Companies building AI-enabled products and services, in particular, have started to become acutely aware of the potential challenges that arise when AI isn’t used in a judicious, fair, and equitable manner.
An interesting example comes from the latest version of Lattice Semiconductor’s sensAI solution stack and its applications on client devices such as PCs. Working in conjunction with major PC OEMs, Lattice is combining low-power FPGAs, such as its CrossLink-NX family of chips, with version 4.1 of its sensAI software to enable a range of applications that improve the experience for the user while simultaneously extending battery life on notebook PCs.
All of the applications use the PC’s onboard camera as a sensor input, analyzing the individual in front of the laptop, anyone behind them, and the environment surrounding the PC. From there, the image data is analyzed by the FPGA using trained AI-based inferencing models, and then a variety of different actions are taken. User presence detection turns the screen on (or off) depending on whether or not a person is detected. Attention tracking performs similar battery-saving tricks based on whether the individual is looking at or away from the screen. Onlooker detection determines whether another person is looking over the shoulder of the primary user and either turns the screen off or takes other actions to protect the privacy of the data on the screen. And finally, a face framing feature ensures that video-based collaboration tools get the best possible image and appropriate cropping of the user’s face from the PC’s onboard camera.
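To make that flow concrete, here is a minimal Python sketch of how the decision logic for three of these features might be structured. Every name in it (FrameInference, handle_frame, the screen-control calls) is a hypothetical placeholder rather than Lattice’s actual sensAI API, and in the real products the inferencing itself runs on the FPGA; the face framing path is omitted for brevity.

from dataclasses import dataclass

@dataclass
class FrameInference:
    # Per-frame outputs of the (hypothetical) camera inference models.
    user_present: bool       # user presence detection
    user_attentive: bool     # attention tracking
    onlooker_detected: bool  # onlooker detection

def handle_frame(inference: FrameInference, screen) -> None:
    # 'screen' stands in for whatever power/privacy controls the host
    # OS exposes; these method names are illustrative only.
    if not inference.user_present:
        screen.turn_off()             # nobody there: save battery
        return
    if inference.onlooker_detected:
        screen.enable_privacy_mode()  # shield on-screen data
    elif not inference.user_attentive:
        screen.dim()                  # user looked away: reduce power
    else:
        screen.turn_on()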
What’s critical about all of these different applications is that they must work accurately for a wide variety of different people. While that seems like a straightforward requirement, it turns out to be challenging, particularly when it comes to detecting people of color. Unfortunately, many of the image data sets used to train AI models do not include enough images, or enough variety, of people with diverse skin tones. As a result, people with darker skin, in particular, are often not accurately recognized, leading to poor performance of these features for some users. Not only is this frustrating, it’s simply unjust, and it’s a telling example of how implicit bias can seep into things like technology features.
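One concrete way to surface this kind of disparity is to evaluate a trained model’s accuracy separately for each demographic group rather than only in aggregate. The sketch below assumes a test set in which every sample carries a group annotation; the model.predict call and the data format are illustrative assumptions, not any particular library’s API.

from collections import defaultdict

def accuracy_by_group(model, test_samples):
    # test_samples: iterable of (image, label, group) tuples, where
    # 'group' is a demographic annotation such as a skin-tone category.
    correct = defaultdict(int)
    total = defaultdict(int)
    for image, label, group in test_samples:
        total[group] += 1
        if model.predict(image) == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

A large gap between groups (say, 0.98 accuracy for one skin-tone group versus 0.81 for another) signals exactly the training-data bias described above, even when the overall average looks healthy.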
To avoid these kinds of issues, AI-focused developers need to become much more conscientious about the types of data sets they use to train their models, as well as how extensively the results of their models are tested. It’s this kind of thoughtful, ethical approach to AI that is starting to make a significant difference for people from underrepresented communities. After all, why should the color of your skin, or whether you’re wearing something over your hair, determine how well a tech-based function works? Clearly, it should not, but it takes determined, focused efforts to ensure that is the case.
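In practice, those efforts can start with something as simple as auditing the composition of a training set before a model is ever trained. A minimal sketch, assuming each sample is annotated with a group label and using an arbitrary 10% threshold purely for illustration:

from collections import Counter

def audit_group_balance(samples, min_share=0.10):
    # samples: iterable of (image, label, group) tuples.
    # Returns each annotated group whose share of the data set falls
    # below min_share; the threshold is illustrative, not a standard.
    counts = Counter(group for _, _, group in samples)
    n = sum(counts.values())
    return {g: c / n for g, c in counts.items() if c / n < min_share}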
Companies that are serious about approaching AI applications from a fair and ethical perspective, as Lattice Semiconductor has committed to do, are thinking through these and many other types of examples as they continue to evolve their AI software tools. And, given how critical the data used to train models built with these tools is, there’s a growing commitment to draw on multiple public data sources, with many companies also seeking out data sets that are specifically built with an abundance of different skin tones, head wear, and other variations that have been overlooked in the past.
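Once additional sources are in hand, the combined data can also be rebalanced so that underrepresented groups carry appropriate weight during training. One common approach is simple oversampling, sketched below; the helper is illustrative and not tied to any specific framework.

import random

def rebalance_by_group(samples, rng=random):
    # samples: list of (image, label, group) tuples. Pads every group
    # up to the size of the largest one with random duplicates.
    by_group = {}
    for s in samples:
        by_group.setdefault(s[2], []).append(s)
    target = max(len(v) for v in by_group.values())
    balanced = []
    for group_samples in by_group.values():
        balanced.extend(group_samples)
        balanced.extend(rng.choices(group_samples,
                                    k=target - len(group_samples)))
    rng.shuffle(balanced)
    return balanced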
Only with these kinds of thoughtful, intentional steps can companies avoid the potential biases that have already begun to creep into some of today’s AI models and deliver a better, more accurate, and more inclusive user experience. While it may not be a topic that many organizations previously gave much thought to, it has unquestionably become a critical issue that’s bound to get a lot more attention in the future.
Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.