With every wave of innovation, there’s a set of obstacles that can prevent progress. Some are technical blockers, some economic, some regulatory and some based in human nature. At this moment we’re on the cusp of the next wave of innovation, which could equip companies with in-depth business intelligence about their physical environment and how people interact with it. The relevant data can be collected in a fraction of the time of the traditional method (clipboard, stopwatch and the watchful eyes of a time-and-motion specialist) and provides a level of detail and accuracy not previously possible. This forthcoming wave of innovation is powered by computer vision, a decades-old technology that has traditionally experienced a series of limitations and obstacles—until now.
Hardware

One of the main blockers to computer vision is the camera that collects the information. Decisions about where to install cameras, how to power them, lighting, field of view and other factors all have to be considered. And in most cases, the cameras send a stream of visual data through the network, which has the very real potential of jamming up your bandwidth. But thanks to breakthroughs in devices and AI technologies, that no longer has to be the case. Let me explain.
Edge vs. Cloud/Bandwidth

Even today, most people use traditional video cameras that send images to the cloud, where computer vision AI runs on powerful server processors to extract information about what is going on in each image. But depending on how much and how often you stream data to the cloud, this approach can be very expensive. There's also the challenge of securing this data to protect personal privacy.
With the availability of edge computing, it's now possible to process visual data on the device itself, because these devices have the memory and the powerful processors (both CPUs and GPUs) needed to run computer vision inference in real time. This eliminates the bandwidth-heavy imagery going over your network and dramatically reduces your computer vision cloud costs. For example, edge appliances have emerged: on-premises computers that aggregate multiple video streams and run computer vision AI on them (e.g., AWS Panorama, Microsoft Azure Stack Edge).
At Nomad Go, we've solved this by using commodity smart devices, specifically phones and tablets. They are the perfect convergence of everything you need to collect and process imagery at the edge, combining powerful cameras, robust durability, a variety of networking options and, most importantly, very powerful CPUs and GPUs. They also benefit from a virtually endless supply chain.
Privacy

Without question, privacy is one of the biggest obstacles to computer vision. People are concerned about facial recognition and losing their personal privacy—and rightly so. Legislation is being considered in a number of states that would ban the use of facial recognition software by law enforcement, except in very specific cases. Certainly, facial recognition has its uses in certain scenarios, but at Nomad Go we've done a lot of work to avoid using it. Why?
Because in virtually every case, you can get the data you need without individually identifying a person. And addressing privacy concerns is much easier without facial recognition, especially if you’re not saving pictures, sending them over the network or storing them anywhere. Instead, you can merely run inferences in real-time on the edge, based on what the camera sees. No image is ever saved.
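That privacy-preserving approach can be sketched in a few lines. This is a minimal illustration, not Nomad Go's actual implementation: the `Detection` type, the `detector` callable and the thresholds are all hypothetical stand-ins for whatever on-device model is used. The key property it demonstrates is that only an aggregate count ever leaves the function; the frame itself is never stored or transmitted.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object found in a frame: a class label and a confidence score."""
    label: str
    confidence: float

def count_people(detections, threshold=0.5):
    # Keep only confident "person" detections; no identity is ever extracted.
    return sum(1 for d in detections
               if d.label == "person" and d.confidence >= threshold)

def process_frame(frame, detector):
    """Run inference on-device and return only an anonymous headcount."""
    detections = detector(frame)   # hypothetical on-device model call
    count = count_people(detections)
    del frame                      # drop the local reference; no image is saved
    return count                   # only this number would leave the device
```

Because nothing but the count survives the call, there is no image to secure, encrypt in transit, or delete later.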
Utility

There are a lot of cool demos out there highlighting the wow factor of computer vision—things like determining a person's mood or tracking their movements. As interesting as those might be, it raises the question: "What can I do with that to improve my business?" It reminds me a bit of the early days of the web, when companies generated all of this clickstream data and didn't have a clue how to use it. They needed someone to explain why they needed it, and what to do with it.
Right now, there’s a lack of AI expertise to help companies answer these same questions about their data, which has left many companies in limbo with solutions that don’t impact their bottom line.
That doesn't have to be the case. Nomad Go equips you with a dashboard that provides a holistic view of your data, customized alerts whenever immediate action or awareness is needed, and out-of-the-box APIs that give you real-time insights on a range of functions—speed of service, occupancy rates, customer engagement, to name a few. We even have a solution that takes the occupancy data for a room and converts it into signals that can control HVAC, reducing energy use and greenhouse gases by up to 25%.
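To make the occupancy-to-HVAC idea concrete, here is one simple way such a conversion could work. This is an illustrative sketch, not Nomad Go's product logic: the setpoint values, the `min_occupancy` cutoff and the function name are all assumptions. The idea is just that an empty room falls back to an energy-saving setback temperature, which is where the savings come from.

```python
def hvac_setpoint(occupancy, occupied_setpoint=21.0,
                  setback_setpoint=18.0, min_occupancy=1):
    """Map a real-time occupancy count to a heating setpoint in degrees C.

    When the room is occupied, hold the comfort setpoint; when it is
    empty, drop to a setback temperature to save energy. All numeric
    values here are illustrative defaults, not measured recommendations.
    """
    if occupancy >= min_occupancy:
        return occupied_setpoint
    return setback_setpoint
```

A real deployment would likely smooth the occupancy signal over time (so a person briefly leaving the room doesn't toggle the system) and feed the setpoint to a building-management interface, but the core mapping is this simple.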
Perhaps best of all, there's no need for costly resources such as developers, which frees up additional budget for other mission-critical deliverables. Companies are already using computer vision to reduce their energy costs, improve customer service and drive sustainability through better recycling practices.
Next week we’ll explore the emerging landscape of computer vision providers and how they work together to deliver on the promise of computer vision.