The expanding use of surveillance cameras, whether in service of public safety, health monitoring or commercial operations, has heightened concerns about privacy. These days, it seems people’s movements will be captured on CCTV cameras regardless of where they go.
The number of surveillance systems in use has grown, with no signs of slowing down. According to the U.S. Bureau of Labor Statistics, the number of surveillance camera installations in the U.S. grew from 47 million to 85 million between 2015 and 2021, an increase of roughly 80%. That’s about one camera for every four people in the country. Globally, the number of surveillance cameras in use was expected to exceed a billion in 2021, according to the most recent research by IHS Markit. And the video surveillance market is expected to grow at an annual rate of more than 10% through 2026, according to Reportlinker.
The increasing reach of these systems has heightened fears about infringements on privacy, especially concerning the use of facial recognition. Beyond the loss of privacy exemplified by China’s widespread use of facial recognition, studies by MIT, Stanford University and other institutions have revealed built-in biases in facial recognition systems.
Some cities in the U.S. have responded. In 2019, San Francisco banned the use of facial recognition in local agencies’ surveillance cameras, and since then, at least a dozen other U.S. cities have instituted bans on facial recognition for one use or another. But more surveillance doesn’t necessarily have to mean less privacy.
Improvements in machine learning (ML) technology can make gleaning data from surveillance camera feeds more efficient while also going a long way toward protecting the privacy of people who appear in those feeds. A smart camera can, for example, perform processing locally, eliminating the need to transmit and store data. It can also have the intelligence to know the difference between what it should be capturing and what it should ignore. While performing its tasks more efficiently, a smart camera can also help prevent both intentional and unintentional misuse of data.
How deep learning protects privacy
Along with becoming increasingly widespread, surveillance cameras have also become more powerful, with high-resolution lenses, greater local computing capacity and high-bandwidth internet connections. In some systems, the use of machine learning and artificial intelligence (AI) has improved the ability to search the hundreds or thousands of hours of video recorded by those systems.
While making video surveillance systems more powerful and potentially intrusive, ML and AI can also be used to protect privacy. Video intelligence software based on deep learning — a subset of AI — can be trained to focus on what it should be watching and effectively look away from what it should not.
Deep learning, designed to mimic the functions of the human brain by using a neural network of three or more layers, can discover on its own how to identify and classify objects and patterns. By using tagged data to train the system, a machine can “learn” to work independently, becoming more proficient as it is exposed to more data over time. Significantly, it can do this with a small footprint that allows for embedded, localized processing that can effectively manage data privacy.
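The learning-from-tagged-data idea can be illustrated with a deliberately tiny sketch: a three-layer network (input, hidden, output) written in plain NumPy and trained by gradient descent on labeled XOR data. The data, layer sizes and learning rate are invented for illustration; a real video model would be vastly larger, but the loop is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "tagged data": XOR inputs and labels standing in for labeled images.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Three layers: input (2) -> hidden (8, tanh) -> output (1, sigmoid).
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of binary cross-entropy at each layer.
    d_out = out - y
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    # Gradient-descent update, averaged over the batch.
    W2 -= lr * dW2 / len(X); b2 -= lr * db2 / len(X)
    W1 -= lr * dW1 / len(X); b1 -= lr * db1 / len(X)

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
accuracy = (preds == y).mean()
print(accuracy)
```

The forward-pass/backward-pass loop shown here is what frameworks automate at scale, and networks compact enough for this kind of arithmetic are what make embedded, on-camera inference feasible.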
In one example, a CCTV system equipped with deep learning software can classify people approaching a building entrance (like an office, stadium or theater), allow or deny entry, and then dispose of any captured information. By processing information locally, without the need to transmit or store data, it can collect the minimum amount necessary and then “forget” it afterward. In another example, a camera monitoring a business’ parking lot might also have a view into the window of a neighboring house. The system can be configured to avoid recording any images from that window. The software thus corrects for complications caused by the camera’s positioning and guards against both accidental mistakes and intentional misuse involving the recording of images beyond the business’ property.
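The window-masking behavior can be sketched as a simple region blackout applied to every frame before any analysis or storage happens. The frame contents and window coordinates below are hypothetical stand-ins; a production system would typically do this in the camera’s image pipeline.

```python
import numpy as np

def mask_private_region(frame, region):
    """Return a copy of `frame` with the rectangle `region` blacked out.
    `region` is (top, bottom, left, right) in pixel coordinates."""
    top, bottom, left, right = region
    masked = frame.copy()
    masked[top:bottom, left:right] = 0
    return masked

# Stand-in for a 480x640 RGB camera frame (all-white pixels).
frame = np.full((480, 640, 3), 255, dtype=np.uint8)
# Hypothetical coordinates of the neighbor's window within the view.
window = (100, 200, 300, 400)

safe = mask_private_region(frame, window)
print(safe[150, 350].tolist())  # → [0, 0, 0] (inside the masked window)
print(safe[50, 50].tolist())    # → [255, 255, 255] (rest of the frame untouched)
```

Because the mask is applied before anything leaves the device, the private region never exists in any recording, accidental or otherwise.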
ML makes data actionable
Along with keeping improper information out, video intelligence software also makes finding the right information in both live and archived video feeds more efficient. Monitoring or retrieving information from video recordings has often involved manual review by human eyes, which is not only time-consuming but can easily lead to oversights, mistakes and privacy violations. ML video content analysis software with deep learning can extract, classify and quickly index targeted objects — such as humans or vehicles — making video feeds significantly more searchable, actionable and quantifiable.
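One way such indexing can work, sketched here with invented detector output, is an inverted index mapping each object class to the timestamps where it appears, so an analyst can query a time window instead of scrubbing through footage.

```python
from collections import defaultdict

# Hypothetical detector output: (timestamp_sec, object_class) events.
detections = [
    (12.0, "person"), (12.5, "vehicle"), (47.1, "person"),
    (90.3, "vehicle"), (91.0, "person"),
]

# Build an inverted index: object class -> list of timestamps.
index = defaultdict(list)
for ts, cls in detections:
    index[cls].append(ts)

def search(cls, start, end):
    """Return timestamps where `cls` appears within [start, end] seconds."""
    return [ts for ts in index[cls] if start <= ts <= end]

print(search("person", 0, 60))  # → [12.0, 47.1]
```

A real system would key the index on camera ID and richer attributes as well, but the principle is the same: review only the seconds that matter, which is both faster and less invasive than watching everything.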
The classification and indexing of objects also enable intelligent alerts when certain objects, behaviors or anomalous activities are detected. These can include count-based alerts when the number of people in a certain area exceeds a set limit, alerts triggered by object identification or, where applicable, facial recognition.
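A count-based alert of the kind described takes only a few lines once per-frame people counts are available; the counts and the limit below are hypothetical.

```python
def count_alerts(person_counts, limit):
    """Return (frame_index, count) pairs wherever the count exceeds `limit`."""
    return [(i, n) for i, n in enumerate(person_counts) if n > limit]

# Hypothetical per-frame people counts from the detector, with a limit of 8.
counts = [3, 5, 9, 12, 7]
alerts = count_alerts(counts, limit=8)
print(alerts)  # → [(2, 9), (3, 12)]
```

Note that the alert logic needs only anonymous counts, not identities, which is part of how such systems can stay useful without becoming more intrusive.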
Video content analysis also aggregates metadata from live or archived feeds, allowing analysts to understand trends and develop procedures for improving safety, operations and security. And with properly implemented deep learning technology, it can do so without increasing risks to privacy.
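Metadata aggregation of this sort can be sketched as grouping event records by hour to surface trends; the records below are invented for illustration and carry no identifying information, only timestamps and object classes.

```python
from collections import Counter
from datetime import datetime

# Hypothetical metadata records emitted by the analytics layer.
events = [
    ("2022-06-01T08:15:00", "person"),
    ("2022-06-01T08:40:00", "vehicle"),
    ("2022-06-01T09:05:00", "person"),
    ("2022-06-01T09:20:00", "person"),
]

# Aggregate object counts per (hour, class) to reveal traffic patterns.
per_hour = Counter(
    (datetime.fromisoformat(ts).hour, cls) for ts, cls in events
)
print(per_hour[(9, "person")])  # → 2
```

Because the trend analysis runs on counts rather than raw footage, the underlying video need never leave the device or be retained.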
Improving video surveillance while managing data privacy
Concerns over privacy and attempts to limit the use of facial recognition notwithstanding, the amount of video and other data being collected isn’t going to slow down. Video systems can, for example, help health officials track how many people are wearing masks or observing safe-distancing practices. Municipal officials can get a clear view of traffic flows and bottlenecks. Businesses can monitor people’s shopping habits. The security of public places increasingly depends on good video surveillance.
Beyond those uses, the spread of home systems with surveillance capabilities is also driving fears over lost privacy. More than 128 million cloud-connected voice assistants—such as Google Home, Amazon Echo and Facebook Portal—are in use in U.S. homes, with the ability to record and share information. And 76% of TV households report that they have smart TVs, which have raised concerns over their potential to spy on users.
However, the way video is collected, processed and searched can achieve the goals of tighter security, better operations or improved safety without further compromising privacy. The current approach of using cloud-connected surveillance cameras with cloud-based analytics doesn’t stand up to scrutiny on privacy and bias. But ML software with deep learning capabilities allows for localized, embedded intelligence and analytics — delivering high performance at low power — that can improve safety while managing data privacy. In the case of CCTV video surveillance systems, intelligent video technology can also be seamlessly integrated with most existing systems.
Using deep learning technologies can also drive future improvements, empowering organizations to continually increase the sophistication of their systems through additional AI applications.
David Gamba is the vice president of Sima AI.