
The Plan to Surveil the Sounds of Cities

Surveillance cameras are a near-ubiquitous feature in our cities, and those eyes will soon be matched with ears.
Image: Derek Mead

Surveillance cameras are a near-ubiquitous feature in our cities, and if Pedro Maló and Philippe Cousin have their way, those eyes will soon be matched with ears. The pair's EAR-IT system, which is currently being piloted as part of the SmartSantander smart city in Santander, Spain, aims to collect, analyze, and data mine all the sounds of a city.

Maló sees this as a tool for authorities responding to emergencies and other "outstanding events": emergency vehicle sirens, people in distress, gunshots, and so on. The tech calls to mind gunshot-sensor systems like ShotSpotter and Way to Safety, both of which made waves in the last few years. With EAR-IT's pilot in SmartSantander, that concept could be amplified to a far greater degree.


As Maló told me, he and Cousin built an acoustic processing unit (APU) that "listens" to the ambient environment for these types of events. According to Maló, this APU powers the whole system, which includes an "acoustic event detection" framework and "acoustic-based density estimation" software. A brand-new daughter audio-processing board is able to encode and decode audio data at acceptable rates, enabling audio streaming over internet-connected sensors. And a new sound pressure level sensor board is able to, as Maló explained, "adequately perform the calculations needed for very accurate SPL processing."
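Maló doesn't detail the board's internals, but the core of SPL processing is simple in principle: take the root-mean-square of a frame of microphone samples, convert it to pascals, and express that relative to the standard 20 micropascal reference. A minimal sketch of that calculation (the frame size, sample rate, and calibration factor here are illustrative assumptions, not EAR-IT's actual parameters):

```python
import math

REF_PRESSURE = 20e-6  # 20 µPa, the standard reference pressure for SPL in air

def spl_db(samples, calibration=1.0):
    """Estimate sound pressure level (dB SPL) for one frame of microphone
    samples, assuming `calibration` maps raw sample units to pascals
    (a per-microphone constant determined during calibration)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms * calibration / REF_PRESSURE)

# Sanity check: a sine wave with 1 Pa RMS corresponds to ~94 dB SPL.
# One second of a 440 Hz tone at an assumed 8 kHz sample rate:
frame = [math.sqrt(2) * math.sin(2 * math.pi * 440 * t / 8000)
         for t in range(8000)]
print(round(spl_db(frame)))  # → 94
```

In a real deployment the calibration constant, frequency weighting (e.g. A-weighting), and time averaging would all matter; this sketch shows only the arithmetic at the bottom of the stack.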

Once the APU collects the sound data, it's sent to the SmartSantander data infrastructure, presented to authorities, then acted upon. This can be done in a number of ways, whether it's responding to emergency vehicle sirens to change traffic lights, or to alert police about gunshot events.

Image: EAR-IT

"The APU software can make traffic estimations based on sound data," Maló said. "We also collected noise data (SPL, or sound pressure level) with the APU but also with many Internet of Things motes [sensors]. We are able to stream small chucks of data over basic IoT motes, for instance, to know if some distress call was actually real or maybe some advertisement sound or [something] else."

Maló said that in some places in Santander—near bullfighting arenas and hospitals, for example—pedestrians can see EAR-IT APU nodes. Maló and Cousin see these nodes as alternatives to surveillance cameras, which have their weaknesses despite the all-knowing presence they project. Since sound propagates around corners and barriers, it could fill in the blanks left by CCTV's line-of-sight issues.


"The advantage of [non-line-of-sight] sensing can be obvious when compared with video capture and its [reliance on] camera position, not seeing through obstacles and having limited viewing angles," Maló said. "This is important, as many sensors may need to be added into the environment in order to have a full coverage of an area of interest whereas one could need fewer sensors and less complex sensing solution if using acoustics."

Microphones can, for example, capture an environment's sound and "take advantage of it in myriad ways," allowing for the detection of acoustic events (gunshots, distress); the pinpointing and tracking of these events until authorities act upon them; the measurement of an environment's sound dynamics, such as estimated traffic or the number of people in a room; and potentially much more.
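The article doesn't say how EAR-IT pinpoints events, but the textbook approach with multiple microphones is time difference of arrival (TDOA): cross-correlate the signals from two sensors and find the lag at which they best align, which tells you how much farther the source is from one microphone than the other. A toy sketch under that assumption, with a synthetic impulse standing in for a gunshot:

```python
def best_lag(a, b, max_lag):
    """Return the lag (in samples) at which signal `b` best matches
    signal `a`, searched over [-max_lag, max_lag] by brute-force
    cross-correlation."""
    def corr(lag):
        return sum(a[i] * b[i + lag]
                   for i in range(len(a))
                   if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

# An impulse-like event reaching microphone B 5 samples after microphone A
event = [0.0] * 100
event[40] = 1.0
event[41] = 0.6
mic_a = event
mic_b = [0.0] * 5 + event[:-5]

lag = best_lag(mic_a, mic_b, max_lag=20)
# At an assumed 8 kHz sample rate and 343 m/s speed of sound, a 5-sample
# lag means mic B is roughly 0.21 m farther from the source than mic A.
print(lag)  # → 5
```

With three or more microphones, the pairwise lags constrain the source position geometrically; production systems also use sharper estimators (e.g. GCC-PHAT) to cope with echoes and noise.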

The housing for the APU. Image: EAR-IT

"One can also measure noise levels to assess conformance to regulations and/or human comfort recommendations in different environments," said Maló. "And, of course, acoustic sensing can be further complemented with other types of sensing for providing additional value," with motion sensing and facial recognition being two such option.

All of this can be done on the cheap with basic electronic microphone setups, deployed, as Maló noted, in sufficient quantities. EAR-IT is also developing technology that would use that data to create noise maps for understanding the "dynamic noise pace of the city." But this isn't the only effort to map and mine urban sound and noise. Microsoft researcher Yu Zheng's project CityNoise "aims to diagnose a city's noise pollution with crowdsensing and ubiquitous data." As Zheng writes on the CityNoise website, the research reveals the "fine-grained noise situation throughout a city and analyzes the composition of noises in a particular location, by using 311 complaint data together with road network data, points of interests, and social media."
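Zheng's published work describes the full model, but the basic bookkeeping behind a crowdsensed noise map is easy to picture: bin geotagged complaints into grid cells and tally categories per cell. A simplified sketch (the cell size and record format here are illustrative assumptions, not CityNoise's actual schema):

```python
from collections import Counter, defaultdict

CELL = 0.01  # grid cell size in degrees (~1 km); an illustrative choice

def noise_map(complaints):
    """Aggregate (lat, lon, category) complaint records into a
    per-grid-cell histogram of noise categories."""
    grid = defaultdict(Counter)
    for lat, lon, category in complaints:
        cell = (round(lat / CELL), round(lon / CELL))
        grid[cell][category] += 1
    return grid

# Made-up complaint records for illustration
complaints = [
    (40.742, -73.989, "construction"),
    (40.743, -73.988, "traffic"),
    (40.744, -73.991, "construction"),
    (40.689, -74.044, "bar"),
]
grid = noise_map(complaints)
for cell, counts in sorted(grid.items()):
    print(cell, counts.most_common(1)[0][0])  # dominant noise source per cell
```

The research layers much more on top of this — road networks, points of interest, social media — precisely because raw complaint counts are sparse and biased; the grid is just the skeleton.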


As Zheng told me, he wants to turn people into smart sensors through complaints filed via phone calls or mobile clients. In other words, Zheng envisions cities mapping sounds through "crowd-sensing." When filing a complaint, people are required to provide the location, time, and category of the complaint; they can even tell the system how they feel about the noise. The CityNoise research also used road network data, check-in data, and points of interest.

How might this be used? Zheng said this urban noise data can help governments make decisions about tackling noise pollution. "For instance, if we know traffic is the major source of the noise pollution in a particular location, we can build some noise barriers along streets in that location," said Zheng. "If bars are the major cause of noise, we can consider issuing bar control policies in the particular region for particular time slots."


"The fine-grained information of urban noise can also enable interdisciplinary research," Zheng added. "For instance, we can inter-relate the noise data with healthcare or education data for research projects."

Zheng noted that urban noise changes quickly over time and varies significantly by location. If authorities are aiming to monitor fine-grained urban noise by deploying sound sensors, they would need to install a sensor every 100 square meters. "[This] means millions of sensors are needed in a metropolis. This is unfeasible," said Zheng. "[And] even if we could deploy sound sensors everywhere, diagnosing urban noise pollution solely based on sensor data is not thorough."
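Zheng's arithmetic is easy to check: at one sensor per 100 square meters, coverage scales to millions of units for any large city. A quick back-of-the-envelope calculation (the land-area figures are rough public numbers, used here only for illustration):

```python
def sensors_needed(area_km2, coverage_m2=100):
    """Number of sensors to cover `area_km2` of city at one sensor
    per `coverage_m2` square meters (1 km² = 1,000,000 m²)."""
    return int(area_km2 * 1_000_000 / coverage_m2)

# Approximate land areas, for illustration only
for city, area in [("Madrid", 604), ("New York City", 784), ("Beijing", 16411)]:
    print(f"{city}: ~{sensors_needed(area):,} sensors")
```

Even the most compact of these cities would need around six million sensors, which is why Zheng leans on crowdsensing instead of blanket hardware deployment.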

But, again, EAR-IT's SmartSantander pilot will see just how feasible sound surveillance can be. And it doesn't take much guesswork to imagine how this could be used outside of emergencies. National, state, and metropolitan governments would probably jump at the opportunity of using sound gathered by EAR-IT and mobile device-enabled human sensors to combat terrorism. And where there is this ambition, there is the potential for abuse, no matter how accurate and useful EAR-IT and human sensors can be in locating minor and major accidents.

Of course, a perversion of city sound data isn't Maló and Cousin's or Zheng's intention. Going forward, a very public debate on the wisdom of installing smart sound sensors throughout cities will need to be had, or the technology will just suddenly be there, always listening in, just like all those surveillance cameras watching us.