Automated Method Allows Rapid Analysis of Disaster Damage to Structures
Computer vision advances and deep learning algorithms speed image processing
Published on November 9, 2016
Emil Venere, Purdue University
WEST LAFAYETTE, Ind. – Researchers are harnessing deep learning algorithms and powerful computer vision technology to dramatically reduce the time it takes for engineers to assess damage to buildings after disasters.
In the aftermath of a disaster, engineers descend on the scene and must quickly document damage to structures such as buildings, bridges and pipelines before crucial data are destroyed.
"These teams of engineers take a lot of photos, perhaps 10,000 images per day, and these data are critical to learn how the disaster affected structures," said Shirley Dyke, a Purdue University professor of mechanical and civil engineering. "Every image has to be analyzed by people, and it takes a tremendous amount of time for them to go through each image and put a description on it so that others can use it."
Engineering teams routinely spend several hours after a day of collecting data to review their images and determine how to proceed the next day.
"Unfortunately, there is no way to quickly organize these thousands of images, which are essential to understanding the damage from an event, and the potential for human error is a key drawback," said doctoral student Chul Min Yeum. "When people look at images for more than one hour, they get tired, whereas a computer can keep going."
He is leading the work with Dyke to develop a computerized system that uses advanced computer vision algorithms to dramatically speed the process, potentially turning several hours of work into several minutes. (A YouTube video is available at https://youtu.be/WO3XmXKu4uI.)
"This is the first-ever implementation of deep learning for these types of images," Dyke said. "We are dealing with real-world images of buildings that are damaged in some major way by tornadoes, hurricanes, floods and earthquakes. Design codes for buildings are often based on or started by lessons that can be derived from these data. So if we could organize more data more quickly, the images could be used to inform design codes."
Deep learning commonly refers to artificial neural network algorithms that use numerous layers of computation to analyze specific problems. The researchers have to train the algorithms to recognize scenes and locate objects in the images. The method harnesses graphics processing units (GPUs), which have enabled high-performance machine vision applications.
The researchers used a large dataset of about 8,000 images, labeled to indicate building components that were either collapsed or not collapsed, as well as areas affected by spalling, where concrete chips off structural elements due to large tensile deformations.
"This is a typical type of damage that researchers are interested in investigating," Dyke said. "We were able to automatically classify images based on whether spalling exists or not, and also to pinpoint specifically where it was located within the image."
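For readers curious about what such a classifier looks like in code, the following is a minimal sketch in Python using PyTorch of the general approach described above: a convolutional network pretrained on ordinary photographs is fine-tuned to label an image as showing spalling or not, with training run on a GPU when one is available. This is illustrative only and is not the Purdue team's implementation; the folder layout, model choice and training settings are assumptions.

    # Minimal, illustrative sketch (not the researchers' code): fine-tune a
    # pretrained convolutional network to label photos as spalling / no spalling.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # use a GPU when available

    # Hypothetical folder layout: damage_photos/train/spalling and damage_photos/train/no_spalling
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    train_data = datasets.ImageFolder("damage_photos/train", transform=preprocess)
    loader = DataLoader(train_data, batch_size=32, shuffle=True)

    # Start from a network pretrained on generic images and replace the final
    # layer with a two-class output (spalling vs. no spalling).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    model = model.to(device)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

Pinpointing where the spalling sits within an image, as the researchers also describe, would additionally require an object detection or localization model trained on images annotated with bounding boxes.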
The photos show damage to specific parts of buildings, outlined within green boxes for easy reference.
The research began about two years ago and recently received a three-year, $299,000 grant from the National Science Foundation.
The automated deep learning approach would be especially useful because of a proliferation of databases containing vast collections of digital images.
"The nation has been investing for years in the acquisition of these valuable and perishable data, and is now investing in large databases of experimental data and also field mission data to preserve it and make it easier to study and distill the important lessons to improve the resilience of our communities," she said.
The NSF grant specifically supports research to improve these methods to directly assist engineers as they make decisions in the field regarding which data to collect.
"We are studying how we can enable field teams to harness the power of computer vision methods to extract the right information to make decisions," Dyke said.
Also involved are Bedrich Benes, a professor in the Purdue Department of Computer Graphics Technology, and Thomas Hacker, a professor in the Department of Computer and Information Technology, both in the Purdue Polytechnic Institute.
The researchers have gathered about 90,000 digital images from recent earthquakes in Nepal, Chile, Taiwan and Turkey, including images from Santiago Pujol, a Purdue professor of civil engineering who led a team of nine faculty and students from Purdue as they surveyed damage in Taiwan in March. He also directs the Center for Earthquake Engineering and Disaster Data (CrEEDD) at Purdue.
"But there is at least 20 or 30 times more data out there from other disasters that we would like to have access to in order to continue training the algorithms," Dyke said. "By expanding the diversity of disasters we can make the algorithms more applicable to all sorts of data from all over the world, and that's what we are trying to do."
Writer: Emil Venere, 765-494-4709, venere@purdue.edu
Source: Shirley Dyke, 765-494-7434, sdyke@purdue.edu
This article was originally published at: Purdue Newsroom