Reality capture is the process of capturing, processing, and storing a digital 3D model that represents a real-world object. This article introduces photography, video, laser scanning, and drone technologies and explains how they can support construction projects.
“Industry 4.0 for the Built Environment” is a key topic in our sector and also the title of a new Springer book edited by Marzia Bolpagni, Rui Gavina, and Diogo Rodrigo Ribeiro. Reality capture is one of the topics covered in the book.
Reality Capture and Implementations
Reality capture technologies collect three-dimensional (3D) data about physical objects and have brought considerable advantages, such as improved performance and accessibility, to a wide range of projects.
Laser scanners and photogrammetry can generate point clouds or images. The following figure shows a point cloud: a collection of data points in a coordinate system, each with X, Y, and Z coordinates and RGB (red, green, and blue) color values. Each of the red, green, and blue channels is stored as an 8-bit integer in the range [0, 255], so together they can represent 256 × 256 × 256 = 16,777,216 colors.
The storage required for coordinate and RGB data sets can range from a few megabytes to gigabytes, terabytes, or even petabytes.
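To make these storage figures concrete, the following minimal Python sketch estimates the raw in-memory size of an XYZ + RGB point cloud. The per-point layout (three 64-bit coordinates plus three 8-bit color channels) is an assumption for illustration; real formats such as LAS or E57 add headers, metadata, and compression.

```python
import numpy as np

# Assumed per-point layout: 3 x float64 coordinates + 3 x uint8 color channels.
BYTES_XYZ = 3 * np.dtype(np.float64).itemsize  # 24 bytes for X, Y, Z
BYTES_RGB = 3 * np.dtype(np.uint8).itemsize    # 3 bytes, one per channel

def raw_size_bytes(n_points: int) -> int:
    """Uncompressed size of a point cloud storing XYZ + RGB per point."""
    return n_points * (BYTES_XYZ + BYTES_RGB)

n_colors = 256 ** 3  # 8 bits per channel -> 16,777,216 possible colors
print(n_colors)                         # 16777216
print(raw_size_bytes(1_000_000) / 1e6)  # 27.0 (MB for a million points)
print(raw_size_bytes(10**9) / 1e9)      # 27.0 (GB for a billion points)
```

Even this modest 27-byte layout shows how dense scans of large sites quickly reach the gigabyte-to-terabyte range mentioned above.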
Reality capture not only digitizes the physical conditions of an object or location but also converts the captured data into a format that supports executable intelligence. Currently, systems for advanced data acquisition include UAVs (Unmanned Aerial Vehicles, such as drones), photogrammetry, mobile or airborne LiDAR, and terrestrial laser scanning (see Figure 2).
The deliverables include point clouds, fly-through animations, 3D models, BIM/information models, renderings, and AR/VR environments. Reality capture relies on various surveying and computer vision techniques, which produce the information needed for infrastructure lifecycle management.
Tools and Methods
Photogrammetry and remote sensing (PaRS) hardware, computer vision and artificial intelligence (AI) processing for automation, and BIM-based methodologies can help integrate precise engineering data.
Processed reality-capture data enables the development of an accurate synthetic environment that can simulate the interior (utilities, HVAC systems, furniture, elevators, walls, doors, windows, and structural details), the exterior (air services, entire city blocks in 3D detail, road access), and the subsurface (groundwater, wastewater, gas, power, and telecommunications systems) of any urban environment. This allows the creation of intelligent models that can be used for visualization, analysis, and simulation.
PaRS hardware has evolved over the last decades not only toward better accuracy and ease of use but, more strikingly, toward lower cost and smaller size. New underground PaRS techniques (radars) are emerging to serve the market for locating underground infrastructure.
Computational power is increasing at both the edge and in the cloud, alongside higher transmission bandwidths and better, cheaper batteries for wireless sensor networks. In addition, AI algorithms are interpreting raw data with unprecedented depth and speed, and advances in computer vision enable the extraction of semantic information from plain images or videos. Verification of construction operations planning, defect detection, and improved materials management are examples of how these advances can be applied on construction sites.
Extending Building Information Models toward best practice in product lifecycle management involves the wide adoption of Digital Twins. However, there are still major cost-effectiveness barriers to keeping the contextual information of Digital Twins up to date, especially across the diverse scales of projects in the AECO domain, from manufactured goods, such as appliances, to city scales, such as districts, power grids, and linear civil works. This contextual information relates mostly to geometric and asset datasets, which are currently the most difficult information to trace back from the real asset.
Photogrammetry and Videos
Photogrammetry uses photography in surveying and mapping to measure distances between objects, and it is frequently combined with drones for maximum efficiency. It can generate 3D measurements from a series of photographs, typically achieving millimeter-to-centimeter resolution, point densities of tens to thousands of points per square meter, and data acquisition times of minutes. However, post-processing takes hours, with the exact time depending on the number of images. The resulting output is a point cloud, which is then registered to a reference position.
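One reason drone photogrammetry reaches centimeter-level resolution is the ground sample distance (GSD), the ground footprint of one image pixel, which depends on camera geometry and flight altitude. The sketch below uses the standard GSD relation; the camera parameters are illustrative assumptions, not tied to any specific drone model.

```python
def ground_sample_distance(sensor_width_mm: float, focal_length_mm: float,
                           flight_height_m: float, image_width_px: int) -> float:
    """Ground sample distance in cm per pixel for a nadir (straight-down) photo.

    GSD = (sensor width x flight height x 100) / (focal length x image width).
    """
    return (sensor_width_mm * flight_height_m * 100.0) / (
        focal_length_mm * image_width_px)

# Illustrative camera: 13.2 mm sensor, 8.8 mm lens, 5472 px wide images.
gsd = ground_sample_distance(sensor_width_mm=13.2, focal_length_mm=8.8,
                             flight_height_m=100.0, image_width_px=5472)
print(round(gsd, 2))  # ~2.74 cm per pixel at 100 m altitude
```

Halving the flight altitude halves the GSD, which is why low, slow flights yield the millimeter-to-centimeter resolutions cited above at the cost of longer acquisition times.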
Reality data can also be captured through radio or laser scanning, such as RAdio Detection And Ranging (RADAR) or Light Detection and Ranging (LiDAR). LiDAR systems have a wider range of applications in the construction and engineering industries than RADAR systems. When a project has multiple scan files, a stitching process is required; it is time-consuming and demands powerful processors and high-capacity graphics cards.
After the point clouds are stitched together piece by piece, the complete 3D scan model can be imported into downstream software for further processing. The data quality (i.e., accuracy, precision, and resolution) of a scan is affected by the geometry of the system and scene, calibration quality, projector properties, object radiometric properties, scene radiometric effects, and the cameras.
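At its core, stitching scans amounts to expressing every point cloud in one shared coordinate frame by applying a rigid transform, p′ = R·p + t, to each scan. A minimal NumPy sketch, with a hypothetical rotation and translation chosen purely for illustration:

```python
import numpy as np

def apply_rigid_transform(points: np.ndarray, R: np.ndarray,
                          t: np.ndarray) -> np.ndarray:
    """Map an (N, 3) array of scan points into a reference frame: p' = R p + t."""
    return points @ R.T + t

# Hypothetical alignment: rotate 90 degrees about the Z axis, then translate.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([10.0, 0.0, 2.0])

scan = np.array([[1.0, 0.0, 0.0]])        # one point in the scanner's local frame
print(apply_rigid_transform(scan, R, t))  # lands near [10., 1., 2.]
```

In practice the transform itself is estimated, e.g. via surveyed targets or iterative closest point (ICP) registration, which is where the heavy processing load of stitching comes from.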
Drones for Reality Capture
Drones support reality-capture applications such as production management, pedestrian guidance, and animal welfare monitoring, and they assist with structural analysis of bridges, surveillance for firefighters during emergencies, and continuous vehicle navigation in tunnels or car parks.
While flying a drone may seem straightforward, it requires considerable planning and preparation. Quality control of the data collected from drone flights must respect public safety and behavioral privacy.
There are still outstanding challenges in reality capture, especially selecting appropriate tools for a given scenario and automating dimensional quality control. This information, together with the influencing factors and suggestions, can be used to improve the performance of reality capture and encourage more industry practitioners to adopt the technologies.
The substantial advantages of reality capture are still limited by time, cost, portability, range, and accuracy. Most importantly, artificial intelligence can play a critical role in improving these technologies, making them more robust to lighting conditions and better adapted to varied scenarios.
Dr. Haiyan Sally Xie is a Professor of Construction Management in the Department of Technology, College of Applied Science and Technology, Illinois State University, USA.
Prof Ioannis Brilakis is the Laing O’Rourke Professor of Construction Engineering and the Director of the Construction Information Technology Laboratory at the Division of Civil Engineering of the Department of Engineering at the University of Cambridge.
Eduard Loscos is R&D Manager at IDP Group.