
Civil and Environmental Engineering Professor Yizhi Liu has been selected for NVIDIA’s Academic Grant Program to support groundbreaking research in autonomous infrastructure inspection.
Liu’s project will develop an end-to-end workflow that uses high-performance GPUs to accelerate the full robotics cycle: large-scale simulation, model training, and real-time deployment. The grant provides substantial hardware support, including NVIDIA RTX PRO 6000 GPUs and Jetson AGX Orin units. This equipment will enable a GPU-accelerated robotic deployment pipeline that supports safe and efficient inspection of elevated structures and hard-to-reach areas.
Addressing Critical Safety Challenges
Infrastructure inspection in complex or difficult-to-access environments can pose significant risks to human workers. Falls from heights remain among the leading causes of workplace injuries in construction and maintenance fields. Liu’s research aims to reduce these hazards by developing robotic systems that can navigate challenging environments autonomously.
“These robots can help inspectors spend less time on ladders or lifts, reduce exposure to hazards, and collect inspection data in a more consistent way,” explains Liu.
Bridging the Sim-to-Real Gap
A major innovation in the project is tackling the “sim-to-real gap” – the challenge of transferring robotic capabilities trained in controlled simulations to unpredictable real-world conditions. Real environments present variables like changing lighting, wind, surface materials, clutter, and unexpected obstacles that are difficult to replicate in simulation.
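One widely used way to narrow the sim-to-real gap is domain randomization: varying simulated conditions such as lighting, wind, and surface properties across training episodes so a learned policy does not overfit to one idealized environment. The sketch below illustrates the idea in generic terms; the parameter names and ranges are hypothetical and are not taken from Liu’s project.

```python
import random

def randomize_environment(rng: random.Random) -> dict:
    """Sample one randomized simulation episode configuration.

    Domain randomization perturbs conditions that are hard to predict in
    the field (lighting, wind, surfaces, clutter) so that a policy trained
    in simulation generalizes better to real inspection sites.
    Parameter names and ranges here are illustrative only.
    """
    return {
        "light_intensity": rng.uniform(0.2, 1.5),    # dawn to harsh midday sun
        "wind_speed_mps": rng.uniform(0.0, 12.0),    # calm to strong gusts
        "surface_material": rng.choice(["concrete", "steel", "painted_metal"]),
        "clutter_objects": rng.randint(0, 20),       # distractor obstacles
        "camera_noise_std": rng.uniform(0.0, 0.05),  # simulated sensor noise
    }

# Generate a batch of varied training episodes from a seeded generator.
rng = random.Random(42)
episodes = [randomize_environment(rng) for _ in range(1000)]
```

Training across such a batch, rather than a single fixed environment, encourages robustness to exactly the real-world variability the article describes.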
The research team will build a GPU-accelerated, end-to-end pipeline that enables faster training and testing in simulation before deploying models to physical robots. The granted hardware will power both large-scale training experiments and simulations, while also providing real-time, on-board computing for autonomous decision-making during field trials.
Multimodal Robotic Sensing for Reliable Inspection
A key focus of the project is advancing multimodal sensing to improve how robots perceive and interpret complex inspection environments. Instead of relying on a single sensor stream, the system will integrate complementary modalities – such as vision, depth, and other on-board signals – to capture richer, more reliable evidence about surface conditions, geometry, and contextual hazards. By fusing these signals, the robot can reduce sensitivity to common failure cases (e.g., poor lighting, reflective materials, partial occlusion, and clutter) and produce more consistent inspection outputs. This multimodal design also supports better on-board decision-making by enabling the robot to cross-check observations across modalities and prioritize the most informative viewpoints or measurements during data collection.
Impact and Future Vision
“Our goal is to make robot-assisted inspection and related construction tasks safer, more reliable, and easier to scale,” says Liu. “This work supports proactive maintenance strategies and significantly reduces risk for human workers.”
The NVIDIA Academic Grant Program supports cutting-edge academic research in robotics, AI, edge computing, computer graphics, high-performance computing, and related fields by providing hardware and software resources to researchers worldwide.