City-scale Digital Twin Creation from Remote Sensing Data Using Deep Learning

dc.contributor.advisorNajafirad, Peyman
dc.contributor.authorJoseph, Rinu
dc.contributor.committeeMemberLin, Wei-Ming
dc.contributor.committeeMemberJamshidi, Mo
dc.descriptionThis item is available only to currently enrolled UTSA students, faculty, or staff.
dc.description.abstractThe omni-use of digital technologies, leading to digitally enhanced replicas of real-world physical systems, has been accelerated by the rapid advances of Industrial Revolution 4.0 (Industry 4.0). Improvements in digital sensor technologies, cyber-physical systems, cloud computing, and artificial intelligence (AI) have enabled advanced machine-to-machine communication, data collection, and analysis. A common denominator in Industry 4.0 is the reliance on geographic information systems (GIS), which create, manage, analyze, and integrate diverse digital sensory data according to physical location on the earth's surface. Although GIS has benefited from Industry 4.0, careful integration of smart digital technologies and AI is crucial to building better cyber-physical ecosystems in which businesses and government agencies can effectively plan their development projects (including urban planning and energy management) and production. Digital Twins (DTs), digital replicas of physical objects that represent both the structural and behavioral attributes of a physical asset, present a valuable technology for bridging the gap between traditional GIS and AI-enabled smart digital technologies. However, existing GIS DT implementations suffer from limitations such as inaccurate and misaligned assets, expensive data collection and builds, the inability to handle big data, and ecosystems locked in by third-party vendor agreements. Current methods also rely on street network data and building elevation data that are not precise. In this thesis, we propose a deep-learning-based processing pipeline to generate a city-scale digital twin from various types of remote sensing data with all thematic GIS layers, such as buildings, streets, and vegetation.
The proposed architecture focuses on overcoming the aforementioned limitations of current state-of-the-art (SOTA) methods and tools by geo-processing the collected remote sensing data to extract the features of every object in the selected regions and by generating elevation models such as Digital Surface Models (DSM), Digital Elevation Models (DEM), and Digital Terrain Models (DTM), yielding the more accurate geographic information that is crucial for digital transformation. Our proposed method can generate digital twins for various use cases, such as smart city planning, dynamic physics simulations, healthcare, risk analysis, climate change detection, and more. To apply our proposed model to regions where LiDAR point cloud data is not available, we propose a deep residual autoencoder that generates a Digital Surface Model (DSM) from aerial RGB images alone. DSM data plays a pivotal role in creating a digital version of the real world, as it comprises the elevation of every object, man-made and natural, on the earth's surface.
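The abstract distinguishes surface models (DSM, all objects) from terrain models (DTM, bare earth). A standard way these two relate in practice is the normalized DSM (nDSM), the per-cell difference that isolates above-ground object heights. The sketch below illustrates that relationship only; the grid values and function name are hypothetical and not taken from the thesis.

```python
# Minimal sketch of how the elevation models relate:
# DSM records the height of every surface (buildings, trees, ground),
# DTM records the bare-earth terrain, and their per-cell difference,
# often called a normalized DSM (nDSM), gives above-ground object heights.

def ndsm(dsm, dtm):
    """Per-cell object height: nDSM = DSM - DTM (hypothetical helper)."""
    return [[s - t for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dtm)]

# Hypothetical 2x3 elevation grids, values in meters above sea level.
dsm = [[210.0, 218.5, 205.0],
       [207.0, 205.2, 205.1]]
dtm = [[205.0, 205.5, 205.0],
       [205.0, 205.0, 205.0]]

heights = ndsm(dsm, dtm)  # cell (0, 1): a 13 m structure above the terrain
```

A pipeline that predicts DSMs from aerial RGB imagery (as the proposed autoencoder does) can therefore recover building and vegetation heights wherever a terrain model is available, without requiring LiDAR coverage.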
dc.description.departmentElectrical and Computer Engineering
dc.format.extent68 pages
dc.subject.classificationComputer engineering
dc.titleCity-scale Digital Twin Creation from Remote Sensing Data Using Deep Learning
dcterms.accessRightspq_closed
thesis.degree.departmentElectrical and Computer Engineering
thesis.degree.grantorUniversity of Texas at San Antonio
thesis.degree.nameMaster of Science

