{"title":"sat2Map: Reconstructing 3D Building Roof from 2D Satellite Images","authors":"Yoones Rezaei, Stephen Lee","doi":"10.1145/3648006","DOIUrl":"https://doi.org/10.1145/3648006","url":null,"abstract":"\u0000 Three-dimensional (3D) urban models have gained interest because of their applications in many use cases, such as disaster management, energy management, and solar potential analysis. However, generating these 3D representations of buildings require lidar data, which is usually expensive to collect. Consequently, the lidar data are not frequently updated and are not widely available for many regions in the US. As such, 3D models based on these lidar data are either outdated or limited to those locations where the data is available. In contrast, satellite images are freely available and frequently updated. We propose\u0000 sat2Map\u0000 , a novel deep learning-based approach that predicts building roof geometries and heights directly from a single 2D satellite image. Our method first uses\u0000 sat2pc\u0000 to predict the point cloud by integrating two distinct loss functions, Chamfer Distance and Earth Mover’s Distance, resulting in a 3D point cloud output that balances overall structure and finer details. Additionally, we introduce\u0000 sat2height\u0000 , a height estimation model that estimates the height of the predicted point cloud to generate the final 3D building structure for a given location. We extensively evaluate our model on a building roof dataset and conduct ablation studies to analyze its performance. Our results demonstrate that\u0000 sat2Map\u0000 consistently outperforms existing baseline methods by at least 18.6%. Furthermore, we show that our refinement module significantly improves the overall performance, yielding more accurate and fine-grained 3D outputs. Our\u0000 sat2height\u0000 model demonstrates a high accuracy in predicting height parameters with a low error rate. Furthermore, our evaluation results show that we can estimate building heights with a median mean absolute error of less than 30 cm while still preserving the overall structure of the building.\u0000","PeriodicalId":505086,"journal":{"name":"ACM Transactions on Cyber-Physical Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139781517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"sat2Map: Reconstructing 3D Building Roof from 2D Satellite Images","authors":"Yoones Rezaei, Stephen Lee","doi":"10.1145/3648006","DOIUrl":"https://doi.org/10.1145/3648006","url":null,"abstract":"\u0000 Three-dimensional (3D) urban models have gained interest because of their applications in many use cases, such as disaster management, energy management, and solar potential analysis. However, generating these 3D representations of buildings require lidar data, which is usually expensive to collect. Consequently, the lidar data are not frequently updated and are not widely available for many regions in the US. As such, 3D models based on these lidar data are either outdated or limited to those locations where the data is available. In contrast, satellite images are freely available and frequently updated. We propose\u0000 sat2Map\u0000 , a novel deep learning-based approach that predicts building roof geometries and heights directly from a single 2D satellite image. Our method first uses\u0000 sat2pc\u0000 to predict the point cloud by integrating two distinct loss functions, Chamfer Distance and Earth Mover’s Distance, resulting in a 3D point cloud output that balances overall structure and finer details. Additionally, we introduce\u0000 sat2height\u0000 , a height estimation model that estimates the height of the predicted point cloud to generate the final 3D building structure for a given location. We extensively evaluate our model on a building roof dataset and conduct ablation studies to analyze its performance. Our results demonstrate that\u0000 sat2Map\u0000 consistently outperforms existing baseline methods by at least 18.6%. Furthermore, we show that our refinement module significantly improves the overall performance, yielding more accurate and fine-grained 3D outputs. Our\u0000 sat2height\u0000 model demonstrates a high accuracy in predicting height parameters with a low error rate. Furthermore, our evaluation results show that we can estimate building heights with a median mean absolute error of less than 30 cm while still preserving the overall structure of the building.\u0000","PeriodicalId":505086,"journal":{"name":"ACM Transactions on Cyber-Physical Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139841143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Memory-based Distribution Shift Detection for Learning Enabled Cyber-Physical Systems with Statistical Guarantees","authors":"Yahan Yang, Ramneet Kaur, Souradeep Dutta, Insup Lee","doi":"10.1145/3643892","DOIUrl":"https://doi.org/10.1145/3643892","url":null,"abstract":"Incorporating learning based components in the current state-of-the-art cyber physical systems (CPS) has been a challenge due to the brittleness of the underlying deep neural networks. On the bright side, if executed correctly with safety guarantees, this has the ability to revolutionize domains like autonomous systems, medicine and other safety critical domains. This is because, it would allow system designers to use high dimensional outputs from sensors like camera and LiDAR. The trepidation in deploying systems with vision and LiDAR components comes from incidents of catastrophic failures in the real world. Recent reports of self-driving cars running into difficult to handle scenarios is ingrained in the software components which handle such sensor inputs.\u0000 The ability to handle such high dimensional signals is due to the explosion of algorithms which use deep neural networks. Sadly, the reason behind the safety issues is also due to deep neural networks themselves. The pitfalls occur due to possible over-fitting, and lack of awareness about the blind spots induced by the training distribution. Ideally, system designers would wish to cover as many scenarios during training as possible. But, achieving a meaningful coverage is impossible. This naturally leads to the following question: Is it feasible to flag out-of-distribution (OOD) samples without causing too many false alarms? Such an OOD detector should be executable in a fashion that is computationally efficient. This is because OOD detectors often are executed as frequently as the sensors are sampled.\u0000 Our aim in this paper is to build an effective anomaly detector. To this end, we propose the idea of a memory bank to cache data samples which are representative enough to cover most of the in-distribution data. The similarity with respect to such samples can be a measure of familiarity of the test input. This is made possible by an appropriate choice of distance function tailored to the type of sensor we are interested in. Additionally, we adapt conformal anomaly detection framework to capture the distribution shifts with a guarantee of false alarm rate. We report the performance of our technique on two challenging scenarios: a self-driving car setting implemented inside the simulator CARLA with image inputs and autonomous racing car navigation setting with LiDAR inputs. From the experiments, it is clear that a deviation from in-distribution setting can potentially lead to unsafe behavior. Although it should be noted that not all OOD inputs lead to precarious situations in practice, but staying in-distribution is akin to staying within a safety bubble and predictable behavior. An added benefit of our memory based approach is that the OOD detector produces interpretable feedback for a human designer. This is of utmost importance since it recommends a potential fix for the situation as well. 
In other competing approaches such a feedback is difficult to obtain due to reliance on techniques whi","PeriodicalId":505086,"journal":{"name":"ACM Transactions on Cyber-Physical Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139801162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
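The memory-bank idea and the conformal guarantee described above can be sketched generically: the nonconformity score of a test input is its distance to the nearest cached in-distribution sample, and a conformal p-value computed against a held-out calibration set bounds the false-alarm rate on in-distribution data. This is a minimal inductive conformal anomaly detection sketch under assumed feature vectors and a Euclidean distance, not the paper's implementation.

```python
# Hedged sketch: memory-bank nonconformity score + inductive conformal anomaly
# detection with a false-alarm rate bounded by epsilon. Feature extraction and
# the Euclidean distance are stand-ins for the sensor-specific choices.
import numpy as np

class MemoryBankDetector:
    def __init__(self, memory: np.ndarray, calibration: np.ndarray, epsilon: float = 0.05):
        self.memory = memory                  # (M, d) representative in-distribution features
        self.epsilon = epsilon
        # Nonconformity scores of held-out in-distribution calibration data.
        self.cal_scores = np.array([self._score(x) for x in calibration])

    def _score(self, x: np.ndarray) -> float:
        # Distance to the nearest cached sample measures unfamiliarity.
        return float(np.min(np.linalg.norm(self.memory - x, axis=1)))

    def p_value(self, x: np.ndarray) -> float:
        s = self._score(x)
        # Fraction of calibration scores at least as extreme as the test score.
        return (np.sum(self.cal_scores >= s) + 1) / (len(self.cal_scores) + 1)

    def is_ood(self, x: np.ndarray) -> bool:
        # By the conformal guarantee, in-distribution inputs are flagged with
        # probability at most epsilon (the false alarm rate).
        return self.p_value(x) < self.epsilon

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    in_dist = rng.normal(size=(500, 16))
    det = MemoryBankDetector(memory=in_dist[:400], calibration=in_dist[400:], epsilon=0.05)
    print(det.is_ood(rng.normal(size=16)))             # likely False: in-distribution
    print(det.is_ood(rng.normal(loc=5.0, size=16)))    # likely True: shifted input
```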
{"title":"A Collaborative Visual Sensing System for Precise Quality Inspection at Manufacturing Lines","authors":"Jiale Chen, Duc Van Le, Rui Tan, Daren Ho","doi":"10.1145/3643136","DOIUrl":"https://doi.org/10.1145/3643136","url":null,"abstract":"\u0000 Visual sensing has been widely adopted for quality inspection in production processes. This paper presents the design and implementation of a smart collaborative camera system, called\u0000 BubCam\u0000 , for automated quality inspection of manufactured ink bags in Hewlett-Packard (HP) Inc.’s factories. Specifically, BubCam estimates the volume of air bubbles in an ink bag, which may affect the printing quality. The design of BubCam faces challenges due to the dynamic ambient light reflection, motion blur effect, and data labeling difficulty. As a starting point, we design a single-camera system which leverages various deep learning (DL)-based image segmentation and depth fusion techniques. New data labeling and training approaches are proposed to utilize prior knowledge of the production system for training the segmentation model with a small dataset. Then, we design a multi-camera system which additionally deploys multiple wireless cameras to achieve better accuracy due to multi-view sensing. To save power of the wireless cameras, we formulate a configuration adaptation problem and develop the single-agent and multi-agent deep reinforcement learning (DRL)-based solutions to adjust each wireless camera’s operation mode and frame rate in response to the changes of presence of air bubbles and light reflection. The multi-agent DRL approach aims to reduce the retraining costs during the production line reconfiguration process by only retraining the DRL agents for the newly added cameras and the existing cameras with changed positions. Extensive evaluation on a lab testbed and real factory trial shows that BubCam outperforms six baseline solutions including the current manual inspection and existing bubble detection and camera configuration adaptation approaches. In particular, BubCam achieves 1.3x accuracy improvement and 300x latency reduction, compared with the manual inspection approach.\u0000","PeriodicalId":505086,"journal":{"name":"ACM Transactions on Cyber-Physical Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139593820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experimentation and Implementation of BFT++ Cyber-attack Resilience Mechanism for Cyber Physical Systems","authors":"David R. Keppler, M. F. Karim, Matthew Mickelson, J. S. Mertoguno","doi":"10.1145/3639570","DOIUrl":"https://doi.org/10.1145/3639570","url":null,"abstract":"Cyber-physical systems (CPS) are used in various safety-critical domains such as robotics, industrial manufacturing systems, and power systems. Faults and cyber attacks have been shown to cause safety violations, which can damage the system and endanger human lives. Traditional resiliency techniques fall short of protecting against cyber threats. In this paper, we show how to extend resiliency to cyber resiliency for CPS using a specific combination of diversification, redundancy, and the physical inertia of the system.","PeriodicalId":505086,"journal":{"name":"ACM Transactions on Cyber-Physical Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139613439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}