{"title":"Studies of multifunctional Bi12GeO20 compound synthesized by chemical route","authors":"Anup Kumar, Manish K. Verma, Biswajeet Jena, D. Tiwary, N. Singh, Kamdeo D. Mandal","doi":"10.1117/12.3012321","DOIUrl":"https://doi.org/10.1117/12.3012321","url":null,"abstract":"Increasing water pollution poses a serious threat to both humans and animals. Low-cost optical materials, especially photocatalytic materials, are of utmost relevance for improving this situation and meeting global energy demand with little environmental damage. The aim of this study is to develop a low-cost, low-temperature, reproducible method to synthesize a multifunctional material suitable for degrading a very dangerous water-contaminant dye under visible-light exposure. A semi-wet chemical route was used to synthesize a multifunctional Bi12GeO20 compound with photocatalytic activity for the degradation of Rhodamine B (RhB) dye under visible-light exposure. Bi12GeO20 (BGO) ceramic with a polycrystalline structure was prepared successfully using a low-temperature chemical process. X-ray powder diffraction reveals that single-phase BGO ceramic was formed. The nanosized BGO ceramic particles were stabilized; XRD and TEM showed particle sizes in the 10–60 nm range. Owing to its favorable band gap (2.72 eV) and sillenite-type structure, Bi12GeO20 exhibits strong photocatalytic activity for the degradation of Rhodamine B (RhB) dye under visible-light exposure.","PeriodicalId":178341,"journal":{"name":"Defense + Commercial Sensing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141378411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye on the back: augmented visuals for improved ROV teleoperation in deep water surveillance and inspection","authors":"Md. Jahidul Islam","doi":"10.1117/12.3015091","DOIUrl":"https://doi.org/10.1117/12.3015091","url":null,"abstract":"Underwater ROVs (Remotely Operated Vehicles) play a crucial role in subsea inspection, remote surveillance, and deep-water exploration. Typically, a surface operator controls the ROV based on its real-time camera data, which is first-person visual feedback. However, an underwater ROV’s onboard camera feed only offers a low-resolution and often noisy egocentric view that is not very informative in deep water and adverse visual conditions. To address this, we introduce the “Eye On the Back (EOB)” technology to provide a third-person view for improved underwater ROV teleoperation. Integrating EOB views into teleoperation consoles provides interactive features with augmented visuals for the teleoperator and enables semi-autonomous behaviors such as next-best-view planning and active ROV localization. We conduct a series of field experiments to validate this technology for remote ROV teleoperation in underwater cave exploration and subsea structure inspection tasks. We are currently developing an end-to-end portable solution that can be integrated into existing ROV platforms for general-purpose subsea telerobotics applications.","PeriodicalId":178341,"journal":{"name":"Defense + Commercial Sensing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141379722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advances in spaceborne hyperspectral imagery, a comparative study between nano satellites and large satellites","authors":"Yousuf Faroukh, Maryam Alansaari, Amel Alhammadi, Abdulrahman Sulaiman, Fatima Alketbi, Tarifa Alkaabi, Ilias Fernini, Hamid Alnaimiy","doi":"10.1117/12.3013252","DOIUrl":"https://doi.org/10.1117/12.3013252","url":null,"abstract":"The space sector's rapid growth, coupled with increased accessibility to space, has led to the popularity of miniaturized satellites known as CubeSats. These cost-effective and agile nanosatellites have gained international recognition in the government, education, and private sectors. CubeSats, standardized at 10 cm x 10 cm x 10 cm per unit, come in various sizes (1U, 2U, 3U, and 6U) and are preferred by the GIS/RS community for their Earth-observation capabilities. The Sharjah Academy for Astronomy, Space Science and Technology (SAASST) in the UAE has established a CubeSat laboratory, launched Sharjah-Sat-1 (3U+), and is now embarking on the Sharjah-Sat-2 mission. Sharjah-Sat-2 is a 6U CubeSat equipped with an advanced high-definition hyperspectral camera, Hyperscape100, to enhance infrastructure projects and establish an early warning system for environmental phenomena. 
This paper will discuss advancements in spaceborne hyperspectral imagers, compare nanosatellites to larger satellites, highlight the Sharjah-Sat-2 project, and explore its positive impact on the GIS/RS community.","PeriodicalId":178341,"journal":{"name":"Defense + Commercial Sensing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141379455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic litter detection using AI in environmental surveillance aircraft","authors":"Tobias Binkele, Theo Hengstermann, Tobias Schmid, Jens Wellhausen, Carolin Leluschko, Christoph Tholen","doi":"10.1117/12.3013922","DOIUrl":"https://doi.org/10.1117/12.3013922","url":null,"abstract":"Plastic pollution is an ever-growing problem in Earth’s oceans. In this paper, we propose an aerial method to detect marine plastic litter, which can be utilized on oil pollution control aircraft already in use in many parts of the globe. With this approach, resources are saved and emissions are reduced, as no additional aircraft has to take off. To prevent the growing accumulation of plastic litter in our oceans, two major approaches are necessary. First, one has to detect and collect the plastic that has already reached the ocean. Second, sources of plastic litter have to be found to prevent more plastic from reaching the oceans. Both approaches can be targeted using sensors on airborne platforms. To achieve this, we propose a method for litter detection from aircraft using artificial intelligence on data gathered with sensors that are already in use. For oil pollution control, multiple aircraft are already flying in different regions all over the world. Sensors used on these aircraft are partially adapted and utilized in a new way. The detection of plastic is performed using a high-frequency, low-resolution visual line sensor. If plastic is detected, a high-resolution camera system is targeted on the detected plastic using a gimbal. These high-resolution images are used for verification and classification purposes. 
In addition to the development of the method for plastic detection, results from intermediate field tests are presented.","PeriodicalId":178341,"journal":{"name":"Defense + Commercial Sensing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141377489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The game-changing memristive technology for next-gen AI/ML hardware","authors":"Kang Jun Bai, Jack Lombardi, Clare Thiem, Nathan R. McDonald","doi":"10.1117/12.3013474","DOIUrl":"https://doi.org/10.1117/12.3013474","url":null,"abstract":"Neuromorphic computing is of high importance in Artificial Intelligence (AI) and Machine Learning (ML) to sidestep challenges inherent to neural-inspired computations in modern computing systems. Throughout the development history of neuromorphic computing, Compute-In-Memory (CIM) with emerging memory technologies, such as Resistive Random-Access Memory (RRAM), offer advantages by performing tasks in place, in the memory itself, leading to significant improvement in architectural complexity, data throughput, area density, and energy efficiency. In this article, in-house research efforts in designing and applying innovative memristive circuitry for AI/ML related workloads are showcased. To be specific, Multiply-and-Accumulate (MAC) operations and classification tasks can be obtained on a crossbar array made of 1-transistor-1-RRAM (1T1R) cells. With the same circuit structure, flow-based Boolean arithmetic is made possible by directing the paths of current flow through the crossbar. Better yet, high-precision operations for in-situ training can be realized with an enhanced crossbar array made of 6-transistor-1-RRAM (6T1R) cells alongside the bidirectional current control mechanism. 
Where possible, our neuromorphic solutions optimized for AI-enabled cognitive operations offer faster, more robust, and more efficient decision-making to support future battlespaces.","PeriodicalId":178341,"journal":{"name":"Defense + Commercial Sensing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141381409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
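The crossbar MAC operation described in the abstract above can be sketched numerically. This is a hedged, idealized illustration and not the authors' circuitry: each 1T1R cell is modeled as a pure conductance, so applying row voltages to the array yields column currents equal to a matrix-vector product via Ohm's and Kirchhoff's laws. Device non-idealities (wire resistance, sneak paths, programming noise) are deliberately ignored; all numbers are made up.

```python
import numpy as np

# Idealized 1T1R crossbar: G[r, c] is the programmed conductance of the cell
# at row r, column c. Applying row voltages V produces, at each grounded
# column, a current equal to the dot product of that column's conductances
# with V — i.e., the whole array computes I = G^T V in one read cycle.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances in siemens (4 rows x 3 cols)
V = np.array([0.2, 0.0, 0.1, 0.3])        # row input voltages in volts

I = G.T @ V                                # column output currents in amperes
print(I.shape)                             # one current per column: (3,)
```

Reading out all column currents in parallel is what gives crossbar CIM its throughput advantage over fetching weights from a separate memory.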
{"title":"Flux growth of optical sensor zinc selenide crystals","authors":"Meghan Brandt, Nicholas Schmidt, Aria Tauraso, Rachit B. Sood, C. Su, Bradley Arnold, Fow-Sen Choa, Brian Cullum, N. Singh","doi":"10.1117/12.3013178","DOIUrl":"https://doi.org/10.1117/12.3013178","url":null,"abstract":"Binary and ternary selenide crystals have proven to be multifunctional materials for optical sensor and laser applications. The aim of this study was to evaluate a reactive flux-growth process for doped zinc selenide crystals and to compare it with large bulk single crystals grown by Physical Vapor Transport (PVT). The synthesis involved dissolving PVP (polyvinylpyrrolidone) flux in DI water, heating it at 65 °C, and stirring until all the PVP dissolved. We added Se powder dissolved in ethanol and heated again for a few minutes. We then added a ZnCl2 solution to the ethanol/Se mixture and heated it well below 100 °C. The water and ethanol solvents were separated, and the material was placed at 200 °C. The residue material was doped with a transition metal. This material was characterized for luminescence and compared with the results for bulk crystals grown by the PVT process.","PeriodicalId":178341,"journal":{"name":"Defense + Commercial Sensing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141381186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining AI control systems and human decision support via robustness and criticality","authors":"Walt Woods, Alexander Grushin, Simon Khan, Alvaro Velasquez","doi":"10.1117/12.3016311","DOIUrl":"https://doi.org/10.1117/12.3016311","url":null,"abstract":"AI-enabled capabilities are reaching the requisite level of maturity to be deployed in the real world. Yet, the ability of these systems to always make correct or safe decisions is a constant source of criticism and reluctance to use them. One way of addressing these concerns is to leverage AI control systems alongside and in support of human decisions, relying on the AI control system in safe situations while calling on a human co-decider for critical situations. Additionally, by leveraging an AI control system built specifically to assist in joint human/machine decisions, the opportunity naturally arises to use human interactions to continuously improve the AI control system’s accuracy and robustness. We extend a methodology for Adversarial Explanations (AE) to state-of-the-art reinforcement learning frameworks, including MuZero. Multiple improvements to the base agent architecture are proposed. We demonstrate how this technology has two applications: intelligent decision tools and enhanced training/learning frameworks. In a decision support context, adversarial explanations help a user make the correct decision by highlighting those contextual factors that would need to change for a different AI-recommended decision. As another benefit of adversarial explanations, we show that the learned AI control system demonstrates robustness against adversarial tampering. Additionally, we supplement AE by introducing Strategically Similar Autoencoders (SSAs) to help users identify and understand all salient factors being considered by the AI system. In a training/learning framework, this technology can improve both the AI’s decisions and explanations through human interaction. 
Finally, to identify when AI decisions would most benefit from human oversight, we tie this combined system to our prior art on statistically verified analyses of the criticality of decisions at any point in time.","PeriodicalId":178341,"journal":{"name":"Defense + Commercial Sensing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141377508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ranging through turbid underwater using structured optical beams","authors":"Huibin Zhou, Yuxiang Duan, Hao Song, Zile Jiang, M. Ramakrishnan, X. Su, Robert Bock, M. Tur, A. Willner","doi":"10.1117/12.3017230","DOIUrl":"https://doi.org/10.1117/12.3017230","url":null,"abstract":"We demonstrate optical ranging through a turbid underwater medium using a structured beam. This beam consists of two Bessel modes, each carrying a distinct pair of orbital-angular-momentum (OAM) order and longitudinal wavenumber. As a result, the beam has a “petal-like” intensity profile with different rotation angles at different distances. The object’s distance (z) is retrieved by measuring the rotation angle of the petal-like profile of the back-reflected beam. We demonstrate ⪅ 20-mm ranging errors through scattering with extinction coefficient γ up to 9.4 m-1 from z = 0 to 0.4 m. We further experimentally demonstrate the enhancement of ranging accuracy using multiple (⪆2) Bessel modes. With the number of modes increasing from two to eight, the average error decreases from approximately 16 mm to approximately 3 mm for a γ of 5 m-1. Moreover, we simulate both coarse- and fine-ranging by using two different structured beams. One beam has a slower rotating petal-like profile, leading to a 4× larger dynamic range for coarse ranging. A second beam has a faster rotating profile, resulting in higher accuracy for fine ranging. 
In our simulation, ⪅ 7-mm errors over a 2-m dynamic range are achieved under γ = 4 m-1.","PeriodicalId":178341,"journal":{"name":"Defense + Commercial Sensing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141377534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
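The rotation-to-range principle in the abstract above lends itself to a short numerical sketch. This is an assumption-laden illustration, not the authors' code: for two superposed Bessel modes with OAM orders l1, l2 and longitudinal wavenumbers kz1, kz2, the petal pattern is commonly described as rotating at a rate (kz1 - kz2) / (l2 - l1), so a measured rotation angle can be inverted to a distance. The wavenumber and order values below are purely illustrative, not taken from the paper.

```python
# Petal-pattern rotation model for a two-mode Bessel superposition.
# The interference of modes exp(i*(l1*phi + kz1*z)) and exp(i*(l2*phi + kz2*z))
# is stationary along lines where (l2 - l1)*phi + (kz2 - kz1)*z is constant,
# giving a petal profile that rotates linearly with propagation distance z.

def rotation_rate(kz1, kz2, l1, l2):
    """Rotation rate of the petal pattern, in radians per metre."""
    return (kz1 - kz2) / (l2 - l1)

def range_from_angle(theta, kz1, kz2, l1, l2):
    """Invert the linear rotation model: recover distance z from angle theta."""
    return theta / rotation_rate(kz1, kz2, l1, l2)

# Illustrative (assumed) parameters: orders 1 and 3, wavenumbers 50 rad/m apart.
rate = rotation_rate(kz1=1.0e7, kz2=1.0e7 + 50.0, l1=1, l2=3)   # -25 rad/m
z = range_from_angle(theta=rate * 0.25, kz1=1.0e7, kz2=1.0e7 + 50.0, l1=1, l2=3)
print(round(z, 3))  # recovers the 0.25 m test distance
```

The coarse/fine scheme in the abstract corresponds to choosing a small rotation rate (angle stays unambiguous over a long span) versus a large one (angle changes quickly, so small range errors are resolvable).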
{"title":"Enhanced robot state estimation using physics-informed neural networks and multimodal proprioceptive data","authors":"Yuqing Liu, Yajie Bao, Peng Cheng, Dan Shen, Genshe Chen, Hao Xu","doi":"10.1117/12.3022666","DOIUrl":"https://doi.org/10.1117/12.3022666","url":null,"abstract":"In this study, we introduce an innovative Robot State Estimation (RSE) methodology incorporating a learning-based contact estimation framework for legged robots, which obviates the need for external physical contact sensors. This approach integrates multimodal proprioceptive sensory data, employing a Physics-Informed Neural Network (PINN) in conjunction with an Unscented Kalman Filter (UKF) to enhance the state estimation process. The primary objective of this RSE technique is to calibrate the Inertial Measurement Unit (IMU) effectively and furnish a detailed representation of the robot’s dynamic state. Our methodology exploits the PINN to mitigate IMU drift issues by imposing constraints on the loss function via Ordinary Differential Equations (ODEs). The advantages of utilizing a contact estimator based on proprioceptive sensory data are multifold. Unlike vision-based state estimators, our proprioceptive approach is immune to visual impairments such as obscured or ambiguous environments. Moreover, it circumvents the necessity for dedicated contact sensors—components not universally present on robotic platforms and challenging to integrate without substantial hardware modifications. The contact estimator within our network is trained to discern contact events across various terrains, thereby facilitating resilient proprioceptive odometry. This enables the contact-aided invariant Kalman Filter to produce precise odometric trajectories. Subsequently, the UKF algorithm estimates the robot’s three-dimensional attitude, velocity, and position. 
Experimental validation of our proposed PINN-based method illustrates its capacity to assimilate data from multiple sensors, effectively reducing the influence of sensor biases by enforcing ODE constraints, all while preserving intrinsic sensor characteristics. When juxtaposed with the employment of the UKF algorithm in isolation, our integrated RSE model demonstrates superior performance in state estimation. This enhanced capability automatically reduces sensor drift impacts during operational deployment, rendering our proposed solution applicable to real-world scenarios.","PeriodicalId":178341,"journal":{"name":"Defense + Commercial Sensing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141375817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
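The ODE-constrained loss idea behind the PINN in the abstract above can be sketched as follows. This is a minimal illustration under assumed names and shapes, not the paper's implementation: a standard data-fit term is augmented with a residual that penalizes violations of the kinematic constraint dp/dt = v, which is the mechanism by which ODE constraints in the loss function suppress IMU drift.

```python
import numpy as np

def physics_informed_loss(p_pred, v_pred, p_meas, dt, lam=1.0):
    """Data loss plus an ODE-residual term enforcing dp/dt = v.

    p_pred, v_pred: predicted 1-D position/velocity trajectories (length N)
    p_meas: measured positions (length N); dt: uniform time step; lam: weight.
    """
    data_loss = np.mean((p_pred - p_meas) ** 2)
    dp_dt = np.diff(p_pred) / dt                  # finite-difference derivative
    ode_residual = np.mean((dp_dt - v_pred[:-1]) ** 2)
    return data_loss + lam * ode_residual

# A trajectory consistent with the physics (p = t, v = 1) incurs ~zero loss;
# inconsistent velocity predictions are penalized by the residual term.
t = np.linspace(0.0, 1.0, 11)
loss = physics_informed_loss(p_pred=t, v_pred=np.ones(11), p_meas=t, dt=0.1)
print(loss)  # ~0 for a physics-consistent trajectory
```

In an actual PINN the residual would be evaluated on network outputs and backpropagated during training; the structure of the objective is the same.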
{"title":"Quantifying decision complexity in IADS operations","authors":"Lucas Sheldon, Elizabeth Hou, Evan Bouillet, George Cybenko, Jessica Dorismond","doi":"10.1117/12.3013534","DOIUrl":"https://doi.org/10.1117/12.3013534","url":null,"abstract":"Decision Advantage is a goal in current and future military operations. Such an advantage can be achieved by degrading adversaries’ decision-making ability through the imposition of complexity on the decision problems they have to solve. This paper describes mathematical techniques for quantifying decision complexity in Integrated Air Defense Systems (IADS). The methods are based on graph properties derived from the defender’s IADS System of Systems description and the attacker’s Course of Action (COA) plans. Multiple plans can be compared quantitatively with respect to the decision complexity they impose on the defender, using metrics that are semantically meaningful to planners. The metrics developed can expose subtle ways in which COAs impose complexity on an adversary that may not be obvious to an operational planner at first glance.","PeriodicalId":178341,"journal":{"name":"Defense + Commercial Sensing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141378656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}