{"title":"Web-based Automatic Deep Learning Service Generation System by Ontology Technologies","authors":"Incheon Paik, Kungan Zeng, Munhan Bae","doi":"10.1109/CSE57773.2022.00019","DOIUrl":"https://doi.org/10.1109/CSE57773.2022.00019","url":null,"abstract":"Although deep learning (DL) has obtained great achievements in the industry, the involvement of artificial intelligence (AI) experts in developing customized DL services raises high costs and hinders its wide application in the business domain. In this research, a Web-based automatic DL service generation system is presented to address the problem. The system can generate customized DL services without involving AI experts. The main principle of the system adopts ontology technologies to organize DL domain knowledge and generate target services based on the user's requests posted from the front-end web page. In the empirical study, the whole scenario of the system is demonstrated, and the scalability is also evaluated. The result shows that our system can generate customized services correctly and has good scalability.","PeriodicalId":165085,"journal":{"name":"2022 IEEE 25th International Conference on Computational Science and Engineering (CSE)","volume":"516 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125669030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Message from the General Chairs: CSE 2022","authors":"","doi":"10.1109/cse57773.2022.00005","DOIUrl":"https://doi.org/10.1109/cse57773.2022.00005","url":null,"abstract":"","PeriodicalId":165085,"journal":{"name":"2022 IEEE 25th International Conference on Computational Science and Engineering (CSE)","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127782524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and Development of Operation Status Monitoring System for Large Glass Substrate Handling Robot","authors":"Xinhe Pu, Xiaofang Yuan, Liangsen Li, Weiming Ji","doi":"10.1109/CSE57773.2022.00020","DOIUrl":"https://doi.org/10.1109/CSE57773.2022.00020","url":null,"abstract":"Handling glass substrates is a component of the Flat Panel Display (FPD) industry's back-end process. Giant glass substrate handling robots have been developed in order to handle oversized, fragile, and delicate glass substrates. These robots have high precision, clean, smooth operation, and multi-constrained space requirements. In order to diagnose and maintain the handling robot, it is necessary to obtain accurate and rapid information regarding the defects. Currently, the display industry's actual needs for high-speed operation and a stable manufacturing line cannot be satisfied by manual diagnosis by maintenance engineers due to its inefficiency. This paper designed and developed a remote monitoring system based on the Web Internet platform. The system has the goal of monitoring and diagnosing the high-frequency operation, high reliability, and smooth operation of the large glass substrate handling robot. This system can monitor the running state of the robot in real time, quickly carry out fault alarm and diagnosis, and timely provide a fault warning function, ensuring the safe operation of the robot. 
It provides a link between companies that manufacture robots and companies that use them, which simplifies the diagnosis and repair of robot malfunctions.","PeriodicalId":165085,"journal":{"name":"2022 IEEE 25th International Conference on Computational Science and Engineering (CSE)","volume":"50 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132974032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-driven Prior for Pharmaceutical Snapshot Spectral Imaging","authors":"Xuesan Su, Jianxu Mao, Yaonan Wang, Yurong Chen, Hui Zhang","doi":"10.1109/CSE57773.2022.00015","DOIUrl":"https://doi.org/10.1109/CSE57773.2022.00015","url":null,"abstract":"This paper proposes a new method for pharmaceutical hyperspectral compressive imaging and has a significant improvement for the quality of reconstruction. It's known that coded aperture snapshot spectral imager(CASSI) overcomes the limitation of hyperspectral image acquisition. However, the spatial and spectral information is coded and overlapped which make it difficult to reconstruct the original images. The reconstruction is an inverse mathematical problem which is barely solved precisely especially in complex imaging scenes such as irregular pharmaceutical product imaging. Thus, we consider the real pharmaceutical imaging demands and propose a novel image restoration method with the data-driven prior. Our method is based on the generalized alternating projection(GAP) framework and propose a novel denoising part to solve the problem of detail texture feature extraction with the dense block module employed. Our method is tested on real pharmaceutical hyperspectral data and achieve higher performance compared with state of the art methods.","PeriodicalId":165085,"journal":{"name":"2022 IEEE 25th International Conference on Computational Science and Engineering (CSE)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117081041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dense 3D Face Reconstruction from a Single RGB Image","authors":"Jianxu Mao, Yifeng Zhang, Caiping Liu, Ziming Tao, Junfei Yi, Yaonan Wang","doi":"10.1109/CSE57773.2022.00013","DOIUrl":"https://doi.org/10.1109/CSE57773.2022.00013","url":null,"abstract":"Monocular 3D face reconstruction is a computer vision problem of extraordinary difficulty. Restrictions of large poses and facial details(such as wrinkles, moles, beards etc.) are the common deficiencies of the most existing monocular 3D face reconstruction methods. To resolve the two defects, we propose an end-to-end system to provide 3D reconstructions of faces with details which express robustly under various backgrounds, pose rotations and occlusions. To obtain the facial detail informations, we leverage the image-to-image translation network (we call it p2p-net for short) to make pixel to pixel estimation from the input RGB image to depth map. This precise per-pixel estimation can provide depth value for facial details. And we use a procedure similar to image inpainting to recover the missing details. Simultaneously, for adapting pose rotation and resolving occlusions, we use CNNs to estimate a basic facial model based on 3D Morphable Model(3DMM), which can compensate the unseen facial part in the input image and decrease the deviation of final 3D model by fitting with the dense depth map. We propose an Identity Shape Loss function to enhance the basic facial model and we add a Multi-view Identity Loss that compare the features of the 3D face fusion and the ground truth from multi-view angles. The training data for p2p-net is from 3D scanning system, and we augment the dataset to a larger magnitude for a more generic training. Comparing to other state-of-the-art methods of 3D face reconstruction, we evaluate our method on in-the-wild face images. 
the qualitative and quantitative comparison show that our method performs both well on robustness and accuracy especially when facing non-frontal pose problems.","PeriodicalId":165085,"journal":{"name":"2022 IEEE 25th International Conference on Computational Science and Engineering (CSE)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126322340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}