{"title":"Occurrence Prediction of Dislocation Regions in Photoluminescence Image of Multicrystalline Silicon Wafers Using Transfer Learning of Convolutional Neural Network","authors":"H. Kudo, T. Matsumoto, K. Kutsukake, N. Usami","doi":"10.1587/transfun.2020imp0010","DOIUrl":"https://doi.org/10.1587/transfun.2020imp0010","url":null,"abstract":"In this paper, we evaluate a prediction method of regions including dislocation clusters which are crystallographic defects in a photoluminescence (PL) image of multicrystalline silicon wafers. We applied a method of a transfer learning of the convolutional neural network to solve this task. For an input of a sub-region image of a whole PL image, the network outputs the dislocation cluster regions are included in the upper wafer image or not. A network learned using image in lower wafers of the bottom of dislocation clusters as positive examples. We experimented under three conditions as negative examples; image of some depth wafer, randomly selected images, and both images. We examined performances of accuracies and Youden’s J statistics under 2 cases; predictions of occurrences of dislocation clusters at 10 upper wafer or 20 upper wafer. Results present that values of accuracies and values of Youden’s J are not so high, but they are higher results than ones of bag of features (visual words) method. For our purpose to find occurrences dislocation clusters in upper wafers from the input wafer, we obtained results that randomly select condition as negative examples is appropriate for 10 upper wafers prediction, since its results are better than other negative examples conditions, consistently. key words: prediction, transfer learning, convolutional neural network","PeriodicalId":348826,"journal":{"name":"IEICE Trans. Fundam. Electron. Commun. Comput. Sci.","volume":"104-A 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128784462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis and Design of Aggregate Demand Response Systems Based on Controllability","authors":"Kazuhiro Sato, S. Azuma","doi":"10.1587/transfun.2020eap1093","DOIUrl":"https://doi.org/10.1587/transfun.2020eap1093","url":null,"abstract":"We address analysis and design problems of aggregate demand response systems composed of various consumers based on controllability to facilitate to design automated demand response machines that are installed into consumers to automatically respond to electricity price changes. To this end, we introduce a controllability index that expresses the worst-case error between the expected total electricity consumption and the electricity supply when the best electricity price is chosen. The analysis problem using the index considers how to maximize the controllability of the whole consumer group when the consumption characteristic of each consumer is not fixed. In contrast, the design problem considers the whole consumer group when the consumption characteristics of a part of the group are fixed. By solving the analysis problem, we first clarify how the controllability, average consumption characteristics of all consumers, and the number of selectable electricity prices are related. In particular, the minimum value of the controllability index is determined by the number of selectable electricity prices. Next, we prove that the design problem can be solved by a simple linear optimization. Numerical experiments demonstrate that our results are able to increase the controllability of the overall consumer group. key words: aggregate demand response, controllability, real-time pricing","PeriodicalId":348826,"journal":{"name":"IEICE Trans. Fundam. Electron. Commun. Comput. Sci.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114826558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scene Adaptive Exposure Time Control for Imaging and Apparent Motion Sensor","authors":"Misaki Shikakura, Yusuke Kameda, T. Hamamoto","doi":"10.1587/transfun.2020iml0007","DOIUrl":"https://doi.org/10.1587/transfun.2020iml0007","url":null,"abstract":"CMOS image sensors have been developed for surveillance and industrial equipment cameras. In these applications, it is important to capture images with clear details for image recognition and object tracking. Since the exposure parameters (including exposure time, which is examined in this paper) are not appropriate, overexposure and underexposure may occur when the illuminance varies with artificial or natural light. Therefore, it is necessary to capture an image with the exposure time that is adjusted to the illuminance level of the scene. At the same time, motion blur must be suppressed when the camera or subject is moving. Many auto exposure algorithms adopt the average brightness value of the scene to control the exposure time. The method based on the brightness value in [1] adjust the exposure time for the important object such as a moving object. The weight of the segmented moving object is higher through object tracking, and control the proper exposure time for moving objects. However, this approach probably cause motion blur When the moving subject is dark and exposure time is set long. Imaging with a short exposure time can suppress motion blur, and the exposure time can be estimated by using motion estimation. High frame rate imaging is effective for improving the accuracy of motion estimation [2], [3]. The correlation between frames at high frame rate is so high that the computational complexity of motion estimation is reduced. However, many frames are needed to output from the image sensor to a signal processing circuit outside the sensor, and imaging at high frame rate increases the data rate. Several methods [4]–[7] mount a simple processing circuit","PeriodicalId":348826,"journal":{"name":"IEICE Trans. Fundam. Electron. Commun. Comput. Sci.","volume":"542 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123116602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video Smoke Removal from a Single Image Sequence","authors":"Shiori Yamaguchi, K. Hirai, T. Horiuchi","doi":"10.1587/transfun.2020imp0013","DOIUrl":"https://doi.org/10.1587/transfun.2020imp0013","url":null,"abstract":"In this study, we present a novel method for removing smoke from videos based on a single image sequence. Smoke is a significant artifact in images or videos because it can reduce the visibility in disaster scenes. Our proposed method for removing smoke involves two main processes: (1) the development of a smoke imaging model and (2) smoke removal using spatio-temporal pixel compensation. First, we model the optical phenomena in natural scenes including smoke, which is called a smoke imaging model. Our smoke imaging model is developed by extending conventional haze imaging models. We then remove the smoke from a video in a frame-by-frame manner based on the smoke imaging model. Next, we refine the appearance of the smoke-free video by spatio-temporal pixel compensation, where we align the smoke-free frames using the corresponding pixels. To obtain the corresponding pixels, we use SIFT and color features with distance constraints. Finally, in order to obtain a clear video, we refine the pixel values based on the spatio-temporal weightings of the corresponding pixels in the smoke-free frames. We used simulated and actual smoke videos in our validation experiments. The experimental results demonstrated that our method can obtain effective smoke removal results from dynamic scenes. We also quantitatively assessed our method based on a temporal coherence measure. key words: moving camera, smoke imaging model, smoke removal, spatiotemporal pixel compensation, video processing","PeriodicalId":348826,"journal":{"name":"IEICE Trans. Fundam. Electron. Commun. Comput. Sci.","volume":"104-A 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131379905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Occlusion Avoidance Behavior During Gazing at a Rim Drawn by Blue-Yellow Opposite Colors","authors":"Miho Shinohara, Y. Tamura, Shinya Mochiduki, H. Kudo, M. Yamada","doi":"10.1587/transfun.2020iml0001","DOIUrl":"https://doi.org/10.1587/transfun.2020iml0001","url":null,"abstract":"","PeriodicalId":348826,"journal":{"name":"IEICE Trans. Fundam. Electron. Commun. Comput. Sci.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122806623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}