{"title":"Research on Classroom Lighting Automatic Control System Based on Personnel Target Detection","authors":"Qing-zhen Wang","doi":"10.58396/cvs020104","DOIUrl":"https://doi.org/10.58396/cvs020104","url":null,"abstract":"With the spread of digital campuses and initiatives to build conservation-oriented campuses, classroom lighting has become a significant component of electricity consumption in universities. Since most classrooms are already equipped with cameras, this paper proposes an automatic classroom lighting control system based on person detection to achieve intelligent lighting control and power saving. The research consists of three parts. First, the YOLO person detection algorithm is studied and optimized, and the model is trained on a dataset to improve the system's recognition accuracy. Second, the hardware platform of the system is built, using a GPU-equipped Raspberry Pi as the controller to speed up image processing. Third, the control flow of the light groups and the image display window are designed to realize centralized seating and lighting control in the classroom. Classroom occupant identification experiments and ambient light brightness detection experiments were carried out, and the results show that the system achieves fast response times and high classification accuracy. The system is simple and inexpensive to implement with the help of existing equipment, and the study has practical application value and broad applicability.","PeriodicalId":248353,"journal":{"name":"Computer Vision Studies","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116765240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
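The light-group control flow described in the abstract above can be illustrated with a small sketch. The paper does not publish its code, so the zone mapping, function names, and threshold below are all hypothetical; this only shows the general idea of switching on a light group when person detections fall inside its image zone and ambient light is low.

```python
# Hypothetical sketch of the light-group control flow: each light group
# covers a horizontal band of the classroom image; a group is switched on
# only if at least one detected person sits in its band and the ambient
# light level is below a threshold.

AMBIENT_THRESHOLD = 50  # hypothetical ambient-light cutoff (sensor units)

def lights_to_switch_on(person_boxes, group_bands, ambient_level):
    """person_boxes: list of (x, y, w, h) detections, e.g. from a YOLO model.
    group_bands: {group_name: (y_min, y_max)} image bands per light group.
    Returns the set of group names that should be switched on."""
    if ambient_level >= AMBIENT_THRESHOLD:
        return set()  # daylight is sufficient, keep all groups off
    on = set()
    for (x, y, w, h) in person_boxes:
        cy = y + h / 2  # vertical centre of the detection box
        for name, (y_min, y_max) in group_bands.items():
            if y_min <= cy < y_max:
                on.add(name)
    return on

# Example: two people seated near the front, room is dark.
bands = {"front": (0, 240), "back": (240, 480)}
people = [(100, 50, 40, 120), (300, 80, 40, 120)]
print(lights_to_switch_on(people, bands, ambient_level=20))  # {'front'}
```

A real deployment would feed this function from the detector at a fixed interval and debounce the output so lights do not flicker when a detection is briefly lost.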
{"title":"Addressing the selection bias in voice assistance: training voice assistance model in python with equal data selection","authors":"Piya Kashav","doi":"10.58396/cvs020103","DOIUrl":"https://doi.org/10.58396/cvs020103","url":null,"abstract":"In recent times, voice assistants have become part of our day-to-day lives, enabling information retrieval through speech synthesis, speech recognition, and natural language processing. Voice assistants can be found in many modern devices from Apple, Amazon, Google, and Samsung. This project focuses on virtual assistance in natural language processing (NLP), a form of AI that helps machines understand people and create feedback loops. The project uses deep learning to build a voice recognizer, training the model in Google Colaboratory on the Common Voice dataset together with data collected from the local community. After recognizing a command, the assistant performs the most suitable action and then gives a response. The motivation for this project comes from the race and gender bias present in many virtual assistants. The computer industry is predominantly male, and as a result many of its products do not adequately consider women; this bias carries over into natural language processing. The project uses various open-source tools to implement machine learning algorithms and to train the assistant to recognize different voices, accents, and dialects. The goal is to use voice data from underrepresented groups to build a voice assistant that recognizes voices regardless of gender, race, or accent. Increasing the representation of women in the computer industry is important for its future, and representing women in the initial study of voice assistants shows that they play a vital role in the development of this technology. In line with related work, this project uses first-hand data from the college population and middle-aged adults to train the voice assistant and combat gender bias.","PeriodicalId":248353,"journal":{"name":"Computer Vision Studies","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127265420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
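The "equal data selection" idea in the abstract above, balancing training data across demographic groups, can be sketched in a few lines. The paper's actual data schema is not given, so the field names and records below are hypothetical; the sketch simply downsamples every group to the size of the smallest one before training.

```python
import random
from collections import defaultdict

def balance_by_group(samples, key, seed=0):
    """Downsample so every group contributes equally many samples.
    samples: list of dicts; key: field naming the demographic group."""
    groups = defaultdict(list)
    for s in samples:
        groups[s[key]].append(s)
    n = min(len(g) for g in groups.values())  # size of the smallest group
    rng = random.Random(seed)  # fixed seed for reproducible selection
    balanced = []
    for g in groups.values():
        balanced.extend(rng.sample(g, n))
    return balanced

# Hypothetical voice-clip records tagged with speaker gender:
# 3 female clips and 9 male clips become 3 of each.
clips = ([{"path": f"f{i}.wav", "gender": "female"} for i in range(3)]
         + [{"path": f"m{i}.wav", "gender": "male"} for i in range(9)])
balanced = balance_by_group(clips, key="gender")
print(len(balanced))  # 6
```

The same function could be keyed on accent or age bracket; an alternative to downsampling is weighting the loss per group, which keeps all the data at the cost of a more involved training loop.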
{"title":"Algorithms for closeness, additional closeness and residual closeness","authors":"C. Dangalchev","doi":"10.58396/cvs020102","DOIUrl":"https://doi.org/10.58396/cvs020102","url":null,"abstract":"The residual and additional closeness are important characteristics of graphs: they measure a graph's vulnerability and growth potential. Calculating the closeness, residual closeness, and additional closeness of graphs is a computationally difficult problem. In this article we propose an algorithm for additional closeness and an approximate algorithm for closeness. Calculating the residual closeness of a graph is the most difficult of the three, and we solve it with branch-and-bound-style algorithms. For these algorithms to be effective, good upper bounds on the residual closeness are needed. In this article we derive upper bounds for the residual closeness of 1-connected graphs and use them in combination with the approximate algorithm to calculate the residual closeness of 1-connected graphs. We report experiments with randomly generated graphs, measuring the reduction in search steps delivered by the proposed algorithm.","PeriodicalId":248353,"journal":{"name":"Computer Vision Studies","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125248133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
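Dangalchev's closeness, referenced in the abstract above, is defined as C(G) = Σ_i Σ_{j≠i} 2^{-d(i,j)}, and vertex residual closeness takes the minimum closeness after deleting a single vertex. The paper's branch-and-bound and approximation machinery is not reproduced here; this is only a brute-force baseline sketch of the definitions using BFS distances.

```python
from collections import deque

def closeness(adj):
    """Dangalchev closeness: C(G) = sum_i sum_{j != i} 2^{-d(i,j)}.
    adj: {vertex: set of neighbours}; unreachable pairs contribute 0."""
    total = 0.0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:  # plain BFS from src
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(2.0 ** -d for v, d in dist.items() if v != src)
    return total

def vertex_residual_closeness(adj):
    """Minimum closeness over all single-vertex deletions (brute force)."""
    def without(x):
        return {u: nbrs - {x} for u, nbrs in adj.items() if u != x}
    return min(closeness(without(x)) for x in adj)

# Path graph 1-2-3: C = 2*(1/2 + 1/4) + 2*(1/2) = 2.5.
path3 = {1: {2}, 2: {1, 3}, 3: {2}}
print(closeness(path3))                  # 2.5
print(vertex_residual_closeness(path3))  # 0.0 (deleting 2 disconnects 1 and 3)
```

This brute force costs one all-pairs BFS per deleted vertex, which is exactly the blow-up that motivates the bounds and branch-and-bound pruning the paper develops.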
{"title":"Object size measurement and camera distance evaluation for electronic components using Fixed-Position camera","authors":"M. Hoang","doi":"10.58396/cvs020101","DOIUrl":"https://doi.org/10.58396/cvs020101","url":null,"abstract":"This article applies the minimum-area-rectangle function of the Open Source Computer Vision Library (OpenCV) to measure the dimensions of electronic components. A rotated contour covers each considered object, yielding its detected width and length. The ratio between pixels and real-world units is identified from a reference object and then used to obtain the sizes of the other devices. The experiment involves an Arduino UNO, an ESP32-WROOM microcontroller, an inertial measurement unit (IMU) sensor, and a 9 V battery. The approach offers a comparatively simple way to achieve accurate results, with an absolute error of less than 3 mm. The distance between the camera and an object is also calculated from the relationship between the camera parameters and the actual object height. The research concentrates on size measurement of electronic components and distance estimation from the object to a fixed-position monitoring camera.","PeriodicalId":248353,"journal":{"name":"Computer Vision Studies","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115167367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
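The pixel-to-millimetre calibration and pinhole distance estimate in the abstract above reduce to two ratios. In OpenCV, `cv2.minAreaRect` would supply the pixel width and height of each detected contour; the numbers below are made-up stand-ins for such measurements, so this sketch shows only the arithmetic, not the detection pipeline.

```python
def pixels_per_mm(ref_pixel_width, ref_real_width_mm):
    """Calibrate the pixel/mm ratio from a reference object of known width."""
    return ref_pixel_width / ref_real_width_mm

def measure_mm(pixel_size, ratio):
    """Convert a measured pixel dimension into millimetres."""
    return pixel_size / ratio

def camera_distance_mm(focal_length_px, real_height_mm, pixel_height):
    """Pinhole model: distance = f * H / h, with f in pixels."""
    return focal_length_px * real_height_mm / pixel_height

# Made-up measurements: a 25 mm reference object spans 100 px,
# so the ratio is 4 px/mm; a component edge spanning 212 px is then 53 mm.
ratio = pixels_per_mm(100, 25)
print(measure_mm(212, ratio))  # 53.0
# A 20 mm-tall component imaged at 80 px with a 400 px focal length
# sits at 400 * 20 / 80 = 100 mm from the camera.
print(camera_distance_mm(400, 20, 80))  # 100.0
```

Both formulas assume the objects lie in the same plane as the reference object and face the camera squarely; a tilted part or a different depth would need the rotation from `minAreaRect` or a full camera calibration to correct.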