2017 IEEE 9th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM): Latest Publications
Vehicle identification system through the interoperability of an ultra high frequency radio frequency identification system and its database
Authors: Jessie R. Balbin, Ramon G. Garcia, Flordeliza L. Valiente, Brian Christopher F. Aaron, Christopher John D. Celimen, Juan Carlos K. De Peralta, Joshua P. Despabiladeras
DOI: 10.1109/HNICEM.2017.8269457 (https://doi.org/10.1109/HNICEM.2017.8269457)
Abstract: Carnapping is on the rise in the Philippines, where the inefficiency of public transportation has made car ownership more of a necessity than a luxury. People prefer to drive themselves to work or other destinations because they feel more comfortable and safer from criminals than in public transportation. However, carnapping has become rampant in recent years: vehicles are not only stolen while parked, but some are forcibly taken while the owners are harmed. The modus operandi of carnapping groups or syndicates has become more creative and bolder, shifting from stealing parked cars to stealing vehicles while the owners are inside. This research addresses the issue by building a system installed at the entrance/exit of a site to detect whether a passing vehicle is a hot car. Using RFID technology and an Android device, each vehicle is given a unique RFID sticker and registered in a database containing the vehicle's and owner's information. If a vehicle is carnapped, the owner can report it and the assigned personnel will update the database, so that when that vehicle passes the installed device, the connected Android device displays that the vehicle is carnapped. Testing showed no errors in the transmission of vehicle information to the Android device, indicating that the system is reliable and the device can be used for monitoring hot cars.
Implementation of a normalized cross-correlation coefficient-based template matching algorithm in number system conversion
Authors: Francisco Emmanuel T. Munsayac, Lea Monica B. Alonzo, Delfin Enrique G. Lindo, R. Baldovino, N. Bugtai
DOI: 10.1109/HNICEM.2017.8269520 (https://doi.org/10.1109/HNICEM.2017.8269520)
Abstract: In digital image processing, template matching is a technique for finding areas of an image that match or are similar to a template image. In this study, an algorithm that uses Python and the OpenCV library for template matching in number system conversion was successfully demonstrated. Images containing binary numbers were processed with template matching and converted to strings, and these strings were then converted to their decimal equivalents. It was found that OpenCV offers an easy-to-use tool for systems that require recognizing patterns in an image. However, this ease of use comes with limitations such as dependence on pre-processing and on fixed scale, rotation, font, and background color.
{"title":"A single ended zero aware asymmetric 4T SRAM cell","authors":"Calvin Benzien C. Chan, F. Cruz, Wen-Yaw Chung","doi":"10.1109/HNICEM.2017.8269556","DOIUrl":"https://doi.org/10.1109/HNICEM.2017.8269556","url":null,"abstract":"As SRAM capacity continue to increase to maximize microprocessor performance, power consumption also increases. This paper describes a new 4T SRAM cell that uses zero aware characteristic and single ended bit line and word line in order to achieve low energy consumption in all write and read operations compare to 6T. Design was done using 1.1 V and 45 nm CMOS from PTM. Simulation results showed that the speed of 4T reached 3 and 74 times slower compared to 6T in read and write, respectively. However, the energy consumptions of 4T were at least 37.6 % and 66.4 % smaller compared to write and read energies of 6T respectively.","PeriodicalId":104407,"journal":{"name":"2017IEEE 9th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM)","volume":"181 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123287961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognition of tongue print biometric using binary robust independent elementary features
Authors: M. V. Caya, John Patrick H. Durias, N. Linsangan, Wen-Yaw Chung
DOI: 10.1109/HNICEM.2017.8269441 (https://doi.org/10.1109/HNICEM.2017.8269441)
Abstract: The study presents a tongue print biometric recognition system that can use both the SIFT keypoint descriptor and the BRIEF keypoint descriptor algorithms. The main purpose of the study is to compare which of the two algorithms has the faster recognition speed. The system captures tongue print images using a Raspberry Pi camera. After capture, the image is pre-processed using Contrast Limited Adaptive Histogram Equalization. The SIFT feature extractor is then applied to the image to extract its features. Descriptors are computed with both SIFT and BRIEF, and the computed descriptors are stored in a database together with a unique user ID, which a C# program uses to search the database for the user's information. A sample size of thirty (30) users was used to test the proposed system. The test results show that the BRIEF algorithm yields an average recognition speed of 7.644 seconds versus 13.829 seconds for the SIFT algorithm. The accuracy tests show that the BRIEF algorithm also improves recall, precision, and accuracy over the SIFT algorithm.
Forest mapping and classification of forest type using LiDAR data and tree specie identification through image processing based on leaf extraction algorithms
Authors: A. Ballado, Ramon G. Garcia, Joanne Gem Z. Chichoco, Bianca Marie B. Domingo, Kimberly Joy M. Santuyo, Van Jay S. Sulmaca, Sarah Alma P. Bentir, Shydel M. Sarte
DOI: 10.1109/HNICEM.2017.8269434 (https://doi.org/10.1109/HNICEM.2017.8269434)
Abstract: Using Light Detection and Ranging (LiDAR) data, this study focuses on processing the LiDAR-derived data through different software tools to generate a map that can classify forest types. A 20 × 20 meter plot in the selected forest area was identified for field validation of the classified leaf type. Leaf recognition is performed using a neural network in Matlab. The leaf statistics were measured through a prototype developed using leaf extraction algorithms, and a t-test is used to compare the perimeter of the extracted data with the actual perimeter of a sample leaf. The results show that, for the species, the actual perimeter is statistically the same as the perimeter measured by the developed prototype. The accuracy of classification was calculated as 91.25%, and the overall minimum and maximum precision of the prototype were computed to be 90.40% and 99.14%, respectively.
Detection of three visual impairments: Strabismus, blind spots, and blurry vision in rural areas using Raspberry Pi by implementing Hirschberg, visual field, and visual acuity tests
Authors: A. Paglinawan, Marianne M. Sejera, C. Paglinawan, Ryan Elbert H. Ancheta, Nicole Jan D. Guatato, Rey James H. Nava, Kimberly P. Sison
DOI: 10.1109/HNICEM.2017.8269550 (https://doi.org/10.1109/HNICEM.2017.8269550)
Abstract: This paper introduces an integrated way of detecting three common visual impairments: strabismus, blind spots, and blurry vision. According to the Chairman of the National Committee on Sight Preservation, 70% of ophthalmologists are found in urban areas while the remaining 30% are scattered across rural areas. Thus, a low-cost, portable product that can efficiently and effectively detect these impairments, and that need not be operated by a medical practitioner, is necessary. A Raspberry Pi with a monitor and camera is used to implement the Hirschberg test for strabismus, the visual field test for blind spots, and the visual acuity test for blurry vision. The Hirschberg test is implemented by taking the Central Corneal Light Reflex Ratio (CCLRR) from a picture of the patient. The visual field test is done by presenting stimuli at different areas of the visual field and recording the patient's response through a button press. The visual acuity test is done using a Landolt C optotype and recording the patient's response through an 8-button keypad. The results of the Hirschberg test were computed to have an accuracy of 90% using a z-test. For the visual field and visual acuity tests, the Mann-Whitney test showed that the data gathered from the patients had no significant difference from the patients' medical records.
{"title":"Free space optical communication based outdoor wireless sensor node data acquisition using 532 nm laser","authors":"Joel G. Amora, M. V. Caya, Wen-Yaw Chung","doi":"10.1109/HNICEM.2017.8269442","DOIUrl":"https://doi.org/10.1109/HNICEM.2017.8269442","url":null,"abstract":"Free-Space Optical Communication (FSOC) system uses light as a medium to transfer information wirelessly along free-space. FSOC system that uses visible lasers typically uses the commonly available Red lasers. This paper implements a laser based FSOC system for outdoor use, particularly under sunlight condition, to transfer sample temperature data within a distance of 10 meters. A transmitter modulates the laser using On-Off keying and the receiver uses two phototransistor — one for receiving laser signal and the other as a reference on the ambient light present within the receiver cover. Testing is done using Red and Green laser one at a time for both day time and night time conditions to obtain the Bit-Error-Rate (BER) of the system. The Two-tailed T-test of the experiment results shows a T-value within the critical values which indicates the acceptance of the null hypothesis that Green laser has no observable advantage and disadvantage over Red laser on both night time and day time within a 10 meter gap. Testing also shows that the system performs practically the same under day time and night time condition which suggest the resilience of the system to ambient light inside the receiver cover caused by sunlight.","PeriodicalId":104407,"journal":{"name":"2017IEEE 9th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116014967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sitting posture assessment using computer vision
Authors: John Cloie T. Mallare, Dianne Faye G. Pineda, Gerald M. Trinidad, Reymond D. Serafica, Jules Benedict K. Villanueva, Angelo R. dela Cruz, R. R. Vicerra, Kanny Krizzy D. Serrano, Edison A. Roxas
DOI: 10.1109/HNICEM.2017.8269473 (https://doi.org/10.1109/HNICEM.2017.8269473)
Abstract: Since most people now sit for most of the day because of the modern lifestyle, proper sitting posture must be maintained so that postural abnormalities can be avoided. Posture assessment is usually done by measuring the parameters that define the sitting position: the thoracic angle (TA), cervical angle (CA), retraction angle (RA), sitting height (SH), sitting eye height (SEH), sitting shoulder height (SSH), shoulder breadth (SB), hip breadth (HB), buttock-popliteal height (BPH), and popliteal height (PH). These parameters can be measured through different methods, namely the plumb line method usually done by physical therapists, goniometers, accelerometers, the radiographic method, and pressure distribution analysis through a pressure sensor on a chair. However, none of these methods can measure all of the parameters mentioned. Thus, there is a need to develop an algorithm that can measure and assess these parameters and serve as assistance to the physical therapist in posture correction. In this paper, the researchers present a method of obtaining and assessing sitting posture parameters through computer vision, intended to assist physical therapists in sitting posture assessment and correction. With 42 samples, the proposed algorithm gave an accuracy of 61.9% in assessing sitting posture.
{"title":"Comparative analysis of solving traveling salesman problem using artificial intelligence algorithms","authors":"S. G. Brucal, E. Dadios","doi":"10.1109/HNICEM.2017.8269423","DOIUrl":"https://doi.org/10.1109/HNICEM.2017.8269423","url":null,"abstract":"This paper aims to provide a comparative study of the different artificial intelligence (AI) algorithms applied to solve the traveling salesman problem (TSP). Four (4) AI algorithms such as genetic algorithm, nearest neighbor, ant colony optimization, and neuro-fuzzy are executed in MatLab software to determine which among these techniques will provide the least execution time to solve a TSP. The objective of comparing and analyzing each AI algorithm — as applied to a single problem with the different program execution — is to identify if significant difference in execution time could lead to significant saving in energy consumption. The simulations using MatLab resulted to strong correlation at an R2 of 0.95 in the average execution time with the number of code lines, but do not give a significant execution time variance as when ANOVA and t-test measures were performed. The result of this paper could be used as a basis in the design phase of software development life cycle to arrive into an energy efficient software application with respect to time needed to execute a program.","PeriodicalId":104407,"journal":{"name":"2017IEEE 9th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122072456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless sensor nodes for flood forecasting using artificial neural network
Authors: Mary Anne M. Sahagun, J. D. dela Cruz, Ramon G. Garcia
DOI: 10.1109/HNICEM.2017.8269462 (https://doi.org/10.1109/HNICEM.2017.8269462)
Abstract: The Pampanga River is considered the fourth largest river basin in the Philippines, and its lower basin, including Masantol, Pampanga, is among the areas most frequently affected by flooding. At present, the Disaster Risk Reduction Management Office (DRRMO) uses a conventional way of measuring water level. The study aims to develop real-time flood water level measurement for medium and high risk areas and to use these data for short-term forecasting. A standalone sensor station was developed with an ultrasonic sensor, microcontroller, GSM module, and solar panel. Nonlinear autoregressive (NAR) and nonlinear autoregressive with external input (NARX) networks were used for modeling and prediction across five cases. Backpropagation, a feed-forward architecture, and the optimized Levenberg-Marquardt training algorithm were used to develop the model in Matlab. The model prediction accuracy ranged from 1.2e-3 to 3.12e-2 in root mean square error (RMSE), 9.97e-4 to 1.35e-2 in mean absolute error (MAE), and 7.5e-1 to 1 in correlation coefficient (r-value) for cases 1-3; for cases 4-5, the results ranged from 1.3e-3 to 2.39e-2, 1.1e-3 to 2.11e-2, and 7.618e-1 to 1 in terms of RMSE, MAE, and r-value, respectively. This study may be a useful tool for the DRRMO to provide early warning to the community.