{"title":"NoFish; Total Anti-Phishing Protection System","authors":"Dhanushka Niroshan Atimorathanna, Tharindu Shehan Ranaweera, R.A.H. Devdunie Pabasara, Jayani Rukshila Perera, Kavinga Yapa Abeywardena","doi":"10.1109/icac51239.2020.9357145","DOIUrl":"https://doi.org/10.1109/icac51239.2020.9357145","url":null,"abstract":"Phishing attacks have been identified by researchers as one of the major cyber-attack vectors which the general public has to face today. Although many vendors constantly launch new anti-phishing products, these products cannot prevent all the phishing attacks. The proposed solution, “NoFish” is a total anti-phishing protection system created especially for end-users as well as for organizations. This paper proposes a machine learning & computer vision-based approach for intelligent phishing detection. In this paper, a realtime anti-phishing system, which has been implemented using four main phishing detection mechanisms, is proposed. The system has the following distinguishing properties from related studies in the literature: language independence, use of a considerable amount of phishing and legitimate data, real-time execution, detection of new websites, detecting zero hour phishing attacks and use of feature-rich classifiers, visual image comparison, DNS phishing detection, email client plugin and especially the overall system is designed using a level-based security architecture to reduce the time-consumption. Users can simply download the NoFish browser extension and email plugin to protect themselves, establishing a relatively secure browsing environment. Users are more secure in cyberspace with NoFish which depicts a 97% accuracy level.","PeriodicalId":253040,"journal":{"name":"2020 2nd International Conference on Advancements in Computing (ICAC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129211956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PatientCare: Patient Assistive Tool with Automatic Hand-written Prescription Reader","authors":"D. Kulathunga, Chamika Muthukumarana, Umindu Pasan, Chamudika Hemachandra, Muditha Tissera, H. De Silva","doi":"10.1109/ICAC51239.2020.9357136","DOIUrl":"https://doi.org/10.1109/ICAC51239.2020.9357136","url":null,"abstract":"Most people in the world prefer to be conscious of the medications prescribed by physicians. Especially, the importance of handwritten prescriptions is prodigious in Sri Lanka because they are widely used in the healthcare sector. However, due to the illegible handwriting and the medical abbreviations of the physicians, patients are unable to find the prescribed medication information. This research is an attempt to assist the patients in identifying the prescribed medicine information and minimizes misreading errors of medical prescriptions. When a patient uploads the image of a prescription, the system converts it into unstructured text data by using OCR and segmentation, then NER is used to categorize medical information from given text. According to the other research, some solutions exist in other domains for the above mechanisms. But they gave less accuracy when tried to apply for this research due to the domain specialty. Therefore, as a solution to overcome the above discrepancy this approach allows users to scan handwritten medical prescriptions and blood reports and obtain analyzed reports in medical history. Results have shown that this approach will give 64%-70% accuracy level in doctor's handwriting recognition and 95%-98% accuracy in medical information categorization of the prescription format.","PeriodicalId":253040,"journal":{"name":"2020 2nd International Conference on Advancements in Computing (ICAC)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127694179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Individualized Edutainment and Parent Supportive Tool for ADHD Children","authors":"Achintha Thennakoon, Deneth Perera, Shanuka Sugathapala, Samadhi Weerasingha, Pradeepa Samarasinghe, D. Dahanayake, Vijani S. Piyawardana","doi":"10.1109/icac51239.2020.9357207","DOIUrl":"https://doi.org/10.1109/icac51239.2020.9357207","url":null,"abstract":"Attention-Deficit/Hyperactivity Disorder (ADHD) is a comorbid disorder that can impact a child and his/her family. ADHD children have considerable obstacles in managing time, understanding instructions, and paying attention to the activities. To address these perplexities, this research has designed a mobile application to help parents to have better interaction with the children and for the children to enjoy their learning activities. The specialty of this application is the models are trained on individual child skills and needs. Issues with time management are handled by the Scheduler component while the Instruction Predictor module supports the parent in recognizing the child's understandability level. Furthermore, the children are provided with edutainment activities based on their attention and ability levels. Different models have been used in predicting the results through these modules and the prediction result accuracy exceeds 90% in most of the cases. Out of the many models, The Random Forest model resulted in the best overall performance. The application was tried by many parents and health professionals and received satisfactory and commendable reviews.","PeriodicalId":253040,"journal":{"name":"2020 2nd International Conference on Advancements in Computing (ICAC)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114263253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Secure Communication Using Steganography in IoT Environment","authors":"M. Amjath, V. Senthooran","doi":"10.1109/icac51239.2020.9357260","DOIUrl":"https://doi.org/10.1109/icac51239.2020.9357260","url":null,"abstract":"IoT is an emerging technology in modern world of communication. As the usage of IoT devices is increasing in day to day life, the secure data communication in IoT environment is the major challenge. Especially, small sized Single-Board Computers (SBCs) or Microcontrollers devices are widely used to transfer data with another in IoT. Due to the less processing power and storage capabilities, the data acquired from these devices must be transferred very securely in order to avoid some ethical issues. There are many cryptography approaches are applied to transfer data between IoT devices, but there are obvious chances to suspect encrypted messages by eavesdroppers. To add more secure data transfer, steganography mechanism is used to avoid the chances of suspicion as another layer of security. Based on the capabilities of IoT devices, low complexity images are used to hide the data with different hiding algorithms. In this research study, the secret data is encoded through QR code and embedded in low complexity cover images by applying image to image hiding fashion. The encoded image is sent to the receiving device via the network. The receiving device extracts the QR code from image using secret key then decoded the original data. The performance measure of the system is evaluated by the image quality parameters mainly Peak Signal to Noise Ratio (PSNR), Normalized Coefficient (NC) and Security with maintaining the quality of contemporary IoT system. Thus, the proposed method hides the precious information within an image using the properties of QR code and sending it without any suspicion to attacker and competes with the existing methods in terms of providing more secure communication between Microcontroller devices in IoT environment.","PeriodicalId":253040,"journal":{"name":"2020 2nd International Conference on Advancements in Computing (ICAC)","volume":"544 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115248553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smart Intelligent Troubleshooter to Solve Windows Operating System Specific Issues","authors":"D. Rajapakshe, M. Shamil, P.M.C.P. Paththinisekara, S.K. Liyanage, Udara Srimath S. Samaratunge Arachchillage, Anuththara Kuruppu","doi":"10.1109/ICAC51239.2020.9357148","DOIUrl":"https://doi.org/10.1109/ICAC51239.2020.9357148","url":null,"abstract":"While working on computers, people frequently confront with various kinds of problems, those beyond their extensive expertise. Microsoft Windows is the widely used Operating System running on numerous personal computers and the reason which gives more irritating problems that require to be addressed. Currently, troubleshooting is considered as a costly and time-consuming approach. The SAITA is an Artificial Intelligent Troubleshooting Agent that utilizes natural language generation, machine learning, and dependency resolving and ontology-based methodologies for solving most common Windows-specific issues within a short period of time than the traditional approach. The assistant learns from the accessible data and accomplishes the task for users as performed by human experts. The main objective of this exploration venture is to distinguish the constraints of existing troubleshooting software and create an AI troubleshooting assistant to provide solutions to fix the identified user issues. The use of this assistant would be economical as an IT help desk alternative in the industry. SAITA is developed to serve as a representative troubleshooter for fundamental user issues, service issues, application issues, and perform environment setup by analyzing software. This system will be able to solve the common Windows user's issues as same as a human with less time.","PeriodicalId":253040,"journal":{"name":"2020 2nd International Conference on Advancements in Computing (ICAC)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116526124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smart Exam Evaluator for Object-Oriented Programming Modules","authors":"M. Wickramasinghe, H.P Wijethunga, S. Yapa, D. Vishwajith, Udara Srimath S. Samaratunge Arachchillage, N.C Amarasena","doi":"10.1109/ICAC51239.2020.9357320","DOIUrl":"https://doi.org/10.1109/ICAC51239.2020.9357320","url":null,"abstract":"Worldwide educators considered that, automate the evaluation of programming language-based exams is a more challenging task due to its complexity and the diversity of solutions implemented by students. This research investigates and provides insight into the applicability and development of a java based online exam evaluator as a solution to traditional onerous manual exam assessment methodology. The proposed system allows students to take online exams in Java for an implemented source code in a practical exam, automatically reporting the results to the administrator simultaneously. Accordingly, this research examines existing methods, identifies their limitations, and explores the significance of introducing a smart object-oriented program-based exam evaluator as a solution. This method minimizes all human errors and makes the system more efficient. An automated answer checker checks and marks are given as human-counterpart and generate a report with possible suggestions for improvement of the answer scripts and generate a classification report to predict the student's final exam marks. This software application uses a Knowledge base, Abstract Syntax tree (AST), ANTLR, Image processing, and Machine Learning (ML) as key technologies. The proposed system gains a higher accuracy of 95% as performed by a separate human-counterpart. These results show a high level of accuracy and automate marking is the major emphasis to save human evaluation effort and maximize productivity.","PeriodicalId":253040,"journal":{"name":"2020 2nd International Conference on Advancements in Computing (ICAC)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116633002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assist: Rendering, Pipeline Management, and Pipeline Tracking Software","authors":"M. Salgado, H.A.D.D Hettiarachchi, T.U Munasinghe, K.A.U Fernando, Ishara Gamage, Thusithanjana Thilakarathna, Neil Crishantha Cooray","doi":"10.1109/icac51239.2020.9357162","DOIUrl":"https://doi.org/10.1109/icac51239.2020.9357162","url":null,"abstract":"Video production is one of the most dominant industries in the 21st century, and research into the automation of tasks associated with it has drastically increased. The production of videos take place in three stages: pre-production, production, and post-production. These three stages consist of script writing, scheduling, logistics, and other administration work. There are commercial products to automate these individual tasks. Incorporating all these software into video production can be expensive and difficult to manage. This study proposes the “Assist” software to handle all processes in video production. It has resulted in a product that covers the three main stages featuring scripts, storyboards, inventory management, production progress tracking and management, and rendering. The mentioned features were designed and developed using decision tree algorithm, PyQt5, general decimation algorithm, mesh simplification algorithm, and multi-variable regression.","PeriodicalId":253040,"journal":{"name":"2020 2nd International Conference on Advancements in Computing (ICAC)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134103873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deepfake Audio Detection: A Deep Learning Based Solution for Group Conversations","authors":"R.L.M.A.P.C. Wijethunga, D.M.K. Matheesha, A. Noman, K.H.B. De Silva, Muditha Tissera, L. Rupasinghe","doi":"10.1109/ICAC51239.2020.9357161","DOIUrl":"https://doi.org/10.1109/ICAC51239.2020.9357161","url":null,"abstract":"The recent advancements in deep learning and other related technologies have led to improvements in various areas such as computer vision, bio-informatics, and speech recognition etc. This research mainly focuses on a problem with synthetic speech and speaker diarization. The developments in audio have resulted in deep learning models capable of replicating natural-sounding voice also known as text-to-speech (TTS) systems. This technology could be manipulated for malicious purposes such as deepfakes, impersonation, or spoofing attacks. We propose a system that has the capability of distinguishing between real and synthetic speech in group conversations.We built Deep Neural Network models and integrated them into a single solution using different datasets, including but not limited to Urban-Sound8K (5.6GB), Conversational (12.2GB), AMI-Corpus (5GB), and FakeOrReal (4GB). Our proposed approach consists of four main components. The speech-denoising component cleans and preprocesses the audio using Multilayer- Perceptron and Convolutional Neural Network architectures, with 93% and 94% accuracies accordingly. The speaker diarization was implemented using two different approaches, Natural Language Processing for text conversion with 93% accuracy and Recurrent Neural Network model for speaker labeling with 80% accuracy and 0.52 Diarization-Error-Rate. The final component distinguishes between real and fake audio using a CNN architecture with 94 % accuracy. With these findings, this research will contribute immensely to the domain of speech analysis.","PeriodicalId":253040,"journal":{"name":"2020 2nd International Conference on Advancements in Computing (ICAC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132266805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EasyTalk: A Translator for Sri Lankan Sign Language using Machine Learning and Artificial Intelligence","authors":"D. Manoj Kumar, K. Bavanraj, S. Thavananthan, G.M.A.S. Bastiansz, S. Harshanath, J. Alosious","doi":"10.1109/ICAC51239.2020.9357154","DOIUrl":"https://doi.org/10.1109/ICAC51239.2020.9357154","url":null,"abstract":"Sign language is used by the hearing-impaired and inarticulate community to communicate with each other. But not all Sri Lankans are aware of the sign language or verbal languages and a translation is required. The Sri Lankan Sign Language is tightly bound to the hearing-impaired and inarticulate. The paper presents EasyTalk, a sign language translator which can translate Sri Lankan Sign Language into text and audio formats as well as translate verbal language into Sri Lankan Sign Language which would benefit them to express their ideas. This is handled in four separate components. The first component, Hand Gesture Detector captures hand signs using pre-trained models. Image Classifier component classifies and translates the detected hand signs. The Text and Voice Generator component produces a text or an audio formatted output for identified hand signs. Finally, Text to Sign Converter works on converting an entered English text back into the sign language based animated images. By using these techniques, EasyTalk can detect, translate and produce relevant outputs with superior accuracy. This can result in effective and efficient communication between the community with differently-abled people and the community with normal people.","PeriodicalId":253040,"journal":{"name":"2020 2nd International Conference on Advancements in Computing (ICAC)","volume":"264 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132994576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OMNISCIENT: A Branch Monitoring System for Large-scale Organizations","authors":"T. Jayasekara, Kalpani Omalka, Pamuditha Hewawelengoda, Chanuka Kanishka, Pradeepa Samarasinghe, L. Weerasinghe","doi":"10.1109/ICAC51239.2020.9357271","DOIUrl":"https://doi.org/10.1109/ICAC51239.2020.9357271","url":null,"abstract":"Omniscient is a system that enables higher-level management of massive organizations to remotely monitor and scrutinize the activities that take place in the branches from the head office itself by providing exclusive insight in the form of detailed reports on the employees' behaviour and performance daily, weekly and monthly. The system further monitors the branch and provides reports on any suspicious behaviour and also on the customers' activity within the branch premises. Omniscient rates the customer's level of satisfaction by capturing the customer's facial expressions and analyzing their emotions while they are being served. The employee face and dress recognition models have accuracies of 90.90% and 87.00% respectively while, employee activity detection has an accuracy of 89.00%. Customer emotion and miscellaneous activities detection models have the accuracies of 91.50% and 83.00% respectively. All of the aforementioned procedures were made possible by systematically analyzing the IP camera video footage obtained throughout the day to analyze the work productivity and performance of the branch as accurately as possible using deep learning and modern visual computing techniques like CNN, OpenCV, Haar Cascade classifier, face recognition, Dlib and Darknet.","PeriodicalId":253040,"journal":{"name":"2020 2nd International Conference on Advancements in Computing (ICAC)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127626520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}