{"title":"An Iris-Based Authentication Framework to Prevent Presentation Attacks","authors":"H. Shahriar, Hisham M. Haddad, Mahbubul Islam","doi":"10.1109/COMPSAC.2017.60","DOIUrl":"https://doi.org/10.1109/COMPSAC.2017.60","url":null,"abstract":"Attacks on authentication services are major security concerns. Password-based authentication systems can be compromised using known techniques, such as brute force and dictionary-based attacks. Biometric-based authentication systems are becoming the preferred choice to replace password-based authentication systems. Among several variations of biometrics (e.g., face, eye, fingerprint), iris-based authentication is commonly used in various applications. In iris-based authentication systems, iris images from legitimate users are captured and certain features are extracted to be used for matching during the authentication process. Literature works suggest that iris-based authentication systems can be subject to presentation attacks where an attacker obtains printed copy of the victim's eye image and displays it in front of an authentication system to gain unauthorized access. Such attacks can be performed by displaying static eye images on mobile devices or ipads (known as screen attacks). Since human iris features so not changed, once the iris image is compromised, it is hard to avoid this type of attack. To address this challenge, this paper proposes a framework for iris code generation by considering the changes of the area between the pupil and the sclera due to light density level. The proposed approach relies on capturing iris images using near infrared light. We train HaarCascade and LBP classifiers to capture the area between the pupil and the cornea. The image of iris is then stored in the database. This approach also generates a QR code from the iris. The code acts as a password and the user is required to provide it during authentication. A prototype is built using OpenCV platform tool. The prototype has been tested using samples obtained from publicly available iris database. The initial results show that the proposed approach has lower false positive and false negative rates.","PeriodicalId":6556,"journal":{"name":"2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC)","volume":"3 1","pages":"504-509"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74976355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Real-Time Non-invasive Hemoglobin Level Detection Using Video Images from Smartphone Camera","authors":"G. Ahsan, Md Osman Gani, M. K. Hasan, Sheikh Iqbal Ahamed, W. Chu, Mohammad Adibuzzaman, Joshua Field","doi":"10.1109/COMPSAC.2017.29","DOIUrl":"https://doi.org/10.1109/COMPSAC.2017.29","url":null,"abstract":"Hemoglobin level detection is necessary for evaluating health condition in the human. In the laboratory setting, it is detected by shining light through a small volume of blood and using a colorimetric electronic particle counting algorithm. This invasive process requires time, blood specimens, laboratory equipment, and facilities. There are also many studies on non-invasive hemoglobin level detection. Existing solutions are expensive and require buying additional devices. In this paper, we present a smartphone-based non-invasive hemoglobin detection method. It uses the video images collected from the fingertip of a person. We hypothesized that there is a significant relation between the fingertip mini-video images and the hemoglobin level by laboratory \"gold standard.\" We also discussed other non-invasive methods and compared with our model. Finally, we described our findings and discussed future works.","PeriodicalId":6556,"journal":{"name":"2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC)","volume":"63 1","pages":"967-972"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77255517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Finding All Breadth First Full Spanning Trees in a Directed Graph","authors":"H. Khalil, Y. Labiche","doi":"10.1109/COMPSAC.2017.128","DOIUrl":"https://doi.org/10.1109/COMPSAC.2017.128","url":null,"abstract":"This paper proposes an algorithm that is particularly concerned with generating all possible distinct spanning trees that are based on breadth-first-search directed graph traversal. The generated trees span all edges and vertices of the original directed graph. The algorithm starts by generating an initial tree, and then generates the rest of the trees using elementary transformations. It runs in O(E+T) time where E is the number of edges and T is the number of generated trees. In the worst-case scenario, this is equivalent to O (E+En/Nn) time complexity where N is the number of nodes in the original graph. The algorithm requires O(T) space. However, possible modifications to improve the algorithm space complexity are suggested. Furthermore, experiments are conducted to evaluate the algorithm performance and the results are listed.","PeriodicalId":6556,"journal":{"name":"2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC)","volume":"1 1","pages":"372-377"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77113726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supporting the Specification and Serialization of Planned Architectures in Architecture-Driven Modernization Context","authors":"A. D. S. Landi, F. Chagas, B. M. Santos, Renato S. Costa, R. Durelli, Ricardo Terra, V. V. D. Camargo","doi":"10.1109/COMPSAC.2017.225","DOIUrl":"https://doi.org/10.1109/COMPSAC.2017.225","url":null,"abstract":"Architecture-Driven Modernization (ADM) intends to standardize software reengineering by relying on a family of standard metamodels. Knowledge-Discovery Metamodel (KDM) is the main ADM ISO metamodel aiming at representing all aspects of existing legacy systems. One of the internal KDM metamodels is called Structure, responsible for representing architectural abstractions (Layers, Components and Subsystems) and their relationships. Planned Architecture (PA) is an artifact that involves not only the architectural abstractions of the system but also the access rules that must exist between them and be maintained over time. Although PAs are frequently used in Architecture-Conformance Checking processes, up to this moment, there is no contribution showing how to specify and serialize PAs in ADM-based modernization projects. Therefore, in this paper we present an approach that i) involves a DSL (Domain-Specific Language) for the specification of PAs using the Structure metamodel concepts, and ii) proposes a strategy for the serialization of PAs as a Structure metamodel instance without modifying it. We have conducted a comparison between DCL-KDM and other techniques for specifying and generating PAs. The results showed that DCL-KDM is an efficient alternative to to generate instances of the Structure metamodel as a PA and to serialize it.","PeriodicalId":6556,"journal":{"name":"2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC)","volume":"31 1","pages":"327-336"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76180631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Teaching Supported by ICTs to Reduce the School Failure in the Project “Scuola Dei Compiti”","authors":"A. Barana, Michele Fioravera, M. Marchisio, S. Rabellino","doi":"10.1109/COMPSAC.2017.44","DOIUrl":"https://doi.org/10.1109/COMPSAC.2017.44","url":null,"abstract":"Adaptive teaching consists in a teaching style aimed at overcoming inner differences among learners by providing them tailored ways to achieve outcomes according to their learning style, culture and needs. ICTs play an important role in supporting teachers with the adoption of adaptive strategies. Adaptive teaching acquires relevance in contexts of scholastic remediation and in reducing failures. The paper presents and discusses how adaptive teaching is used within \"Scuola dei Compiti\", a project proposed by the City of Turin (Italy) in collaboration with the University and the Polytechnic of Turin, aimed at reducing the drop-out rate and at enhancing scholastic success. University students help groups of 5-6 students aged 13-15 with their difficulties in the main subjects (Mathematics, Italian Language, Foreign Languages). Part of the Mathematics courses adopt a set of innovative computer based methodologies developed by the Department of Mathematics of the University of Turin, including a virtual learning environment integrated with an advanced computing environment, an automatic assessment system and a web-conference tool, which supports adaptive teaching. The project has involved over 7500 students in the last 5 years, the results achieved, discussed in the paper, show the effectiveness of these methodologies with respect to traditional teaching methods.","PeriodicalId":6556,"journal":{"name":"2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC)","volume":"42 1","pages":"432-437"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76515741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identifying Fundamental Goals from Objects in a Scenario to Facilitate System Introduction","authors":"Y. Chubachi, H. Kaiya","doi":"10.1109/COMPSAC.2017.143","DOIUrl":"https://doi.org/10.1109/COMPSAC.2017.143","url":null,"abstract":"To introduce suitable and innovative information systems into our daily activity, we have to know actual goals of its stakeholders. However, requirements analysts cannot directly know such goals, and stakeholders usually cannot state them explicitly even with the help of why questions. Analysts can directly observe or know the way of tasks in the activity, i.e. scenario. In this paper, we propose an early requirements analysis method using a scenario and goals in a bottom up way. In the method, the analysts first identify superficial objects and their attributes from the scenario, and construct a superficial goal model. The analysts then identify conceptual objects which do not appear in the scenario by combining and generalizing the superficial objects and goals. On the basis of the objects and superficial goals, the analysts identify the fundamental goals of the stakeholders. We are applying the method to examples to confirm how it works.","PeriodicalId":6556,"journal":{"name":"2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC)","volume":"43 1","pages":"276-277"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76807757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Searching Promising Researchers through Network Centrality Measures of Co-author Networks of Technical Papers","authors":"Masanori Fujita, Hiroto Inoue, T. Terano","doi":"10.1109/COMPSAC.2017.205","DOIUrl":"https://doi.org/10.1109/COMPSAC.2017.205","url":null,"abstract":"This paper proposes a method of searching for promising young researchers using network centrality of co-author relationship. This method estimates researchers' potential based on transition of betweenness centralities in co-author networks. We thought that excellent researchers always show high betweenness centralities, in contrast, betweenness centralities of promising researchers will grow with time. As a result of network analysis, we detected some researchers with growing betweenness centralities from the academic paper database of Japan Science and Technology Agency (JST). In addition, Japan Society for the Promotion of Science (JSPS) Research Fellows who had been evaluated as a promising young researcher by JSPS showed growing betweenness centralities. Therefore, our proposed method is useful to search for promising young researchers in collaborative research fields such as biological science.","PeriodicalId":6556,"journal":{"name":"2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC)","volume":"5 1","pages":"615-618"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79789392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DeDuCT: A Data Dependence Based Concern Tagger for Modularity Analysis","authors":"Andrea Fornaia, E. Tramontana","doi":"10.1109/COMPSAC.2017.98","DOIUrl":"https://doi.org/10.1109/COMPSAC.2017.98","url":null,"abstract":"Modularity of a software system can be assessed once responsibilities of each method and class have been determined. Generally, developers attribute responsibilities to methods and classes manually. This can be problematic given that it relies on developers judgement and effort. This paper proposes an approach to automatically attribute concern tags to each instructions. The approach is based on taint analysis to determine which code lines are related to each other by data dependence. Moreover, Java APIs provide the tags used to mark code lines. The automatic concern tagging that we bring about is used to find out how responsibilities are spread in the code, and then to suggest refactoring activities in case tangling occurs.","PeriodicalId":6556,"journal":{"name":"2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC)","volume":"24 1","pages":"463-468"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81539078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Binarization Scheme for Real-Valued Biometric Feature","authors":"Jialiang Peng, Bian Yang","doi":"10.1109/COMPSAC.2017.26","DOIUrl":"https://doi.org/10.1109/COMPSAC.2017.26","url":null,"abstract":"Biometric binarization is the feature-type transformation that converts a specific feature representation into a binary representation. It is a fundamental issue to transform the real-valued feature vectors to the binary vectors in biometric template protection schemes. The transformed binary vectors should be high for both discriminability and privacy protection when they are employed as the input data for biometric cryptosystems. In this paper, we propose a novel binarization scheme based on random projection and random Support Vector Machine (SVM) to further enhance the security and privacy of biometric binary vectors. The proposed scheme can generate a binary vector of any given length as an ideal input for biometric cryptosystems. In addition, the proposed scheme is independent of the biometric feature data distribution. Several comparative experiments are conducted on multiple biometric databases to show the feasibility and efficiency of the proposed scheme.","PeriodicalId":6556,"journal":{"name":"2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC)","volume":"1 1","pages":"724-729"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79727184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HybridFS — A High Performance and Balanced File System Framework with Multiple Distributed File Systems","authors":"Lidong Zhang, Yongwei Wu, Ruini Xue, Tse-Chuan Hsu, Hongji Yang, Yeh-Ching Chung","doi":"10.1109/COMPSAC.2017.140","DOIUrl":"https://doi.org/10.1109/COMPSAC.2017.140","url":null,"abstract":"In the big data era, the distributed file system is getting more and more significant due to the characteristics of its scale-out capability, high availability, and high performance. Different distributed file systems may have different design goals. For example, some of them are designed to have good performance for small file operations, such as GlusterFS, while some of them are designed for large file operations, such as Hadoop distributed file system. With the divergence of big data applications, a distributed file system may provide good performance for some applications but fails for some other applications, that is, there has no universal distributed file system that can produce good performance for all applications. In this paper, we propose a hybrid file system framework, HybridFS, which can deliver satisfactory performance for all applications. HybridFS is composed of multiple distributed file systems with the integration of advantages of these distributed file systems. In HybridFS, on top of multiple distributed file systems, we have designed a metadata management server to perform three functions: file placement, partial metadata store, and dynamic file migration. The file placement is performed based on a decision tree. The partial metadata store is performed for files whose size is less than a few hundred Bytes to increase throughput. The dynamic file migration is performed to balance the storage usage of distributed file systems without throttling performance. We have implemented HybridFS in java on eight nodes and choose Ceph, HDFS, and GlusterFS as designated distributed file systems. The experimental results show that, in the best case, HybridFS can have up to 30% performance improvement of read/write operations over a single distributed file system. In addition, if the difference of storage usage among multiple distributed file systems is less than 40%, the performance of HybridFS is guaranteed, that is, no performance degradation.","PeriodicalId":6556,"journal":{"name":"2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC)","volume":"40 1","pages":"796-805"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79978763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}