{"title":"Cloud computing meets 5G networks: efficient cache management in cloud radio access networks","authors":"Gurpreet Kaur, M. Moh","doi":"10.1145/3190645.3190674","DOIUrl":"https://doi.org/10.1145/3190645.3190674","url":null,"abstract":"Cellular network technology continues to advance to address increasing demands from the growing number of Internet of Things (IoT) devices. IoT has brought forth countless new devices competing for service on cellular networks. The latest in cellular technology is 5th Generation Cloud Radio Access Networks, or 5G C-RAN, which consists of applying cloud computing technology to the RAN architecture for better resource utilization and increased flexibility and scalability. A cache is included in each VM for speedy cellular network services; efficient cache management schemes are thus necessary and will ultimately provide better user experiences. This paper designs new cache management schemes and evaluates their performance. The new algorithms include a probability-based scoring scheme, a hierarchical, or tiered, approach, and enhancements to previously existing approaches. Performance evaluation shows that some of the new schemes, while simple in design, offer high cache hit ratios, low latency of request services, preferential treatment based on users' service levels, and a reduction in network traffic as compared with other existing and classic caching mechanisms. 
We believe that this work is important in advancing 5G technology for supporting IoT services, and is also useful to other cache management systems.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115964346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
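The abstract above does not spell out the scoring scheme's details. As a rough, hypothetical sketch of how a probability-based score with service-level preference might drive cache eviction (the class name, scoring rule, and parameters are all assumptions for illustration, not the paper's design):

```python
from collections import defaultdict

class ScoredCache:
    """Hypothetical score-based cache: an item's score grows with access
    frequency (an estimate of its request probability) weighted by the
    requesting user's service level; the lowest-scored item is evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}                   # key -> cached content
        self.score = defaultdict(float)   # key -> accumulated score

    def get(self, key, service_level=1.0):
        if key in self.store:
            self.score[key] += service_level   # reward hits, weighted by tier
            return self.store[key]
        return None                            # cache miss

    def put(self, key, value, service_level=1.0):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda k: self.score[k])
            del self.store[victim]             # evict the lowest-scored item
            del self.score[victim]
        self.store[key] = value
        self.score[key] += service_level

cache = ScoredCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # "a" now outscores "b"
cache.put("c", 3)  # evicts "b", the lowest-scored entry
```

Under this rule, a hit from a high-service-level user raises an item's score faster, so content requested by premium users survives eviction longer.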
{"title":"Towards reproducible research: automatic classification of empirical requirements engineering papers","authors":"Clinton Woodson, J. Hayes, S. Griffioen","doi":"10.1145/3190645.3190689","DOIUrl":"https://doi.org/10.1145/3190645.3190689","url":null,"abstract":"Research must be reproducible in order to make an impact on science and to contribute to the body of knowledge in our field. Yet studies have shown that 70% of research from academic labs cannot be reproduced. In software engineering, and more specifically requirements engineering (RE), reproducible research is rare, with datasets not always available or methods not fully described. This lack of reproducible research hinders progress, with researchers having to replicate an experiment from scratch. A researcher starting out in RE has to sift through conference papers, finding ones that are empirical, then must look through the data available from the empirical paper (if any) to make a preliminary determination if the paper can be reproduced. This paper addresses two parts of that problem, identifying RE papers and identifying empirical papers within the RE papers. Recent RE and empirical conference papers were used to learn features and to build an automatic classifier to identify RE and empirical papers. We introduce the Empirical Requirements Research Classifier (ERRC) method, which uses natural language processing and machine learning to perform supervised classification of conference papers. We compare our method to a baseline keyword-based approach. To evaluate our approach, we examine sets of papers from the IEEE Requirements Engineering conference and the IEEE International Symposium on Software Testing and Analysis. 
We found that the ERRC method performed better than the baseline method in all but a few cases.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121520523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
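As an illustration of the baseline side of the comparison, here is a minimal keyword-based classifier mirroring the paper's two-part problem (identify RE papers, then empirical papers within them). The keyword lists are invented for this sketch; the abstract does not give the actual ones:

```python
# Illustrative keyword lists -- not the paper's actual baseline vocabulary.
RE_KEYWORDS = ("requirement", "elicitation", "stakeholder", "traceability")
EMPIRICAL_KEYWORDS = ("experiment", "case study", "dataset", "evaluation")

def classify(abstract):
    """Two-stage keyword baseline: first decide whether a paper is RE,
    then whether it is also empirical. Returns (is_re, is_empirical)."""
    text = abstract.lower()
    is_re = any(k in text for k in RE_KEYWORDS)
    is_empirical = is_re and any(k in text for k in EMPIRICAL_KEYWORDS)
    return is_re, is_empirical

print(classify("We report an experiment on requirement traceability."))
```

The ERRC method replaces this hand-picked matching with features learned from labeled conference papers, which is what lets it outperform the baseline in most cases.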
{"title":"Multi-core real-time scheduling in multilevel feedback queue with starvation mitigation (MLFQ-RT)","authors":"K. Hoganson","doi":"10.1145/3190645.3190668","DOIUrl":"https://doi.org/10.1145/3190645.3190668","url":null,"abstract":"Process scheduling for real-time processes is a critical function of real-time operating systems, which are required to guarantee soft and hard deadlines for completing real-time processes. Multi-Level Feedback Queue (MLFQ) scheduling mechanisms intrinsically support scheduling that favors short CPU bursts to the complete exclusion of all other processes in the ready queues. This MLFQ feature has been extended to support meeting both hard and soft real-time process deadlines in robotics and automated manufacturing applications. This research explores a new derivative of MLFQ for real-time scheduling called MLFQ-Real-Time (MLFQ-RT), investigated through simulation for multi-core processors. The MLFQ-RT real-time extension for multi-core processors builds upon prior research that solved a known weakness of MLFQ scheduling: a vulnerability to starvation of processes in the lowest-priority queue, which leaves the operating system unable to guarantee that all processes will make progress. 
This scheduling algorithm is extended to multi-core processors, with three hypotheses examined and validated through simulation, demonstrating hard and soft real-time process scheduling while maintaining the previously demonstrated mitigation of starvation in low-priority queues.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124594811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
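The MLFQ-RT design itself is not detailed in the abstract. The sketch below shows only the base mechanism it builds on: an MLFQ where a process that uses its full quantum is demoted one level, and a periodic priority boost mitigates starvation in the lowest queue. All parameter values and names are illustrative; the real-time queues of MLFQ-RT, which would sit above these levels and preempt them, are omitted:

```python
from collections import deque

def mlfq_run(bursts, levels=3, quantum=2, boost_every=8):
    """Simulate base MLFQ with a periodic priority boost.
    bursts: {pid: total CPU time needed}. Returns completion order."""
    queues = [deque() for _ in range(levels)]
    for pid in bursts:
        queues[0].append(pid)          # all processes start at top priority
    remaining = dict(bursts)
    done, clock = [], 0
    while remaining:
        if clock and clock % boost_every == 0:
            # Priority boost: move every waiting process back to the top
            # queue so nothing starves in the lowest-priority queue.
            for q in queues[1:]:
                while q:
                    queues[0].append(q.popleft())
        level = next(i for i, q in enumerate(queues) if q)
        pid = queues[level].popleft()
        run = min(quantum, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            done.append(pid)
            del remaining[pid]
        else:
            # Used its full quantum: demote one level (long CPU bursts sink).
            queues[min(level + 1, levels - 1)].append(pid)
    return done

print(mlfq_run({"A": 3, "B": 1, "C": 6}))
```

The short-burst process B finishes first, as the abstract describes, while the boost guarantees the long-running C still makes progress.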
{"title":"Implementing webIDs + biometrics","authors":"Taylor Martin, Justin Zhang, William Nick, Cory Sabol, A. Esterline","doi":"10.1145/3190645.3190711","DOIUrl":"https://doi.org/10.1145/3190645.3190711","url":null,"abstract":"In this paper, our main focus will be on the integration of WebIDs and biometrics. Biometrics is the process of utilizing a user's physical characteristics to identify them. There are three types of authentication. Knowledge-based authentication is based on the user's knowledge: the user presents a PIN or a password to gain access. Token-based authentication uses some form of physical identification to verify the user. The final form of authentication is biometric-based authentication. Genetic and Evolutionary Feature Extraction (GEFE) is a feature extraction technique that can be used to evolve local binary pattern (LBP) based feature extractors that are disposable for users of biometric-based authentication systems. LBP compares the intensity value of a pixel with those of its neighboring pixels to form a texture pattern. Each segmented region has its own histogram that stores the frequency of the unique texture patterns occurring in that region. GEFE is an instance of genetic and evolutionary computation (GEC). A WebID is a uniform resource identifier (URI) that represents some agent, such as a person, organization, group, or device. A URI is a sequence of characters that identifies a logical or physical resource. Many services that require any type of authentication rely on centralized systems. This means that users are forced to have a different account and identifier for each service they use. For every service, a new registration needs to be created, which can be a burden on both the user and the service. A WebID is associated with a user's WebID profile, which contains a set of relations that describe the user. 
When the user's profile is de-referenced, it will resolve to their profile document with structured data in RDF. WebIDs provide a relatively simple and safe alternative to traditional username/password user verification. However, they can still be compromised if an attacker gains direct access to a user's computer, or if the user's unique certificate is stolen. Adding biometrics to the authentication process can help solve this issue since biometric data (e.g., fingerprints, iris scans) is unique and not easily duplicated. If a biometric element can be added to WebID profiles, then users could be verified through both their WebID and biometric authentication. We are implementing a method of user verification that is convenient, widely applicable via the Internet, and protected against intrusion. Traditionally, sites store user log-in information on their own servers.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125168966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
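A minimal illustration of the LBP descriptor described above, using the basic 8-neighbour, 3x3 variant: each interior pixel's neighbours are thresholded against the centre pixel to form an 8-bit pattern, and a region's descriptor is the histogram of those patterns. The evolutionary part (GEFE selecting regions and extractor parameters) is not shown:

```python
import numpy as np

def lbp_histogram(region):
    """Histogram of 8-bit local binary patterns over one image region."""
    h, w = region.shape
    # 8 neighbours, clockwise from top-left; bit order is a free choice.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = np.zeros(256, dtype=int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = region[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if region[y + dy, x + dx] >= centre:   # threshold vs. centre
                    code |= 1 << bit
            hist[code] += 1
    return hist

img = np.array([[9, 9, 9],
                [9, 5, 9],
                [9, 9, 9]])
print(lbp_histogram(img))  # one interior pixel, all neighbours >= centre: code 255
```

Comparing histograms like this one (e.g., per region of a face image) is what lets an LBP-based extractor serve as the biometric element of the authentication scheme.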
{"title":"Improving offensive cyber security assessments using varied and novel initialization perspectives","authors":"Jacob Oakley","doi":"10.1145/3190645.3190673","DOIUrl":"https://doi.org/10.1145/3190645.3190673","url":null,"abstract":"Offensive cyber security assessment methods such as red teaming and penetration testing have grown in parallel with evolving threats to evaluate traditional and diverging attack surfaces. This paper provides a taxonomy of ethical hacker conducted offensive security assessments by categorization of their initial evaluation perspectives. Included in this taxonomy are the traditional assessment perspectives, which initiate analysis and attack simulation against networks either externally, from within a DMZ, or internally. A novel paradigm of critical perspective as an initial point for offensive security evaluation processes is also presented. This initialization from a critical perspective bolsters the holistic capabilities of offensive cyber security assessment by providing a new offensive security assessment option intended to begin evaluation at the last line of defense between malicious actors and the crown jewels of an organization. From such a perspective, assessment then proceeds outward from the deepest levels of trust and security. This method will be shown to improve the ability to mitigate the impact of threats regardless of whether they originate from within or outside an organization. 
As such, the assessment initialization at a critical perspective provides a new approach to offensive security assessment different from what has traditionally been practiced by red teams and penetration testers.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127254917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A software engineering schema for data intensive applications","authors":"S. Suthaharan","doi":"10.1145/3190645.3190675","DOIUrl":"https://doi.org/10.1145/3190645.3190675","url":null,"abstract":"The features developed by a software engineer (system specification) for a software system may significantly differ from the features required by a user (user requirements) for their envisioned system. These discrepancies generally result from the complexity of the system, the vagueness of the user requirements, or the lack of knowledge and experience of the software engineer. The principles of software engineering and the recommendations of the ACM's Software Engineering Education Knowledge (SEEK) document can provide solutions to minimize these discrepancies and, in turn, improve the quality of a software system and increase user satisfaction. In this paper, a software development framework, called SETh, is presented. The SETh framework consists of a set of visual models that support software engineering education and practices in a systematic manner. It also enables backward tracking/tracing and forward tracking/tracing capabilities - two important concepts that can facilitate greenfield and evolutionary software engineering projects. 
The SETh framework connects every step of the development of a software system tightly; hence, the learners and the experienced software engineers can study, understand, and build efficient software systems for emerging data science applications.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126876474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using locality sensitive hashing to improve the KNN algorithm in the mapreduce framework","authors":"S. Bagui, A. Mondal, S. Bagui","doi":"10.1145/3190645.3190700","DOIUrl":"https://doi.org/10.1145/3190645.3190700","url":null,"abstract":"The K-Nearest Neighbor (KNN) algorithm is one of the most widely used algorithms in data mining for classification and prediction. The algorithm has several applications: in facial detection when used with deep learning, in biometric security applications, etc. The traditional KNN algorithm involves an iterative process of computing the distance between a test data point and every data point in the training dataset, and classifying the object based on the closest training sample. This method first selects the K nearest training data points for classifying a test data point and then predicts the test sample's class based on the majority class among those neighbors. If both the training and test datasets are large, this conventional form can be considered computationally expensive. Our main goal was to reduce the massive calculation required to classify a data vector; with this intention, the training dataset was split into several buckets. The KNN algorithm was then performed inside a bucket, instead of iterating over the whole training dataset. We used the Jaccard coefficient to determine the degree of similarity of a data vector with some arbitrarily defined data points P and placed similar data points in the same bucket. This is the core functionality of our hash function: it determines the bucket number where similar data vectors will be placed. Unlike standard hashing, our approach maximizes the probability of hash collisions to preserve locality sensitivity. Both the conventional and proposed methods were implemented in Hadoop's MapReduce framework. 
Hadoop provides an architecture for handling large datasets on a computer cluster in a distributed manner with massive scalability. The use of locality sensitive hashing in KNN in Hadoop's MapReduce environment took less time than conventional KNN to classify a new data object.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"29 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116712974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
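A single-machine sketch of the bucketing idea described above: each vector is hashed to the pivot point it is most Jaccard-similar to, so similar vectors deliberately collide in the same bucket, and KNN then searches only the query's bucket. The MapReduce plumbing the paper uses is omitted, and the pivot sets and data are toy values:

```python
from collections import defaultdict, Counter

def jaccard(a, b):
    """Jaccard coefficient between two sets of feature indices."""
    return len(a & b) / len(a | b) if a | b else 0.0

def build_buckets(train, pivots):
    """Locality-sensitive hash: assign each training vector to the bucket
    of its most Jaccard-similar pivot, so similar vectors collide."""
    buckets = defaultdict(list)
    for features, label in train:
        b = max(range(len(pivots)), key=lambda i: jaccard(features, pivots[i]))
        buckets[b].append((features, label))
    return buckets

def knn_in_bucket(buckets, pivots, query, k=3):
    """Majority vote among the k most similar vectors, searching only the
    query's bucket instead of the whole training dataset."""
    b = max(range(len(pivots)), key=lambda i: jaccard(query, pivots[i]))
    nearest = sorted(buckets[b], key=lambda fv: -jaccard(query, fv[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy data: sets of feature indices with class labels.
train = [({1, 2, 3}, "x"), ({1, 2, 4}, "x"), ({7, 8, 9}, "y"), ({7, 8}, "y")]
pivots = [{1, 2, 3, 4}, {7, 8, 9}]   # arbitrarily defined data points P
buckets = build_buckets(train, pivots)
print(knn_in_bucket(buckets, pivots, {1, 3}, k=2))
```

In the MapReduce setting, the bucket assignment naturally becomes the mapper's key emission and the per-bucket KNN the reducer's job.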
{"title":"Detail preservation of morphological operations through image scaling","authors":"Kaleb E. Smith, Chunhua Dong, M. Naghedolfeizi, Xiangyan Zeng","doi":"10.1145/3190645.3190691","DOIUrl":"https://doi.org/10.1145/3190645.3190691","url":null,"abstract":"Morphological techniques probe an image with a structuring element. By varying the size and the shape of structuring elements, geometrical information about different parts of an image and their interrelation can be extracted for applications such as delineating boundaries, identifying components, or removing noise. While large structuring elements benefit noise elimination, they may be disadvantageous for preserving details in an image. Taking this into consideration, in this paper we propose an image scaling method that preserves detailed information when applying morphological operations to remove noise. First, a binary image is obtained, from which a Preservation Ratio Scalar (PRS) is calculated. The PRS is used for upscaling the image before morphological operations, which aims at preserving fine structural details that would otherwise be eliminated in the original image. Finally, the morphologically processed image is downscaled using the PRS. Experiments on target detection demonstrated the effectiveness of the proposed method in preserving structural details such as edges while eliminating noise.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130175227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
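A numpy-only sketch of the upscale/operate/downscale pipeline: a fixed upscale factor stands in for the PRS (which the paper computes from the binary image itself), and a 3x3 binary opening stands in for the morphological operator:

```python
import numpy as np

def dilate(img):
    """3x3 binary dilation: a pixel is set if any 3x3 neighbour is set."""
    p = np.pad(img, 1)
    h, w = img.shape
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out

def erode(img):
    """3x3 binary erosion: a pixel stays set only if its whole 3x3
    neighbourhood is set."""
    p = np.pad(img, 1)
    h, w = img.shape
    out = np.ones_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out

def open_with_scaling(img, prs):
    """Upscale by the PRS factor, apply a morphological opening
    (erosion then dilation) to remove noise, then downscale back."""
    up = np.kron(img, np.ones((prs, prs), dtype=img.dtype))  # upscale
    opened = dilate(erode(up))                               # opening
    return opened[::prs, ::prs]                              # downscale

img = np.zeros((4, 4), dtype=int)
img[1:3, 1:3] = 1   # a 2x2 feature that a plain 3x3 opening would erase
```

Without the scaling step, the 2x2 feature is smaller than the structuring element and the opening erases it entirely; upscaled by 2, it survives the same opening and is recovered intact after downscaling.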
{"title":"A team software process approach to database course","authors":"R. Tashakkori, Zachary W. Andrews","doi":"10.1145/3190645.3190676","DOIUrl":"https://doi.org/10.1145/3190645.3190676","url":null,"abstract":"In recent years, some programs have created a database track to provide an opportunity for students to further their database skills and expertise. As database management systems are widely utilized in the real world and have become an integral part of computer science applications, it is critical for students to gain practical experience in this field.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130456802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preliminary studies of honey queen bee conditions using Cyranose 320 nose technology","authors":"G. Johnson, Drashti Patel, Adel Alluhayb, Nannan Li, Chi Shen, T. Webster","doi":"10.1145/3190645.3190696","DOIUrl":"https://doi.org/10.1145/3190645.3190696","url":null,"abstract":"Over the last ten years, the beekeeping industry has been struggling to understand and stop the sudden widespread loss or collapse of honey bee colonies, known collectively as Colony Collapse Disorder (CCD), in the U.S. and around the world. While honey bee colonies experience many stressors that could cause a colony to collapse, we are focusing on the quality, health, and reproductive ability of honey bee queens. The purpose of this line of research is to identify relationships between the pheromone signatures of honey bee queens and the quality of honey bee queens. The ultimate goal of this research is to find a reliable, non-invasive tool that does not harm the queen, but still allows beekeepers to make informed decisions about purchasing honey bee queens and deciding when to replace a queen bee before a colony collapses. In this portion of the research, we use an electronic nose (e-nose) device, which is a device that digitizes smells. The scope of this paper is to determine whether an e-nose device is viable for our research, and if so, to determine the best way to configure its settings to improve data collection. We also considered gathering data on queen bee pheromone production, since it is an indicator of a queen bee's reproductive ability. We were able to use the e-nose device to digitize pheromone signatures from 20 queen bees. Using Microsoft Excel and the R programming language, we were able to see patterns that will be useful in configuring the e-nose device for future research. 
We also noticed an early indication that the e-nose can distinguish between a healthy bee and a sick bee.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131573065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}