{"title":"Pilot decontamination in multi-cell massive MIMO systems","authors":"S. Memon, Zhe Chen, F. Yin","doi":"10.1145/3018009.3018013","DOIUrl":"https://doi.org/10.1145/3018009.3018013","url":null,"abstract":"In this paper, the performance of massive multiple-input-multiple-output (M-MIMO) with an infinite number of the base station (BS) antennas is studied. The performance of such systems is only limited by the pilot contamination, which is a negative effect of reusing uplink pilot sequences in the neighboring cells. The channel estimates obtained in the presence of pilot contamination is contaminated, which results in inter-cell interference in the downlink data transmission. This paper shows the severity of pilot contamination in the multi-cell scenario and proposes the implementation of distinct orthogonal variable spreading factor code at each BS to mitigate the pilot contamination in the time-division duplex (TDD) multi-cell M-MIMO systems. The performance of the uplink and downlink sum rates of the proposed scheme is compared with those of the time-shifted pilot scheme for an infinite number of BS antennas. The simulation results show that the proposed scheme outperformed time-shifted pilot scheme, which shows the effectiveness and validity of the proposed scheme.","PeriodicalId":189252,"journal":{"name":"Proceedings of the 2nd International Conference on Communication and Information Processing","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116153873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 2nd International Conference on Communication and Information Processing","authors":"M. Ma, J. Ben-othman, Feng Gang, M. Ooki, Gihwan Cho","doi":"10.1145/3018009","DOIUrl":"https://doi.org/10.1145/3018009","url":null,"abstract":"The major goal and feature of the conference is to bring academic scientists, engineers, industry researchers together to exchange and share their experiences and research results, and discuss the practical challenges encountered and the solutions adopted.","PeriodicalId":189252,"journal":{"name":"Proceedings of the 2nd International Conference on Communication and Information Processing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124455394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emulation of analog audio circuits on FPGA using wave digital filters","authors":"D. Hernandez, Y. Hsieh, Jin-Huang Huang","doi":"10.1145/3018009.3018028","DOIUrl":"https://doi.org/10.1145/3018009.3018028","url":null,"abstract":"This paper presents a Wave Digital Filter (WDF) emulation system suitable for audio applications implemented on a Field Programmable Gate Array (FPGA). The WDF structures are programmed in the reconfigurable FPGA CompactRIO via LabVIEW. When the system runs in real-time, the circuit parameters are controlled and monitored via a graphical user interface on a host computer. A tone control and a three way crossover filter are presented as application examples. The WDF emulations are validated by measuring the signals from the FPGA I/O modules with a data acquisition system. Measurement results confirm the accuracy of the presented examples and the reliability of CompactRIO to perform WDF audio circuit emulations.","PeriodicalId":189252,"journal":{"name":"Proceedings of the 2nd International Conference on Communication and Information Processing","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125585367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling of physiological tremor with quaternion variant of extreme learning machines","authors":"S. Tatinati, Yubo Wang, K. Veluvolu","doi":"10.1145/3018009.3018053","DOIUrl":"https://doi.org/10.1145/3018009.3018053","url":null,"abstract":"Hand-held robotic surgical instruments are developed to acquire the maneuvered hand motion of the surgeon and then provide a control signal for real-time compensation of the physiological tremor in three-dimensional (3-D) space. For active tremor compensation, accurate modeling and estimation of physiological tremor is essential. The current modeling techniques that models tremor in 3D space consider the motion in three-axes [x, y, and z axes) as three separate one-dimensional signals and then perform modeling separately. Recently, it has been shown that for physiological tremor motion there exists cross dimensional coupling and it improves the modeling accuracy. Motivated by this, a quaternion variant for extreme learning machines is developed for accurate 3D modeling of tremor. The developed method is validated with real tremor data and the obtained results highlighted the suitability of this method for accurate tremor modeling in 3D space.","PeriodicalId":189252,"journal":{"name":"Proceedings of the 2nd International Conference on Communication and Information Processing","volume":"307 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132722855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A study of secure DBaaS with encrypted data transactions","authors":"Rodel Felipe Miguel, Akankshita Dash, Khin Mi Mi Aung","doi":"10.1145/3018009.3018042","DOIUrl":"https://doi.org/10.1145/3018009.3018042","url":null,"abstract":"The emergence of cloud computing allowed different IT services to be outsourced to cloud service providers (CSP). This includes the management and storage of user's structured data called Database as a Service (DBaaS). However, DBaaS requires users to trust the CSP to protect their data, which is inherent in all cloud-based services. Enterprises and Small-to-Medium Businesses (SMB) see this as a roadblock in adopting cloud services (and DBaaS) because they do not have full control of the security and privacy of the sensitive data they are storing on the cloud. One of the solutions is for the data owners to store their sensitive data in the cloud's storage services in encrypted form. However, to take full advantage of DBaaS, there should be a solution to manage the structured data while it is encrypted. Upcoming technologies like Secure Multi-Party Computing (MPC) and Fully Homomorphic Encryption (FHE) are recent advances in security that allow computation on encrypted data. FHE is considered as the holy grail of cryptography and the original blue print's processing performance is in the order of 1014 times longer than without encryption. Our work gives an insight on how far the state-of-the-art is into realizing it into a practical and viable solution for cloud computing data services. We achieved this by comparing two types of encrypted database management system (DBMS). We performed well-known complex database queries and measured the performance results of the two DBMS. We used an FHE-encrypted relational DBMS (RDBMS) and for specific query sets it takes only a few milliseconds, and the highest is in the order of 104 times longer than encrypted object-oriented DBMS (OODBMS). Aside from focusing on performance of the two databases, we also evaluated the network resource usage, standards availability, and application integration.","PeriodicalId":189252,"journal":{"name":"Proceedings of the 2nd International Conference on Communication and Information Processing","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121038826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance evaluation of depth map generation algorithm for stereo endoscopic camera","authors":"Jiho Chang, Jae-chan Jeong, Ho-chul Shin","doi":"10.1145/3018009.3018050","DOIUrl":"https://doi.org/10.1145/3018009.3018050","url":null,"abstract":"This paper introduces a performance evaluation method for algorithms that generates a depth map using an image from a stereo endoscopic camera for image processing of laparoscope operations. The depth image was created by using a space-time stereo method by illuminating various patterns on scenes consisting of models of the 3D-printed organ model and actual organs from a pig, and the ground truth image was generated for each sub-pixel unit to achieve high accuracy and high precision. Different algorithms were evaluated using the ground truth image data. The number of effective depth pixels compared to the ground truth and the distance error was measured from algorithms based on an edge-preserving filter as real-time algorithms and quasi-dense algorithms. This paper presents an analysis of each algorithm from its evaluation indices to determine which algorithm is appropriate to compute the depth map from laparoscopic images.","PeriodicalId":189252,"journal":{"name":"Proceedings of the 2nd International Conference on Communication and Information Processing","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121242964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software-defined load balancer in cloud data centers","authors":"Renuga Kanagavelu, Khin Mi Mi Aung","doi":"10.1145/3018009.3018014","DOIUrl":"https://doi.org/10.1145/3018009.3018014","url":null,"abstract":"Today's Data Centers deploy load balancers to balance the traffic load across multiple servers. Commercial load balancers are highly specialized machines that are located at the front end of a Data Center. When a client request arrives at the Data Center, the load balancer would determine the server to service this client's request. It routes the request to an appropriate server based on the native policies such as round-robin, random or others without considering the traffic state. It is not possible to implement arbitrary polices as load balancers as they are vendor specific. Apart from that, the piece of hardware is expensive and becomes single point of failure. In this paper, we develop a software defined network (SDN) based load balancing architecture with a load-aware policy using OpenFlow switch connected to SDN controller and commodity servers. It is less expensive when compared to the commercial load balancer and has programming flexibility in terms of applying arbitrary polices by writing modules in the SDN controller. With the facility of supporting multiple controller connections in commercially-available OpenFlow switches, the system is robust to single point of controller failures. We develop a prototype implementation of the proposed SDN based load balancer and carry out performance study to demonstrate its effectiveness.","PeriodicalId":189252,"journal":{"name":"Proceedings of the 2nd International Conference on Communication and Information Processing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123916254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How to make a telephone speech corpus","authors":"Yin Zhigang","doi":"10.1145/3018009.3018049","DOIUrl":"https://doi.org/10.1145/3018009.3018049","url":null,"abstract":"The telephone speech corpus is the basis of developing a Human-machine interaction system designed for communication and mobile internet. The main problem nowadays for constructing a qualified speech corpus is lack of a standard scheme. This research tries to find a standardization program which can make the corpus be established more efficiently and be used or shared easier. The specifications of constructing a speech corpus are also introduced in the paper. Finally, a telephone speech corpus, TSC973, be exemplified to illuminate the standardization program.","PeriodicalId":189252,"journal":{"name":"Proceedings of the 2nd International Conference on Communication and Information Processing","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127180880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reference-based data compression for genome in cloud","authors":"Haixiang Shi, Yongqing Zhu, J. Samsudin","doi":"10.1145/3018009.3018030","DOIUrl":"https://doi.org/10.1145/3018009.3018030","url":null,"abstract":"In this paper, we propose a new reference-based data compression method for efficient compressing of genome sequencing data in FASTQ format. With the advance of the next sequencing technology, the genome data can be generated faster and cheaper, which brings the challenges for efficient storage of these data when used in cloud computing. In order to efficiently store these types of genome data in cloud, content-aware compressing methods have to be developed to make use of the specific file structures. Compared with existing genome-specific compression methods, our proposed content-aware method focused on high compression ratio by taking advantages of repetitive nature of DNA sequence, and using reference genomes in compressing the sequences inside the FASTQ files. The benchmark results of 8 datasets show that our method can achieve highest compression ratio compared with existing FASTQ file compressors.","PeriodicalId":189252,"journal":{"name":"Proceedings of the 2nd International Conference on Communication and Information Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129805646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a librarian of the web","authors":"M. Kubek, H. Unger","doi":"10.1145/3018009.3018031","DOIUrl":"https://doi.org/10.1145/3018009.3018031","url":null,"abstract":"If the World Wide Web (WWW) is considered to be a huge library, it would need a librarian, too. Google and other web search engines are more or less just keyword databases and cannot fulfil this person's tasks in a sufficient manner. Therefore, an approach to improve cataloguing and classifying documents in the WWW is introduced and its efficiency demonstrated in first simulations.","PeriodicalId":189252,"journal":{"name":"Proceedings of the 2nd International Conference on Communication and Information Processing","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127934501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}