{"title":"List of reviewers","authors":"F. Niederman, A. Aggarwal","doi":"10.3109/14397595.2016.1132811","DOIUrl":"https://doi.org/10.3109/14397595.2016.1132811","url":null,"abstract":"The Editor-in-Chief of Modern Rheumatology and the members of the Editorial Board as well as the Advisory Board would like to thank all the individuals who dedicated their considerable time and their excellent work as reviewers. This list covers the period from November 15, 2014 to November 20, 2015. An asterisk indicates peer review of two or more manuscripts and the number in parentheses indicates the total reviews of who handled more than 7 manuscripts during that period. We would once again deeply appreciate frequent reviewers for the great contribution to the Modern Rheumatology.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131422322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mouth covered detection for yawn","authors":"M. M. Ibrahim, John S. Soroghan, L. Petropoulakis","doi":"10.1109/ICSIPA.2013.6707983","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6707983","url":null,"abstract":"Yawn is one of the common fatigue sign phenomena. The common technique to detect yawn is based upon the measurement of mouth opening. However, the spontaneous human action to cover the mouth during yawn can prevent such measurements. This paper presents a new technique to detect the covered mouth by employing the Local Binary Pattern (LBP) features. Subsequently, the facial distortions during the yawn process are identified by measuring the changes of wrinkles using Sobel edges detector. In this research the Strathclyde Facial Fatigue (SFF) database that contains genuine fatigue signs is used for training, testing and evaluation of the developed algorithms. This database was created from sleep deprivation experiments that involved twenty participants.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128356779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Depth Image Layers Separation (DILS) algorithm of image view synthesis based on stereo vision","authors":"N. A. Manap, J. Soraghan, L. Petropoulakis","doi":"10.1109/ICSIPA.2013.6707978","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6707978","url":null,"abstract":"A new Depth Image Layers Separation (DILS) algorithm for synthesizing inter-view images based on disparity depth map layers representation is presented. The approach is to separate the depth map into several layers identified through histogram-based clustering. Each layer is extracted using inter-view interpolation to create objects based on location and depth. DILS is a new paradigm in selecting interesting image locations based on depth, but also in producing new image representations that allow objects or parts of an image to be described without the need of segmentation and identification. The image view synthesis can reduce the configuration complexity of multi-camera arrays in 3D imagery and free-viewpoint applications. The simulation results show that depth layer separation is able to create inter-view images that may be integrated with other techniques such as occlusion handling processes. The DILS algorithm can be implemented using both simple as well as sophisticated stereo matching methods to synthesize inter-view images.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129636763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Level View Synthesis (MLVS) based on Depth Image Layer Separation (DILS) algorithm for multi-camera view system","authors":"N. A. Manap, J. Soraghan, L. Petropoulakis","doi":"10.1109/ICSIPA.2013.6707980","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6707980","url":null,"abstract":"A novel Multi-Level View Synthesis (MLVS) approach for 3D vision and free-viewpoint video applications, such as light field imaging, is presented. MLVS exploits the advantages of Depth Image Layer Separation (DILS), a new inter-view interpolation algorithm, by extending stereo to multiple camera configurations. The technique finds the pixel correspondences and synthesis through two levels of matching and synthesis process. The main aim of MLVS is to create a multi-camera view system through a reduced number of actual image acquisition cameras, whilst maintaining the quality of the virtual view synthesis images. The proposed technique is shown to offer improved performance and provide additional views with fewer cameras compared to conventional high volume camera configurations for free-viewpoint video acquisition. Thus, substantial cost savings can ensue in processing, calibration, bandwidth and storage requirements.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127331620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correlating video-captured 3D foot model with foot loading during walking","authors":"J. Al-Baghdadi, A. Chong, P. Milburn, R. Newsham-West","doi":"10.1109/ICSIPA.2013.6707996","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6707996","url":null,"abstract":"The intensity of research to study the functioning of the human foot and the body-weight loading impact on its performance has increased considerably in the last five years. This type of research is particularly important for injured or deformed foot. Low-cost HD video cameras are becoming popular for capturing accurate three-dimensional (3D) model of the human body parts and they have shown to be useful for the study of the human foot during walking. A research was carried out to determine whether continuous capture of the 3D models of the foot during walking can assist in the understanding of the loading characteristics of the foot. This paper provides discussion on the methods used to correlate the video-captured 3D model of the foot and the force plate recording of the foot-loading during walking. The discussion covers the test methods and the results of the study. The studies show that the techniques developed produce precise correlation between foot loading and the video-captured 3D models and these data could be used for the mentioned applications.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130830404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accurate videogrammetric data for human limb movement research","authors":"A. Chong, J. Al-Baghdadi","doi":"10.1109/ICSIPA.2013.6707995","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6707995","url":null,"abstract":"Recently, off-the-shelf HD video cameras are recognized low-cost video capture for the study of human movements associating with sport training, sport performance evaluation, physical impairment evaluation and rehabilitation evaluation. The data required for these applications are usually dimensions, 3D distance, angular elements and speed of movement of the various body components such as the head, trunk and limbs. More complex data include isoline plots, profile and cross-section and 3D textured models of these body parts. This paper focuses on the developed techniques that are used for acquiring high accuracy data for these applications. In the paper, the results of three current case studies are provided to show the quality of the videogrammetric acquired data. The studies show that the techniques developed produce high accuracy data for various applications in movement research.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121135806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Low complexity RDO model for locally subjective quality enhancement in LAR coder","authors":"Yi Liu, O. Déforges, François Pasteau, Khouloud Samrouth","doi":"10.1109/ICSIPA.2013.6707999","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6707999","url":null,"abstract":"This paper introduces a rate distortion optimization (RDO) scheme with subjective quality enhancement applied to a still image codec called Locally Adaptive Resolution (LAR). This scheme depends on the study of the relation between compression efficiency and relative parameters, and has a low complexity. Linear models are proposed first to find suitable parameters for RDO. Next, these models are combined with an image segmentation method to improve the local image quality. This scheme not only keeps an effective control in balance between bitrate and distortion, but also improves the spatial structure of images. Experiments are done both in objective and subjective ways. Results show that after this optimization, LAR has an efficient improvement of subjective image quality of decoded images. This improvement is significantly visible and compared with other compression methods using objective and subjective quality metrics.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128663608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An adaptive threshold method for mass detection in mammographic images","authors":"M. Eltoukhy, I. Faye","doi":"10.1109/ICSIPA.2013.6708036","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6708036","url":null,"abstract":"An early detection of abnormalities is the key point to improve the prognostic of breast Cancer. Masses are among the most frequent abnormalities. Their detection is however a very tedious and time-consuming task. This paper presents an automatic scheme to perform both detection and segmentation of breast masses. Firstly, the breast region is determined and extracted from the whole mammogram image. Secondly, an adaptive algorithm is proposed to perform an accurate identification of the mass region. Finally, a false positive reduction method is applied through a feature extraction method and classification using the advantages of multiresolution representations (curvelet and wavelet). The classification step is achieved using SVM and KNN classifiers to distinguish between normal and abnormal tissues. The proposed method is tested on 118 images from mammographic images analysis society (MIAS) datasets. The experimental results demonstrate that the proposed scheme achieves 100% sensitivity with average of 1.87 False Positive (FP) detections per image.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124286610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Keynote speaker I: From pixels to medical imaging","authors":"A. Hani","doi":"10.1109/ICSIPA.2013.6707963","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6707963","url":null,"abstract":"Pixels represent picture elements and it is the smallest unit of picture information that is stored as bits to form a digital image. In general, more bits stored per pixel will result in clearer image due to higher greyscale or colour resolution in the image, and the more pixels used to represent an image, the closer the image resembles the original. In imaging science, the analysis, manipulation, storage, and display of pixel information from sources as photographs, drawings, and video refers to image processing. Output of the image processing is an image or a set of characteristics or parameters related to the image. Studies at the Centre for Signal & Imaging Research (CISIR) focus on these characteristics and parameters in the area of surface imaging, optical imaging and functional imaging for early detection and monitoring of various medical health problems. Subjective assessment is replaced with objective assessment based on engineering measurements with given accuracy and precision. For example, Diabetic Retinopathy monitoring and grading using measurements of 2D surface imaging, Psoriasis Area Severity Index (PASI) monitoring system using data points of 3D surface imaging techniques, Pigmentation disorder (PD) such as vitiligo and melasma assessment, and skin modelling using optical imaging. 
Outcomes from such translational research have already been tested at hospitals in Malaysia to help clinicians in diagnosis and monitoring several diseases that can lead to betterment the treatment process in early stages of such disease.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114927949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiview plus depth video using High Efficiency Video Coding method","authors":"Norul Uyuun Mohd Noor, Hezerul Abdul Karim, N. Arif, A. Sali","doi":"10.1109/ICSIPA.2013.6708005","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6708005","url":null,"abstract":"The main problem associated with 3D video delivery is huge data transmission rate especially when the data to be submitted are large video files such as multiview plus depth videos. This paper describes a new method of compression for multiview videos with depth data by using the new High Efficiency Video Coding (HEVC) technology. We propose a new compression method by applying the Reduced Resolution Depth Coding (RRDC) method to the depth videos. RRDC is applied by Down-Sampling and Up-Sampling (DSUS) the depth data of the multiview videos. The depth data is down-sampled before HEVC encoding and up-sampled after HEVC decoding operation. The proposed depth compression method used with HEVC showed a 20% savings at low bit rate when tested with 2 views plus depth video sequence.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115017936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}