{"title":"A Review for the Genetic Algorithm and the Red Deer Algorithm Applications","authors":"Raed Abu Zitar","doi":"10.1109/CISP-BMEI53629.2021.9624319","DOIUrl":"https://doi.org/10.1109/CISP-BMEI53629.2021.9624319","url":null,"abstract":"The Red Deer algorithm (RD), a contemporary population-based meta heuristic algorithm, applications are thoroughly examined in this paper. The RD algorithm blends evolutionary algorithms' survival of the fittest premise with the productivity and richness of heuristic search approaches. On the other a well-known and relatively older evolutionary based algorithm called the Genetic Algorithm applications are also shown. The contemporary algorithm; the RDA, and the older algorithm; the GA have wide applications in computer science and engineering. This paper sheds the light on all those applications and enable researchers to exploit the possibilities of adapting them in any applications they may have either in engineering, computer science, or business.","PeriodicalId":131256,"journal":{"name":"2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129390531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Experiment Investigation and FE Simulation Analysis on Elastic Traction Method Applied in the Pelvic Reduction","authors":"Jixuan Liu, Ke Xu, Chunpeng Zhao, Gang Zhu, Yu Wang, Xinbao Wu, Xu Sun, Wei Tian","doi":"10.1109/CISP-BMEI53629.2021.9624437","DOIUrl":"https://doi.org/10.1109/CISP-BMEI53629.2021.9624437","url":null,"abstract":"Pelvic fracture is the most complicated fracture in traumatic orthopedics. Tremendous muscle resistance is the main difficulty in carrying out pelvic reduction for the physician or robot-assisted reduction surgery in the future. During pelvic fracture surgery, the traction method is commonly utilized to reduce the reduction force against the sizeable muscle resistance. We proposed an elastic traction method to provide flexibility in the pelvic reduction and reduce the force needed to reset the pelvic. According to the experimental results in this paper, when adopting the elastic traction method in surgery, the reduction force can be reduced up to 56.2%. A musculoskeletal model of pelvic fracture reduction based on spring constraints was also established and verified by the experimental data. Through the simulation model, we investigated the influences of spring stiffness on the performance of elastic traction. The simulation results show that the minor spring stiffness is, the smaller the reduction force would be. In addition, we compared the maximum stress and reduction force applied on k-wire during a single K-wire pathway holding to complete reduction. We consider that the Kirschner pin 1 pathway is the optimal pathway in pelvic reduction.","PeriodicalId":131256,"journal":{"name":"2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133791735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Segmentation and Registration of Ultrasound Images of Uterine Fibroids for USgHIFU","authors":"Xin Luo, Qianwen Huang, Xiang Ji, Jingfeng Bai","doi":"10.1109/CISP-BMEI53629.2021.9624342","DOIUrl":"https://doi.org/10.1109/CISP-BMEI53629.2021.9624342","url":null,"abstract":"Ultrasound-guided high-intensity focused ultrasound(USgHIFU) is a minimally invasive ablation treatment method for uterine fibroids. It completes the image guidance of the HIFU ablation operation by acquiring ultrasound images of the patient in real time. When HIFU ablates the nourishing arteries of fibroids, it causes arterial vasoconstriction and block blood vessels from delivering nutrients to fibroids and induce shrinkage of uterine fibroids. Since the guidance ultrasound imaging integrated on the USgHIFU treatment head is deeper and will be affected by the flowing water in the water bladder, it is difficult to use the ultrasound probe integrated on the treatment head to collect Doppler color flow imaging. External handheld ultrasound acquisition color Doppler flow imaging(CDFI) is needed to assist the nourishing artery ablation operation. This process requires sonographers to manually identify blood vessels. This study proposes a method to automaticly segment and register USgHIFU guidance ultrasound images and handheld ultrasound images. Firstly, use ReFineNet to segment complete fibroids contours in handheld ultrasound images and manually label upper boundaries of fibroids in guidance ultrasound. Then, use iterative nearest point(ICP) and shape context to register two image. In this study, a clinical ultrasound dataset was established to verify the method. Dice of segmentation can reach 0.879, mean distance error(MDE) of registration is less than 1mm.","PeriodicalId":131256,"journal":{"name":"2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132915022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Skimming and Scanning for Efficient Action Recognition in Untrimmed Videos","authors":"Yunyan Hong, Ailing Zeng, Min Li, Cewu Lu, Li Jiang, Qiang Xu","doi":"10.1109/CISP-BMEI53629.2021.9624415","DOIUrl":"https://doi.org/10.1109/CISP-BMEI53629.2021.9624415","url":null,"abstract":"Video action recognition (VAR) aims to classify videos into a predefined set of classes, which is a primary task of video understanding. We mainly focus on the VAR of untrimmed videos because they are most common videos in real-life scenes. Untrimmed videos have redundant and diverse clips containing contextual information, so sampling the clips is essential. Recently, some works attempt to train a generic model to select the $N$ most representative clips. However, it is difficult to model the complex relations from intra-class clips and inter-class videos within a single model and fixed selected number, and the entanglement of multiple relations is also hard to explain. Thus, instead of “only look once”, we argue “divide and conquer” strategy will be more suitable in untrimmed VAR. Inspired by the speed reading mechanism, we propose a simple yet effective clip-level solution based on skim-scan techniques. Specifically, the proposed Skim-Scan framework first skims the entire video and drops those uninformative and misleading clips. For the remaining clips, it scans clips with diverse features gradually to drop redundant clips but cover essential content. The above strategies can adaptively select the necessary clips according to the difficulty of the different videos. In order to further cut computational overhead, we observe the similar statistical expression between lightweight and heavy networks. Thus, we explore the combination of them to trade off the computational complexity and performance. Comprehensive experiments are performed on ActivityNet and mini-FCVID datasets, and results demonstrate that our solution surpasses the state-of-the-art performance in terms of accuracy and efficiency.","PeriodicalId":131256,"journal":{"name":"2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126322635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of Red Deer Algorithm in Optimizing Complex functions","authors":"R. A. Zitar, L. Abualigah","doi":"10.1109/CISP-BMEI53629.2021.9624345","DOIUrl":"https://doi.org/10.1109/CISP-BMEI53629.2021.9624345","url":null,"abstract":"The Red Deer algorithm (RDA), a recently developed population-based meta-heuristic algorithm, is examined in this paper with the optimization task of complex functions. The RD algorithm blends evolutionary algorithms' survival of the fittest concept with heuristic search techniques' productivity and richness. It is critical to assess this algorithm's performance in comparison with other well-known heuristic methods. The findings are presented along with additional recommendations for increasing RDA performance based on the analysis. The readers of this paper will gain a grasp of the RD algorithm and its optimization ability to determine whether this algorithm is appropriate for their particular business, research, or industrial needs.","PeriodicalId":131256,"journal":{"name":"2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128990000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sports fatigue detection based on deep learning","authors":"Xiaole Guan, Yanfei Lin, Qun Wang, Zhiwen Liu, Cheng-Shui Liu","doi":"10.1109/CISP-BMEI53629.2021.9624395","DOIUrl":"https://doi.org/10.1109/CISP-BMEI53629.2021.9624395","url":null,"abstract":"Moderate exercise is good for human health. However, when the exercise intensity exceeds a certain level, it will be harmful to the human body. Therefore, precise control and adjustment of exercise load can ensure athletes' sports safety and improve their competitive performance. In this work, we have developed wearable exercise fatigue detection technology to estimate the human body's exercise fatigue state using real-time monitoring of the ECG signal and Inertial sensor signal of the human body. 14 young healthy volunteers participated in the running experiment, wearing ECG acquisition equipment and inertial sensors. ECG, acceleration and angular velocity signals were collected to extract features. And then Bidirectional long and short-term memory neural network (Bi-LSTM) was used to classify three levels of sports fatigue. The results showed that the recognition accuracy of the user-independent model was 80.55%. The experimental results verified the effectiveness of the algorithm.","PeriodicalId":131256,"journal":{"name":"2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127601308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Metal Artifact Reduction by Using Dual-Energy Raw Data Constraint Learning","authors":"Fanning Kong, Ming Cheng, Ning Wang, Huaisheng Cao, Zaifeng Shi","doi":"10.1109/CISP-BMEI53629.2021.9624233","DOIUrl":"https://doi.org/10.1109/CISP-BMEI53629.2021.9624233","url":null,"abstract":"Computed tomography (CT) is of great significance in the field of medical diagnosis. However, metal artifacts in the reconstruction images are disadvantageous for doctors to make a fast and accurate diagnosis when high-density metals present in the scanned location. The spectral CT has excellent performance in metal artifact reduction (MAR) method, which can combinate the prior information can to realize the information complementarity. In this paper, a MAR method based on dual-energy raw data constrained learning is proposed in this paper. The raw projection data of high/low energy and the results of normalized metal artifact reduction (NMAR) are input to the dual-stream U-Net (DSU-Net) for getting the virtual monoenergetic image (VMI) to reduce the secondary artifacts. The experimental results show that the peak signal-to-noise ratio (PSNR) of the output image is up to 49.60, SSIM to 0.997. It is proved that the raw data constrained learning method can suppress the residual artifacts from the traditional information pretreatment method.","PeriodicalId":131256,"journal":{"name":"2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117230801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Facial Expression Recognition with Attention Mechanism","authors":"Caixia Wang, Zhihui Wang, Dong Cui","doi":"10.1109/CISP-BMEI53629.2021.9624355","DOIUrl":"https://doi.org/10.1109/CISP-BMEI53629.2021.9624355","url":null,"abstract":"With the development of artificial intelligence, facial expression recognition (FER) has greatly improved performance in deep learning, but there is still a lot of room for improvement in the study of combining attention to focus the network on key parts of the face. For facial expression recognition, this paper designs a network model, which use spatial transformer network to transform the input image firstly, and then adding channel attention and spatial attention to the convolutional network. In addition, in this paper, the GELU activation function is used in the convolutional network, which improves the recognition rate of facial expressions to a certain extent.","PeriodicalId":131256,"journal":{"name":"2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121679847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acoustic Coupling Bath Using Heavy Water For Transranial Magnetic Resonance-Guided Focused Ultrasound Surgery","authors":"Xiao Ma, G. Shen, Shenyan Zong, Jiatao Gu, Hao Wu, Shengfa Zhang, Bo Wei","doi":"10.1109/CISP-BMEI53629.2021.9624348","DOIUrl":"https://doi.org/10.1109/CISP-BMEI53629.2021.9624348","url":null,"abstract":"In transcranial magnetic resonance-guided focused ultrasound (tcMRgFUS) treatments, the degassed water flowing between the gap of the skull and the focused ultrasound (FUS) transducer was used for acoustic coupling and skull skin cooling. However, the circulating water may affect the magnetic resonance (MR) images, such as introducing uncertain artifacts or reducing signal-to-noise ratio (SNR). This study, tested the feasibility of using heavy water (D2O) as a coupling bath for tcMRgFUS. Phantom studies were conducted to examine the validity of heavy water on eliminating image artifacts and improving SNR. Meanwhile, the acoustic properties of D2O in coupling were measured. The results suggested that the acoustic attenuation of heavy water can be neglected and the acoustic velocity in heavy water is close to that of pure water. Thus, heavy water is a suitable material for tcMRgFUS coupling baths replacing the regular cooling and coupling water (H2O).","PeriodicalId":131256,"journal":{"name":"2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122513045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning-based context aggregation network for tumor diagnosis","authors":"Lin Zhu, Xinliang Qu, Shoushui Wei","doi":"10.1109/CISP-BMEI53629.2021.9624424","DOIUrl":"https://doi.org/10.1109/CISP-BMEI53629.2021.9624424","url":null,"abstract":"Craniopharyngioma is a type of benign brain tumor but has severe biological malignant behavior. Whether the craniopharyngioma invades the surrounding brain tissue has important influence on making treatment plan and the prognosis of patients, so the accurate diagnosis of craniopharyngioma is a crucial step in the treatment processing. It is important to explore some methods for judging the invasiveness of craniopharyngioma preoperatively. Therefore, we proposed a context aggregation network (CA-2D Network) based on deep learning algorithm, which can diagnose the invasiveness of craniopharyngioma by judging the characteristics of head MRI images. The proposed CA-2D Network utilizes ResNet as the backbone, and has a context modeling block and feature aggregating head to correlate features from different slices, capture context information, and aggregate features for classification. The features extracted by the CA-2D Network yield area under the curve (AUC) values of 82.59% for the test set. As demonstrated in the results, the proposed CA-2D Network is promising.","PeriodicalId":131256,"journal":{"name":"2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123381254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}