Keyframe Selection from Colonoscopy Videos to Enhance Visualization for Polyp Detection
Vanshali Sharma, Pradipta Sasmal, M. Bhuyan, P. Das
2022 26th International Conference Information Visualisation (IV), July 2022. DOI: 10.1109/IV56949.2022.00076
Abstract: Colonoscopy video acquisition and recording are increasingly performed for comprehensive diagnosis and retrospective analysis of colorectal cancer (CRC). Reviewing video streams helps detect and inspect polyps, the precursors to CRC. However, visualizing these streams in their raw form places a considerable burden on clinicians, as most of the frames are clinically insignificant and not useful for pathological interpretation. For improved visualization of diagnostically significant information, we propose an automated framework that discards uninformative frames from raw videos. Our approach first extracts high-quality colonoscopy frames using a deep learning model to assist clinicians in visualizing the data in a refined form. Subsequently, our work validates the effectiveness of keyframe selection by employing polyp detection models. All evaluations are performed either patient-wise or cross-dataset to satisfy real-time requirements. Experimental results show that keyframe extraction saves reviewing time and enhances detection performance. The proposed approach achieves a polyp detection F1-score of 79.78% (patient-wise) and 89.22% (cross-dataset) on the SUN and CVC-VideoClinicDB databases, respectively.

Biomechanical Modeling and Pre-Operative Projection of A Human Organ using an Augmented Reality Technique During Open Hepatic Surgery
Aicha Ben Makhlouf, Anass Ayed, Nessrine Elloumi, B. Louhichi, M. Jaidane, J. Tavares
2022 26th International Conference Information Visualisation (IV), July 2022. DOI: 10.1109/IV56949.2022.00079
Abstract: Augmented Reality (AR) technology offers innovative ways to visualize and manipulate a 3D model of an object by superimposing computer-generated images onto another object interactively. The ability to interact with digital and spatial information in real time offers new opportunities to manipulate and process medical data easily and efficiently. During surgical interventions, surgeons face various challenges in dealing with digital patient data. Several methods, such as fluoroscopy and ultrasound, are used to visualize the operative areas, but these techniques have several limitations. Thus, augmented reality could serve as a better alternative, projecting a three-dimensional model of the target organ into the surgeon's perspective and field of view to improve the accuracy and efficiency of the medical intervention intraoperatively. In this paper, a new AR method is proposed to visualize and simulate the biomechanical model of the liver during open hepatic surgery. The 3D model is first reconstructed from the patient's preoperative CT scans. The reconstructed model is then projected using the AR headset. After that, the biomechanical model is generated and prepared for the simulation. The proposed approach is validated using acquired CT scans of the human organ.

{"title":"Predicting individual sentiment for emotion-evoking pictures using metrics of oculo-motors","authors":"M. Nakayama","doi":"10.1109/iv56949.2022.00030","DOIUrl":"https://doi.org/10.1109/iv56949.2022.00030","url":null,"abstract":"Relationships between features of oculo-motors and perceptual impressions of Valence and Arousal are analysed using viewer's reactions to 67 emotion-evoking photographs. Individual rating scores are compensated for using item response theory, and chronological changes of oculo-motor indices are analysed in response to two-dimensional ratings. These reactions are summarised as regression models, and predicted emotional categories based on oculo-motor reactions are evaluated. Prediction performance is also evaluated using mean similarities for the predicted categories of emotion. While performance improved when these features were added, individual reactions to features should be included in order to improve prediction performance for each participant. Also, temporal features of oculo-motors for Valence and Arousal are selected independently of their contribution to prediction.","PeriodicalId":153161,"journal":{"name":"2022 26th International Conference Information Visualisation (IV)","volume":"224 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132944219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualizing Disks and Labels with Good Visibility and Correspondence","authors":"S. Poon, Jiachen Yu","doi":"10.1109/IV56949.2022.00014","DOIUrl":"https://doi.org/10.1109/IV56949.2022.00014","url":null,"abstract":"For researchers in the fields of cartography and graph drawing, the placement problem of text labels for the corresponding graphical objects is a challenging issue they may often encounter. This work focuses on studying the problem of placing rectangular text labels onto a set of fixed disks possibly with overlapping such that users may easily recognize the disks and labels, and the correspondence relationship between the disks and the corresponding labels by just visualising the layout of the disks and the labels. In this paper, we propose an innovative method based on force-directed mechanism to place labels at the appropriate locations with respect to the given positions of the disks. The method starts by placing the given labels at the centers of their corresponding disks respectively, and proceeds with pushing around the labels via applying specific attractive and repulsive forces between the disks and labels so that the overlapping situation between the labels are greatly improved or even completely eliminated, and the correspondence relationship between the disks and the corresponding labels become more easily recognizable. User survey results show that our method is effective in producing outputs with both good visibility rates of disks and labels and good correspondence rate between disks and the corresponding labels. The applications of such a label placement problem studied include labeling a set of circular regions appeared in the real world, such as labeling a set of signal coverage zones of the base stations in the cellular telephone network of some telecommunication company in an urban city.","PeriodicalId":153161,"journal":{"name":"2022 26th International Conference Information Visualisation (IV)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122257310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identification of Morphological Patterns for the Detection of Premature Ventricular Contractions
Fabiola De Marco, Luigi Di Biasi, Alessia Auriemma Citarella, M. Tucci, G. Tortora
2022 26th International Conference Information Visualisation (IV), July 2022. DOI: 10.1109/IV56949.2022.00071
Abstract: Premature ventricular contractions (PVCs) are abnormal heartbeats that begin in the lower ventricles, or pumping chambers, and disrupt the normal heart rhythm. The electrocardiogram (ECG) is the most commonly used tool for detecting abnormalities in the heart's electrical activity. PVCs are very frequent and usually harmless, but they can be extremely harmful in patients with significant heart problems. As a result, appropriate prevention combined with adequate treatment can improve patients' lives. This paper presents preliminary results on the main challenge associated with detecting PVCs: identifying common patterns. The images used were extracted from the MIT-BIH Arrhythmia Database and pre-processed to remove signal noise before creating a distance matrix based on the wave distances of each pair of analyzed images. Finally, we clustered the distances into four groups using clustering algorithms such as K-means. We used a graph-based structure to graphically represent and explore the cluster elements. Preliminary results suggest the presence of four distinct patterns.

{"title":"Data. Information and Knowledge Visualization for Frequent Patterns","authors":"Calvin S. H. Hoi, C. Leung, Adam G. M. Pazdor","doi":"10.1109/IV56949.2022.00045","DOIUrl":"https://doi.org/10.1109/IV56949.2022.00045","url":null,"abstract":"In the current fast information-technological world, data are kept growing bigger. Big data refer to the data flow of huge volume, high velocity, wide variety, and different levels of veracity. Embedded in these big data are implicit, previously unknown, but valuable information and knowledge. With huge volumes of information and knowledge that can be discovered by techniques like data mining, a challenge is to validate and visualize the data mining results. To validate data for better data aggregation in estimation and prediction and for establishing trustworthy artificial intelligence, the synergy of visualization models and data mining strategies are needed. Hence, in this paper, we present a solution for data, information and knowledge visualization for frequently occurring patterns. Our solution transforms textual frequent patterns into their equivalent but more comprehendible graphical representations with important information: frequency distribution. The solution reveals interesting information and valuable knowledge mined from the transactional databases in various applications and services. Evaluation with real-life data demonstrates the effectiveness and practicality of our solution in visualizing data and information of the discovered frequent patterns.","PeriodicalId":153161,"journal":{"name":"2022 26th International Conference Information Visualisation (IV)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114278203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VRGrid: Efficient Transformation of 2D Data into Pixel Grid Layout","authors":"Adrien Halnaut, R. Giot, Romain Bourqui, D. Auber","doi":"10.1109/IV56949.2022.00012","DOIUrl":"https://doi.org/10.1109/IV56949.2022.00012","url":null,"abstract":"Projecting a set of $n$ points on a grid of size $sqrt{n}timessqrt{n}$ provides the best possible information density in two dimensions without overlap. We leverage the Voronoi Relaxation method to devise a novel and versatile post-processing algorithm called VRGrid: it enables the arrangement of any 2D data on a grid while preserving its initial positions. We apply VRGrid to generate compact and overlap-free visualization of popular and overlap-prone projection methods (e.g., t-SNE). We prove that our method complexity is $O(sqrt{n}.i.n.log(n))$, with i a determined maximum number of iterations and $n$ the input dataset size. It is thus usable for visualization of several thousands of points. We evaluate VRGrid's efficiency with several metrics: distance preservation (DP), neighborhood preservation (NP), pairwise relative positioning preservation (RPP) and global positioning preservation (GPP). We benchmark VRGrid against two state-of-the-art methods: Self-Sorting Maps (SSM) and Distance-preserving Grid (DGrid). VRGrid outperforms these two methods, given enough iterations, on DP, RPP and GPP which we identify to be the key metrics to preserve the positions of the original set of points.","PeriodicalId":153161,"journal":{"name":"2022 26th International Conference Information Visualisation (IV)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114278596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Glyph-based visualization of health trajectories
H. Siirtola, Javier Gracia-Tabuenca, R. Raisamo, Marianna Niemi, M. Reeve, Tarja Laitinen
2022 26th International Conference Information Visualisation (IV), July 2022. DOI: 10.1109/IV56949.2022.00075
Abstract: Whenever a diagnosis is given, a procedure is performed, or a drug is prescribed, an entry is made in an electronic health record (EHR) system. Previously, these data were difficult to utilize because of confidentiality rules, but new security approaches and pseudonymization now enable us to work with them. Health-related data are voluminous and complex, and abstracting a meaningful overview can be difficult. One of the complexities is their longitudinality. Medical research is often cross-sectional: a single point in time is taken for analysis, when it might be more informative to see the trajectory that led to that point. We are currently developing a trajectory visualization tool for longitudinal electronic health data. It is a web-based tool that interfaces with the OHDSI data infrastructure and visualizes the cohorts and concept sets (groups of medical codes) defined via the OHDSI Atlas GUI. The tool is currently in user testing and will be deployed to a wider user group during the spring. User feedback has been positive; users find the tool especially useful for understanding and debugging their OHDSI Atlas cohort definitions.

{"title":"Applying Data-driven Visualization with Seven-Step Process for Academic Research","authors":"Chia-Chi Shih, T. Chang, Y. Fang, Shih-Ting Tsai","doi":"10.1109/IV56949.2022.00037","DOIUrl":"https://doi.org/10.1109/IV56949.2022.00037","url":null,"abstract":"Recently, the related research on data visualization design has been increasing, and it can be seen that the visualization of the WoCAD (Web of CAADRIA) network is becoming more and more important. Based on the works published at the CAADRIA conference from 2006 to 2015, this study collects data on authors and papers from all the works published in the past ten years then explores the seven-steps strategy model through data visualization experiments and implementations. Expect to see whether there are hidden messages, previously inconspicuous messages, relationships, and contexts among the three data items among the data and data of authors, papers, and keywords. The research was found that in steps 4~6, it will continue to iterate, and various nodes make their connections dense, to determine that the more the three nodes are connected, the closer to the center of the circle, and the second most related nodes are connected to the surrounding area.","PeriodicalId":153161,"journal":{"name":"2022 26th International Conference Information Visualisation (IV)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123588269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualization of the Relationship between Void and Eye Movement Scan Paths in Shan Shui Paintings","authors":"Kuan-Chen Chen, Chang-Franw Lee, T. Chang","doi":"10.1109/IV56949.2022.00040","DOIUrl":"https://doi.org/10.1109/IV56949.2022.00040","url":null,"abstract":"In this article, a study is conducted to explore the role of void in viewing. Void is a feature of Chinese art expression, but it is often ignored or lacks some empirical evidence to prove that void is also a specific element. This study proposes a method to analyze and visualize eye movement data, which can help us find useful and valuable knowledge from much eye movement information. The study results found that eye movement will cause many paths and repeated saccades in the void of the picture. Such findings are different from past heat maps of visualizing eye-tracking information. This allows us to find different interaction patterns between attention and void, confirming that the void is no longer a cutscene but a focal.","PeriodicalId":153161,"journal":{"name":"2022 26th International Conference Information Visualisation (IV)","volume":"169 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134462335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}