{"title":"Integrating TAM and IS success model: exploring the role of blockchain and AI in predicting learner engagement and performance in e-learning","authors":"Damien Tyron Naidoo","doi":"10.3389/fcomp.2023.1227749","DOIUrl":"https://doi.org/10.3389/fcomp.2023.1227749","url":null,"abstract":"This study innovatively intertwines technology adoption and e-learning by integrating blockchain and AI, offering a novel perspective on how cutting-edge technologies revolutionize learning processes. The present study investigates the factors that influence the behavioral use of learners to use blockchain and artificial intelligence (AI) in e-learning. The study proposes the integrated model of Technology Acceptance Model (TAM) and Information System (IS) success Model that include perceived usefulness, perceived ease of use, system quality, information quality, and service quality as antecedents to behavioral use of blockchain and AI in e-learning. The model also examines the moderating effect of learner self-efficacy on the relationship between behavioral use and e-learning engagement and performance. The study collected data from 322 respondents and analyzed the data using partial least squares structural equation modeling (PLS-SEM) with a bootstrapping technique. The results show that the factors of TAM model and IS model have the significant and positive effects on behavior to use blockchain and AI in e-learning. Additionally, learner self-efficacy has a significant positive effect on e-learning engagement and performance, but it does not moderate the relationship between behavior to use blockchain or AI and e-learning engagement and performance. 
Overall, the study provides insights into the factors that influence the adoption of blockchain and AI in e-learning and offers practical implications for educators and policymakers.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135207950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shareability: novel perspective on human-media interaction","authors":"Nicola Bruno, Giorgia Guerra, Brigitta Pia Alioto, Alessandra Cecilia Jacomuzzi","doi":"10.3389/fcomp.2023.1106322","DOIUrl":"https://doi.org/10.3389/fcomp.2023.1106322","url":null,"abstract":"Interpersonal communication in the twenty-first century is increasingly taking place within digital media. This poses the problem of understanding the factors that may facilitate or hinder communication processes in virtual contexts. Digital media require a human-machine interface, and the analysis of human-machine interfaces traditionally focuses on the dimension of usability. However, interface usability pertains to the interaction of users with digital devices, not to the interaction of users with other users. Here we argue that there is another dimension of human-media interaction that has remained largely unexplored, but plays a key role in interpersonal communication within digital media: shareability. We define shareability as the resultant of a set of interface features that: (i) make sharing of materials with fellow users easy, efficient, and timely (sharing-related usability); (ii) include features that intuitively invite users to share materials (sharing-related affordances); and (iii) provide a sensorimotor environment that includes perceptual information about both presented materials and the behavior of other users that are experiencing these materials through the medium at hand (support to shared availability). Capitalizing on concepts from semiotics, proxemics, and perceptual and cognItive neuroscience, we explore potential criteria to asses shareability in human-machine interfaces. 
Finally, we show how these notions may be applied in the analysis of three prototypical cases: online gaming, visual communication on social media, and online distance teaching.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135205936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tutorial: calibration refinement in quantum annealing","authors":"Kevin Chern, Kelly Boothby, Jack Raymond, Pau Farré, Andrew D. King","doi":"10.3389/fcomp.2023.1238988","DOIUrl":"https://doi.org/10.3389/fcomp.2023.1238988","url":null,"abstract":"Quantum annealing has emerged as a powerful platform for simulating and optimizing classical and quantum Ising models. Quantum annealers, like other quantum and/or analog computing devices, are susceptible to non-idealities including crosstalk, device variation, and environmental noise. Compensating for these effects through calibration refinement or “shimming” can significantly improve performance but often relies on ad-hoc methods that exploit symmetries in both the problem being solved and the quantum annealer itself. In this tutorial, we attempt to demystify these methods. We introduce methods for finding exploitable symmetries in Ising models and discuss how to use these symmetries to suppress unwanted bias. We work through several examples of increasing complexity and provide complete Python code. We include automated methods for two important tasks: finding copies of small subgraphs in the qubit connectivity graph and automatically finding symmetries of an Ising model via generalized graph automorphism. We conclude the tutorial by surveying additional methods, providing practical implementation tips, and discussing limitations and remedies of the calibration procedure. 
Code is available at: https://github.com/dwavesystems/shimming-tutorial.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135437680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
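The symmetry-finding step described in the record above can be illustrated with a toy example. This is not the tutorial's code (that lives in the linked repository); it is a brute-force stdlib-Python sketch of checking which spin relabelings of a tiny Ising model (a 4-cycle with uniform couplings, chosen purely for illustration) preserve the coupling structure, i.e. are graph automorphisms:

```python
from itertools import permutations

# Toy Ising model: a 4-cycle with uniform couplings and no fields.
# (Illustrative only; the tutorial's real code is in the linked repository.)
nodes = (0, 1, 2, 3)
J = {(0, 1): -1.0, (1, 2): -1.0, (2, 3): -1.0, (0, 3): -1.0}

def coupling(u, v):
    """Coupling between spins u and v, or None if they are not coupled."""
    return J.get((min(u, v), max(u, v)))

def is_automorphism(perm):
    """A relabeling is an automorphism if every coupled pair maps onto a
    pair with the same coupling value (non-edges then map to non-edges,
    since a bijection sending all edges to edges must hit each edge once)."""
    m = dict(zip(nodes, perm))
    return all(coupling(m[u], m[v]) == Juv for (u, v), Juv in J.items())

autos = [p for p in permutations(nodes) if is_automorphism(p)]
print(len(autos))  # prints 8: the dihedral symmetry group of the 4-cycle
```

Averaging calibration statistics over orbits of such automorphisms is the kind of symmetry exploitation the tutorial generalizes; for realistic problem sizes, brute force over permutations is replaced by proper graph-automorphism software.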
{"title":"Informing the ethical review of human subjects research utilizing artificial intelligence","authors":"Christos Andreas Makridis, Anthony Boese, Rafael Fricks, Don Workman, Molly Klote, Joshua Mueller, Isabel J. Hildebrandt, Michael Kim, Gil Alterovitz","doi":"10.3389/fcomp.2023.1235226","DOIUrl":"https://doi.org/10.3389/fcomp.2023.1235226","url":null,"abstract":"Introduction The rapid expansion of artificial intelligence (AI) has produced many opportunities, but also new risks that must be actively managed, particularly in the health care sector with clinical practice to avoid unintended health, economic, and social consequences. Methods Given that much of the research and development (R&D) involving human subjects is reviewed and rigorously monitored by institutional review boards (IRBs), we argue that supplemental questions added to the IRB process is an efficient risk mitigation technique available for immediate use. To facilitate this, we introduce AI supplemental questions that provide a feasible, low-disruption mechanism for IRBs to elicit information necessary to inform the review of AI proposals. These questions will also be relevant to review of research using AI that is exempt from the requirement of IRB review. We pilot the questions within the Department of Veterans Affairs–the nation's largest integrated healthcare system–and demonstrate its efficacy in risk mitigation through providing vital information in a way accessible to non-AI subject matter experts responsible for reviewing IRB proposals. We provide these questions for other organizations to adapt to fit their needs and are further developing these questions into an AI IRB module with an extended application, review checklist, informed consent, and other informational materials. Results We find that the supplemental AI IRB module further streamlines and expedites the review of IRB projects. 
We also find that the module has a positive effect on reviewers' attitudes and ease of assessing the potential alignment and risks associated with proposed projects. Discussion As projects increasingly contain an AI component, streamlining their review and assessment is important to avoid posing too large of a burden on IRBs in their review of submissions. In addition, establishing a minimum standard that submissions must adhere to will help ensure that all projects are at least aware of potential risks unique to AI and dialogue with their local IRBs over them. Further work is needed to apply these concepts to other non-IRB pathways, like quality improvement projects.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134970710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Specific Gestalt principles cannot explain (un)crowding","authors":"Oh-Hyeon Choung, Einat Rashal, Marina Kunchulia, Michael H. Herzog","doi":"10.3389/fcomp.2023.1154957","DOIUrl":"https://doi.org/10.3389/fcomp.2023.1154957","url":null,"abstract":"The standard physiological model has serious problems accounting for many aspects of vision, particularly when stimulus configurations become slightly more complex than the ones classically used, e.g., configurations of Gabors rather than only one or a few Gabors. For example, as shown in many publications, crowding cannot be explained with most models crafted in the spirit of the physiological approach. In crowding, a target is neighbored by flanking elements, which impair target discrimination. However, when more flankers are added, performance can improve for certain flanker configurations (uncrowding), which cannot be explained by classic models. As was shown, aspects of perceptual organization play a crucial role in uncrowding. For this reason, we tested here whether known principles of perceptual organization can explain crowding and uncrowding. The answer is negative. As shown with subjective tests, whereas grouping is indeed key in uncrowding, the four Gestalt principles examined here did not provide a clear explanation to this effect, as variability in performance was found between and within categories of configurations. 
We discuss the philosophical foundations of both the physiological and the classic Gestalt approaches and sketch a way to a happy marriage between the two.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135551753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The mid-level vision toolbox for computing structural properties of real-world images","authors":"Dirk B. Walther, Delaram Farzanfar, Seohee Han, Morteza Rezanejad","doi":"10.3389/fcomp.2023.1140723","DOIUrl":"https://doi.org/10.3389/fcomp.2023.1140723","url":null,"abstract":"Mid-level vision is the intermediate visual processing stage for generating representations of shapes and partial geometries of objects. Our mechanistic understanding of these operations is limited, in part, by a lack of computational tools for analyzing image properties at these levels of representation. We introduce the Mid-Level Vision (MLV) Toolbox, an open-source software that automatically processes low- and mid-level contour features and perceptual grouping cues from real-world images. The MLV toolbox takes vectorized line drawings of scenes as input and extracts structural contour properties. We also include tools for contour detection and tracing for the automatic generation of vectorized line drawings from photographs. Various statistical properties of the contours are computed: the distributions of orientations, contour curvature, and contour lengths, as well as counts and types of contour junctions. The toolbox includes an efficient algorithm for computing the medial axis transform of contour drawings and photographs. Based on the medial axis transform, we compute several scores for local mirror symmetry, local parallelism, and local contour separation. All properties are summarized in histograms that can serve as input into statistical models to relate image properties to human behavioral measures, such as esthetic pleasure, memorability, affective processing, and scene categorization. In addition to measuring contour properties, we include functions for manipulating drawings by separating contours according to their statistical properties, randomly shifting contours, or rotating drawings behind a circular aperture. 
Finally, the MLV Toolbox offers visualization functions for contour orientations, lengths, curvature, junctions, and medial axis properties on computer-generated and artist-generated line drawings. We include artist-generated vectorized drawings of the Toronto Scenes image set, the International Affective Picture System, and the Snodgrass and Vanderwart object images, as well as automatically traced vectorized drawings of a set of architectural scenes and the Open Affective Standardized Image Set (OASIS).","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134989789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
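Several of the contour statistics the record above describes are conceptually simple. As a hedged sketch (toy segment data of my own; the MLV Toolbox itself is a MATLAB package, so this stdlib-Python snippet only mirrors the idea), a length-weighted orientation histogram of vectorized line segments might look like:

```python
import math

# Hypothetical vectorized line drawing: each contour segment is a pair of
# (x, y) endpoints. (Toy data for illustration only.)
segments = [((0, 0), (4, 0)), ((0, 1), (4, 1)),   # horizontal
            ((0, 0), (0, 3)),                     # vertical
            ((0, 0), (2, 2))]                     # diagonal

def orientation_histogram(segs, n_bins=4):
    """Length-weighted histogram of segment orientations, folded into
    [0, 180) degrees since a contour has no direction."""
    hist = [0.0] * n_bins
    for (x1, y1), (x2, y2) in segs:
        length = math.hypot(x2 - x1, y2 - y1)
        theta = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
        hist[int(theta // (180.0 / n_bins)) % n_bins] += length
    return hist

print(orientation_histogram(segments))
```

Histograms like this one (and analogous ones for curvature, length, and junction types) are exactly the kind of summary the toolbox feeds into statistical models of behavioral measures.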
{"title":"Skeletons, Object Shape, Statistics.","authors":"Stephen M Pizer, J S Marron, James N Damon, Jared Vicory, Akash Krishna, Zhiyuan Liu, Mohsen Taheri","doi":"10.3389/fcomp.2022.842637","DOIUrl":"https://doi.org/10.3389/fcomp.2022.842637","url":null,"abstract":"<p><p>Objects and object complexes in 3D, as well as those in 2D, have many possible representations. Among them skeletal representations have special advantages and some limitations. For the special form of skeletal representation called \"s-reps,\" these advantages include strong suitability for representing slabular object populations and statistical applications on these populations. Accomplishing these statistical applications is best if one recognizes that s-reps live on a curved shape space. Here we will lay out the definition of s-reps, their advantages and limitations, their mathematical properties, methods for fitting s-reps to single- and multi-object boundaries, methods for measuring the statistics of these object and multi-object representations, and examples of such applications involving statistics. 
While the basic theory, ideas, and programs for the methods are described in this paper and while many applications with evaluations have been produced, there remain many interesting open opportunities for research on comparisons to other shape representations, new areas of application and further methodological developments, many of which are explicitly discussed here.</p>","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10488910/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10231002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A workflow for rapid unbiased quantification of fibrillar feature alignment in biological images.","authors":"Stefania Marcotti, Deandra Belo de Freitas, Lee D Troughton, Fiona N Kenny, Tanya J Shaw, Brian M Stramer, Patrick W Oakes","doi":"10.3389/fcomp.2021.745831","DOIUrl":"10.3389/fcomp.2021.745831","url":null,"abstract":"<p><p>Measuring the organisation of the cellular cytoskeleton and the surrounding extracellular matrix (ECM) is currently of wide interest as changes in both local and global alignment can highlight alterations in cellular functions and material properties of the extracellular environment. Different approaches have been developed to quantify these structures, typically based on fibre segmentation or on matrix representation and transformation of the image, each with its own advantages and disadvantages. Here we present <i>AFT-Alignment by Fourier Transform</i>, a workflow to quantify the alignment of fibrillar features in microscopy images exploiting 2D Fast Fourier Transforms (FFT). Using pre-existing datasets of cell and ECM images, we demonstrate our approach and compare and contrast this workflow with two other well-known ImageJ algorithms to quantify image feature alignment. These comparisons reveal that <i>AFT</i> has a number of advantages due to its grid-based FFT approach. 1) Flexibility in defining the window and neighbourhood sizes allows for performing a parameter search to determine an optimal length scale to carry out alignment metrics. This approach can thus easily accommodate different image resolutions and biological systems. 2) The length scale of decay in alignment can be extracted by comparing neighbourhood sizes, revealing the overall distance that features remain anisotropic. 3) The approach is ambivalent to the signal source, thus making it applicable for a wide range of imaging modalities and is dependent on fewer input parameters than segmentation methods. 
4) Finally, compared to segmentation methods, this algorithm is computationally inexpensive, as high-resolution images can be evaluated in less than a second on a standard desktop computer. This makes it feasible to screen numerous experimental perturbations or examine large images over long length scales. Implementation is made available in both MATLAB and Python for wider accessibility, with example datasets for single images and batch processing. Additionally, we include an approach to automatically search parameters for optimum window and neighbourhood sizes, as well as to measure the decay in alignment over progressively increasing length scales.</p>","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8654057/pdf/nihms-1753534.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39710314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
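The per-window Fourier scoring that the record above describes can be sketched in miniature. The snippet below is a toy stdlib-Python reimplementation of the idea only (a naive O(N^4) DFT, a synthetic striped image, and a circular-variance-style alignment score introduced here for illustration), not the published AFT code:

```python
import cmath
import math

# Synthetic 16x16 window containing horizontal stripes (a strongly aligned
# "fibre" pattern). Toy data; the published AFT implementations use real
# microscopy images and fast NumPy/MATLAB FFTs rather than this naive DFT.
N = 16
img = [[math.sin(2 * math.pi * 3 * y / N) for _ in range(N)] for y in range(N)]

def power_spectrum(im):
    """Naive O(N^4) 2D discrete Fourier transform, squared magnitude."""
    n = len(im)
    return [[abs(sum(im[y][x] * cmath.exp(-2j * math.pi * (u * y + v * x) / n)
                     for y in range(n) for x in range(n))) ** 2
             for v in range(n)] for u in range(n)]

def alignment_score(im):
    """Anisotropy score in [0, 1] from the orientation distribution of
    spectral power; 1 = all power at a single orientation."""
    n = len(im)
    P = power_spectrum(im)
    num, den = 0 + 0j, 0.0
    for u in range(n):
        for v in range(n):
            fu = u - n if u > n // 2 else u   # centre the frequency axes
            fv = v - n if v > n // 2 else v
            if fu == 0 and fv == 0:
                continue                      # skip the DC component
            # Double the angle so orientations theta and theta + pi agree.
            num += P[u][v] * cmath.exp(2j * math.atan2(fu, fv))
            den += P[u][v]
    return abs(num) / den if den else 0.0

score = alignment_score(img)
print(round(score, 3))  # close to 1.0 for this perfectly striped window
```

Evaluating such a score on a grid of windows, and comparing windows to their neighbourhoods, is the grid-based strategy that gives AFT its flexibility in window and neighbourhood sizes.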
{"title":"Exploring Deep Transfer Learning Techniques for Alzheimer's Dementia Detection.","authors":"Youxiang Zhu, Xiaohui Liang, John A Batsis, Robert M Roth","doi":"10.3389/fcomp.2021.624683","DOIUrl":"10.3389/fcomp.2021.624683","url":null,"abstract":"<p><p>Examination of speech datasets for detecting dementia, collected via various speech tasks, has revealed links between speech and cognitive abilities. However, the speech dataset available for this research is extremely limited because the collection process of speech and baseline data from patients with dementia in clinical settings is expensive. In this paper, we study the spontaneous speech dataset from a recent ADReSS challenge, a Cookie Theft Picture (CTP) dataset with balanced groups of participants in age, gender, and cognitive status. We explore state-of-the-art deep transfer learning techniques from image, audio, speech, and language domains. We envision that one advantage of transfer learning is to eliminate the design of handcrafted features based on the tasks and datasets. Transfer learning further mitigates the limited dementia-relevant speech data problem by inheriting knowledge from similar but much larger datasets. Specifically, we built a variety of transfer learning models using commonly employed MobileNet (image), YAMNet (audio), Mockingjay (speech), and BERT (text) models. Results indicated that the transfer learning models of text data showed significantly better performance than those of audio data. Performance gains of the text models may be due to the high similarity between the pre-training text dataset and the CTP text dataset. Our multi-modal transfer learning introduced a slight improvement in accuracy, demonstrating that audio and text data provide limited complementary information. Multi-task transfer learning resulted in limited improvements in classification and a negative impact in regression. 
By analyzing the meaning behind the AD/non-AD labels and Mini-Mental State Examination (MMSE) scores, we observed that the inconsistency between labels and scores could limit the performance of the multi-task learning, especially when the outputs of the single-task models are highly consistent with the corresponding labels/scores. In sum, we conducted a large comparative analysis of varying transfer learning models focusing less on model customization but more on pre-trained models and pre-training datasets. We revealed insightful relations among models, data types, and data labels in this research area.</p>","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8153512/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39027802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integration of the ImageJ Ecosystem in the KNIME Analytics Platform.","authors":"Christian Dietz, Curtis T Rueden, Stefan Helfrich, Ellen T A Dobson, Martin Horn, Jan Eglinger, Edward L Evans, Dalton T McLean, Tatiana Novitskaya, William A Ricke, Nathan M Sherer, Andries Zijlstra, Michael R Berthold, Kevin W Eliceiri","doi":"10.3389/fcomp.2020.00008","DOIUrl":"https://doi.org/10.3389/fcomp.2020.00008","url":null,"abstract":"<p><p>Open-source software tools are often used for analysis of scientific image data due to their flexibility and transparency in dealing with rapidly evolving imaging technologies. The complex nature of image analysis problems frequently requires many tools to be used in conjunction, including image processing and analysis, data processing, machine learning and deep learning, statistical analysis of the results, visualization, correlation to heterogeneous but related data, and more. However, the development, and therefore application, of these computational tools is impeded by a lack of integration across platforms. Integration of tools goes beyond convenience, as it is impractical for one tool to anticipate and accommodate the current and future needs of every user. This problem is emphasized in the field of bioimage analysis, where various rapidly emerging methods are quickly being adopted by researchers. ImageJ is a popular open-source image analysis platform, with contributions from a global community resulting in hundreds of specialized routines for a wide array of scientific tasks. ImageJ's strength lies in its accessibility and extensibility, allowing researchers to easily improve the software to solve their image analysis tasks. However, ImageJ is not designed for development of complex end-to-end image analysis workflows. Scientists are often forced to create highly specialized and hard-to-reproduce scripts to orchestrate individual software fragments and cover the entire life-cycle of an analysis of an image dataset. 
KNIME Analytics Platform, a user-friendly data integration, analysis, and exploration workflow system, was designed to handle huge amounts of heterogeneous data in a platform-agnostic computing environment and has been successful in meeting complex end-to-end demands in several communities, such as cheminformatics and mass spectrometry. Similar needs within the bioimage analysis community led to the creation of the KNIME Image Processing extension which integrates ImageJ into KNIME Analytics Platform, enabling researchers to develop reproducible and scalable workflows, integrating a diverse range of analysis tools. Here we present how users and developers alike can leverage the ImageJ ecosystem via the KNIME Image Processing extension to provide robust and extensible image analysis within KNIME workflows. We illustrate the benefits of this integration with examples, as well as representative scientific use cases.</p>","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3389/fcomp.2020.00008","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38359736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}