{"title":"Deep Gated Multi-modal Fusion for Image Privacy Prediction","authors":"Chenye Zhao, Cornelia Caragea","doi":"https://dl.acm.org/doi/10.1145/3608446","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3608446","url":null,"abstract":"<p>With the rapid development of technologies in mobile devices, people can post their daily lives on social networking sites such as Facebook, Flickr, and Instagram. This leads to new privacy concerns due to people’s lack of understanding that private information can be leaked and used to their detriment. Image privacy prediction models are developed to predict whether images contain sensitive information (private images) or are safe to be shared online (public images). Despite significant progress on this task, there are still some crucial problems that remain to be solved. Firstly, images’ content and tags are found to be useful modalities to automatically predict images’ privacy. To date, most image privacy prediction models use single modalities (image-only or tag-only), which limits their performance. Secondly, we observe that current image privacy prediction models are surprisingly vulnerable to even small perturbations in the input data. Attackers can add small perturbations to input data and easily damage a well-trained image privacy prediction model. To address these challenges, in this paper, we propose a new decision-level Gated multi-modal fusion (GMMF) approach that fuses object, scene, and image tags modalities to predict privacy for online images. In particular, the proposed approach identifies fusion weights of class probability distributions generated by single-modal classifiers according to their reliability of the privacy prediction for each target image in a sample-by-sample manner and performs a weighted decision-level fusion, so that modalities with high reliability are assigned with higher fusion weights while ones with low reliability are restrained with lower fusion weights. The results of our experiments show that the gated multi-modal fusion network effectively fuses single modalities and outperforms state-of-the-art models for image privacy prediction. Moreover, we perform adversarial training on our proposed GMMF model using multiple types of noise on input data (i.e., images and/or tags). When some modalities are failed by input data with noise attacks, our approach effectively utilizes clean modalities and minimizes negative influences brought by degraded ones using fusion weights, achieving significantly stronger robustness over traditional fusion methods for image privacy prediction. The robustness of our GMMF model against data noise can even be generalized to more severe noise levels. To the best of our knowledge, we are the first to investigate the robustness of image privacy prediction models against noise attacks. 
Moreover, as the performance of decision-level multi-modal fusion depends highly on the quality of single-modal networks, we investigate self-distillation on single-modal privacy classifiers and observe that transferring knowledge from a trained teacher model to a student model is beneficial in our proposed approach.</p>","PeriodicalId":50940,"journal":{"name":"ACM Transactions on the Web","volume":"42 36","pages":""},"PeriodicalIF":3.5,"publicationDate":"2023-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138495120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Gated Multi-modal Fusion for Image Privacy Prediction","authors":"Chenye Zhao, Cornelia Caragea","doi":"10.1145/3608446","DOIUrl":"https://doi.org/10.1145/3608446","url":null,"abstract":"With the rapid development of technologies in mobile devices, people can post their daily lives on social networking sites such as Facebook, Flickr, and Instagram. This leads to new privacy concerns due to people’s lack of understanding that private information can be leaked and used to their detriment. Image privacy prediction models are developed to predict whether images contain sensitive information (private images) or are safe to be shared online (public images). Despite significant progress on this task, there are still some crucial problems that remain to be solved. Firstly, images’ content and tags are found to be useful modalities to automatically predict images’ privacy. To date, most image privacy prediction models use single modalities (image-only or tag-only), which limits their performance. Secondly, we observe that current image privacy prediction models are surprisingly vulnerable to even small perturbations in the input data. Attackers can add small perturbations to input data and easily damage a well-trained image privacy prediction model. To address these challenges, in this paper, we propose a new decision-level Gated multi-modal fusion (GMMF) approach that fuses object, scene, and image tags modalities to predict privacy for online images. In particular, the proposed approach identifies fusion weights of class probability distributions generated by single-modal classifiers according to their reliability of the privacy prediction for each target image in a sample-by-sample manner and performs a weighted decision-level fusion, so that modalities with high reliability are assigned with higher fusion weights while ones with low reliability are restrained with lower fusion weights. The results of our experiments show that the gated multi-modal fusion network effectively fuses single modalities and outperforms state-of-the-art models for image privacy prediction. Moreover, we perform adversarial training on our proposed GMMF model using multiple types of noise on input data (i.e., images and/or tags). When some modalities are failed by input data with noise attacks, our approach effectively utilizes clean modalities and minimizes negative influences brought by degraded ones using fusion weights, achieving significantly stronger robustness over traditional fusion methods for image privacy prediction. The robustness of our GMMF model against data noise can even be generalized to more severe noise levels. To the best of our knowledge, we are the first to investigate the robustness of image privacy prediction models against noise attacks. 
Moreover, as the performance of decision-level multi-modal fusion depends highly on the quality of single-modal networks, we investigate self-distillation on single-modal privacy classifiers and observe that transferring knowledge from a trained teacher model to a student model is beneficial in our proposed approach.","PeriodicalId":50940,"journal":{"name":"ACM Transactions on the Web","volume":" ","pages":""},"PeriodicalIF":3.5,"publicationDate":"2023-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44036319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
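The gated fusion described in this record lends itself to a short sketch: each single-modal classifier emits a class probability distribution, and a gating network produces sample-wise weights so that unreliable (e.g., noise-corrupted) modalities contribute less to the fused prediction. The following is a minimal PyTorch sketch under assumed tensor shapes and a hypothetical gating network, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedDecisionFusion(nn.Module):
    """Sample-wise gated decision-level fusion (illustrative).

    Each single-modal classifier (e.g., object, scene, tags) produces a class
    probability distribution; a small gating network scores how reliable each
    modality is for the current sample, and the final prediction is the
    weighted sum of the per-modality distributions.
    """

    def __init__(self, num_modalities: int = 3, feat_dim: int = 128):
        super().__init__()
        # Hypothetical gating network: concatenated modality features ->
        # one reliability score per modality.
        self.gate = nn.Linear(num_modalities * feat_dim, num_modalities)

    def forward(self, modality_probs: torch.Tensor, modality_feats: torch.Tensor):
        # modality_probs: (batch, M, C) softmax outputs of the M single-modal classifiers
        # modality_feats: (batch, M, feat_dim) penultimate-layer features of those classifiers
        weights = F.softmax(self.gate(modality_feats.flatten(1)), dim=-1)   # (batch, M)
        fused = (weights.unsqueeze(-1) * modality_probs).sum(dim=1)         # (batch, C)
        return fused, weights
```

In this setup, a modality degraded by an adversarial perturbation would ideally receive a near-zero gate weight, leaving the fused distribution dominated by the clean modalities.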
{"title":"Dynamic Bayesian Contrastive Predictive Coding Model for Personalized Product Search","authors":"Bin Wu, Zaiqiao Meng, Shangsong Liang","doi":"https://dl.acm.org/doi/10.1145/3609225","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3609225","url":null,"abstract":"<p>In this paper, we study the problem of dynamic personalized product search. Due to the data-sparsity problem in the real world, existing methods suffer from the challenge of data inefficiency. We address the challenge by proposing a Dynamic Bayesian Contrastive Predictive Coding model (DBCPC), which aims to capture the rich structured information behind search records to improve data efficiency. Our proposed DBCPC utilizes the contrastive predictive learning to jointly learn dynamic embeddings with structure information of entities (i.e., users, products and words). Specifically, our DBCPC employs the structured prediction to tackle the intractability caused by non-linear output space and utilizes the time embedding technique to avoid designing different encoders for each time in the Dynamic Bayesian models. In this way, our model jointly learns the underlying embeddings of entities (i.e., users, products and words) via prediction tasks, which enables the embeddings to focus more on their general attributes and capture the general information during the preference evolution with time. For inferring the dynamic embeddings, we propose an inference algorithm combining the variational objective and the contrastive objectives. Experiments were conducted on an Amazon dataset and the experimental results show that our proposed DBCPC can learn the higher-quality embeddings and outperforms the state-of-the-art non-dynamic and dynamic models for product search.</p>","PeriodicalId":50940,"journal":{"name":"ACM Transactions on the Web","volume":"42 37","pages":""},"PeriodicalIF":3.5,"publicationDate":"2023-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138495119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Bayesian Contrastive Predictive Coding Model for Personalized Product Search","authors":"Bin Wu, Zaiqiao Meng, Shangsong Liang","doi":"10.1145/3609225","DOIUrl":"https://doi.org/10.1145/3609225","url":null,"abstract":"In this paper, we study the problem of dynamic personalized product search. Due to the data-sparsity problem in the real world, existing methods suffer from the challenge of data inefficiency. We address the challenge by proposing a Dynamic Bayesian Contrastive Predictive Coding model (DBCPC), which aims to capture the rich structured information behind search records to improve data efficiency. Our proposed DBCPC utilizes the contrastive predictive learning to jointly learn dynamic embeddings with structure information of entities (i.e., users, products and words). Specifically, our DBCPC employs the structured prediction to tackle the intractability caused by non-linear output space and utilizes the time embedding technique to avoid designing different encoders for each time in the Dynamic Bayesian models. In this way, our model jointly learns the underlying embeddings of entities (i.e., users, products and words) via prediction tasks, which enables the embeddings to focus more on their general attributes and capture the general information during the preference evolution with time. For inferring the dynamic embeddings, we propose an inference algorithm combining the variational objective and the contrastive objectives. Experiments were conducted on an Amazon dataset and the experimental results show that our proposed DBCPC can learn the higher-quality embeddings and outperforms the state-of-the-art non-dynamic and dynamic models for product search.","PeriodicalId":50940,"journal":{"name":"ACM Transactions on the Web","volume":" ","pages":""},"PeriodicalIF":3.5,"publicationDate":"2023-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47728212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Closeness Centrality on Uncertain Graphs","authors":"Zhenfang Liu, Jianxiong Ye, Zhaonian Zou","doi":"https://dl.acm.org/doi/10.1145/3604912","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3604912","url":null,"abstract":"<p>Centrality is a family of metrics for characterizing the importance of a vertex in a graph. Although a large number of centrality metrics have been proposed, a majority of them ignores uncertainty in graph data. In this article, we formulate closeness centrality on uncertain graphs and define the batch closeness centrality evaluation problem that computes the closeness centrality of a subset of vertices in an uncertain graph. We develop three algorithms, <sans-serif>MS-BCC</sans-serif>, <sans-serif>MG-BCC,</sans-serif> and <sans-serif>MGMS-BCC</sans-serif>, based on sampling to approximate the closeness centrality of the specified vertices. All these algorithms require to perform breadth-first searches (BFS) starting from the specified vertices on a large number of sampled possible worlds of the uncertain graph. To improve the efficiency of the algorithms, we exploit operation-level parallelism of the BFS traversals and simultaneously execute the shared sequences of operations in the breadth-first searches. Parallelization is realized at different levels in these algorithms. The experimental results show that the proposed algorithms can efficiently and accurately approximate the closeness centrality of the given vertices. <sans-serif>MGMS-BCC</sans-serif> is faster than both <sans-serif>MS-BCC</sans-serif> and <sans-serif>MG-BCC</sans-serif> because it avoids more repeated executions of the shared operation sequences in the BFS traversals.</p>","PeriodicalId":50940,"journal":{"name":"ACM Transactions on the Web","volume":"113 1","pages":""},"PeriodicalIF":3.5,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138516924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Into the Unknown: Exploration of Search Engines’ Responses to Users with Depression and Anxiety","authors":"Ashlee Milton, Maria Soledad Pera","doi":"https://dl.acm.org/doi/10.1145/3580283","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3580283","url":null,"abstract":"<p>Researchers worldwide have explored the behavioral nuances that emerge from interactions of individuals afflicted by mental health disorders (MHD) with persuasive technologies, mainly social media. Yet, there is a gap in the analysis pertaining to a persuasive technology that is part of their everyday lives: web search engines (SE). Each day, users with MHD embark on information seeking journeys using popular SE, like Google or Bing. Every step of the search process for better or worse has the potential to influence a searcher’s mindset. In this work, we empirically investigate what subliminal stimulus SE present to these vulnerable individuals during their searches. For this, we use synthetic queries to produce associated query suggestions and search engine results pages. Then we infer the subliminal stimulus present in text from SE, i.e., query suggestions, snippets, and web resources. Findings from our empirical analysis reveal that the subliminal stimulus displayed by SE at different stages of the information seeking process differ between MHD searchers and our control group composed of “average” SE users. Outcomes from this work showcase open problems related to query suggestions, search engine result pages, and ranking that the information retrieval community needs to address so that SE can better support individuals with MHD.</p>","PeriodicalId":50940,"journal":{"name":"ACM Transactions on the Web","volume":"43 5","pages":""},"PeriodicalIF":3.5,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138495104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Review Helpfulness Measure Based on the User-Review-Item Paradigm","authors":"Luca Pajola, Dongkai Chen, Mauro Conti, V.S. Subrahmanian","doi":"https://dl.acm.org/doi/10.1145/3585280","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3585280","url":null,"abstract":"<p>Review platforms are viral online services where users share and read opinions about products (e.g., a smartphone) or experiences (e.g., a meal at a restaurant). Other users may be influenced by such opinions when deciding what to buy. The usability of review platforms is currently limited by the massive number of opinions on many products. Therefore, showing only the most <i>helpful</i> reviews for each product is in the best interest of both users and the platform (e.g., Amazon). The current state of the art is far from accurate in predicting how helpful a review is. First, most existing works lack compelling comparisons as many studies are conducted on datasets that are not publicly available. As a consequence, new studies are not always built on top of prior baselines. Second, most existing research focuses only on features derived from the review text, ignoring other fundamental aspects of the review platforms (e.g., the other reviews of a product, the order in which they were submitted).</p><p>In this article, we first carefully review the most relevant works in the area published during the last 20 years. We then propose the User-Review-Item (URI) paradigm, a novel abstraction for modeling the problem that moves the focus of the feature engineering from the review to the platform level. We empirically validate the URI paradigm on a dataset of products from six Amazon categories with 270 trained models: on average, classifiers gain +4% in F1-score when considering the whole review platform context. In our experiments, we further emphasize some problems with the helpfulness prediction task: (1) the users’ writing style changes over time (i.e., concept drift), (2) past models do not generalize well across different review categories, and (3) past methods to generate the ground truth produced unreliable helpfulness scores, affecting the model evaluation phase.</p>","PeriodicalId":50940,"journal":{"name":"ACM Transactions on the Web","volume":"43 7","pages":""},"PeriodicalIF":3.5,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138495102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reverse Maximum Inner Product Search: Formulation, Algorithms, and Analysis","authors":"Daichi Amagata, Takahiro Hara","doi":"https://dl.acm.org/doi/10.1145/3587215","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3587215","url":null,"abstract":"<p>The maximum inner product search (MIPS), which finds the item with the highest inner product with a given query user, is an essential problem in the recommendation field. Usually e-commerce companies face situations where they want to promote and sell new or discounted items. In these situations, we have to consider the following questions: Who is interested in the items, and how do we find them? This article answers this question by addressing a new problem called reverse maximum inner product search (reverse MIPS). Given a query vector and two sets of vectors (user vectors and item vectors), the problem of reverse MIPS finds a set of user vectors whose inner product with the query vector is the maximum among the query and item vectors. Although the importance of this problem is clear, its straightforward implementation incurs a computationally expensive cost.</p><p>We therefore propose Simpfer, a simple, fast, and exact algorithm for reverse MIPS. In an offline phase, Simpfer builds a simple index that maintains a lower bound of the maximum inner product. By exploiting this index, Simpfer judges whether the query vector can have the maximum inner product or not, for a given user vector, in a constant time. Our index enables filtering user vectors, which cannot have the maximum inner product with the query vector, in a batch. We theoretically demonstrate that Simpfer outperforms baselines employing state-of-the-art MIPS techniques. In addition, we answer two new research questions. Can approximation algorithms further improve reverse MIPS processing? Is there an exact algorithm that is faster than Simpfer? For the former, we show that approximation with quality guarantee provides a little speed-up. For the latter, we propose Simpfer++, a theoretically and practically faster algorithm than Simpfer. Our extensive experiments on real datasets show that Simpfer is at least two orders of magnitude faster than the baselines, and Simpfer++ further improves the online processing time.</p>","PeriodicalId":50940,"journal":{"name":"ACM Transactions on the Web","volume":"43 6","pages":""},"PeriodicalIF":3.5,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138495103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"To Re-experience the Web: A Framework for the Transformation and Replay of Archived Web Pages","authors":"John Berlin, Mat Kelly, Michael L. Nelson, Michele C. Weigle","doi":"https://dl.acm.org/doi/10.1145/3589206","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3589206","url":null,"abstract":"<p>When replaying an archived web page, or <i>memento</i>, the fundamental expectation is that the page should be viewable and function exactly as it did at the archival time. However, this expectation requires web archives upon replay to modify the page and its embedded resources so that all resources and links reference the archive rather than the original server. Although these modifications necessarily change the state of the representation, it is understood that without them the replay of mementos from the archive would not be possible. The process of replaying mementos and the modifications made to the representations by web archives varies between archives. Because of this, there is no standard terminology for describing the replay and needed modifications. In this article, we propose terminology for describing the existing styles of replay and the modifications made on the part of web archives to mementos to facilitate replay. Because of issues discovered with server-side only modifications, we propose a general framework for the auto-generation of client-side rewriting libraries. Finally, we evaluate the effectiveness of using a generated client-side rewriting library to augment the existing replay systems of web archives by crawling mementos replayed from the Internet Archive’s Wayback Machine with and without the generated client-side rewriter. By using the generated client-side rewriter, we were able to decrease the cumulative number of requests blocked by the content security policy of the Wayback Machine for 577 mementos by 87.5% and increased the cumulative number of requests made by 32.8%. We were also able to replay mementos that were previously not replayable from the Internet Archive. Many of the client-side rewriting ideas described in this work have been implemented into Wombat, a client-side URL rewriting system that is used by the Webrecorder, Pywb, and Wayback Machine playback systems.</p>","PeriodicalId":50940,"journal":{"name":"ACM Transactions on the Web","volume":"43 8","pages":""},"PeriodicalIF":3.5,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138495101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Summarizing Web Archive Corpora Via Social Media Storytelling By Automatically Selecting and Visualizing Exemplars","authors":"Shawn M. Jones, Martin Klein, Michele C. Weigle, Michael L. Nelson","doi":"https://dl.acm.org/doi/10.1145/3606030","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3606030","url":null,"abstract":"<p>People often create themed collections to make sense of an ever-increasing number of archived web pages. Some of these collections contain hundreds of thousands of documents. Thousands of collections exist, many covering the same topic. Few collections include standardized metadata. This scale makes understanding a collection an expensive proposition. Our Dark and Stormy Archives (DSA) five-process model implements a novel summarization method to help users understand a collection by combining web archives and social media storytelling. The five processes of the DSA model are: select exemplars, generate story metadata, generate document metadata, visualize the story, and distribute the story. Selecting exemplars produces a set of <i>k</i> documents from the <i>N</i> documents in the collection, where <i>k</i> < <<i>N</i>, thus reducing the number of documents visitors need to review to understand a collection. Generating story and document metadata selects images, titles, descriptions, and other content from these exemplars. Visualizing the story ties this metadata together in a format the visitor can consume. Without distributing the story, it is not shared for others to consume. We present a research study demonstrating that our algorithmic primitives can be combined to select relevant exemplars that are otherwise undiscoverable using a conventional search engine and query generation methods. Having demonstrated improved methods for selecting exemplars, we visualize the story. Previous work established that the social card is the best format for visitors to consume surrogates. The social card combines metadata fields, including the document’s title, a brief description, and a striking image. Social cards are commonly found on social media platforms. We discovered that these platforms perform poorly for mementos and rely on web page authors to supply the necessary values for these metadata fields. With web archives, we often encounter archived web pages that predate the existence of this metadata. To generate this missing metadata and ensure that storytelling is available for these documents, we apply machine learning to generate the images needed for social cards with a [email protected] of 0.8314. We also provide the length values needed for executing automatic summarization algorithms to generate document descriptions. Applying these concepts helps us create the visualizations needed to fulfill the final processes of story generation. We close this work with examples and applications of this technology.</p>","PeriodicalId":50940,"journal":{"name":"ACM Transactions on the Web","volume":"43 9","pages":""},"PeriodicalIF":3.5,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138495100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}