ASSETS. Annual ACM Conference on Assistive Technologies: Latest Publications

Uncovering Patterns in Reviewers' Feedback to Scene Description Authors.
ASSETS. Annual ACM Conference on Assistive Technologies Pub Date : 2021-01-01 DOI: 10.1145/3441852.3476550
Rosiana Natalie, Jolene Loh Kar Inn, Tan Huei Suen, Joshua Tseng Shi Hao, Hernisa Kacorri, Kotaro Hara
{"title":"Uncovering Patterns in Reviewers' Feedback to Scene Description Authors.","authors":"Rosiana Natalie, Jolene Loh Kar Inn, Tan Huei Suen, Joshua Tseng Shi Hao, Hernisa Kacorri, Kotaro Hara","doi":"10.1145/3441852.3476550","DOIUrl":"10.1145/3441852.3476550","url":null,"abstract":"<p><p>Audio descriptions (ADs) can increase access to videos for blind people. Researchers have explored different mechanisms for generating ADs, with some of the most recent studies involving paid novices; to improve the quality of their ADs, novices receive feedback from reviewers. However, reviewer feedback is not instantaneous. To explore the potential for real-time feedback through automation, in this paper, we analyze 1, 120 comments that 40 sighted novices received from a sighted or a blind reviewer. We find that feedback patterns tend to fall under four themes: (i) <b>Quality</b>; commenting on different AD quality variables, (ii) <b>Speech Act</b>; the utterance or speech action that the reviewers used, (iii) <b>Required Action</b>; the recommended action that the authors should do to improve the AD, and (iv) <b>Guidance</b>; the additional help that the reviewers gave to help the authors. We discuss which of these patterns could be automated within the review process as design implications for future AD collaborative authoring systems.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"93 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8855355/pdf/nihms-1752255.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Efficacy of Collaborative Authoring of Video Scene Descriptions.
ASSETS. Annual ACM Conference on Assistive Technologies Pub Date : 2021-01-01 DOI: 10.1145/3441852.3471201
Rosiana Natalie, Joshua Tseng, Jolene Loh, Ian Luke Yi-Ren Chan, Huei Suen Tan, Ebrima H Jarjue, Hernisa Kacorri, Kotaro Hara
{"title":"The Efficacy of Collaborative Authoring of Video Scene Descriptions.","authors":"Rosiana Natalie,&nbsp;Joshua Tseng,&nbsp;Jolene Loh,&nbsp;Ian Luke Yi-Ren Chan,&nbsp;Huei Suen Tan,&nbsp;Ebrima H Jarjue,&nbsp;Hernisa Kacorri,&nbsp;Kotaro Hara","doi":"10.1145/3441852.3471201","DOIUrl":"https://doi.org/10.1145/3441852.3471201","url":null,"abstract":"<p><p>The majority of online video contents remain inaccessible to people with visual impairments due to the lack of audio descriptions to depict the video scenes. Content creators have traditionally relied on professionals to author audio descriptions, but their service is costly and not readily-available. We investigate the feasibility of creating more cost-effective audio descriptions that are also of high quality by involving novices. Specifically, we designed, developed, and evaluated ViScene, a web-based collaborative audio description authoring tool that enables a sighted novice author and a reviewer either sighted or blind to interact and contribute to scene descriptions (SDs)-text that can be transformed into audio through text-to-speech. Through a mixed-design study with <i>N</i> = 60 participants, we assessed the quality of SDs created by sighted novices with feedback from both sighted and blind reviewers. Our results showed that with ViScene novices could produce content that is Descriptive, Objective, Referable, and Clear at a cost of <i>i.e.,</i> US$2.81pvm to US$5.48pvm, which is 54% to 96% lower than the professional service. However, the descriptions lacked in other quality dimensions (<i>e.g.,</i> learning, a measure of how well an SD conveys the video's intended message). While professional audio describers remain the gold standard, for content creators who cannot afford it, ViScene offers a cost-effective alternative, ultimately leading to a more accessible medium.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"17 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8855356/pdf/nihms-1752253.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
TableView: Enabling Efficient Access to Web Data Records for Screen-Magnifier Users.
ASSETS. Annual ACM Conference on Assistive Technologies Pub Date : 2020-10-01 DOI: 10.1145/3373625.3417030
Hae-Na Lee, Sami Uddin, Vikas Ashok
{"title":"TableView: Enabling Eficient Access to Web Data Records for Screen-Magnifier Users.","authors":"Hae-Na Lee,&nbsp;Sami Uddin,&nbsp;Vikas Ashok","doi":"10.1145/3373625.3417030","DOIUrl":"https://doi.org/10.1145/3373625.3417030","url":null,"abstract":"<p><p>People with visual impairments typically rely on screen-magnifier assistive technology to interact with webpages. As screen-magnifier users can only view a portion of the webpage content in an enlarged form at any given time, they have to endure an inconvenient and arduous process of repeatedly moving the magnifier focus back-and-forth over different portions of the webpage in order to make comparisons between data records, e.g., comparing the available fights in a travel website based on their prices, durations, etc. To address this issue, we designed and developed TableView, a browser extension that leverages a state-of-the art information extraction method to automatically identify and extract data records and their attributes in a webpage, and subsequently presents them to a user in a compactly arranged tabular format that needs significantly less screen space compared to that currently occupied by these items in the page. This way, TableView is able to pack more items within the magnifier focus, thereby reducing the overall content area for panning, and hence making it easy for screen-magnifier users to compare different items before making their selections. A user study with 16 low vision participants showed that with TableView, the time spent on panning the data records in webpages was significantly reduced by 72.9% (avg.) compared to that with just a screen magnifier, and 66.5% compared to that with a screen magnifier using a space compaction method.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2020 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3373625.3417030","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25455684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
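TableView's published extraction model is not detailed in this listing, but the core idea, detecting a repeated record structure in a page and flattening it into a compact table, can be illustrated. Below is a minimal Python sketch under that assumption (it is not TableView's actual algorithm, and the flight-listing fragment is invented): it groups sibling elements by a tag-plus-class signature, treats the largest repeating group as the record list, and prints one compact row per record.

```python
# Illustrative sketch, not TableView's published extraction method: find the
# largest group of sibling elements sharing a structural signature and flatten
# each record's fields into one compact row. Requires beautifulsoup4.
from collections import Counter
from bs4 import BeautifulSoup

HTML = """
<ul id="flights">
  <li><span class="airline">AirA</span><span class="price">$120</span><span class="dur">2h 10m</span></li>
  <li><span class="airline">AirB</span><span class="price">$95</span><span class="dur">3h 05m</span></li>
  <li><span class="airline">AirC</span><span class="price">$140</span><span class="dur">1h 55m</span></li>
</ul>
"""  # invented flight-listing fragment

def record_rows(soup):
    best = []
    for parent in soup.find_all(True):
        children = parent.find_all(recursive=False)
        # Signature = tag name plus CSS classes; a signature repeating under
        # one parent is a strong hint of a data-record list.
        sig_of = lambda c: (c.name, tuple(c.get("class") or []))
        counts = Counter(sig_of(c) for c in children)
        for sig, n in counts.items():
            if n >= 3:
                group = [c for c in children if sig_of(c) == sig]
                if len(group) > len(best):
                    best = group
    # One row per record, attributes side by side: far less panning under a
    # magnifier than the original spatially distributed layout.
    return [list(rec.stripped_strings) for rec in best]

for row in record_rows(BeautifulSoup(HTML, "html.parser")):
    print(" | ".join(row))
```

Running this prints one line per flight ("AirA | $120 | 2h 10m", and so on), which is the tabular compaction the abstract describes.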
Screen Magnification for Office Applications.
ASSETS. Annual ACM Conference on Assistive Technologies Pub Date : 2020-10-01 DOI: 10.1145/3373625.3418049
Hae-Na Lee, Vikas Ashok, I V Ramakrishnan
{"title":"Screen Magnification for Office Applications.","authors":"Hae-Na Lee,&nbsp;Vikas Ashok,&nbsp;I V Ramakrishnan","doi":"10.1145/3373625.3418049","DOIUrl":"https://doi.org/10.1145/3373625.3418049","url":null,"abstract":"<p><p>People with low vision use screen magnifiers to interact with computers. They usually need to zoom and pan with the screen magnifier using predefined keyboard and mouse actions. When using office productivity applications (e.g., word processors and spreadsheet applications), the spatially distributed arrangement of UI elements makes interaction a challenging proposition for low vision users, as they can only view a fragment of the screen at any moment. They expend significant chunks of time panning back-and-forth between application ribbons containing various commands (e.g., formatting, design, review, references, etc.) and the main edit area containing user content. In this demo, we will demonstrate MagPro, an interface augmentation to office productivity tools, that not only reduces the interaction effort of low-vision screen-magnifier users by bringing the application commands as close as possible to the users' current focus in the edit area, but also lets them easily explore these commands using simple mouse actions. Moreover, MagPro automatically synchronizes the magnifier viewport with the keyboard cursor, so that users can always see what they are typing, without having to manually adjust the magnifier focus every time the keyboard cursor goes of screen during text entry.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2020 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3373625.3418049","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25358127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
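The viewport-synchronization behavior in the abstract comes down to a little geometry. The Python sketch below is an illustrative model, not MagPro's code (the Rect type and all numbers are invented): the magnifier viewport is re-centered on the keyboard caret whenever the caret leaves the visible region, then clamped to the screen bounds.

```python
# Illustrative model of the "viewport follows caret" behavior (invented types
# and numbers, not MagPro's implementation).
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def follow_caret(viewport: Rect, screen: Rect, caret_x: int, caret_y: int) -> Rect:
    # If the caret is still visible, leave the viewport alone to avoid jitter.
    if viewport.contains(caret_x, caret_y):
        return viewport
    # Otherwise re-center the viewport on the caret ...
    nx = caret_x - viewport.w // 2
    ny = caret_y - viewport.h // 2
    # ... and clamp it so it never pans past the screen edges.
    nx = max(screen.x, min(nx, screen.x + screen.w - viewport.w))
    ny = max(screen.y, min(ny, screen.y + screen.h - viewport.h))
    return Rect(nx, ny, viewport.w, viewport.h)

# A 400x300 viewport on a 1920x1080 screen; the caret has moved off to (1700, 900).
print(follow_caret(Rect(0, 0, 400, 300), Rect(0, 0, 1920, 1080), 1700, 900))
# -> Rect(x=1500, y=750, w=400, h=300)
```

The "no-op while visible" branch is the design point: the viewport only jumps when typing would otherwise go off screen.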
ViScene: A Collaborative Authoring Tool for Scene Descriptions in Videos.
ASSETS. Annual ACM Conference on Assistive Technologies Pub Date : 2020-10-01 DOI: 10.1145/3373625.3418030
Rosiana Natalie, Ebrima Jarjue, Hernisa Kacorri, Kotaro Hara
{"title":"ViScene: A Collaborative Authoring Tool for Scene Descriptions in Videos.","authors":"Rosiana Natalie,&nbsp;Ebrima Jarjue,&nbsp;Hernisa Kacorri,&nbsp;Kotaro Hara","doi":"10.1145/3373625.3418030","DOIUrl":"https://doi.org/10.1145/3373625.3418030","url":null,"abstract":"<p><p>Audio descriptions can make the visual content in videos accessible to people with visual impairments. However, the majority of the online videos lack audio descriptions due in part to the shortage of experts who can create high-quality descriptions. We present ViScene, a web-based authoring tool that taps into the larger pool of sighted non-experts to help them generate high-quality descriptions via two feedback mechanisms-succinct visualizations and comments from an expert. Through a mixed-design study with <i>N</i> = 6 participants, we explore the usability of ViScene and the quality of the descriptions created by sighted non-experts with and without feedback comments. Our results indicate that non-experts can produce better descriptions with feedback comments; preliminary insights also highlight the role that people with visual impairments can play in providing this feedback.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2020 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3373625.3418030","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39202022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Ontology-Driven Transformations for PDF Form Accessibility.
ASSETS. Annual ACM Conference on Assistive Technologies Pub Date : 2020-10-01 DOI: 10.1145/3373625.3418047
Utku Uckun, Ali Selman Aydin, Vikas Ashok, I V Ramakrishnan
{"title":"Ontology-Driven Transformations for PDF Form Accessibility.","authors":"Utku Uckun, Ali Selman Aydin, Vikas Ashok, I V Ramakrishnan","doi":"10.1145/3373625.3418047","DOIUrl":"10.1145/3373625.3418047","url":null,"abstract":"<p><p>Filling out PDF forms with screen readers has always been a challenge for people who are blind. Many of these forms are not interactive and hence are not accessible; even if they are interactive, the serial reading order of the screen reader makes it difficult to associate the correct labels with the form fields. This demo will present TransPAc[5], an assistive technology that enables blind people to fill out PDF forms. Since blind people are familiar with web browsing, TransPAc leverages this fact by faithfully transforming a PDF document with forms into a HTML page. The blind user fills out the form fields in the HTML page with their screen reader and these filled-in data values are transparently transferred onto the corresponding form fields in the PDF document. TransPAc thus addresses a long standing problem in PDF form accessibility.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2020 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7871703/pdf/nihms-1664031.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25358129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
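TransPAc's ontology-driven transformation is not spelled out in the abstract, but the PDF-to-HTML-and-back plumbing it describes can be sketched with the pypdf library. The following Python sketch is a simplified illustration under that assumption, not the authors' implementation: it surfaces each interactive PDF form field as a labeled HTML input, then writes the user's typed values back onto the PDF's fields.

```python
# Sketch of the PDF -> HTML -> PDF round trip described in the abstract, using
# the pypdf library; TransPAc's actual ontology-driven transformation is more
# sophisticated than this plumbing.
from pypdf import PdfReader, PdfWriter

def pdf_form_to_html(pdf_path: str) -> str:
    """Surface each interactive form field as a labeled HTML input."""
    reader = PdfReader(pdf_path)
    fields = reader.get_fields() or {}  # None when the PDF has no AcroForm
    rows = [
        f'<label>{name} <input name="{name}" value="{field.value or ""}"></label>'
        for name, field in fields.items()
    ]
    return "<form>\n" + "\n".join(rows) + "\n</form>"

def html_values_to_pdf(pdf_path: str, out_path: str, values: dict) -> None:
    """Transfer values typed into the HTML form back onto the PDF's fields."""
    writer = PdfWriter()
    writer.append(PdfReader(pdf_path))
    for page in writer.pages:
        # Updates the fields whose widgets appear on this page.
        writer.update_page_form_field_values(page, values)
    with open(out_path, "wb") as fh:
        writer.write(fh)
```

The HTML form is what the screen reader traverses in a sensible reading order; the round trip keeps the original PDF as the document of record.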
The Role of Sensory Changes in Everyday Technology Use by People with Mild to Moderate Dementia.
ASSETS. Annual ACM Conference on Assistive Technologies Pub Date : 2020-10-01 DOI: 10.1145/3373625.3417000
Emma Dixon, Amanda Lazar
{"title":"The Role of Sensory Changes in Everyday Technology use by People with Mild to Moderate Dementia.","authors":"Emma Dixon, Amanda Lazar","doi":"10.1145/3373625.3417000","DOIUrl":"10.1145/3373625.3417000","url":null,"abstract":"<p><p>Technology design for dementia primarily focuses on cognitive needs. This includes providing task support, accommodating memory changes, and simplifying interfaces by reducing complexity. However, research has demonstrated that dementia affects not only the cognitive abilities of people with dementia, but also their sensory and motor abilities. This work provides a first step towards understanding the interaction between sensory changes and technology use by people with dementia through interviews with people with mild to moderate dementia and practitioners. Our analysis yields an understanding of strategies to use technology to overcome sensory changes associated with dementia as well as barriers to using certain technologies. We present new directions for the design of technologies for people with mild to moderate dementia, including intentional sensory stimulation to facilitate comprehension, as well as opportunities to leverage advances in technology design from other disabilities for dementia.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2020 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8299872/pdf/nihms-1710953.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39221676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IncluSet: A Data Surfacing Repository for Accessibility Datasets.
ASSETS. Annual ACM Conference on Assistive Technologies Pub Date : 2020-01-01 DOI: 10.1145/3373625.3418026
Hernisa Kacorri, Utkarsh Dwivedi, Sravya Amancherla, Mayanka K Jha, Riya Chanduka
{"title":"IncluSet: A Data Surfacing Repository for Accessibility Datasets.","authors":"Hernisa Kacorri,&nbsp;Utkarsh Dwivedi,&nbsp;Sravya Amancherla,&nbsp;Mayanka K Jha,&nbsp;Riya Chanduka","doi":"10.1145/3373625.3418026","DOIUrl":"https://doi.org/10.1145/3373625.3418026","url":null,"abstract":"<p><p>Datasets and data sharing play an important role for innovation, benchmarking, mitigating bias, and understanding the complexity of real world AI-infused applications. However, there is a scarcity of available data generated by people with disabilities with the potential for training or evaluating machine learning models. This is partially due to smaller populations, disparate characteristics, lack of expertise for data annotation, as well as privacy concerns. Even when data are collected and are publicly available, it is often difficult to locate them. We present a novel data surfacing repository, called IncluSet, that allows researchers and the disability community to discover and link accessibility datasets. The repository is pre-populated with information about 139 existing datasets: 65 made publicly available, 25 available upon request, and 49 not shared by the authors but described in their manuscripts. More importantly, IncluSet is designed to expose existing and new dataset contributions so they may be discoverable through Google Dataset Search.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"72 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3373625.3418026","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39349004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
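The last sentence points at a concrete mechanism: Google Dataset Search discovers datasets by crawling pages that embed schema.org/Dataset structured data, typically as JSON-LD. Here is a minimal Python sketch of rendering one repository entry in that format; the dataset name, URL, and license values are invented for illustration.

```python
# Google Dataset Search indexes pages that embed schema.org/Dataset structured
# data (JSON-LD). Minimal example of emitting that markup for one entry.
import json

def dataset_jsonld(name: str, description: str, url: str, license_url: str) -> str:
    return json.dumps({
        "@context": "https://schema.org/",
        "@type": "Dataset",
        "name": name,
        "description": description,
        "url": url,
        "license": license_url,
    }, indent=2)

snippet = dataset_jsonld(
    name="Example sign-language video corpus",  # invented entry for illustration
    description="Videos collected from deaf signers for recognition research.",
    url="https://example.org/datasets/sign-corpus",
    license_url="https://creativecommons.org/licenses/by/4.0/",
)
# Embedding this <script> block in the dataset's landing page is what makes it
# discoverable by the crawler.
print(f'<script type="application/ld+json">\n{snippet}\n</script>')
```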
Revisiting Blind Photography in the Context of Teachable Object Recognizers.
ASSETS. Annual ACM Conference on Assistive Technologies Pub Date : 2019-10-01 DOI: 10.1145/3308561.3353799
Kyungjun Lee, Jonggi Hong, Simone Pimento, Ebrima Jarjue, Hernisa Kacorri
{"title":"Revisiting Blind Photography in the Context of Teachable Object Recognizers.","authors":"Kyungjun Lee, Jonggi Hong, Simone Pimento, Ebrima Jarjue, Hernisa Kacorri","doi":"10.1145/3308561.3353799","DOIUrl":"10.1145/3308561.3353799","url":null,"abstract":"<p><p>For people with visual impairments, photography is essential in identifying objects through remote sighted help and image recognition apps. This is especially the case for teachable object recognizers, where recognition models are trained on user's photos. Here, we propose real-time feedback for communicating the location of an object of interest in the camera frame. Our audio-haptic feedback is powered by a deep learning model that estimates the object center location based on its proximity to the user's hand. To evaluate our approach, we conducted a user study in the lab, where participants with visual impairments (<i>N</i> = 9) used our feedback to train and test their object recognizer in vanilla and cluttered environments. We found that very few photos did not include the object (2% in the vanilla and 8% in the cluttered) and the recognition performance was promising even for participants with no prior camera experience. Participants tended to trust the feedback even though they know it can be wrong. Our cluster analysis indicates that better feedback is associated with photos that include the entire object. Our results provide insights into factors that can degrade feedback and recognition performance in teachable interfaces.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2019 ","pages":"83-95"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7415326/pdf/nihms-1609036.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38252920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
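The paper's feedback is driven by a deep model that estimates the object center from hand proximity. As a simpler stand-in, the Python sketch below shows how a bounding box from any off-the-shelf detector could be turned into a spoken aiming cue relative to the camera frame's center; the box coordinates, tolerance, and cue wording are all invented for illustration.

```python
# Simplified stand-in for the paper's feedback loop: given an object bounding
# box from any detector (the paper instead estimates the object center from
# hand proximity), produce a spoken aiming cue relative to the frame center.

def feedback_cue(box, tolerance=0.15):
    """box = (x_min, y_min, x_max, y_max) in normalized [0, 1] image coords."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    dx, dy = cx - 0.5, cy - 0.5  # offset of object center from frame center
    if abs(dx) <= tolerance and abs(dy) <= tolerance:
        return "hold steady"  # object is framed well enough to take the photo
    # Object left of center -> aim the camera left, and likewise vertically
    # (image y grows downward, so a positive dy means the object sits low).
    horizontal = "aim left" if dx < 0 else "aim right"
    vertical = "tilt down" if dy > 0 else "tilt up"
    return horizontal if abs(dx) >= abs(dy) else vertical

print(feedback_cue((0.1, 0.4, 0.3, 0.6)))  # object far left -> "aim left"
```

Announcing only the dominant axis keeps the cue short, which matters when feedback has to land between camera movements.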
Using Modules to Teach Accessibility in a User-Centered Design Course.
ASSETS. Annual ACM Conference on Assistive Technologies Pub Date : 2019-10-01 DOI: 10.1145/3308561.3354632
Amanda Lazar, Jonathan Lazar, Alisha Pradhan
{"title":"Using Modules to Teach Accessibility in a User-Centered Design Course.","authors":"Amanda Lazar,&nbsp;Jonathan Lazar,&nbsp;Alisha Pradhan","doi":"10.1145/3308561.3354632","DOIUrl":"https://doi.org/10.1145/3308561.3354632","url":null,"abstract":"<p><p>Courses in user-centered design, where students learn about centering design on the needs of individuals, is one natural point in which accessibility content can be injected into the curriculum. We describe the approach we have taken with sections in the undergraduate User-Centered Design Course at the University of Maryland, College Park. We initially introduced disability and accessibility in four modules: 1) websites and design portfolios, 2) introduction to understanding user needs, 3) prototyping, and 4) UX evaluation. We present a description of this content that was taught as an extended version in one Fall 2018 section and as an abbreviated version in all sections in Spring 2019. Survey results indicate that students' understanding of accessibility and assistive technology increased with the introduction of these modules.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2019 ","pages":"554-556"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3308561.3354632","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38186490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7