{"title":"Automatic Evaluation of Discussion Quality Using Topic Relevance and Participants' Performance","authors":"Yoko Nishihara, W. Sunayama, Shiho Imashiro","doi":"10.1109/TAAI57707.2022.00023","DOIUrl":"https://doi.org/10.1109/TAAI57707.2022.00023","url":null,"abstract":"People often have discussions with others in working cooperatively. Discussions are conducted with given topics. Participants in discussions give their opinions to have an agreement on an issue. They listen to others' opinions and give their views. The duration of a discussion is limited, so a discussion can not be held for a long time. Since the participants have to get an agreement in a limited duration, the quality of discussion must keep high. However, it is difficult for the participants to assess the quality of the discussion. An automatic evaluation method for the quality of discussion is required. This paper proposes an automatic evaluation method of the discussion quality. We assume if a discussion has much information related to discussion topics, the quality of the discussion must be high. We also assume if all participants give their opinions related to the discussion topics, the quality of the discussion must be high. Based on the assumptions, the proposed method takes discussion texts and keywords related to discussion topics to evaluate topic relevance and participant's performance. The proposed method generates an equation of the discussion quality using multiple regression analysis. If a new discussion is given, the generated equation can automatically assess the discussion's Quality. In evaluation experiments, we used 20 discussion texts and obtained an equation for the quality of discussion. We found that the obtained equation could assess the quality of discussion with high accuracy (a multiple $R$ was 0.92).","PeriodicalId":111620,"journal":{"name":"2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121460303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identifying Reply-to Relation in Textual Group Chat using Unlabeled Dialogue Scripts and Next Sentence Prediction","authors":"Junjie Shan, Yoko Nishihara, Yihong Han","doi":"10.1109/TAAI57707.2022.00025","DOIUrl":"https://doi.org/10.1109/TAAI57707.2022.00025","url":null,"abstract":"With instant message (IM) software becoming an important part of work and study, more and more research has begun to aim at supporting people's communication by analyzing their chat messages, such as topic provision and relationship sustainment. As the essential step for achieving these research works, the first task is to identify the relations between those large amounts of chat messages, especially in the situation of group chats. In this paper, we propose a method to identify the “reply-to” relations in IM's group chat from unlabeled textual data by using the next sentence prediction (NSP) approach. First, we proposed a method of automatically sampling two messages with and without the “reply-to” relation from unlabeled dialogue scripts to prepare the training data. Second, we built and trained three settings of the NSP model through the training data to identify the “reply-to” relations between two text chat messages. These NSP models were based on the pre-trained Japanese BERT (bidirectional encoder representation from transformers) model. Last, we evaluated the trained models through actual text group chat data with manual labels. The evaluation data contains 444 textual chat messages from three chat groups, each group has three chat members. Evaluation results showed that the models reached a max accuracy up to 69.6%, higher than past methods, and the top F1 score is 0.558.","PeriodicalId":111620,"journal":{"name":"2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126400046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proposal for Visualizing Sound Information in Movies","authors":"Y. Takama, Ikuya Sasakata, H. Shibata","doi":"10.1109/TAAI57707.2022.00046","DOIUrl":"https://doi.org/10.1109/TAAI57707.2022.00046","url":null,"abstract":"This paper proposes a method for visualizing sound information accompanying movies. Sometimes we have a situation where we want to watch a movie with the sound off, such as when watching multiple movies at the same time, while traveling by public transport, etc. As alternatives to audio, subtitles are commonly used to represent sound information in a movie. However, using subtitles only is difficult to give us the same impression as audio, such as the emotion of the speakers, music played, and noise in the movie. As another alternative to representing sound information, this paper aims to visualize the sound information in movies. The proposed method employs speech bubbles, frame borders, and concentration lines, which are inspired by the representation used in comics. This paper shows the effectiveness and characteristics of the proposed method by comparing it with a subtitle-based method by user experiments.","PeriodicalId":111620,"journal":{"name":"2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114500515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Three Player Otrio will be Strongly Solved","authors":"Runa Kubota, Lucien Troillet, Kiminori Matsuzaki","doi":"10.1109/TAAI57707.2022.00015","DOIUrl":"https://doi.org/10.1109/TAAI57707.2022.00015","url":null,"abstract":"Otrio is a 3D extension of classic Tic-Tac-Toe played with two to four players. Three-player Otrio has a big state space due to the additional dimension and players, and stands on the line between strongly and weakly solvable games. In this paper, we analyze the feasibility of strongly solving 3-player Otrio by giving a tighter upper bound and a fairly accurate approximation of the state-space complexity. We propose a new indexing scheme to subdivide the state space into slices that are manageable by a commodity computer, and then develop an algorithm to conduct a retrograde analysis of the game. We find that the state-space complexity of three player Otrio is in the order of 1012 and that fully solving it would require approximately 248 days of CPU time per player cooperation configuration.","PeriodicalId":111620,"journal":{"name":"2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131800693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Clustering Uncertainty-weighted Embeddings for Active Domain Adaptation","authors":"Shengsen Wu, Hsuan-Tien Lin","doi":"10.1109/TAAI57707.2022.00013","DOIUrl":"https://doi.org/10.1109/TAAI57707.2022.00013","url":null,"abstract":"Domain adaptation generalizes deep neural networks to new target domains under domain shift. Active domain adaptation (ADA) does so efficiently by allowing the learning model to strategically ask data annotation questions. The state-of-the-art active domain adaptation via clustering uncertainty-weighted embeddings (ADA-CLUE) uses uncertainty-weighted clustering to identify target instances for labeling. In this work, we carefully study how ADA-CLUE balances uncertainty and diversity during active learning. We compare the original ADA-CLUE with a variant that weights clusters by a constant instead of by the uncertainty, and confirm that constant-weighted clustering sampling outperforms ADA-CLUE at early stages due to its stability. We then merge constant-weighted sampling and uncertainty-weighted sampling with a threshold to get the best of the two worlds. The merged solution, called CLUE with a loop threshold, is shown to be an empirically better choice than the original ADA-CLUE.","PeriodicalId":111620,"journal":{"name":"2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116134754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preliminary Design, Implementation and Observations of the Multi-player Game Tri-Othello","authors":"Jr-Chang Chen, Chao-Liang Weng, T. Hsu","doi":"10.1109/TAAI57707.2022.00016","DOIUrl":"https://doi.org/10.1109/TAAI57707.2022.00016","url":null,"abstract":"To study game characteristics, e.g., effective searching strategies and consequence of rules on playing styles, of multi-player games, we extend the popular two-player game Othello to be played by three players and is called Tri-Othello. Previous multi-player games are either too much knowledge involved, e.g., Poker and Mahjong, so that the focus is usually on designing heuristics, or not too much different by simply extending from their two-player versions so that not much extra game related information can be found. Tri-Othello is a simple, fun-to-play, asymmetry, imperfect information game where one player, initially unknown to the opponents, has a goal different from the rest. Through the design and implementation, we understand better how that piece of hided information can be used towards the advantage of particular players. From experimental results, we also understand better why and when multi-player alpha-beta pruning cannot be as effective as its two-player version.","PeriodicalId":111620,"journal":{"name":"2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125154490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Values Estimation on News Articles through BERT-extracted Opinion Expressions","authors":"Yihong Han, Yoko Nishihara, Junjie Shan","doi":"10.1109/TAAI57707.2022.00026","DOIUrl":"https://doi.org/10.1109/TAAI57707.2022.00026","url":null,"abstract":"This paper proposes a human values estimation method with an opinion extraction approach. We assume that opinion sentences reflect human values the most. For a given article, the proposed method extracts the opinion sentences from the texts and estimates the human values included in the opinion sentences. The opinion sentence extraction is conducted by classifying each sentence as an opinion sentence or a non-opinion sentence. The proposed method concatenates sentences from the same article to extend the input texts as an upsampling approach while estimating the human values. The upsampling approach enriches the information volume of the training data. The distribution of human values from each opinion sentence shows the article's general human values. We conducted two evaluation experiments. We used the editorial articles from Mainichi Newspaper as the corpus data. The first experiment evaluated the performance of opinion sentence extraction. The training accuracy of opinion sentence extraction was 92%. In the evaluation test, the model reached a $F_{1}$ score of 85.5%. The results showed that opinion sentences could be extracted with high accuracy. The second experiment evaluated the performance of human values estimation. There are five categories for human values estimation. The experiment was conducted with the same editorial articles from Mainichi Newspaper as the corpus data. We applied a training data enhancement approach by increasing the sentence number of training input and achieved a training accuracy up of over 50%. The results showed that the human values of opinion sentences could be estimated with high accuracy.","PeriodicalId":111620,"journal":{"name":"2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117069502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing Food Allocation in Food Banks with Multi-agent Deep Reinforcement Learning","authors":"Tomoshi Iiyama, D. Kitakoshi, Masato Suzuki","doi":"10.1109/TAAI57707.2022.00045","DOIUrl":"https://doi.org/10.1109/TAAI57707.2022.00045","url":null,"abstract":"Food loss and waste (FLW) is becoming a more serious problem, especially in developed nations. Meanwhile, vegetables with imperfect shapes and/or appearances are often discarded even when their flavor and safety have not been compromised. This study proposes a food allocation method with a multi-agent deep reinforcement learning algorithm (called QMIX) to optimize the food allocation process in food support organizations. Several experiments are conducted in a virtual environment to evaluate the basic characteristics of the proposed method and to understand how agents can behave effectively and cooperatively (i.e., to ensure food supply works fairly) through this method. Empirical results showed that agents using the proposed method can acquire reasonable behaviors that can be applied to simple scenarios in the real world.","PeriodicalId":111620,"journal":{"name":"2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126863550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving LiDAR Semantic Segmentation on Minority Classes and Generalization Capability for Autonomous Driving","authors":"Chiao-Hua Tseng, Yu-Ting Lin, Wen-Chieh Lin, Chieh-Chih Wang","doi":"10.1109/TAAI57707.2022.00032","DOIUrl":"https://doi.org/10.1109/TAAI57707.2022.00032","url":null,"abstract":"LiDARs have emerged as an important sensor in autonomous driving systems because they offer more accurate geometric measurements than cameras and radars. Therefore, LiDARs have been commonly combined with cameras or radars to tackle many perception problems in autonomous driving, such as object detection, semantic segmentation, or navigation. For semantic segmentation of LiDAR data, due to the class imbalance issue of large-scale scene, there is a performance gap between majority classes and minority classes of large-scale dataset. The minority classes usually include the crucial classes to the autonomous driving, such as “person”, “motorcyclist”, “traffic-sign”. To improve the performance of minority classes, we adopt U-Net++ as the architecture, KPConv as convolution operator, and use both dice loss and cross entropy as loss functions. We get 5.1% mIoU improvement on SemanticKITTI of all classes and 9.5% mIoU improvement of minority classes. Moreover, due to the different resolution of LiDAR sensors, we show the generalization capability of our model by training it on 64-beam dataset and testing on 32-beam and 128-beam dataset. We get 3.3% mIoU improvement on 128-beam dataset and 1.9% mIoU improvement on 32-beam dataset.","PeriodicalId":111620,"journal":{"name":"2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"208 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123733672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Greedy-Based Precise Expansion Algorithm for Customized Group Team Formation Problem","authors":"Pei-Yi Yeh, Shiou-Chi Li, Hao-Shang Ma, Jen-Wei Huang","doi":"10.1109/TAAI57707.2022.00030","DOIUrl":"https://doi.org/10.1109/TAAI57707.2022.00030","url":null,"abstract":"The team formation problem is a social network research issue. Studies on this topic have recently proposed various team structures. However, these structures have been inflexible and imperfect. Moreover, in order to build teams with the lowest communication cost, proposed structures have always contained some communicators assigned to the team who lack expert skills. To improve upon defective structures, we propose the Customized Group Team Formation. Teams can be constructed automatically without any predefined structure. This allows project leaders to choose exemplary teammates based on skill requirements. Additionally, the proposed greedy-based precise expansion algorithm, the method of expertise query closest to the real world, can also save preprocessing time and reduce the amount of non-professional people on the team. Experiments on real datasets show effective performance on communication cost and cardinality.","PeriodicalId":111620,"journal":{"name":"2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"170 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126589121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}