2022 International Conference on Machine Learning and Cybernetics (ICMLC): Latest Publications

Infrared Guided White Cane for Assisting the Visually Impaired to Walk Alone
2022 International Conference on Machine Learning and Cybernetics (ICMLC) Pub Date : 2022-09-09 DOI: 10.1109/ICMLC56445.2022.9941336
Taisei Hiramoto, Tomoyuki Araki, Takashi Suzuki
{"title":"Infrared Guided White Cane for Assisting the Visually Impaired to Walk Alone","authors":"Taisei Hiramoto, Tomoyuki Araki, Takashi Suzuki","doi":"10.1109/ICMLC56445.2022.9941336","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941336","url":null,"abstract":"This study proposes an indoor navigation system using a white cane equipped with an infrared beacon and receiver installed on the ceiling of a facility, and sound and speech as an option for assisting visually impaired persons to walk alone. This support does not require extensive facility modifications or detailed environmental mapping, and is compact and simple enough to be used and obtained as a tool, like a white cane. This support was verified by a visually impaired person, and its potential to be used as a stand-alone walking support was demonstrated.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"367 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126032660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Development of a New Graphic Description Language for Line Drawings -- Assuming the Use of the Visually Impaired
2022 International Conference on Machine Learning and Cybernetics (ICMLC) Pub Date : 2022-09-09 DOI: 10.1109/ICMLC56445.2022.9941294
Hiroto Nakanishi, Noboru Takagi, K. Sawai, H. Masuta, T. Motoyoshi
{"title":"Development of a New Graphic Description Language for Line Drawings -- Assuming the Use of the Visually Impaired","authors":"Hiroto Nakanishi, Noboru Takagi, K. Sawai, H. Masuta, T. Motoyoshi","doi":"10.1109/ICMLC56445.2022.9941294","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941294","url":null,"abstract":"In recent years, the development of information technology has made it easier for visually impaired people to access language information by developing electronic books and OCR applications. However, graphics are still inaccessible to the visually impaired. Therefore, it is very difficult for the visually impaired to create graphics without help of sighted people. Conventional graphic description languages such as TikZ and SVG and so on are difficult for the visually impaired to write codes because they require numerical coordinates precisely when drawing basic shapes; hence calculating such numerical coordinates is quite difficult for blind users. To solve this problem, we are developing a graphic description language and a drawing assistance system that enables visually impaired people to create figures independently. Our language is based on an object-oriented design in order to reduce the difficulties on the visually impaired. In this paper, we describe our language and show the result of an experiment for evaluating the effectiveness of our language.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"05 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129583621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Method for Forecasting The Pork Price Based on Fluctuation Forecasting and Attention Mechanism
2022 International Conference on Machine Learning and Cybernetics (ICMLC) Pub Date : 2022-09-09 DOI: 10.1109/ICMLC56445.2022.9941318
S. Zhao, Xudong Lin, Xiaojian Weng
{"title":"A Method for Forecasting The Pork Price Based on Fluctuation Forecasting and Attention Mechanism","authors":"S. Zhao, Xudong Lin, Xiaojian Weng","doi":"10.1109/ICMLC56445.2022.9941318","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941318","url":null,"abstract":"With the continuous development of the economy and improvement of people’s living standards, people’s consumption of meat is getting higher and higher, and China has become the largest pork consumer and producer. The price of pork affects not only the quality of life of the residents but also the development of the pig farming industry to a certain extent. Effective pork price forecasting contributes to social stability and unity, not only to ensure farmers’ income, but also to ensure the relation between supply and demand. This paper synthesizes various indicators related to pork prices in the Chinese pork market, and respectively establishes XGboost, SVM and Random Forest models to make preliminary upward and downward forecasts for the samples. The best forecasting results are used to add price forecasting features, and then the LSTM model optimized by the attention mechanism is used to forecast specific prices. The weekly price data of 201501-202106 from the National Bureau of Statistics used in the experiment compared the forecasting effects of three kinds of price increase and decrease forecasting models and eight kinds of numerical price forecasting models. The results show that the Attention-LSTM method of forecasting pork prices based on up and down forecasts is superior to other methods in pork price forecasting accuracy. RMSE = 1.57, MAE = 1.28, MAPE = 2.83%, all belong to a minimum.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123939481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Access Control Method with Secret Key for Semantic Segmentation Models
2022 International Conference on Machine Learning and Cybernetics (ICMLC) Pub Date : 2022-08-28 DOI: 10.1109/ICMLC56445.2022.9941323
Teru Nagamori, Ryota Iijima, H. Kiya
{"title":"An Access Control Method with Secret Key for Semantic Segmentation Models","authors":"Teru Nagamori, Ryota Iijima, H. Kiya","doi":"10.1109/ICMLC56445.2022.9941323","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941323","url":null,"abstract":"A novel method for access control with a secret key is proposed to protect models from unauthorized access in this paper. We focus on semantic segmentation models with the vision transformer (ViT), called segmentation transformer (SETR). Most existing access control methods focus on image classification tasks, or they are limited to CNNs. By using a patch embedding structure that ViT has, trained models and test images can be efficiently encrypted with a secret key, and then semantic segmentation tasks are carried out in the encrypted domain. In an experiment, the method is confirmed to provide the same accuracy as that of using plain images without any encryption to authorized users with a correct key and also to provide an extremely degraded accuracy to unauthorized users.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121348253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Encryption Method of Convmixer Models without Performance Degradation
2022 International Conference on Machine Learning and Cybernetics (ICMLC) Pub Date : 2022-07-25 DOI: 10.1109/ICMLC56445.2022.9941283
Ryota Iijima, H. Kiya
{"title":"An Encryption Method of Convmixer Models without Performance Degradation","authors":"Ryota Iijima, H. Kiya","doi":"10.1109/ICMLC56445.2022.9941283","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941283","url":null,"abstract":"In this paper, we propose an encryption method for ConvMixer models with a secret key. Encryption methods for DNN models have been studied to achieve adversarial defense, model protection and privacy-preserving image classification. However, the use of conventional encryption methods degrades the performance of models compared with that of plain models. Accordingly, we propose a novel method for encrypting ConvMixer models. The method is carried out on the basis of an embedding architecture that ConvMixer has, and models encrypted with the method can have the same performance as models trained with plain images only when using test images encrypted with a secret key. In addition, the proposed method does not require any specially prepared data for model training or network modification. In an experiment, the effectiveness of the proposed method is evaluated in terms of classification accuracy and model protection in an image classification task on the CIFAR10 dataset.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126919794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Security Evaluation of Compressible Image Encryption for Privacy-Preserving Image Classification Against Ciphertext-Only Attacks
2022 International Conference on Machine Learning and Cybernetics (ICMLC) Pub Date : 2022-07-17 DOI: 10.1109/ICMLC56445.2022.9941309
Tatsuya Chuman, H. Kiya
{"title":"Security Evaluation of Compressible Image Encryption for Privacy-Preserving Image Classification Against Ciphertext-Only Attacks","authors":"Tatsuya Chuman, H. Kiya","doi":"10.1109/ICMLC56445.2022.9941309","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941309","url":null,"abstract":"The security of learnable image encryption schemes for image classification using deep neural networks against several attacks has been discussed. On the other hand, block scrambling image encryption using the vision transformer has been proposed, which applies to lossless compression methods such as JPEG standard by dividing an image into permuted blocks. Although robustness of the block scrambling image encryption against jigsaw puzzle solver attacks that utilize a correlation among the blocks has been evaluated under the condition of a large number of encrypted blocks, the security of encrypted images with a small number of blocks has never been evaluated. In this paper, the security of the block scrambling image encryption against ciphertext-only attacks is evaluated by using jigsaw puzzle solver attacks.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116752423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training
2022 International Conference on Machine Learning and Cybernetics (ICMLC) Pub Date : 2022-03-16 DOI: 10.1109/ICMLC56445.2022.9941337
Adir Rahamim, I. Naeh
{"title":"Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training","authors":"Adir Rahamim, I. Naeh","doi":"10.1109/ICMLC56445.2022.9941337","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941337","url":null,"abstract":"In this paper, we introduce a novel neural network training framework that increases model’s adversarial robustness to adversarial attacks while maintaining high clean accuracy by combining contrastive learning (CL) with adversarial training (AT). We propose to improve model robustness to adversarial attacks by learning feature representations that are consistent under both data augmentations and adversarial perturbations. We leverage contrastive learning to improve adversarial robustness by considering an adversarial example els another positive example, and aim to maximize the similarity between random augmentations of data samples and their adversarial example, while constantly updating the classification head in order to avoid a cognitive dissociation between the classification head and the embedding space. This dissociation is caused by the fact that CL updates the network up to the embedding space, while freezing the classification head which is used to generate new positive adversarial examples. We validate our method, Contrastive Learning with Adversarial Features (CLAF), on the CIFAR-10 dataset on which it outperforms both robust accuracy and clean accuracy over alternative supervised and self-supervised adversarial learning methods.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":" 33","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114051312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Adversarial Robust Classification by Conditional Generative Model Inversion
2022 International Conference on Machine Learning and Cybernetics (ICMLC) Pub Date : 2022-01-12 DOI: 10.1109/ICMLC56445.2022.9941288
Mitra Alirezaei, T. Tasdizen
{"title":"Adversarial Robust Classification by Conditional Generative Model Inversion","authors":"Mitra Alirezaei, T. Tasdizen","doi":"10.1109/ICMLC56445.2022.9941288","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941288","url":null,"abstract":"Most adversarial attack defense methods rely on obfuscating gradients. These methods are easily circumvented by attacks which either do not use the gradient or by attacks which approximate and use the corrected gradient Defenses that do not obfuscate gradients such as adversarial training exist, but these approaches generally make assumptions about the attack such as its magnitude. We propose a classification model that does not obfuscate gradients and is robust by construction against black-box attacks without assuming prior knowledge about the attack. Our method casts classification as an optimization problem where we \"invert\" a conditional generator trained on unperturbed, natural images to find the class that generates the closest sample to the query image. We hypothesize that a potential source of brittleness against adversarial attacks is the high-to-low-dimensional nature of feed-forward classifiers. On the other hand, a generative model is typically a low-to-high-dimensional mapping. Since the range of images that can be generated by the model for a given class is limited to its learned manifold, the \"inversion\" process cannot generate images that are arbitrarily close to adversarial examples leading to a robust model by construction. While the method is related to Defense-GAN, the use of a conditional generative model and inversion in our model instead of the feed-forward classifier is a critical difference. Unlike Defense-GAN, we show that our method does not obfuscate gradients. We demonstrate that our model is extremely robust against black-box attacks and does not depend on previous knowledge about the attack strength.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131970268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Understanding and Quantifying Adversarial Examples Existence in Linear Classification
2022 International Conference on Machine Learning and Cybernetics (ICMLC) Pub Date : 2019-10-27 DOI: 10.1109/ICMLC56445.2022.9941315
Xupeng Shi, A. Ding
{"title":"Understanding and Quantifying Adversarial Examples Existence in Linear Classification","authors":"Xupeng Shi, A. Ding","doi":"10.1109/ICMLC56445.2022.9941315","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941315","url":null,"abstract":"State-of-art deep neural networks (DNN) are vulnerable to attacks by adversarial examples: a carefully designed small perturbation to the input, that is imperceptible to human, can mislead DNN. To understand the root cause of adversarial examples, we quantify the probability of adversarial example existence for linear classifiers. Previous mathematical definition of adversarial examples only involves the overall perturbation amount, and we propose a more practical relevant definition of strong adversarial examples that separately limits the perturbation along the signal direction also. We show that linear classifiers can be made robust to strong adversarial examples attack in cases where no adversarial robust linear classifiers exist under the previous definition. The results suggest that designing general strong-adversarial-robust learning systems is feasible but only through incorporating human knowledge of the underlying classification problem.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129079129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3