Latest Publications: 2023 IEEE International Conference on Smart Computing (SMARTCOMP)

On Learning Data-Driven Models For In-Flight Drone Battery Discharge Estimation From Real Data
2023 IEEE International Conference on Smart Computing (SMARTCOMP) Pub Date: 2023-06-01 DOI: 10.1109/SMARTCOMP58114.2023.00038
Austin Coursey, Marcos Quiñones-Grueiro, G. Biswas
Abstract: Accurate estimation of the battery state of charge (SOC) for in-flight monitoring of unmanned aerial vehicles (UAVs) is essential for the safety and survivability of the system. Successful physics-based battery models have been developed in the past; however, these models do not take into account the effects of the mission profile and environmental conditions during flight on battery power consumption. Recently, data-driven methods have become popular given their ease of use and scalability, yet most benchmarking experiments have been conducted on simulated battery datasets. In this work, we compare different data-driven models for battery SOC estimation of a hexacopter UAV system using real flight data. We analyze the importance of a number of flight variables under different environmental conditions to determine the factors that affect battery SOC over the course of a flight. Our experiments demonstrate that additional flight variables are necessary to create an accurate SOC estimation model through data-driven methods.
Citations: 0
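To make the kind of data-driven SOC estimator the paper benchmarks concrete, here is a minimal sketch: a regressor mapping per-timestep flight variables to SOC. The feature names, synthetic data, and choice of a random-forest regressor are illustrative assumptions for this listing, not the paper's actual variables or models.

```python
# Minimal sketch of a data-driven SOC estimator. Feature names and data
# layout are illustrative assumptions; the paper's actual flight variables
# and candidate models may differ.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical flight log: each row is one timestep of a real flight.
# Columns: motor current (A), battery voltage (V), altitude (m),
# ground speed (m/s), ambient temperature (C), elapsed time (s).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))
# Hypothetical ground-truth SOC in [0, 1] (e.g., from coulomb counting).
y = np.clip(1.0 - 0.03 * X[:, 5] + 0.05 * rng.normal(size=5000), 0.0, 1.0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"MAE: {mean_absolute_error(y_test, pred):.4f}")
# Feature importances hint at which flight variables drive SOC, mirroring
# the variable-importance analysis the abstract describes.
print(model.feature_importances_)
```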
FactionFormer: Context-Driven Collaborative Vision Transformer Models for Edge Intelligence
2023 IEEE International Conference on Smart Computing (SMARTCOMP) Pub Date: 2023-06-01 DOI: 10.1109/SMARTCOMP58114.2023.00084
Sumaiya Tabassum Nimi, Md. Adnan Arefeen, M. Y. S. Uddin, Biplob K. Debnath, S. Chakradhar
Abstract: Edge intelligence has received attention in recent times for its potential to improve responsiveness, reduce the cost of data transmission, enhance security and privacy, and enable autonomous decisions by edge devices. However, edge devices lack the power and compute resources necessary to execute most AI models. In this paper, we present FactionFormer, a novel method to deploy resource-intensive deep-learning models, such as vision transformers (ViT), on resource-constrained edge devices. Our method is based on a key observation: edge devices are often deployed in settings where they encounter only a subset of the classes that the resource-intensive AI model is trained to classify, and this subset changes across deployments. Therefore, we automatically identify this subset as a faction, devise on-the-fly a bespoke resource-efficient ViT called a modelette for the faction, and set up an efficient processing pipeline consisting of a modelette on the device, a wireless network such as 5G, and the resource-intensive ViT model on an edge server, all of which work collaboratively to do the inference. For several ViT models pre-trained on benchmark datasets, FactionFormer's modelettes are up to 4× smaller than the corresponding baseline models in terms of the number of parameters, and they can infer up to 2.5× faster than the baseline setup where every input is processed by the resource-intensive ViT on the edge server. Our work is the first of its kind to propose a device-edge collaborative inference framework where bespoke deep-learning models for the device are automatically devised on-the-fly for the most frequently encountered subset of classes.
Citations: 0
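As a rough illustration of the collaborative pipeline the abstract describes, the following sketches the device-edge routing logic: run the on-device modelette first, and offload to the full ViT on the edge server when the modelette is unsure or predicts a class outside its faction. The confidence threshold, faction bookkeeping, and stub models are our own assumptions, not the authors' implementation.

```python
# Sketch of device-edge collaborative inference in the spirit of
# FactionFormer. Model internals are stubbed; routing threshold and
# faction-update policy are illustrative assumptions.
from collections import Counter
from typing import Callable, Set, Tuple

Prediction = Tuple[str, float]  # (class label, confidence)

def collaborative_infer(
    x: object,
    modelette: Callable[[object], Prediction],   # small on-device ViT
    server_vit: Callable[[object], Prediction],  # full ViT on the edge server
    faction: Set[str],
    threshold: float = 0.8,
) -> Prediction:
    """Try the device modelette first; offload to the edge server (e.g.,
    over 5G in the paper's pipeline) when the modelette is not confident
    or sees a class outside its faction."""
    label, conf = modelette(x)
    if conf >= threshold and label in faction:
        return label, conf
    return server_vit(x)

def update_faction(history: Counter, k: int = 10) -> Set[str]:
    """Re-derive the faction as the k most frequently encountered classes,
    so a new modelette can be devised when the deployment context shifts."""
    return {label for label, _ in history.most_common(k)}

if __name__ == "__main__":
    # Stub models standing in for trained networks.
    modelette = lambda x: ("pedestrian", 0.93)
    server_vit = lambda x: ("cyclist", 0.99)
    faction = {"pedestrian", "car"}
    print(collaborative_infer("frame-0", modelette, server_vit, faction))
```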
BeautyNet: A Makeup Activity Recognition Framework using Wrist-worn Sensor
2023 IEEE International Conference on Smart Computing (SMARTCOMP) Pub Date: 2023-06-01 DOI: 10.1109/SMARTCOMP58114.2023.00072
Fatimah Albargi, Naima Khan, Indrajeet Ghosh, Ahana Roy
Abstract: Enhancing facial features has grown increasingly popular among all groups of people, bringing a surge in makeup activities. The makeup market is one of the most profitable and foundational sectors of the fashion industry; it involves product retailing and demands user training. Makeup activities involve exceptionally delicate hand movements and require much training and practice to perfect. However, the only available choices for learning makeup activities are hands-on workshops led by professional instructors or, at most, video-based visual instructions. Neither offers much benefit to beginners or visually impaired people. One can consistently watch and listen to the best of one's abilities, but to precisely practice, perform, and reach makeup satisfaction, recognition from an IoT (Internet-of-Things) device that provides results and feedback would be the utmost support. In this work, we propose BeautyNet, a makeup activity recognition framework that detects different makeup activities from wrist-worn sensor data collected from ten participants of different age groups in two experimental setups. Our framework employs an LSTM-autoencoder-based classifier to extract features from the sensor data and classifies five makeup activities (i.e., applying cream, lipstick, blusher, eyeshadow, and mascara) in controlled and uncontrolled environments. Empirical results indicate that BeautyNet achieves 95% and 93% accuracy for makeup activity detection in controlled and uncontrolled settings, respectively. In addition, we evaluated BeautyNet against various traditional machine learning algorithms on our in-house dataset and noted an increase in accuracy of ≈4-7%.
Citations: 0
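The LSTM-autoencoder-based classifier the abstract mentions can be sketched in PyTorch as an LSTM encoder shared between a reconstruction decoder and a five-way classification head. The layer sizes, window length, and channel count below are illustrative assumptions, not BeautyNet's published configuration.

```python
# Sketch of an LSTM-autoencoder activity classifier in the spirit of
# BeautyNet. The five class labels follow the abstract; sizes are assumed.
import torch
import torch.nn as nn

class LSTMAutoencoderClassifier(nn.Module):
    def __init__(self, n_channels: int = 6, hidden: int = 64, n_classes: int = 5):
        super().__init__()
        self.encoder = nn.LSTM(n_channels, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, n_channels, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor):
        # x: (batch, time, channels) windows of wrist-worn IMU data
        z, _ = self.encoder(x)
        recon, _ = self.decoder(z)          # reconstruction for AE training
        logits = self.classifier(z[:, -1])  # classify from last hidden state
        return recon, logits

model = LSTMAutoencoderClassifier()
x = torch.randn(8, 100, 6)  # 8 windows, 100 timesteps, 6 IMU channels
recon, logits = model(x)
# Joint objective: reconstruct the window and predict the makeup activity
# (cream, lipstick, blusher, eyeshadow, mascara).
labels = torch.randint(0, 5, (8,))
loss = nn.functional.mse_loss(recon, x) + nn.functional.cross_entropy(logits, labels)
loss.backward()
```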
Cyber Framework for Steering and Measurements Collection Over Instrument-Computing Ecosystems
2023 IEEE International Conference on Smart Computing (SMARTCOMP) Pub Date: 2023-06-01 DOI: 10.1109/SMARTCOMP58114.2023.00046
Anees Al-Najjar, Nageswara S. V. Rao, R. Sankaran, H. Zandi, Debangshu Mukherjee, M. Ziatdinov, Craig Bridges
Abstract: We propose a framework for developing cyber solutions that support remote steering of science instruments and measurement collection over instrument-computing ecosystems. It is based on provisioning separate data and control connections at the network level, and on developing software modules consisting of Python wrappers for instrument commands and Pyro server-client code that makes them available across the ecosystem network. We demonstrate automated measurement transfers and remote steering operations in a microscopy use case for materials research over an ecosystem of Nion microscopes and computing platforms connected over site networks. The proposed framework is currently under further refinement and is being adapted to science workflows with automated remote experiment steering for autonomous chemistry laboratories and smart energy grid simulations.
Citations: 1
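The abstract's core pattern, Python wrappers for instrument commands exposed over the network via Pyro, can be sketched with the Pyro5 library as below. The MicroscopeWrapper class and its methods are hypothetical placeholders, not the actual Nion instrument API.

```python
# Sketch of the wrapper-plus-Pyro pattern the paper describes. The
# instrument methods are placeholders; a real deployment would call the
# vendor's instrument-control library inside them.
import Pyro5.api

@Pyro5.api.expose
class MicroscopeWrapper:
    def move_stage(self, x_um: float, y_um: float) -> str:
        # Would invoke the instrument command here.
        return f"stage moved to ({x_um}, {y_um}) um"

    def acquire_frame(self) -> list:
        # Would return a measurement; here a dummy 2x2 frame.
        return [[0, 1], [1, 0]]

def serve() -> None:
    # Control-plane endpoint; bulk measurement transfers would use the
    # separate data connection the framework provisions at the network level.
    daemon = Pyro5.api.Daemon(host="0.0.0.0", port=9090)
    uri = daemon.register(MicroscopeWrapper, objectId="microscope")
    print("Serving:", uri)  # e.g., PYRO:microscope@0.0.0.0:9090
    daemon.requestLoop()

def steer(uri: str = "PYRO:microscope@instrument-host:9090") -> None:
    # Client side: steer the remote instrument across the ecosystem network.
    with Pyro5.api.Proxy(uri) as scope:
        print(scope.move_stage(10.0, -5.0))
        print("frame:", scope.acquire_frame())
```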
NextGenGW - a Software Framework Based on MQTT and Semantic Definition Format
2023 IEEE International Conference on Smart Computing (SMARTCOMP) Pub Date: 2023-06-01 DOI: 10.1109/SMARTCOMP58114.2023.00035
Carlos Resende, Waldir Moreira, Luís Almeida
Abstract: To unlock all the potential value present in IoT, IoT devices need to be interoperable. Some works in the literature target this issue, but it is not yet entirely solved, mainly because the proposed solutions are not standard-based at the semantic level. This paper presents the detailed implementation of our standard-based software framework targeting IoT interoperability, named NextGenGW. With NextGenGW, we propose the first integration of the IETF Semantic Definition Format (SDF) with the MQTT protocol. We define an evaluation baseline for validating IoT gateway performance with a focus on interoperability. Our evaluation results show NextGenGW's suitability for deployment on devices with limited resources and for use cases that require high scalability, both in terms of connected IoT end nodes and the number of requests per time interval.
Citations: 1
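To make the MQTT-plus-SDF combination concrete, here is a sketch of a device publishing a reading alongside a minimal SDF model using the paho-mqtt client. The SDF snippet follows the structure of the IETF SDF draft; the topic convention and payload mapping are our own assumptions, not NextGenGW's actual wire format.

```python
# Sketch of coupling an SDF description with MQTT publishing. Uses the
# paho-mqtt 1.x constructor; paho-mqtt 2.x additionally takes a
# CallbackAPIVersion as the first Client() argument.
import json
import paho.mqtt.client as mqtt

# Minimal SDF model for a temperature sensor (one observable property),
# following the draft's sdfObject/sdfProperty structure.
SDF_MODEL = {
    "info": {"title": "Example temperature sensor"},
    "namespace": {"ex": "https://example.com/models"},
    "sdfObject": {
        "TemperatureSensor": {
            "sdfProperty": {
                "temperature": {"type": "number", "unit": "Cel"}
            }
        }
    },
}

def publish_reading(broker: str = "localhost", reading: float = 21.5) -> None:
    client = mqtt.Client()
    client.connect(broker, 1883)
    # Assumed topic convention: <object>/<property> derived from the model.
    client.publish("TemperatureSensor/temperature",
                   json.dumps({"value": reading}))
    # Publishing the model itself retained lets late subscribers discover
    # the device's semantics.
    client.publish("TemperatureSensor/sdf", json.dumps(SDF_MODEL), retain=True)
    client.disconnect()
```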
Multi-modal AI Systems for Human and Animal Pose Estimation in Challenging Conditions
2023 IEEE International Conference on Smart Computing (SMARTCOMP) Pub Date: 2023-06-01 DOI: 10.1109/SMARTCOMP58114.2023.00060
Qianyi Deng
Abstract: This paper explores the development of multi-modal AI systems for human and animal pose estimation in challenging conditions. Existing single-modality approaches struggle in scenarios such as emergency response and wildlife observation due to factors like smoke, low light, obstacles, and long-distance observation. To address these challenges, this research proposes integrating multiple sensor modalities, leveraging the strengths of different sensors to enhance the accuracy and robustness of pose estimation.
Citations: 0
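One simple instance of the proposed multi-modal integration is confidence-weighted late fusion of per-joint keypoint estimates from two sensors, sketched below; the abstract proposes the research direction rather than this specific algorithm, so treat it purely as an illustration.

```python
# Illustrative late fusion of keypoints from two modalities: weight each
# joint by the estimator's confidence so the more reliable sensor dominates
# (e.g., thermal in smoke, RGB in daylight).
import numpy as np

def fuse_keypoints(kp_a, conf_a, kp_b, conf_b, eps=1e-6):
    """kp_*: (J, 2) keypoint coordinates; conf_*: (J,) confidences in [0, 1]."""
    w_a = conf_a[:, None]
    w_b = conf_b[:, None]
    fused = (w_a * kp_a + w_b * kp_b) / (w_a + w_b + eps)
    return fused, np.maximum(conf_a, conf_b)

# Example: RGB sees the first joint well; thermal is stronger on the two
# joints RGB finds occluded.
kp_rgb = np.array([[10.0, 20.0], [15.0, 25.0], [30.0, 40.0]])
kp_thermal = np.array([[11.0, 21.0], [14.0, 24.0], [29.0, 39.0]])
conf_rgb = np.array([0.9, 0.2, 0.1])
conf_thermal = np.array([0.4, 0.8, 0.9])
print(fuse_keypoints(kp_rgb, conf_rgb, kp_thermal, conf_thermal))
```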
BITS 2023 Welcome Message from General Chairs and TPC Chairs
2023 IEEE International Conference on Smart Computing (SMARTCOMP) Pub Date: 2023-06-01 DOI: 10.1109/smartcomp58114.2023.00010
Citations: 0
Vision Transformer-based Real-Time Camouflaged Object Detection System at Edge
2023 IEEE International Conference on Smart Computing (SMARTCOMP) Pub Date: 2023-06-01 DOI: 10.1109/SMARTCOMP58114.2023.00029
Rohan Putatunda, Azim Khan, A. Gangopadhyay, Jianwu Wang, Carl E. Busart, R. Erbacher
Abstract: Camouflaged object detection is a challenging task in computer vision that involves identifying objects that are intentionally or unintentionally hidden in their surrounding environment. Vision transformer mechanisms play a critical role in improving the performance of deep-learning models by focusing on the most relevant features, which helps object detection under camouflaged conditions. In this paper, we utilized a vision transformer (VT) in two phases: (a) integrating the VT with a deep-learning architecture for efficient monocular depth-map generation for camouflaged objects, and (b) embedding the VT in a multiclass object detection model with multimodal feature input (RGB with RGB-D), which increases the visual cues and provides more representational information to the model for performance enhancement. Additionally, we performed an ablation study to understand the role of the vision transformer in camouflaged object detection and applied Grad-CAM on top of the model to visualize the performance improvement achieved by embedding the VT in the model architecture. We deployed the model on resource-constrained edge devices for real-time object detection to realistically test the performance of the trained model.
Citations: 0
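One common way to give a ViT the multimodal RGB-with-depth input the paper describes is early fusion at the patch embedding, sketched below in PyTorch. This variant is an illustrative assumption, not necessarily the authors' exact architecture.

```python
# Sketch of early RGB-D fusion for a ViT: widen the patch-embedding
# convolution to 4 input channels (RGB + depth).
import torch
import torch.nn as nn

class RGBDPatchEmbed(nn.Module):
    def __init__(self, patch: int = 16, dim: int = 768):
        super().__init__()
        # 4 channels: RGB plus a monocular depth map (itself VT-generated
        # in the paper's phase (a)).
        self.proj = nn.Conv2d(4, dim, kernel_size=patch, stride=patch)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, depth], dim=1)   # (B, 4, H, W)
        x = self.proj(x)                     # (B, dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)  # (B, n_patches, dim)

embed = RGBDPatchEmbed()
rgb = torch.randn(2, 3, 224, 224)
depth = torch.randn(2, 1, 224, 224)
tokens = embed(rgb, depth)
print(tokens.shape)  # torch.Size([2, 196, 768])
# These tokens would then flow into a standard transformer encoder and a
# detection head; Grad-CAM can be applied on top to visualize what the
# model attends to, as in the paper's ablation study.
```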
Keynotes
2023 IEEE International Conference on Smart Computing (SMARTCOMP) Pub Date: 2023-06-01 DOI: 10.1109/smartcomp58114.2023.00008
Citations: 0
Calibrating Real-World City Traffic Simulation Model Using Vehicle Speed Data
2023 IEEE International Conference on Smart Computing (SMARTCOMP) Pub Date: 2023-06-01 DOI: 10.1109/SMARTCOMP58114.2023.00076
Seyedmehdi Khaleghian, H. Neema, Mina Sartipi, Toan V. Tran, Rishav Sen, Abhishek Dubey
Abstract: Large-scale traffic simulations are necessary for the planning, design, and operation of city-scale transportation systems. These simulations enable novel and complex transportation technology and services, such as optimizing traffic control systems, supporting on-demand transit, and redesigning regional transit systems for better energy efficiency and lower emissions. For a city-wide simulation model, big data from multiple sources, such as OpenStreetMap (OSM), traffic surveys, geo-location traces, vehicular traffic data, and transit details, are integrated to create a unique and accurate representation. However, to accurately identify the model structure and obtain reliable simulation results, these traffic simulation models must be thoroughly calibrated and validated against real-world data. This paper presents a novel calibration approach for a city-scale traffic simulation model based on limited real-world speed data. The simulation model runs a realistic microscopic and mesoscopic traffic simulation of Chattanooga, TN (US) for a 24-hour period and includes various transport modes such as transit buses, passenger cars, and trucks. Our approach uses 2160 real-world speed data points, performs a sensitivity analysis of the simulation model with respect to its input parameters, and applies a genetic algorithm to optimize the model for calibration. The experimental results demonstrate the effectiveness of our approach for calibrating large-scale traffic networks using only real-world speed data.
Citations: 1
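The calibration loop the abstract describes, a genetic algorithm searching simulation parameters to minimize error against observed speeds, can be sketched as follows. The simulator stub, fitness metric, and GA hyperparameters are placeholders for the paper's actual Chattanooga model and setup.

```python
# Sketch of GA-based simulation calibration: evolve parameter vectors to
# minimize RMSE between simulated and observed link speeds. The simulator
# is a stub; the 2160-point observed array stands in for the real speed data.
import numpy as np

rng = np.random.default_rng(42)
observed = rng.uniform(20, 60, size=2160)  # placeholder observed speeds

def run_simulation(params: np.ndarray) -> np.ndarray:
    """Stub mapping calibration parameters to simulated link speeds."""
    base = 40 + 10 * np.tanh(params[0]) - 5 * params[1]
    return base + rng.normal(0, 2, size=observed.size)

def rmse(params: np.ndarray) -> float:
    sim = run_simulation(params)
    return float(np.sqrt(np.mean((sim - observed) ** 2)))

def calibrate(pop_size=30, n_params=2, generations=50, mut=0.1):
    pop = rng.normal(size=(pop_size, n_params))
    for _ in range(generations):
        fitness = np.array([rmse(ind) for ind in pop])
        elite = pop[np.argsort(fitness)[: pop_size // 2]]          # selection
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        alpha = rng.uniform(size=(pop_size, 1))
        pop = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # crossover
        pop += mut * rng.normal(size=pop.shape)                    # mutation
    best = min(pop, key=rmse)
    return best, rmse(best)

params, err = calibrate()
print("calibrated params:", params, "RMSE:", round(err, 2))
```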