{"title":"Harpocrates: Privacy-Preserving and Immutable Audit Log for Sensitive Data Operations","authors":"Mohit Bhasi Thazhath, J. Michalak, Thang Hoang","doi":"10.1109/TPS-ISA56441.2022.00036","DOIUrl":"https://doi.org/10.1109/TPS-ISA56441.2022.00036","url":null,"abstract":"The audit log is a crucial component for monitoring fine-grained operations over sensitive data (e.g., personal, health) for security inspection and assurance. Since such data operations can be highly sensitive, it is vital to ensure that the audit log achieves not only validity and immutability, but also confidentiality against active threats, in order to comply with standard data regulations (e.g., HIPAA). Despite these critical needs, state-of-the-art privacy-preserving audit log schemes (e.g., Ghostor (NSDI ’20), Calypso (VLDB ’19)) do not fully achieve a high level of privacy, integrity, and immutability simultaneously; certain information (e.g., user identities) is still leaked in the log. In this paper, we propose Harpocrates, a new privacy-preserving and immutable audit log scheme. Harpocrates permits data store, share, and access operations to be recorded in the audit log without leaking sensitive information (e.g., data identifiers, user identities), while permitting the validity of data operations to be publicly verifiable. Harpocrates makes use of blockchain techniques to achieve immutability and avoid a single point of failure, while cryptographic zero-knowledge proofs are harnessed for confidentiality and public verifiability. We analyze the security of our proposed technique and prove that it achieves non-malleability and indistinguishability. We fully implemented Harpocrates and evaluated its performance on a real blockchain system (i.e., Hyperledger Fabric) deployed on a commodity platform (i.e., Amazon EC2). Experimental results demonstrate that Harpocrates is highly scalable and achieves practical performance.","PeriodicalId":427887,"journal":{"name":"2022 IEEE 4th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130079482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks","authors":"Naoya Tezuka, H. Ochiai, Yuwei Sun, H. Esaki","doi":"10.1109/TPS-ISA56441.2022.00030","DOIUrl":"https://doi.org/10.1109/TPS-ISA56441.2022.00030","url":null,"abstract":"Wireless ad hoc federated learning (WAFL) is a fully decentralized collaborative machine learning framework organized by opportunistically encountered mobile nodes. Compared to conventional federated learning, WAFL performs model training by weakly synchronizing model parameters with encountered nodes, which gives it great resilience to a poisoned model injected by an attacker. In this paper, we provide a theoretical analysis of WAFL’s resilience against model poisoning attacks by formulating the force balance between the poisoned model and the legitimate model. Our experiments confirmed that nodes that directly encountered the attacker were partially compromised by the poisoned model, but the other nodes showed great resilience. More importantly, after the attacker left the network, all nodes eventually arrived at stronger model parameters that incorporated the poisoned model. Most of the attack-experienced cases achieved higher accuracy than the no-attack-experienced cases.","PeriodicalId":427887,"journal":{"name":"2022 IEEE 4th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115794933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Secure and Efficient Privacy-preserving Authentication Scheme using Cuckoo Filter in Remote Patient Monitoring Network","authors":"Shafika Showkat Moni, Deepti Gupta","doi":"10.1109/TPS-ISA56441.2022.00034","DOIUrl":"https://doi.org/10.1109/TPS-ISA56441.2022.00034","url":null,"abstract":"With the ubiquitous advancement in smart medical devices and systems, the potential of the Remote Patient Monitoring (RPM) network is evolving in modern healthcare systems. Through the RPM network, medical professionals (doctors, nurses, or medical experts) can access vitals and sensitive physiological information about patients and provide proper treatment to improve their quality of life. However, the wireless nature of communication in the RPM network makes it challenging to design an efficient mechanism for secure communication. Many authentication schemes have been proposed in recent years to ensure the security of the RPM network. Pseudonyms, digital signatures, and Authenticated Key Exchange (AKE) protocols are used in the Internet of Medical Things (IoMT) to develop secure authorization and privacy-preserving communication. However, traditional authentication protocols face overhead challenges due to maintaining a large set of key pairs or pseudonyms on the hospital cloud server. In this work, we identify this research gap and propose a novel secure and efficient privacy-preserving authentication scheme using cuckoo filters for the RPM network. The use of cuckoo filters in our proposed scheme provides an efficient means of mutual anonymous authentication and shared secret key establishment between medical professionals and patients. Moreover, we identify misbehaving sensor nodes using a correlation-based anomaly detection model to establish secure communication. The security analysis and formal security validation using the SPAN and AVISPA tools show the robustness of our proposed scheme against message modification attacks, replay attacks, and man-in-the-middle attacks.","PeriodicalId":427887,"journal":{"name":"2022 IEEE 4th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117139006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic defenses and the transferability of adversarial examples","authors":"Sam Thomas, Farnoosh Koleini, Nasseh Tabrizi","doi":"10.1109/TPS-ISA56441.2022.00041","DOIUrl":"https://doi.org/10.1109/TPS-ISA56441.2022.00041","url":null,"abstract":"Artificial learners are generally open to adversarial attacks, and the field of adversarial machine learning studies machine learning systems operating in adversarial environments. In fact, machine learning systems can be, and frequently are, trained to produce adversarial inputs against such a learner. Although one can take measures to protect a machine learning system, the protection is neither complete nor guaranteed to last. This remains an open issue due to the transferability of adversarial examples. The main goal of this study is to examine the effectiveness of black-box attacks on a dynamic model. This study investigates the currently intractable problem of transferable adversarial examples, as well as a little-explored approach that could provide a solution, by implementing the Fast Model-based Online Manifold Regularization (FMOMR) algorithm, a recently published algorithm that fits the needs of our experiment.","PeriodicalId":427887,"journal":{"name":"2022 IEEE 4th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129611556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}