{"title":"Requirements Traceability Ontology to Support Requirements Management","authors":"A. Wibowo, Joseph G. Davis","doi":"10.1145/3373017.3373038","DOIUrl":"https://doi.org/10.1145/3373017.3373038","url":null,"abstract":"Requirements management (RM) is an important phase in software requirements engineering that addresses changes in the requirements over time. Any effective ontology of requirements management needs to support the capturing of all the artefacts at every stage of the development life cycle along with a flexible granularity that fits the chosen development framework and project characteristics. One of the artefact types captured in the ontology should be the requirement to support the impact analysis in the requirements change management. We report on the Requirements Traceability Ontology (RTOnto) which is designed to support a flexible trace granularity by separating the ontology into three layers. The second and third layers of the ontology can be adapted to a specific development framework. Artefacts in the ontology are classified into eight sub-classes to enable capturing artefacts at every development life cycle.","PeriodicalId":297760,"journal":{"name":"Proceedings of the Australasian Computer Science Week Multiconference","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123359184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design Guidelines for Effective Occupation-Based Personal Health Records","authors":"M. Fernando, C. Fidge, T. Sahama","doi":"10.1145/3373017.3373042","DOIUrl":"https://doi.org/10.1145/3373017.3373042","url":null,"abstract":"Personal Health Record (PHR) systems in occupational health are an emerging technology for employee health and well-being promotions in organizations. Since PHRs increasingly experience low rates of adoption, practitioners and developers are searching for effective design guidelines to match current occupational health and well-being trends and user needs. We examined existing literature and conducted focus group discussions with a group of employees (n=26) to identify typically expected PHR features. We evaluated those PHR features through a questionnaire survey with employees (n=360) to identify features that make PHRs more user-friendly and motivate their use in an occupational environment. We found that the ability of easy data entry to PHR systems is the most important feature to motivate the use of PHRs in a work environment. Additionally, clear guidance to overcome health risks, displaying the current status of overall health and well-being information, and displaying the most current information in the main interface were identified as relatively important features to make PHRs more usable in occupational environments. Based on those features prioritization, we designed and implemented an occupation-based PHR prototype. Finally the PHR prototype was evaluated for usability through practical trials with real-world employees (n=42) to identify design guidelines for occupation-based PHRs in the light of standard heuristics. As findings, we discovered that designers of occupation-based PHRs should pay more attention to pleasurable and respectful interaction between users and the system, enhancing user skills and knowledge, user control and freedom, visibility of system status, and privacy.","PeriodicalId":297760,"journal":{"name":"Proceedings of the Australasian Computer Science Week Multiconference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129382721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SGX-based Users Matching with Privacy Protection","authors":"Junwei Luo, Xuechao Yang, X. Yi","doi":"10.1145/3373017.3373021","DOIUrl":"https://doi.org/10.1145/3373017.3373021","url":null,"abstract":"For users who rely heavily on social networks for entertaining or making friends, sensitive information such as ages, incomes and addresses will be stored in a database without protection. While many companies try their best to protect user privacy, data breaches still happen, resulting in the loss of millions or billions of dollars and the faith of their customers. Therefore, we propose a solution that guarantees the confidentiality and integrity of information while preserving the ability to perform matching over encrypted values. Our solution is built on homomorphic encryption with secure hardware enclaves such as Intel SGX. Our solution resolves challenges such as performing user profile matching on encrypted values without revealing any information to anyone. With the help of multiple servers, user privacy can be protected as long as at least one server is honest and the guarantee of secure hardware makes the secret unlikely to be revealed. Furthermore, a prototype of our system is implemented to measure its performance. The performance analysis and security analysis show the feasibility of our proposed protocols.","PeriodicalId":297760,"journal":{"name":"Proceedings of the Australasian Computer Science Week Multiconference","volume":"309 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129687915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Analysis of Phishing Blacklists: Google Safe Browsing, OpenPhish, and PhishTank","authors":"Simon Bell, P. Komisarczuk","doi":"10.1145/3373017.3373020","DOIUrl":"https://doi.org/10.1145/3373017.3373020","url":null,"abstract":"Blacklists play a vital role in protecting internet users against phishing attacks. The effectiveness of blacklists depends on their size, scope, update speed and frequency, and accuracy - among other characteristics. In this paper we present a measurement study that analyses 3 key phishing blacklists: Google Safe Browsing (GSB), OpenPhish (OP), and PhishTank (PT). We investigate the uptake, dropout, typical lifetimes, and overlap of URLs in these blacklists. During our 75-day measurement period we observe that GSB contains, on average, 1.6 million URLs, compared to 12,433 in PT and 3,861 in OP. We see that OP removes a significant proportion of its URLs after 5 and 7 days, with none remaining after 21 days - potentially limiting the blacklist’s effectiveness. We observe fewer URLs residing in all 3 blacklists as time-since-blacklisted increases – suggesting that phishing URLs are often short-lived. None of the 3 blacklists enforce a one-time-only URL policy - therefore protecting users against reoffending phishing websites. Across all 3 blacklists, we detect a significant number of URLs that reappear within 1 day of removal – perhaps suggesting premature removal or re-emerging threats. Finally, we discover 11,603 unique URLs residing in both PT and OP – a 12% overlap. Despite its smaller average size, OP detected over 90% of these overlapping URLs before PT did.","PeriodicalId":297760,"journal":{"name":"Proceedings of the Australasian Computer Science Week Multiconference","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127509898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-driven Optimisation of in vivo Radioactive Source-tracking for Real-time Cancer Radiotherapy Treatment Verification","authors":"Negin Foroughimehr, Ali Yavari, M. Hanlon, J. Wallace, Ryan Smith, R. Franich","doi":"10.1145/3373017.3373046","DOIUrl":"https://doi.org/10.1145/3373017.3373046","url":null,"abstract":"Precise treatment delivery in high dose rate radiation therapy for prostate cancer requires comprehensive treatment verification to ensure effective tumour control and patient safety. Our group has developed a system that tracks the radiation source as it moves inside the patient’s tumour. The major aim of this study was to optimise the source-tracking algorithm to improve accuracy without excessive cost in processing time to enable real-time analysis. The source is tracked by analysing the distribution of radiation from the brachytherapy source (Iridium-192) that exits the patient’s skin and reaches a Flat Panel Detector (FPD) mounted in the couch beneath the patient. The radiation distribution in this 2-dimensional ‘image’ is analysed to estimate the source position. In this study, measurements were conducted in a ‘phantom’ - an artificial surrogate for the patient - in which the ground-truth positions were known by an independent means that cannot be achieved in a live patient. Various algorithms were examined for accuracy, efficiency, and the influence of asymmetric radiation scattering caused by inhomogeneous media interfaces e.g. air/tissue boundaries. The most accurate algorithm was identified, and some tunable parameters were able to be optimised for accuracy. The comparison of measured source positions revealed some skewing of measured positions due to asymmetric scattering existing in the proximity of the phantom edge. The algorithm with the lowest sensitivity to asymmetric scattering was identified. Computation times were compared for suitability in the clinical environment where evaluation at up to 30 frames per second may be required. The optimised algorithms could improve the quality assurance value of source-position tracking in high dose rate brachytherapy.","PeriodicalId":297760,"journal":{"name":"Proceedings of the Australasian Computer Science Week Multiconference","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121344089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Blockchain-based trust model for crowd environments.","authors":"Mo Nguyen, Quan Bai, Jian Yu","doi":"10.1145/3373017.3373037","DOIUrl":"https://doi.org/10.1145/3373017.3373037","url":null,"abstract":"Nowadays, social media has become an important platform for people to share information and opinions. More and more businesses owners and individual users rely on information shared on social media and contributed by the crowd for product promotion, decision making, etc. However, it is difficult to evaluate the trustworthiness of such information effectively as the open environment allows all people with different backgrounds and expertise to contribute. Furthermore, it gives chances for malicious users or astroturfers who pursue profits by giving fake reviews or plausible answers, and those who want to build up their business’s reputation without improving quality of good or service. For solving this challenging problem, the paper considers trust as a key element and define it in the expert-based context. Then we propose a Blockchain-based trust model for information sharing in crowd environments. The proposed model uses a weighted consensus mechanism to infer the trustworthiness of shared information basing on reviews from the crowd and defines the accuracy of an agent as the ratio of his winning reviews to his total reviews. The model can reward agents effectively equivalent to his winning reviews. Experimental results show that the proposed model has higher accuracy in defining the trustworthiness of information and obtains better Collective Intelligence than existing models.","PeriodicalId":297760,"journal":{"name":"Proceedings of the Australasian Computer Science Week Multiconference","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114241204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation of Locally Relevant Subspace in High-dimensional Data","authors":"Srikanth Thudumu, P. Branch, Jiong Jin, Jugdutt Singh","doi":"10.1145/3373017.3373032","DOIUrl":"https://doi.org/10.1145/3373017.3373032","url":null,"abstract":"High-dimensional data is becoming more and more available due to the advent of big data and IoT. Having more dimensions makes data analysis cumbersome increasing the sparsity of data points due to the problem called “curse of dimensionality“. To address this problem, global dimensionality reduction techniques are used; however, these techniques are ineffective in revealing hidden outliers from the high-dimensional space. This is due to the behaviour of outliers being hidden in the subspace where they belong; hence, a locally relevant subspace is needed to reveal the hidden outliers. In this paper, we present a technique that identifies a locally relevant subspace and associated low-dimensional subspaces by deriving a final correlation score. To verify the effectiveness of the technique in determining the generalised locally relevant subspace, we evaluate the results with a benchmark data set. Our comparative analysis shows that the technique derived the locally relevant subspace that consists of relevant dimensions presented in benchmark data set.","PeriodicalId":297760,"journal":{"name":"Proceedings of the Australasian Computer Science Week Multiconference","volume":"526 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133472348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enabling Near Real-time Surveillance of Influenza-like Illness","authors":"Mehnaz Adnan, Ben Waite, Richard Dean, C. Newbern, T. Wood, Raewyn Campbell, Nooriyan Poonawala-Lohani","doi":"10.1145/3373017.3373048","DOIUrl":"https://doi.org/10.1145/3373017.3373048","url":null,"abstract":"Traditional public health surveillance systems would benefit from near real-time data integration and visualization that combines information from traditional and internet sources. In this paper, we describe a prototype system implemented to better automate the process of data collection, analysis and visualization of Influenza-Like Illness surveillance data. This approach enables timelier responses to abnormal events such as clusters, outbreaks and trends.","PeriodicalId":297760,"journal":{"name":"Proceedings of the Australasian Computer Science Week Multiconference","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114458776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Defining the Health Information Technology discipline: results from the 2018 Australian and New Zealand censuses","authors":"K. Butler-Henderson, K. Gray, Karen Day, R. Grainger","doi":"10.1145/3373017.3373043","DOIUrl":"https://doi.org/10.1145/3373017.3373043","url":null,"abstract":"The health information technology specialist group is a hidden workforce supporting the information technology and system needs in the healthcare sector. For the first time, this paper explores the demographic, educational, and occupational characteristics of this specialist group in Australia and New Zealand. A total of 227 responses from the 2018 Australian Health Information Workforce Census and the 2018 New Zealand Health Information Workforce Census were analysed. The analysis reports two-third of respondents were born in Australia or New Zealand, with the majority (98.3%) citizens or permanent residents. Most of this specialist group is male (58.1%) aged 45 year or older (53.3%), and nearly half do not possess a formal qualification in this field (47.6%). Most roles are permanent, full-time position in the public healthcare system, such as hospitals or state/federal departments, or in the health technology industry. Roles in this specialist group are still emerging, with respondents working in the field on average 11.9 years, and in their current role 5.6 years. There is an opportunity to build capacity in this specialist group through workforce planning, in particular developing specialist qualification and career pathways. This will be necessary to meet the growing demand for specialist in these roles to support digital transformation and innovation.","PeriodicalId":297760,"journal":{"name":"Proceedings of the Australasian Computer Science Week Multiconference","volume":"26 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125683780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computationally Efficient Epileptic Seizure Prediction based on Extremely Randomised Trees","authors":"S. Wong, L. Kuhlmann","doi":"10.1145/3373017.3373058","DOIUrl":"https://doi.org/10.1145/3373017.3373058","url":null,"abstract":"Epilepsy is a neurological disorder that affects close to 60 million of the world's population and is commonly categorized by having unpredictable seizure episodes. Over the years, in attempt to predict epileptic seizures in patients using electroencephalographic (EEG) data, several machine learning based models and algorithms have been developed but many of them present shortcomings such as having computationally inefficient algorithms, limited EEG data and there is no one size fits all patients model. Here a generalised seizure prediction algorithm based on extremely randomised tree classification is presented that can be applied to all patients with a minimal number of features to provide increased computational efficiency and comparable performance score relative to a more complicated state-of-the-art algorithm. The new algorithm achieves a 3.25 factor speed up in computation time while achieving an average Area under the curve, AUC of 0.74 relative to 0.72 for the state-of-the-art algorithm. The algorithm is designed to be implemented on small implantable/wearable EEG devices with little computing power, in order to preserve battery life and help make seizure prediction a clinically viable option for patients with epilepsy.","PeriodicalId":297760,"journal":{"name":"Proceedings of the Australasian Computer Science Week Multiconference","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126468857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}