ARIS2 - Advanced Research on Information Systems Security
https://aris-journal.com/aris/index.php/journal

Welcome, colleague.

ARIS² - Advanced Research on Information Systems Security, an International Journal, focuses on original research and practice-driven applications relevant to Information Security and Data Protection. It is published by the Association for Industry Sciences and Computer Sciences Innovation, based in Porto, Portugal, edited by Prof. Dr. Nuno Mateus-Coelho, and supported by COPELABS - Universidade Lusófona.

ARIS² provides a common link between a vibrant scientific and research community and industry professionals, offering a clear view of modern problems and challenges in information security and identifying promising scientific and best-practice solutions.

Articles are published immediately after submission, review, and camera-ready preparation. All articles are included in issues, which are published biannually in a volume.

ARIS² issues offer a balance between original research and innovative industrial approaches by internationally renowned information security experts and researchers.

We have the pleasure of extending a warm welcome to everyone planning to submit to ARIS² - Advanced Research on Information Systems Security.

Online ISSN: 2795-4560
Print ISSN: 2795-4609

Best Regards,
Editorial Team

Contacts: secretariat@aris-journal.com (Prof. Dr. Nuno Mateus-Coelho); andre.costa@aris-journal.com (Dr. André Costa)


AI-Driven Threats in Social Learning Environments - A Multivocal Literature Review
https://aris-journal.com/aris/index.php/journal/article/view/60

In recent years, artificial intelligence (AI) has become increasingly important in improving educational processes by facilitating personalized learning and enhancing collaborative platforms. However, the same technologies that offer these advantages can also enable sophisticated cyber threats. This multivocal literature review (MLR) explores four major areas of concern in social learning environments: (1) phishing and social engineering, (2) AI-generated misinformation, (3) deepfake media, and (4) AI-driven detection systems. Gathering insights from recent academic articles, industry reports, and news and blog analyses, the study demonstrates AI's dual role as both a channel for educational innovation and a tool for malicious exploitation. Findings indicate that AI-powered attacks not only erode trust and academic integrity but also target the inherent vulnerabilities of collaborative platforms, including Massive Open Online Courses (MOOCs). Additionally, while the academic literature focuses on theoretical solutions such as explainable AI (XAI) and advanced machine learning detection, the gray literature highlights practical challenges such as regulatory gaps, limited funding, and insufficient user training. Blockchain-based audit trails and robust user-awareness campaigns also emerge as critical strategies for enhancing security. This review highlights the importance of interdisciplinary collaboration among policymakers, researchers, educators, and technology developers to ensure that AI's benefits are not undermined by its misuse. By adopting adaptive security policies, fostering digital literacy, and integrating transparent detection tools, stakeholders can strengthen the resilience of social learning environments against rapidly evolving AI-driven threats.

Pedro Almeida Perdigão, Nuno Mateus Coelho, José Cascais Brás
Copyright (c) 2025 Pedro de Almeida Perdigão, Nuno Mateus Coelho, José Cascais Brás
https://creativecommons.org/licenses/by-nc-nd/4.0
Published: 16 May 2025


Comprehensive Analysis for Cybersecurity and Interoperability in Portuguese Healthcare Systems Under NIS2
https://aris-journal.com/aris/index.php/journal/article/view/59

This article presents a comprehensive analysis of cybersecurity challenges and interoperability requirements in Portuguese healthcare systems within the context of the Network and Information Security 2 (NIS2) Directive. Drawing on data and recommendations from the European Union Agency for Cybersecurity (ENISA), the National Cybersecurity Center (CNCS), the National Data Protection Commission (CNPD), and the National Health Service (SNS), this research examines the current state of healthcare information systems in Portugal. It evaluates compliance with NIS2 requirements and proposes a framework for enhancing both security and interoperability. The research presents a set of essential practices for safeguarding patient data, emphasizing the importance of rigorous monitoring, specialized staff training, and continuous updates of security systems.

Emanuel Gonçalves
Copyright (c) 2025 Emanuel Gonçalves
https://creativecommons.org/licenses/by-nc-nd/4.0
Published: 16 May 2025


Applying Zero Trust to Kubernetes Clusters
https://aris-journal.com/aris/index.php/journal/article/view/58

The growing adoption of Kubernetes as the foundation for cloud-native architectures has underscored the need for robust and scalable security measures. Traditional security models often fail to address the dynamic and distributed nature of Kubernetes environments, leaving them vulnerable to threats such as lateral movement, privilege escalation, and misconfigured access controls. This paper explores the application of Zero Trust principles in Kubernetes clusters, synthesizing insights from peer-reviewed and technical studies to evaluate the effectiveness of current tools and practices. The research methodology involved a systematic literature review, identifying key security vulnerabilities, tools for Zero Trust implementation, and their impact on performance, scalability, and manageability. The findings reveal that while Zero Trust significantly enhances security, challenges remain around integration, scalability in multi-cloud deployments, and performance trade-offs. A roadmap is proposed to address these challenges, integrating tools such as Istio, Kyverno, and Falco into a cohesive Zero Trust framework.

Rui Filipe dos Santos
Copyright (c) 2025 Rui Filipe dos Santos
https://creativecommons.org/licenses/by-nc-nd/4.0
Published: 16 May 2025
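As a concrete illustration of the kind of baseline check a Zero Trust posture implies in Kubernetes, the sketch below is a hypothetical editorial example, not taken from the article (whose roadmap instead builds on Istio for mutual TLS, Kyverno for policy-as-code, and Falco for runtime detection). Assuming a cluster reachable through the local kubeconfig and the official `kubernetes` Python client, it flags namespaces that have no namespace-wide ingress NetworkPolicy, a common precondition for a default-deny, "never trust by default" stance on east-west traffic.

```python
# Illustrative sketch only: flag namespaces with no namespace-wide ingress
# NetworkPolicy (a common precondition for a default-deny Zero Trust posture).
# Assumes cluster access via the local kubeconfig and the official
# `kubernetes` Python client (pip install kubernetes).
from kubernetes import client, config


def selects_all_pods(policy) -> bool:
    """A NetworkPolicy with an empty podSelector applies to every pod in the namespace."""
    selector = policy.spec.pod_selector
    return not (selector.match_labels or selector.match_expressions)


def audit_namespaces() -> None:
    config.load_kube_config()          # or config.load_incluster_config() inside a pod
    core = client.CoreV1Api()
    net = client.NetworkingV1Api()

    for ns in core.list_namespace().items:
        name = ns.metadata.name
        policies = net.list_namespaced_network_policy(name).items
        covered = any(
            selects_all_pods(p) and "Ingress" in (p.spec.policy_types or [])
            for p in policies
        )
        status = "ok" if covered else "no namespace-wide ingress policy"
        print(f"{name}: {status}")


if __name__ == "__main__":
    audit_namespaces()
```

A check like this only covers one narrow slice of Zero Trust; identity-based mTLS, admission policies, and runtime detection (the roles the paper assigns to Istio, Kyverno, and Falco) address the rest.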
Assessing Domain Specific LLMs for CWEs Detections
https://aris-journal.com/aris/index.php/journal/article/view/53

In recent years, Large Language Models (LLMs) have evolved significantly, branching into many fields. From science & engineering to arts & literature, the range of applications has become virtually limitless. Their ability to assimilate and comprehend contextual writing is remarkable, and it extends to software code. Many novel studies have demonstrated cutting-edge experiments with LLMs for software testing and security, planting the seed for future research on using LLMs to detect weaknesses, vulnerabilities, and malicious code in even the largest repositories. However, such explorations remain limited, especially for domain-specific LLMs: models trained specifically for software security are still largely unexplored, and their behavior is undocumented in the literature. This paper explores this new area by testing and comparing the accuracy of these models against general-domain models, assessing their ability to recognize the exact vulnerability, and conducting an observational study of their behavior in response to precisely crafted prompts. In our experiments, we considered GPT-3.5 from OpenAI and Gemini Pro from Google. In terms of recall, Gemini Pro outperformed GPT-3.5 by a large margin (63.13% vs. 43.56%), showing that Gemini Pro is better at identifying truly vulnerable code, with fewer type II errors. Gemini Pro is also better at pinpointing the correct CWE number among the correctly identified vulnerable cases, with 13.13% accuracy versus GPT-3.5's 10.61%. However, GPT-3.5 is superior in precision and overall accuracy: its precision is 88.89% versus Gemini Pro's 54.35%, indicating that Gemini Pro tends to flag cases as vulnerable more readily, and its overall accuracy is 68.75% versus Gemini Pro's 55.50%.

Mohamed Elatoubi, Xiao Tan
Copyright (c) 2025 Mohamed Elatoubi, Xiao Tan
https://creativecommons.org/licenses/by-nc-nd/4.0
Published: 16 May 2025
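For readers less familiar with the metrics quoted in this abstract, the short Python sketch below shows how precision, recall, and accuracy relate to a binary confusion matrix (vulnerable vs. non-vulnerable code). It is purely illustrative: the counts are placeholders, not the study's data.

```python
# Minimal sketch of the metrics cited above, computed from a binary
# confusion matrix. The example counts are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Confusion:
    tp: int  # vulnerable code correctly flagged
    fp: int  # safe code incorrectly flagged
    tn: int  # safe code correctly passed
    fn: int  # vulnerable code missed (type II error)

    @property
    def precision(self) -> float:
        return self.tp / (self.tp + self.fp)

    @property
    def recall(self) -> float:
        return self.tp / (self.tp + self.fn)

    @property
    def accuracy(self) -> float:
        return (self.tp + self.tn) / (self.tp + self.fp + self.tn + self.fn)


if __name__ == "__main__":
    example = Confusion(tp=63, fp=53, tn=75, fn=37)  # hypothetical counts
    print(f"precision={example.precision:.2%}  "
          f"recall={example.recall:.2%}  accuracy={example.accuracy:.2%}")
```

High recall with low precision (the pattern reported for Gemini Pro) means few misses but many false alarms; the reverse pattern (reported for GPT-3.5) means conservative flagging that misses more truly vulnerable cases.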
Beyond Context: Identifying Individuals from Physiological Signals Across Experiments
https://aris-journal.com/aris/index.php/journal/article/view/54

This study evaluates the feasibility of using ECG and EDA signals for biometric identification in diverse VR contexts. Participants were first assessed in a controlled puzzle-based VR game and later in a dynamic exergame, separated by a two-year temporal gap. The proposed CNN model achieved 98.9% accuracy in the controlled environment, confirming the reliability of physiological signals for biometric identification. However, a 24% performance decline was observed in the dynamic exergame setting, highlighting the critical challenge of contextual dependence in biometric systems. Unlike most existing studies, which examine time spans of no more than a week, this work provides new insights into the impact of long-term variability and task-induced changes on identification performance. The findings underscore the importance of addressing contextual and temporal variability to improve the robustness and adaptability of biometric models.

Pedro Rodrigues
Copyright (c) 2025 Pedro Rodrigues
https://creativecommons.org/licenses/by-nc-nd/4.0
Published: 16 May 2025
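To make the model class concrete, here is a minimal, hypothetical sketch of a 1-D CNN that classifies short two-channel (ECG + EDA) windows into subject identities. The window length, layer sizes, and number of participants are assumptions chosen for illustration, not the architecture used in the study. It assumes TensorFlow 2.x.

```python
# Illustrative 1-D CNN for subject identification from ECG + EDA windows.
# Window length, channel count, and layer sizes are assumptions, not the
# authors' architecture. Requires TensorFlow 2.x.
import tensorflow as tf
from tensorflow.keras import layers

N_SUBJECTS = 20   # hypothetical number of participants
WINDOW = 1000     # samples per signal window (assumption)
CHANNELS = 2      # ECG + EDA


def build_model() -> tf.keras.Model:
    model = tf.keras.Sequential([
        layers.Input(shape=(WINDOW, CHANNELS)),
        layers.Conv1D(32, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(4),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_SUBJECTS, activation="softmax"),  # one class per subject
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


if __name__ == "__main__":
    build_model().summary()
```

The contextual-drift problem described in the abstract would show up here as a train/test mismatch: a model fit on windows from one task (or year) is evaluated on windows recorded under different physical and temporal conditions.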
Exploring Hacktivism: The Role and Impact of Social Media
https://aris-journal.com/aris/index.php/journal/article/view/56

The digital age has witnessed social media emerge as a potent tool for activism, especially hacktivism. Hacktivists leverage cyberattacks to advance political or social agendas while exploiting social media for organization, communication, and amplification. These platforms provide unparalleled reach and anonymity, yet they simultaneously heighten cybersecurity risks for organizations and governments. This paper examines the dual role of social media in enabling hacktivism and exacerbating cybersecurity challenges, offering insights into the intricate relationship between digital activism and modern cybersecurity threats. It delves into the transformative influence of social media on hacktivism, highlighting both its potential for empowering activists and the significant vulnerabilities it creates. By analyzing case studies and existing literature, the paper underscores the ethical and legal dilemmas associated with hacktivism, as well as the critical need for enhanced cybersecurity measures and international cooperation. Ultimately, the study aims to provide a nuanced understanding of the benefits and risks posed by social media in the context of hacktivism, offering recommendations to address these complex challenges.

Maria Costa
Copyright (c) 2025 Maria Costa
https://creativecommons.org/licenses/by-nc-nd/4.0
Published: 16 May 2025