AI-Driven Threats in Social Learning Environments - A Multivocal Literature Review
DOI:
https://doi.org/10.56394/aris2.v5i1.60

Keywords:
Artificial Intelligence, social engineering, phishing, deepfake, misinformation

Abstract
In recent years, artificial intelligence (AI) has played an increasingly important role in education, facilitating personalized learning and enhancing collaborative platforms. However, the same technologies that offer these advantages can also enable sophisticated cyber threats. This multivocal literature review (MLR) explores four major areas of concern in social learning environments: (1) phishing and social engineering, (2) AI-generated misinformation, (3) deepfake media, and (4) AI-driven detection systems. Drawing on recent academic articles, industry reports, and news/blog analyses, the study demonstrates AI’s dual function as both a channel for educational innovation and a tool for malicious exploitation. Findings indicate that AI-powered attacks not only erode trust and academic integrity but also exploit the inherent vulnerabilities of collaborative platforms, including Massive Open Online Courses (MOOCs). Additionally, while the academic literature focuses on theoretical solutions such as explainable AI (XAI) and advanced machine learning detection, the gray literature highlights practical challenges such as regulatory gaps, limited funding, and insufficient user training. Blockchain-based audit trails and robust user-awareness campaigns also emerge as critical strategies for enhancing security. This review underscores the importance of interdisciplinary collaboration among policymakers, researchers, educators, and technology developers to ensure that AI’s benefits are not overshadowed by its misuse. By adopting adaptive security policies, fostering digital literacy, and integrating transparent detection tools, stakeholders can strengthen the resilience of social learning environments against rapidly evolving AI-driven threats.
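To make the audit-trail idea concrete, the minimal Python sketch below illustrates how hash-chained (blockchain-style) logging can make tampering with platform activity records detectable. The AuditTrail class, the event names, and the actors are illustrative assumptions for this sketch, not an implementation from the reviewed work.

```python
import hashlib
import json
import time


def hash_entry(entry: dict) -> str:
    """Return a SHA-256 digest of a canonically serialized log entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


class AuditTrail:
    """Append-only log in which each record chains to the previous record's hash.

    Illustrative sketch only: a real deployment would distribute or anchor the
    chain so that an attacker cannot simply rewrite every subsequent record.
    """

    def __init__(self):
        self.chain = []

    def append(self, event: str, actor: str) -> dict:
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {
            "timestamp": time.time(),
            "actor": actor,
            "event": event,
            "prev_hash": prev_hash,
        }
        record["hash"] = hash_entry({k: v for k, v in record.items() if k != "hash"})
        self.chain.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; any edited or reordered record breaks the chain."""
        prev_hash = "0" * 64
        for record in self.chain:
            body = {k: v for k, v in record.items() if k != "hash"}
            if record["prev_hash"] != prev_hash or record["hash"] != hash_entry(body):
                return False
            prev_hash = record["hash"]
        return True


if __name__ == "__main__":
    trail = AuditTrail()
    trail.append("forum_post_created", actor="student_42")
    trail.append("quiz_answer_edited", actor="student_42")
    print("chain intact:", trail.verify())          # True
    trail.chain[0]["event"] = "forum_post_deleted"  # simulate tampering
    print("chain intact:", trail.verify())          # False
```

The design choice here is simply that each record commits to its predecessor, so retroactive edits to learning-platform activity logs become detectable during verification.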
License
Copyright (c) 2025 Pedro de Almeida Perdigão, Nuno Mateus Coelho, José Cascais Brás

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.