THE INFLUENCE OF AI-GENERATED CONTENT ON TRUST AND CREDIBILITY WITHIN SPECIALIZED ONLINE COMMUNITIES: A BRIEF REVIEW OF A PROPOSED CONCEPTUAL FRAMEWORK

Authors

  • Mostafa Essam Ahmed Eissa, Freelance Independent Researcher and Consultant, India

DOI:

https://doi.org/10.29121/shodhai.v2.i2.2025.40

Keywords:

AI-Generated Content, Specialized Online Communities, Trust, Credibility, Digital Media, Online Interaction, Community Behavior

Abstract

The increasing prevalence of Artificial Intelligence (AI) in content creation marks a notable shift in the digital communication landscape. While its broader effects on mass media platforms have been extensively discussed, its specific consequences within specialized online communities remain less explored. These communities, frequently founded on shared interests, mutual confidence, and perceived genuineness, are particularly susceptible to changes in the origin and trustworthiness of content. This paper addresses three questions: (1) how AI-generated content affects perceptions of credibility, (2) which verification methods communities adopt in response, and (3) what consequences follow for trust dynamics. A conceptual framework is proposed to investigate the potential impact of AI-produced content on the dynamics of trust and credibility within these focused digital environments. Drawing on existing scholarship in media studies, online community behavior, and source credibility, the paper develops a theoretical model and outlines a potential research strategy for examining how the presence, identification, and interpretation of AI-authored content might alter member interactions, information-verification processes, and the overall cohesion of the community. The hypothesized outcome is that the subtle integration of AI content could diminish perceived authenticity, complicate established indicators of trust, and potentially lead to the fragmentation or decline of communities that depend on authentic human connection and collective expertise. The article concludes by considering the implications for community managers, platform designers, and members, stressing the importance of greater transparency and digital literacy in navigating the evolving digital media landscape.

Published

2025-08-12