Implementation of Generative Artificial Intelligence in Sociological Research
https://doi.org/10.32603/2412-8562-2025-11-1-52-70
Abstract
Introduction. This article discusses the use of generative artificial intelligence (GAI) in sociological research. The relevance of the topic is determined by the increasing interest in applying new technologies to enhance the efficiency and accuracy of research in social sciences. GAI provides new opportunities for data collection, processing, and analysis, which can significantly change traditional approaches in sociology.
Methodology and sources. The research is based on an analysis of available publications and on empirical material gathered during discussions with sociologists who use GAI in their projects. The paper examines methodologies for generating surveys, processing respondents' answers, and analyzing big data with machine learning algorithms. The focus is on specific cases of GAI application in sociological research, as well as on examples of successful projects.
Results and discussion. The results of the study demonstrate that the use of GAI significantly accelerates data processing and improves data quality. New patterns and trends in sociological research have been identified, enabling researchers to draw more accurate and better-justified conclusions. Ethical aspects of using GAI are also discussed, such as issues of confidentiality and algorithmic bias.
Conclusion. Generative artificial intelligence represents a powerful tool capable of transforming sociological research. Despite existing challenges, it opens new horizons for data collection and analysis, fostering a deeper understanding of social processes and phenomena. It is important to continue exploring the possibilities and limitations of GAI for the advancement of sociological science.
About the Authors
V. E. Drach
Russian Federation
Vladimir E. Drach – Cand. Sci. (Engineering, 2005), Docent (2006), Associate Professor at the Department of Information Technologies and Mathematics
94 Plastunskaya str., Sochi 354000
Yu. V. Torkunova
Russian Federation
Yulia V. Torkunova – Dr. Sci. (Pedagogy, 2015), Professor at the Department of Information Technologies and Intelligent Systems; Professor at the Department of Information Technologies and Mathematics
51 Krasnoselskaya str., Kazan 420066
94 Plastunskaya str., Sochi 354000
References
1. “Generative artificial intelligence”, Wikipedia, available at: https://en.wikipedia.org/wiki/Generative_artificial_intelligence (accessed 20.08.2024).
2. Glukhikh, V.A., Eliseev, S.M. and Kirsanova, N.P. (2022), “Artificial Intelligence as a Problem of Modern Sociology”, DISCOURSE, vol. 8, no. 1, pp. 82–93. DOI: 10.32603/2412-8562-2022-8-1-82-93.
3. Ilyichev, V.Yu., Drach, V.E. and Chukaev, K.E. (2023), “Moral and Ethical Problems of Universal Use of Neural Networks”, Refleksiya [Reflection], no. 5, pp. 8–13.
4. Kopyrin, A.S., Vidishcheva, E.V., Kovalenko, V.V. et al. (2023), Tsifrovaya ekonomika i sistemnaya tsifrovaya transformatsiya [Digital Economy and Systemic Digital Transformation], in Kopyrin, A.S. (ed.), RIC FGBOU VO “SSU”, Sochi, RUS.
5. Hajela, D. (2024), “The American paradox of protest: Celebrated and condemned, welcomed and muzzled”, The Associated Press, 05.05.2024, available at: https://apnews.com/article/american-protest-paradox-israel-hamas-war22b1325188e0808db7389c8c3f04c331 (accessed 20.08.2024).
6. Clark, E., August, T., Serrano, S. et al. (2021), “All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text”, Proc. of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Int. Joint Conf. on Natural Language Proc., vol. 1, Stroudsburg, PA: Association for Computational Linguistics, pp. 7282–7296. DOI: 10.48550/arXiv.2107.00061.
7. Wang, Y. and Wang, H. (2024), “A face template: improving the face generation quality of multi stage generative adversarial networks using coarse-grained facial priors”, Multimedia Tools and Applications, vol. 83, no. 7, pp. 21677–21693. DOI: https://doi.org/10.1007/s11042-023-16183-2.
8. Buolamwini, J. (2023), Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Random House, NY, USA.
9. Bianchi, F., Kalluri, P., Durmus, E. et al. (2023), “Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale”, FAccT ’23 Proc. of the 2023 ACM Conf. on Fairness, Accountability, and Transparency, Association for Computing Machinery, NY, USA, pp. 1493–1504. DOI: https://doi.org/10.1145/3593013.3594095.
10. Flores, R.D. and Schachter, A. (2018), “Who Are the ‘Illegals’? The Social Construction of Illegality in the United States”, American Sociological Review, vol. 83, iss. 5, pp. 839–868. DOI: 10.1177/0003122418794635.
11. Bailey, E.R., Wang, D., Soule, S.A. and Rao, H. (2023), “How Tilly’s WUNC Works: Bystander Evaluations of Social Movement Signals Lead to Mobilization”, American J. of Sociology, vol. 128, no. 4, pp. 1206–1262. DOI: https://doi.org/10.1086/723489.
12. Feinberg, M., Willer, R. and Kovacheff, Ch. (2020), “The Activist’s Dilemma: Extreme Protest Actions Reduce Popular Support for Social Movements”, J. of Personality and Social Psychology, vol. 119, iss. 5, pp. 1086–1111. DOI: 10.1037/pspi0000230.
13. Nelson, L.K. (2020), “Computational Grounded Theory: A Methodological Framework”, Sociological Methods & Research, vol. 49, iss. 1, pp. 3–42. DOI: https://doi.org/10.1177/0049124117729703.
14. “Institutional review board”, Wikipedia, available at: https://en.wikipedia.org/wiki/Institutional_review_board (accessed 20.08.2024).
15. Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, Sh. (2021), “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, FAccT ’21: Proc. of the 2021 ACM Conf. on Fairness, Accountability, and Transparency, Association for Computing Machinery, NY, USA, pp. 610–623. DOI: https://doi.org/10.1145/3442188.3445922.
16. Abid, A., Farooqi, M. and Zou, J. (2021), “Persistent Anti-Muslim Bias in Large Language Models”, Proc. of the 2021 AAAI/ACM Conf. on AI, Ethics, and Society, Virtual Event, Association for Computing Machinery, NY, USA, pp. 298–306. DOI: 10.48550/arXiv.2101.05783.
17. Argyle, L.P., Busby, E.C., Fulda, N. et al. (2023), “Out of One, Many: Using Language Models to Simulate Human Samples”, Political Analysis, vol. 31, iss. 3, pp. 337–351. DOI: https://doi.org/10.1017/pan.2023.2.
For citation:
Drach V.E., Torkunova Yu.V. Implementation of Generative Artificial Intelligence in Sociological Research. Discourse. 2025;11(1):52-70. (In Russ.) https://doi.org/10.32603/2412-8562-2025-11-1-52-70