Discourse

Intelligent Behavior of Neural Networks in the Context of Conceptual Engineering: Imitating Philosophical Reflection in DeepSeek, ChatGPT and GigaChat Models

https://doi.org/10.32603/2412-8562-2025-11-5-59-69

Abstract

Introduction. This article explores pressing questions in the philosophy of artificial intelligence, focusing on the conditions required for neural-network technologies that model cognitive actions to generate meaning.

Methodology and sources. The study uses a systems approach that combines the technical and philosophical aspects of conceptual engineering, together with qualitative and quantitative methods, to analyze the cognitive activity of neural networks interpreting philosophical dilemmas. The empirical base is a set of responses from three neural networks (DeepSeek, ChatGPT, GigaChat) to the same conceptual request. Features of neural-network cognitive activity are examined within a functional approach, focusing on how architectural differences in attention mechanisms and transformer blocks influence a network's flexible orientation in various contexts. Content-analysis and discourse-analysis methods were used in a qualitative analysis aimed at identifying the hidden patterns behind the differences in how the networks present ideas. Quantitative assessment of the responses was performed using the R. Flesch readability index and lexical diversity measures.

Results and discussion. A generalized characterization of the tendency of the DeepSeek, ChatGPT, and GigaChat models toward a particular style of expounding philosophical concepts is presented, which makes it possible to speak of imitated philosophical reasoning. Differences in how the networks generate content for philosophical discussion were shown to depend on technical and software differences in their attention mechanisms (local, global, and multi-layered). The models' distinctive intellectual behavior becomes evident in their ability to navigate different contexts and adapt their style of presentation to the expectations of the audience.

Conclusion. The intellectual behavior of ChatGPT, DeepSeek, and GigaChat is determined by flexible orientation in the semantics of philosophical problems. From a technological perspective, this is achieved through interpolation of the input data consistent with the neural network architecture, which defines the model's cognitive style and self-assessment. However, these language models are not autonomous in task setting: the boundaries of their operation are defined by the conceptual resources of human knowledge.
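The quantitative measures named in the abstract can be reproduced in a few lines. The sketch below is not the authors' implementation; it is a minimal illustration of the Flesch Reading Ease formula (206.835 − 1.015 × words/sentences − 84.6 × syllables/words) and a type-token ratio as one simple lexical diversity measure, using a crude vowel-group heuristic for English syllable counting.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: one syllable per contiguous vowel group, minimum 1.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease: higher scores indicate easier text.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

def type_token_ratio(text: str) -> float:
    # Lexical diversity: unique word forms divided by total word count.
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    return len(set(words)) / len(words)

sample = "Language models imitate reflection. They adapt style to the audience."
print(flesch_reading_ease(sample), type_token_ratio(sample))
```

Production analyses would typically rely on a tested library (e.g. a dedicated readability package) rather than the naive syllable heuristic above, which over- or under-counts syllables for many English words.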

About the Authors

A. A. Lisenkova
Peter the Great St Petersburg Polytechnic University
Russian Federation

Anastasia A. Lisenkova – Dr. Sci. (Cultural Studies, 2021), Docent (2009), Professor of the Higher School of Social Sciences



O. D. Shipunova
Peter the Great St Petersburg Polytechnic University
Russian Federation

Olga D. Shipunova – Dr. Sci. (Philosophy, 2002), Professor (2011), Professor of the Higher School of Social Sciences



A. S. Lisenkov
Alferov Saint Petersburg National Research Academic University of the Russian Academy of Sciences
Russian Federation

Alexey S. Lisenkov – Student (2nd year), programme “Bioinformatics and Computer Modeling in Natural Sciences”



References

1. Devlin, J. et al. (2018), “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”, Arxiv.org, available at: https://arxiv.org/abs/1810.04805 (accessed 20.04.2025). DOI: https://doi.org/10.48550/arXiv.1810.04805.

2. Marcus, G. (2020), “The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence”, Arxiv.org, available at: https://arxiv.org/abs/2002.06177 (accessed 10.04.2025), pp. 102–115. DOI: https://doi.org/10.48550/arXiv.2002.06177.

3. Floridi, L. and Chiriatti, M. (2020), “GPT-3: Its Nature, Scope, Limits, and Consequences”, Minds & Machines, vol. 30, pp. 681–694. DOI: 10.1007/s11023-020-09548-1.

4. Bender, E.M. and Koller, A. (2020), “Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data”, Proc. of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5185–5198. DOI: 10.18653/v1/2020.acl-main.463.

5. Bubeck, S. et al. (2023), “Sparks of Artificial General Intelligence: Early Experiments with GPT-4”, Arxiv.org, available at: https://arxiv.org/abs/2303.12712 (accessed 20.04.2025). DOI: https://doi.org/10.48550/arXiv.2303.12712.

6. Searle, J. (1980), “Minds, Brains, and Programs”, Behavioral and Brain Sciences, vol. 3, iss. 3, pp. 417–424. DOI: 10.1017/S0140525X00005756.

7. Dennett, D. (1991), Consciousness Explained, Little, Brown and Company, Boston, USA.

8. Bender, E.M. et al. (2021), “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada, March 3–10 2021, pp. 610–623. DOI: https://doi.org/10.1145/3442188.3445922.

9. Chalmers, D. (2020), “What Is Conceptual Engineering and What Should It Be?”, Inquiry, available at: https://www.tandfonline.com/doi/full/10.1080/0020174X.2020.1817141 (accessed 20.04.2025). DOI: 10.1080/0020174X.2020.1817141.

10. Griftsova, I.N. and Kozlova, N.Yu. (2024), “Rudolf Carnap’s Ideas in Philosophy of Language in the Context of Conceptual Engineering”, Epistemology and Philosophy of Science, vol. 61, no. 1, pp. 121–133. DOI: 10.5840/eps202461111.

11. Kozlova, N.Yu. (2024), “Conceptual Engineering: Idea and Problem Field”, Voprosy Filosofii, no. 9, pp. 157–166. DOI: 10.21146/0042-8744-2024-9-157-166.

12. Floridi, L. (2011), “A Defence of Constructionism: Philosophy as Conceptual Engineering”, Metaphilosophy, vol. 42, no. 3, pp. 282–304. DOI: 10.1111/j.1467-9973.2011.01693.x.

13. Isaac, M.G., Koch, S. and Nefdt, R. (2022), “Conceptual Engineering: A Road Map to Practice”, Philosophy Compass, vol. 17, no. 10: e12879. DOI: 10.1111/phc3.12879.

14. Raffel, C. et al. (2020), “Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer”, J. of Machine Learning Research, vol. 21: 140, available at: https://www.jmlr.org/papers/volume21/20-074/20-074.pdf (accessed 02.04.2025).

15. Mikolov, T. et al. (2013), “Distributed Representations of Words and Phrases and their Compositionality”, Advances in Neural Information Processing Systems, vol. 26, pp. 3111–3119.

16. Chalmers, D. (2023), “Could a Large Language Model be Conscious?”, Arxiv.org, available at: https://arxiv.org/abs/2303.07103 (accessed 20.04.2025). DOI: https://doi.org/10.48550/arXiv.2303.07103.

17. Bostrom, N. (2014), Superintelligence: Paths, Dangers, Strategies, Oxford Univ. Press, Oxford, UK.

18. Vaswani, A. et al. (2017), “Attention Is All You Need”, Advances in Neural Inf. Proc. Systems 30: 31st Annual Conf. on Neural Inf. Proc. Systems (NIPS 2017), Long Beach, California, USA, 4–9 Dec. 2017, pp. 5999–6010.




For citation:


Lisenkova A.A., Shipunova O.D., Lisenkov A.S. Intelligent Behavior of Neural Networks in the Context of Conceptual Engineering: Imitating Philosophical Reflection in DeepSeek, ChatGPT and GigaChat Models. Discourse. 2025;11(5):59-69. (In Russ.) https://doi.org/10.32603/2412-8562-2025-11-5-59-69



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2412-8562 (Print)
ISSN 2658-7777 (Online)