
Aigul Zabirova,
Chief Research Fellow,
KazISS under the President of the Republic of Kazakhstan
This article is the final installment in a series exploring how artificial intelligence is gradually becoming part of everyday life for Kazakhstanis. In the first piece, we showed that today about half of Kazakhstan’s population already encounters AI in their daily practices, often without even realizing it. The second article focused on attitudes toward artificial intelligence. The data revealed different groups of people with varying expectations, hopes, and levels of trust in technology. This text turns to what is discussed less often, yet concerns people the most – fears and anxieties. It shows that these concerns are not about fantastic scenarios, but about real experiences: safety, employment, and the boundaries of human autonomy. Ultimately, the series helps to demonstrate how artificial intelligence has become a social reality for Kazakhstanis, and how, together with it, digital trust is being formed – a trust that society still has to work on.
Let me begin with a simple observation. When people are asked what they fear in connection with artificial intelligence, they hardly talk about artificial intelligence itself. They talk about phone calls, about messages in messaging apps, about emails from banks. The primary concern, therefore, is not a machine uprising but cyber fraud, feared by 55.3% of respondents[1]. This is an important point, because it immediately sets the emphasis: where uncertainty appears, the human mind automatically looks for a threat.
And if we examine these responses more closely, it becomes clear that this is not about fear of artificial intelligence at all. What is at stake is human psychological adaptation to a new reality. Fears associated with artificial intelligence are not fears of the technology itself. They reflect a concern about being left vulnerable in an environment where change occurs faster than we can adapt to it. It is no coincidence that anxiety arises more often where the digital environment has already become a familiar part of everyday life; it is precisely there that the demand for trust and protection emerges earlier. As long as AI develops faster than the sense of security, anxiety will remain an inevitable companion of progress. And the main question today is not how intelligent artificial intelligence is, but how protected human beings are in this new reality.
In fact, this is about the formation of a new social norm: digital trust. It is already emerging in everyday life, but so far it is developing faster in technology than in rules and institutions. It is precisely this gap that has become a source of anxiety for people today.
More than a quarter of respondents (27.5%) expressed concern about breaches of privacy and the leakage of personal data. These two concerns are closely linked: it is not only about financial losses, but also about the sense of losing control over one’s own information and personal boundaries. In this context, artificial intelligence appears as part of a digital environment in which people do not always understand who is managing their data or how it is being used.
Another quarter of respondents (27.4%) fear human dependence on artificial intelligence, for example, that children will stop doing their homework themselves. Indeed, when people talk about dependence on AI, the most vivid example they give is children and school assignments. Here, the anxiety is not about the technology itself, but about the replacement of effort with ready-made answers. Doing homework has always been less about getting the “right” answer and more about the process: thinking, trying, making mistakes, and taking responsibility. Parents fear that if this process disappears, children may stop thinking independently and simply delegate tasks to a machine. The concern, in other words, is to ensure that our children do not lose the ability to learn and make independent decisions.
Just under a quarter of respondents (22.4%) expressed concern about job loss due to automation. This result shows that, for now, artificial intelligence is not perceived as an inevitable threat to employment. People are not afraid of being left without work or a future; they are afraid of not being able to adapt in time. Behind this concern lies not so much a rejection of technology as uncertainty: whether people will have the time, resources, and support to find their place in the new economy. After all, today skills become obsolete faster than a sense of stability can develop.
If we consider these data as a signal, it becomes clear that anxiety around artificial intelligence should be read not as a sign of societal weakness, but as a sign of engagement. People are not rejecting technology; they are trying to understand how to integrate it into their familiar worldview, one in which boundaries, responsibility, and a sense of support exist. This is why the discussion about artificial intelligence inevitably goes beyond technology and comes down to the question of trust. Digital trust does not emerge on its own; it cannot be activated with a button or established through a single formal rule. It develops gradually, through experience, mistakes, and the formation of new habits. The calmer and more consciously we approach these fears today, the higher the chance that artificial intelligence will become a tool that expands human capabilities without depriving individuals of control over their own lives.
[1] All observations are based on data from a sociological survey commissioned by the KazISS and conducted from October 3 to November 5, 2025. The sample size comprised 8,000 respondents. The survey included respondents aged 18 and over from 17 regions of the country, as well as from the cities of national significance – Astana, Almaty, and Shymkent.


