
Aigul Zabirova,
Chief Research Fellow,
KazISS under the President of the Republic of Kazakhstan
The results of a public opinion survey show that people in Kazakhstan hold differing views on the future of artificial intelligence. Based on the empirical data, three perspectives on the future have been identified: opportunity, caution, and expectation, which reflect different ideas about how technology will shape what lies ahead. Society's understanding of the future is still taking shape, and the state has a particularly important role to play in establishing a clear and comprehensible language for discussing technology.
Many of us are accustomed to treating the New Year as a time to take stock, a moment when we draw a line under the year that has passed. Sometimes, however, something else matters more: understanding how individuals and society view the future as they step into the New Year. When we think about the future today, we increasingly turn to technology, and above all to artificial intelligence. It has entered our everyday language and has become the most convenient way to talk about both the future and the place of human beings in it.
Sociology. The results of a November survey commissioned by KazISS[1] show that there is no consensus in society on the development of artificial intelligence technologies. Responses were almost evenly divided between those who expect AI to be primarily beneficial (30.1% of respondents), those who see it as carrying as many benefits as risks (37.7%), and those who see more risks than benefits (26.7%)[2]. At first glance, the picture may seem ambiguous. On closer examination, however, the data indicate that our society is in a process of reflection: these responses reveal different perspectives on the future as Kazakhstan enters the New Year 2026. Let us take a closer look at the components of each of these perspectives and trajectories.
The trajectory of opportunities: Active Agents of Change.
For the first group of Kazakhstanis (30.1% of respondents), artificial intelligence is associated primarily with the expansion of opportunities. Here, individuals view technology as a set of tools that help them navigate everyday life more easily and take advantage of new possibilities. This position reflects confidence in one's ability to adapt; it is not a matter of abstract optimism, but of calculated expectations tied to learning and mastering new tools. Within this group, technology is perceived not as a replacement for human beings, but as a means of making human action more effective. At the same time, the role of age is particularly evident here.[3] Older age groups (45+) tend to give more cautious assessments of artificial intelligence. Naturally, our perceptions of technological change are shaped by personal life experience and by how we imagine the future. What is at stake, therefore, is less a simple evaluation of technology than the horizon of life expectations, ranging from anticipation of new opportunities to a more restrained and distanced attitude toward AI.
The trajectory of caution: Cautious Regulators of the Future.
For the second group of respondents (37.7%), artificial intelligence is associated above all with the question of consequences. Within this group, AI is likewise understood as a powerful tool capable of delivering tangible benefits, yet one that requires close attention to risks and to the limits of its application. This perspective does not stem from fear of technology or rejection of the future. Rather, it reflects an expectation that the development of AI should be accompanied by rules, accountability, and public deliberation. Within this framework, what matters is not only what technologies enable us to do, but also the long-term effects they generate for individuals and for society as a whole. Notably, educational experience plays a particularly important role in this trajectory[4]. It is the level of education that shapes the language in which technology is discussed: a language of consequences, constraints, and institutional frameworks. Ultimately, the trajectory of caution reflects a broader aspiration to make the future governable and meaningful: a future in which technologies evolve no faster than society's capacity to understand them.
The expectation trajectory: Observers Awaiting Clear Rules and Practical Examples.
The third trajectory brings together those Kazakhstanis who are not yet ready to offer a definitive assessment of the development of artificial intelligence (26.7% of respondents). While this position may appear to reflect uncertainty, it in fact represents a different mode of relating to the future, characterized by observation and a deliberate pause in judgment. Within this trajectory, AI is perceived as a phenomenon that is not yet fully understood. There is neither pronounced optimism nor clearly articulated concern. Instead, there is expectation: individuals want to see how technologies will be integrated into real life, what practices will emerge around them, and which consequences will become evident over time. In this sense, the expectation trajectory is strategically significant for the state. It encompasses the segment of society whose attitudes toward technology are still taking shape and are therefore particularly sensitive to the language of public policy and, more broadly, to institutional signals.
Indeed, the expectation trajectory requires particular attention from the perspective of public policy. It concerns a group that has not yet settled into a stable position and is therefore most sensitive to how the state and social institutions talk about technology. In this context, the task of public policy is not only to accelerate the introduction of technology, but also to create a clear framework for discussing it in society. Education, open dialogue, and the demonstration of practical, understandable examples of AI applications in everyday areas, from public services to education and medicine, are becoming key tools for managing expectations. The direction in which the public agenda shifts over time will largely depend on how clearly and honestly this task is approached.
In this context, international analytical studies, including McKinsey reports[5], emphasize that the key question about artificial intelligence is not whether it will replace humans, but what role humans will retain in the new architecture of labor and responsibility. In this sense, the dawn of 2026 is not a moment of uncertainty for us, but a point of meaningful movement forward, with attention to people and to the changes already taking place around them.
[1] The public opinion survey was conducted at the request of the Kazakhstan Institute for Strategic Studies between 3 October and 5 November 2025. The sample comprised 8,000 respondents. Participants included individuals aged 18 and over from all 17 regions of Kazakhstan, as well as from the three cities of national significance: Astana, Almaty, and Shymkent.
[2] 5.5% of respondents found it difficult to answer.
[3] Age groups exhibit pronounced and statistically significant differences in attitudes toward artificial intelligence (χ² = 652.9, p < 0.001; Linear-by-Linear Association = 161.5, p < 0.001; Spearman’s ρ ≈ −0.15).
[4] The level of education also shows a consistent correlation with assessments of artificial intelligence (χ² = 222.0, p < 0.001; Pearson’s r ≈ −0.11; Spearman’s ρ ≈ −0.15).
[5] McKinsey Global Institute, "Agents, Robots, and Us: Skill Partnerships in the Age of AI", November 25, 2025. https://www.mckinsey.com/mgi/our-research/agents-robots-and-us-skill-partnerships-in-the-age-of-ai


