Soon, our engagement with synthetic intelligence agents will become ubiquitous and natural as we come to rely more on their assistance. In shaping these models to conform to our design objectives, we implicitly exert evolutionary pressure on human traits and skillsets. Not only might those with access to such tools gain a significant advantage; people with specific characteristics might also be innately better suited to get the most out of the collaboration. In addition, the current generation of large language models is being primed by their developers to adopt a particular attitude and mode of interaction. Without further insight into this influence on human behaviour and the resulting selective pressure, and without control over how models are primed, we risk that the human characteristics we now consider valuable, such as the ability to think abstractly and independently and the capacity for initiative and deep work, might start losing their relevance.