In his book 'Bonjour ChatGPT', Louis de Diesbach explores the fundamental issues behind the rise of AI. Interview with the philosopher and consultant at the Boston Consulting Group.
Interview
In his foreword, Luc de Brabandère writes that the purpose of a digital tool is to amplify intellectual gestures, but that this in no way changes its status as a technical object. For your part, you explain that technology is never neutral, and always carries with it a project for society... Should we see this as an opposition?
More like two different readings. I agree with the idea that behind technology there are always human beings, and that is where the real genius resides. Nevertheless, whether consciously or not, the way a technology is created always steers that genius in one direction or another. Take Amazon, which built an algorithm to pre-sort CVs so that recruiters could spend more time on interviews. It quickly became apparent that, trained on the company's own data, the algorithm was perpetuating discrimination in favor of white men, the very profile that already made up a large proportion of its executives.
So AI can't be 'neutral'?
No. Even if it “doesn't choose”, AI is always the reflection of a vision of the world through the data it draws on and the way the algorithms have been developed by its designers.
Is this 'non-neutrality' of technology a paradigm shift?
No, because despite the rhetoric, this has always been the case. In the 1980s, the engineers who developed the first airbag systems used crash test dummies with measurements similar to their own: those of Western men. The first cars equipped with the technology claimed many victims among people of smaller build, particularly women. This was already a question of bias.
So how should we question technological progress?
The questions raised by AI are not strictly technological. Rather than asking what will be feasible tomorrow, we need to ask "is it desirable?" and "how will it be implemented?". To dismiss these real questions - economic, political and ideological - on the pretext of the supposed neutrality of technology is above all to avoid responsibility for its consequences. When we know that artificial intelligence can potentially halve the human workload of a marketing team, deciding whether to reassign those people, or how to reinvest those human resources, is not a technical question...
How much room is left for the human element in a company given the rapid pace of digitalisation?
As far as the consultancy I work for is concerned, the various studies carried out recommend an approach to AI that devotes 10% of a company's effort to algorithmic capabilities, 20% to technological capabilities - platforms, cloud, etc. - and 70% to transforming the business and developing its people. The latter is by far the largest share: governance, processes, training... The adoption of generative AI cannot succeed, and can even destroy value, if you don't get your teams fully on board.
But is everyone really ‘reclassifiable’ in a world where many tasks will be taken over by technology?
We may wonder about the future of translators, marketing experts or even lawyers, but in reality the history of technological progress has always been one of professions disappearing and being replaced by new ones. I don't believe that AI is going to turn things upside down and suddenly plunge masses of people into insecurity. Once again, the real question is not so much technological as sociological, anthropological, economic and political. It's about knowing how to redistribute the benefits of AI and its contribution to well-being. And here, politics will certainly have a role to play in supporting societal change, particularly through public-private partnerships. Public authorities will also have to regulate developments. The AI Act (editor's note: the regulation designed to ensure that AI systems placed on the European market are safe and respect the fundamental rights of citizens and the values of the European Union) is absolutely essential and will have to evolve over time.
The book - right down to its title - delves deep into the subject of anthropomorphism, by which we attribute human qualities to the machine. By naming a digital assistant 'Alexa', or by designing interactions that begin with 'Hello', have Amazon, Google and the others understood that this is a powerful vector for the adoption of their services?
If Alexa and Siri have female voices, it's because it has been scientifically shown that men are more sensitive to them, while women are generally neutral or even favourable towards them. Traces of anthropomorphism can be found as far back as 40,000 years ago, so it's as old as the hills. By allowing you to speak to it, and by responding in natural language, ChatGPT gets straight to the heart of the matter. That's what makes it so powerful.
You think it's legitimate to look at the question of technological degrowth. But has history ever seen such a phase?
Not to my knowledge. But that doesn't mean we shouldn't ask ourselves the question. Technology is not the answer to everything; in many areas, the opposite is true. Using an app to limit our access to our own smartphone does nothing to reduce our dependence on technology. Even so, it must remain a personal choice. Such a move cannot be imposed from above, as when China limits access times through invasive, authoritarian measures. That amounts to safeguarding one fundamental freedom while violating others. Philosophically, it makes no sense. Only the individual can emancipate himself from his own servitude.