Professor Kevin Haines
February 2022

There are two topics that could not be hotter at present: ‘Environmental, Social and Governance’ (ESG) and ‘Artificial Intelligence’ (AI). We have to ask ourselves: should the rise of AI be the demise of ESG?

Both topics are in their relative infancy. ESG is still taking shape; its concepts are yet to be fully articulated, and the measures we should be taking remain contested and unclear. Much of what purports to be AI is little more than sophisticated algorithms; we are a very long way indeed from Skynet becoming self-aware (probably many decades). Nor is it only governments who fund AI research: much work is going on in companies such as Google, Facebook and Amazon in the West, with equivalents in the East.

The ultimate consequences of pursuing ESG- and AI-related developments and goals are, however, incompatible. ESG is, at its core, about safeguarding the planet (whilst facilitating humankind’s prosperity). AI, on the other hand, has two logical outcomes.

The first is that AI becomes self-aware. We can forget relying on Asimov’s Three Laws for robot behaviour: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. There is no way to hard-wire these laws into a self-aware entity, and no reason to suppose a super-intelligent entity would think humans worth saving (except, perhaps, as pets). In this scenario we are doomed.

The second dénouement scenario for AI is one in which a burgeoning super-power totalitarian regime (not such a distant memory, nor so implausible) gains sole control of a super-computer AI and uses this tool to dominate, subjugate and exploit all of humanity. Such a totalitarian regime, such a ‘world in chains’ scenario, is, according to Minari, a very serious likelihood.


We have become accustomed to an unfailing belief in scientism: the notion that science is inherently good, for the benefit of us all, and that we should listen to scientists when they tell us what is right or wrong, good or bad. This is a very bad idea. Science is value-free: it has no view about consequences. People control science (think of the atom bomb).

The question we asked at the beginning should, therefore, have been: Should the rise of ESG be the demise of AI? You decide.
