Tim Thorlby

AI: A Beginner's Guide to the Revolution

We're pleased to have Tim Thorlby return to BetterCapitalism.org as this week's contributing author. The following is an excerpt from Tim's essay "AI, social justice & the planet: a beginner's guide to the 4th Industrial Revolution", originally published on his Beautiful Enterprise website blog, Insight. We invite you to read the entire essay here.


"This is a blog about Artificial Intelligence for people who don’t really care about AI.


It answers seven questions about the 4th Industrial Revolution, which is now in full swing.


Whether the acronym ‘AI’ makes you groan or salivate, there are issues in here we should all ponder. Will this new wave of technology serve society or will we serve it? Or can we just leave it to the tech bros?


Q1: What is Artificial Intelligence?


Artificial Intelligence (AI) is a broad umbrella term used to describe an area of science and engineering which is making ‘intelligent machines’, often in the form of highly sophisticated computer programs. People have been working on AI since the 1950s but the technology has now reached a point where it is beginning to make an impact in the real world.


According to the Encyclopedia Britannica, artificial intelligence is:

“…the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.”


It is sometimes described as part of the ‘Fourth Industrial Revolution’. So, significant.



“Your move, human”

One of the earliest AI computer programs to hit the headlines was a computer which had been ‘trained’ to play chess and was then tested against the world’s chess grandmasters. The program had been taught the rules of the game, given lots of information about possible moves, and equipped with a great deal of processing power and speed.
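
To give a rough feel for this ‘rules plus search’ approach, here is a toy sketch in Python (purely illustrative, and nothing like a real chess engine) of a program that plays a much simpler game by looking ahead through every possible move:

```python
# A toy illustration of rule-based game playing: the program knows the rules
# of a simple game (take 1-3 counters from a pile; whoever takes the last
# counter wins) and searches ahead through every possible move to pick a
# winning one. Early chess programs worked on the same broad principle,
# at vastly greater scale and with far cleverer search.
from functools import lru_cache

@lru_cache(maxsize=None)
def can_force_win(counters_left: int) -> bool:
    """True if the player about to move can force a win from this position."""
    if counters_left == 0:
        return False  # no counters left: the other player took the last one and has won
    # Try every legal move; if any leaves the opponent unable to win, we can win.
    return any(not can_force_win(counters_left - take)
               for take in (1, 2, 3) if take <= counters_left)

def choose_move(counters_left: int) -> int:
    """Pick a move that leaves the opponent in a losing position, if one exists."""
    for take in (1, 2, 3):
        if take <= counters_left and not can_force_win(counters_left - take):
            return take
    return 1  # no winning move available: take one counter and hope

print(choose_move(21))  # with 21 counters in the pile, the program takes 1
```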

In 1997, after several attempts, IBM’s computer ‘Deep Blue’ finally beat Garry Kasparov, a Russian chess grandmaster; the first time that a computer had convincingly beaten a (human) world expert in chess. Cue lots of headlines about robots getting ready to take over the world.


[Edited out even though important . . . . continue reading this section here]


Interestingly, and relevant to this blog, when I used the Google search engine to find out who the current ‘World Chess Champion’ was, its new Generative AI feature at the top of the search results page informed me that it is the 35-year-old Norwegian, Magnus Carlsen. Ah, but this is not correct. The Generative AI bot has failed to spot that there is a difference between the ongoing World Chess Rankings (updated weekly), which have Magnus at the No. 1 spot, and the World Chess Championship (held every two years), of which the young Gukesh Dommaraju is currently the undisputed title holder. So, yes, Generative AI gave me the wrong answer. We will return to this later.


Machine learning

A significant step forward in technological development came in the 1980s with the growth of ‘machine learning’. This is a very different way of building a computer’s capability to perform tasks. Instead of just coding up a software program with lots of rules on how to do something, a software ‘model’ is built which contains algorithms (complex sets of rules). It is ‘trained’ by feeding it relevant data to read and ‘learn’ from, so that it can recognise patterns and subsequently make decisions when it sees new data for the first time. After testing to check how effective it is, it can be deployed in the real world. The difference with machine learning is that the model is not ‘programmed’ in a traditional sense; it ‘learns’ from past data so that it can recognise new data and work out what to do with it. Machine learning was a whole new way of building software models.
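
As a very rough illustration of that train-test-deploy cycle (a minimal sketch only, using the widely used scikit-learn Python library and its built-in iris flower dataset, not an example from the essay):

```python
# A minimal sketch of the machine-learning workflow described above:
# train a model on past data, test how effective it is, then use it on
# data it has never seen before.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# 1. Gather labelled past data (here, flower measurements and their species).
data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0
)

# 2. 'Train' the model: it learns patterns from the examples,
#    rather than being given hand-written rules.
model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# 3. Test it on data it was not trained on, to check how effective it is.
print("Accuracy on unseen data:", model.score(X_test, y_test))

# 4. Use it: ask for a decision about a brand-new measurement.
new_flower = [[5.1, 3.5, 1.4, 0.2]]  # sepal/petal measurements in cm
print("Prediction:", data.target_names[model.predict(new_flower)[0]])
```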


Applied AI

AI is a broad field, with scientists working towards different goals. The original hope (or fear) of creating ‘conscious machines’ is clearly nowhere near being realised, and many scientists are sceptical it will ever happen. The most impactful field so far has been ‘Applied AI’, which seeks to use AI for commercially viable tasks, supported by ever faster processing power and ever larger datasets. AI is now finding its way into our lives, with a growing number of applications.


Applied AI is increasingly being used for solving specific problems, handling large volumes of data, automating repetitive tasks and personalising user experiences, amongst other tasks.


Generative AI is one step further on: software models that can use what they have ‘learnt’ to create (‘generate’) new content or attempt to answer questions. It is still early days for this field.


By way of illustration, some of the common examples of applied AI include:


[Edited out even though important. . . . continue reading this section here]


Q2: Are robots going to destroy all of our jobs and take over the world?


No.


Q3: So, is AI just a lot of silly hype dreamed up by the marketing department of a rich Tech Bro?


Also, no.


There is indeed a lot of breathless, and rather silly, hype about AI. But AI is beginning to have an impact on society, our economy and the planet, so it’s good to know what’s going on. It is a real technological step forward and it is already beginning to change the way that some industries work and how some services are designed. It has much potential. It also comes with some significant issues. The better informed we are about this technology, the better we can regulate it to deliver socially useful outcomes and, hopefully, minimise the bad ones.


[Edited out even though important . . . . continue reading this section here]


Q4: Should I believe what an AI generative bot tells me?


For the time being, I would say that we should be fairly sceptical of outputs from generative AI and check the answer out with more traditional sources. I wouldn’t bet my pension on an answer from AI.


We need to remember that the AI industry is still at a relatively early stage of development and so moderate our expectations.


There are also some inherent risks and challenges with AI. I have picked out three here.


[Edited out even though important . . . . continue reading this section here]


Q5: How might AI impact on our jobs?


Companies which are largely or entirely focused on developing or using AI are growing in number and scale in the UK, although they still account for a fairly small part of our economy [A good readable update can be found in a 2023 House of Lords report on AI in the UK here].


How might AI impact upon the UK economy, and jobs, more generally as the technology diffuses across more sectors and organisations?


The short answer is that we don’t really know. We can throw some straws into the wind and try to estimate it, though. Hey, why not?


  • A 2021 report by consultancy firm PwC suggested that the overall impact on jobs in the UK over the next 20 years might be “broadly neutral”. They thought that manufacturing jobs were most at risk of being lost to AI (‘automated’), compensated for by job gains, perhaps, in the health and social care sectors and some professional services sectors. They thought that lower-paid, process-oriented jobs were the most exposed.

  • A 2023 report by OpenAI itself (the creator of ChatGPT) reckons instead that it is higher-paid jobs which are most at risk of being automated, contradicting dear old PwC.

  • Perhaps more constructively, a 2023 report from the House of Commons Business Select Committee identified the potential productivity gains that might be made across many sectors of the economy in future years by adopting AI.


[Edited out even though important . . . . continue reading this section here]


Q6: Where, physically, does AI actually happen? Is it all in the cloud? Where is the cloud anyway? Is magic involved?


AI software models are large and they run on hardware machines which are also large. It turns out that the ‘cloud’ is a lot of very large buildings in surprising places using vast amounts of water and energy. They are very real indeed.


AI (and much other computing) now takes place in giant ‘data centres’ all around the world. They require a lot of electricity to power them and cool them, together with lots of water to assist with that cooling.


At the last count, the UK had 512 data centres, one of the largest tallies in the world after the 5,000+ in the USA. Hundreds more are planned in the UK. And, oh look, 80% are located in Greater London. Europe’s largest cluster of data centres can be found on a very dull-looking business park in Slough, on the M4, where some 35 data centres are located.


They vary in size but are becoming larger. A typical data centre is the KAO Data Centre in Harlow, which has 15,000 square metres of ‘technical space’; that’s about two football pitches full of computer racks. It requires 40 MW of power to run, roughly the output of two small onshore windfarms. Such centres do not create many jobs on site, but the jobs are high-skill and highly paid. My own back-of-the-envelope sums suggest perhaps 2 full-time jobs are created per MW, so this KAO data centre might employ roughly 80 people on site. They do seem to bring wider economic impacts with them too.
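
That back-of-the-envelope sum is simple enough to spell out (a sketch only, using the rough figures quoted above):

```python
# Rough estimate of on-site jobs at a data centre, using the figures above.
jobs_per_megawatt = 2         # back-of-the-envelope assumption: ~2 full-time jobs per MW
site_power_megawatts = 40     # quoted power requirement of the Harlow site
estimated_jobs = jobs_per_megawatt * site_power_megawatts
print(f"Estimated on-site jobs: {estimated_jobs}")  # roughly 80
```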


Researchers at MIT estimate that data centres already account for up to 2% of global electricity demand. The International Energy Agency thinks that this demand will double in just four years, from 2022 to 2026. It is rocketing as AI grows. Dealing with an AI query takes ten times more computing power (and electricity) than a simpler Google search engine request. So, energy consumption is a massive issue for these facilities. The industry is alert to this and a lot of work is going into sourcing renewable energy for them, although success will ultimately depend on the UK’s wider ability to expand its renewable generation.


Water consumption is also a huge issue for these buildings, as they generate a lot of heat and need 24/7 cooling. In terms of planning for new data centres, this is a key consideration, particularly in areas of the South East of England which suffer from water shortages. This is not a bad reason to speed up plans for these behemoths to be based in the North, where rainwater does not seem to be in short supply.


Q7: Is AI being regulated by Government? Should we just let Mark Zuckerberg do what he likes? He seems like such a nice boy


If we have learnt anything about digital technology in the last twenty years it is that the industry cannot be safely left to its own devices. It is quite happy to trample on people’s rights, dignities and pay packets to make an ever-growing profit for its small number of uber-wealthy owners. See Messrs Musk and Zuckerberg for details.


As ever with new technology, Government is playing catch-up on regulation. There is currently no AI-specific legislation or regulation in the UK. There does, however, seem to be growing recognition that AI needs regulation to manage its worst excesses and protect the rights of ordinary citizens, and so the Government is beginning to negotiate with the industry on this. Whether the power of these huge (largely American) corporations can be managed in the UK remains to be seen. The tussles over social media regulation show how hard it is to regulate powerful global industries.


The challenge for us, collectively, is to ensure that new technology serves the purposes of society and the common good. AI has the potential to improve our lives and create new opportunities. Left entirely to the market (or, in practice, a small number of rich business owners) it has the potential to cause significant harm and exacerbate global trends in inequality.


In particular, it seems to me that there are some important aspects of our common life which may need protecting as we engage with AI:

  • The truth still matters – AI is presently and unhelpfully fuzzing the boundaries between facts and made-up non-facts, and it is increasingly hard to tell the difference online. This matters. Some things, even in the 21st Century, are still true or not true. We need to be able to tell the difference and navigate a world full of accelerating and increasingly complicated information. We may need to become better equipped to tell the difference, and we also need the producers of AI to be clear about what their businesses are producing.

  • Doing harm accidentally is still doing harm – The online world can sometimes feel weightless, free and airy. What does it matter what happens online? But if an AI app is – even unwittingly – discriminating against certain people or groups or causing harm to individuals in the real world, then this obviously still matters. I personally think this is a pressing reason for robust regulation of these technologies in the UK.

  • The environmental impact of AI is large – AI absorbs a lot of energy and water. Those AI interactions are not ‘free’, they come at significant cost. There are resource limits to how far we can go in energy-intensive industries.


When we put aside the hype, AI is still just a bit of technology which is owned by somebody who is (increasingly) making a profit somehow and causing all sorts of social and environmental impacts along the way. There is sometimes a sense in which the online world gets a ‘free pass’ and is absolved of taking responsibility for its impacts and harms because we perhaps don’t quite understand what is going on, or because regulators have been slow to catch up.


I think we need to be clear that all industries and all technologies, no matter how clever, are still tools which must serve society – not the other way round. They must therefore be subject to democratic discussion and governance to ensure that they work for us in making the world a better place, not perpetuating patterns of harm and injustice. We also need to make sure that the people working in AI and our tech sectors reflect wider society. We need to move on from the tech bro.


So, even if AI is not really ‘our thing’, it’s in all of our interests to show up when the role of AI is being discussed, because, whether we like it or not, it will one day affect ‘our thing’.


Your move, human."


[Again, we thank Tim for another insightful contribution. We likewise thank you for reading this excerpt and encourage you to explore his complete essay, together with others on his Insight blog.]



 




