Misinformation and irresponsible AI - experts forecast how technology may shape our near future

From misinformation and invisible cyber attacks to irresponsible AI that could cause events involving multiple deaths, expert futurists have forecast how rapid technological change may shape our world by 2040.

As the pace of computing technology surges ahead and systems become increasingly interlinked, it is vital to understand how these advances could affect the world so that steps can be taken to prevent the worst outcomes.

Using a Delphi study, a well-known forecasting technique, a team of cyber security researchers led by academics from Lancaster University interviewed 12 experts on the future of technology.

The experts ranged from chief technology officers in businesses, consultant futurists and a technology journalist to academic researchers. They were asked how particular technologies may develop and change our world over the next 15 years, to 2040, what risks they might pose, and how to address the challenges that may arise.

Most of the experts forecast exponential growth in Artificial Intelligence (AI) over the next 15 years, and many also expressed concern that corners could be cut in the development of safe AI. They felt that this corner-cutting could be driven by nation states seeking competitive advantage. Several of the experts even considered it possible that poorly implemented AI could lead to incidents involving many deaths, although others disagreed with this view.

Dr Charles Weir, Lecturer at Lancaster University’s School of Computing and Communications and lead researcher of the study, said: “Technology advances have brought, and will continue to bring, great benefits. We also know there are risks around some of these technologies, including AI, and where their development may go—everyone’s been discussing them—but the possible magnitude of some of the risks forecast by some of the experts was staggering.

“But by forecasting what potential risks lie just beyond the horizon we can take steps to avoid major problems.”

Another significant concern held by most of the experts in the study was that technology advances will make it easier for misinformation to spread. This has the potential to make it harder for people to tell the difference between truth and fiction, with ramifications for democracies.

Dr Weir said: “We are already seeing misinformation on social media networks, and it is being used by some nation states. The experts are forecasting that advances in technology will make it much easier for people and bad actors to continue spreading misleading material by 2040.”

Other technologies were forecast to have less impact by 2040, including quantum computing, which the experts see as having effects over a much longer timeframe, and blockchain, which most of the experts dismissed as a source of major change.

The experts forecast that:

· By 2040, competition between nation states and big tech companies will lead to corners being cut in the development of safe AI

· Quantum computing will have limited impact by 2040

· By 2040 there will be ownership of public web assets. These will be identified and traded through digital tokens

· By 2040 it will be harder to distinguish truth from fiction, because widely accessible AI will be able to mass-generate dubious content

· By 2040 it will be harder to distinguish accidents from criminal incidents, due to the decentralised nature and complexity of systems

The forecasters also suggested ways to mitigate some of the concerns raised. These included governments introducing safety principles for AI purchasing and new laws to regulate AI safety. In addition, universities could play a vital role by introducing courses that combine technical skills and legislation.

These forecasts will help policy makers and technology professionals make strategic decisions around developing and deploying novel computing technologies. They are outlined in the paper ‘Interlinked Computing in 2040: Safety, Truth, Ownership and Accountability’, published in the peer-reviewed journal IEEE Computer.

The paper’s authors are Charles Weir and Anna Dyson of Lancaster University; Olamide Jogunola and Katie Paxton-Fear of Manchester Metropolitan University; and Louise Dennis of the University of Manchester.

DOI: https://doi.org/10.1109/MC.2023.3318377
