Richard Tang is chairman and founder of internet service provider Zen Internet. He’s passionate about using technology to benefit the world, particularly artificial intelligence (AI). Here, he discusses the potential opportunities and challenges that are surfacing as AI develops.

Defining responsible tech

First, it pays to understand what responsible tech means. For Richard, it’s about balancing sustainability and trustworthiness. He explains, “Over the past few years, profits have had a lot more prominence. But using technology responsibly is about doing the right thing, even if you make less in the short term, because your organisation will be more successful in the long term.

“Now, that can take two different streams. There’s a whole area covering the environmental impact of technology and how you can use it to be more sustainable. This involves putting sustainability at the centre of all your decision making – selecting equipment that’s more power-efficient, for example. There are many things organisations can do in their day-to-day operations to have a positive impact on the environment, and small changes add up. This is especially relevant given the current worries about climate change.

“Then there’s the use of AI and how this will impact the world and humanity. It’s such a large area that it deserves a separate section from all other ‘tech for good’ topics.”

Developing AI for good

Richard expresses a particular interest in the prospect of Artificial General Intelligence (where AI can do everything a human can do) and Artificial Superintelligence (where AI surpasses human ability at everything, because it’s faster, less error-prone and cheaper). “It’s a matter of when this happens, not if,” he states. “And when it does, humanity has a big responsibility to instil AI with the best set of values.”

He likens this to bringing up a child: before setting them free in the world, parents instil what they see as ‘proper’ values and beliefs.

“With the birth of AI, the human race must build AI that has a positive impact on the world. But that’s easier said than done.”

The challenges facing AI for good

One key challenge is deciding which values AI should be taught in the first place. Much like parents and their children, our values differ across cultures, countries, religions, life experiences and much more.

“What should be our stance towards gay rights?” Richard asks. “Although this should be an obvious answer to us (equality across the world), there are still 73 countries where homosexuality is illegal.”

“Then you move onto things like the death penalty and whether this is right or wrong. It really depends on who you talk to.”

Programming AI with a consistent, globally agreed set of values is near-impossible. On controversial topics, any one stance will inevitably alienate those who hold another.

Plus, our values are continually evolving. “I often ask people about women’s right to vote. Today, asking if women should be able to vote is a ludicrous question. But go back a few decades – not even 100 years – and you see a significant shift in values.”

Across decades, our views about what’s right and wrong change. When imparting values to AI, therefore, we must give it the ability to evolve its values and ethics too. “But then you’re talking about AI starting to make up its own mind,” warns Richard, “with all the fears and risks that come with that.”

AI’s development timeline

So, when should we expect these issues to gather pace? Richard expects to see ‘the singularity’ (when General AI is developed) within the next few decades. He says, “There was a study in which 350 AI researchers were asked to predict when General AI was likely to be developed. The answers ranged from 10 to 100 years, with the average sitting around 40 years from now – in 2060.

“We’re really talking about short timescales where we’ll see amazing things happening with AI.” Of course, for that to happen, AI must be developed with the ‘right’ ethics and focus.

The risks of ‘bad’ AI

Focus is everything. Richard poses a hypothetical scenario: “Imagine a superintelligent AI is developed by a stock-focused, money-driven organisation with the sole goal of driving more returns for the company. You then have a superintelligent being that surpasses human competency, with a primary goal of making as much money as possible. If you think capitalism is harmful now – wait and see what happens if AI falls into the wrong hands.”

There are similar concerns surrounding the use of AI by the military. “This is where people often get carried away by thoughts of a real-life Terminator. But realistically, if armies have a choice between fallible human soldiers and robotic super-soldiers (who don’t age, don’t make mistakes and are much cheaper to maintain), then they’ll go for the more obvious, easier choice.

“Unfortunately, you then end up with a superintelligent being that’s been programmed with the goal of destroying the enemy. That could have nasty consequences for the human race.”

Final thoughts

It isn’t all doom and gloom for humanity – as long as we set the right foundations for AI. This is all about focus. Companies that use and develop AI need to think about the long-term implications of their technology.

This requires a sustained effort and may not produce the greatest returns in the short term. But it’s the only way to ensure that AI becomes a force for good that works with us, to enhance our lives and the world we live in.