
AI regulation: Necessary protection or stifler of innovation?

By Jack Campbell | 7 minute read

A common response to AI anxiety is a call for increased regulation. While this may seem like an easy fix, could it come at a cost to innovation?

According to Tony Anscombe, chief security evangelist at ESET, it could. He said: “Regulation is an interesting thing. Some people will term regulation of technology as controlling or stifling innovation.”

“So, we should always be careful [about] why and how things get regulated. Because if suddenly you stop people using it in ways that would benefit society, then regulation will be bad, wouldn’t it?”


The medical industry is one area that could suffer if regulation were enacted poorly. With artificial intelligence (AI) helping to drive innovation there, lives may depend on this tech.

“So, for example, if you want to use AI in the medical industry, there are distinct advantages … I recently read an article about a cancer unit in a hospital [that] had started using AI … and what they’ve done is they collected all the data and, say, you give AI this data set, and it learns from the data that you feed it,” said Mr Anscombe.

“So, they collected a mass of data on images of good and bad, i.e., patients with cancer [and] patients without cancer. And the AI model was trained to spot cancer in the images of new patients. Now, its rate of success was very similar to the human success. So, a doctor sitting down and looking at the image would spot the same things in the image that the AI is spotting.”

He continued: “The difference is the AI was spotting it in, I think it was like 10 per cent of the time. Whereas a doctor would have to study the images for 45 minutes to 50 minutes, AI was returning that result in a few minutes, which means the doctor, of course, is then freed up to go give somebody patient care.”

While this is an extreme case, it highlights the importance of allowing innovation to continue. However, rapid implementation could bring its own set of issues. What makes the difference is how the tech is used, as misuse can create trouble.

“Facial recognition is a technology that a lot of people will say is invasive. No, it’s just a technology. The bit that’s invasive is how you use it. So, if I use some sort of facial recognition to authenticate you to your phone, then you’re going to go, ‘oh, that’s cool. It saves me [from] remembering a password.’ And suddenly lots of people use that,” Mr Anscombe explained.

“If I put facial recognition on the door of the pub that you go in and monitor when you’re going in and out, so I know it’s you without you knowing that I know it’s you, then that would be invasive. So, it’s purely about the use of technology that needs regulating. So, AI is not bad. It doesn’t need regulating because it’s the nasty thing. It needs regulating to make sure people don’t use it wrongly.”

This places emphasis on the user to be mindful of the tech they’re using. Training can be a great way to stay across the benefits and complications that AI can cause.

However, Mr Anscombe noted that many of these systems have been used in business for years, and the general public is only now being exposed to them due to the rise of generative AI chatbots like ChatGPT.

“All you’ve got is a technology that is evolving into the public domain. AI has been in use in some form or another [for years]. To give you an example, at ESET, we’ve been using machine learning in our product for about 15 years or so. Not all of it is new, but what’s happened is you’ve suddenly got it put in front of you,” he outlined.

“So, the large language model, the ChatGPTs, the things where a member of the public can ask a question and it can intelligently come back and give us this answer in beautiful English, suddenly, that’s AI, and it’s only because it’s been presented to us in early form.”

“We’ve seen quite a few mistakes, haven’t we? Or good examples in the media where those language models have given mistakes out. So, I think it’s not new, I think it’s been around for a while. I think it’s just been presented to us in a way that most everybody can understand,” said Mr Anscombe.

This sudden rise in the more recognisable forms of AI like ChatGPT is due to organisations racing to be the first to promote their version, said Mr Anscombe.

“I think it was a race to get something into the public’s hands. So, you’ve got Google and Microsoft, two technology giants here, vying to be first into that space,” he explained.

This sudden emergence of publicly available programs has, unsurprisingly, created issues.

Mr Anscombe commented: “A lawyer in New York was in the middle of a case, and he submitted his arguments to the court and produced documents. He produced a defence that actually cited [precedents from] previous cases. And the other side of the court case came back to the judge the following day and said, these cases don’t exist.”

“The lawyer had apparently used one of the large language model search engines to actually do his research, and he had come back with these cases and he asked it whether the cases were real, and it came back and said, yes, but that’s a case of bad data in, bad data out.”

He added: “Everybody thinks [AI] is some sort of superpower, but it’s only working off the content that’s already there on the internet. It’s aggregating that content and then putting it in nice language and moulding it all together. So, if there’s rubbish out there on the internet that it’s using to index, then it’s going to produce rubbish coming out the other end, and it might not be able to define what’s actually accurate and what’s bad.”

The lesson to take home is that people must be cautious with this tech. While it may seem like a revolutionary and groundbreaking solution to all our issues, it’s only there to make our lives easier. The goal should be enhancement, not reliance.

The transcript of this podcast episode was slightly edited for publishing purposes. To listen to the full conversation with Tony Anscombe, click below:

Jack Campbell

Jack is the editor at HR Leader.