While AI is poised to disrupt our work and lives, these technologies can be harnessed through wise regulation. Rather than replacing individuals, much AI should assist them in completing more fulfilling tasks, or augment work that is often classified as professional.

“Artificial intelligence (AI) has a proper substitutive role – it can ensure that difficult, dirty and dangerous work is done more and more by machines and less and less by human beings,” says Professor Frank Pasquale from Brooklyn Law School. “But mainly, it should be complementing persons in many complex jobs, rather than substituting for them.”

Can robots replace human intelligence?

Should people be taking more courses in computer science or other technical fields that will help them understand AI better?

“Yes, but I don't think they should replace existing courses. I think that we need to have a more expanded view of education,” says Prof. Pasquale, who will discuss how AI can promote inclusive prosperity at the annual lecture for The Allens Hub for Technology, Law and Innovation at UNSW Sydney on Tuesday 20 October at 11am.

AI can play an important role in education, helping teachers plan and deliver courses – but it is not a replacement for them.

Similarly, in the tax arena, shifting the focus to taxing robots is not the right solution. “When it comes to understanding the nuances in tax policies, taxing robots is not ideal,” he says.

“We need to have a more well-rounded tax policy that is looking at excessive capital income being taxed, rather than the machines being used to generate that.”

It is also important to go beyond universal basic income discussions, Prof. Pasquale adds: “I think universal basic services are needed, not just universal basic income (UBI), because it's important for the future of labour policy.”

Does AI still need human intervention?

What's really surprising over the past few years, according to Prof. Pasquale, is that tasks many assumed would be completed by AI (self-driving cars, for example) are nowhere near ready to be adopted en masse.

The self-driving car is itself a warning about the issues with AI. “First, there are the basic technical problems that occur,” he says.

"Secondly, will we have to redesign roads and traffic patterns to accommodate self-driving cars? Do we rely less on public transport, or develop other forms of public transit?

“All of those are not just technical questions, but they're very deeply political and social questions that involve a lot more than just AI.”

How can AI transform the field of law?

For many simple transactions, such as granting a licence to hunt or fish, AI has the potential to perform work currently undertaken by legal personnel.

“But there are also areas where AI can be beneficial by assisting current attorneys in deciding which case to bring in, deciding the types of arguments and doing better legal research,” Prof. Pasquale says.

“In various places that are purely financial, I could see another displacement effect where, like in tax law, it might be that we code taxes as opposed to simply having a verbal description of them.”
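To make concrete what ‘coding taxes’ could look like, here is a minimal rules-as-code sketch in Python: a progressive tax schedule expressed as executable rules rather than a verbal description. The brackets, rates and function names below are illustrative assumptions, not any actual tax schedule.

# Hypothetical 'rules as code' sketch: a progressive tax schedule written
# as executable rules instead of a verbal description. Illustrative only.
BRACKETS = [               # (upper bound of bracket, marginal rate)
    (18_200, 0.00),
    (45_000, 0.19),
    (120_000, 0.325),
    (float("inf"), 0.45),
]

def tax_payable(income: float) -> float:
    """Apply each marginal rate to the slice of income within its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return round(tax, 2)

print(tax_payable(60_000))  # 9967.0 under these illustrative brackets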

Australia’s Robodebt scheme, for example, highlighted that large-scale automation cannot succeed without human oversight.

“We have to always keep an eye on it. We’ll see a lot of enthusiasm about automating law, but it's going to be a slow process,” Prof. Pasquale says.

AI in business

AI is playing an increasingly important role in business. Some companies, for example, have started using AI in recruitment, analysing candidates’ facial expressions, tone of voice and rate of speech in job interviews.

Such AI is said to be able to determine whether an interviewee would make a good employee, based on the expressions, facial patterns and tone of voice of current employees.

“I think that's troubling because people deserve to be judged on the basis of known characteristics and known qualities, and not on an AI that's cross-correlating thousands or millions of variables with existing employees,” Prof. Pasquale says.

Prof. Pasquale refers to this kind of system as an ‘alien intelligence’: a black-box, opaque analysis that resists explanation in human terms.

However, even black-box, opaque AI can be beneficial in the medical sector, particularly in helping doctors improve their diagnoses.

Looking at the Australian healthcare system, Prof. Pasquale says it would make sense to invest in AI capacity, particularly using data that Australia already has.

“Part of the future of the industrial policy of AI is convincing governments to invest in homegrown talent, and to avoid just doing the easy thing and outsourcing the work to Google or Facebook or any other big organisation,” he says.

"It is really interesting to see how the use of AI can be prioritised in a country’s industrial strategy”.

While AI that is poorly planned and timed has the potential to contribute to job losses, medical AI has the power to make labour both easier and more valuable.

“I think that if we can envision a future where AI is permitted to increase productivity, and increase the value of labour, then we are on a path to a much happier outcome for everybody,” Prof. Pasquale says.

Prof. Pasquale will further discuss how AI can promote inclusive prosperity at the annual lecture for The Allens Hub for Technology, Law and Innovation at UNSW Sydney on Tuesday 20 October at 11am. You can register for the online event here.

Professor Frank Pasquale


Prof. Pasquale grew up in Oklahoma and Arizona in the United States and completed his studies at Harvard University, Yale Law School and Oxford University. He practised law prior to becoming a law professor. Prof. Pasquale is the Chair of the Subcommittee on Privacy, Confidentiality and Security of the US National Committee on Vital and Health Statistics. His book The Black Box Society analyses big tech and finance, while New Laws of Robotics is a blueprint for better integrating technology into society.


Dawn Lo