
Military, medical questions hang over AI ethics debate in Vancouver

Society unprepared for ethical dilemmas from artificial intelligence, say experts
Experts debated the ethics of artificial intelligence systems at Microsoft’s Vancouver office on November 14. From centre left, Microsoft general manager of AI programs Tim O’Brien, IP lawyer and patent agent Maya Medeiros, and radiologist and Emtelligent CEO Tim O’Connell | Photo: Tyler Orton

One question occupying radiologist Tim O’Connell’s thoughts recently is what happens to historical medical data as artificial intelligence (AI) creates technology that can, for example, detect lung cancer.

“Should we not run it on the archives of chest X-rays that are sitting in every hospital in the world to find lung cancers that were missed?” said O’Connell, who also serves as CEO of Emtelligent Software Ltd.

The company has developed medical natural language processing technology to better understand data in medical records, and commissions physicians to review notes to decode the sometimes head-scratching language used by fellow doctors.

“I think we should do anything that improves patient care, but I think we need some guidance from policy-makers and ethicists and, certainly, lawyers to help us understand the implications of what these things are,” said O’Connell, who was among a group of panellists debating the ethics of AI November 14 at Microsoft Corp.’s (Nasdaq:MSFT) office in downtown Vancouver.

Some B.C. organizations have already been taking the lead with these ethical issues.

The Generation R consultancy, which was recently folded into the Open Roboethics Institute based at the University of British Columbia, has developed ethics guidelines that can be applied to software.

It bills this AI ethics road map as a first of its kind, identifying ethical risks associated with its clients’ AI-powered software and recommending ways to mitigate those risks.

Technical Safety BC, which oversees the installation and operation of technical systems in the province, was the first to use the guidelines to more accurately predict technical safety hazards.

But will it be up to policy-makers to police the future of AI, or will the industry veer toward self-regulation?

“We’re already seeing a combination of things, so there will be some self-regulation, and there are hundreds of ethics guides or declarations to use AI for good already targeting different groups,” said panellist Maya Medeiros, an intellectual property lawyer and patent agent at Norton Rose Fulbright Canada LLP and a director of the AIinBC industry group.

Her main concern is with regulations targeting a specific technology that don’t properly define the issues at hand.

“The history of law and technology has been one in which the law has struggled to keep up with technology,” said panellist Tim O’Brien, Microsoft’s general manager of AI programs.

He cited last year’s launch of the European Union’s General Data Protection Regulation (GDPR) – online privacy rules widely considered to be the strongest in the world – as the only case in recent memory where technology has taken a cue from policy-makers as opposed to the other way around.

Microsoft and a wide range of other global companies have since built GDPR compliance into their offerings.

But would all global organizations be able to agree on a universal set of parameters for deploying ethical guidelines into AI systems?

“Military is a very different world because the worst thing that could happen there is battlefield asymmetry. You could wind up in a battlefield against an adversary who does not have an ethics framework, that has spent zero time thinking about it,” O’Brien said.

To get ahead of these issues, Medeiros said the U.S. Department of Defense should take a leadership position in clearly defining autonomous weapons so other countries can follow suit.

Meanwhile, the ethics guide used by Technical Safety BC also considers the effects of automation on people’s jobs, an ethical issue that, O’Brien acknowledged, has recurred throughout history as technology has taken over duties once performed by human workers.

“The fundamental difference here is the speed,” he said, citing jobs related to call centres and quick-service food restaurants as the most at risk.

Jobs requiring uniquely human attributes that cannot be replicated will be best positioned to weather the storms of job displacement, O’Brien said.

“[AI] can probably tell the difference between a smile and a frown. It cannot tell you the difference between happiness and sadness.”

[email protected]

@reporton