“He Did It!” - Are human creators to blame for AI issues?

What has happened?
In February 2020, the Chinese company Baidu opened its LinearFold AI algorithm to scientific and medical teams working to fight COVID-19. LinearFold predicts the secondary structure of a virus’s ribonucleic acid (RNA) sequence, and does so significantly faster than traditional RNA-folding algorithms: it predicted the secondary structure of the SARS-CoV-2 RNA sequence in only 27 seconds, 120 times faster than other methods. [1] This is significant because the key breakthrough behind the COVID-19 vaccines has been the development of messenger RNA (mRNA) vaccines. Unlike conventional vaccines, which insert a small portion of a virus to trigger a human immune response, mRNA vaccines teach cells how to make a protein that prompts an immune response, greatly shortening the time needed for development and approval. [2]
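For readers curious about what “predicting secondary structure” involves computationally, the sketch below shows a classic textbook approach: a Nussinov-style dynamic programme that maximises the number of complementary base pairs in a short RNA string. This is only an illustrative toy under simplified assumptions (the example sequence, minimum loop length, and pairing rules are ours), not Baidu’s LinearFold algorithm, which uses far more sophisticated, linear-time techniques.

```python
# Toy Nussinov-style dynamic programme: maximise complementary base pairs.
# Illustrative only; NOT Baidu's LinearFold algorithm.

# Watson-Crick and wobble pairs commonly allowed in simple models
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
MIN_LOOP = 3  # minimum number of unpaired bases inside a hairpin loop


def max_pairs(seq: str) -> int:
    """Return the maximum number of nested base pairs for seq under the toy model."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(MIN_LOOP + 1, n):          # distance between i and j
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # option 1: leave j unpaired
            for k in range(i, j - MIN_LOOP):     # option 2: pair j with some k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0


if __name__ == "__main__":
    # A short fragment purely for illustration; real viral genomes run to ~30,000 bases.
    print(max_pairs("GGGAAAUCC"))  # prints 3
```

Real tools like LinearFold score thermodynamic stability rather than simply counting pairs, and they avoid the cubic running time of this naive recurrence, which is why they can handle a full viral genome in seconds.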
What does this mean?
AI systems may make mistakes. LinearFold, for instance, could optimise an mRNA sequence design inaccurately, potentially leading to an ineffective vaccine or even harm to patients. If that occurs, questions about the attribution of blame will arise. Whose fault is it if an AI algorithm makes a decision that causes harm?
What are the legal impacts?
As Professor Lena Wahlberg of Lund University notes, strict liability (where the producer bears full responsibility regardless of fault) may be unfair to manufacturers and producers of AI systems, who very often strive to ensure that their products are of the highest quality. [3] An AI company targeted by a products liability lawsuit will therefore assert multiple defences and claim that the AI algorithm is not flawed.
Nevertheless, companies should generally not be able to escape liability by blaming the AI-driven evolution of algorithms that they originally designed. If companies want to reap the benefits of intelligent algorithms, they also need to be willing to accept the inherent risks. AI enables learning, and therefore automated post-sale changes to the algorithm aimed at improving its performance. But the anticipation of those future benefits is present at the time of the original sale and will be reflected in marketing strategies and in product pricing. Thus, although a company at the time of sale will not know precisely how the AI algorithm might evolve, the fact that it will evolve is portrayed as an asset to prospective customers. [4] If machine learning then occurs in a manner that renders the product harmful, the company needs to bear responsibility for that too.
However, as Professor Wahlberg notes, this may stifle the development of AI: placing strict liability on companies will inevitably deter them from enhancing, upgrading, and inventing new AI systems for fear of incurring liability for problems arising from AI decisions. [5] On the other hand, not imposing strict liability may leave some AI companies with little incentive to improve their systems and reduce product defects.
This topic is debatable. At LawMiracle, we think that because AI systems are here to stay, it is imperative that strict liability be imposed on manufacturers to ensure that their products are well designed and closely monitored, so that the AI’s machine learning does not lead it to make defective decisions.

By Nickolaus Ng
Assessing Firms:
#DLAPiper #Kennedys
Footnotes:
[1] Baidu, ‘These Five AI Developments will shape 2021 and Beyond’ (MIT Technology Review, 14 January 2021) <https://www.technologyreview.com/2021/01/14/1016122/these-five-ai-developments-will-shape-2021-and-beyond/>
[2] ibid
[3] Lena Wahlberg, ‘Legal Ontology, Scientific Expertise and The Factual World’ (2016) Journal of Social Ontology
[4] John Villasenor, ‘Products Liability Law as a way to address AI harms’ (Brookings Institution, 31 October 2019) <https://www.brookings.edu/research/products-liability-law-as-a-way-to-address-ai-harms/#footnote-1>
[5] Wahlberg (n 3)