Kimberly Hamilton
Jan 25, 2023

A Look At The Legal Intersection Of AI And Life Sciences

By Ariel Soiffer, Elijah Soko and Paul Lekas (January 20, 2023)

This article was not written by ChatGPT. Will all articles have to start with a statement like this? And will any statement like this be true?

ChatGPT uses artificial intelligence, or AI, to develop written work product. While this application of AI has grabbed the news, there are many other exciting applications of AI, including in the domain of life sciences.

In this article, we start by defining AI in the context of data, algorithms and AI systems. Next, we touch on leading regulatory efforts in the U.S. and abroad, followed by a brief overview of some key issues in compliance. After that, we assess the intersection of AI and intellectual property law. And finally, we mention some of the applications of AI in life sciences.

Artificial Intelligence
AI starts with big data, which refers to large data sets that often come from multiple sources. These data sets include a substantial number of entries, or rows, each with many attributes, or columns.

All of this data is analyzed in models that are used to explain, predict or influence behavior. Generally, models become more accurate when developed using more data, although the relationship between model accuracy and the amount of data is often nonlinear.

The Organization for Economic Cooperation and Development defines an
AI system as "a machine-based system that can, for a given set of
human-defined objectives, make predictions, recommendations, or
decisions influencing real or virtual environments."

AI systems are designed to operate with varying levels of autonomy. As a result, AI systems may perform human-like tasks without significant oversight, or they can learn from experience and improve their performance when exposed to data sets.

Frequently, an AI algorithm produces a model from a big data set over time, and that model can be used as a standalone predictive device. Naturally, the output of AI will only be as good as the input data sets.

Machine learning is a subset of AI. Machine learning is an iterative process of modifying algorithms — step-by-step instructions — to better perform complex tasks over time.

In other words, machine learning applies an algorithm to improve an original algorithm's performance, often checking the output of an analysis in the real world and using the output to iteratively refine the analysis for future inputs. Effectively, machine learning evolves the original algorithm based on analysis of additional inputs.
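Purely as an illustration (not drawn from the article), the refinement loop described above can be sketched in a few lines of Python. All names here are hypothetical: a toy model predicts y from x with a single weight, compares its predictions against observed outcomes, and uses each error to adjust the weight for future inputs.

```python
# Toy sketch of machine learning's iterative loop: fit y ≈ w * x by
# repeatedly checking predictions against real outcomes and nudging
# the parameter w to reduce the observed error.

def refine(w, observations, learning_rate=0.01):
    """One pass: adjust weight w using the prediction error on each (x, y) pair."""
    for x, y in observations:
        error = w * x - y               # how far the current model is off
        w -= learning_rate * error * x  # move w to shrink that error
    return w

# Synthetic observations generated by the "true" rule y = 3 * x.
data = [(x, 3.0 * x) for x in range(1, 11)]

w = 0.0                  # start with an uninformed model
for _ in range(50):      # each iteration refines the previous model
    w = refine(w, data)

print(round(w, 2))       # converges toward the true weight, 3.0
```

Each pass through the data plays the role of "checking the output of an analysis in the real world": the errors observed on past inputs evolve the original algorithm's parameter so later predictions improve.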

The AI Regulatory Landscape

AI systems analyze large data sets and produce predictions and recommendations that often have real-world impact in areas as varied as hiring, fraud prevention and drug discovery. This breadth of applications has attracted significant attention from policymakers and regulators, and as a result the AI-focused legal and regulatory landscape is changing quickly.

At the state level, bills or resolutions relating to AI were introduced in at least 17 states in 2022. However, only a few states enacted laws in 2022 — just Colorado, Illinois, Vermont and Washington — and each was focused on a narrow application of AI.

While there is currently no horizontal federal regulation of AI, many generally applicable laws and regulations apply to AI, including in many life sciences contexts. These include the Health Insurance Portability and Accountability Act, which protects personal health data; Federal Trade Commission regulations against unfair or deceptive trade practices; and the Genetic Information Nondiscrimination Act, which prevents requesting genetic information in some cases.

Federal regulatory efforts on AI are focused on sector-specific regulations, voluntary
standards and enforcement.

As an example of sector-specific regulations, the U.S. Food and Drug Administration has rules regarding medical devices that incorporate AI software to ensure the safety of those medical devices.

As an example of voluntary standards, the National Institute of Standards and Technology is finalizing a framework to better manage risks to individuals, organizations and society associated with AI. The NIST risk management framework represents the U.S. government's leading effort to provide guidance for the use of AI across the private sector.

The FTC has indicated an interest in pursuing enforcement action based on algorithmic bias and other AI-related concerns, including models that reflect existing racial bias in health care delivery. Relatedly, the White House Office of Science and Technology Policy has created a blueprint for an AI Bill of Rights, citing health as a key area of concern for AI systems oversight.

Outside the U.S., the AI regulatory landscape is also developing rapidly.

For example, the European Union is finalizing the Artificial Intelligence Act, which would regulate AI horizontally — across all sectors — and is likely to have a significant global impact, much like what occurred with privacy laws.

The EU approach focuses on high-risk applications of AI, which may include applications in life sciences and related fields. Further, the U.S. and EU, through the U.S.-EU Trade and Technology Council, have developed a road map that aims to guide approaches to AI risk management and trustworthiness based on a shared dedication to democratic values and human rights.

Key Issues in AI Compliance