October 2023

The Clock Struck 13: Regulating AI Before It’s Too Late

When considering humanity’s future with the advent of Artificial Intelligence (AI), most people imagine a dystopia in which machine overlords, by force or by economics, drive humans into oblivion. This fear is unfounded. There are genuine concerns we should have about AI, but they are more procedural than theatrical. AI users, policymakers, and AI researchers should focus their efforts on solving AI’s problems of transparency, fairness, and accountability through practical legislation that promotes innovation, protects privacy, and ensures AI development is ethical.

Transparency
There exists a natural tension between AI researchers and AI consumers. Researchers and companies want to keep the data and processes used to develop new systems proprietary, both to protect their intellectual property and to shield themselves from public backlash over questionable practices. Consumers, and basic ethics, demand that development processes be transparent to protect against malpractice. Furthermore, consumers have a fundamental right to understand how the technology they use works. When specific, well-considered guidelines are established and enforced, companies act appropriately, their interests are protected, and further innovation is encouraged. If AI companies subjected themselves to external, confidential peer-review data audits, they could limit access to their trade secrets while maintaining stronger industry safeguards. Information about the AI development process also needs to be made publicly available. As people gain a greater understanding of AI, it will become a more powerful, more ethical, and safer tool.

Fairness
In this context, fairness refers to ensuring that the decisions made by AI systems do not discriminate against individuals or groups based on factors such as race, gender, or socioeconomic status. The AI training process requires terabytes of data, hours of computing time, and a carefully designed training model. That data may contain entries from 20 to 50 years ago. If the data provided to the AI reflects bias against a group of people and the training model isn’t well designed, the AI will replicate that bias.

As AI becomes increasingly important in making decisions in critical yet sensitive areas like employment, lending, education, housing, healthcare, and criminal justice, this potential unfairness could be crippling. For example, an AI system used to screen job applicants may unintentionally discriminate against women if it is trained to replicate historical hiring decisions that reflect gender bias. Today, if someone applies to a large company or university, it is almost certain their application will be screened by a bot trained on old data, making this problem increasingly prevalent.

There are many ways to manage unfairness in an AI program. Developers can use data scrubbing, removing unnecessary, inappropriate, or biased entries from the data used to train the AI. Researchers can apply prejudice testing once a model is complete to see how it handles data from diverse groups of people. Programmers can also adopt development methods that reduce the chance of prejudice in the first place. A sketch of what such a prejudice test might look like follows.
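To make prejudice testing concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the hiring-screen model and its predict() method stand in for whatever system a company has built, and the applicant records are assumed to be (features, group) pairs. The 80% threshold comes from the “four-fifths rule,” a common heuristic in U.S. employment-selection guidance, not a universal legal standard.

    from collections import defaultdict

    def selection_rates(model, applicants):
        """Fraction of applicants from each group that the model advances.

        `model` and `applicants` are hypothetical: the model exposes a
        predict(features) method returning 1 to advance an applicant,
        and each applicant record is a (features, group) pair."""
        advanced = defaultdict(int)
        totals = defaultdict(int)
        for features, group in applicants:
            totals[group] += 1
            if model.predict(features):  # 1 = advance to interview
                advanced[group] += 1
        return {group: advanced[group] / totals[group] for group in totals}

    def passes_four_fifths_rule(rates):
        """Flag disparate impact: every group's selection rate should be
        at least 80% of the highest group's rate."""
        highest = max(rates.values())
        return all(rate >= 0.8 * highest for rate in rates.values())

An auditor could run a check like this on held-out applicant data before a model is deployed; failing the threshold would trigger data scrubbing or retraining rather than certification.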

Accountability
Accountability in AI means ensuring that individuals or organizations are held responsible for how they develop and deploy machine learning algorithms. The complexity of this is demonstrated by a simple example. Suppose a fully autonomous vehicle, one that allows the user to sleep or watch a movie while the car drives, is involved in an accident. Who is responsible for the damages? The manufacturer that built the car, the software developer who made the AI, or the user who was present during the incident? If the damages were always traced back to the developer, as they are now, no one would develop fully autonomous vehicles. These organizations need to answer for the decisions they make and for the negative consequences that may result from their AI systems, but they also need reasonable protection, or they will continue to stymie their own progress.
The law needs to reflect the reality that innovation is built on stability and predictability, as well as the fact that companies must be held responsible for the decisions they make. The government could greatly assist the development of AI by creating a regulatory body that oversees AI and technology [1]. This new administration could approve AI programs, absolving companies of most damages that result from AI “side effects,” provided the company 1) adequately knows and informs the user of the risks associated with the AI system, 2) proves that the AI can perform a specific task on par with or better than a human in a sort of task-specific “Turing test,” and 3) proves that it followed all appropriate procedures in the development of the AI, such as those described above. A sketch of what criterion 2 might look like in practice follows.
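As an illustration only, here is a minimal sketch of how criterion 2, the task-specific “Turing test,” might be checked. The task, the counts, and the optional safety margin are all hypothetical; a real regulator would specify the benchmark, sample sizes, and statistical methodology.

    def passes_task_turing_test(ai_correct, human_correct, total, margin=0.0):
        """True if the AI's accuracy on a benchmark task meets or exceeds
        the human baseline, plus an optional safety margin.

        All inputs are hypothetical: counts of correctly handled cases
        for the AI and for human reviewers on the same benchmark."""
        ai_accuracy = ai_correct / total
        human_accuracy = human_correct / total
        return ai_accuracy >= human_accuracy + margin

    # Hypothetical example: the AI labels 9,210 of 10,000 cases correctly;
    # human reviewers label 9,050 of the same cases correctly.
    print(passes_task_turing_test(9210, 9050, 10000))  # True

The margin parameter reflects a design choice a regulator would face: whether merely matching human performance is enough, or whether certification should demand a measurable improvement.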

Conclusion
Such an organization could fund and publish AI research and educate the general public on how AI works. It could require companies to undergo peer-review data audits, scrub their data, test for prejudice, use development methods that reduce the chance of prejudice, inform users of risks and appropriate use, and prove their products can pass task-specific “Turing tests.” Together, these measures would reduce the potential for unethical practice without reducing the potential for innovation within this determinative field. AI is the future. What do you want that future to look like?

Footnotes

[1] For this, we can look to the past and apply principles from the FDA. After years of false advertising and medications that made people sicker rather than healthier, the U.S. government created the FDA to regulate the production of food and drugs. The FDA applies three fundamental tests to determine whether a product can be approved and absolved of liability for side-effect damages. First, the company must adequately know and inform the user of the risks associated with the medication. Second, the company must prove that the medication helps the condition more than a placebo does. And third, the company must follow all regulations and best practices in the development, production, and distribution of the medication.