
Colorado's new artificial intelligence law is a game changer. Here's why. – The Colorado Sun

By Trish Zornio
On Friday, Gov. Jared Polis reluctantly signed Senate Bill 205 into law. The bill — one of the more controversial of the session — aims to regulate high-risk artificial intelligence to discourage discrimination, making it the first law of its kind.
For consumers, this is good news.
Artificial intelligence has wide-ranging applications, and, when applied correctly, it can help companies make better decisions more quickly. However, AI has also been shown to yield discriminatory outcomes when applied poorly, and that is the potential harm Colorado’s new law seeks to reduce. 
Examples of AI discrimination vary widely by application. In health care, studies have revealed prominent gender and racial bias, such as one study that found flawed algorithmic rankings failed to provide Black patients with equal access to additional care. In hiring, AI has produced similarly unfair employment practices, such as passing over women for technical roles and screening out applicants with disabilities such as speech impediments during automated video analysis.
There are also cases of algorithmic discrimination with legal consequences, such as Black men being falsely accused due to faulty facial recognition, minorities facing unfair mortgage denials, and women regularly paying more for car insurance than men.
Of course, this kind of systemic discrimination is unacceptable, hence the need to increase oversight of business applications. Yet it's still worth noting that for all of AI's imperfections, and there are many, research consistently shows that left to our own devices, humans still make far more mistakes and exhibit more bias than our computer counterparts. This supports the overall use and benefits of AI, assuming we can work out the kinks so our human errors don't become digitized forever.
AI discrimination often stems from some combination of poor data, statistics, design and application. As the theory goes, the real world is riddled with biases. When those biases aren't properly accounted for, they get encoded into data sets. And because AI is only as good as the data it learns from, the resulting output is biased.
For a real-world example, consider a Fortune 500 company that needs to hire a new manager. Rather than combing through endless applications by hand, 99% of Fortune 500 companies, and 83% of companies overall, now apply AI to automatically review applicants and narrow the search.
But if the AI is programmed, whether by the company or the AI designer, to compare applicants to traits of past CEOs — data that will overwhelmingly reflect older, wealthier white men — the AI is more likely to reproduce that bias, even without intent. Notably, such lack of intent to discriminate is a defining feature of Colorado's new law, which aims to provide a framework for oversight regardless of whether the bias is implicit or explicit.
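To make that mechanism concrete, here is a minimal sketch, using entirely hypothetical data and feature names, of how a screening rule fit to biased historical hires can reproduce that bias through a proxy feature, even though group membership is never an input to the rule:

```python
# Hypothetical historical data: past hiring outcomes that skew toward group "A".
# Each record: (elite_school, group, hired)
past_hires = [
    (True,  "A", True),  (True,  "A", True),  (True, "A", True),
    (False, "B", False), (False, "B", False), (True, "B", False),
]

# The simplest rule matching most past outcomes: hire if elite_school is True.
# elite_school acts as a proxy that correlates with group in the biased data,
# so the rule encodes the historical skew without ever looking at group.
def learned_screen(elite_school):
    return elite_school

# New applicant pool: (elite_school, group)
new_applicants = [
    (True, "A"), (True, "A"), (False, "A"),
    (False, "B"), (False, "B"), (True, "B"),
]

def selection_rate(group):
    """Fraction of a group's applicants that pass the learned screen."""
    pool = [a for a in new_applicants if a[1] == group]
    passed = [a for a in pool if learned_screen(a[0])]
    return len(passed) / len(pool)

print(round(selection_rate("A"), 2))  # 0.67 -- group A mostly passes
print(round(selection_rate("B"), 2))  # 0.33 -- group B mostly screened out
```

The rule never sees the applicant's group, yet the selection rates diverge, which is exactly the kind of unintentional, outcome-based discrimination the law's framework is meant to catch.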
Identifying bias may seem obvious, but in the case of big data, it often isn’t. While the human mind is capable of great computation, today’s AI is capable of evaluating far more information in a shorter period of time.
This means a company can now input thousands of seemingly random data points about an individual — what car they drive, what soda they drink, what products they buy — into an algorithm and AI can spit out an answer to almost any question in the blink of an eye. It can also find correlations where there may not be causation.
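A quick sketch, using randomly generated hypothetical attributes, shows why spurious correlations are nearly guaranteed at this scale: given enough features, some will line up with any outcome purely by chance:

```python
import random

random.seed(0)  # deterministic for reproducibility

n_people, n_features = 50, 1000

# A random yes/no outcome, and 1,000 random yes/no attributes per person
# (stand-ins for "what car they drive, what soda they drink, ...").
target = [random.random() < 0.5 for _ in range(n_people)]
features = [[random.random() < 0.5 for _ in range(n_people)]
            for _ in range(n_features)]

def agreement(feat):
    """Fraction of people for whom this attribute matches the outcome."""
    return sum(f == t for f, t in zip(feat, target)) / n_people

# Among 1,000 pure-noise attributes, the best one will "predict" the
# outcome far better than the 50% chance level -- by luck alone.
best = max(agreement(f) for f in features)
print(best > 0.6)  # True: a meaningless attribute looks predictive
```

None of these attributes has any causal link to the outcome, yet at least one appears strongly predictive, which is why disclosure of the criteria behind high-risk decisions matters.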
This is another area Colorado’s new law will help address. By requiring companies to disclose the criteria used in their assessments for high-risk decisions such as mortgage approvals or employment — a stark departure from current privacy practices — consumers now have the right to be informed and potentially correct the error.
Additionally, the bill mandates disclosure and oversight of algorithms through the state’s Attorney General’s office, allowing for recourse in the case of chronic bad actors.
Should the law have been passed? Absolutely. We’re already behind the 8-ball when it comes to regulating AI. Pandora’s box has already been opened, but better late than never. Besides, the bill will apply pressure at the federal level for national oversight, and that’s a win for everyone.
That said, Polis is right to exercise some caution, as some aspects of the law may need to be honed over time. High on the list for immediate review is the definition of what constitutes "high-risk AI."
This is particularly relevant because many applications of AI have immense impact yet may not qualify under the current definition, such as AI in social media, ad buying, political campaigns, dating apps and more.
The law will also likely need stronger accommodations for small and mid-size companies, as well as a clearer accounting of the penalties for violations.
It’s worth noting a few more things about the passage of this AI bill. First, for as much as Polis may get attention for having signed the bill into law, Attorney General Phil Weiser arguably deserves more of the credit.
Not only has Weiser repeatedly acknowledged the need for stronger oversight of AI at state and federal levels, but he publicly backed this bill ahead of Polis. These actions arguably tipped the scales, encouraging a reluctant governor widely anticipated to veto the bill to ultimately support the measure.
Second, arguments against the bill citing costs and legal exposure are overblown. The whole point of AI is to implement systems that save substantial time and money, making the added costs of oversight a drop in the bucket. As for legal risks, these too should be taken with a grain of salt: being required to document efforts to reduce discrimination is a benefit, not a liability, unless you're a bad actor.
Which brings us to a closing consideration. Any company that wishes to leverage AI for business gain without a willingness to invest in the fair practice of such algorithms has no business deploying AI software, and it is precisely these companies the law targets most, for good reason. In practical terms, companies knowingly in breach of the new law have two years to make amends.

Consider yourself warned.
Trish Zornio is a scientist, lecturer and writer who has worked at some of the nation’s top universities and hospitals. She’s an avid rock climber and was a 2020 candidate for the U.S. Senate in Colorado. Trish can be found on Twitter @trish_zornio
The Colorado Sun is a nonpartisan news organization, and the opinions of columnists and editorial writers do not reflect the opinions of the newsroom. Read our ethics policy for more on The Sun’s opinion policy. Learn how to submit a column. Reach the opinion editor at opinion@coloradosun.com.
Trish Zornio was born in the mountains of rural northern New Hampshire and spent her teens and 20s traveling the U.S. and abroad in addition to formal studies, living in North Carolina, Michigan, Oregon, California, Colorado and for extended…
The Colorado Sun is an award-winning news outlet based in Denver that strives to cover all of Colorado so that our state — our community — can better understand itself. The Colorado Sun is a 501(c)(3) nonprofit organization. EIN: 36-5082144

