Law Would Require Algorithms Used for Hiring Employees to be “Audited” for Bias (New York City)

By B.N. Frank

Experts have warned for years about Artificial Intelligence (A.I.) technology (see 1, 2, 3) and the use of algorithms. Shameful as well as heartbreaking examples of inaccuracies continue to be reported (see 1, 2, 3, 4). Nevertheless, this technology is still being used by some employers to hire as well as fire employees. However, in New York, employers may soon face scrutiny for this practice.

From Ars Technica:


The movement to hold AI accountable gains more steam

First-in-US NYC law requires algorithms used in hiring to be “audited” for bias.

Algorithms play a growing role in our lives, even as their flaws are becoming more apparent: a Michigan man wrongly accused of fraud had to file for bankruptcy; automated screening tools disproportionately harm people of color who want to buy a home or rent an apartment; Black Facebook users were subjected to more abuse than white users. Other automated systems have improperly rated teachers, graded students, and flagged people with dark skin more often for cheating on tests.

Now, efforts are underway to better understand how AI works and hold users accountable. New York’s City Council last month adopted a law requiring audits of algorithms used by employers in hiring or promotion. The law, the first of its kind in the nation, requires employers to bring in outsiders to assess whether an algorithm exhibits bias based on sex, race, or ethnicity. Employers also must tell job applicants who live in New York when artificial intelligence plays a role in deciding who gets hired or promoted.

In Washington, DC, members of Congress are drafting a bill that would require businesses to evaluate automated decision-making systems used in areas such as health care, housing, employment, or education, and report the findings to the Federal Trade Commission; three of the FTC’s five members support stronger regulation of algorithms. An AI Bill of Rights proposed last month by the White House calls for disclosing when AI makes decisions that impact a person’s civil rights, and it says AI systems should be “carefully audited” for accuracy and bias, among other things.

Elsewhere, European Union lawmakers are considering legislation requiring inspection of AI deemed high-risk and creating a public registry of high-risk systems. Countries including China, Canada, Germany, and the UK have also taken steps to regulate AI in recent years.

Julia Stoyanovich, an associate professor at New York University who served on the New York City Automated Decision Systems Task Force, says she and students recently examined a hiring tool and found it assigned people different personality scores based on the software program with which they created their résumé. Other studies have found that hiring algorithms favor applicants based on where they went to school, their accent, whether they wear glasses, or whether there’s a bookshelf in the background.

Stoyanovich supports the disclosure requirement in the New York City law, but she says the auditing requirement is flawed because it applies only to discrimination based on gender or race. She says the algorithm that rated people based on the software used to create their résumé would pass muster under the law because it didn’t discriminate on those grounds.

“Some of these tools are truly nonsensical,” she says. “These are things we really should know as members of the public and just as people. All of us are going to apply for jobs at some point.”

Some proponents of greater scrutiny favor mandatory audits of algorithms similar to the audits of companies’ financials. Others prefer “impact assessments” akin to environmental impact reports. Both groups agree that the field desperately needs standards for how such reviews should be conducted and what they should include. Without standards, businesses could engage in “ethics washing” by arranging for favorable audits. Proponents say the reviews won’t solve all problems associated with algorithms, but they would help hold the makers and users of AI legally accountable.
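Neither the city law nor the congressional draft spells out what an audit must compute, which is exactly the standards gap described above. One long-standing yardstick in US employment practice is the EEOC’s “four-fifths rule”: if any group’s selection rate falls below 80 percent of the highest group’s rate, the process is flagged for possible adverse impact. The Python sketch below shows how an auditor might compute that ratio from screening outcomes. The group labels and data are hypothetical, and this is an illustration of one possible metric, not the procedure the New York law prescribes.

# Illustration only: one metric a bias audit might report, the
# "four-fifths rule" adverse-impact ratio from US employment
# guidance. Group labels and outcome data are hypothetical.
from collections import Counter

def selection_rates(outcomes):
    # outcomes: list of (group, hired) pairs, where hired is a bool
    applied = Counter(group for group, _ in outcomes)
    hired = Counter(group for group, ok in outcomes if ok)
    return {g: hired[g] / applied[g] for g in applied}

def adverse_impact_ratios(outcomes):
    # Each group's selection rate divided by the highest group's rate;
    # values under 0.8 are the conventional red flag.
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening results: 100 applicants per group.
sample = ([("group_a", True)] * 60 + [("group_a", False)] * 40 +
          [("group_b", True)] * 35 + [("group_b", False)] * 65)

print(adverse_impact_ratios(sample))
# {'group_a': 1.0, 'group_b': 0.583...} -> group_b falls below 0.8

As Stoyanovich’s font example suggests, a tool could pass a check like this for every protected group and still score applicants on signals that have nothing to do with job performance.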

A forthcoming report by the Algorithmic Justice League (AJL), a private nonprofit, recommends requiring disclosure when an AI model is used and creating a public repository of incidents where AI caused harm. The repository could help auditors spot potential problems with algorithms and help regulators investigate or fine repeat offenders. AJL cofounder Joy Buolamwini coauthored an influential 2018 audit that found facial-recognition algorithms work best on white men and worst on women with dark skin.

The report says it’s crucial that auditors be independent and results be publicly reviewable. Without those safeguards, “there’s no accountability mechanism at all,” says AJL head of research Sasha Costanza-Chock. “If they want to, they can just bury it; if a problem is found, there’s no guarantee that it’s addressed. It’s toothless, it’s secretive, and the auditors have no leverage.”


Read full article


Activist Post reports regularly about A.I. and other unsafe technology. For more information, visit our archives.

