
Can AI Help Gender Diversity Help AI?

Machine learning could help solve the gender disparity within the AI field itself.

It's no secret there's a gender gap in the tech industry: women earn only about 20 percent of bachelor's degrees in engineering and computer science.

The great irony is that AI technology being honed and implemented right now could actually help increase diversity within the field itself, as tech companies leverage machine learning programs to pinpoint unconscious gender bias in the workplace.

A slate of machine learning programs on the market utilize data and algorithms to spot diversity blind spots and help companies fill in the gaps. But eradicating bias isn't just politically correct; increasing gender diversity could change the face of AI research as well.

There's a new theory floating around the engineering and computer science industries that women are far more likely to enroll and stay invested in the field if the work being produced is more societally meaningful. Programs that focus on humanistic applications for the greater good perform remarkably better where diversity is concerned: A new UC Berkeley Ph.D. program in development engineering boasted a 50 percent female enrollment rate in its inaugural 2014 class, and MIT's D-Lab, which aims to build technology to improve the lives of the impoverished, is 74 percent female.

With this in mind, Olga Russakovsky, a Ph.D. student at Stanford's Artificial Intelligence Laboratory, created SAILORS, the US's first artificial intelligence summer camp exclusively for girls, which focuses on using machine learning for the greater good.

"Diversity in teams brings diversity in thought, which brings better outcomes"

"It's become uncool in AI research to disagree with [needing more diversity], but many people I've talked to don't truly understand why we need it. They don't understand that diversity in teams brings diversity in thought, which brings better outcomes," Russakovsky told Motherboard. She was quick to point out that the gender bias in the industry isn't willful so much as diversity remains an afterthought far more than it is a goal.

"There has been research that shows that women and minorities tend to be motivated by humanitarian applications of science," Russakovsky said. "So we decided that… we were going to focus specifically on teaching AI from a humanitarian point of view." The AI camp received over 300 applications for just 24 spots.

The girls were given different machine learning projects, each with the end goal of addressing a larger societal issue. The teens created a program that could recognize when someone walked into a hospital room and didn't wash their hands: a computer vision system designed specifically to cut down on disease and improve patient outcomes. They also built a natural language processing algorithm that mined tweets to aid in disaster relief and a program that used gene expression data to detect cancer progression, and they tackled AI's favorite pet project, routing self-driving cars for increased mobility.
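
Under the hood, a project like the tweet triage tool is a standard supervised text-classification problem. Below is a minimal sketch of that kind of pipeline, assuming labeled example tweets; the data and labels are invented for illustration and are not the campers' actual code.

```python
# A minimal sketch of a disaster-relief tweet classifier: tf-idf features
# feeding a logistic regression. The tiny dataset is fabricated for the demo.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "trapped under debris near 5th and main, please send help",
    "water rising fast, family stranded on our roof",
    "thoughts and prayers to everyone affected tonight",
    "watching the storm coverage on tv, stay safe folks",
]
labels = [1, 1, 0, 0]  # 1 = actionable request for aid, 0 = general chatter

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

# New tweets can now be triaged automatically.
print(model.predict(["roof collapsed, please send help"]))  # likely [1]
```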

However, recruiting more women to the field is just a small part of fixing the gender disparity in engineering. It's not enough to get women in the door of computer science programs if the culture inside makes them want to run screaming back out.

Retention has become the new mantra for diversity. As many as 50 percent of women in science, technology, and engineering are expected to leave the industry, driven out by hostile work environments, a lack of support systems, and a culture of gender bias, whether overt or unconscious.

Facebook now puts employees through unconscious bias training, with sessions focused on the tradeoffs between likability and competence, performance attribution, impressions, and stereotypes. Google has every employee take a 90-minute course on bias, supplemented by periodic workshops to keep the lessons fresh.

Training and awareness are good goals, to be sure, but with Facebook and Google's diversity numbers hardly increasing year over year, awareness only goes so far. That's where AI enters the picture. If humans are fallible even when they don't intend to be, perhaps algorithms that get to know how people and companies think over time can help fill in the gaps.

"The three pillars of diversity in tech are recruiting, retention, and bias. We focus on retention; helping companies analyze their weak spots using big data," said Eileen Carey, co-founder of Glassbreakers, a female-run company that sells a software that utilizes machine learning to help companies identify their diversity blind spots.

The company is bullish on its ultimate goal: to make the C-suite at least 50 percent female and to improve female retention across an industry that leaks women at every level.

"And it's not just about the numbers," Carey told Motherboard. "Corporations are statistically more successful the more women and minorities they have in leadership roles, and culture drastically changes for the better when that's the case."

Glassbreakers is by no means alone in the race to fix diversity issues with machine learning. Companies like Textio focus on the recruitment process, helping businesses analyze job postings for bias.

Textio offers a word-processing tool that scans companies' job descriptions, identifies how certain words and phrases might read to candidates, and helps scrub bias from the listings. While phrases such as "proven track record of success" and "good under pressure" might seem par for the course in job descriptions, they also lead to a disproportionate number of male candidates, Textio says.

By quickly scanning a job listing, the tool provides a number of suggestions intended to maximize the number of applicants and diversify that pool. Most of the changes are incredibly subtle. Changing "exceptional" to "extraordinary," for example, correlates with more female applicants in Textio's data. So does cutting down the number of bullet points in an ad.
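
Textio hasn't published its model, but the core idea, flagging phrases known to skew applicant pools and suggesting swaps, can be sketched as a simple rule-based pass. The replacement suggestions below are illustrative guesses, not Textio's actual recommendations or API.

```python
# A hedged sketch of a Textio-style job-listing scanner. The flagged phrases
# come from the article; the suggested swaps are invented for illustration.
FLAGGED = {
    "proven track record of success": "history of delivering results",
    "good under pressure": "comfortable with shifting priorities",
    "exceptional": "extraordinary",  # the swap cited in the article
}

def scan_listing(text: str) -> list[tuple[str, str]]:
    """Return (flagged phrase, suggested replacement) pairs found in the text."""
    lowered = text.lower()
    return [(phrase, swap) for phrase, swap in FLAGGED.items() if phrase in lowered]

listing = "We need an exceptional engineer with a proven track record of success."
for phrase, swap in scan_listing(listing):
    print(f"flagged {phrase!r}; consider {swap!r}")
```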

Other companies, like the recently launched startup Unitive, use machine learning to help organizations try to eradicate unconscious bias from the entire hiring process, top to bottom.

Unitive has a similar word processing model that uses predictive analytics to help companies teach HR departments and hiring managers how to review resumes and prepare interview questions with less bias and more consistency.

Of course, AI has its limitations. The chief downside to letting machines automate the hunt for unconscious bias is that the machines can absorb the very biases of their non-diverse users. It's a tricky irony: machine learning deployed specifically to combat bias and prejudice can wind up displaying those exact biases.

"As algorithms are shaped by human behavior they are ultimately liable to exhibit the same bias we're aiming to sidestep," said Unitive founder Laura Mather, adding that the machine learning is used to give insights but Unitive doesn't take humans out of the decision-making process altogether.

Algorithms that evolve with use to telegraph our inherent preferences are hardly uncommon. Just look at Microsoft's disastrous launch of Tay, its teenager-mimicking AI Twitter bot, which began tweeting pro-Nazi statements and racial epithets less than a day after users started interacting with it.

Or, more to the point, a 2015 Carnegie Mellon study found that Google displayed ads for high-paying executive jobs to male job seekers 14 times more often than to female ones. The exact cause of Google's apparent gender bias is unclear; Google declined to comment beyond stating that advertisers can set parameters for their target demographics.

How do you course correct, then, when dealing with the imperfect data set of human behavior?

"We're always wary of the unconscious bias but since the algorithms are looking at the decisions that Glassbreakers are making, we've spent a lot of time designing the user experience to mitigate unconscious bias," explained Lauren Mosenthal, the Glassbreakers CTO. "For example, when tagging yourself with any quality tags, which can range from 'Pregnant' to 'Person with Disability,' 'Wine Lover' to 'Yogi,' your introductions will not see these tags unless you have them in common. [We also] make profile photos smaller and less important since being a great Glassbreaker match has little to do with one's looks."

Active moderation isn't the only solution. The researchers behind the Carnegie Mellon study suggest that regularly running simulations to test how algorithms actually behave is a better way to course correct, along with building new algorithms from scratch rather than inheriting previous discrimination.
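
In practice, such a simulation can be as simple as probing the system with synthetic profiles that differ only in gender and comparing outcomes. The sketch below audits an invented stand-in for the ad system, not any real API, with numbers chosen to mirror the disparity the study found.

```python
# A hedged sketch of a regular fairness audit: query a black-box ad system
# with profiles that differ only in gender and compare exposure rates.
import random

def ad_server(profile: dict) -> bool:
    """Hypothetical stand-in for the opaque system under audit."""
    base = 0.05
    if profile["gender"] == "male":  # the hidden disparity the audit should catch
        base *= 14
    return random.random() < base

random.seed(42)
trials = 10_000
shown = {"male": 0, "female": 0}
for gender in shown:
    shown[gender] = sum(ad_server({"gender": gender}) for _ in range(trials))

ratio = shown["male"] / max(shown["female"], 1)
print(f"high-paying ad exposure, male/female: {ratio:.1f}x")  # roughly 14x
```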

Provided we don't let our biases get the best of us, we may find the relationship between AI and gender diversity is quite symbiotic. Having more women in the field could help create smarter AI for the greater good. And unlike diversity fixes that focus more on awareness, artificial intelligence has the ability to tangibly reduce gender bias in the field and beyond.

Silicon Divide is a series about gender inequality in tech and science. Follow along here.