How to spot human bias in the technology your company uses

Throughout the pandemic, technology decision makers have rapidly adopted new solutions to streamline hybrid and remote business processes. But this rapid adoption has highlighted a long-standing problem: the ingrained biases that humans have introduced into tech products.

Although many tech solutions are powered by artificial intelligence, the industry has historically lacked accountable audit processes for assessing potential bias in the data and algorithms behind these products. The result is often exclusionary practices that persist at companies relying on AI-powered technology to run their businesses.

Tech giants like Google and Apple have committed to making sweeping changes, providing hope for a future where inclusion and fairness are standard in industry operations, particularly in employment. However, real change also requires examining the past and current biases that humans have built into the technology solutions companies use. To effect that change, leaders must begin raising awareness of the biases inherent in the technology they rely on.

A widespread and overlooked case

The tech industry struggles with transparency, accountability, and awareness, hampering efforts to fix product bias before it takes hold. In addition, the industry’s historic lack of diversity can express itself in the products that people build.

Take, for example, Amazon, which was heavily criticized in 2018 for using an AI recruitment tool that introduced gender bias. The company’s machine learning team built programs that filtered the best talent from job applications, using artificial intelligence to score applicants on a scale of one (poor) to five (great) stars. While the idea worked in theory, in practice the system did not assess candidates for the company’s software developer jobs fairly or comprehensively.

Why? Because the tool’s AI had been trained to screen candidates using 10 years of resumes from previous applicants, who were primarily men. As a result, the system favored male applicants and downgraded the resumes of female candidates. And because the process was automated, it scaled into an exclusionary recruitment system across the organization. This is just one example of how weaknesses in data can amplify bias.
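To make the mechanism concrete, here is a deliberately tiny toy sketch in Python. It is not Amazon’s actual system, and every resume term and outcome in it is invented; it simply shows how a scorer “trained” on historically skewed hiring outcomes can end up penalizing a term that merely correlates with the underrepresented group.

```python
from collections import Counter

# Hypothetical historical outcomes (all data invented): most past hires were men,
# so a term that appears mostly on women's resumes shows up mostly in rejections.
history = [
    ({"java", "chess"}, True),
    ({"python", "football"}, True),
    ({"java", "debate"}, True),
    ({"python", "chess", "womens"}, False),
    ({"java", "coding", "womens"}, False),
]

# "Training": a term's weight is (hires containing it) minus (rejections containing it).
weights = Counter()
for terms, hired in history:
    for term in terms:
        weights[term] += 1 if hired else -1

def score(resume_terms):
    """Score a new resume by summing the learned term weights."""
    return sum(weights[t] for t in resume_terms if t in weights)

# Two equally qualified candidates; the second scores lower only because
# "womens" appeared mostly on resumes that were historically rejected.
print(score({"java", "chess"}))            # 1
print(score({"java", "chess", "womens"}))  # -1
```

The point is not the toy math but the pattern: when the historical labels are skewed, automation faithfully reproduces that skew at scale.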

3 ways to assess bias in new technology

We’re only in the early stages of rooting out tech bias, so awareness is key. While it’s easy to blame the technology itself, the bias ultimately comes from flawed human-provided data. Some of those human factors may be out of your control, but there are steps you can take to identify bias in the technology products your organization uses and to hold vendors accountable for offering anti-bias technology.

Prioritize fair and unbiased decision-making

To prevent bias from seeping into technical decisions, every employee must be educated on fair and anti-bias practices, particularly if your company writes algorithms for any of its products. Make this training a standard so it becomes part of your organization’s operational fabric. Also, implement anti-bias data checks wherever you rely on automated processes, such as the simple selection-rate check sketched below. You don’t want automation to amplify potential biases the way it did with Amazon’s AI recruitment tool.
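What an anti-bias data check looks like will vary, but a minimal sketch, assuming you can attach a self-reported group label to each automated decision, might compare selection rates across groups and flag large gaps. The group names, numbers, and 80% threshold (the informal “four-fifths” rule of thumb) below are illustrative assumptions, not a legal or compliance test.

```python
from collections import defaultdict

def disparate_impact_check(decisions, threshold=0.8):
    """decisions: list of (group, selected_bool) pairs.
    Returns per-group selection rates and the groups whose rate falls
    below `threshold` times the highest group's rate."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)

    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    flagged = {g: rate for g, rate in rates.items() if rate < threshold * best}
    return rates, flagged

# Example: outcomes of an automated screening step (made-up numbers).
decisions = [("men", True)] * 40 + [("men", False)] * 60 \
          + [("women", True)] * 15 + [("women", False)] * 85
rates, flagged = disparate_impact_check(decisions)
print(rates)    # {'men': 0.4, 'women': 0.15}
print(flagged)  # {'women': 0.15} -- below 80% of the top rate, worth investigating
```

Run a check like this on the data you feed an automated step and on its outputs; a gap that appears only after automation is a strong signal the tool itself is introducing bias.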

Additionally, consider creating a technology procurement committee with a cross-section of leaders, such as HR professionals, data scientists, and DEI experts, to evaluate all technology purchases for potential bias. And be sure to weigh purchasing decisions on ethical as well as business grounds. Ultimately, biased technology products create exclusionary side effects that can negatively impact your culture and operations.

Push vendors for transparency

Make transparent data assessment a priority for the partner companies and suppliers you work with. Look for organizations with formal, proven processes for evaluating data for bias that publish their results regularly. Public companies such as Amazon, Apple, and Microsoft are expected to publish annual environmental, social, and governance (ESG) reports documenting their efforts across these three areas; private companies, however, are under no obligation to publish ESG reports.

Without widespread transparency around biased data, you will not have the information you need from technology vendors to make ethical decisions. Therefore, push vendors for confirmation that their data has been analyzed for bias. Express your concerns to companies (public or private) that do not share information about the anti-bias efforts in their business operations, and don’t be afraid to walk away if you are not convinced.

Prioritize inclusive features

Consider differences in culture, language, and disability when deciding on a technical solution, because human bias exists in these areas as well. For example, Zoom addresses language and disability by offering live captions and translation for its calls, allowing non-English speakers and people with hearing impairments to feel included.

When adopting new technology, consider whether the user experience works for multiple generations of workers, not just younger employees. Age bias is pervasive in tech products, and it is up to you to provide training and guidance to all members of your organization.

Take the first step

As you pursue your DEI efforts, it is essential to root out human biases both in your organization’s internal data sets and in those you obtain from external vendors. By pushing for greater transparency and accountability, adopting a more rigorous purchasing decision process, and making inclusive products a priority, you can raise awareness and help lead the way in eliminating human bias in technology.

Rachel Brennan is Vice President of Product Marketing at Bizagi.