
This challenge extends to venture capital as well: investors must increasingly assess whether founders have thought through how customers, partners, and regulators will respond to the ways they use AI.

Even when founders have the best intentions, it is easy to fall short. Without a clear ethical framework, the consequences can include regulatory delays, a longer path to profitability, and lasting damage to a company’s reputation.

To address this problem, a group of AI consultants, venture capitalists, and CEOs created the Ethical AI Governance Group (EAIGG) last September. In March, the group went public and published a survey-style “continuum” for investors to use in advising the startups in their portfolios.

The continuum lays out clear guidelines for startups at different stages of growth, recommending, for example, that startups designate people responsible for AI governance and data privacy strategy. EAIGG’s leadership argues that using the continuum will protect venture capital portfolios from value-destroying scandals.

Anik Bose, General Partner at Benhamou Global Ventures, is the CEO and founder of EAIGG. He spoke with Protocol about how startups can align their operations with their values, and why he makes sure the companies in his firm’s portfolio follow the continuum’s advice.

This interview has been edited for brevity and clarity.

How did you know now was the time to start standardizing AI ethics? Wasn’t it always important?

Artificial intelligence is a double-edged sword. On one side, it holds tremendous promise across industries: manufacturing, healthcare, consumer products, insurance, banking, you name it. People are betting on that promise: private investment in artificial intelligence is booming, patent filings in AI are climbing dramatically, and if you look at the top skills employers are seeking today, a Ph.D. in artificial intelligence sits in the number one spot.

On the other side comes the fear of artificial intelligence. The first fear, which runs very deep, is of robots replacing humans, the Terminator scenario. The second is fear of the concentration of AI assets. If you look at the FAANG companies, there is a fear that they will prevent the democratization of AI, because they have all the resources, all the people, and are making essentially all the acquisitions in the space.

And then you look at AI policy today, which is the Wild West. There is little or no regulation in the US; it’s coming now in Europe; and there is a lack of public awareness of issues like social exclusion, privacy snooping, and discrimination.

Given all of that, we believe now is the time to operationalize AI ethics. You really can’t wait for regulations to arrive and tell you what to do.

Why is AI ethics important from a business perspective?

It is about customer trust and market adoption. With early-stage startups, you’re doing evangelical sales to large organizations. If they don’t trust you or your product, you’re in deep trouble. If your AI model is doing things it’s not supposed to, you’re done.

Second, regulation is coming. If you start tackling this now, while you’re still a young company, you’ll be more than ready to deal with it when the guillotine falls.

The other two reasons are equally important, though people often overlook them: Attracting and retaining the best talent is the number one issue for startups. More and more people want to make sure the startups they work for have a deeper purpose beyond making money; they want to make the world a better place. You won’t hire that talent if you build products in a mercenary way and don’t deal with these issues.

Finally, once you get to where you want to go, say Microsoft or Google is getting close to acquiring you, I can tell you that as they work through the M&A process, they’ll look at your ethical framework. If there is a liability, not only could the deal fall through, but your company’s valuation may drop tenfold.

Why is it important to have one person in charge of AI ethics, rather than just making sure all employees adhere to company values?

We fundamentally believe that the best way to establish accountability is to establish clear responsibility. Someone must own it. We’ve learned from our experience with startups that the number one reason anything goes wrong is a lack of clear accountability. So we believe that unless you assign AI governance to someone, it simply won’t happen.

Think about it: the title “chief information security officer” did not exist in organizations in the 1990s. Today, every organization has one. Is this person responsible for the actions of the entire company? No, but they ensure that processes are followed and the right tools are used. At the end of the day, the board of directors or the CEO can go to one person and ask, “Where are we on this?”

What should be the title of this person within the organization?

In the early days it would be the VP of product management, the chief product officer or the founder who leads product, because they’re the ones who actually build with AI. They’re the ones who can figure out, “Are the right data sets being used?” or “Is there model drift?”
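In practice, checking for model drift often starts with comparing the data a model sees in production against the data it was trained on. Here is a minimal sketch in Python; it is not an EAIGG tool, and the feature data, the two-sample Kolmogorov-Smirnov test, and the 0.05 threshold are all illustrative assumptions.

```python
# Minimal sketch of a data-drift check, the kind of question a product
# lead might ask of a deployed model. All names and thresholds here are
# illustrative assumptions, not part of any EAIGG tooling.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha=0.05):
    """Flag drift when a two-sample KS test rejects the hypothesis
    that training and production data share a distribution."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha, statistic

# Example: compare one feature's training distribution to recent traffic.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time data
live = rng.normal(loc=0.4, scale=1.0, size=5_000)   # shifted production data

drifted, stat = feature_drifted(train, live)
print(f"drift detected: {drifted} (KS statistic = {stat:.3f})")
```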

Later, when you’re making $20 million or $50 million in revenue, you might have multiple products, and possibly use data in different ways. At that point, it makes sense to have someone dedicated to ethics, such as an AI ethics officer or ethics adviser. You see a lot of late-stage startups today with a chief ethics officer. We think this will become more common.

What are the next steps towards engaging tech startups with AI ethics?

If you take a step back, education is a huge part of the conversation we’re having. Part of the reason we founded EAIGG was to open up best practices, so that everyone can learn from each other. The continuum is just one tool: we also hosted a panel discussion on what financial services firms are doing in terms of AI governance and what their best practices are. We had another discussion with IBM, where they talked about AI Fairness 360, a toolkit they’ve open-sourced that we’re promoting for use with AI models.
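The AI Fairness 360 toolkit Bose mentions is IBM’s open-source Python library (aif360). The sketch below shows roughly how a disparate-impact check looks with it; the toy DataFrame and the group encodings are invented for illustration and are not drawn from the interview.

```python
# Hedged sketch using IBM's open-source AI Fairness 360 toolkit (aif360).
# The tiny dataset below is invented purely for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the outcome (1 = favorable).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: the ratio of favorable-outcome rates, unprivileged
# over privileged. Values far below 1.0 suggest the data or model
# disadvantages the unprivileged group.
print(f"disparate impact: {metric.disparate_impact():.2f}")
```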

I think the continuum is a powerful tool for startups, but what we want with EAIGG is more research to create other tools, as well as a push toward open-sourcing tools that companies are already using today. I’m sure Google has a lot of best practices that not many people know about, for example.

Finally, we’ll also be compiling tools to help people who join the organization. We believe that Europe will lead the way on regulation, as it did with the General Data Protection Regulation, and that the United States will follow. When regulation arrives on a larger scale, and people are fined $5 million, $10 million, $50 million, I can tell you people will start to care about the ethics of AI.
