
AI violates privacy. AI-generated output can’t be explained. AI is biased.

This is all true, and is happening today, and there’s a risk of these issues accelerating as AI adoption grows. Before the lawsuits start flowing and government regulators start cracking down, organizations using AI need to become more proactive and formulate actionable AI ethics policies.

But an effective AI ethics policy requires more than some feel-good statements. It requires actions, built into an AI ethics-aware culture. “An AI ethics statement is a nice start. It’s also the tip of the iceberg,” relates Reid Blackman, who explores AI ethics in his upcoming book, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent and Respectful AI (Harvard Business Review Press). “Even those who spend a lot of time talking about and even working in the field of AI ethics have a lot of difficulty understanding what’s to be done. That’s because they’re trying to build structure around something they still find squishy, fuzzy, and subjective.”

Blackman, CEO of Virtue, seeks to turn something as “squishy” as AI ethics into something concrete. He provides guidelines for instilling actionable ethics into AI systems and processes.

Bring clarity to AI standards. “Between ‘never do this’ and ‘always do that’ there is a lot of ‘maybe do this,’” Blackman cautions. “Think about the tough cases before they happen.” The body of these conclusions forms instances of “ethical case law” that can lay the foundation for AI ethics standards. “Creating your organization’s AI ethical case law is an extremely powerful tool in articulating the organization’s AI ethical standards and communicating them to relevant stakeholders.”


Increase awareness among everyone in the organization. Make data scientists, engineers and product owners aware of AI ethics issues, Blackman urges. “Everyone in the organization who might develop, procure, or deploy AI needs to be aware, including HR, marketing, strategy and so on. This will not only require training, but also developing a culture in which that training gets uptake.”

Bake AI ethics into team culture. Teams such as product development “need not only knowledge about the issues and how, in principle, they can address them, but also concrete tools and processes for diligently carrying out ethical risk identification and mitigation,” he states.

Make sure there are AI experts as part of an AI ethics committee. “While standard processes and practices and tools are helpful, they are only a first line of defense” in a complex undertaking such as AI, Blackman says. “To seriously address the ethical, reputational, regulatory, and legal risks, we need experts in the room. It would be unwise and unfair to charge data scientists, engineers, and product developers with the primary responsibility of identifying and mitigating ethical risks of products.”

Introduce accountability. There needs to be a sense of responsibility for the tools and processes introduced in AI. “Staff needs to be held accountable for using these tools and complying with processes, on pain of penalties ranging from lack of bonuses, to inability to be promoted, to firing,” Blackman points out. “Relatedly, we need to financially incentivize taking these issues seriously.”

Measure everything. Metrics matter, in the form of key performance indicators that tie AI ethics to business goals. “We need to track both the extent to which the organization is adopting new standards and the extent to which meeting those standards identifies and mitigates the risks those standards are aimed at addressing,” Blackman says. Determining quality KPIs for your actual ethical performance should flow from your AI ethics statement and your AI ethical case law.
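To show what a quantitative ethics KPI can look like in practice, here is a minimal illustrative sketch in Python. It is not drawn from Blackman's book; the function names and the 0.8 threshold are assumptions, borrowed from the well-known “four-fifths rule” for disparate impact used in US employment-selection guidelines.

```python
# Illustrative sketch of one quantitative AI-ethics KPI: the disparate
# impact ratio, with the conventional "four-fifths rule" threshold.
# Function names and threshold are assumptions for this example only.

def disparate_impact_ratio(outcomes_a, outcomes_b):
    """Ratio of positive-outcome rates between two groups (1.0 = parity).

    Each argument is a list of 0/1 model decisions for one group.
    """
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def meets_four_fifths_rule(outcomes_a, outcomes_b, threshold=0.8):
    """KPI check: flag a model whose selection rates diverge too far."""
    return disparate_impact_ratio(outcomes_a, outcomes_b) >= threshold

# Example: approval decisions (1 = approved) for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 = 0.75 approval rate
group_b = [1, 0, 1, 0, 0, 1, 0, 0]  # 3/8 = 0.375 approval rate
print(round(disparate_impact_ratio(group_a, group_b), 2))  # 0.5
print(meets_four_fifths_rule(group_a, group_b))  # False: below 0.8
```

A dashboard of such checks, run on every model release, is one concrete way to track whether the standards an organization adopts are actually identifying and mitigating the risks they target.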

Gain executive sponsorship. AI ethics needs leadership, in the form of a person “responsible for overseeing the creation, rollout, and maintenance of an AI risk program,” says Blackman. “Excitement about AI ethics from junior people is a wonderful thing, but there is no such thing as a viable and robust AI ethical risk program without leadership and ownership from the top.”

