There is little doubt that we need blueprints in our daily lives.
If you are going to build that long-envisioned dream house of yours, you would be wise to first put together a usable blueprint.
A blueprint showcases in a tangible and documented way whatever wide-eyed visionary perspective might be locked in your noggin. Those that are going to be called upon to construct your cherished homestead will be able to refer to the blueprint and nail down the details of how to get the job done. Blueprints are handy. The absence of a blueprint is bound to be problematic for tackling any kind of complex chore or project.
Let’s shift this somewhat sentimental but genuine tribute to blueprints into the realm of Artificial Intelligence (AI).
Those of you that are substantively into AI might be vaguely aware that an important policy-oriented blueprint was recently released in the U.S. that pertains demonstrably to the future of AI. Known informally as the AI Bill of Rights, the official title of the proclaimed white paper is the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” and is readily available online.
The document is the result of a year-long effort and mindful study by the Office of Science and Technology Policy (OSTP). The OSTP is a federal entity that was established in the mid-1970s and serves to advise the American President and the US Executive Office on various technological, scientific, and engineering aspects of national importance. In that sense, you can say that this AI Bill of Rights is a document approved by and endorsed by the existing U.S. White House.
The AI Bill of Rights depicts the human rights that humankind ought to have with respect to the advent of AI in our daily lives. I emphasize this weighty point because some people were at first puzzled that maybe this was some kind of acknowledgment of AI having legal personhood and that this was a litany of rights for sentient AI and humanoid robots. Nope, we aren’t there yet. As you will see in a moment, we aren’t anywhere close to sentient AI, despite the banner headlines that seem to tell us otherwise.
Okay, so do we need a blueprint that spells out human rights in an age of AI?
Yes, we most assuredly do.
You would almost need to be locked in a cave without Internet access to not know that AI is already and increasingly encroaching upon our rights. The recent era of AI was initially viewed as being AI For Good, meaning that we could use AI for the betterment of humanity. On the heels of AI For Good came the realization that we are also immersed in AI For Bad. This includes AI that is devised or self-alters into being discriminatory and makes computational choices imbued with undue biases. Sometimes the AI is built that way, while in other instances it veers into that untoward territory.
For my ongoing and extensive coverage and analysis of AI Law, AI Ethics, and other key AI technological and societal trends, see the link here and the link here, just to name a few.
Unpacking The AI Bill Of Rights
I’ve previously discussed the AI Bill of Rights and will do a quick recap here.
As an aside, if you’d like to know my in-depth pros and cons of the recently released AI Bill of Rights, I’ve detailed my analysis in a posting at the Jurist, see the link here. The Jurist is a notable legal news and commentary online site, known widely as an award-winning legal news service powered by a global team of law student reporters, editors, commentators, correspondents, and content developers, and is headquartered at the University of Pittsburgh School of Law in Pittsburgh, where it began over 25 years ago. Shoutout to the outstanding and hardworking team at Jurist.
In the AI Bill of Rights, there are five keystone categories:
- Safe and effective systems
- Algorithmic discrimination protections
- Data privacy
- Notice and explanation
- Human alternatives, consideration, and fallback
Notice that I didn’t number them from one to five since doing so might imply that they are in a particular sequence or that one of the rights is seemingly more important than the others. We will assume that they each stand on their own merits. They are all, in a sense, equally meritorious.
As a brief indication of what each one consists of, here’s an excerpt from the official white paper:
- Safe and Effective Systems: “You should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system.”
- Algorithmic Discrimination Protections: “You should not face discrimination by algorithms and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.”
- Data Privacy: “You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used. You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected.”
- Notice and Explanation: “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.”
- Human Alternatives, Consideration, And Fallback: “You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. You should be able to opt out from automated systems in favor of a human alternative, where appropriate.”
By and large, these are facets of human rights that have been bandied around for quite a while in the context of AI Ethics and AI Law, see my coverage such as at the link here. The white paper seemingly does not magically pull a rabbit out of a hat as to some newly discovered or unearthed right that has heretofore not been elucidated in an AI-era context.
You could assert that the compilation of them into one neatly packaged and formalized collection provides a vital service. Plus, by being anointed as an acclaimed AI Bill of Rights, this further puts the whole matter overtly and ably into the consciousness of the public sphere. It coalesces an existing plethora of disparate discussions into a singular set that can now be trumpeted and conveyed across all manner of stakeholders.
Allow me to proffer this list of favorable reactions to the announced AI Bill of Rights:
- Provides an essential compilation of keystone principles
- Serves as a blueprint or foundation to build upon
- Acts as a vocalized call to action
- Spurs interest and showcases that these are serious considerations
- Brings together a multitude of disparate discussions
- Sparks and contributes to Ethical AI adoption efforts
- Will undoubtedly feed into the establishment of AI Laws
We also need to consider the less-than-favorable reactions, taking into account that there is a lot more work that needs to be done and that this is just the start of a lengthy journey on the arduous road of governing AI.
As such, somewhat harsh or shall we say constructive criticisms made about the AI Bill of Rights include:
- Not legally enforceable and completely non-binding
- Advisory only and not considered governmental policy
- Less comprehensive in comparison to other published works
- Primarily consists of broad concepts and lacks implementation details
- Going to be challenging to turn into actual viable practical laws
- Seemingly silent on the looming issue of possibly banning AI in some contexts
- Marginally acknowledges the advantages of using AI that is well-devised
Perhaps the most prominent acrid commentary has centered on the fact that this AI Bill of Rights is not legally enforceable and thus carries no legal weight when it comes to establishing clear-cut goalposts. Some have said that though the white paper is helpful and encouraging, it decidedly lacks teeth. They question what can come of a said-to-be hollow-toothed set of nifty precepts.
I will address those biting remarks in a moment.
Meanwhile, the white paper abundantly states the limitations of what this AI Bill of Rights entails:
- “The Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or international instrument. It does not constitute binding guidance for the public or Federal agencies and therefore does not require compliance with the principles described herein. It also is not determinative of what the U.S. government’s position will be in any international negotiation. Adoption of these principles may not meet the requirements of existing statutes, regulations, policies, or international instruments, or the requirements of the Federal agencies that enforce them. These principles are not intended to, and do not, prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or intelligence activities” (per the white paper).
For those that have been quick to undercut the AI Bill of Rights as being legally non-binding, let’s do a bit of a thought experiment on that stinging allegation. Suppose the white paper was released and had the full force of the law. I dare say that the result would be somewhat cataclysmic, at least to the degree of both legal and societal responses to the proclamation.
Lawmakers would be up in arms that the effort had not undertaken the normative processes and legal procedures in putting together such laws. Businesses would be enraged, rightfully so, as to have new laws spring forth without sufficient notification and awareness of what those laws are. All manner of consternation and outrage would ensue.
Not a good way to gravitate toward firming up humankind’s rights in an AI era.
Recall that I had earlier begun this discussion by bringing up the value and vitalness of blueprints.
Imagine that someone skipped past the step of crafting blueprints and jumped immediately into building your dream house. What do you think the house would look like? It seems a fair bet that the house would not especially match what you had in your mind. The resulting homestead might be an utter mess.
The gist is that we do need blueprints and we now have one for the sake of moving ahead on figuring out judicious AI Laws and empowering Ethical AI adoptions.
I would like to address therefore the ways in which this AI Bill of Rights blueprint can be turned into a house, as it were. How are we going to utilize the blueprint? What are the suitable next steps? Can this blueprint suffice, or does it need more meat on the bones?
Before we jump into those hefty matters, I’d like to first make sure we are all on the same page about the nature of AI and what today’s status consists of.
Setting The Record Straight About Today’s AI
I’d like to make an extremely emphatic statement.
Are you ready?
There isn’t any AI today that is sentient.
We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).
The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).
I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.
Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor does it have any of the cognitive wonderment of robust human thinking.
Be very careful of anthropomorphizing today’s AI.
ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
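To make that mimicry concrete, here is a deliberately tiny Python sketch (all data, field names, and the zip-code proxy are hypothetical, chosen purely for illustration) of a naive pattern matcher that learns the majority historical decision per applicant profile and then dutifully reproduces that skew on new applicants:

```python
# A toy illustration of computational pattern matching: the "model" simply
# learns the majority historical decision per zip code and replays it.
from collections import Counter, defaultdict

# Historical human decisions; "zip_code" acts as a proxy feature that
# happens to correlate with a protected group (a common bias pathway).
history = [
    {"zip_code": "A", "income": "high", "approved": True},
    {"zip_code": "A", "income": "low",  "approved": True},
    {"zip_code": "B", "income": "high", "approved": False},
    {"zip_code": "B", "income": "low",  "approved": False},
]

def fit(records):
    """Learn the majority approval decision for each zip code."""
    votes = defaultdict(Counter)
    for r in records:
        votes[r["zip_code"]][r["approved"]] += 1
    return {z: c.most_common(1)[0][0] for z, c in votes.items()}

def predict(model, applicant):
    """Apply the learned historical pattern to a new applicant."""
    return model[applicant["zip_code"]]

model = fit(history)
# The pattern matcher faithfully reproduces the historical skew:
# identical incomes, different zip codes, different outcomes.
print(predict(model, {"zip_code": "A", "income": "high"}))  # True
print(predict(model, {"zip_code": "B", "income": "high"}))  # False
```

Real ML/DL models are vastly more elaborate, but the core dynamic is the same: whatever regularities sit in the historical data, including the inequitable ones, get mathematically encoded and replayed.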
Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern-matching models of the ML/DL.
You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of the AI axiomatically becomes laden with inequities.
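One rudimentary way to probe for such submerged biases is to compare selection rates across groups, along the lines of the widely cited four-fifths rule of thumb from employment-law practice. Below is a minimal sketch using hypothetical decisions; a genuine fairness audit would involve far more than this single metric:

```python
# A minimal disparate-impact check: the ratio of the lower group's
# selection rate to the higher group's selection rate.

def selection_rate(outcomes):
    """Fraction of positive decisions in a list of booleans."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    hi, lo = max(ra, rb), min(ra, rb)
    return lo / hi if hi else 1.0

# Hypothetical model decisions for two demographic groups:
group_a = [True, True, True, False]    # 75% approved
group_b = [True, False, False, False]  # 25% approved

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2))  # 0.33 -- well below the 0.8 rule-of-thumb threshold
```

Even a check this crude can flag that something is amiss, though pinning down where in the pattern-matching the bias lives is the far harder part.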
All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.
Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.
Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.
In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.
Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:
- Justice & Fairness
- Freedom & Autonomy
Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.
All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized earlier herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
Now that I’ve laid a helpful foundation, we are ready to dive further into the AI Bill of Rights.
Four Essential Ways To Implement The AI Bill Of Rights
Someone hands you a blueprint and tells you to get to work.
What do you do?
In the case of the AI Bill of Rights as a blueprint, consider these four essential steps in moving forward:
- Serve as input toward formulating AI laws: Use the blueprint to aid in formulating AI laws, doing so hopefully on an aligned basis at the federal, state, and local levels (perhaps aiding international AI legal efforts too).
- Aid in getting AI Ethics more widely adopted: Use the blueprint to foster AI Ethics formulations (sometimes referred to as “soft laws” in comparison to legally binding “hard laws”), doing so to inspire and guide businesses, individuals, governmental entities, and other constituencies toward better and more consistent Ethical AI outcomes.
- Shape AI development activities: Use the blueprint to spur the creation of AI development methodologies and training aspects, doing so to try and get AI developers and those that field or employ AI to be more cognizant of how to devise AI along the lines of desirable AI Ethics precepts and in anticipation of impending AI laws being enacted.
- Motivate the advent of AI to assist in controlling AI: Use the blueprint to devise AI that will be used to try and serve as a check-and-balance against other AI that might be veering into the untoward territory. This is one of those macroscopic viewpoints whereby we can use the very thing that we find ostensibly worrisome to also (ironically, one might say) aid in protecting us.
I’ve discussed each of those aforementioned four steps throughout my column postings.
For this herein discussion, I’d like to focus on the fourth listed step, namely that the AI Bill of Rights can serve as a motivator toward the advent of AI to assist in controlling AI. This is a somewhat shocking or surprising step for many that haven’t yet gotten fully into this AI-advancing realm.
Allow me to elaborate.
A simple analogy should do the trick. We are all accustomed these days to cybersecurity breaches and hacker break-ins. Nearly every day we hear about or are affected by some latest loophole in our computers that will allow nefarious evildoers to snatch up our data or place a dastardly piece of ransomware on our laptops.
One means of fighting against those despicable attempts consists of using specialized software that attempts to prevent those break-ins. You almost certainly have an anti-virus software package on your computer at home or work. There is likely something similar on your smartphone, whether you realize it is on there or not.
My point is that sometimes you need to fight fire with fire (see my coverage on this, such as at the link here and the link here).
In the case of AI that lands into the verboten realm of AI For Bad, we can seek to use AI For Good that contends with that malicious AI For Bad. This is of course not a miracle cure. As you know, there is a continual cat-and-mouse gambit going on between evildoers seeking to break into our computers and the advances being made in cybersecurity protections. It is a nearly endless game.
We can use AI to try and deal with AI that has gone down a forbidden path. Doing so will help. It won’t especially be a silver bullet since the adverse AI being targeted will almost certainly be devised to avoid any such protections. This will be ongoing cat-and-mouse of AI versus AI.
In any case, the AI we use to protect ourselves will provide some amount of protection against bad AI. Thus, we indubitably need to devise AI that can safeguard or shield us. And we should also be seeking to craft the safeguarding AI to adjust as the bad AI adjusts. There will be a fierce semblance of lightning-speed cat-and-mouse.
Not everyone relishes this enlarging of the role of AI.
Those that already perceive AI as a homogenous amorphous conglomeration would get goosebumps and nightmares at this posited AI-versus-AI gambit. If we try to pit fire against fire, maybe we are merely making an even larger fire. AI is going to become a massive bonfire, one that we can no longer control and that will opt to enslave humanity or wipe us from the planet. When it comes to discussing AI as an existential risk, we are usually led to believe that all AI will gang up together, see my discussion about these issues at the link here. You see, we are told that every piece of AI will grab hold of its brethren AI and become one big overlord unitary family.
That’s the dreadful and decidedly unsettling scenario of sentient AI as seamless all-for-one and one-for-all mafia.
Though you are freely welcome to conjecture that this might someday occur, I assure you that for now, the AI we have today consists of truckloads of disconnected disparate AI programs that have no particular way to conspire with each other.
Having said that, I am sure that those believing fervently in AI conspiracy theories will insist that I have purposefully said this to hide the truth. Aha! Maybe I am being paid off by today’s AI that is already planning the grand AI takeover (yes siree, I will be bathing in riches once the AI overlords rule). Or, and I most certainly don’t favor this other angle, perhaps I am blindly unaware of how AI is secretly plotting behind our backs. I guess we’ll have to wait and see whether I am part of the AI coup or an AI abject patsy (ouch, that hurts).
Getting back to earthly considerations, let’s briefly explore how contemporary AI can be used to aid the implementation of the AI Bill of Rights. I will conveniently and summarily refer to this as Good AI.
We’ll use the five keystones embodied in the AI Bill of Rights:
- Good AI for promoting Safe and Effective Systems: Whenever you are subject to or using an AI system, the Good AI tries to figure out whether the AI being utilized is unsafe or ineffective. Upon such detection, the Good AI might alert you or take other actions including blocking the Bad AI.
- Good AI for providing Algorithmic Discrimination Protections: While using an AI system that might contain discriminatory algorithms, the Good AI attempts to ascertain whether there are inadequate protections for you and seeks to determine whether undue biases do indeed exist in the AI being used. The Good AI could inform you and also potentially automatically report the other AI to various authorities as might be stipulated by AI laws and legal requirements.
- Good AI for preserving Data Privacy: This type of Good AI tries to protect you from data privacy invasions. When another AI is seeking to request data that perhaps is not genuinely needed from you, the Good AI will make you aware of the overstepping action. The Good AI can also potentially mask your data in a manner that will upon being fed to the other AI still preserve your data privacy rights. Etc.
- Good AI for establishing Notice and Explanation: We are all likely to encounter AI systems that are sorely lacking in providing proper and appropriate notifications and that sadly fail to showcase an adequate explanation for their actions. Good AI can try to interpret or interrogate the other AI, doing so to potentially identify notifications and explanations that should have been provided. Even if that isn’t feasible to do in a given instance, the Good AI will at least alert you as to the failings of the other AI, and possibly report the AI to designated authorities based on stipulated AI laws and legal requirements.
- Good AI for offering Human Alternatives, Consideration, and Fallback: Suppose you are using an AI system and the AI is seemingly not up to the task at hand. You might not realize that things are going sour, or you might be somewhat wary and unsure of what to do about the situation. In such a case, Good AI would be silently examining what the other AI is doing and could warn you of vital concerns about that AI. You would then be prompted to request a human alternative to the AI (or the Good AI could do so on your behalf).
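To illustrate the guardian notion in the barest possible terms, here is a hypothetical Python sketch of a wrapper that inspects a decision record emitted by some other AI system against a few of the keystones. Every field name and threshold here is invented for illustration; a fielded guardian would be enormously more involved:

```python
# A highly simplified "guardian" sketch: examine another AI system's
# decision record and flag keystone violations or escalate to a human.
from dataclasses import dataclass, field

@dataclass
class GuardianReport:
    warnings: list = field(default_factory=list)
    escalate_to_human: bool = False

def guardian_check(decision: dict) -> GuardianReport:
    """Inspect one decision record emitted by another AI system."""
    report = GuardianReport()
    # Notice and Explanation keystone: was any rationale supplied at all?
    if not decision.get("explanation"):
        report.warnings.append("No explanation provided for the outcome.")
    # Data Privacy keystone: were fields collected beyond those declared
    # strictly necessary for the specific context?
    extra = set(decision.get("data_used", [])) - set(decision.get("data_needed", []))
    if extra:
        report.warnings.append(f"Unnecessary data collected: {sorted(extra)}")
    # Human Alternatives keystone: low confidence triggers a human fallback.
    if decision.get("confidence", 1.0) < 0.5:
        report.escalate_to_human = True
    return report

report = guardian_check({
    "outcome": "denied",
    "explanation": "",
    "data_used": ["income", "browsing_history"],
    "data_needed": ["income"],
    "confidence": 0.4,
})
print(report.escalate_to_human)  # True
print(len(report.warnings))      # 2
```

The design point is the check-and-balance stance: the guardian does not make the underlying decision, it monitors the AI that does and routes problems to you or to a human reviewer.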
To understand further how this kind of Good AI can be developed and fielded, see my popular and highly rated AI book (honored to say that it has been noted as a “Top Ten”) on what I have generally been referring to as AI guardian angels, see the link here.
I know what you are thinking. If we have Good AI that is devised to protect us, suppose the Good AI gets corrupted into becoming Bad AI. The famous or infamous Latin catchphrase seems fully pertinent to this possibility: Quis custodiet ipsos custodes?
The phrase is attributed to the Roman poet Juvenal and can be found in his work entitled Satires, and can be loosely translated as meaning who will guard or watch the guards themselves. Many movies and TV shows such as Star Trek have leveraged this line repeatedly.
That’s surely because it is an excellent point.
Sure enough, any AI laws that are enacted will need to encompass both the Bad AI and even the Good AI that goes bad. That’s why it will be crucial to write sensible and comprehensive AI laws. Lawmakers that just try to toss random legalese at the wall and hope that it sticks with respect to AI laws are going to find themselves profoundly missing the target.
We don’t need that.
We have neither the time nor can we bear the societal expense to cope with inadequately devised AI laws. I’ve pointed out that regrettably at times we are witnessing new AI-related laws that are poorly composed and replete with all sorts of legal maladies, see for example my probing analysis of the New York City (NYC) AI Biases auditing law at the link here.
Let’s make sure that we appropriately use the AI Bill of Rights blueprint that we now have in hand regarding AI. If we ignore the blueprint, we have lost out on having stepped up our game. If we wrongly implement the blueprint, shame on us for having usurped a useful foundation.
The esteemed Roman poet Juvenal said something else that we can leverage in this circumstance: Mens sana in corpore sano.
Generally, this translates into the assertion that it would be prudent to have both a sound or healthy mind and a sound or healthy body. According to Juvenal, this allows us to endure any kind of toil and is assuredly the road to a life of peace and virtue.
Time for us to use a sound mind and sound body to make sure that we are ensuring humanity will have our human rights preserved and solidly fortified in the emerging world of ubiquitous and at times untoward AI. That’s sound advice from the Romans that we ought to abide by in today’s rush amid an AI pell-mell epoch and a future decisively filled with both good and bad AI.