
Artificial intelligence is today’s most discussed and debated technology, generating widespread adulation and anxiety, along with significant government and business interest and investment. But six years after DeepMind’s AlphaGo defeated a Go champion, after countless research papers showed AI’s superior performance over humans in a variety of tasks, and after numerous surveys reported rapid adoption, what is the actual business impact of AI?

“2021 was the year that AI went from an emerging technology to a mature technology… that has real-world impact, both positive and negative,” declared the 2022 AI Index Report. The 5th installment of the index measures the growing impact of AI in a number of ways, including private investment in AI, the number of AI patents filed, and the number of bills related to AI that were passed into law in legislatures of 25 countries around the world.

There is nothing in the report, however, about “real-world impact” as I would define it—measurably successful, long-lasting and significant deployments of AI. There is also no definition of “AI” in the report.

Going back to the first installment of the AI Index report, published in 2017, still does not yield a definition of what the report is all about. But the goal of the report is stated upfront: “…the field of AI is still evolving rapidly and even experts have a hard time understanding and tracking progress across the field. Without the relevant data for reasoning about the state of AI technology, we are essentially ‘flying blind’ in our conversations and decision-making related to AI.”

“Flying blind” is a good description, in my opinion, of gathering data about something you don’t define.

The 2017 report was “created and launched as a project of the One Hundred Year Study on AI at Stanford University (AI100),” released in 2016. That study’s first section did ask the question “what is artificial intelligence?” only to provide the traditional circular definition that AI is what makes machines intelligent, and that intelligence is the “quality that enables an entity to function appropriately and with foresight in its environment.”

So the very first computers (popularly called “Giant Brains”) were “intelligent” because they could calculate, even faster than humans? The One Hundred Year Study answers “Although our broad interpretation places the calculator within the intelligence spectrum…the frontier of AI has moved far ahead and functions of the calculator are only one among the millions that today’s smartphones can perform.” In other words, anything a computer did in the past or does today is “AI.”

The study also offers an “operational definition”: “AI can also be defined by what AI researchers do.” Which is probably the reason this year’s AI Index measures the “real-world impact” and “progress” of AI, among other indicators, by the number of citations and AI papers (defined as “AI” by the papers’ authors and indexed with the keyword “AI” by the publications).

Moving beyond circular definitions, however, the study provides us with a clear and concise description of what prompted the sudden frenzy and fear around a term that was coined back in 1955: “Several factors have fueled the AI revolution. Foremost among them is the maturing of machine learning, supported in part by cloud computing resources and wide-spread, web-based data gathering. Machine learning has been propelled dramatically forward by ‘deep learning,’ a form of adaptive artificial neural networks trained using a method called backpropagation.”

Indeed, “machine learning” (a term coined in 1959), or teaching a computer to classify data (spam or not spam) and/or make a prediction (if you liked book X, you will love book Y), is what today’s “AI” is all about. Specifically, today’s “AI” is its most recent variety, “deep learning,” which since its image classification breakthrough in 2012 has been applied to classifying very large amounts of data with numerous characteristics.
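The "spam or not spam" example above can be sketched in a few lines of code. This is a deliberately toy illustration of learning from data, not any production algorithm: the messages, labels, and the simple word-overlap scoring rule are all made-up assumptions for the sake of the sketch.

```python
# A minimal sketch of "learning from data": a toy spam classifier that
# counts word frequencies in labeled examples and scores new messages.
# All messages, labels, and the scoring rule are illustrative assumptions.
from collections import Counter

def train(labeled_messages):
    """Count how often each word appears in spam vs. non-spam ("ham") messages."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in labeled_messages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label a message by which class's vocabulary it overlaps with more."""
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

training_data = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday with the team", "ham"),
]
model = train(training_data)
print(classify(model, "claim your free prize"))  # spam-like vocabulary
print(classify(model, "monday team meeting"))    # ham-like vocabulary
```

Real systems replace the crude word-overlap score with a statistical model fitted to far more data, but the shape is the same: labeled examples in, a classification rule out.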

AI is learning from data. The AI of the 1955 variety, which generated a number of boom-and-bust cycles, was based on the assumption that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” That was the vision and, by and large, so far it hasn’t materialized in a meaningful and sustained way, demonstrating significant “real-world impact.”

One serious problem with that vision was that it predicted the arrival in the not-so-distant future of a machine with human intelligence capabilities (or even surpassing humans), a prediction reiterated periodically by very intelligent humans, from Turing to Minsky to Hawking. This desire to play God, associated with the old-fashioned “AI,” has confounded and confused the discussion of present-day “AI” (and business and government actions around it). This is what happens when you don’t define what you are talking about (or define AI as what AI researchers do).

The combination of new methods of data analysis (“backpropagation”), the use of specialized hardware (GPUs) best suited for the type of calculations performed, and, most important, the availability of lots of data (data already tagged and classified, used to teach the computer the correct classification), is what led to today’s “AI revolution.”
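The gradient-based learning at the heart of backpropagation can be shown in its simplest possible form: a single weight nudged repeatedly in the direction that reduces prediction error. This is a hedged sketch under made-up assumptions (one neuron, one weight, invented data following y = 2x, an arbitrary learning rate), not the full multi-layer chain-rule algorithm.

```python
# A minimal sketch of the gradient-descent update that backpropagation
# generalizes to many layers. Assumptions: one neuron with one weight,
# illustrative labeled data following y = 2x, learning rate 0.05.
import random

random.seed(0)
weight = random.random()  # start from a random guess
data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]  # labeled examples: y = 2x

for step in range(200):
    for x, target in data:
        prediction = weight * x       # forward pass: compute the output
        error = prediction - target   # gradient of 0.5*error^2 w.r.t. prediction
        weight -= 0.05 * error * x    # backward pass: chain rule through y = w*x

print(round(weight, 3))  # converges to about 2.0, the pattern in the data
```

Deep learning stacks many such weights into layers and propagates the error gradient backward through all of them, but the ingredients the paragraph lists are visible even here: a numerical method (the gradient update), arithmetic that parallelizes well on GPUs, and, crucially, data that already carries the correct answers.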

Call it the triumph of statistical analysis. This “revolution” is actually a 60-year evolution of the use of increasingly sophisticated statistical analysis to assist in a wide variety of business (or medical or governmental, etc.) decisions, actions, and transactions. It has been called “data mining” and “predictive analytics” and most recently, “data science.”

Last year, a survey of 30,000 American manufacturing establishments found that “productivity is significantly higher among plants that use predictive analytics.” (Incidentally, Erik Brynjolfsson, the lead author of that study, has also been a steering committee member of the AI Index Report since its inception.) It seems that it is possible to find a measurable “real-world impact” of “AI,” as long as you define it correctly.

AI is learning from data. And successful, measurable business use of learning from data is what I would call Practical AI.
