We are living in a world of probabilities. When I started talking about AI and its implications years ago, the most common question was: is AI coming after us?
While the question remains the same, my answer has shifted in terms of probabilities. AI is now more likely to replace human judgment in certain areas, so the probability has increased over time.
Since we are discussing a complex technology, the answer will not be straightforward. It depends on several factors: what it means to be intelligent, whether we mean replacing jobs, the anticipated timelines for Artificial General Intelligence (AGI), and the capabilities and limitations of AI.
Let us start by understanding the definition of intelligence:
Stanford defines intelligence as “the ability to learn and perform suitable techniques to solve problems and achieve goals appropriate to the context in an uncertain, ever-varying world.”
Gartner describes it as the ability to analyze, interpret events, support and automate decisions, and take action.
AI is good at learning patterns; however, mere pattern recognition does not qualify as intelligence. It is just one aspect of the broader spectrum of multi-dimensional human intelligence.
As some experts argue, “AI will never get there because machines cannot have a sense (rather than mere knowledge) of the past, the present, and the future; of history, injury or nostalgia. Without that, there’s no emotion, depriving bi-logic of one of its components. Thus, machines remain trapped in the singular formal logic. So there goes the ‘intelligence’ part.”
Some might point to AI passing exams from prestigious institutions, and most recently the Turing test, as a testament to its intelligence.
For the unversed, the Turing test is an experiment designed by Alan Turing, the renowned computer scientist. According to the test, a machine possesses human-like intelligence if an evaluator cannot distinguish its responses from those of a human.
A comprehensive overview of the test highlights that although generative AI models can produce natural language based on statistical patterns and associations learned from vast training data, they do not have human-like consciousness.
Even advanced tests, such as the General Language Understanding Evaluation (GLUE) and the Stanford Question Answering Dataset (SQuAD), share the same underlying premise as Turing's.
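To make this premise concrete, here is a minimal sketch of how a SQuAD-style evaluation runs in practice. It assumes the Hugging Face `datasets` and `transformers` libraries and their default extractive question-answering model (my assumption, not something cited in this article): the model picks an answer span that is compared against a reference string, so a high score reflects strong pattern matching rather than understanding.

```python
# A minimal sketch of SQuAD-style evaluation (assumes the Hugging Face
# `datasets` and `transformers` libraries are installed).
from datasets import load_dataset
from transformers import pipeline

# Load a handful of SQuAD validation examples.
squad = load_dataset("squad", split="validation[:3]")

# Any extractive QA model can stand in here; this uses the pipeline default.
qa = pipeline("question-answering")

for example in squad:
    # The model extracts the most statistically likely answer span
    # from the context; it is matched against the human-written reference.
    prediction = qa(question=example["question"], context=example["context"])
    print(f"Q: {example['question']}")
    print(f"Predicted: {prediction['answer']}")
    print(f"Reference: {example['answers']['text'][0]}\n")
```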
Loss Of Jobs
Let us start with the fear that is fast becoming a reality: will AI make our jobs redundant? There is no clear yes-or-no answer, but the moment is fast approaching as generative AI casts a wider net over automation opportunities.
McKinsey reports, “By 2030, activities that account for up to 30 percent of hours currently worked across the US economy could be automated—a trend accelerated by generative AI.”
Profiles like office support, accounting, banking, sales, and customer support are first in line for automation. Generative AI augmenting software developers in code-writing and testing workflows has already impacted the roles of junior developers.
Its output is often considered a good starting point for an expert to refine further, such as with marketing copy, promotional content, and the like.
Some narratives soften this transformation by highlighting the possibility of new jobs, such as those in healthcare, science, and technology in the near term, along with roles in AI ethics, governance, audits, and safety needed to make AI a reality. However, these new roles may not outnumber the ones being displaced, so we must look at the net jobs created to see the final impact.
AGI
Next comes the possibility of AGI, which, like intelligence itself, has multiple definitions and warrants a clear one. Generally, AGI refers to the stage where machines gain sentience and an awareness of the world similar to a human’s.
However, AGI is a topic that deserves a post on its own and is not under the scope of this article.
For now, we can take a leaf from the diary of DeepMind’s CEO, as reported by Fortune, to understand its early signs.
Good Assistant
Looking at the broader picture, AI is intelligent enough to help humans identify patterns at scale and generate efficiencies.
Let us substantiate this with an example where a supply chain planner reviews several order details and works on the ones at risk of a shortfall. Each planner has a different approach to managing shortfall deliveries:
- Looking at attributes like how much inventory is available on hand
- Checking the expected demand from other customers during that timeframe
- Deciding which customers or orders to prioritize over others
- Running war-room discussions with other factory managers to facilitate the items’ availability
- Optimizing the routing path from specific distribution centers
Since an individual planner is limited in their view of and approach to such situations, machines can learn the optimal approach by observing the actions of many planners and, through their ability to discover patterns, help automate the easy scenarios.
This is where machines have a vantage point over humans’ limited ability to juggle many attributes or factors simultaneously, as the sketch below illustrates.
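As a hypothetical sketch of how such pattern learning could work, the snippet below encodes past shortfall situations as a few simple features and trains a small classifier on the actions planners actually took. The feature names, action labels, and data are illustrative inventions, not any real planning system’s API.

```python
# A hypothetical sketch: learn which shortfall action past planners chose,
# given simple features of the situation. All names and data are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [on_hand_inventory, expected_other_demand, customer_priority]
historical_situations = [
    [500, 200, 1],   # plenty of stock, high-priority customer
    [50, 300, 1],    # short stock, high priority
    [50, 300, 3],    # short stock, low priority
    [120, 150, 2],   # moderate stock, mid priority
]
# The action each planner actually took in that situation.
planner_actions = [
    "fulfill",       # ship as planned
    "expedite",      # pull stock from another distribution center
    "delay",         # deprioritize the order
    "partial_ship",  # ship what is available now
]

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(historical_situations, planner_actions)

# For a new at-risk order, suggest the action planners typically took,
# automating the easy cases and escalating the rest to a human.
new_order = [[60, 280, 1]]
print(model.predict(new_order))  # likely ['expedite'], per the nearby example
```

In a real deployment, the model would surface only high-confidence suggestions and leave ambiguous shortfalls to the planner, which is exactly the "automate the easy scenarios" pattern described above.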
Mechanical
However, machines are what they are, i.e., mechanical. You cannot expect them to cooperate, collaborate, and build compassionate relationships with teams as empathetically as great leaders do.
I frequently engage in lighter team discussions not because I have to but because I prefer working in an environment where I am connected with my team, and they know me well, too. It is too mechanical to talk only about work from the get-go, or to merely act as if the connection matters.
Lack of Empathy
Take another instance where a machine analyzes a patient’s records and, having made its medical diagnosis, discloses a health scare as-is. Compare this with how a doctor would handle the situation thoughtfully, simply because they have emotions and know what it feels like to be in a crisis.
Most successful healthcare professionals go beyond the call of duty and develop a connection with the patient to help them through difficult times, something machines are not good at.
No Moral Compass
Machines are trained on data that captures the underlying phenomenon, and they build models that best estimate it.
Somewhere in that estimation, the nuances of specific situations get lost. Machines have no moral compass of the kind a judge brings to each case.
To summarize, machines may learn patterns from data (along with the bias that comes with it), but they lack the intelligence, drive, and motivation to make the fundamental changes needed to handle the issues plaguing humanity. They are objective-focused and built on top of human intelligence, which is far more complex.
This phrase sums up my thoughts well – AI can replace human brains, not beings.
Vidhi Chugh is an AI strategist and a digital transformation leader working at the intersection of product, sciences, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, an author, and an international speaker. She is on a mission to democratize machine learning and break the jargon for everyone to be a part of this transformation.