
Looking Into The Headlights of AI


Ten years ago

In 2013, David Autor, an economist at MIT who has extensively studied the connections between jobs and technology, observed that, at least since the 1980s, computers have increasingly taken over such tasks as bookkeeping, clerical work, and repetitive production jobs in manufacturing, all of which typically provided middle-class pay.


At the same time, higher-paying jobs requiring creativity and problem-solving skills, often aided by computers, have proliferated. So have low-skill jobs: demand has increased for restaurant workers, janitors, home health aides, and others doing service work that is nearly impossible to automate. The result, said Autor, has been a “polarization” of the workforce and a “hollowing out” of the middle class.


Today in 2023

Reflecting on Autor’s observations of ten years ago, what stands out is that the very jobs he saw proliferating, those higher-paying jobs requiring creativity and problem-solving skills, are now looking directly into the headlights of AI in 2023. In fact, a lot of us are looking into those headlights as we learn more about AI.

While today’s leaders are still dealing with the challenge of sustaining an engaging workplace culture in a largely remote environment, the possibilities of AI can feel overwhelming. And with the number of global Google searches for “is my job safe?” doubling in recent months as people fear being replaced by large language models, employees are stressed as well.


We’ve all had to deal with the little chatbot at the bottom of most retailers’ websites these days, which attempts (often frustratingly) to answer our unique questions.


Love them or hate them, they do alleviate some traffic from overworked call centres. But are your employees ready to be replaced by a chatbot?


Some evidence does suggest that widespread disruption is coming. In a recent paper, Tyna Eloundou of OpenAI and colleagues say that “around 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of AI”, while another paper suggests that legal services, accountancy and travel agencies will face unprecedented upheaval. And, not as one would prefer to hear, these workforce effects will impact leaders as well, as their roles transform.

The speed and scale of AI uptake can be captured by a simple fact: it took ChatGPT just 60 days to reach its 100 millionth user; Instagram, by contrast, took two years to reach the same milestone.


Now, as we know, economists tend to enjoy making predictions about automation much more than they enjoy testing them. So, while much of the news is about the benefits and growth expected for the economy from AI, any increases in unemployment and inequality are very likely to provoke backlashes against productivity and growth. The transformation of the workplace will not happen overnight… and it is unlikely to be pretty unless real planning and due diligence are exercised.


Beyond the impacts on the economy, the employer and the employee, there are a few other niggles about AI technology that have emerged… and they’re not small niggles. Two in particular come to mind here as we look at “generative AI” or GenAI.

Bias and Potential Discrimination. AI learns from the data it is provided, and if that data contains biases, GenAI may perpetuate them in its responses. Any bias in the data AI receives, including race, gender, age, and disability biases, can result in responses that are biased against candidates and employees.


For example, a 2020 Deloitte study described Amazon’s automated recruitment system, which was intended to evaluate applicants based on their suitability for various roles. The system learned how to judge whether someone was suitable for a role by looking at resumes from previous candidates. Sadly, it became biased against women in the process.


In 2016, Microsoft launched Tay, a chatbot intended to learn from its casual, playful conversations with users on Twitter. Initially, Microsoft noted how “relevant public data” would be “modeled, cleaned and filtered”. However, within 24 hours, the chatbot was sharing tweets that were racist, transphobic and anti-Semitic. It learned discriminatory behavior from its interactions with users, many of whom were feeding it inflammatory messages.


There are many other examples of historically tainted data that could affect how GenAI interprets it and to what ends. All said, significant effort will be required of companies to ensure that data is used only for its intended objectives.
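As a minimal illustration of the kind of check this implies, the short Python sketch below compares selection rates across groups in a hypothetical historical hiring dataset of the sort a screening model might be trained on. The records, the field names, and the 80% rule-of-thumb threshold are illustrative assumptions, not a complete fairness audit.

    # Illustrative only: a toy check of historical hiring data for group imbalance.
    # Records, field names, and the four-fifths (80%) threshold are assumptions.
    from collections import Counter

    records = [
        {"gender": "F", "hired": True},  {"gender": "F", "hired": False},
        {"gender": "F", "hired": False}, {"gender": "F", "hired": False},
        {"gender": "M", "hired": True},  {"gender": "M", "hired": True},
        {"gender": "M", "hired": True},  {"gender": "M", "hired": False},
    ]

    applicants = Counter(r["gender"] for r in records)
    hires = Counter(r["gender"] for r in records if r["hired"])

    # Selection rate per group: hires divided by applicants.
    rates = {g: hires.get(g, 0) / applicants[g] for g in applicants}
    print("Selection rates:", rates)

    # Four-fifths rule of thumb: flag any group selected at under 80% of the best rate.
    best = max(rates.values())
    for group, rate in rates.items():
        if rate < 0.8 * best:
            print(f"Warning: group {group!r} selected at {rate:.0%}, "
                  f"well below the best-off group ({best:.0%}).")

A check like this would not catch subtler, proxy-driven bias, but it shows how quickly skewed historical outcomes surface once someone actually looks at the data a model will learn from.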

Privacy and Security. All information entered into the ChatGPT prompt box has the potential to be retrieved by third parties or AI trainers, or even to be incorporated into ChatGPT’s model, and should not be considered secure.


AI presents a challenge to the privacy of individuals and organizations because of the complexity of the algorithms used in AI systems.


As AI becomes more advanced, it can make decisions based on subtle patterns in data that are difficult for humans to discern. This means that individuals may not even be aware that their personal data is being used to make decisions that affect them. AI systems require vast amounts of (personal) data, and if this data falls into the wrong hands it can be used for nefarious purposes, such as identity theft or cyberbullying.
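One practical guardrail that follows from this is to strip obvious personal details out of text before it is ever pasted into a chatbot prompt. The Python sketch below is a minimal illustration using simple regular expressions for email addresses and phone numbers; real personal data is far messier, so treat the patterns as assumptions and a starting point rather than a complete safeguard.

    # Illustrative only: redact obvious personal details (emails, phone numbers)
    # before text is shared with any external chatbot or API.
    # Real PII detection is much harder; these patterns are simplifying assumptions.
    import re

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact(text: str) -> str:
        """Replace email addresses and phone-like numbers with placeholders."""
        text = EMAIL.sub("[EMAIL]", text)
        text = PHONE.sub("[PHONE]", text)
        return text

    draft = ("Please summarise this complaint from jane.doe@example.com, "
             "who can be reached at +1 (555) 123-4567.")
    print(redact(draft))
    # -> Please summarise this complaint from [EMAIL], who can be reached at [PHONE].

Even a simple filter like this makes it harder for names, contact details or customer records to leak into a third-party model by accident.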


What Can Employers Do?

GenAI is able to generate original human-like output and expressions in addition to describing or interpreting existing information. In other words, it appears to “think” and respond like a human.


However, GenAI is limited by the data upon which it was trained, and will not have the judgment, strategic thinking, or contextual knowledge that a human does. These and other technological limitations and risks are why having a sound GenAI Policy is so important.



What Can Employees Do?

The Graduate Institute in Geneva states that “unemployment effects may be limited” but notes that “the impact on income inequality and need for redistribution policy may be large”. Maria Demertzis (Bruegel) argues that the impact on unemployment could depend on reskilling, stating “the quicker this [reskilling] happens, the less the impact on unemployment”.


Note, however, that reskilling to learn how to use AI in your job is one thing… AI replacing your job altogether is another. So, do some analysis of the tasks in your current job and the extent to which AI could enhance or replace them… then develop your career options and plans.
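One rough, back-of-the-envelope way to do that analysis is to list your main tasks, estimate the share of your week each one takes, and score how exposed each is to AI on a 0-to-1 scale; the weighted average gives a crude personal exposure figure. The short Python sketch below illustrates the arithmetic with made-up tasks and numbers, so the scores are assumptions, not measurements.

    # Illustrative only: a back-of-the-envelope "AI exposure" estimate for one role.
    # Task names, time shares, and exposure scores are made-up assumptions.
    tasks = [
        # (task, share of working week, guessed AI exposure from 0.0 to 1.0)
        ("Drafting routine reports and emails", 0.30, 0.8),
        ("Analysing data in spreadsheets",      0.25, 0.6),
        ("Coaching and one-on-one meetings",    0.25, 0.1),
        ("Negotiating with suppliers",          0.20, 0.2),
    ]

    # Weighted average: each task's exposure weighted by the time it consumes.
    exposure = sum(share * score for _, share, score in tasks)
    print(f"Rough overall AI exposure: {exposure:.0%}")

    # List the tasks that contribute most to that figure.
    for name, share, score in sorted(tasks, key=lambda t: t[1] * t[2], reverse=True):
        print(f"  {name}: {share:.0%} of the week, exposure {score:.1f}")

The point is not the exact number but the ranking: tasks near the top are candidates for reskilling around AI, while the ones near the bottom hint at where your role may still need a human.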



The Bottom Line

In the early 1900s, fire horses were replaced by fire trucks, employees learned how to drive the fire truck, and horses were returned to their easier lives in the field. Response times to fires improved dramatically.


The new technologies and manufacturing techniques of the 1920s helped focus the economy on the production of consumer goods, contributing to improved standards of living, greater personal mobility, and better communications systems.




In the 1960s and 1970s, robot technologies were introduced into manufacturing. The opportunity to release human employees from strenuous and health-damaging activities was an important consideration, as were faster throughput and consistent quality. Humans were still needed to maintain the robots, provide creative thinking and critical decision-making, and work in fields that required emotional intelligence.


But if we go back to David Autor’s observations of ten years ago, and the proliferation since the 1980s of those remaining jobs requiring creativity and problem-solving skills, AI would suggest that everything is on the table at this point.


HC2advantage – July 2023


