Artificial intelligence is rapidly reshaping workplaces, prompting concerns about job security even as adoption remains gradual. The FT's "Working It" podcast, hosted by Isabel Berwick, delves into the intricate balance between technology adoption and employee integration. It highlights the uneven adoption of AI technologies among employees, despite the keen interest from company executives in leveraging AI's potential.

Exploring various AI personas within organizations, the video reveals a spectrum of acceptance ranging from enthusiastic maximalists to cautious observers. The discussion also addresses the trust gap prevalent among employees and emphasizes the importance of transparent guidelines and leadership initiatives in encouraging the effective integration of AI tools. With leading organizations setting the groundwork for structured AI usage, trust and managerial support emerge as crucial elements in navigating this technological shift.

Main takeaways from the video:

💡 Adoption of AI in workplaces is uneven, with significant interest from leadership but hesitance among employees.
💡 Effective AI integration requires clear guidelines, transparency, and a culture of trust.
💡 AI presents both opportunities and challenges, questioning conventional leadership approaches and requiring adaptive learning methods.

Key Vocabularies and Common Phrases:

1. generative AI [ˈdʒɛnəˌreɪtɪv aɪ] - (noun) - A subclass of artificial intelligence that uses algorithms to generate new content such as text, images, or code. - Synonyms: (creative AI, productivity AI, AI content generation)

Generative AI is kind of a subset of artificial intelligence more broadly.

2. harness [ˈhɑːrnɪs] - (verb) - To control and make use of (natural resources), especially to produce energy. - Synonyms: (utilize, employ, apply)

Since AI's explosion into public consciousness in 2022, we've also seen a huge push from businesses to harness the power of generative AI to streamline the workplace.

3. urgency [ˈɜːrdʒənsi] - (noun) - Importance requiring swift action; an earnest and persistent quality. - Synonyms: (insistence, imperative, pressing need)

What we see in the data is that the executive urgency to incorporate AI is at an all-time high.

4. persona [pərˈsoʊnə] - (noun) - The aspect of someone's character that is presented to or perceived by others. - Synonyms: (character, identity, role)

We've been really interested in sort of understanding this gap of executive urgency and employee adoption. And so, we did some research to really understand the emotions that people are feeling about AI. And we uncovered five different personas that sort of help us understand the AI workplace.

5. transparency [trænˈspærənsi] - (noun) - The quality of being done in an open way without secrets. - Synonyms: (clarity, openness, candor)

The most important thing is transparency.

6. autonomy [ɔːˈtɒnəmi] - (noun) - The right or condition of self-government, especially in a particular sphere. - Synonyms: (independence, freedom, self-rule)

But are senior leaders and employees really willing to hand over their autonomy?

7. trepidation [ˌtrɛpɪˈdeɪʃən] - (noun) - A feeling of fear or agitation about something that may happen. - Synonyms: (apprehension, dread, unease)

It's the biggest workplace shift in our lifetimes. No wonder there's a lot of hype and some trepidation.

8. circumspect [ˈsɜːrkəmˌspekt] - (adjective) - Wary and unwilling to take risks. - Synonyms: (cautious, wary, prudent)

And so one of the things that I urge bosses to do is to be much, much more circumspect about headcount reductions.

9. skeptical [ˈskɛptɪkəl] - (adjective) - Not easily convinced; having doubts or reservations. - Synonyms: (doubtful, dubious, unconvinced)

But it's right to be a bit skeptical.

10. integration [ˌɪntɪˈɡreɪʃən] - (noun) - The action or process of combining two or more things in an effective way. - Synonyms: (amalgamation, incorporation, unification)

What I will point to is that the thing holding people back from going fast is their data not being in order, integrations not being set up, and people not having an understanding of what's happening.

AI is transforming the world of work, are we ready for it? - FT Working It

Artificial intelligence is set to reshape our workplaces. What does this mean for my job? Am I going to have a job? Will my children have jobs? Companies are itching to incorporate AI into their systems. But are we really ready for it? So 2/3 of our desk worker population are still not using this technology.

I'm Isabel Berwick. I host the FT's Working It podcast and write a newsletter about the workplace. In this series, I'll explore some of the most pressing issues around the future of work and talk to senior leaders about how they're making work better. We have this extraordinary responsibility to shape the new world of work for everyone.

San Francisco's one of the tech hubs of the world, and AI is definitely in the air. Generative AI is kind of a subset of artificial intelligence more broadly. There's a long history behind this technology, but when we talk about generative AI today, what we're really talking about is something that's emerged over the last three years: systems that can translate between text, images, video, audio, even code. Now you're seeing that technology applied to lots of other types of patterns, including things like DNA even.

And so the big turning point was with the launch of ChatGPT at the end of 2022, where for the first time, you know, anybody, any user, could literally just communicate directly and interact with a generative AI system. And since then there have been a lot of these. Since AI's explosion into public consciousness in 2022, we've also seen a huge push from businesses to harness the power of generative AI to streamline the workplace.

The buzz around AI is being felt everywhere. But when talking to many business leaders, one wise person said to me that CEOs have bought Ferraris in the shape of state-of-the-art AI systems; they just haven't given any driving lessons to their staff. A survey of 10,000 desk workers found that the AI benefit that executives are most looking forward to is increased productivity among workers.

But leaders' biggest concerns about embracing AI are around data security and privacy, followed by distrust in AI's accuracy and reliability. What we see in the data is that the executive urgency to incorporate AI is at an all-time high, right? This has increased seven times over the last six months.

So this is the most top of mind thing for executives worldwide. But what's really interesting is 2/3 of our desk worker population are still not using this technology. So there's this really interesting disconnect.

Salesforce Global HQ in San Francisco is home to Slack's Workforce Lab, which studies how to make work better. The team there has been researching what motivates workers to use AI. I went to meet the head of the Workforce Lab and SVP of Research and Analytics, Christina Janzer.

So what are the conditions that might make workers more likely to trust AI or be interested in using it? We've been really interested in sort of understanding this gap of executive urgency and employee adoption. And so what we really wanted to do is better understand the humans, right? Why are the humans using it or not using it?

And so we did some research to really understand the emotions that people are feeling about AI. And we uncovered five different personas that sort of help us understand the AI workplace. The first one is called the Maximalist.

This is a person who's very excited about the technology. They use it very actively. The second persona, also using AI very actively, is called the Underground. And the really interesting thing about the underground is although they're using it and getting a lot of value from it, they're hiding their usage. They're hiding their AI usage because they feel guilty and they feel like people are going to think that they're cheating.

And then the next three are the ones that aren't really actively using AI. So the Rebel is the person who feels like AI is a little bit of a threat. The Superfan is very excited about AI, but they aren't using it themselves. They don't know how to start. And the final one is the Observer.

The Observer is simply someone in a wait-and-see mentality. They show some interest, they show some caution, they're just not actively engaged, and they're kind of just waiting to see how the whole thing plays out. Intrigued, I took Slack's test to find out which AI persona I have. What is your AI persona?

Take the Slack AI persona quiz to find out who you are. How frequently do you use AI tools for work-related tasks? Probably a couple of times a week. How do you feel about the use of AI in the workplace? Excited? Guilty? Indifferent? Concerned? Relieved? Reluctant? Excited.

Actually, I'm concerned about AI replacing my job. I'm quite old, so I'm just going to put two for that. I'm interested in learning or further developing AI skills. Yes, I'm a five on that. I'm a Maximalist.
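As an illustration only, the five personas described above can be sketched as a simple rule-based classifier. The function name, inputs, and decision rules below are hypothetical assumptions for clarity, not Slack's actual quiz scoring:

```python
# Hypothetical sketch of the five AI personas from Slack's Workforce Lab research.
# The decision rules are illustrative assumptions, not the real quiz logic.

def classify_persona(uses_ai_actively: bool, hides_usage: bool,
                     feels_threatened: bool, excited: bool) -> str:
    """Map survey answers to one of the five personas described in the video."""
    if uses_ai_actively:
        # Active users split on whether they admit to using AI
        return "Underground" if hides_usage else "Maximalist"
    if feels_threatened:
        return "Rebel"       # sees AI as a threat
    if excited:
        return "Superfan"    # enthusiastic but doesn't know how to start
    return "Observer"        # wait-and-see mentality

print(classify_persona(uses_ai_actively=True, hides_usage=False,
                       feels_threatened=False, excited=True))  # Maximalist
```

The ordering matters: active usage is checked first, since both the Maximalist and the Underground use AI, and only the remaining three are distinguished by sentiment.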

As a Maximalist, I can see the benefits that AI can offer. But with companies full of staff with such diverging views on AI, is it a good idea to have everyone speeding ahead? So, do you think organizations should have put guardrails in place first?

I mean, AI is moving so quickly. That's hard. Is there a trust piece missing here, I guess, is what I'm saying? There is a trust piece missing. I mean, what we see in the data is that only 7% of workers worldwide fully trust AI.

Right. And that's to be expected with new technology. You have to use it, you have to get used to it in order to really understand whether you can trust it. The other thing that's really interesting about trust is there's a big manager component here. People who feel trusted by their manager are twice as likely to actually try AI.

What I think you're saying is that there's a very human part to this. Do you get on with your colleagues? Do you trust your manager? Do you feel safe to communicate where you've got things wrong? Oh, my gosh, I'm so happy you said it like that because I think so much of the conversation that we're having is around the technology, all of the amazing advances that we're making, all of the amazing things that this technology can do.

But you can build the coolest technology in the world, and if people don't use it, it doesn't matter. And so to your question about should people have come up with guidelines earlier? Maybe. But I also think we need to give leaders a break, right? This is new technology, it's developing so quickly, and we're just trying to catch up. And so what we suggest is it's not too late, right? Now's the time to really sit down and figure out: what is your policy going to be? What are you going to allow your employees to do?

And just be clear, the most important thing is transparency. Slack has found that when businesses cater for all types of AI personas and have defined safe-usage guidelines, employees are nearly six times more likely to use AI tools in the workplace.

But in a recent survey of desk workers, 43% say they've received no guidance from their leaders or organization on how to use AI tools at work. These models are potentially so powerful, they are remarkable in what they could provide to us as humans.

And if we think that analysis and structured thinking and creativity are a net economic good, we really want to be able to distribute that as widely as we can. Tech investor and founder of Exponential View, Azeem Azhar looks at the impact of AI on society.

I invited Azeem to the FT's offices to find out more about what AI can and can't do for the workplace. How are you using AI yourself? At the moment, one of my favorites is that I have a number of different AI assistants that will attend my meetings.

So one is extremely good at taking a detailed transcript. And there's another assistant which evaluates my performance in the meetings. And I'll get an email and it'll say, you did this well. You didn't do this so well.

Next time, try doing this. What some of the academic research has shown is that the more expertise you have, the more you can get out of the system. The reason why somebody who's senior can do better with AI than someone who is perhaps junior is that using a generative AI tool is a little bit like delegating tasks. And who best delegates tasks? Well, people who've been delegating tasks for 15 or 20 years, i.e. the senior exec.

What are the downsides that are obvious to you as someone who is in that world all the time? One of the biggest downsides is that this is still quite a complicated technology. And I think people who've used AI know that it can also be a little bit unreliable.

And when you have a complicated technology that's unreliable, you have got to be prepared for things to go a bit askew and awry. And I think firms have to figure out how they experiment and invest at a pace while recognizing that the ground is going to be shifting quite a lot.

A second issue is going to be about the temptation that companies may have to use this first and foremost as a cost cutting exercise. And the reason they need to be a little bit careful is that this is an unstable market and an unstable environment.

And so one of the things that I urge bosses to do is to be much, much more circumspect about headcount reductions because you never know exactly where the pieces are going to fall. AI is set to be a skills equalizer, helping weaker employees to level up.

But Azeem has highlighted the complexity of adoption, and that it is CEOs who have to lead the charge. I think what's been different with the generative AI wave is that it is so easy to use, and it doesn't require changing your back-end systems or replacing big contracts that you might have with enterprise software companies. In a lot of cases, the tech companies that we already have relationships with in the enterprise, like say Microsoft or Google, are the companies offering generative AI, so they can easily pitch to business leaders, saying: you know, we're the world's biggest enterprise software companies and we think this is going to change the world.

There's been a lot of buzz, now verging on maybe even hype, around: you know, if you don't adopt this now, you're going to be left behind. It's done a good job of both marketing and being a sort of consumer-led technology.

HR software company Lattice has made the move to becoming an AI-powered platform. I went to their HQ to see its capabilities. So right now I'm logged in as Olivia's manager. And what you'll see on the right is a summary of all of the feedback Olivia received over the past year.

Lattice's AI software takes all the available data, feedback, and previous reviews, and learns the tone and grammar of the user. It then creates an authentic performance review. We're the best one in the world.

Some managers are terrible at feedback. They give bad feedback, they're blunt, they're clumsy, they may offend people. What can Lattice do to stop that happening? So what we are actually maintaining is a set of what good feedback looks like.

It should be inclusive, it should be actionable, it should be concise. Regardless of what level of experience you have with feedback delivery, it up-levels your writing in a way that converges with best feedback-writing practices.

So it saves bad managers from themselves? Essentially, yes. I love it. Every time we've had advances in tech, we've had to work harder. Is the promise of AI that this time we'll get it right?

We have more tech that's supposed to simplify, that requires more tech to integrate. But our collaboration has not got easier, because there's too much tech. This is what I think is so powerful with AI: it really is simplifying things. From an experience standpoint, the way that we will experience the technology is the way that we interact as humans.

It doesn't matter if you're in system A or system B; that data is brought together behind the scenes, so when you ask it a question, it can give you an answer. So what are you finding are the main use cases for it? And also, how are people responding, perhaps in a more cautious way? In the world of HR, you know, there's a bunch of structured data around your employee record: your compensation, your performance, the feedback you've received, all of the skills that you may have.

And by being able to bring all of that together and just make your work life easier, right now, to answer questions and give you guidance, that's where we're at with AI. And we're just going to see this develop faster and faster and faster, which is amazing. But then it also makes you question: how do I scale up my teams, my employees, to match these fast-changing expectations, and how do we govern it?

Over the next year, tech companies will unleash the next wave of innovation to business: AI agents. These gen AI assistants won't just tell you what to do, but will be given access to perform actions on your behalf.

But are senior leaders and employees really willing to hand over their autonomy? The question then is, how are we going to manage it? How are we going to hold it accountable? How are we going to be transparent with decisions that we're making?

There is no handbook, so hope can't be our strategy that we're going to get it right. We have to hold ourselves accountable and be very transparent so that we can learn every step of the way.

And so that's a thing for leaders: how do you build trust with your employees? With communication, education, and a deep understanding of what you're intending the AI to do. So things are moving very quickly in the AI world. Is it too quickly?

Should leaders be pressing pause, or how should one best be implementing? It's a great question, because one could say you move slow to go fast. The other thing is you need to be rapidly experimenting to learn along the way. What I will point to is that the thing holding people back from going fast is their data not being in order, integrations not being set up, and people not having an understanding of what's happening.

And then you can move very fast because people will see, oh, I'm getting this value. Oh, my job just got a lot easier. I think Sarah's reassuringly unsure about the ways that AI is going to change how we work.

It's the biggest workplace shift in our lifetimes. No wonder there's a lot of hype and some trepidation. We will only find by trial and error what works.

And companies like Lattice are asking the questions now so that we can all learn later. This is a very expensive technology to build. For now, the companies that are building it are not passing that cost on to consumers or to customers, because they want people to adopt it.

And that's generally the playbook of tech: how do you reach enough people so that you get to a point where you basically can't live without this, and then, you know, you start to make money? That's the phase they're in today.

But that's going to change, because it is so expensive to train AI systems: tens of billions of dollars to build these huge models, and the more sophisticated they get, the more expensive that becomes. So I think the first question is: how much are you willing to pay when it's not clear yet what the real big business benefits are?

So many leaders have gone all in on the hype around AI without really thinking about their specific organizational needs. One size doesn't fit all. Is the key to success simply to take a step back, a deep breath, and think about where AI might truly make a difference, and where it's not needed?

Some staff won't want to be forced into using it, and the tech itself is still imperfect. We aren't very patient about mistakes in the workplace, but will we all be willing to shift our behavior to accommodate the software's learning curve?

AI is going to transform the world of work, no doubt about that. But it's right to be a bit skeptical.

ARTIFICIAL INTELLIGENCE, TECHNOLOGY, INNOVATION, TRUST IN AI, AI IN WORKFORCE, AI PERSONAS, FINANCIAL TIMES