The video discusses the deployment and impact of generative AI (Gen AI) across two different industries: commercial real estate and consumer packaged goods. Mahesh Bhagwati from PepsiCo and Yao Maureen from JLL share their experiences implementing Gen AI technologies to enhance productivity, efficiency, and profitability. They highlight tools such as JLL GPT and JLL Falcon while discussing how PepsiCo uses Gen AI for customized products and large-scale business operations.

The speakers focus on the rapid adoption of Gen AI technologies over the past 18 months. They explain the challenges and benefits of integrating these technologies, noting that initially, there was skepticism and fear among employees, but education and understanding about AI's potential as an enhancement tool have increased acceptance and adoption. Both PepsiCo and JLL are leveraging AI to drive business transformation by investing in foundational capabilities and data management systems.

Main takeaways from the video:

💡 Generative AI is being rapidly adopted for both external and internal business solutions, showcasing its scalability and potential for driving efficiency.
💡 Addressing employee concerns and educating them about AI's capabilities are crucial steps in ensuring acceptance and effective implementation.
💡 Ethical considerations and responsible AI frameworks are critical, with an emphasis on trustworthiness and minimizing biases in AI outputs.

Key Vocabularies and Common Phrases:

1. generative AI [ˈdʒɛnərətɪv eɪ aɪ] - (noun) - A subset of artificial intelligence that focuses on creating new content such as text, images, and music from learned patterns. - Synonyms: (creative AI, machine learning, content-generating AI)

18 months back, nobody was talking about generative AI and LLMs.

2. profitability [ˌprɒfɪtəˈbɪlɪti] - (noun) - The ability of a business to earn a profit. - Synonyms: (earnings, gainfulness, lucrativeness)

I think this session will be more about productivity and profitability.

3. deployment [dɪˈplɔɪmənt] - (noun) - The action of bringing resources or techniques into effective action. - Synonyms: (implementation, utilization, positioning)

What is it that you guys have done so far in terms of Gen AI deployment?

4. scalability [ˌskeɪləˈbɪlɪti] - (noun) - The capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth. - Synonyms: (expandability, extensibility, growth potential)

Gen AI is really one of the scalers for us and one of the unlockers for us in terms of leveraging that data for internal usage as well as externally for our clients.

5. prototype [ˈproʊtəˌtaɪp] - (noun) - An early sample, model, or release of a product built to test a concept or process. - Synonyms: (model, example, trial version)

Yeah, from our perspective at PepsiCo, we've done a lot of POCs, right?

6. upskilling [ʌpˈskɪlɪŋ] - (noun) - The process of learning new skills or of teaching workers new skills. - Synonyms: (training, learning, development)

This requires upskilling your workforce and really raising the bottom talent.

7. observability [əbzɜːrˈvəbɪlɪti] - (noun) - The ability to monitor a system or application by collecting, analyzing, and acting on quantitative data about it. - Synonyms: (monitoring, assessment, inspection)

You also need to actually have observability as well.

8. foundation [faʊnˈdeɪʃən] - (noun) - The underlying basis or principle for something. - Synonyms: (base, groundwork, core)

That's actually providing us that really foundational capability across our length and breadth of PepsiCo.

9. adoption [əˈdɒpʃən] - (noun) - The action or fact of choosing to take up, follow, or use something. - Synonyms: (embracement, acceptance, implementation)

The adoption becomes really fast within the company.

10. rationalization [ˌræʃənəlaɪˈzeɪʃən] - (noun) - The process of making something more logical or consistent. - Synonyms: (streamlining, organization, consolidation)

You've got to be just ruthless really about going after application rationalization.

PepsiCo & JLL on Leveraging AI to Improve Productivity

Hey, good evening everyone. I think what we have heard so far is quite interesting in terms of AI and the use cases. I think this session will be more about productivity and profitability, with more focus on Gen AI, as we were discussing backstage. And I'm really excited to be joined by Mahesh Bhagwati from PepsiCo and Yao Maureen from JLL. So we'll get two very different perspectives: one from a CPG company and one from a commercial real estate company.

Perhaps that's a good starting point. What is it that you guys have done so far in terms of Gen AI deployment? And everyone is focused on scale here, so is it something that you can elaborate on in terms of what you have done so far?

Yao, I'll throw it to you first. Yeah, absolutely. You know, surprisingly, commercial real estate actually needs a lot of technology to run, and at the heart of it is a lot of data. One of the challenges for commercial real estate in using data is that a lot of it is unstructured, and the data is global. So Gen AI is really one of the scalers for us and one of the unlockers for us in terms of leveraging that data for internal usage as well as externally for our clients.

So in terms of what we have deployed so far, really in the hands of tens of thousands of employees at JLL, we have JLL GPT, which is an internal tool plugged into our enterprise data platform that our employees can converse with to get our data at their fingertips. We also recently launched JLL Falcon, which is our AI platform, and this is something that we are very, very excited about. It will help us really scale our production of AI applications, because we don't just want to build one AI application at a time; we have hundreds and thousands of clients, so we want to build AI applications for our clients leveraging JLL Falcon.

There's also a reinvention of our data analytics tool where, to Gary's point (I don't know where Gary is), you can use natural language to really query a lot of the data behind the scenes, so that our clients can get the data very quickly and get insights at a speed they couldn't before. Mahesh?
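For readers curious what "using natural language to query data behind the scenes" can look like in practice, here is a minimal sketch, assuming an OpenAI-style chat completion API, a hypothetical SQLite "leases" table, and the model name gpt-4o-mini. The panel does not describe how JLL's tooling is implemented, so every name below is an illustrative stand-in, not a description of JLL GPT or JLL Falcon.

```python
# Minimal, hypothetical sketch of natural-language querying over tabular data.
# Assumptions (not from the panel): an OpenAI-style chat API, a local SQLite
# database, and a made-up "leases" table.
import sqlite3
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCHEMA = "leases(property_id TEXT, city TEXT, annual_rent REAL, expiry_date TEXT)"

def nl_to_sql(question: str) -> str:
    """Ask the model to translate a plain-English question into one SQLite query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Translate the user's question into a single read-only "
                        f"SQLite query against this schema: {SCHEMA}. "
                        "Return only the SQL, no explanation."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip().strip("`")

def answer(question: str, db_path: str = "portfolio.db"):
    sql = nl_to_sql(question)
    # A real system would validate the generated SQL (allow-listed tables,
    # read-only connection) before executing it.
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()

# Example usage:
# print(answer("Which leases in Chicago expire before 2026?"))
```

In a production setting the generated SQL would be validated and run with read-only permissions, and the rows returned would typically be summarized back into natural language for the user.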

Yeah, from our perspective at PepsiCo, we've done a lot of POCs, right? And I think you get to a point where there's just POC fatigue, specifically in the Gen AI space. But when you talk about profitability and productivity, it's really about scale. We talk about PepsiCo, right? We touch 1.4 billion consumer experiences on a daily basis. That's a lot of people across the globe, across 200 countries. And we love it, and I'm sure you all love it as well. How many of you have had Gatorade? Across the room. Sure. Everyone. Everyone. Right. So thank you. So we love that too.

So one of the examples I wanted to give, which is not a POC but something that's operating at scale: if you go to Gatorade.com now, you can actually access Gen AI really operating at scale. You can now customize your bottle as you're getting ready for the holiday season. Now, I might sound like, as Seth talked about, I'm crossing domains, but that's okay. It's really about getting customized solutions, in this case a Gatorade bottle, and being able to offer it to your loved ones during the holiday season. "You're not using an LLM for that. Come on." We actually are using an LLM; we're using multiple large language models on the back end, by which you can type in your request. Of course, you can do some customizations, and you can also get your name on it as well. And then we've actually partnered with a bottler that does 3D printing, so you can get it shipped over to your home. So yes, we are using large language models, and yes, we're using them at scale. This, I think, is the power of what the opportunities of the future can be from a revenue perspective.

Right. Okay. Sorry if I interrupted your response, but the question I have for both of you actually is this: 18 months back, nobody was talking about generative AI and LLMs. So obviously this is new, and I'm guessing you guys have had to do a course correct in terms of embedding this new technology that came out of nowhere, and now you're deploying it for these types of use cases. So maybe help everyone understand: what is it that you needed to change in your plans from 18 months back to deploy this very quickly, in a short amount of time? And I don't know who wants to take the lead.

I can go first. For us, it's less of a course correct and more about responding to the trend very quickly, and then trying to understand the limitations of where the technology is. Obviously, to Mahesh's point, at the beginning everyone is experimenting; we have a lot of POCs going on. But as you continue to explore where the boundary is, where the limitations are, and what we need to put in place as guardrails, then putting more AI features into production becomes very fast. Obviously, I think Gary also mentioned earlier the training perspective of it.

At the beginning we were worried that employees may not trust it, employees may not want to use it. But with a lot of resources and education internally about what it can do and what it can't, the adoption becomes really fast within the company.

Were they worried about losing their job while they were adopting it?

Yeah, I think, just like with anything new, normally you will see fear. That's just human nature. However, as they start using it (unless you use it, you can't truly understand what it can do), they begin to understand it's an enhancer, not a replacer. And I think that enhancement of efficiency and productivity is where I know JLL is really focusing.

And also I know that a lot of enterprises are experiencing that kind of productivity gain, which is quite exciting. Mahesh?

Yeah. From our perspective at PepsiCo, technology is a layer cake, and I think many of the previous panelists really talked about how they've gone about this journey. We've all seen the advent of the Internet, the cloud, the iPhone, big data, and data science. And so what we've done at PepsiCo was really take a platform approach, a modular platform approach.

A number of our partners have actually helped us with this platform approach. To give you an example, we've got cloud operating at scale, as most of us do; we're hyper multi-cloud. We've got applications, quite a few of them significantly modern, but we've also embarked on a massive ERP journey. So that's actually providing us that really foundational capability across our length and breadth of PepsiCo.

Now we're actually going after one enterprise data foundation. This one's a little difficult: specifically, when you've got different business units that have a lot of independence and autonomy, getting them to one data foundation is difficult. Then on top of that you've got the AI factory, you've got the AI factory ecosystem, and you've also got an ecosystem that provides data science. So once you have that, and over time we've also started building capabilities like IoT, computer vision, and now our own internal abstraction layer, like Seth mentioned, then your ability to go fast really comes from having invested in the foundational capability over decades.

Right. Now there's also one more piece as well, which I think most of the previous panelists talked about: legacy. You've got to be just ruthless about going after application rationalization, systems rationalization, environment rationalization, and at the same time really upskilling your workforce and raising the bottom talent. So it's a collective effort, and I think the modern CIO and the extended CIO family have, over time, actually figured this out. So that's why I don't think this is as big of a pivot as I thought it would be.

Also, I remember at the beginning, when all this Gen AI stuff blew up, a comic that I saw on LinkedIn: the CEO was saying, "I want AI," and the technologist was like, "Oh my God, where's our data?" The data piece is a really, really important foundation, because the barrier to using Gen AI is pretty low right now.

But I see that whether you have your own data or not makes the difference between a high schooler and an expert in a certain field. I think that's what companies need to think about: are you just trying to provide a productivity tool, or are you really trying to drive specific goals with Gen AI? Actually, I remember one of the prior panelists mentioned that these models are more for pattern recognition than for reasoning. What do you guys think today?

I think the large language models are there for pattern recognition. And of course, when they started off, these models were billions of parameters. Now, as you go into GPT-4o, et cetera, you're talking about trillions of parameters from a training perspective. So yes, I think the models that are coming out right now are also reasoning models, because of how much data they've been trained on, plus the volume of models that's out there. If you go to a place like Hugging Face, you're going to see just a plethora of models.

Then you add large language models, small language models, domain-specific models, and what ends up happening is that your opportunity to experiment dramatically increases. The question is, can your organization keep up with that level of innovation happening across the technology landscape? At the same time, is there truly a need for reasoning at this point in time, when you've not even figured out large language models or small language models? That is something we all have to ask.

I think, going back to a lot of my panelists, the big question is why. How much time are you spending evaluating the capabilities of these models? Is GPT better than Gemini or than any other small language model? Is that something that's part of your day-to-day workflow?

Absolutely. So I've got a dedicated team, and we've actually got a new head of AI on my team, and the organization is broken into AI solutions for PepsiCo, AI platforms, and then really evangelism and finding new use cases. And if you find new use cases, how do we drive them at scale? So it's an investment we chose to make, because we started realizing that if you want to talk about AI first, it's sweet to talk about, it's cute to talk about it, but are you really going to make anything fundamental out of it? That's the big question.

So in order to make something fundamental out of it, we've actually gone through pretty significant organizational changes as well within PepsiCo. I report to the Chief Strategy and Transformation Officer, Athena. Underneath her are not only all the technology functions but also all the business transformation leaders: finance, commercial, consumer, and also the sector. How do we land these capabilities and how do we make them relevant for the sector?

So you take it through the full lifecycle, all the way from why you need it, to how you're building it, to how you're deploying it, and then really driving adoption in the field. And this is super hard: without process changes, organizational changes, and fundamental business transformation, you cannot drive this digital transformation. What about you, Yao? I cannot agree more. I think a lot of this productivity gain and profitability gain will have to be accompanied by changes in how we do business and changes in some of the operating models, in addition to adoption.

And I truly believe that those two go hand in hand. You can't just have a technology organization sitting there thinking, hey, how do we use AI? It doesn't work that way. It really needs to be embedded into the business transformation process in order to really get that ROI out of AI.

Are your IT budgets going up as a result of all these investments in generative AI?

That's a sensitive topic. But just like anything, it doesn't come for free, and it's prudent to understand the investment and also make sure you measure the returns. At the beginning you can have all this hype: "I will give you a blank check, you just go do AI." Very quickly, though, we will be asked, hey, how do you measure success? How do you show me that the millions of dollars I put in can give me value? I think that's what a lot of the enterprise technologists, the CIOs and CTOs, are talking about. Yes, I know that some of these tools are helping our people, but exactly how does helping the people and giving them productivity turn into the bottom line? I think that is something we will continue to evolve, to really show the value of AI.

What about you, Mahesh? You guys have deeper pockets.

With deeper pockets comes greater responsibility. And so one of the things is that we've got to eat our own dog food. We're definitely the ones introducing all these amazing technologies, and if that's the case, we've got to be the first ones to eat our own dog food. And so when you're saying that there's an opportunity to gain some 20, 30% in terms of developer productivity, data science productivity, engineering productivity, and then overall, I would say, technology productivity itself...

"But those are all backend use cases, correct?" Right. But you said, first, taking down the IT budget. So I'm just going after that component, saying, okay, I've got to eat my own dog food and really show dramatic efficiency on my end, and then reinvest back into the AI engineering capabilities and data science capabilities. So with that reinvestment, how do you really go after productivity?

And so let me give you an example, right? We have almost 380,000 acres of farmland for potato farming across the globe. "I didn't know Pepsi has potato in it." Oh, yeah, Doritos. I mean, we do Lay's, and we sell almost 2 billion Lay's chips, so we've got the other side, the whole Frito-Lay business. "I don't want carbs in my soda." We'll figure out something for that; Gen AI, I'm sure, can't do anything about it as of right now. But as we start looking at the farm, there's a ton of productivity from a farmer's perspective: one, how do I get targeted pest control and pest management? How do I make sure I'm dramatically reducing the need for herbicide? And how can I reduce water use far more efficiently?

We've seen, with the application of satellite imagery for example, almost a 25% reduction in water usage in many of our farms. That's directly attributable to data science, which traditionally is playing a pretty significant role in driving not only productivity but also sustainability benefits. But most importantly, the farmers are now starting to gain digital tech acumen, and in that process, as I think Seth was saying, everybody goes from being a worker to really becoming a knowledge worker. That transition is quite significant because of Gen AI, because I think it's an equal-opportunity enabler, unlike most of the prior technologies, in my opinion.

I want to end with a question that I think a lot of the folks in the audience are also trying to figure out. Are there any ethical concerns when it comes to deploying generative AI specifically? I don't know who wants to go first. Go ahead.

Sure. I think definitely, right. One, as your consumers and even your customers start using generative AI and its capabilities (we're a B2B2C business, for example), they want to make sure that what they are receiving as a data set is truly trustworthy. So we've got to be able to work with business partners to make sure that the response is trustworthy. For that, not only do you have to have a very strong responsible AI framework, like most of the prior panelists talked about, you also need to actually have observability as well.

Because once there is bias, once there is an issue with even accuracy, most often people start turning back and saying, let me just go back to my traditional method. So that's the big thing: we've got to be careful about bias and ethics, and really make sure it's truly responsible AI. I think we're not there yet; we've got a ways to go as a collective community. Do you think the next version of GPT or Llama or Gemini will do better in terms of the ethical component?

I think the jury is still out, in my opinion. We talked about the human in the loop; we talked about observability, and really strong AI observability capabilities that understand your data but also understand the data the model has been trained on. I think we've got some ways to go as a community.

I also think that the responsibility doesn't just lie with the foundation model producers; it's the same in any enterprise. For example, at JLL we have an ethical use guideline for using any Gen AI products, and you have to click a button saying "I agree" before you can use them. But I do want to advocate for the flip side of this ethical question, and then maybe we can end this panel on a positive note. Yeah, we should. Yeah.

This is a very heartwarming story for me, and I continue to remember it as a reminder of what I'm working for. We started to roll out JLL GPT, and we were just expecting some feedback about how people were using it. But then we started to hear from employees with ADHD, employees whose first language is not English or who have issues with communication, and they started to say, hey, thank you for giving me this tool, because it actually leveled the playing field for me. So there is a good side of AI that we don't often talk about, because it doesn't catch people's eyes. Actually, even as an English-as-a-second-language speaker, I use Gen AI all the time to improve my writing and presentations as well. So I think helping our variously abled employees can be part of what AI brings, the positive side of the ethical question.

Do you want to highlight any other positive sides? Mahesh, you have the final word.

I mean, I think definitely translation, language translation, is one. The benefit to the farmers is another. I think the third big thing is that we've got a pretty significant factory and warehouse workforce, and them now being a part of this overall knowledge economy can be a significant unlock on the positive side, versus more of the things that we've been talking about.

Yeah. Ladies and gentlemen, please join me in thanking my fellow panelists, Mahesh and Yao. Thank you. Thank you.

ARTIFICIAL INTELLIGENCE, INNOVATION, TECHNOLOGY, GENERATIVE AI, BUSINESS TRANSFORMATION, DATA MANAGEMENT, BLOOMBERG LIVE