The video investigates the future of software engineering in the context of Artificial Intelligence (AI) advancements and their implications for the workforce. Several experts in the field of AI and software development discuss whether AI will lead to an increase or decrease in the number of software engineers by 2027. The conversation also emphasizes the potential democratization of coding due to AI tools, and how it could lead to a wider range of individuals from various fields becoming proficient in programming without formal training.
The dialogue further delves into the impacts of AI on the software engineering landscape, highlighting the rising influence of code assistants and the shifting skills required in the industry. As AI tools become more prominent, professionals from eclectic fields may adopt coding, transforming the skills needed in technical project management, design, and collaborative work. This evolution suggests an impending change in educational curriculums focusing more on problem-solving and design rather than traditional coding syntax.
Key Vocabularies and Common Phrases:
1. democratization [dɪˌmɒkrətaɪˈzeɪʃən] - (noun) - The process of making something accessible to everyone. - Synonyms: (accessibility, equalization, distribution)
I think that is just going to open up this sort of democratization of coding that we've all kind of hoped for.
2. proliferation [prəˌlɪfəˈreɪʃən] - (noun) - Rapid increase in the number or amount of something. - Synonyms: (spread, expansion, escalation)
The proliferation of different capabilities; they're not quite combined into a single UI yet.
3. ethical [ˈɛθɪkəl] - (adjective) - Relating to moral principles or the branch of knowledge dealing with these. - Synonyms: (moral, principled, just)
I think the traditional computer science curriculum really needs to adapt, emphasizing creativity, ethical coding practices, advanced debugging and also collaborative coding
4. collaborative [kəˈlæbəˌreɪtɪv] - (adjective) - Produced or conducted by two or more parties working together. - Synonyms: (joint, cooperative, shared)
Curriculums focus a lot on syntax, and so if AI can assist with coding, should educational systems really shift from focusing on syntax to broader problem solving or even collaborative design? Especially as Shobhit mentioned, open source is actually the biggest team sport right now
5. autonomous [ɔˈtɒnəməs] - (adjective) - Having the freedom to govern itself or control its own affairs. - Synonyms: (independent, self-governing, self-determining)
They're strengthening defense with AI agents. So they're revolutionizing vulnerability testing, allowing continuous autonomous scanning that adapts to new threats.
6. monopolizing [məˈnɑːpəˌlaɪzɪŋ] - (verb) - Obtain exclusive possession or control of a trade, commodity, or service. - Synonyms: (dominate, control, corner)
There is this potential for a centralized AI search model to emerge, potentially monopolizing search
7. infrastructure [ˈɪnfrəˌstrʌktʃər] - (noun) - The physical and organizational structures and facilities needed for the operation of a society or enterprise. - Synonyms: (installations, framework, base)
And it'll be really interesting to see how that kind of ecosystem evolves because, you know, people who want to use these agents for bad purposes will sort of need the same infrastructure that, you know, the people doing cybersecurity are engaged in.
8. proponent [prəˈpoʊnənt] - (noun) - A person who advocates a theory, proposal, or project. - Synonyms: (advocate, supporter, champion)
IBM has been a big proponent of having a very open community.
9. vulnerability [ˌvʌlnərəˈbɪlɪti] - (noun) - The quality or state of being exposed to the possibility of being attacked or harmed. - Synonyms: (susceptibility, weakness, exposure)
So Google did a blog post from their security project, Project Zero, that basically reported that they have a cybersecurity agent called Big Sleep that was able to find a vulnerability in SQLite, which if you're not familiar, is one of the most widely used kind of database engines out there
10. ecosystem [ˈiːkoʊˌsɪstɪm] - (noun) - A complex network or interconnected system. - Synonyms: (environment, biome, habitat)
And it'll be really interesting to see how that kind of ecosystem evolves because, you know, people who want to use these agents for bad purposes will sort of need the same infrastructure that, you know, the people doing cybersecurity are engaged in.
SearchGPT, from Naptime to Big Sleep, and GitHub Octoverse updates
Does the rise of AI mean that there will be more or fewer software engineers in the future? Chris Hay is a distinguished engineer and CTO for Customer Transformation. Chris, welcome to the show. What do you think? A billion software engineers by 2027. Wow. 2027. Okay. Shobhit Varshney is a senior partner consulting on AI for the US, Canada and Latin America. Shobhit, what's your thought? Everybody will go from becoming a programmer to being a pro at grammar. I will ask you to explain that more in just a moment. Kaoutar El Maghraoui is a principal research scientist and manager at the AI Hardware Center. Kaoutar, welcome. What do you think? I think it's going to be a different breed of software engineers that we will be seeing.
All that and more on today's Mixture of Experts. I'm Tim Hwang and welcome to Mixture of Experts. Each week MoE brings you the analysis, debate and banter that you need to stay ahead of the biggest developments in artificial intelligence. Today we're going to cover agents for cybersecurity and the launch of SearchGPT. But first, let's talk about software engineering. There's a fascinating blog post that came out from GitHub the other week, basically reporting some data: from their perch, GitHub is seeing what appears to be a rising number of developers, driven largely by tools like Copilot. And second, they also point out that Python is increasingly becoming a really, really popular language, driven largely by data science and machine learning applications.
And this is super interesting to me, and this is one of the reasons I wanted to bring it up as our first story of the day, which is, had you asked me, I would have said, look at where code assistants are going. We're going to eventually just replace all the software engineers. There's going to be no more software engineers in about a decade. And maybe, Chris, I'll toss it to you first, because your prediction is that if anything, we're going to have way, way more software engineers in, I think, 2027. So literally 24 months from now. Why do you think that?
I think that for two reasons. Number one is with code assistants being everywhere and with things like ChatGPT, large language models pretty much in the everyday person's hands, everybody can become a coder. So you don't need to go and pay money to go and get somebody to do that. You can literally have a go yourself. And I think that is just going to open up this sort of democratization of coding that we've all kind of hoped for. And I think more tools will come in, like, you remember Scratch from MIT. I think we're going to see more of that style of thing and everybody is going to become a coder.
The other one is, you didn't say in your question, Tim, whether they had to be humans, did you? So the carbons and the silicons, and there's going to be a whole bunch of silicon coders to match us carbons. So when I multiply that up by 2027, there's going to be a billion, buddy. Okay. All right. That's really interesting. Yeah, I guess kind of what we're talking a little bit about is, the question is whether or not the job of coder, or the category of software engineer, is really going to make sense in the future. It almost feels like no one's like, oh, I'm a word processor, everybody knows how to write.
Shobhit, I know your response seemed to suggest that you think some of the skills you'll need are going to have to change. Yes, absolutely. I think all of us will become pros at writing good grammar, at the way you ask a question and how you describe what you want to get done. A good technical PM does a really good job at explaining what exactly they need so that the developers can go and execute the code to the vision of what the PM had. I think that's going to shift quite a bit.
Let me just spend a minute on just appreciating how far GitHub has come. We just referred to their annual report that talks about all the numbers. Last week we were at their big GitHub Universe event, and we, as IBM, sponsored that as well. Just to give you a sense of how far they have come, GitHub is the world's biggest repository. Like 90% plus of all Fortune companies use it, 98% of developers. We're at about, what, 100 million plus developers on GitHub today, Chris. Not quite at the billion that you want there to be, but in the last nine, 10 years now, the 10th year they've been running this, they've had, what, close to 70 million GitHub issues.
People have resolved close to 200 million GitHub pull requests, there are 300 million plus projects, and whatnot. The way I look at it, open source is the biggest team sport on earth. It's not soccer, it's not football; open source is the biggest team sport. It has been growing like crazy. So when you hear from Tom, the CEO of GitHub, they're giving you actual stats of what they're seeing with people developing more and more. And he's very right to say that AI has lowered the threshold for creating code, for engaging with GitHub repositories, trying them out, downloading them, contributing back to them.
IBM has been a big proponent of having a very open community. We have had a really good relationship with GitHub, and now that GitHub is opening up quite a bit, it has Claude models and Google models that can be leveraged in addition to all the OpenAI models. I think this is just an unstoppable force right now in the industry, and more and more programmers will have access to tools that we just could not imagine having a couple of years back. Yeah, I think one of the most interesting things in the report that they did was also that it seems like the geography of software engineering is changing.
Right, that there are a lot more coders they're seeing from the Global South coming online on GitHub. Shobhit, do you think that's related to code assistants? I'm kind of curious about how you see the role of these assistants in even potentially broadening the geographic scope of who gets to be a software engineer. Yes. So I spend a lot of time with Latin America clients as well in the Americas, and I see a lot of centers developing where all of a sudden the threshold of being able to have economic benefit in the region has come down, so people can go create code and contribute to other locations, other countries, and increasingly a lot of my clients are starting to build their Latin American presence.
The time zone helps in the US as well, but just the access to tools and being able to create in every language. Right now I have an opportunity to know Portuguese in Chile and be able to code and get some assistance in Portuguese while I'm creating code. Right. That did not exist earlier. So the barriers have come down significantly and you see a much lower threshold to entry. This is one additional thing I would add to this. We should also look at the way energy movements happen across the world. Right. If you look at countries like Chile or Latin America, there's a lot of energy that's being created there and you want the AI models to be trained closer to where there's energy.
Energy consumption is going to be so high. I would anticipate more pull towards Latin America or centers where there's energy production in surplus. It used to take a lot to move that energy from Latin America to, say, serve customers in the US; now the AI models will be created closer to where the energy sources are.
Kaoutar, I want to kind of turn to you, building on what Shobhit just talked a little bit about. You know, I think when you responded to the opening question, you said, well, it's going to be more about asking the right questions. And I think that's one interesting item here, one thread to pull on: maybe in the future we're actually going to have a lot more technical PMs than we will have software engineers, because it feels like the role people are increasingly having is they're kind of managing this agent that does the coding, not really doing software engineering themselves. And I guess I'm curious if the right way to think about this actually is that we're just going to have a lot more PMs in the future.
Yeah, I see. Of course, you know, the skills will be changing, shifting. For example, Copilot, what it's doing is demystifying coding for people without formal training, turning more people into kind of citizen developers. So this means that professionals from diverse fields such as data analysis, design, finance, healthcare, etc., can now use code to build custom tools without extensive studying or training in syntax. And this is kind of heading towards a world where basic coding becomes as common as using spreadsheets or even presentation software.
So it's kind of like they're learning to be prompt engineers too, but specifically designing good prompts for software engineering. So I think it's also time to start reimagining what the right developer workflows are here. For experienced coders, AI can handle repetitive tasks, letting them focus on higher order problem solving, for example. And this might alter the skills expected in software development, with coding transitioning more from syntax-heavy work to strategic thinking and architectural design. So I think those really would be good skills to start acquiring. Not really focusing on the syntax, but more on how do you build systems, how do you design systems, how do you put them together, and then using the copilots to help do the syntax work.
I think this also could have implications even for education right now. Curriculums focus a lot on syntax, and so if AI can assist with coding, should educational systems really shift from focusing on syntax to broader problem solving or even collaborative design? Especially as Shobhit mentioned, open source is actually the biggest team sport right now. So I think acquiring those skills, how do you collaborate, do all of these things, do PRs and learn how to work in a team, are going to become really important skills in the future.
I think the traditional computer science curriculum really needs to adapt, emphasizing creativity, ethical coding practices, advanced debugging and also collaborative coding. Yeah, for sure. And I think, Chris, it puts a tough question to you. Your title is Distinguished Engineer, so you spent a lot of time getting really good at the software stuff, right? But if a kid approached me today and said, should I be a software engineer, should I just tell them not to? It kind of feels like where we're headed is, is there any value in actually learning how to code anymore? That, I think, is the question I want to put to you.
No, they should go and play soccer ball or something like that. Really? Okay, yeah, no, I think so. I think the question I would ask is, what happens when it goes wrong? Right. So if we really think about the history of software programming, you're kind of back in the punch cards and the ones and zeros. And then assembly language came along, and then, you know, C. I mean, there were a whole bunch of other languages, Fortran, etc. But then it really took off, I would say, from C onwards, which is very close to assembly language. And then the abstractions got higher up and now we're at Python, etc. And then, you know, we've now got Rust, blah, blah, blah.
So the number of languages is increasing, but it's abstraction layer after abstraction layer after abstraction layer. We've gone from hardcore punch cards to assembly, to low level languages, to garbage collected languages, to higher level languages, blah, blah, blah. And again, all I would say is happening here is we're moving to another level of abstraction. And that level of abstraction is natural language. I think it will be better because with agents we'll have tools, et cetera. But you're still going to want to know the fundamentals.
Because what happens when you get a bug and it can't fix it? Are you going to be like Homer Simpson? You're just going to be hitting the keyboard and go, try again, try again, try again. Or are you going to have to go, oh my God, I'm going to have to use my brain. How dare you make me use my brain. So I think the fundamentals are still going to be there. I see this becoming a higher level of abstraction. Now don't get me wrong, if the models become good enough at some point, then there may be a different abstraction where models may have their own more native language, et cetera. And that's a whole different discussion. But I think I see this as an abstraction because we need explainability, we need the reasoning.
Somebody's going to have to maintain this and look at it, and you can't be fully dependent on the AI. I do want to address one thing though, Tim, on that GitHub report, with Python, we mentioned Python there. Yeah, we didn't talk about that aspect all that much. So yeah, Python being the most popular language. I just want to point out one thing, right? And I love all languages, I love Python. But when number two and three are TypeScript and JavaScript, which are effectively the same language, my friends, and more JavaScript people are becoming TypeScript people, you know, if you add the two things together, who's number one again? I mean, yeah, I had the same reaction.
I mean, I am a Python die hard, but I do feel like that was a little bit funny in the counting, if I might add. I think there are also some risks here. There are potential risks for AI created code; especially as more code is generated by AI, quality control becomes a concern here. How do we ensure AI generated code is secure, efficient, maintainable? So there is also the risk of over reliance on tools like Copilot, which could lead to a drop in fundamental coding skills among the new programmers. So of course there are a lot of advantages here in terms of democratizing, having more developers, lowering the bar to entry and things like that.
But we shouldn't ignore the risks that will come with this either, especially around quality assurance, control, ethical considerations, security, and also when things fail. So can we ensure that we have skilled programmers, people like Chris who, as he mentioned, can dig into a bug and figure out what's going wrong, or will we have fewer skilled people in those fields? What's the right balance here? Yeah, I think it's always going to be the tricky balance between kind of democratizing, making it accessible, making it usable, and then kind of like the reliance on these abstractions.
My mom, who was a coder before her retirement, has a story about, in her early days, carrying a bunch of punch cards to the computer and then dropping the punch cards everywhere, and her having a good enough sense of the program to basically reassemble it physically from the cards. And I was like, that is a level of diligence that modern engineers just would not be able to accomplish. But obviously we are happy that we've moved past the punch card era, for sure. I'm going to move us on to our next topic. There was a great and very interesting story that kind of follows on, I would say, a sequence of stories we've had on MoE for the last few weeks, which is thinking a little bit about the application of AI and specifically kind of agents to the computer security space.
So Google did a blog post from their security project, Project Zero, that basically reported that they have a cybersecurity agent called Big Sleep that was able to find a vulnerability in SQLite, which if you're not familiar, is one of the most widely used kind of database engines out there. And this is a really interesting story because at least by their accounting, this is kind of one of the first instances in which an agent was able to find sort of a genuine vulnerability in the wild in a code base that is kind of like widely used. And so in some ways it's almost kind of like a real kind of hello world demonstration that we might one day be able to use these agents for identifying real world vulnerabilities and making our systems safer.
And so I guess maybe, Chris, I'll kick it to you to kind of kick us off on this topic. But you know, I think the first thing I think a little bit about is: is this the beginning of just kind of a new era? Like we will just start to see agents play a bigger and bigger role in making systems more robust? Or is this still kind of in the realm, you think, of the toy project, right? Like we're still going to be a few years off before we live in that world? No, I think we're already in that world. There's a couple of things about the Big Sleep thing. I mean, the first thing is if you give agents access to tools and then you get them to follow patterns, then the agents are going to do a pretty good job.
So if you think of cybersecurity, go fix me this bug, go identify this pattern, go find me what ports are open on a firewall. These are all things that agents can do today. Now, if we look at the Big Sleep one, and I do want to caution this, because when I read the paper there, the thing that they did is they took an existing vulnerability that existed on that code base and then they got the agent to go search the PRs and say, hey, go find me another vulnerability of this style that matches this pattern that wouldn't have been patched yet. And then it went and found that. So as much as, by my understanding, the agent discovered a vulnerability on its own, at the same time it's kind of pattern matching and was prompted and directed to go and find a bug of that similarity. And that is completely within today's technology.
Agents and models are really good at pattern matching. And if you give them access to a large enough code base via tools, et cetera, access to the PRs and the commits, they're going to be able to do that. Are they quite at the stage of being able to find a whole new class of vulnerability that is completely undiscovered and not prompted and patterned in itself? I don't know yet. I think we're maybe a little bit off that, but I don't think we're too far away from that.
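To make the pattern-matching idea Chris describes a bit more concrete, here is a minimal, hypothetical sketch in Python. It is not how Google's Big Sleep actually works (Big Sleep drives an LLM agent with code-browsing tools); it only illustrates the simpler workflow of starting from a known vulnerability pattern and sweeping recently changed files for look-alike, possibly unpatched code. The regex, the file filter and the commit window are all illustrative assumptions.

```python
# Hypothetical illustration only: a toy "find more bugs like this one" scan.
# A real agent would derive the signature from the original bug report or
# patch and reason over the surrounding code instead of using a fixed regex.
import re
import subprocess

# Assumed example signature: sprintf() into a buffer, a classic C overflow shape.
KNOWN_VULN_PATTERN = re.compile(r"\bsprintf\s*\(")

def files_changed_recently(repo: str, commits: int = 50) -> list[str]:
    """Return C source files touched in the last N commits of a git repo."""
    out = subprocess.run(
        ["git", "-C", repo, "diff", "--name-only", f"HEAD~{commits}", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [p for p in out.stdout.splitlines() if p.endswith((".c", ".h"))]

def find_similar_spots(repo: str) -> list[tuple[str, int, str]]:
    """Flag lines in recently changed files that resemble the known vulnerability."""
    hits = []
    for path in files_changed_recently(repo):
        try:
            with open(f"{repo}/{path}", errors="ignore") as fh:
                for lineno, line in enumerate(fh, 1):
                    if KNOWN_VULN_PATTERN.search(line):
                        hits.append((path, lineno, line.strip()))
        except FileNotFoundError:
            continue  # file was removed in a later commit
    return hits

if __name__ == "__main__":
    for path, lineno, line in find_similar_spots("."):
        print(f"possible variant: {path}:{lineno}: {line}")
```

The LLM-driven version swaps the hard-coded regex for reasoning over the original vulnerability and its context, but the loop, derive a signature and then sweep the repository for matches, is the same shape of pattern matching Chris is pointing at.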
Yeah, pretty interesting. Kaoutar, maybe to bring it to you next, because you think a little bit about the kind of risks around all these technologies. It seems to me, right, that you're going to use this for security, but the bad guys will get access to these agents as well. And it seems very straightforward to be like, I have this vulnerability, find it elsewhere in this code base, which is also exactly the same kind of thing you need to do if you are going to harm these systems.
Curious about how you see that kind of cat and mouse game playing out. Does the defense have the advantage right now? Do you think the offense is eventually going to have the advantage? What does that balance look like as these systems become more sophisticated? Yeah, that's a very good point. Of course, Big Sleep or other similar systems, they're strengthening defense with AI agents. So they're revolutionizing vulnerability testing, allowing continuous autonomous scanning that adapts to new threats. And this is especially beneficial in complex systems or complex environments, like, for example, cloud infrastructures, where doing all this manual monitoring is very inefficient.
And security teams could be empowered and act faster on these emerging vulnerabilities, reducing the attack window. However, at the same time, there is also this threat of offensive AI. So AI driven security tools can also be a weapon in the wrong hands. Just as defenders can use AI to preemptively catch vulnerabilities, attackers could also use similar tools to identify exploits at scale. So this creates this potential, like you said, AI arms race in cybersecurity, where the line between defense and offense is very thin.
Yeah, I think what's so interesting about it is it also suggests eventually we're going to see a whole dark criminal ecosystem which kind of mirrors the kind of one that we have publicly, that there will be basically a criminal Lambda Labs where you can kind of run all these agents, you know, completely free and for criminal purposes. And it'll be really interesting to see how that kind of ecosystem evolves because, you know, people who want to use these agents for bad purposes will sort of need the same infrastructure that, you know, the people doing cybersecurity are engaged in.
Yeah. So I think that's why maybe some ethical and regulatory challenges here will need to be resolved. You know, with this rapid development of AI based security, there is this call for frameworks to also ensure responsible use. How do you protect these infrastructures and tools? So governments, for example, and government cybersecurity experts need to be tasked with also creating ethical guidelines and regulations to balance the benefits of things like Big Sleep with their potential misuse. Yeah, let me give you a client perspective on this.
We do a lot of work with our clients on cybersecurity. We have a whole security services team within IBM Consulting. It's been doing an exceptional job with clients. We also partner very heavily with partners like Palo Alto to do a lot of cybersecurity work with them. And we're leveraging generative AI models and AI models quite heavily in that partnership as well. It's a two way street. It is AI helping drive better security, and the reverse, how do you secure the AI models themselves? Right. If you look at the three different steps that our clients go through, they're securing the actual data that went into the models, securing the model itself from cyber attacks, and then the usage itself.
How do you prevent misuse of the model when it's in production? Right. So across all these three different buckets, we've done quite a bit of work in creating AI models that prevent and detect and can counter these sorts of attacks and things of that nature. We recently released our Granite series of models, Granite 3.0. There are a lot of public benchmarks, and we have some private IBM benchmarks as well, where for every model that we are putting into production, we have the ability to go test it across all these different attack patterns and such. Right. And if you look at that small class of models, which are roughly 2 to 8 billion parameter models, we do a really good job: across all those seven, eight different criteria, the Granite models scored higher than, say, the Llama and the Mistral models and a few others as well.
Then on securing the actual usage, every time you're talking to a model and you're bringing data out of the model, both inputs and outputs get filtered. So I'm much more confident in November 2024 that when we put models in production, there are enough safety guardrails from IBM and other ecosystem partners that we can start to address these fairly well. Yeah, that's great. And there's one subtlety here that I think is worth diving a little bit more into. Shobhit, if you want to speak to it: with Big Sleep, you're basically having an agent, an AI model, examine sort of traditional, if you will, software code.
And it strikes me that there's a whole separate set of questions about how you could use models to analyze the security of models. Right. Because I think obviously where all this goes is that once you do security on agents, it's the security of your security agent that becomes important. I'm curious if you can talk a little bit about how the thinking around that is evolving, because it feels like the pattern matching of, oh, here's a vulnerability in code that we're finding elsewhere, looks a little bit different from how you might use a model to evaluate the security or safety of a model. Yes. And I've been really excited about the work that, collectively, the AI community has done in this space outside of Google.
We've had some amazing work done by Nvidia, Meta, and IBM Research on creating these models that can detect vulnerabilities. Right. So we do that at scale. There's pattern recognition on the logs that are coming out; there's finding security vulnerabilities in the corner cases. You can now start to create infinite possible combinations of how you could break a particular model and you can stress test them in real time. Right. So I think we are doing a good job as a community on sharing those techniques as well. A lot of the work in the space has been very open source, so you can start to compare different models, different benchmarks, private and public, that people are leveraging to test these vulnerabilities in software code. There was a recent paper that came out comparing even the LLM judges.
How do you judge the LLM judge? A lot of this starts to get very meta, and there's AI that's monitoring AI. But I think we are just moving the bar of what a human does versus what an AI does. So if you think about the way we employ people into our organizations, we would have somebody who's a graduate from an amazing school with multiple degrees, just like a really nice LLM. And we're giving them some few-shot learning, some examples during training, saying here's how we do this thing in our company; then you'll give them access to all the other vulnerabilities and all the other things. Right. They are, in real time, reading up on a new vulnerability that happens in a particular environment and then trying to think about how that will impact their own code.
So we're starting to crunch through some of those steps that a human would have done. And if you think about this as bringing a new graduate hire from an institution like MIT or Stanford into your organization for cybersecurity, that's the exact same pattern that we are following with LLMs as well. Yeah, that kind of human metaphor of how we train cybersecurity experts and applying that to the model is interesting, and I think it lands on maybe the final question I had for this segment, which is, Chris, if I can ask you to make another wild prediction for this episode: it feels like the threshold,
the badge of honor, if you're a security person, is that you disclosed a really novel kind of exploit at DEF CON. And I guess I'm kind of curious if you think that agents will eventually pull that off, and if so, do you have an over/under on the year? Is it 2027, when we're going to have a billion engineers, or how far off is that? In 2028, this is my prediction: AI agents will reveal the first human vulnerability in code, and therefore they will say this person here is a human vulnerability and they're doing bad things. So that's my prediction for 2028. It's going to be the other way around.
AI agents predicting human vulnerabilities. Interesting. Yeah, I would love it if the agent finds out, wait, this is the new method for social engineering; that would actually in some ways be very perfect. I think also what's going to be interesting is, as AI finds our security flaws faster than ever, the real question is who's quicker: defenders patching them or attackers ready to exploit them? That'd be really funny to see. And the human vulnerability part, Chris, that you just mentioned, we're doing this for one of the big Latin American banks right now, where we are leveraging some social engineering techniques and such. The emails that you create for social engineering attacks, they just look so plausible, right? LLMs are really good at creating convincing content, and you can trick people, the whole clickbaiting of people into going down a rabbit hole.
That's working out really well. But it's really nice; some of our clients are saying that, hey, I'm not quite sure about putting AI in production, our security teams won't give us the green check. Let's go pilot LLMs for the security team first. If they're convinced and they put it in production, then they don't have an excuse to bottleneck the rest of the organization. It's been a good method working with lawyers and cybersecurity teams in these large organizations. Yeah, it's going to be so hard when you try to log into work and it's like, you've been locked out because you're just too gullible. We've assessed that you can't make it here. Just like, okay. It's coming in 2028, you heard it here first.
For our final segment, I want to talk a little bit about SearchGPT. So it goes without saying that OpenAI is the heavyweight in the industry, the big leader. Everybody's been waiting on their features and what they release. And one thing that everybody's been waiting on for a long time is for them to finally get into the search space. Long anticipated, it finally launched, and OpenAI now has a SearchGPT feature. And this enters a market that's been kind of dominated and competed over by companies like Perplexity. And of course Google, through Gemini, really wants to get into this space as well.
And so this is a big move, right? The big industry leader has finally kind of put its marker down for what it wants to do in search. And I know, Shobhit, you looked into this. The question I always come to is: does this mean that Perplexity is doomed? Is everybody doomed now that OpenAI is in the space? I'm kind of curious about what you think the effect on the market is going to look like. So I recently posted on LinkedIn after I'd had access to GPT search for a while. I pay for a whole bunch of these; I'm very gullible in paying 20 bucks a month to try out all kinds of AI.
So I've been a paid subscriber for a while and I was lucky enough to get access to it. I was comparing it. The closest competitor would be something like Gemini Search and then it'll be things like Perplexity, right? So I think if I did a side by side comparison, I have like 13 different areas of topics that I compared GPT search versus Google Gemini and overall I don't think I'm going to be switching my search from Perplexity and Google and Gemini over to GPT search quite yet. And there are a few things that I found when I was comparing them one by one.
Just to summarize this, I have a whole article giving you visual side by sides, but Google generally is a lot more visual. They have learned from years of UX what's the best way to represent the information for the user, right? So for example, if you're suggesting restaurants, if I ask GPT search to find restaurants in a particular location versus Google Gemini, Google Gemini understands that it's logical to put a map and pinpoint all the restaurants in the response that I'm giving you, right?
So it understands the right UX and people would want to go interact with the graphic and see which one is closest and so on, so forth, right? Similarly, if you're talking about weather, it makes sense. And for the last few years Google has had a really nice card on the very top that tells you exactly what I was looking for. The one thing that GPT still needs to address is they have a proliferation of different capabilities; they're not quite combined into a single UI yet.
So as an example, when I switch over to web search, I lose the ability to upload any content, I can't give any attachments, I can't use any function calling, things of that nature that I'm very used to when I'm using my o1-preview or my 4o, right? Versus in the Gemini world, Google Gemini, they figure out what I'm looking for, right? So the simplest example would be if I'm standing in front of a monument or some place, some landmark, I take a picture, I say, can you find me restaurants around this, right? Now obviously Google Gemini will identify the place with very high accuracy. It'll give me nice recommendations and help me fine tune it.
ChatGPT's GPT search cannot take attachments, so it can't take an itinerary. It can't do things like, if I give you a document and say here are the people that I'm looking for, go on LinkedIn and scrape something for them. It can't act, it doesn't have access to function calling, I can't give it documents, right? So there are certain things that are absolutely missing on the GPT side. I think the last piece that is going for Gemini, which is still why I favor Google Gemini, is the connection to your personal data. I've been a big Google user. Like, my email address is Shobhit at Gmail.
I got that when they were starting at the very, very beginning. So all my data, my photos, my calendars and stuff like that are inside of Gmail. So when I ask about, hey, can I find restaurants near the hotel I'm staying in in Mexico, it'll be able to go find that really quick. It's very personalized. It can go search, with my permission of course; it can go and look into my emails and things of that nature. That has a huge value add to me.
Yeah, that's so interesting. Basically, I think maybe one way of thinking about this competition is how much search is about the form of the results versus the substance of the results, which is kind of, Shobhit, what you're saying: like, oh, when you ask for a restaurant, it's great to have the map and the pins and all the stuff that Google has indexed, even though the response might be less conversationally well flavored than what you might get out of Perplexity or something like that. Just to counter maybe some of the arguments that Shobhit mentioned. Of course, having that personalization is so important, having access to all of that. And I think Google has perfected many of these features given its long history with search.
But don't you think, as GPT is also acquiring more multimodality features, and as more people are using ChatGPT or SearchGPT, that personalization will come along? You know, they'll acquire more personal data, they can also customize things. So I think it may be a catch up game here. One thing also that I find nice in SearchGPT that I still don't see in Google search is that interactive nature; the way they do it, it's basically more conversational search. So unlike traditional search where they give you a bunch of links that you have to click through, this is making search more intuitive, particularly for complex queries or ongoing projects. Users might no longer need to click through a list of links, as the model delivers synthesized responses.
So, Kaoutar, I will push back on that a bit if I may. I think it's unfair, it's apples to oranges, if you're comparing GPT search with the classic Google search; the right comparison would be with Gemini search from Google. Right. So Google's Gemini is multimodal. Like I said earlier, I can take pictures and things of that nature. It is personalized, it can tap into your Gmail and stuff if needed. I can take images and so on, so forth. Right.
Google understands and acknowledges that the blue link world is dying. Their Gemini Google search, I think, is an incredible product. It works really, really well. And they're trying their best to make sure that within the conservative boundaries of what they can do, being such a large company, they're personalizing, hyper personalizing, adding multimodality and things of that nature, looking at very, very long videos and summarizing them, things like that. I think they have a very good moat. But the true comparison is not Google Search blue links versus GPT search. A lot of people in the media are comparing the two together and I feel that it's unfair to Google.
I agree with you, it's not a fair comparison. Yes. And I think the question here is, are we moving towards this one model to rule them all scenario for search, or is it going to be a competition? But we always had one model to rule them all with Google, because they had such a massive 95% plus market share. Right. And is that shifting to Google's Gemini, or will OpenAI now have a place as it's also improving its search capabilities?
So I think OpenAI is going to win this one out, but maybe not for the reasons you think. My experience with ChatGPT with search in this case is that it works as a true extension to the conversation I was having anyway. So maybe I'm looking at a particular paper on something and I want something more up to date; before, without access to the Internet, it's only going to come back with a limited amount of information. Right. With ChatGPT with search, it extends out, it takes its knowledge plus the knowledge that it's got from the Internet and then starts to give me back better answers.
And for me that is the game changing part. And I just found myself using ChatGPT with search more naturally than I did before. So rather than reaching out for Google to go answer that question and then mess around, I'm just doing it within the conversation. Now if I then bring that in with the o1 capabilities, as that starts to get released and as they start to combine the modalities, the fact is OpenAI has been leading on the modalities on this for a while. They're ahead of the game with the o1 models, et cetera, making it more agentic when they bring all that together.
I think Google's got a lot of work to do there. Are they going to go after true search, et cetera? No. But if this is a comparison between Gemini and the o1 models with search capabilities and tools, as it stands today, I think OpenAI is winning that one, and I feel that today from the experience I'm having. And the fact is there are millions of people using ChatGPT today and there's maybe 12 people, and Shobhit, using Gemini. Wow. So I think that's my feeling. Yeah, I think there's a very interesting question here; it's a debate over what we think the commodity asset is and what we think is the irreplaceable asset or the hard to replicate asset.
I think, Shobhit, and I don't want to put words in your mouth, but your position seems to be that all of this data, all of this kind of incumbent advantage, is the hard to replace thing. And I think what Chris is saying is, well, actually getting the data is not the hard part; it's this additional analysis layer which is going to be the really unique differentiator. I don't know, maybe that's the right way to capture that.
There's no doubt that Google is under a lot of pressure. Perplexity has just shown how well they work, and I've been a Pro user for a very long time. Amazing work. Right. So I think, yeah, generally speaking, yes, they have a lot of pressure on getting this right. It's a hundred billion dollar problem for them to solve, so they're putting everything that they can behind it. Right. So they have to make sure that they nail the conversational search part and make it more personalized.
I think the things that are going in favor of Google are the fact that they have the world's data to train on in YouTube and search, and they have decades of the patterns that people follow to get to the right answer when they're planning a trip, things of that nature. They do have a lot that they can tap into that other competitors like OpenAI do not have access to today. Right. So over time they'll try to catch up with each other. Google will always have a lot of fire behind them to go fix this, to get this right.
But just the fact that my personal data is accessible to Google, I think that may change at some point, but in the current state it is more relevant for me to have an answer that's hyper personalized to me and the way I do things. Right. The fact that I'm asking you to set an itinerary in Italy, it should know that I'm landing at 2pm and not start my itinerary at 6am. Right. So there's that fundamental part of me having to tell a model, guys, just understand what is important to me first; you know that the airport is X hours away, so take all of that information into consideration.
And I'm thinking about this from a very enterprise perspective as well. Right. For us, our clients are more focused on, I have all this repository of manufacturing documents and warranty documents and stuff like that, and then I have all of the other data sets. I need to be able to search against those with high accuracy, and the same experience I'm getting with ChatGPT search or with Gemini, I need to bring that to my employees to unlock the value. And it's really nice to see that Meta is starting to get into this game as well. There's been a lot of rumor over the week about Meta coming up with its own search, because now they're incrementally making progress towards that space as well.
So I'm really excited about the future of what happens with getting information to Shobhit in the moment that I need it, hyper personalized to the way I consume information and what's in my emails, things of that nature. And I agree with that, Shobhit. But you know what? I don't want Google having exclusive access to my information. Right. Do you know what, I actually want an open ecosystem and marketplace where I can plug in the agents here: go access my Gmail, go access this, et cetera. As opposed to going, well, Google's already got this information and it can train its models and do whatever it wants with my data and nobody else can play in the system.
So an open ecosystem is where I am. So yes, I agree, but it's got to be open. Yeah. There is this potential for a centralized AI search model to emerge, potentially monopolizing search. While this could bring consistency and ease of use, it also risks creating this information bottleneck. I definitely agree with Chris that having an open system would be better, because if one model provides more search answers, it might centralize information flow, reduce the diversity of information sources and also shape public knowledge in ways we really don't yet understand.
Great. Well, that's all the time we have for today. It's great, Shobhit, that you mentioned that Meta thing, because that was the other part I wanted to get into. So we will definitely have that on a future episode of Mixture of Experts, but unfortunately we are out of time today. So thank you for joining us. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify and podcast platforms everywhere. Shobhit, Kaoutar, Chris, thanks as always. Appreciate you joining us.
Artificial Intelligence, Technology, Software Engineering, Innovation, Future Trends, Coding Transformation, IBM Technology