ENSPIRING.ai: Rewriting the future - how devs are driving gen AI
The video explores developers' perspectives on generative AI as a development tool. JJ Asgar, a developer advocate at IBM, sheds light on the misconception surrounding AI's so-called intelligence. Rather than viewing it as intelligent, developers liken AI to a powerful tool that functions like a librarian, helping to manage and retrieve data.
The discussion delves into the importance of transparency, ethics, and governance in AI development. There are concerns about AI's implications for the workforce, but JJ reassures that AI is not a threat to developers' jobs, as it lacks the intelligence and creative problem-solving skills innate to humans. Open source projects like IBM's Granite and InstructLab are highlighted, emphasizing community collaboration in fine-tuning AI capabilities.
Key Vocabularies and Common Phrases:
1. generative AI [ˈdʒɛnərətɪv eɪ aɪ] - (noun) - A type of artificial intelligence model that generates new content based on input data. - Synonyms: (AI model, AI system)
So can you tell us from where you stand or where you sit, how are developers thinking about generative AI as a new development tool?
2. overloaded term [ˌoʊvərˈloʊdɪd tɜːrm] - (noun phrase) - A term that is used so frequently and in so many contexts that it loses a clear meaning. - Synonyms: (ambiguous term, exhausted term)
AI is an overloaded term now that has caused confusion in the market.
3. sovereign AI [ˈsɒvrən eɪ aɪ] - (noun) - AI systems that operate within the boundaries of a specific nation, ensuring data stays within that country. - Synonyms: (domestic AI, national AI)
which allows you to have sovereign AI.
4. fine-tuning [faɪn ˈtjuːnɪŋ] - (noun) - The process of adjusting or enhancing a model for specific tasks or data sets. - Synonyms: (adjusting, refining, enhancing)
and now you can train it, or what we call fine-tuning, to give it more skills and more abilities.
5. open source engineer [ˈoʊpən sɔːrs ɛndʒɪˈnɪr] - (noun) - An engineer who works with open source software, allowing their code to be public and freely accessible. - Synonyms: (public domain engineer, collaborative engineer)
I'm actually an open source engineer.
6. regurgitate [rɪˈɡɜːrdʒɪteɪt] - (verb) - To repeat information without understanding it, often without transformation or analysis. - Synonyms: (reproduce, repeat, reiterate)
AI doesn't have intelligence, doesn't have logic to figure it out. It can regurgitate code that it knows about.
7. transparency [trænsˈpɛrənsi] - (noun) - The quality of being open and honest; readily understood or accessible. - Synonyms: (clarity, openness, candor)
So you mentioned this a little bit before, too, JJ. I believe you said the word transparency.
8. knowledge work [ˈnɑːlɪdʒ wɜːrk] - (noun phrase) - Work that primarily involves handling or using information and knowledge, often requiring intellectual efforts. - Synonyms: (intellectual work, information work)
software engineering as a whole is actually knowledge work, right?
9. cathedral and the bazaar [kəˈθiːdrəl ənd ðə bəˈzɑːr] - (noun phrase) - A metaphor from Eric S. Raymond's essay contrasting two development models: centralized control (cathedral) and open, collaborative development (bazaar). - Synonyms: (structured vs. collaborative model, central vs. dispersed management)
where you're using what they call, I think it was Eric S. Raymond who wrote it, the cathedral and the bazaar.
10. predictive text [prɪˈdɪktɪv tɛkst] - (noun phrase) - A technology feature that suggests words or phrases as you type based on previous inputs. - Synonyms: (text prediction, text suggestion)
It's like predictive text on your phone, if that makes sense.
Rewriting the future - how devs are driving gen AI
It seems like folks treat technology development as if there is an easy button. You know, just press it and it's all good. So I have to wonder, how do the people who are actually doing the work developing these programs themselves, how do they feel about AI? Or the $10 million question, do developers of AI worry about AI taking their jobs? You know, we want to know. So my guest today is JJ Asgar, developer advocate at IBM, and he's gonna take us inside. JJ, welcome.
Hey, thank you so much for having me. First up, you're coming from the developer POV. So can you tell us from where you stand or where you sit, how are developers thinking about generative AI as a new development tool? The core problem with AI, from a developer's standpoint, is it's not very smart. It's really not. So you'll notice quickly that I will not use the term artificial intelligence, because it's not intelligent. AI is an overloaded term now that has caused confusion in the market. There are certain tools for certain jobs, and the best part about AI is it is one of the best librarians you'll ever have in your life.
Hold on. Is that a controversial take right there, JJ? You know, as somebody who lives and breathes this every single day, I'm trying to tell you the truth here. Now, I want to spend a little bit of time on that then, because you really made a point to take away the intelligence part of it. Why can this not be considered intelligent? Oh, that's a philosophical question there, my friend. But in short, what it is is a program that is looking for the best possible answer to what you are asking. There's no logic inside of it. There's no reasoning inside of it.
What it is doing is looking in a database, and it says, you're looking for apples. Okay, cool. Well, these words are really close to the word apple. So maybe you're asking about Granny Smith apples, maybe you're asking about red apples, maybe you're asking about green apples. So it generates an answer to that question by adding those words to the sentence. If you've ever noticed, as you use generative AI, the output comes out word by word, because it's actually looking for that next word. It doesn't figure everything out and then dump it out. It's like predictive text on your phone, if that makes sense.
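JJ's "looking for that next word" description can be sketched with a toy model. To be clear, this is a hypothetical illustration, not how a real large language model works: a real model uses a neural network trained over huge corpora, while this sketch just counts word pairs in a tiny invented corpus and always emits the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model is trained on vastly more text.
corpus = "the cat sat on the mat the cat ate the apple".split()

# Count which word follows which (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    # Return the most frequent follower: the "best possible answer",
    # with no logic or reasoning involved.
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

def generate(start, length):
    # Emit output one word at a time, exactly as JJ describes.
    out = [start]
    for _ in range(length):
        w = next_word(out[-1])
        if w is None:
            break
        out.append(w)
    return " ".join(out)

print(generate("on", 2))  # → "on the cat"
```

The point of the sketch is that generation is a loop: pick the likeliest next word, append it, repeat. Nothing in the loop understands apples.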
No, that makes complete sense. So then I'm wondering, from your perspective as a developer, how do you best go about using gen AI to its maximum capacity? That's a great question. Again, it's another tool, right? Every single company out there, every single person, frankly, has a bunch of documentation, right? They have information that they need to store in what some people call the second brain, right? You know, you take notes, or those notebooks people have with notes and whatever. If you look at generative AI from that lens, it is now a thing that you can query in natural human language, using natural language processing, to ask it questions about those documents.
Hence the librarian. That becomes really powerful, because now a company can have all of its documents, and then instead of going to the HR representative to ask about, you know, your insurance policies from 1963 or, I don't know, whatever number you're thinking, right, instead of asking those questions and having them go look for it, now you have this thing that already knows about all that data, or at least gets really close to that data. Well, then, since you started this, I'm going to go back and forth with you with this library metaphor, because I love a good library, but a library can also be super duper intimidating.
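The librarian pattern JJ describes is commonly implemented as retrieval: find the document closest to the question before generating an answer from it. Below is a minimal, hypothetical sketch that scores documents by plain word overlap; a production system would use embedding models and a vector database instead, and the document names and `retrieve` function here are invented for illustration.

```python
# Invented company documents standing in for the "second brain".
docs = {
    "insurance.txt": "the company health insurance policy covers dental and vision",
    "vacation.txt": "employees accrue vacation days each month of service",
    "expenses.txt": "submit travel expenses within thirty days for reimbursement",
}

def retrieve(question):
    # Score each document by how many question words it shares,
    # then return the best-matching document name.
    q = set(question.lower().split())
    scores = {name: len(q & set(text.split())) for name, text in docs.items()}
    return max(scores, key=scores.get)

print(retrieve("what does the insurance policy cover"))  # → insurance.txt
```

In a real retrieval-augmented setup, the retrieved text would then be handed to the model as context, so the "librarian" answers from your documents instead of guessing.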
Like my university, you know, we had levels and levels and the stacks, and so much information is housed within there. So if we're looking at gen AI in that way, there are constantly and ever-expanding options out there. How does one even begin to navigate and evaluate these different tools? JJ, how do you know which book to pull first? Yeah, wonderful question. One of the biggest challenges is that every single model, which is the brain, right, of the generative AI, is different. There are different ways of designing and programming those.
One of the best parts is there are generative AI models out there, like something called Granite from IBM, that you can run locally inside your own data center or inside your own country, which allows you to have sovereign AI. One of the biggest problems is that, as a company, you don't want to go send your data out to the San Francisco Bay Area and have them crunch the numbers and come back. You're sending it across the Internet. Would you send your secret sauce across the Internet? No. That's a horrible idea, right? Yeah. So that's the power of having these different models and the different ways they're designed.
And the foundational model from IBM, called Granite, is a model that is designed to run in your own data center, and now you can train it, or what we call fine-tuning, to give it more skills and more abilities. Well, then, hold on. So you mentioned Granite. From what you're saying, it sounds as though Granite is open source, right? Yes. You can actually get the paper. Believe it or not, on my browser right now, I have a link to the actual paper, like, above your head, which is really funny. Wow. Cause I read it all the time. It's a math paper, so it is actually kind of hard to read, to be honest with you. But it really is there. And yes, you can actually see exactly what IBM used to build the data set.
So if we're using the analogy that a model is a program, think of the dataset as the source code. Okay, that's not 100% true, right? People are going to pick at me because I said that. But if you're trying to keep that analogy in your head to understand the power of this, datasets are the source code for the models. That's actually what builds the model. It gives it the initial knowledge, and as you notice, I didn't say intelligence. Initial knowledge of what it has to understand. And then you put your knowledge, or what we call fine-tuning, on top of it: your company's PDFs or documents or whatever, inside of it.
Gotcha. Well, sidebar, you're going to start having me say AK instead of AI. Now you're messing with me, JJ. Now, I love that you broke down Granite for us there, but in general, can you let me know a little bit more about why open source could be considered a beneficial thing? And is open source always the ideal? So you, my friend, are a philosopher deep down inside. I'm starting to get this. I'm actually an open source engineer.
So what does that mean? That means most, if not all, my code is out in the public, where you can actually see the tooling and the work that I'm doing, where you can literally just find me on the Internet and be like, oh, this is what JJ's working on now, right? That is the core of open source, what they call, I think it was Eric S. Raymond who wrote it, the cathedral and the bazaar. There's a cathedral, where it's very top-down mandated, which is closed source software. And then there's the bazaar, which is a marketplace where everybody's working back and forth, and you leverage all these engineers across the planet to make stuff.
So what does that mean? Open source allows you to have multiple eyes on problems, looking for stuff. There are certain security issues that have recently happened that have hit the news. One was a closed source system that caused a lot of people a lot of problems traveling. And then there's another one that was actually even worse, but on the open source side, and it was caught before any major issue happened. And it was because some nerd out there couldn't actually access their server as quickly as they usually could, which was really, really interesting. So we had one that took down travel, which was a closed source system, and then we had another one where it was just one nerd who was like, I couldn't log into my server fast enough.
Oh, there's a backdoor in OpenSSH. This isn't good. And then he found the CVE, figured it out, and put it out to the world, and it was fixed before anything happened. Okay, well, see, hold on. That's actually really counterintuitive to me, because I would have thought that the closed system would have been safer than open. Cause when I think of open, I think, okay, people can just come on in here, like, bad actors can come and do their thing and mess around with it. But you're saying that in this case, the open source system, because it was able to draw upon experiences from people that weren't just inside of it, actually ended up being a stronger force.
Exactly. It's 100% that, because you have so many more eyes, so much more experience, right? I mean, the whole story of why you need different people in a room is that you need diversity and the ability for people to come with different viewpoints. And what is open source but the nerds' way of doing true diversity, where you have people who have been in the military, people who failed out of university, people who didn't go to university, all looking at the problem in different ways, and they all resolve it. And there's a handshake agreement inside those rooms that allows you to say, okay, this is a good patch, let's go ahead and submit this. So this fixes the problem.
So you mentioned this a little bit before, too, JJ. I believe you said the word transparency. So if possible, I want to time travel a little bit and go back there and dig on into it, because, you know, transparency, ethics, governance, these are huge questions when it comes to AI or AK in your situation. Um, so what really matters to developers when we're thinking about those big questions, when we are thinking about ethics and data, transparency and governance? So, frankly, as a developer, as somebody who gets to play around in the plumbing, not the porcelain, but the plumbing of the world, right.
The ethics and the governance of it are insanely important to me, because I need to know the thing that I'm working on. Frankly, I'm a human. I like people. I don't want to kill people, right? That's not something I want to do. What a relief. Yes. Yeah, yeah, exactly. But, you know, if we take AI the wrong way, it can really hurt society, right? It really can. And having that governance, having that transparency in it, we can be the rebels. And that's what we're doing here with the Granite model and the transparency: we're giving you an opportunity to actually see into how these models are made, so you can make good choices for your business and hopefully society as a whole.
Let me take it back, then, to this idea that there are legitimate concerns for you to have when it comes to AI, especially as a developer. So I'm gonna get really personal with you for a second. Are you and your fellow developers concerned about AI taking your jobs? No, not at all. Not at all. There are some great stories around using AI to build software, and people are like, oh, well, why won't you just get the AI to do it? People don't realize until much later in their career, and they don't teach you this in university: if you go down the computer science and computer engineering path, they assume that engineering is math and a lot of, you know, sitting there thinking abstractly to figure out problems. Well, believe it or not, software engineering as a whole is actually knowledge work, right?
It's actually artistic, also, where you have to think of problems in unique ways. And back to the intelligence statement earlier: AI doesn't have intelligence, doesn't have logic to figure it out. It can regurgitate code that it knows about. But if I took my business and asked it to create something for it, and then something went pear-shaped inside of it, I would have to have an army of engineers to unwind what it did. And this goes back to the analogy of the librarian, where there are some code completion systems out there, including Watson Code Assistant, which is from IBM, that allow you to use it as a reference, where you can ask it.
It'll give you suggestions to put in an if then statement or stuff like that. As a whole, you would never ask it to build me a piece of software. You'd use it as a pair programmer, a programmer sitting beside you. So you're like, I'm trying to do this. And you write it out as a sentence, and then it gives you a suggestion. And then you look at that suggestion and then you edit it to actually do what you're looking for, right? It gives you kind of a framework, if you will, or a straw man of the problem that you're trying to resolve and then come out with that. Does that make sense?
No, that does, that does make sense and thank you for breaking it down in that way. I just have to give you additional props right now because as someone who's not a developer, you're actually making this make sense to me and I just appreciate you for doing that. But now I want to give you a chance to actually speak directly to some of the developers that may be listening, that hopefully are listening to this right now.
If you could encourage developers to do one thing as they move on with evaluating tools and building solutions, what would that one thing be? As a developer, and hopefully it's a modern-day developer looking at me right now, you probably spent some time in the cloud native ecosystem, right, where we use this thing called Kubernetes and we're trying to do all these VM-to-Kubernetes pod conversions, all that jazz. We thought that was hard, and we thought we were gonna make a lot of money doing that, because that was gonna be the next generation. Well, turns out AI is two generations ahead of that, and it's even harder. So what you've gotta do is go learn this stuff.
And this stuff is confusing as all hell, I'm not gonna lie. And it is a completely different way of looking at it. But it's not just PhDs and Jupyter notebooks anymore; there's actual tooling to get something useful out of it. But you're gonna have to talk to your bosses to understand that as much as the VCs of the world want you to just slap AI on the side of your company or whatever to say that you're doing it, there's a lot more there, and you will quickly realize that there's a lot to learn. And the best thing to do is start from ground zero and learn what a token is. And as soon as you understand what a token is, then find out the next thing you need to learn.
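On that note, a token is the unit a model actually reads: often a word piece rather than a whole word. The sketch below shows the idea with a greedy longest-match tokenizer over a tiny invented vocabulary; real tokenizers such as BPE learn their vocabularies from data, so this is only a conceptual illustration.

```python
# Invented subword vocabulary, purely for illustration.
vocab = {"un", "break", "able", "the", "cat", "s"}

def tokenize(word):
    # Greedily take the longest vocabulary entry that matches at
    # each position; unknown characters fall back to themselves.
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])
            i += 1
    return tokens

print(tokenize("unbreakable"))  # → ['un', 'break', 'able']
```

The takeaway is that "unbreakable" is three tokens to such a model, not one word, which is why token counts, context windows, and pricing are all measured in tokens rather than words.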
I want to invite you real quick to let me know, is there anything that you would love to share that I didn't ask you about today? Oh, actually, yes. Back to the open source story. So we talked about the Granite model and we talked about how all that works. Well, there's another open source project out there called InstructLab that came out of IBM Research and has been donated to Red Hat, which runs it now. It is basically that fine-tuning narrative that we were talking about: putting your company's knowledge, or your knowledge, on top of something like Granite to be able to do something. It's a project in its infancy, but we really do need developers to come into our space to start helping us here, because the more we have there, the more transparency we show, and the more we enable the things that I was talking about earlier.
It all boils down to what we're trying to do inside of InstructLab. And there's enough to learn here that it will teach you the AI ecosystem as you're going down this path, so you'll be able to understand the value of the space. Developers, you hear that, right? You've now got your mission, you've got your charge. JJ needs you.
Well, look, JJ, thank you so much. This episode has been hugely informative. And again, if you are a developer who's been listening, first off, thank you for being here, but I know that you're also going to walk away with some great intel. So once again, appreciate you, JJ. And that's it for today's episode, but y'all please stay tuned for more because you know that it's on the way. We'll see you then.
Technology, Innovation, Education, AI Development, Open Source, Data Management, IBM Technology