This video explores the future of AI in education, focusing on its impact by the year 2028. Phaedra Buenadiras, a responsible AI leader, highlights that the extent of AI use in learning can vary greatly based on socioeconomic backgrounds. Skylar Speakman and Marina Danilevsky, both senior research scientists and parents, discuss intentional limited use of AI for their children. They emphasize maintaining a balance between technological aid and human connection in education, indicating personalized learning, curriculum customization, and operational support as current AI uses in the educational sphere.
The experts delve into the opportunities, risks, and ethical concerns related to AI in education, including potential equity issues and the importance of teaching AI literacy at a young age. Marina mentions innovations like gamification in learning, while Phaedra emphasizes the necessity of being critical consumers of technology and understanding AI's socio-technical impacts. The discussion stresses the importance of interdisciplinary teaching, breaking technology barriers, and ensuring that technology reflects societal values and ethics.
Key Vocabulary and Common Phrases:
1. equity [ˈɛkwɪti] - (noun) - Fairness and impartiality towards all concerned, based on the principle of equal opportunity. - Synonyms: (fairness, justice, impartiality)
And I do want to talk a little bit about some of the concerns here, I think. Phaedra, do you want to go into that point just a little bit more? I know you mentioned it in your at the top of the episode, as well as kind of these concerns ultimately about, you know, the equity of these kinds of tools.
2. socioeconomic [ˌsəʊʃiəʊˌiːkəˈnɒmɪk] - (adjective) - Relating to or concerned with the interaction of social and economic factors. - Synonyms: (social-economic, financial-social, wealth-status related)
I think the answer is it depends in particular on one's socioeconomic background.
3. gamification [ˌɡeɪmɪfɪˈkeɪʃən] - (noun) - The application of game-design elements and principles in non-game contexts to engage users. - Synonyms: (game-oriented, interactive, engaging)
I think that there's a lot to be said for some very interesting games and gamification that is going on in the educational space
4. bias [ˈbaɪəs] - (noun) - An unfair preference or prejudice for or against something or someone. - Synonyms: (prejudice, partiality, favoritism)
One of my favorite definitions of the word data is that it's an artifact of the human experience. We humans, we generate the data or we make the machines that generate the data. But it's important to recognize we humans, we have over 180 biases and counting.
5. interdisciplinary [ˌɪntərˈdɪsəplɪnɛri] - (adjective) - Involving two or more academic disciplines that are usually considered distinct. - Synonyms: (multidisciplinary, cross-disciplinary, integrative)
It is truly interdisciplinary
6. algorithmic [ˌælɡəˈrɪðmɪk] - (adjective) - Relating to or using a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer. - Synonyms: (procedural, rule-based, computational)
And we have used games to introduce the girls to things like algorithmic bias.
7. artifact [ˈɑːrtɪˌfækt] - (noun) - An object made by a human being, typically one of cultural or historical interest. - Synonyms: (relic, object, creation)
One of my favorite definitions of the word data is that it's an artifact of the human experience. We humans, we generate the data or we make the machines that generate the data.
8. generative [ˈdʒɛnərətɪv] - (adjective) - Capable of producing or generating something, especially in the context of AI models that can create content. - Synonyms: (productive, creative, inventive)
They are not using generative AI, but more traditional forms of AI, and it's specifically targeted towards younger kids who naturally talk to the TV.
9. introspection [ˌɪntrəˈspɛkʃən] - (noun) - The examination or observation of one's own mental and emotional processes. - Synonyms: (self-analysis, reflection, contemplation)
But we have to be brave enough and introspective enough to look into the mirror and decide, does this reflection actually align with my values, with my organization's values?
10. sociotechnical [ˌsoʊsiəˈtɛknɪkəl] - (adjective) - Pertaining to the interrelated social and technical aspects of an organization or situation. - Synonyms: (social-technical, interactional, integrated)
It takes a tremendous amount of work and it's not strictly a technical problem at all. It is a sociotechnical problem.
AI in education - Safety, literacy, and predictions
It's 2028, three years from now, and you are 12 years old. How much of your learning is done with an AI assistant? A lot, a little, or none at all? Phaedra Buenadiras is the responsible AI leader for consulting. Phaedra, welcome to the show for the first time. What do you think? I think the answer is it depends, in particular on one's socioeconomic background. Yeah, we will definitely be talking about that. Skylar Speakman, senior research scientist, welcome back to the show. Skylar, what do you think? Yes, but a little. And that is intentional for my three kids growing up in Nairobi, Kenya. Awesome. And last but not least is Marina Danilevsky, senior research scientist. And I believe we're talking about your kid here, who will be 12 in 2028. What do you think? For my son, I think it'll be a little, also intentional, even though, and maybe especially because, I live in the Bay Area.
All right. Awesome. All that and more on today's Mixture of Experts. I'm Tim Hwang, and welcome to Mixture of Experts. Each week, MoE brings you the analysis, hot takes, and banter that you need to keep up with the ever-hectic world of artificial intelligence. Today, as we get to the end of the year, we're going to focus the entire episode on AI and education: what it means for AI to be used in education, the risks and opportunities, and where we think it'll go in the future. So let's just dive into it.
And Phaedra, I want to turn to you first to set the stage for our listeners. I think AI in education is widely hyped, and it's sometimes difficult to know what's actually going on. So I want to give our listeners a lay of the land to start. What are the big uses for AI in education right now, and where do you expect it'll go in the next few years? Well, if you're thinking about using artificial intelligence in an education context, I think it's really important to be focused specifically on personalized learning and customized curriculum, but then also on utilizing it to help teachers curate that curriculum and augment their day-to-day. Additionally, there are all kinds of things happening in the back office to help with operations.
But in addition to having a conversation about the usefulness of artificial intelligence in an education context, I think it's also important to have a conversation about how we need to change our approach to how we're even teaching the subject of AI in schools today, and how that needs to change going forward. Yeah, that's great. I mean, I guess your short answer is it's happening everywhere: the front of the educational experience, school operations, teaching people about AI. Marina, Skylar, I don't know if either of you want to jump in as parents yourselves. I'm curious what you're seeing on the ground with your kids. Are you seeing teachers starting to use AI tools, or people being encouraged to learn about AI? I'm curious how that's all playing out in your experience.
So I am seeing it come out a bit more with our kids and their teachers. I think one of my takes on this, at least for primary education, is that this is an opportunity for us to channel all of the people-hours that ideally will be made available with the advent of AI. So I think it would be really great to keep the personal touch in the primary education space, because of all of the enablements we've had in other sectors. Keeping this balance between the role of AI and that human connection at the front of the classroom is so important for what we, at least, are looking forward to for our kids.
Yeah, for sure. Marina, what are you seeing? I'm kind of curious. I mean, Skylar's in Nairobi, you're in California, very different places. But are you seeing the AI wave appear in your kids' education? Yeah, absolutely. I think that there's a lot to be said for some very interesting games and gamification that is going on in the educational space. So one thing I'll call out, without trying to be a sponsor, is Osmo: really, really great games that would not have been available even a few years ago, because of the capabilities of the tablet camera to see and directly interact. So it's this really lovely mix of what's going on on the screen, but also being able to do things that are physical, for spelling, for math, for coding. Those kinds of things are really great. Same thing with programmable robots, Botley and things of that nature. That's the kind of thing that's showing up as well. So I think that it's very interesting. It gives a lot more options for how kids can be exposed to these concepts, and that seems to be a good thing, since kids learn differently.
Yeah, for sure. One of the things I'm most excited about is that you've got all these options for learning the same topic now, which feels really interesting. Phaedra, let me ask you this: we're looking ahead to the next year, 2025, and I think we're going to hear a lot more about AI in education. What are the big trends? Do you have the one thing where you're like, wow, this is really going to knock people's socks off in the next 12 months? I'm curious what our listeners should be paying attention to.
Well, I was interested this week to see an article come out about PBS and their use of artificial intelligence, in particular enabling children to have conversations with some of their favorite characters in the PBS learning shows. They are not using generative AI, but more traditional forms of AI, and it's specifically targeted towards younger kids, who naturally talk to the TV. So I thought that was really interesting. I think we're going to see more very clever ways, as Marina said, at the intersection of play and AI. And I'm really looking forward to seeing how that takes shape in the realm of education, and in particular ways of harnessing it to address more equitable outcomes in education.
Yeah, for sure. And I do want to talk a little bit about some of the concerns here. Phaedra, do you want to go into that point just a little bit more? I know you mentioned it at the top of the episode, these concerns ultimately about the equity of these kinds of tools. It's really important that we're teaching people how to be critical consumers of technology at large, and in particular, teaching people the real nature of AI and the real nature of data. One of my favorite definitions of the word data is that it's an artifact of the human experience. We humans generate the data, or we make the machines that generate the data. But it's important to recognize that we humans have over 180 biases and counting. So what's really interesting about AI is that it acts as a mirror that reflects our biases back towards us. But we have to be brave enough and introspective enough to look into the mirror and decide, does this reflection actually align with my values, with my organization's values? If it does, it's important to be transparent: why did you pick this data? Why did you pick this approach? And if it doesn't align, that's when you know you need to change your approach.

And it goes back to the conversation I had at the top of the hour about not just having a conversation on how AI can be used to transform education, but how we really need to be teaching this in schools. Because if you're lucky enough to be able to take a class on the subject of AI or AI ethics or data ethics, you're probably in a higher ed institution, and you have self-categorized as a coder or a machine learning scientist or a data scientist, and that leaves out literally everybody else on the planet. So I think we need to be thinking about how we bring this kind of holistic, multidisciplinary curriculum much earlier into people's academic careers. In fact, I see no reason why we shouldn't be teaching this in middle schools, and in particular in social studies class rather than computer science class; social studies is where I think this subject ultimately belongs.
That's great. And I want to get some more concerns out on the table, and talk a little bit about how we approach these sorts of issues. I know both Marina and Skylar said "a little," and that it's by choice, in terms of kids using AI to learn, and specifically AI assistants, which is how I had teed it up. I guess, Marina, maybe I'll choose you first and then we'll go to Skylar: why is that? What are your concerns there? Why would you want to limit access to these tools as a way of learning?
Well, I think it's because of the nature of the tools right now. If you actually go into generative AI, and not the more traditional ML-style AI, it wants to adjust itself almost a little too much to the person. And that's a good way to fall down rabbit holes that kids are maybe not yet very well equipped to handle. There needs to be some structure around that. So on the one hand, it's good to have the adjusting personalization. On the other hand, it can be dangerous. So I hope that there's going to be a decent amount of oversight for that kind of thing, and a way of teaching critical thinking as well.
So I think that from a very early age, what kids can be taught, again in a gamified way, is: how do you trick it, how do you break it, how do you make it lie to you as well as tell you the truth? Then you start to really understand, even as a kid, how to set the expectation that it's not an oracle; it's potentially more like Loki, like the Trickster. And you see that that's maybe the kind of back-and-forth, mildly antagonistic relationship you might want to have with it. It'll help with critical thinking, too. Yeah, I love that. Part of the education here is getting kids to break these technologies. That feels very rich, and something we should talk more about. Skylar, do you want to get in?
I'm curious if you share Marina's concerns, or if your worries about this technology run in a different direction. First of all, plus one on the gamification; I think that really is such an important catch for these kids going into this technology space. But I do want to give an example I saw on Facebook earlier this week of a really great balance between generative AI technology and classroom leadership. One of my friends from undergrad is a primary education teacher in the US, and he had this really cool post where he generated some prompts that he used in his class to write act one of a play. His students acted it out, and then the students had to write act two of the play. Having that type of dynamic leadership at the front of the classroom, with content coming from both generative AI and from the kids themselves, playing back and forth, was just a really cool example of balancing the roles of generative AI, social interaction, and leadership in the classroom.
So, yeah, shout out to Donnie Pearcey on that. And Skylar, to close this section before we talk about ways of addressing these types of concerns, I'm curious if there are any other items you might want to throw on the table. Phaedra's talked a little bit about the equity concerns here; Marina's talked about dependence and personalization as things we might worry about. I'm curious if other things come to mind as we think about how to responsibly deploy this kind of tech. Yeah, I think those are some really great examples on the responsibility and equity angle. In particular, my kids do go to a private school here in Nairobi, Kenya, and that looks quite different from the global majority around the world, north, south, east, west. So I think making sure that that is recognized and top of mind for how these things are deployed across schools of all sorts of socioeconomic backgrounds is a key point, one that Phaedra started off the conversation with.
I do a lot of volunteer work with the Girl Scouts, and we have used games to introduce the girls to things like algorithmic bias. But then I think it's important to have conversations with them, like: give me examples of where AI has delighted you. Now give me examples of where you were playing around with an AI and the output made you feel really bad, where you knew it was wrong or it didn't make you feel good. And listen to what they say. It is, I think, really telling when you invite a young person to be a critical consumer of the tech and to really think about things like disparate impact or unfair outcomes. It's very, very telling. And again, it goes back to what I was saying at the onset of this conversation: this is far more about social studies. Whose worldview is actually being depicted in this AI model? Beyond just "can I trust the outputs," whose worldview is being reflected in this model? It also means teaching them to ask critical questions like: who's accountable for this model? How much better does it perform compared to a human being?
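The "disparate impact" the speakers keep returning to is actually a quantifiable check, not just a talking point. As a minimal illustrative sketch (not from the episode; the groups, outcomes, and function name are made up for this example), one common measure is the ratio of favorable-outcome rates between two groups, often compared against a 0.8 ("four-fifths") threshold:

```python
# Illustrative disparate impact check on hypothetical model decisions.
# disparate impact = P(favorable | unprivileged group) / P(favorable | privileged group)

def disparate_impact(outcomes, groups, favorable=1, unprivileged="B", privileged="A"):
    """Ratio of favorable-outcome rates: unprivileged group over privileged group."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in selected if o == favorable) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Hypothetical loan decisions (1 = approved) for two groups of applicants.
outcomes = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups)
print(ratio)  # 0.25 -> group B is approved at a quarter the rate of group A,
              # far below the common 0.8 "four-fifths" fairness threshold
```

Production fairness toolkits, such as IBM's open-source AI Fairness 360, implement this metric and many related ones; the point of a toy version like this is that even a middle schooler can compute it and ask why the two rates differ.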
It's just, I think these are all important things we need to be teaching the next generation. I think that's a great lead-in to the next segment, which is thinking a little bit about how we address some of these concerns in the technology. And Phaedra, you've been thinking a lot about these issues and have done a lot of work on them in the last few years. In particular, right before this episode I was reading a little more about your work with Smarter Balanced, and I'm curious if you want to talk about your work there and how it applies to some of the issues we've been discussing. Yes. This ed tech company out of the state of California was interested in addressing inequity in traditional educational assessments. There's been a lot of research showing that traditional educational assessments are inequitable for a wide variety of reasons.

English might not be your primary language, or you might suffer from test anxiety, or you might be neurodivergent. There are countless reasons why traditional tests might not work right. So they wanted to experiment on whether artificial intelligence could directly address some of this inequity. One of the things they tasked us to do was to form a think tank that included students from all over the world and teachers in elementary, middle, and high school, as well as people with leadership roles in neurodivergent communities, et cetera. We pulled together this think tank and really dove into some very specific use cases for these AI models. For example, if you were going to use an AI to ascertain, say, a sixth grader's ability to comprehend a passage of text and have deeper conversations about that passage, what would the unintended effects of such a model be?
What are the unintended effects? And then, given those potential categories of harm and the principles this think tank came up with, how would you detail the functional as well as the non-functional requirements needed in such a model? The principles the think tank came up with were really interesting. At IBM, for example, we detail fairness, explainability, robustness against adversaries, transparency, and data privacy. Right? This think tank, thinking about the fact that these AI models are going to be used by children, included principles like kindness, data sovereignty, and agency. And so a lot of the work was thinking through what it means for an AI model to reflect a principle, a human value, like kindness.
What does that look like in terms of feature and function? It was absolutely fascinating work, and that report is being made public. Yeah, Phaedra, I think that's great. One of the things I'm really excited to see is all of these groups starting to articulate much more crisply what values they want out of these technologies. It's such important work because it helps to really set up the goals: what do we need to do to make sure these systems are doing what we want? Well, part of it is we need to know what we want in the first place. Skylar, I wanted to give you a chance to give a little bit of a travel report. I know you were at the AI Safety Institute's conference, which as I understand is very much involved in the process of trying to develop evaluations and standards for the space. Did any of these topics come up? I'm curious how that might plug into what we're talking about here.
Yeah, I think it came up in two ways. One was directly, with education as a use case, and the second was a bit more indirect: what are these international AI safety institutes doing for capacity building and awareness? We've already hit on these two topics, about how important it is for these young consumers to be critical about the technology; that's the capacity building and awareness side. And a bit more on the policy side, this technical gathering of the AI safety institutes was really trying to spell out how we do risk assessment, everything from the doomers' end-of-the-world scenarios to addressing the day-to-day harms that we already see in deployed models. So it was really a fascinating couple of days, with technology experts, academics, and policymakers trying to come together and put language down so that in Paris, a few months from now in February, these countries can come together and sign multilateral agreements about where they want to prioritize AI safety.
From education to healthcare to market competition, it's a really, really cool space to be a part of. And that all just concluded last week in San Francisco; I was there representing the Kenya delegation. Quite an interesting event. Yeah, that's really exciting. And I think part of it is, especially in the U.S., education is regulated at such a regional level. It's exciting to hear that at the international level we're trying to develop these global standards.
You used two key words there: regulating and standards. The Secretary of Commerce presented at this conference, and she was incredibly clear: the AI safety institutes are not regulators. They are there to catalyze and provide standards. So it was a really, really cool conversation to have there. Both of those areas have a role to play, but these AI safety institutes are much more about catalyzing and forming standards, and not yet on the regulator side. So Marina, maybe I'll present to you a little bit of a hard question that I've been mulling over. As we've talked about, I think there's huge opportunity with this technology. There are certainly risks, but there's a lot of work being done to try to mitigate them.
But I'm sure some of our listeners will be listening to this episode and saying, well, there's maybe one thing we haven't talked about, which is: can someone just refuse, in the future, to use AI? Should we give students the right to opt out of AI entirely? It seems like a lot of the discussion we've been having here is, well, the technology will be here, we'll just have to mitigate its risks. But I'm curious what you think about that. Should that be something we're trying to protect as we build this new educational ecosystem? Or is it ultimately very challenging, given how AI appears to be headed toward being ubiquitous in the future?
Well, actually, that's interesting, because I would ask you: what do you think would be the motivations for a student to decide to opt out? I can see a couple of things. It could be parent-driven. It could be because a student wants their voice to remain theirs, without any AI assist in anything. I mean, again, can you learn things without AI? Yeah, we've been doing it for a while, so probably. But what would be the motivation, do you think, for opting out? I would say there's probably a lot of fear of the technology itself. Which is to say: I don't know much about it. I learned the old-fashioned way. I can imagine that being a very strong incentive. I learned with books; I don't know why we need these new AI assistants.
You know, I think that's probably one of the risks. I'm sure there's also a privacy risk. I'm sure some parents say, where is all the data about my kid going? Do I have any control over that? So you're right, I think there are a couple of reasons why someone might be concerned about it. But like any new technology, I think there's just a lot of fear over what it is and what it might be doing to your kid. Right. The data is a really fair risk, although that's something that maybe parents understand better than their kids do, especially today's kids, who have grown up not even thinking about the fact that everything they do is online. But then there's the idea of what it means to learn with it.
I think this goes back to a lot of interesting things that Phaedra pointed out. Are you going to be subjected to biases without even understanding that you are? Are you going to end up in some sort of an echo chamber? Are you going to miss the breadth and depth of the concepts you're trying to work through? A human might find the appropriate times to push back, to stop, to pause, to redirect, and AI is not going to do that most of the time. What the AI assistants really want to do is keep hurtling along at speed in the direction they've been pointed, at least so far. Maybe things will change. On the other hand, part of education needs to be how you function in society. And even if you opt out, you do need to know how to handle it when it comes your way, or when it comes the way of your friends or your family.
So even if you have that critical an eye, I think it's not great to say, "I'm not going to learn." It's like saying, "I'm not going to learn to follow traffic signals." Well, I guess you can opt out, but it's probably not a very good way to be a part of society. So you at least have to learn about it, even if you don't want to fully participate. Yeah. And I think this is the third topic I really did want to touch on: we're now moving away from AI being the teacher here to the difficult, really interesting questions around AI literacy. You might opt out, but we actually think it's really important, because you need to know how to work with these systems in the future. Skylar, you're smiling. I guess you might want to jump in.
Well, I was just reflecting a bit. Do you think that opt-out conversation is happening at the family level, at the classroom level, at the school level? Maybe not the opting out itself, but the decisions to really engage with this technology: how do you see that working out at a practical level? What level of decision-making do you think is going to drive that type of adoption? Yeah, I think it's complex. The short answer is I can see it emerging at any of those levels. A school district might say, this is untested, we're going to opt out. I could imagine a parent saying, I don't trust this technology, we're going to opt out. I could also imagine a kid just saying, hey, I don't learn great this way. You know how I learn best?
I learn best with books, so I want to opt out. I could see it happening across all those levels. Phaedra, you're right in the middle of it; I don't know if you want to jump in and respond. I would say that the reason an individual or a group or a school or a state would want to opt out is that they don't trust it. And there are many reasons why someone might not trust an AI model. Earnestly, it takes a lot of work to earn somebody's trust. It takes a tremendous amount of work, and it's not strictly a technical problem at all. It is a sociotechnical problem, and like any sociotechnical problem, it has to be approached in a very holistic way, beginning with accountability.
Do you actually have a group of individuals who are being held accountable for making sure that this model behaves the way it's intended to behave? Are they being transparent about this model, and about the worldview that has been embedded within it? The data: was it gathered with consent? Is it representative of all the different communities that have to be served in an educational system? Is it the correct data to use, according to real domain experts who understand the context of this data and the relationships within it? And I'll tell you, I think it's very unfortunate that so many organizations are ill-prepared to be held accountable for these models. Again, it goes back to why the emphasis on AI literacy, and really understanding the level of effort that needs to go into these AI solutions in order to earn people's trust.
And honestly, as I said, the hardest part is not technical. The hardest part is the social part: making sure that you've got the right organizational culture and processes in place, as well as the tools and AI engineering frameworks, to do this work in a responsible way. Yeah, for sure. And I want to unpack that a little bit more, Phaedra. What does AI literacy look like in practice? Is it: okay districts, okay parents, okay kids, here's a curriculum, you have to go through the AI 101 class? Or is it something else that you're envisioning? Oh, heck no. First of all, it has to be multidisciplinary.
Now, when I say multidisciplinary, I mean get it out of strictly computer science class; bring in schools of philosophy, schools of government. It is truly interdisciplinary. And the challenge, at least within the United States (I'm not going to speak for other countries), is that public school systems, and even higher ed institutions, have been extremely siloed with respect to how they teach disciplines like artificial intelligence. As I mentioned at the beginning, if you're lucky enough to take it right now, you're most likely in a school of engineering. And you're not bringing in linguistics professors, you're not bringing in philosophy professors to talk about worldviews and ethics, or even disparate impact, to give an example.
I've come across AI practitioners developing AI models to do something like predict what interest rate people should be offered on a home loan, who don't know what the word redlining is. They've never heard it before. And again, this points to why we desperately need a multidisciplinary, interdisciplinary approach to how we teach this subject. In other words, AI is not the death of liberal arts education. If anything, it's more important than ever. That's right. She's right. She's absolutely right. And even when you look at generative AI, look at how much it's being used to do coding now. What does that mean in terms of the programming profession, when now people are saying we need more English majors to be able to craft the right prompts?
Right? So she's right. Liberal arts education is now more important than ever, so that we understand what is inequity, what is human history, what is disparate impact, how do we approach ethics in a way that's holistic and representative of all the people that we need to serve? I'm just now so much more optimistic about my undergraduate liberal arts degree. Yeah, thanks. It was all worth it. Yeah, yeah, for sure. I mean, and I guess, I don't know, it strikes me, Phaedra, I don't know if you'd agree with the statement that the stakes are pretty high here in terms of getting this AI literacy bit to work properly, because it does seem like, look, irresponsible deployment of the technology could lead to some kind of incident that really reduces public trust. That means there's going to be less use of the technology going forward, fewer opportunities to show that the technology can really create real benefit.
It almost feels like the getting-the-trust-and-education bit is going to be the thing that ensures we can actually get to all the opportunities that we've been talking about here. I don't know if you'd agree with that at all. I think in order to be able to get to the opportunities that we're describing, where you're creating models that earn people's trust, you need to educate people on what the heck we're even talking about. Like I said, what is the real nature of data? Because interestingly, working with the clients that I do so often, real domain experts who desperately need to be part of the conversations and have a seat at the table, their perception in their mind is: I'm not a machine learning expert, I'm not a data scientist, I don't have a degree, so do I really belong here? That's not really my swim lane. And that's what we've been communicating to people for decades, that they don't belong, when in fact they desperately do. We desperately need to hear their voices at the table. And in addition to those domain experts, again, where you're trying to build a solution in their domain, like I mentioned, we've got to have far more diversity and inclusivity in terms of who's developing these models and the systems of governance around these models.
And by that I don't just mean gender, race and ethnicity, but earnestly people who have different lived world experiences coming to the table to have discussions about this artificial intelligence: is it solving the problem? Is it reflective of the needs of a wider variety of human beings? What are the unintended effects of these models? How do we design this in a way to earn people's trust? And as I mentioned, these aren't strictly technical challenges. Yeah, for sure. Marina, I'm curious how you respond to all this. You're someone who spends a lot of time directly in the research, and I'm sure, again, when I talk about this with some of my friends in the machine learning space, they're like, this is overwhelming. We're just trying to get these models to work, and now you want me to worry about all this other stuff. And I guess I'm kind of curious.
Do you think, in effect, what Phaedra is proposing is that people who do machine learning in the future will look really different from the people who are mostly doing it today? And in part it'll be that they will have to be so strenuously interdisciplinary that I think it might end up looking quite a bit different from what we'd expect at ICML or another technical conference today. I don't know if you'd agree with that. We used to think that only specific people needed the training to learn calculus. And that wasn't because you were going to be doing calculus forever. It was just because you needed to learn what it is and how it shows up and what it means to have a structure and a proof and things of that nature. I'd make a plug to join Phaedra's social studies class. Statistics, early statistics.
Because part of what you really need to do is understand how these models even remotely work. Just an intuition, not the deep math. But that's what's going to help you combine that with your work in linguistics, your work in history, your work in language and all the rest of it. I do find my own slightly more liberal arts background coming up a lot when it comes to trying to talk to people with examples that they can understand. But also, again, intuition from my stats classes comes back time and time again. The explanation of what these generative models do: they're playing guess the next word. Simple things. They might not be completely accurate, but keep it to simple things, don't try to boil the ocean. If everybody has just a little more intuition, then you're going to be more effective. Again, another example: look at cars. None of us understand how they work, but we understand how to drive them, we understand how to regulate them.
We understand in general how we live with them and use them and what the effects are. It'll get to that point. So I'm not worried. I just hope that we're not going to be rushing it. It's going to take a little time for this to become pervasive and become natural and sort of second nature. And to the point about how this is going to take time: again, look at the traditional school systems today, how siloed the approach is, and how hard it is to get these different schools to actually work together on a collaborative curriculum. That, I think, is what's going to be the hardest thing to move.
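[Editor's note: Marina's "guess the next word" description of generative models can be made concrete with a toy sketch. The tiny corpus and function name below are hypothetical, purely for intuition; real models use vastly richer statistics than this bigram counter, but the core game is the same.]

```python
from collections import defaultdict, Counter

# A toy corpus; real models train on far more text than this.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def guess_next(word):
    """Play 'guess the next word': return the most frequent follower."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # "cat" is the word seen most often after "the"
```

The point of the exercise is Marina's: even this level of intuition (the model has seen which words tend to follow which, and guesses accordingly) is enough to reason about what such systems can and cannot do.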
Yeah, just last week I was helping my 10-year-old make a probability wheel, which is a spinner, one of these things. And then I told him that his dad, me, I do probability day in and day out at my job, and I could just see his wheels spinning. What do you mean you spin this wheel, this probability wheel? But it goes to Marina's point about starting those conversations early, and the importance of that type of background and intuition. I'm seeing it play out already in some of these young lives. So yeah, again, just a great comment, Marina, and backing that up with a real-world example from just a week ago. Yeah, that's great. I love that your kid imagines you just sitting in your office with a bunch of wheels, spinning them. Exactly. He couldn't quite get it, but I told him that this is really important and I use this on a daily basis.
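[Editor's note: the probability wheel from Skylar's anecdote is easy to simulate, which is itself a nice early-statistics exercise. The slice labels and weights below are made up for illustration; the point is that observed spin frequencies approach the slice sizes.]

```python
import random

# A wheel with three unequal slices; weights are the fraction of the wheel
# each slice covers (hypothetical values, chosen for illustration).
wheel = {"red": 0.5, "blue": 0.3, "green": 0.2}

def spin(wheel):
    """Spin once: pick a slice with probability proportional to its weight."""
    labels, weights = zip(*wheel.items())
    return random.choices(labels, weights=weights, k=1)[0]

# Spin many times and compare observed frequencies to the slice weights.
spins = 10_000
counts = {label: 0 for label in wheel}
for _ in range(spins):
    counts[spin(wheel)] += 1

for label in wheel:
    print(label, round(counts[label] / spins, 2))
```

With enough spins, the printed frequencies settle near 0.5, 0.3, and 0.2, which is the intuition the spinner is meant to build.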
All right, for our last segment: it's the end of November, and we're starting to think about the new year. I want to go around and ask each of you to tell us your greatest hope for the new year. If you could change one thing, what would that be? And Marina, I think we'll start with you. As much as possible, get teachers up to speed and educated and comfortable and able to own what's going on. They are, after all, the folks who drive how it's really used on the ground, and any way that we can offer support to teachers, to meet them where they are and make this something that's positive in their classrooms. That's a great one.
Skyler, you're next. Doubling down on supporting the teachers, but with their work outside the classroom, their extra work, you know, that sort of stuff. I think those are some areas where the load could be lifted off them to make them so much more impactful and involved at the front of the classroom. So I think AI's got both a role to play helping teachers at the front of the classroom, but also what I guess we'd call back office stuff as well, that could really change the lives and aspirations of teachers. That's a great one. And last but not least, Phaedra. Well, as I mentioned, I want AI in social studies class, and I want it taught much earlier. Like I said, middle school, if not elementary school; you could twist my arm. But then also I would love to see more schools making a concerted, deliberate effort to make more room at the table, pull the seats out, and invite students who don't see themselves as technologists and say: hey, having a conversation about AI, what it means for you, and whether it reflects you is core to you having a seat at this table as a critical consumer of this tech. That's something I would desperately want to see within the coming years.
Phaedra, Marina, Skyler, thanks for joining us, and we'll have to have you back on in 2025 to talk more about this. And thanks to all of you listeners for joining us. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. And we'll see you next week on Mixture of Experts.
ARTIFICIAL INTELLIGENCE, EDUCATION, TECHNOLOGY, AI IN EDUCATION, PERSONALIZED LEARNING, INNOVATION, IBM TECHNOLOGY