ENSPIRING.ai: Innovations in AI: OpenAI Restructures and Voice Activation Advances
The video discusses major developments in AI, focusing on OpenAI's new advanced voice mode for ChatGPT users, offering a demonstration and discussing its benefits and limitations. It also examines speculation about OpenAI's restructuring plans and their implications, as the company potentially positions itself as a for-profit entity to attract investors amid the recent departure of key figures.
Furthermore, the video covers collaborations and new technological advancements announced during the Meta Connect 2024 event, showcasing novel AI features, devices, and the growing presence of artificial intelligence in communication platforms like Facebook and Instagram. The discussion also includes James Cameron's unexpected partnership with Stability AI and other AI-related Legislative concerns.
Please remember to turn on the CC button to view the subtitles.
Key Vocabularies and Common Phrases:
1. Duet [djuːˈɛt] - (n.) - A musical composition for two performers.
For instance, this guy here managed to actually get it to do a Duet with him for the song Eleanor Rigby by the Beatles...
2. Restructure [ˌriːˈstrʌktʃə] - (v.) - Organize differently.
...OpenAI is working on a plan to Restructure its core business into a for-profit benefit corporation...
3. Equity [ˈɛkwɪti] - (n.) - The value of shares issued by a company.
...give Sam Altman Equity in the company...
4. Nonprofit [ˈnɒnˌprɒfɪt] - (adj.) - Not operating for profit; describing an organization dedicated to furthering a social cause rather than making money.
...Nonprofit board will continue to exist...
5. Amicable [ˈæmɪkəbl] - (adj.) - Characterized by friendly goodwill.
...leave OpenAI and everything seems to be Amicable...
6. Strategic [strəˈtiːdʒɪk] - (adj.) - Relating to the identification of long-term or overall aims.
...the timing was a little bit Strategic...
7. Multimodal [ˌmʌltiˈmoʊdl] - (adj.) - Involving several different modes or methods.
...the new meta AI is now Multimodal...
8. Lip syncing [lɪp ˈsɪŋkɪŋ] - (n.) - Moving the lips in synchronization with recorded speech or music.
...AI translation and Lip syncing...
9. Augmented reality [ɔːɡˈmɛntɪd rɪˈæləti] - (n.) - A technology that superimposes a computer-generated image on a user's view of the real world.
...their Orion project, which is their Augmented reality glasses...
10. Legislative [ˈlɛdʒɪslətɪv] - (adj.) - Relating to the process of making or enacting laws.
...AI-related Legislative concerns...
Innovations in AI: OpenAI Restructures and Voice Activation Advances
One of the biggest stories of the week was that OpenAI finally rolled out their advanced voice mode to pretty much everybody that has a Plus or Teams account for ChatGPT. This is one of the things I made a full demo video about, testing various things with the AI voice assistant, and we learned that if you didn't seem to get access to the advanced voice assistant on your phone, the simple solution was to remove the app completely, go back into the app store, redownload it, and set it up once again. That seemed to work for most people. It worked for me, and it seemed like it worked for most people in the comments as well. Once you have access to it, you simply press that little circle with the little lines on it. "Hey Matt, what's up? How can I help you today?"
And it looks like it'll start working. "Hey, I'm just doing a quick live demo of the advanced voice feature." "Nice. Advanced voice features can add a whole new dimension to your content. How's the demo going so far?" "So far so good." "Awesome to hear. Demos can be a bit unpredictable, so it's great when things run smoothly." Since this has been rolled out, and since I made my demo video walking through it, there have been a few interesting things pop up around it. For instance, this guy here managed to actually get it to do a Duet with him for the song Eleanor Rigby by the Beatles: "Eleanor Rigby picks up the rice in the church where a wedding has been, lives in a dream. Waits at the window, wearing the face that she keeps in a jar by the door." Super interesting. You could hear for a second it started to say "my guidelines don't let me do that," but then it continued to sort of sing along.
Anyway, we also learned that there are some rate limits. I'm actually not 100% sure where these rate limits lie. Someone on X shared this screenshot saying that you have twelve minutes remaining of advanced voice mode, and if you reach the limit, you can switch to standard voice. Looking at the FAQ over on OpenAI's website, under "For how long can I have voice chats?" we can see: "Your daily use of advanced voice for Plus and Team users is subject to a limit each day, and daily limits may change. We provide a notice as you are approaching the daily limit. Plus and Team users will be notified when they have 15 minutes left of advanced voice for the day." So apparently it's a bit of a moving target, and they're not telling us what the limit is. It's kind of a changing limit every day.
I don't totally know how this works yet. Again, I did a much deeper dive on this new advanced voice feature the day it came out; it looks like this here. But we did get some other news out of OpenAI this week. We learned that OpenAI is going to remove the Nonprofit control and give Sam Altman Equity in the company. According to this report, OpenAI is working on a plan to Restructure its core business into a for-profit benefit corporation that will no longer be controlled by its Nonprofit board. They're trying to make the company more attractive to investors. The Nonprofit will continue to exist, but own a minority stake in the for-profit company. The rumor is that Sam Altman himself will receive around 7% of this new for-profit entity, which is expected to be valued at about $150 billion. That would make Sam Altman's stake roughly $10.5 billion.
Now, this is still just rumor. There hasn't been any confirmation that I've seen come out of OpenAI yet, but there have been some other interesting things that happened at OpenAI this week. For instance, the CTO, Mira Murati, one of the people who really stood up for Sam Altman back when he was booted from the company almost a year ago now, has decided to step away. She says: "After much reflection, I have made the difficult decision to leave OpenAI. My six and a half years with the OpenAI team have been an extraordinary privilege. While I'll express my gratitude to many individuals in the coming days, I want to start by thanking Sam and Greg for their trust in me to lead the technical organization and for their support throughout the years."
Everything seems to be Amicable. Mira's not making any statements about leaving for safety reasons or anything like that. It sounds like she just wants to move on to something else. You'll see on X all sorts of conspiracy theories and people trying to figure out explanations for why she's leaving. But quite honestly, in some of these cases, it might just be that they put in a ton of work over the last six-plus years. They got thrust into the spotlight because OpenAI became such a massive company and ChatGPT was such a big success, and they just don't want to be in the spotlight anymore.
Smoke-away here points out on X: Ilya announced his departure the day after the GPT-4o presentation. Mira announced her departure the day after the ChatGPT voice release. I do think the timing was a little bit Strategic. Mira probably knew for a while that she'd be leaving, but didn't want to make waves before a big announcement, so she waited until after it, and I'm sure it was the same kind of thing with Ilya.
Personally, I'm not buying into a lot of the conspiracy theories around OpenAI and why all these people are leaving. Again, I just think the company grew really, really big, really fast. A lot of these people were thrust into the spotlight really quickly. Mira has been on all sorts of TV shows and has essentially become famous because of this, and that could quite honestly burn anybody out pretty quickly; I think that probably has a lot to do with it. It doesn't seem like there's bad blood or that she's scared of what OpenAI is creating or anything like that. That's not what I'm getting out of this. A lot of people on X and other YouTube videos will probably try to convince you that that is the case, but I'm not seeing it.
Mira wasn't the only one who left OpenAI this week. Their chief research officer also left right after she did. OpenAI's chief research officer, Bob McGrew, and research VP Barret Zoph left the company on Wednesday, hours after CTO Mira Murati announced she would be departing. "Mira, Bob and Barret made the decisions independently of each other and amicably," he said, "but the timing of Mira's decision was such that it made sense to now do this all at once, so that we can work together for a smooth handover to the next generation of leadership." The way they're angling this is: well, Mira was leaving, so there was going to be a bit of a shake-up in leadership anyway; might as well all leave at the same time so the reorganization can happen all at once, instead of one person leaving, reorganizing, another person leaving, reorganizing again, and so on. They seem to have coordinated it to make things easier on OpenAI. I don't know; that's the way they're angling it.
Also this week, Sam Altman made a rare personal blog post over on his samaltman.com website, called The Intelligence Age. He says: "In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents. This phenomenon is not new, but it will be newly accelerated. People have become dramatically more capable over time; we can already accomplish things now that our predecessors would have believed to be impossible." He goes on to talk about where he believes all of this is headed.
"Eventually, we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need. We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more." This is the part that I think is probably the most interesting and, well, interestingly worded. He says: "It is possible that we will have superintelligence in a few thousand days." Wording it like that is interesting because it makes it sound like it's fairly close.
But a few thousand days could be anywhere from three years from now to two decades from now. It's an interesting read, and I highly recommend it if you want to understand where Sam Altman, the CEO and the person running OpenAI, believes all of this is headed. And finally, in the last bit of OpenAI news for this week, Jony Ive confirmed that he's working on a new device with OpenAI. Jony Ive, if you're not familiar, is a famous designer who worked at Apple and helped design some of Apple's most iconic products, like the iPod, the iPhone, the iPad, and the Apple Watch. Most of those were designed by Jony.
We don't know much about what this new device he's teaming up with OpenAI on is. All we know is that he has given some sort of confirmation that there is something in the works. So that's something to look forward to. Hopefully it's not another Rabbit R1 or Humane AI Pin, where it seems kind of cool in theory, but in practice it's just not something that most people are interested in using. But given his reputation with the products he helped design at Apple, I think he's going to fare a little bit better than some of those products.
Moving on to the next big, massive, monumental thing that happened in the AI world this week: Meta Connect 2024. I was actually at this event, and once again I made an entire video breakdown of all of the announcements they made, some of my thoughts around them, and a little behind the scenes of my experience at the event. Therefore I'm not going to go too deep into all of the announcements, because I have a whole breakdown video that looks like this right here that you could watch. But here's the quick, rapid-fire overview if you just want the TL;DW: they introduced the new Meta Quest 3S, which is very similar to the Meta Quest 3 but less expensive, starting at just $299. It's going to come with a new Batman game that's super fun. I've played it myself. I already have a Meta Quest 3, but I will be buying that Batman game because the demo hooked me. I want to play more of it.
They also announced a ton of new AI features and functionalities that are going to be rolling into Facebook Messenger, Instagram messaging, WhatsApp, and, you know, all of the Meta suite of tools. They announced a new Meta AI voice feature. I actually think that maybe OpenAI knew Meta was going to announce this voice feature and wanted to sort of front-run it with their advanced voice announcement, because that came out the day before. Interestingly enough, if you want to talk with Meta's AI, they actually have some celebrity voices that they got permission to use: voices like Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key, and Kristen Bell.
I find the Kristen Bell one absolutely fascinating, because Kristen Bell has actually spoken out quite a bit about AI. At one point, she made an Instagram post saying she opposed Meta using her data for AI, but now she's one of the chatbot's official voices. She put up this whole statement: "I own the copyright to all images and posts submitted to my Instagram profile, and therefore do not consent to Meta or other companies using them to train generative AI platforms. This includes all future and past posts, stories, threads on my profile." One thing to note is that just putting this message on your Instagram does not exclude you from anything in the terms and conditions you agreed to when you signed up. Meta's not watching your posts to see whether you actually consented or not.
I'm actually kind of a fan of Kristen Bell. My wife and I used to watch Veronica Mars, and I think The Good Place is one of my favorite TV shows ever. But this whole 180 that she pulled is kind of fascinating: being sort of anti-AI at Meta and then flipping the script and becoming one of the voices. And it makes sense, to be honest. Celebrities don't want their likeness and their content used without compensation, and I'm sure this new deal with Meta got her compensated quite well, if I had to guess.
The new Meta AI is now Multimodal, so you can actually upload images and it can understand what's happening in them. You can even edit your images with text. We can see in this example, they uploaded an image of a cake and asked how to make it, and it gave them instructions and a recipe. Here's an example where they uploaded an image of a goat and prompted it to add a hat that says "goat," and we can see it put a hat on the goat with the word "goat" on it. They asked to put it on a surfboard, and it put the goat on a surfboard. So this new Multimodal functionality is gonna let you have a little bit more fun with the images you throw into one of their messaging platforms.
One of the more useful features I think they're rolling out is the AI translation and Lip syncing. I can make an Instagram reel completely in English, upload it, and then have it translated into Spanish or Japanese or whatever languages I want, and it will recreate that same reel with me speaking the target language, properly translated. And if I'm on camera speaking, it'll actually sync up my lips so it looks like I'm speaking that language. That seems really useful for getting a lot more reach on your Instagram reels and things like that.
They also showed off a new creator AI feature where you can create a sort of virtual version of yourself, trained on your Instagram, Threads, and Facebook content, so that it can speak like you and answer questions the way you likely would. And in this really cool demo, a buddy of mine was actually the example they showed off. This is Mark Zuckerberg talking to the AI version of Don Allen Stevenson: "Congrats on the new book that you just released."
"You know, what's the main thing that you're hoping people take away from it?" "Thank you so much. Yeah, the main thing I want people to take away from my book is the idea that you have the power to create your own opportunities by combining curiosity, adaptability and resilience in a rapidly evolving digital world." They also rolled out a new version of Llama, Llama 3.2, their open source large language model. It is now Multimodal, and it's available to play around with for free right now on Hugging Face: you can upload images, type in text, and pretty much use it the same way they were demoing it inside the various Meta platforms. If you'd rather run it yourself, there's a rough sketch of that below.
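This is not Meta's official demo, just a minimal sketch of prompting the Llama 3.2 vision model locally with Hugging Face transformers. It assumes you've accepted Meta's license for the gated meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint and have enough GPU memory; the image URL is a placeholder.

```python
# A minimal sketch, assuming gated-model access on Hugging Face and
# `pip install transformers accelerate pillow`. The image URL below
# is a placeholder, not a real asset.
import requests
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Build a chat turn containing one image plus a text question.
image = Image.open(requests.get("https://example.com/goat.jpg", stream=True).raw)
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe what's happening in this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=60)
print(processor.decode(output[0], skip_special_tokens=True))
```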
They also showed off some new features for the Ray-Ban Meta glasses, glasses I actually use as my daily wear. I love mine. They added a bunch of quality-of-life features. You can start talking to the glasses by saying "Hey Meta," but you don't need to say "Hey Meta" every time you want to prompt them after that; once the AI conversation kicks off, you just keep talking to them normally. You can also tell your glasses to play music from places like Spotify or Apple Music, or audiobooks from places like Audible, and it will just start playing them through the speakers built into the glasses. But in my opinion, the most useful feature they're adding is memory. You can say, "Hey Meta, remind me in ten minutes to do this," and your glasses will remind you in ten minutes to do the thing.
Or, in the example they showed, they said, "Hey Meta, remember where I parked." It looked at the parking spot and took a picture of the number on it, so that later, when they were looking for their car, it could remind them which spot they were in. It's also going to have live translation, so if somebody speaks to me in Spanish while I'm wearing the glasses, it can translate directly into my ears in English. It can scan QR codes and automatically open whatever it scanned on your mobile phone; all you have to do is look at the code with your glasses. A lot of really cool features.
They also rolled out a new clear version where you can see all the electronics inside the glasses. I actually got my hands on a pair; they're pretty cool looking, and honestly, I think these Meta glasses are only going to get more popular with all these new features they just rolled in. The biggest announcement at the event was their Orion project, their Augmented reality glasses that just kind of look like normal glasses. I mean, they're still a bit bigger and bulkier than normal glasses like the Meta Ray-Bans, but they're a heck of a lot smaller than a Meta Quest or an Apple Vision Pro.
They look like glasses, and they work almost like an Apple Vision Pro: you can use hand gestures, put up videos in front of you, move things around, and play games, all in Augmented reality. They're pretty dang mind-blowing. Everybody was pretty blown away when Meta was showing these off at the demo during Meta Connect. You can see examples of having phone calls in the glasses while a browser and a messenger are open on the side. They can look at ingredients on a table and give you a recipe, straight into your eyes, based on what's there. A super, super exciting project. And once again, I did an entire 25-minute breakdown of everything they talked about at Meta Connect; you can see that video here.
It looks like this, and it's a super deep dive into all of the new announcements. But again, because I don't want this video to be an hour long, I'm going to keep moving with the rest of the news from this week, starting with something that I think shocked almost everybody: the fact that James Cameron, you know, the guy behind The Terminator, Terminator 2, Avatar, Titanic, and some of the biggest movies ever made, has signed on as a board member of Stability AI. He is such a huge figure in the filmmaking world that it seems a little bit shocking that he is joining forces with an AI company.
As you know, most of Hollywood is actively trying to fight against AI right now. Hopefully, big names like his can help legitimize the use of AI and this emerging technology in Hollywood. And I'm super excited to see what kind of technology comes out of the team-up of James Cameron and Stability AI, because James Cameron essentially invented new technology to make a lot of his movies. For Avatar, they had to create whole new cameras and new systems just to make those films. With him pulling AI into the mix, I can only imagine we're going to see some really crazy filmmaking capabilities become more and more accessible to normal people who don't have the kinds of budgets that someone like James Cameron might have.
But while we're talking about Hollywood, I've got to mention this real quick. We've got only a couple of days left for Gavin Newsom to decide whether he wants to sign or veto the SB 1047 bill. This is the bill that would hold model makers responsible for any catastrophic harms done with their models, even if the model maker wasn't specifically involved in that harm. It seems like Hollywood is speaking up and telling him not to veto it: you've got to pass this bill. He recently signed a whole bunch of bills that really help out Hollywood, bills that protect actors' voices and likenesses from being used in films without their consent.
But now they're getting behind a bill that doesn't really involve them too much. It really puts Gavin Newsom in a tough place, because the two most powerful industries in California are the tech industry up in San Francisco and the film industry down in LA, and right now those two industries are at odds with each other. The tech industry does not want SB 1047 to pass; the film industry does. Both have huge lobbying power in the government, and it puts someone like Newsom in a tough spot: which industry do I piss off, and which industry do I work closer with? Yeah,
I don't want to be the one making that decision right now. We got some new updates out of Google this week as well. There are updated Gemini models, reduced 1.5 Pro pricing, increased rate limits, and a bunch more updates to the Gemini suite of models. Here we can see the price reduction for using Gemini 1.5 Pro. This is specifically for the API, so if you're a developer, you're going to be able to use Gemini for cheaper than you used to. A rough sketch of what that looks like in code is below.
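This isn't from Google's announcement, just a minimal sketch of calling Gemini 1.5 Pro (the model affected by the price cut) through the google-generativeai Python SDK, assuming you have a key in a GOOGLE_API_KEY environment variable:

```python
# A minimal sketch, assuming `pip install google-generativeai`
# and a GOOGLE_API_KEY environment variable.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

# Each call like this is billed at the newly reduced API rates.
response = model.generate_content("Summarize this week's AI news in one sentence.")
print(response.text)
```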
But in my opinion, the coolest thing to come out of Google this week is that they made some updates to their NotebookLM platform. If you're not familiar with NotebookLM, it's a platform where you can throw a bunch of documents or text files into a sort of folder, and then it will help you summarize them. You can chat with them. It will even create audio podcasts explaining what's going on in those documents. Well, now they just added new features where you can add audio and YouTube videos into your folder and have it summarize those and create podcasts around them as well. So for example, I could come into NotebookLM here, create a new notebook, and we have the option to add a YouTube link down here.
We can also upload PDFs, text, Markdown, and audio like MP3s, and have it use that as the context for discussion, summarization, and podcasts. So if I grab the link for my latest Meta Connect video, plug it in as a YouTube link, and click insert, you can see it quickly gave me a summary: "The YouTube video by Matt Wolf, titled 'Meta Connect blew my mind. Here's everything they shared,' is a summary of the Meta Connect conference," etc., etc. But I can generate a deep dive conversation here, and in just a moment it will give me an audio version. I can also create a study guide based on the video, create a timeline based on the video, or create an FAQ based on the video.
Like, here's a Meta Connect event timeline that it just generated for me: Meta Quest 3S, Meta AI updates, Llama 3.2, AI voice mode, the AI clone feature, AI translation; pretty much everything I just got done talking about a moment ago. Cast of characters: Mark Zuckerberg, Don Allen Stevenson, Roberto Nickson, Cleo Abram, Kane Calloway, Rowan Cheung, Riley Brown, Linus Ekenstam, Daniel Mac: all these people that I mentioned, and that I actually got to meet at this event. It put together a cast of characters of everyone I mentioned. Here's the FAQ: "What is Meta Quest 3S and how is it different from Meta Quest 3?" And it answers that question. "What AI advancements were announced for Meta's chat apps," et cetera.
It created a whole FAQ based on my video. It created a study guide, and now it has created an audio podcast based on my video: "All right, so we just got done diving into Matt Wolf's breakdown of Meta Connect 2024. And I gotta say, this wasn't your typical, like, 'oh, here's the new phone, here's the new whatever,' right? It's like Meta looked at the tech landscape and decided, you know what? We're going all in on AI, on everything."
It's kind of crazy because it sounds fairly natural. It doesn't sound like an automated AI voice that you're going to quickly tune out; it sounds like a real discussion between two people talking about my YouTube video. Kind of interesting. This, to me, is one of the most useful things Google has done with the AI technology they have. It's more useful to me than the Gemini Advanced chat. I'd much rather put in information that I really want to deep dive on, then chat with just that information and get a podcast about just that information. This is really cool.
Steven Johnson here, who actually works over at Google, suggested this way for students to use the technology. He says: one, record audio from class on your phone so you can keep your laptop closed, and just jot down some short phrases to describe the most important points; two, upload the audio and a PDF scan of your notes to NotebookLM; three, ask Notebook to expand your notes with details from the recording. So you can take handwritten notes, scan them, throw them into NotebookLM, and it will use those for context as well. Bonus:
At the end of the week, create an audio overview from all of your class summaries to review the most important concepts in podcast format. And once you've got your audio overview, you can even change the playback speed and listen at 2x, or download it and send it to other people. It's just really, really useful. I'm probably going to make a whole separate video just talking about NotebookLM.
My only fear is that people will start getting podcast versions of my YouTube videos instead of actually watching my YouTube videos, which sort of disincentivizes me from making them. I think it's really cool, but I'm conflicted about the implications of it, if I'm being totally honest. All right, that was the majority of the major news, but there's a handful of other things I want to share that I thought were interesting. So here's kind of a rapid fire of some of the other stuff that was talked about this week. Since we were talking about Google: Snapchat is actually going to use Google Gemini to power its chatbot and generative AI features.
Snap entered into an expanded partnership with Google Cloud to power generative AI experiences within Snapchat's My AI chatbot. It's going to leverage the Multimodal capabilities of Google's Gemini AI to enable the chatbot to understand different types of information, like text, audio, images, and videos. They've also recently added Google Lens-like features; well, it turns out that that technology is also being powered by, well, Google. Microsoft claims that it has a new AI safety tool that can pretty much eliminate hallucinations. Essentially, when a chatbot gives back a response, it will double-check and make sure there's actually a source for that information. The new feature is called Correction, and it gives their AI systems the capability to automatically detect and rewrite incorrect content in AI outputs. It's currently available in preview as part of Azure AI Studio.
AMD, the chip company that's a competitor to Nvidia, just rolled out their first small language model, called AMD-135M. There's not a whole lot of information here about what this model is actually designed for. Given the size of the model, I'm guessing it's meant for on-device inference, maybe for mobile phones? I'm not sure; they don't really go into detail about the main use case. A sketch of trying it yourself is below.
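If you want to poke at it yourself, here's a minimal sketch using standard transformers text generation, assuming the checkpoint AMD published on Hugging Face under amd/AMD-Llama-135m. A model this small should run fine on CPU, which fits the on-device guess above.

```python
# A minimal sketch, assuming AMD's checkpoint is amd/AMD-Llama-135m
# on Hugging Face. At 135M parameters it runs comfortably on CPU.
from transformers import pipeline

generator = pipeline("text-generation", model="amd/AMD-Llama-135m")
result = generator("Small language models are useful because", max_new_tokens=40)
print(result[0]["generated_text"])
```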
If you use Suno to make AI-generated music, they just added a new cropping feature for Pro and Premier users, so you can adjust the start and end of a song. I'll link up the tweet from Suno in the description, so if you want the little five-step tutorial, you can find it in the link below. Cloudflare is rolling out a new AI audit tool to help content creators block bots if they want. If you're not familiar with Cloudflare, it's a tool that lives between your domain name and your hosting company if you run a website.
So when somebody goes to the domain name, the data comes from the hosting company, routes through Cloudflare, and then shows up in the browser at the domain name they plugged in. Well, if you're a user of Cloudflare, they're going to give you some features that allow you to block AI scraping if some of the big companies are out there trying to scrape your website. Duolingo, the company that helps you learn other languages, is launching an AI-powered adventure mini-game and a video call feature. So imagine you're learning a new language. You're trying to learn Japanese on Duolingo, and you want to practice having a conversation with somebody. You can actually have a conversation with an AI bot via a video call with Lily here.
It's designed to simulate natural dialogue and provide a personalized, interactive practice environment. There's also this Adventures feature. It's basically a game where you walk around a simulated environment and interact with characters, in the language you're trying to learn, so it's designed to create a more immersive experience. Imagine you're playing a top-down Zelda-like or Stardew Valley-type game where you're going around having conversations with different characters in this world, but you're doing it in the language you're trying to learn, just to make the learning experience a lot more fun.
Quite honestly, I can see my kids absolutely loving something like this. They're both trying to learn Spanish. I might put this in their hands and let them play it a little bit and see how it goes. This week, the FTC announced that they're cracking down on deceptive AI claims and schemes. The article specifically singles out a handful of companies, like DoNotPay, a company claiming to sell AI lawyer services, as well as Ascend Ecom, Ecommerce Empire Builders, Rytr, and FBA Machine.
Basically, multiple companies claimed they could use AI to help consumers make money through online storefronts, and the FTC feels a lot of these claims are misleading. DoNotPay claimed to be the world's first robot lawyer, but the product failed to live up to its lofty claims that the service could substitute for the expertise of an actual human lawyer. The site also claimed it offered a service that would check a small business website for hundreds of federal and state law violations based solely on the consumer's email address, detecting legal violations that, if unaddressed, would potentially cost a small business $125,000 in legal fees. But according to the complaint, this service was also not very effective.
There's a whole bunch of other cases like this: Ascend Ecom, Ecommerce Empire Builders, the Rytr product, FBA Machine. All of these tools claimed that AI could help you build a company that would make you money, and none of them really came through on their promises. And the FTC is saying: no more of that. And finally, this is pretty cool.
Google DeepMind has a program called AlphaChip that is transforming computer chip design. It's basically an AI model designed to help create new chips that are better at training AI models. So it creates this loop: AI helps design a better chip, which makes AI better and smarter; those new chips get used for AI, which can then be used to design chips that are even better, faster, and more efficient; and so on, ideally creating an exponential curve of compute capability to train smarter and smarter models.
And that's called AlphaChip. It's out of Google DeepMind. Everything I mentioned in today's video will be linked up in the description below: all of the tweets, all of the articles. It should all be down there.
Really cool stuff. Really exciting week. I am absolutely exhausted. I just spent the last month on the road; you probably noticed a lot of my videos weren't in my home studio.
I'm finally back home again. I did VidSummit. I was at Disneyland with the family. I went and spoke at HubSpot's INBOUND with Nathan Lands. I was just at Meta Connect.
I'm exhausted. There was a ton of stuff happening this month, but I am excited to finally be back in the studio, back into a routine, ramping up my video production. I'm going to try to get back into that habit of making three-plus videos a week, talking about all the coolest AI news, sharing some cool tutorials, sharing my favorite tools and how I use them. So much exciting, fun stuff that I'm going to be putting out on this channel. If you like that kind of stuff, give this video a thumbs up and maybe consider subscribing to this channel.
It will really help me out, and it will also make sure you see more stuff like this in your YouTube feed. One last thing before I wrap up: I'm going to be helping judge a hackathon in LA on October 12th and 13th in Santa Monica, along with some other amazing judges. This hackathon is really cool because it's an AI hackathon: whether you have developer experience or no developer experience but just use AI to help you code, you can participate. If that interests you and you're going to be in the LA area in mid-October, make sure you apply for the hackathon. You can find it over at hack.cerebralbeach.com. It should be a really good time. And finally, if you haven't already, make sure you check out Future Tools, where I curate all the coolest AI tools that I come across. I keep the AI news page up to date on pretty much a daily basis (a little bit slower when I've been traveling, but it's up to date now), and I have a free AI newsletter that will deliver the coolest tools and the most important news directly to your email inbox. You can find it all for free over at futuretools.io. Thank you so much for tuning in. I really, really appreciate you. Thanks for nerding out with me. I'll see you in the next video. Bye.
Innovation, Technology, Science, OpenAI, Advanced Voice Features, Meta Connect