ENSPIRING.ai: All The Wild AI News You Missed This Week!
In a recent event, Tesla showcased their advancements in autonomous vehicle technology. Elon Musk highlighted features of their upcoming robo taxis, which will operate without steering wheels or pedals, allowing passengers to use travel time efficiently for activities such as work or entertainment. Additionally, Tesla is developing a Robovan for group transportation and unveiled the Optimus robots. These innovations promise to change how we think about transportation and urban space utilization.
Exciting developments in AI video have also been demonstrated, notably by Meta's new video generator, Meta Movie Gen, which can create realistic visuals and audio. Another AI video tool, Hailuo AI, was introduced, capable of transforming images into videos with accurate depiction and diverse styles. Pyramid Flow's open-source video generator further reinforces the potential for creatives to design videos with ease, tapping into the power of AI for media manipulation and generation.
Key Vocabulary and Common Phrases:
1. autonomous [ɔːˈtɒnəməs] - (adjective) - Having the freedom to govern itself or control its own affairs; in technology, a self-operating machine or vehicle. - Synonyms: (independent, self-governing, self-contained)
Now, the event started off with Elon getting into an autonomous vehicle and driving around the lot.
2. robo taxi ['roʊboʊ ˈtæksi] - (noun) - A self-driving or autonomous taxi that eliminates the need for a human driver. - Synonyms: (driverless taxi, self-driving car, automated vehicle)
The interesting thing about this robo taxi, it has no steering wheel, it has no pedals.
3. monetization [ˌmɒnɪtaɪˈzeɪʃən] - (noun) - The process of earning revenue from a business, product, or service. - Synonyms: (commercialization, exploitation, profit-making)
Like, he didn't actually break down how the monetization would work, but people will buy these cars for roughly $30,000.
4. robotic [roʊˈbɑːtɪk] - (adjective) - Related to or characteristic of robots or automation. - Synonyms: (mechanized, automated, machine-like)
When he was done showing off his Robovan, he brought in the Optimus robots to kind of show where they're at right now.
5. autonomous vehicle [ɔː'tɒnəməs ˈviːəkl] - (noun) - A vehicle capable of sensing its environment and operating without human involvement. - Synonyms: (self-driving car, driverless car, robotic vehicle)
Now, the event started off with Elon getting into an autonomous vehicle and driving around the lot.
6. advancement [ədˈvɑːnsmənt] - (noun) - A forward step or progress in development, quality, or state. - Synonyms: (progress, development, improvement)
However, we got some really cool stuff out of Tesla and their We, Robot event that happened, and there was also a ton of advancements in the world of AI video.
7. visualization [ˌvɪʒuələˈzeɪʃən] - (noun) - The act or process of creating a visual image in one's mind. - Synonyms: (imagery, picture, depiction)
Notice that as this girl is running through the sand, she's actually properly leaving footprints.
8. optimization [ˌɒptɪmɪˈzeɪʃən] - (noun) - The action of making the best or most effective use of a situation or resource. - Synonyms: (enhancement, improvement, refinement)
Also, I can't help with photorealistic images of identifiable people. Apparently, generating faces still doesn't work.
9. integration [ˌɪntɪˈɡreɪʃən] - (noun) - The process of combining or coordinating separate elements to provide a harmonious or interrelated whole. - Synonyms: (combination, amalgamation, unification)
Hailuo AI seems to be pretty good. And again, you get three days free to test it out.
10. algorithm [ˈælɡərˌɪðəm] - (noun) - A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer. - Synonyms: (procedure, formula, protocol)
Hinton, who has a PhD in artificial intelligence, went on to co-create the backpropagation algorithm, a method that allows neural networks to learn from their mistakes, transforming how AI models are trained.
All The Wild AI News You Missed This Week!
Compared to other weeks, it's been a relatively slow week in the world of AI. However, we got some really cool stuff out of Tesla and their We, Robot event that happened, and there was also a ton of advancements in the world of AI video. So let's just dive right in, starting with the Tesla We, Robot event, where Elon showed off some of the things they've been working on over at Tesla. Now, the event started off with Elon getting into an autonomous vehicle and driving around the lot.
Here, there is nobody driving the car at all. That's just Elon riding in the front seat as it drives him over to, I guess, the stage where he's presenting. They have this moment where he drives around the corner, and as he drives around the corner, somebody bicycles right out in front of him. And the car knows to stop, to show that it's actually aware of all the stuff that's happening. He rambled for a little bit and then got into actually showing off the new robo taxi that they're putting out.
The interesting thing about this robo taxi, it has no steering wheel, it has no pedals. You basically just get in, it drives you to your destination, and then you get out at the destination. You can see as people use the car, they're watching videos or doing work or having Zoom calls or watching sports, showing that you're just like in this vehicle, going to your destination, and all of the time to the destination is now freed up time where you can do whatever you want.
They showed off all sorts of potential hazards that the robotaxis are equipped to stop for and recognize as they're driving. Stuff that, if a human driver ran into it, the autonomous vehicles are probably going to handle better. Like this dude going the wrong way in traffic, or this guy just walking across a freeway here. Humans aren't going to handle this as well as these robotaxis with cameras looking 360 degrees around them will.
He also hammered in the point that so much of the world is covered in parking lots, and he can see way fewer parking lots, more green spaces, and more space to be used for parks and things like that, because now we don't need to park the cars. The cars can drop you off and then go and take another passenger somewhere or drive home and park in your garage after it drops you off, making all of this parking lot space in all of these big cities available for other uses.
Now, he did give a few more details about the cars. He said he expects these to be on the road by the end of 2026. He said, before 2027. He also said he expects these robotaxis to sell for about $30,000. Anybody who can afford that $30,000 could own one of these robo taxis, and then they would have a fairly interesting business model on their hands, because you can own one of these taxis for $30,000.
It will take you wherever you need to go, drop you off wherever you need to be, and then if you want, you'll be able to put it into a mode where it becomes like an Uber for you, and it will go around and pick people up and drop them off. And because you own the car, you make a large percentage of the money. Like, he didn't actually break down how the monetization would work, but people will buy these cars for roughly $30,000, use them when they need them, and when they're not actually using them, they'll have the option to put them into, like, a robo taxi mode, where they can be driving around and picking up other people and dropping off other people while they earn money, because their vehicle is out there doing that.
Now, he also showed off the fact that there isn't going to be a charging port on these either. It's basically going to be like wireless charging for your phone, where you go and park this vehicle, like, over a powered parking spot. It charges up the vehicle, and then you can drive it away. So no more plugging in and unplugging a charger.
He also then showed off what he called the "Robovin," which is the Robovan. I don't know why he was pronouncing it that way, but he showed off his Robovan, or "Robovin," which is an autonomous bus, essentially, that can hold up to 20 people inside and take them around. He was giving the example of using it to bring sports teams around, or, you know, use it as, like, a party bus to go from, like, bar to bar or something like that. And you can see it holds up to 20 people.
Now, inside, it looked like it had about 8, 9, 10, 11, 12, 13, like, 14 or 15 seats. But it also looked like another four or five people can stand while in the vehicle, and it can drive people around. No estimate yet on when this is coming. He didn't talk about the range on any of these vehicles. The only thing he did say was that the sort of robo taxi fleet will be coming, like, next year, but using the Model Xs and Model Ys that are already out on the roads, and the Model 3s and the Cybertrucks, pretty much the Tesla fleet of vehicles that are already out in the world.
They'll be able to be used as these robo taxis, I guess, next year sometime. And then the actual robo taxi models that they showed at this event are supposedly going to be ready in 2026, he said, before 2027. And when he was done showing off his Robovan, he brought in the Optimus robots to kind of show where they're at right now. Now, he didn't really get into any, like, new details about these. All he did say was that they're going to eventually cost less than a car.
All we really got to see them do during this event was a video of them watering plants, playing like a board game, washing a counter, serving drinks, helping get groceries out of the back of a car, and, of course, something else we've seen quite a bit, which is watching them dance, because, yeah, they can. They can do that as well. They can dance. Overall, it was a pretty cool event. Elon kept calling it a party, and it feels like they put on a giant party to unveil these robo taxis and these robo vans and what the Optimus robots can do.
But it was also very short. They didn't really go into too much detail around technical specs or anything like that. We got a little bit of a tease of how much they'll cost. We know the robotaxis will be about $30,000. We know that Optimus robots will be less than a car, and we heard that the fully autonomous cars will be available sometime before 2027.
Obviously, Elon's always optimistic on his timeframes, so we'll see if that actually happens or not. Again, I just love nerding out about this kind of stuff. Robots and autonomous vehicles, and where all this is headed in the future, is just so fun and fascinating to me. This was probably the biggest event of the week, the biggest, like, thing in the AI news world.
Meta just showed off a new video generator that they've been working on called Meta Movie Gen. We can see some likely cherry-picked examples of what it's capable of here, but they all look pretty good. I mean, they look on a similar level to the demos we've seen from Sora out there, but this does seem to have a few cool features that we haven't really seen in the other video platforms yet.
So taking a look at the official page over on Meta's website here again, we can see some other examples. Notice that as this girl is running through the sand, she's actually properly leaving footprints. This woman here looks really realistic. But check this out. There's actually a headshot here on the right. So it was actually able to import somebody's real face and make a video of that person.
We can see here it says, "Our latest research breakthroughs demonstrate how you can use simple text inputs to produce custom videos and sounds, edit existing videos, or transform your personal image into a unique video." So this is an actual person being transferred into a video. Here's another example: "Thunder cracks loudly with an orchestral music track." That's the other thing. It will actually create audio to go with these videos.
Let's go ahead and unmute this here. We can hear some background sound, we can hear the thunder, some background music. All of that was created with this AI video generator. Sora wasn't generating audio to go with the videos. This actually is. Here's an example of it being edited. We can see the text input here is transform the lantern into a bubble that soars into the air. And so the original video here has this lantern floating in the air.
The edited video, it changed that lantern into a bubble. So not only can you create videos, you can actually import your own face into them. You can have it figure out audio, sound effects and background music, and you can even use it to, like, edit videos and change what's going on in the video. Similar to Sora, Meta showed this off and said, look at this cool research we've done.
Look at what we're able to do now. But then didn't give anybody access. So none of us have the option to go and play around with it yet. But this looks like a pretty dang powerful and exciting new model that generates really, really good videos, but again, also adds music, sound effects, imports your own face, and lets you edit. Some really, really cool stuff with this Meta Movie Gen. I'll make sure it's linked up in the description so you can check it out yourself.
But that's not all we got in the world of AI video. This company, Hailuo AI (I have no idea how to pronounce that), just launched an image-to-video feature. So what distinguishes this image-to-video experience?
Well, according to this tweet here: "Text and image joint instruction following. Hailuo seamlessly integrates both text and image command inputs, enhancing your visuals while precisely adhering to your prompts." Apparently it's much more accurate and can manipulate objects within your images, and it's got a bunch of diverse styles. You can find this one over at hailuoai.video. Again, don't know how to pronounce this one. When you log into this site for the first time, it does give you a three-day free trial.
You can have up to three tasks in queue, and you get bonus credits if you log in daily. But it looks like typically it's going to run you about $10 per month, which gets you 1000 credits per month. And then, you know, you have that whole confusion of how many credits is something going to cost?
And not my favorite style of payment plan, but you can try it for three days for free. Let's double check and see how long it actually takes to create a video. I'll start with this image here that I use for a YouTube thumbnail. And let's say an interactive map moves and glows behind a man wearing glasses.
And then I can turn this on to refine the prompt. Let's go ahead and make sure the prompt is refined. And let's see how long this takes. And before we even get going here, it says there are still 2,676 people ahead. Expect to wait for 15 minutes. And here's the video that it generated. So let's go ahead and click on this and see if we can get a bigger screen of it. This is actually pretty impressive.
We can see some animation going on in the background. It looks like he's touching the screen. You see another hand pop up. The hand is just totally jacked up, but it looks like a pretty decent looking little cartoon animation. So it's using this MiniMax model, and Hailuo AI seems to be pretty good. And again, you get three days free to test it out. And you can click around on the explore tab, see some of the other videos that people have generated, and honestly, most of them look pretty dang impressive. I think this one really holds up with a lot of the other ones. And again, you get a free three-day trial. So go mess around with it and see if it's for you.
That wasn't the only AI video news we got this week. In fact, there's a new generator in town called Pyramid Flow, and it's open source. This is the first AI video generator that we've seen of this caliber that's actually open source, meaning that if you've got a strong enough computer, you can download all of the code and run it locally without even being connected to the Internet.
Or you could run it on cloud servers. They even have it available over on Hugging Face, where you can generate videos right there, and none of that information about what you're generating gets saved. Some of the people that were involved in making this generator are actually some of the same people that were behind the Kling AI generator. Here's some of the examples they showed of what this is capable of. Here's some people, you know, walking in the snow under, like, cherry blossom trees.
Here's a black and white video of a boat in front of a sort of weird looking Eiffel Tower. Some waves crashing against some cliff sides. That looks pretty solid. And all of these videos are pretty decent looking. They can generate clips up to 10 seconds long. And again, it's open source, meaning that other developers and people that want to fine-tune these models are going to get their hands on it, and things are going to start to get really, really wild here as people make this open-source model more and more capable and train different weights on it, so we can start to do different things.
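If you're curious what "running it yourself" actually involves, here's a minimal sketch of pulling the open-source weights down from Hugging Face using the huggingface_hub library. The repo id shown is an assumption, not something confirmed in the video, so check Pyramid Flow's official GitHub or Hugging Face pages for the real one, and use the project's own inference script to actually turn a prompt into video.

```python
# Minimal sketch (not the official Pyramid Flow instructions): download the
# open-source weights from Hugging Face so they can be run on your own machine
# or a cloud GPU, with nothing you generate being stored by a third party.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="rain1011/pyramid-flow-sd3",   # hypothetical repo id -- verify on the project's page
    local_dir="./pyramid-flow-weights",    # where the model files will be stored locally
)
print(f"Pyramid Flow weights downloaded to: {local_dir}")

# From here, the project's own inference script (see its README) loads these
# weights and generates a short video clip from a text prompt locally.
```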
The fact that we have a video generator that's open source, but also this caliber of output, is really, really exciting, quite honestly.
If you're a user of Zoom, you're going to be able to let your AI avatar talk for you pretty soon. This is kind of crazy. You'll soon be able to create a custom AI avatar of yourself that you can use to record and send short messages to people.
Zoom is getting one step closer to letting AI avatars attend meetings for you. Zoom announced it will soon let you create an AI avatar of yourself that you can use to send brief messages to your team. You'll need to record an initial video of yourself, and then Zoom's AI will use that to make an avatar that looks and sounds like you. So similar to what we've seen from HeyGen and Synthesia and some of these other platforms.
And it doesn't look like your AI avatar is going to be able to, like, sit in on the meeting for you yet. Like, it's more designed to just be used to send messages to people as you right now. But they're implying here that eventually it's going to be a bunch of AI avatars on the meeting. Like, that's going to be interesting. Could I just send my AI avatar to jump on, like, Zoom team meetings? The AI will just, like, fill me in later on everything that was talked about.
What happens if everybody's an AI avatar? Does anything actually get done in these meetings? So many questions. Speaking of avatars, HeyGen rolled out a new feature late last week called Avatar Looks. We can see in their demo video you can create multiple avatars and create multiple looks of the same character that you created in HeyGen.
So here's another example of somebody here talking on screen. And as I scroll through this video, you can see it's the same person, but different angles and different looks of that avatar. And since we're on the topic of HeyGen, might as well mention this. HeyGen and HubSpot have actually partnered up. If you make a blog post using your HubSpot account, you can automatically have that blog post turned into an AI-generated video of a talking avatar explaining what's going on in that blog post.
We can see the title of this blog post: "HeyGen at HubSpot Inbound, the future of AI video and content generation." And then this is the video that it automatically generated. "Video is becoming the dominant medium for digital engagement and storytelling." Well, there's a clip of the video that it created, but you get the idea.
You make a blog post, it automatically sends that blog post to HeyGen. You log into HeyGen and you now have a video version of somebody explaining what was going on in the blog post. Just a handy little workflow hack for people who like repurposing content. Moving over to the world of AI image generation...
Innovation, Technology, Artificial Intelligence, Tesla, Autonomous Vehicles, AI Video, Matt Wolfe