OpenAI just announced their new Text-To-Video model called Sora.
Look at these insane examples:
* Space movie trailer featuring a man wearing a red wool knitted motorcycle helmet
* Fluffy animated alien
* A dwarf raking a zen garden inside a glass sphere
The full prompts behind the example clips, via Alvaro Cintas (@dr_cintas), February 15, 2024:
1. Space movie trailer featuring a man wearing a red wool knitted motorcycle helmet.
2. Animated scene features a close-up of a short fluffy monster kneeling beside a melting red candle. The art style is 3D and realistic, with a focus on lighting and texture. The mood of the painting is one of wonder and curiosity, as the monster gazes at the flame with…
3. A close up view of a glass sphere that has a zen garden within it. There is a small dwarf in the sphere who is raking the zen garden and creating patterns in the sand.
4. Extreme close up of a 24 year old woman’s eye blinking, standing in Marrakech during magic hour, cinematic film shot in 70mm, depth of field, vivid colors, cinematic.
🚨 Google just dropped Gemini 1.5 Pro, the next version of its AI model with a 1,000,000+ token context length.
The model can now understand entire books, full movies, and podcast series all in one go.
This surpasses every competing chatbot’s context window by a long shot.
— Rowan Cheung (@rowancheung) February 15, 2024
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker and a Singularity University speaker, and a guest on numerous radio shows and podcasts. He is open to public speaking and advising engagements.
It still has many corner cases (e.g., don’t ask it to model a plastic chair) where it doesn’t know enough about certain objects and lighting effects, so items look floppy, morph into one another, or sprout limbs out of nowhere.
I’d say it’s already about 80% there: right at the Pareto frontier of bare usefulness versus the weird requests that would need some special additional training to model correctly.
It will get better, though.
I want to be able to describe a character and then have it drawn. Then I want to be able to have that SAME character drawn later… it needs a memory of that character, or some way to output enough info that it can recreate that character later. I want to be able to age the character, place them in any environment, and have them displayed in any desired style.
It needs to be REPRODUCIBLE: every image of that character, once drawn, should be able to be drawn again with the correct input.
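For what it’s worth, current image generators already offer a weak form of this: with the same prompt, sampler settings, and random seed, the output is deterministic. The seed fixes the starting noise the diffusion process denoises. A minimal sketch (using NumPy noise as a stand-in for a real sampler’s initial latent; the function name is illustrative, not a real API):

```python
import numpy as np

def sample_latent(seed: int, shape=(4, 8, 8)) -> np.ndarray:
    """Draw the initial noise a diffusion sampler would start from.

    With the same seed, the starting noise (and hence, with a fixed
    prompt and fixed sampler settings, the final image) is identical.
    """
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = sample_latent(42)
b = sample_latent(42)  # same seed -> bit-identical noise
c = sample_latent(43)  # different seed -> different noise

assert np.array_equal(a, b)
assert not np.array_equal(a, c)
```

What seeds do not give you is the commenter’s stronger wish: re-rendering the same character under a *different* prompt, age, or style, which is what character-training approaches like LoRA try to address.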
There are some workflows where you create a character sheet for a character and then train a LoRA on it, or at least that’s what YouTube has taught me (my desktop died soon after I started playing around with Stable Diffusion). It requires
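The core idea behind LoRA is small: the pretrained weights stay frozen, and a trainable low-rank update W + (alpha/r)·B·A is added on top, so only ~2·d·r parameters capture the new character. A minimal NumPy sketch of the math (not a real training script; shapes and scaling follow the usual LoRA convention):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 64, 64, 4, 8     # rank r << d: few trainable params
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight

# Trainable low-rank factors. B starts at zero, so before any training
# the adapted layer behaves exactly like the pretrained one.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x: np.ndarray) -> np.ndarray:
    """Frozen path W @ x plus the scaled low-rank correction."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B still zero, the LoRA branch contributes nothing:
assert np.allclose(lora_forward(x), W @ x)
```

Training then updates only A and B (for example via the LoRA scripts in Hugging Face `diffusers`), which is why a character LoRA is megabytes rather than gigabytes.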
Wait, are you saying Gemini can take my entire 400,000-word, 2-volume Sci-Fi novel, Neitherworld: https://amazon.com/Neitherworld-Book-Akiiwan-Scott-Baker-ebook/dp/B07NHTTKC3/ref=sr_1_2, (I can combine the 2 PDFs into 1), and turn it into a video/movie? Do the 24 pictures I commissioned an artist to create help or hurt that effort? I’m about to receive a Mac Studio with 64 GB RAM & 4 TB storage. Is that adequate to store the resulting film & process the request?
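On the context-window part of that question, a quick back-of-envelope check suggests the manuscript would fit. Assuming roughly 1.3 tokens per English word (a common rule of thumb; the true ratio depends on the tokenizer), a 400,000-word novel is well under the 1,000,000-token limit. Ingesting the text is a separate question from generating a film, which the model does not do.

```python
# Does a 400,000-word manuscript fit in a 1,000,000-token window?
words = 400_000
tokens_per_word = 1.3        # assumed rule of thumb, tokenizer-dependent
context_limit = 1_000_000

estimated_tokens = int(words * tokens_per_word)
print(estimated_tokens)                  # 520000
print(estimated_tokens < context_limit)  # True: it fits with room to spare
```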
I’m not sure what this means for creators. LLMs can produce beautiful, fantastic things, but if they can’t deliver what creators actually want & imagine (more or less), are they really useful?
If it is anything like DALL-E, then it is fun for getting A result out of it. If you want something specific, it can’t do it. “Make the monster green” will change not just the color of the monster but everything else too.