When OpenAI presented Sora in February of this year, we had already witnessed firsthand the evolution of image generation tools such as DALL·E, Midjourney, and Stable Diffusion; now high-quality video generation solutions are emerging as well. In recent months the landscape has changed significantly.
It turns out that OpenAI wasn't so unattainable after all. Within a matter of months, a variety of tools aimed at competing directly with Sora appeared on the scene: Vidu and Kling AI emerged from China, while Dream Machine and, just hours ago, the new Gen-3 Alpha arrived from the United States.
This latest tool has been developed by the well-known New York firm Runway. Gen-3 Alpha arrives after Gen-1 and Gen-2, both launched in 2023, but is billed as the first in a new series of models trained on a new multimodal infrastructure. And the model boasts many new features.
Unlike previous Runway ML products, Gen-3 Alpha promises improvements in three key areas. First, according to the company, it will offer higher image quality: visual representations can take on abstract or realistic tones, with an improved level of fidelity.
Second, the company promises that its new model will also make a leap in terms of consistency. This matters for any professional use where the goal is results that follow a coherent line: a model capable of generating great images is of little use if each one looks different from the last.
Another highlight is motion. Here model creators usually strike a balance: the less movement there is, the lower the chance of unwanted artifacts emerging in the video; the more movement, the greater the risk. Runway, however, seems to be up to the challenge.
The model will be available in the coming days for Runway subscribers, with advanced controls for scene generation such as Motion Brush, Advanced Camera Controls, and Director Mode.