New AI Tool From Amazon’s AWS Speeds Up Vertical Video Clip Process; Fox, NBCUniversal Among Initial Customers

Amazon’s AWS is rolling out a new AI-enabled product it believes will help broadcasters meet the social media moment.

AWS Elemental Inference enables the conversion of live and on-demand footage into vertical videos optimized for mobile and social.

In carrying sports and other live events, networks and streamers are increasingly pumping out vertical videos designed for mobile devices. When they land, the videos trend on TikTok, Instagram Reels, YouTube Shorts and other platforms, increasing engagement and luring younger viewers. Yet the workflow can be time-consuming, as video captured by the main crew is largely reframed for mobile by hand.

AWS said the new setup utilizes a “process once, optimize everywhere” scheme, achieving latency of 6 to 10 seconds compared with as much as a minute with rival tools.

NBCUniversal and Fox Corp. are among the initial customers of the offering, Samira Panah Bakhtiar, GM of Media & Entertainment, Games, and Sports at AWS, told Deadline in an interview.

Ricardo Perez-Selsky, senior director of digital production operations at Fox Sports, said the tools cut the turnaround time from 45 minutes to an hour to less than 15 minutes. “And that’s with a plussed-up storytelling aspect,” he said. “It’s automating what was a pretty tedious process of like keyframing 16×9 video to create 9×16 video.”

The service uses an agentic AI application that analyzes video in real time and automatically applies the right optimizations at the right moments. Vertical cropping and clip generation happen autonomously, executing multi-step transformations that “require no human intervention to extract value,” AWS said in a press release.
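AWS hasn’t published how the model frames its crops, but the core geometry of turning a 16:9 frame into a 9:16 clip is straightforward. As a rough illustration only (the function name, the subject-tracking input, and the centering strategy are all assumptions, not AWS’s method), a crop window can be sized to the full frame height and slid horizontally to follow the detected subject:

```python
def vertical_crop(frame_w, frame_h, subject_x, out_aspect=9 / 16):
    """Return (x, y, w, h) of a 9:16 crop window inside a wider frame.

    Hypothetical sketch: uses the full frame height, then centers the
    window on the subject's x-position, clamped to the frame edges.
    """
    crop_h = frame_h                          # keep full vertical resolution
    crop_w = int(round(crop_h * out_aspect))  # e.g. 1080 * 9/16 = 608 px
    x = min(max(subject_x - crop_w // 2, 0), frame_w - crop_w)
    return x, 0, crop_w, crop_h

# A subject near the right edge of a 1080p frame:
print(vertical_crop(1920, 1080, subject_x=1600))  # → (1296, 0, 608, 1080)
```

In a real pipeline, `subject_x` would come from a per-frame detection model; this sketch only shows why the crop discards roughly two-thirds of the horizontal field of view, which is what makes framing decisions matter.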

Perez-Selsky said the objective when cutting vertical clips is to not “do these things that are going to dizzy the audience. This is now multiplied with vertical video because the action is moving that much faster because you have less space to actually capture the action. And so the model has to understand ‘How do I ramp up? How do I ramp down? How do I create camera movement that feels like it’s being operated by a person and not by a machine?’”
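The ramp-up/ramp-down behavior Perez-Selsky describes is, in classical terms, easing the virtual camera toward its target rather than snapping to it. AWS hasn’t disclosed its approach; as a hypothetical sketch of the idea, exponential smoothing of the per-frame crop-center targets produces motion that accelerates and decelerates gently:

```python
def smooth_pan(targets, alpha=0.15):
    """Exponentially smooth per-frame crop-center targets.

    Hypothetical sketch: each frame, the virtual camera moves a fixed
    fraction (alpha) of the remaining distance toward the target, so a
    sudden jump in the subject's position becomes a gradual pan.
    """
    smoothed, pos = [], targets[0]
    for t in targets:
        pos += alpha * (t - pos)  # ease toward the target
        smoothed.append(pos)
    return smoothed

# A subject that jumps from x=0 to x=100 produces a gradual pan:
print(smooth_pan([0, 100, 100, 100]))
```

A lower `alpha` gives a slower, calmer pan; production systems typically add velocity limits and dead zones on top so small detection jitter doesn’t move the camera at all.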


Nathan Pine
