Red5 Welcomes Brett Fasullo as New VP of Global Sales
https://www.red5.net/blog/red5-welcomes-brett-fasullo-as-new-vp-of-global-sales/ | Wed, 10 Jul 2024

We’re thrilled to announce a significant addition to our leadership team at Red5. Brett Fasullo has joined us as the new VP of Global Sales, bringing over two decades of experience in the technology and media industries.

Brett’s impressive background includes leadership roles at Phenix Real Time Solutions and Akamai Technologies, as well as valuable experience with innovative startups in the digital media space. His expertise in building high-performance sales teams and deep understanding of advanced technologies make him the perfect fit to lead our global sales efforts.

In his new role, Brett will be instrumental in expanding Red5’s market presence, developing our global sales strategy, and fostering strategic partnerships. His appointment comes at an exciting time for Red5, as we continue to see growing demand for our low-latency, scalable streaming solutions across various industries.

We’re confident that Brett’s leadership will play a crucial role in accelerating Red5’s growth and bringing our innovative WebRTC streaming ecosystem to an even wider global market.

For more details about Brett Fasullo’s appointment and his background, read the full press release here.

IP Camera Live Streaming: Connecting RTSP to WebRTC
https://www.red5.net/blog/ip-camera-live-streaming-rtsp-to-webrtc/ | Fri, 28 Jun 2024

So you have your IP camera all hooked up and ready to begin streaming video. Now you just need to figure out how people can watch it. Ideally, it should be simple and universally available, like an internet browser. You also want to see the live stream coming from the IP camera with minimal latency, as close to real-time as possible.

Unfortunately, most IP cameras are RTSP (Real-Time Streaming Protocol) based, which is not natively supported in internet browsers. So how do you use an IP camera for live streaming?

One solution is to connect to the RTSP stream and view it in VLC Media Player. However, that requires additional configuration and is not convenient. Typically, users of your app won’t understand how to configure it, and it would be much easier if they could just view the video in a standard web browser. Plus, what if you need to support thousands or millions of viewers?

Even more likely, what if you have hundreds of IP cameras, such as security cameras, that you need to access programmatically so that each camera’s video stream can be viewed in a webpage? That way, anyone in the organization can quickly access the live stream of each security camera remotely and in near real-time.

Live Streaming Surveillance

There are other IP camera streaming solutions on the market for accessing live video from IP cameras in browser applications, but most of them use high latency delivery protocols such as HLS (HTTP Live Streaming). Solutions like these add many seconds of delay to the IP camera’s video stream, making it difficult to respond to critical situations in real-time. Applications for first responders who access security cameras to understand what is happening as an emergency unfolds cannot tolerate seconds of delay in the IP camera live stream.

In this situation especially, you want to view multiple cameras with different views, keep them in perfect sync, and stay as close as possible to the moment the action was captured. Luckily, we also have live server-side mixing technology that can be leveraged to provide live grids of multiple streams coming from security cameras.

Fortunately, WebRTC is a real-time protocol widely supported by internet browsers and native applications. All you need to do is convert the RTSP video stream coming from your IP cameras to WebRTC. This post will explain how that can be done with Red5 Pro. Note that soon we will be adding support for RTSP IP camera streaming to our fully managed service, Red5 Cloud. We will be sure to update this post as soon as it’s ready.

Without further ado, let’s dive into the details of IP camera video streaming and how to get those streams into browser applications.

How Do I Convert an RTSP Video Stream to WebRTC?

The simple answer is to use Red5 Pro. We’ve built that logic into our Restreamer Plugin so all that work is done automatically once it’s configured. We also have egress support for many other streaming protocols like RTMP, Zixi, and HLS for the few clients that don’t support WebRTC.

However, I’m sure that some of you reading this are wondering about the specific process we use to convert the IP camera protocol, RTSP, to WebRTC. Read on, my inquisitive friends, and we will cover that in more detail.

In order to integrate an IP camera (or really any RTSP live stream) with WebRTC, you first need to achieve media interoperability (a fancy term for making them work together). The media stream sent out by the IP camera needs to be made compatible with formats supported by browsers and the WebRTC codecs.

What is RTSP used for?

RTSP is a streaming control protocol used to control the streaming server, kind of like how a remote control works with a TV (enabling play, pause, etc.). It does not actually transport the stream; rather, it defines how the data in the live stream should be packaged for delivery. It also defines how both ends of the connection should behave to prepare a pathway for transportation.

This process is also known as signaling. WebRTC handles signaling differently, often involving WebSockets, with more modern solutions using the HTTP-based WHIP and WHEP standards.

Technically, RTSP is a signaling protocol, and what it uses for transporting the live stream is not actually specified in the official RFC. That said, people tend to refer to the signaling plus the transport (usually RTP) as one thing. This is the protocol stack that most IP cameras use today. We will get to the transport layer in a bit, but first let’s dive into signaling.

What is Signaling in Detail?

Again, signaling is just the ability to connect endpoints so that they can effectively stream to each other. RTSP and WebRTC are quite different signaling technologies, so let’s look at each in detail.

WebRTC Signaling

Signaling, particularly with WebRTC, is a complex subject. Because WebRTC was originally designed as a peer-to-peer protocol allowing live video chat between two parties, you typically have to negotiate NAT and find ways to map one IP address to the other peer’s IP address. WebRTC signaling does this using the ICE protocol and its related protocols STUN and TURN. At its simplest, this part of signaling with WebRTC is just the process of routing one endpoint’s IP address to another so that the two can connect. In the case of a media server in the middle, as we have with Red5 Pro, we are basically mimicking a client-server model, but using P2P technology to do the work.
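
To make this concrete, here is a minimal TypeScript sketch of the browser side of a WebRTC subscribe negotiation. The signaling endpoint, STUN/TURN addresses, and message shapes are placeholders, not Red5 Pro’s actual API; the Red5 Pro SDKs wrap all of this for you.

```typescript
// Minimal sketch of browser-side WebRTC signaling (illustrative only).
// The endpoints and message format below are hypothetical placeholders.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.com:3478' }, // STUN: discover our public IP/port
    { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'pass' }, // TURN: relay fallback
  ],
});

const signaling = new WebSocket('wss://signaling.example.com/ws'); // placeholder endpoint

// Send each ICE candidate to the remote side as it is gathered (trickle ICE).
pc.onicecandidate = (event) => {
  if (event.candidate) {
    signaling.send(JSON.stringify({ type: 'candidate', candidate: event.candidate }));
  }
};

signaling.onopen = async () => {
  // Ask to receive audio and video, create an SDP offer, and send it to the server.
  pc.addTransceiver('video', { direction: 'recvonly' });
  pc.addTransceiver('audio', { direction: 'recvonly' });
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ type: 'offer', sdp: offer.sdp }));
};

signaling.onmessage = async (msg) => {
  const data = JSON.parse(msg.data);
  if (data.type === 'answer') {
    await pc.setRemoteDescription({ type: 'answer', sdp: data.sdp });
  } else if (data.type === 'candidate') {
    await pc.addIceCandidate(data.candidate);
  }
};
```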

RTSP Signaling

Since RTSP was designed around a client-server model, it is much simpler. The IP address of the server is a known entity, typically publicly accessible on the internet, so connecting to it just involves using the proper address (usually resolved through DNS). RTSP signaling comes into play through a series of commands sent between the client and server. These commands, such as SETUP, PLAY, PAUSE, and TEARDOWN, initiate and control the session. The SETUP command initiates the session and defines the transport mechanism to be used, while PLAY starts the stream, PAUSE temporarily halts it without tearing down the session, and TEARDOWN ends the session entirely. These commands are exchanged over a persistent connection, typically TCP (in Red5 Pro we support UDP as well), and dictate how the media server should handle the requested stream. Again, RTSP does not transport the actual media but coordinates the media streaming, often working alongside protocols like RTP (Real-time Transport Protocol) to ensure synchronized delivery of audio and video. This separation of control (via RTSP) and media data (via RTP) allows for efficient management of streams, making RTSP a versatile choice for both live and on-demand video services.
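
To give a feel for what that looks like on the wire, here is a simplified sketch of an RTSP exchange between a client and a camera. The camera address, ports, and session ID are made up, and most headers are trimmed.

```
DESCRIBE rtsp://camera.example.com/stream RTSP/1.0
CSeq: 1

RTSP/1.0 200 OK
CSeq: 1
Content-Type: application/sdp
... (SDP describing the camera's H.264/AAC tracks) ...

SETUP rtsp://camera.example.com/stream/trackID=1 RTSP/1.0
CSeq: 2
Transport: RTP/AVP;unicast;client_port=5000-5001

RTSP/1.0 200 OK
CSeq: 2
Session: 12345678
Transport: RTP/AVP;unicast;client_port=5000-5001;server_port=6000-6001

PLAY rtsp://camera.example.com/stream RTSP/1.0
CSeq: 3
Session: 12345678

RTSP/1.0 200 OK
CSeq: 3
```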

What is Transport, and What Do IP Cameras Use?

With almost all RTSP-based IP cameras, RTP (Real-time Transport Protocol) is used for the actual transportation of the data/video stream. The RTP transport protocol, it could be said, is like a runway and flight path between two airports, while RTSP (signaling) is the air traffic controller that makes sure the runway is open and the flight path is clear of any obstacles. The video codecs (covered in more detail below) used to encode the video data would be the airplane itself.

The Real-Time Transport Protocol (RTP) is crucial for streaming audio and video, particularly in applications involving IP cameras. RTP is designed to efficiently transport real-time data over IP networks, making it ideal for live video surveillance, remote monitoring, and streaming from IP cameras.

In the context of IP cameras, RTP encapsulates audio and video data into packets for transmission over the network. Each packet includes a header containing timing information, sequence numbers, and synchronization details, essential for the correct reconstruction of the stream at the receiving end. This allows IP cameras to send high-quality, synchronized audio and video feeds to the Red5 Pro media server in real-time.
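
As a rough illustration of what that header carries, here is a short TypeScript sketch that reads the fixed 12-byte RTP header fields (per RFC 3550) from a received packet. It is simplified for illustration and ignores CSRC entries, header extensions, and padding.

```typescript
// Parse the fixed 12-byte RTP header (RFC 3550) from a raw packet buffer.
interface RtpHeader {
  version: number;        // should be 2
  payloadType: number;    // identifies the codec (per the SDP payload mapping)
  sequenceNumber: number; // used to detect loss and reorder packets
  timestamp: number;      // media clock time, used to sync audio and video
  ssrc: number;           // identifies the stream source
}

function parseRtpHeader(packet: Uint8Array): RtpHeader {
  const view = new DataView(packet.buffer, packet.byteOffset, packet.byteLength);
  const firstByte = view.getUint8(0);
  const secondByte = view.getUint8(1);
  return {
    version: firstByte >> 6,
    payloadType: secondByte & 0x7f,
    sequenceNumber: view.getUint16(2),
    timestamp: view.getUint32(4),
    ssrc: view.getUint32(8),
  };
}
```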

It turns out that WebRTC also uses RTP (technically an encrypted version of RTP known as SRTP) for transport, which brings us to our next subject.

How Do RTSP and WebRTC Work Together?

As WebRTC also uses RTP for its transport protocol, the two are highly compatible. The complication comes from how IP cameras behave.

Most IP cameras directly produce an RTSP stream, acting as an RTSP server. Normally, in the case of webcam or cell phone video, the Red5 Pro (origin) server receives the live stream from those clients. This creates a challenge when connecting a Red5 Pro server to an IP camera that is also acting as a server. You need a client implementation in the mix, in this case an RTSP client, to consume the live stream from the IP camera. Just as with electrical sockets, you can’t join two connections of the same type: a wall outlet only accepts a plug, not another outlet. In the same way, two servers can’t connect directly to each other; you need something in between.

To perform this conversion role, Red5 Pro created the Restreamer Plugin to pull the RTSP stream as a client and re-stream the IP camera’s video out over WebRTC. The live stream can then be delivered over WebRTC to browser clients.

In Red5 Pro, our primary ingest codecs are H.264 for video and AAC for audio. Normally, IP cameras deliver media over either RTSP or MPEG-TS (the latter not using RTP), while WebRTC defaults to VP8 (video) and Opus (audio) in most applications. Since all modern browsers now accept H.264, it is faster for Red5 Pro to simply pass the H.264 video straight through to WebRTC while transcoding the AAC audio to Opus. Finally, the stream is routed to WebRTC clients using the WebRTC protocol stack. However, in some use cases older systems might not be able to handle H.264; in those rare cases Red5 Pro transcodes H.264 to VP8.
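
On the playback side, you can check what a browser is able to receive before deciding whether an H.264 pass-through will work. A minimal sketch using the standard WebRTC capabilities API (not a Red5-specific call):

```typescript
// Check whether this browser can receive H.264 video and Opus audio over WebRTC.
// Virtually all modern browsers report both; this just shows the standard way
// to inspect receiver capabilities before subscribing.
function supportsCodec(kind: 'audio' | 'video', mimeType: string): boolean {
  const caps = RTCRtpReceiver.getCapabilities(kind);
  return !!caps && caps.codecs.some(
    (c) => c.mimeType.toLowerCase() === mimeType.toLowerCase()
  );
}

const canReceiveH264 = supportsCodec('video', 'video/H264');
const canReceiveOpus = supportsCodec('audio', 'audio/opus');
console.log({ canReceiveH264, canReceiveOpus });
```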

Though this post focuses on RTSP IP camera video streaming, it bears mentioning that the Red5 Pro Restreamer also supports various protocols including SRT, Zixi, MPEG-TS, and RTSP (MPEG-TS as either multicast or unicast ingest, whereas RTSP would be unicast). All of the protocols mentioned support the H.264 video and AAC audio codecs.

Red5 Mixers and How Do They Help?

So far we’ve explained how to get one IP camera streaming to Red5 Pro and out as a WebRTC stream, but we’ve not covered how to get video from multiple IP cameras onto a single web page. This use case is particularly important in monitoring-style apps. For example, you might want your users to be able to watch many security cameras in a building at once.

One approach is to give each live stream its own WebRTC connection and render each separately on a single page. This means you are letting the browser client do the work of mixing the videos and laying them out. The downside is that it requires the client to do a lot of work: processor load increases, the client has to handle many incoming connections, and the bandwidth requirements per client go up. This approach works well when you only need a few feeds up at any one time, but the methodology breaks down quickly as you add more and more IP camera feeds to the page.

A better approach when creating large grids of IP camera streams is to do the work on the server side. Luckily, Red5 Pro has a Mixer node available in its architecture that you can take advantage of.

Here’s a step-by-step process for incorporating a grid of the live IP camera streams into a single output over WebRTC.

  1. RTSP Stream Ingestion: Each IP camera stream is ingested into the Red5 Pro server using RTSP. As already discussed, Red5 Pro’s ingest servers can handle these RTSP streams and prepare them for further processing.
  2. Red5 Mixers Configuration: Red5 Mixers are a component of Red5 Pro that allows for the dynamic composition of video streams. They can take multiple video inputs and combine them into a single output stream in a customized layout. In this scenario, each RTSP stream from the cameras acts as an input to the Mixer.
  3. Combining Streams into a Grid Layout: The Mixer combines the RTSP streams into a grid layout. For instance, if you have four IP camera streams, you can configure the Mixer to create a 2×2 grid where each IP camera’s feed occupies one quadrant of the output video (see the layout sketch after this list). This grid can be dynamically adjusted based on the number of input streams and desired layout, providing flexibility in how the final output is presented.
  4. Processing and Mixing: Once configured, the Mixer processes these inputs in real-time. It overlays, resizes, and synchronizes the streams to form a single coherent video feed. This processing includes handling any timing and synchronization issues to ensure that all camera feeds are displayed smoothly together without lag or jitter.
  5. Encoding the Mixed Stream: The combined grid video is then encoded into a format suitable for WebRTC streaming. This encoding ensures that the output stream meets the requirements for WebRTC, including codec specifications and real-time delivery constraints.
  6. Streaming via WebRTC: The final step is to deliver the combined stream as a WebRTC feed. WebRTC is chosen for its ability to provide low-latency, high-quality streaming directly in browsers without needing additional plugins. Red5 Pro’s server handles the WebRTC signaling and media transport, ensuring that viewers receive the grid view of the combined RTSP streams with minimal delay.
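
To illustrate the grid idea from step 3, here is a small TypeScript sketch that computes where each camera feed would sit in an N-input grid for a given output resolution. This is only an illustration of the layout math; it is not the Red5 Pro Mixer’s actual configuration format.

```typescript
// Compute a grid layout for N input streams in a single composed output frame.
// Illustrative only -- not the actual Red5 Pro Mixer configuration format.
interface Tile { x: number; y: number; width: number; height: number; }

function gridLayout(inputCount: number, outWidth: number, outHeight: number): Tile[] {
  const cols = Math.ceil(Math.sqrt(inputCount)); // e.g. 4 inputs -> 2 columns
  const rows = Math.ceil(inputCount / cols);     // e.g. 4 inputs -> 2 rows
  const tileW = Math.floor(outWidth / cols);
  const tileH = Math.floor(outHeight / rows);
  return Array.from({ length: inputCount }, (_, i) => ({
    x: (i % cols) * tileW,
    y: Math.floor(i / cols) * tileH,
    width: tileW,
    height: tileH,
  }));
}

// Four cameras composed into a 2x2 grid on a 1920x1080 output.
console.log(gridLayout(4, 1920, 1080));
```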

So That’s How It All Works

Fortunately, as we covered earlier, Red5 Pro has already done all this work for you. Find out more by visiting our website and documentation pages. Send any questions you have to info@red5.net or schedule a call with us.

Introducing Red5 Cloud
https://www.red5.net/blog/introducing-red5-cloud/ | Mon, 24 Jun 2024

Introducing Red5 Cloud: The Full Power of Red5’s Real-Time Streaming… Simplified

In a world where real-time video streaming is becoming increasingly crucial for businesses across various industries, Red5 is excited to announce the launch of Red5 Cloud – a groundbreaking platform that is set to revolutionize the way organizations deliver and experience real-time video content.

Built on our cutting-edge Experience Delivery Network (XDN) architecture, Red5 Cloud offers an automated, ultra-low latency, and massively scalable streaming infrastructure. This empowers businesses to focus on developing innovative applications without the complexities of infrastructure management.

What sets Red5 Cloud apart?

1. Fully Managed Service

With Red5 Cloud, launching your streaming operations is easy. Our intuitive interface allows you to set up your dedicated streaming environment with just a few clicks, simplifying the process and saving you valuable time and resources.

2. Ultra-Low Latency

Red5 Cloud delivers unparalleled performance, with sub-250ms live streaming latency. This enables real-time interaction and engagement, opening up a world of possibilities for applications across the sports, betting, media, education, and surveillance and security industries.

3. Flexible Integration

Our platform supports a wide range of protocols, ensuring seamless integration across various devices and platforms. Whether you’re streaming to web, mobile, or custom hardware, Red5 Cloud has you covered.

4. Advanced Features

Red5 Cloud’s TrueTime Solutions™ offer advanced tools for enhancing user experience and interactivity, including frame-accurate metadata syncing with TrueTime DataSync™ and comprehensive security measures. With extensive API integration featuring webhooks and full-featured SDKs, organizations can seamlessly integrate Red5 Cloud with their existing systems.

5. Unmatched Scalability

Leveraging our global presence across 10 strategic regions, Red5 Cloud provides automatic scaling to meet the demands of your growing audience. Whether you’re streaming to a few hundred or millions of viewers, our platform ensures a smooth and reliable experience.

As the demand for real-time video streaming continues to grow, Red5 Cloud is poised to be the go-to solution for businesses seeking to deliver exceptional experiences to their audiences, with unparalleled performance, flexibility, and cost-effectiveness.

At Red5, we don’t settle for “good enough” – we aim for the extraordinary. Our passion for providing excellent products and services drives us to continually innovate and challenge the status quo. We invite you to explore the potential of Red5 Cloud and join us as we lead the way in the video streaming revolution. Contact us today to schedule a personalized demo or to learn more about how Red5 Cloud can transform your organization’s streaming capabilities.

Unlock the true potential of real-time video streaming, milliseconds to millions, at the speed of thought.

To learn more, visit our Red5 Cloud landing page or speak to our team at https://www.red5.net/contact/

Server Release 12.5.0
https://www.red5.net/blog/server-release-12-5-0/ | Sun, 23 Jun 2024

New Features

  • MP3 Audio Support
  • Filter for converting Metadata to ID3 tags in HLS

Improvements

  • DTLS Handling for WebRTC
  • Mixer support for Offline Deployments

Bug Fixes

  • Streaming muted WebRTC to Transcoder may cause disconnect
  • Append recording for HLS
  • Unknown attribute can cause RTSP camera issues
  • Multiple SRT streams can connect using same stream name
  • Fixed recording API when not using dynamic sorting with publishers that offer only global h264 headers

Not a Red5 customer yet? We can help you innovate with interactive streaming video. Reach out to info@red5.net or set up a call.

What is Ultra Low Latency and Why Does It Matter?
https://www.red5.net/blog/what-is-ultra-low-latency-why-does-it-matter/ | Fri, 17 May 2024

In today’s fast-paced digital world, speed is everything. Whether you’re streaming live sports, placing bets on the big game, monitoring traffic cameras, or coordinating emergency response efforts, even the slightest delay can have a significant impact on user experience and business outcomes. This is where our ultra low latency video streaming comes into play.

Ultra low latency has become such an important aspect of live streaming that even giant consumer streaming platforms like Twitch have introduced a low latency mode to their offering, allowing their streamers to reduce the delay in their video transmission for better interactivity.

In this blog post, we’ll explore what it is, why it matters, and how we at Red5 lead the way in delivering lightning-fast, real-time experiences across a wide range of industries.

What is Ultra Low Latency?

Ultra low latency streaming refers to the minimal delay between the transmission of data from a source and its reception by the intended recipient. In other words, it’s the time it takes for information to travel from point A to point B. When we talk about it, we’re typically referring to delays measured in milliseconds or even microseconds.

To put this into perspective, let’s consider a real-world example. Imagine you’re watching a live World Cup soccer (fútbol) game online. When a goal is scored, you expect to see it happen in real-time, without any noticeable delay. Let’s say you have a friend at the game and you are texting back and forth. If there’s a significant lag between the action on the field and what you see on your screen, it can be frustrating when your friend sends you an update before you see it live. This is where our ultra low latency video streaming comes into play, ensuring that you can experience the thrill of live sports as if you were there in person.

One potential point of confusion is that many in the video streaming business consider “ultra low latency” to be a delay in video transmission of between 1 and 4 seconds. The above use case would probably still work with this level of delay in the live stream. However, for us, 1 to 4 seconds would be considered extremely high latency, and many in the industry would call Red5 Pro’s WebRTC sub-second delivery (typically less than 250ms) “real-time latency”. For the sake of keeping things simple, in this post we are using the terms interchangeably.

Why Does Ultra Low Latency Streaming Matter?

It is crucial for a wide range of applications and industries. Here are just a few examples:

1. Live Sports Streaming: As mentioned earlier, high latency video is an obstacle to delivering a seamless and engaging live sports viewing experience. Fans expect to see the action unfold in real-time, without any delays or interruptions. As new experiences emerge that combine highly interactive live streams with aspects of video conferencing, such as watch parties, the latency of the live video becomes even more important.

2. Live Sports Betting: In the world of sports betting, low latency live video is critical for ensuring that bets are placed and processed in real-time, based on the most up-to-date information. Even a slight delay can result in missed opportunities or unfair advantages.

3. Traffic Monitoring: Our ultra low latency live streaming is essential for enabling real-time traffic monitoring and management. By processing and analyzing data from traffic cameras and sensors with minimal delay, authorities can quickly identify and respond to accidents, congestion, and other issues.

4. Live Drone Footage: As drones become increasingly popular for a variety of applications, from aerial photography to search and rescue operations, ultra low latency video streaming is crucial for enabling real-time footage and remote control. Any delay in the transmission of video, data, or control signals can be dangerous or even catastrophic.

5. Surveillance and Security: Live streaming with sub-second latency is essential for enabling real-time monitoring and response in surveillance and security applications. Whether it’s detecting and responding to a security breach or monitoring a sensitive area for suspicious activity, our ultra low latency capabilities ensure that personnel can react quickly and effectively.


6. Emergency Response: In emergency situations, every second counts. Our ultra low latency streaming solution is crucial for enabling real-time communication and coordination between first responders, dispatchers, and other personnel. By minimizing delays in the transmission of critical information, we can help save lives and minimize damage. We continue to see applications that combine communications over video conferencing with near real-time live streams become commonplace.

How We at Red5 Deliver Ultra Low Latency

At Red5 (www.red5.net), we are a leading provider of ultra low latency solutions for real-time video streaming, gaming, and data delivery. Our cutting-edge technology is designed to minimize latency and ensure seamless, real-time experiences for our users across a wide range of industries, from live sports and betting to traffic monitoring and emergency response.

One of the key components of our solution is our custom-built media server. Unlike traditional media servers, which can introduce significant delays due to buffering and processing, our server software is optimized for speed and efficiency. By leveraging advanced techniques like WebRTC (Web Real-Time Communication) and a wide variety of other protocols (Zixi, SRT, RTSP, etc.), we are able to achieve ultra low latency video streaming with unparalleled speed and reliability.

Our Experience Delivery Network (XDN) architecture is designed to deliver real-time live video at scale, leveraging a network of origin and edge compute nodes that optimize streaming performance across diverse geographic locations. In this architecture, the origin server acts as the central hub, receiving initial streams from broadcasters. These streams are then intelligently distributed to edge nodes, which are strategically located closer to end-users to minimize latency. As viewers connect to watch a live event, they are automatically routed to the nearest edge node, ensuring that they receive the stream with the least possible delay. This method not only significantly reduces the latency that can hinder real-time interactions but also scales efficiently by distributing the load across multiple nodes. The result is a robust, high-performance streaming solution capable of handling large volumes of simultaneous viewers with near-zero latency, maintaining the quality and immediacy of live video content delivery.
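
In a Red5 Pro or Red5 Cloud deployment, the Stream Manager performs this edge routing for you. Purely to illustrate the idea of “connect to the closest edge,” here is a hypothetical client-side TypeScript sketch that picks whichever edge answers fastest; the endpoint URLs and health-check paths are made up for the example and are not part of any Red5 API.

```typescript
// Illustration only: pick the edge endpoint with the lowest measured round-trip time.
// In a real deployment the Stream Manager handles edge selection automatically.
async function measureRtt(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url, { method: 'HEAD', cache: 'no-store' }); // placeholder health-check URL
  return performance.now() - start;
}

async function pickClosestEdge(edges: string[]): Promise<string> {
  const timings = await Promise.all(
    edges.map(async (url) => ({ url, rtt: await measureRtt(url) }))
  );
  timings.sort((a, b) => a.rtt - b.rtt);
  return timings[0].url;
}

// Hypothetical edge endpoints for the example.
pickClosestEdge([
  'https://edge-us-east.example.com/health',
  'https://edge-eu-west.example.com/health',
  'https://edge-ap-south.example.com/health',
]).then((edge) => console.log('subscribe via', edge));
```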

Another important aspect of the XDN architecture is the flexibility of where it can be deployed, whether that is in the cloud on the customer’s own tenant, on-prem, or on edge networks like AWS Wavelength. Because our customers can strategically place origin and edge nodes in key locations around the world, they can minimize the physical distance between users and the content they’re accessing, resulting in even lower network latency and faster response times.

Soon, with the launch of Red5 Cloud, customers will be able to deploy Red5’s XDN with the click of a button, reducing the overhead and complexity of maintaining the software on their own infrastructure. If you are interested in reducing latency and don’t want to manage Red5 Pro on your own system, then this is the solution for you. Sign up here to get early access.

In addition to our technical infrastructure (whether deployed on your servers or via Red5 Cloud), we also offer a range of tools and services to help businesses and developers integrate ultra low latency video streaming into their applications. Our software development kits (SDKs) and APIs make it easy to build sub-second latency mobile streaming and data delivery solutions, while our managed hosting and support services provide the expertise and resources needed to ensure optimal performance and reliability.

Real-World Applications of Our Ultra Low Latency Technology

Our technology has been deployed in a variety of industries and applications, demonstrating its versatility and effectiveness. Here are a few notable examples:

1. Live Sports Streaming: We have partnered with major sports leagues and broadcasters to deliver live, real-time video streaming of sporting events to millions of fans worldwide. By minimizing the delay between the action on the field and the viewer’s screen, we enable fans to experience the excitement of the game in real-time, as if they were there in person.

2. Live Sports Betting: Our technology is also being used to power live sports betting platforms. By ensuring that bets are placed and processed in real-time, based on the most up-to-date information, we help create a fair and engaging betting experience that keeps fans on the edge of their seats.

3. Traffic Monitoring: Red5’s ultra low latency video streaming is being used by transportation authorities and smart city initiatives to enable real-time traffic monitoring and management. By processing and analyzing data from traffic cameras and sensors with minimal latency, we help authorities quickly identify and respond to accidents, congestion, and other issues, keeping traffic flowing smoothly and safely.

4. Live Drone Footage: Our software is being used to power live drone video streaming for a variety of applications, from border patrol monitoring to search and rescue operations. By minimizing the latency between the drone’s camera and the operator’s screen, we enable precise control and real-time decision-making, even in challenging or time-sensitive situations.

5. Emergency Response: Our ultra low latency video streaming is used by emergency response organizations to enable real-time communication and coordination between first responders, dispatchers, and other personnel. By minimizing delays in the transmission of critical information, we help emergency teams respond quickly and effectively to life-threatening situations.

6. Surveillance and Security: Our low latency software is being used in a variety of surveillance and security applications, from monitoring sensitive areas to detecting and responding to security breaches. By enabling real-time, high-quality video streaming and analysis, we help security personnel stay on top of potential threats and respond quickly and effectively when needed.

The Future of Ultra Low Latency

As the demand for real-time, interactive experiences continues to grow across a wide range of industries, the importance of ultra low latency will only continue to increase. At Red5, we are at the forefront of this revolution, constantly innovating and pushing the boundaries of what’s possible with the lowest latency live streaming in the industry.

In the coming years, we can expect to see even more applications and industries embrace this technology, from live event streaming and gaming to industrial automation and remote healthcare. As high bandwidth 5G networks become more widespread, the potential for ultra low latency video streaming will only expand, enabling new and exciting use cases that we can only begin to imagine.

At the same time, the challenges of delivering it at scale will continue to drive innovation and investment in the field. We at Red5 will continue to optimize our infrastructure, develop new protocols and algorithms, and find ways to minimize latency across increasingly complex and distributed networks.

Conclusion

Ultra low latency is a critical component of the modern digital landscape, enabling real-time engagement that is transforming the way we live, work, and play. From live sports and betting to traffic monitoring and emergency response, the applications of ultra low latency are vast and far-reaching.

As a leader in the field, we at Red5 are at the forefront of delivering ultra low latency video streaming solutions that are fast, reliable, and scalable. Our custom-built media server, XDN architecture, and suite of tools and services make it easy for businesses and developers to integrate this technology into their applications and deliver lightning-fast, real-time experiences to users across a wide range of industries.

As we look to the future, it’s clear that ultra low latency will only become more important, as the demand for real-time, interactive experiences continues to grow. With our commitment to innovation and excellence, we at Red5 will continue to lead the way in delivering cutting-edge ultra low latency solutions that transform industries and enhance experiences for our users around the world.

RTSP vs. WebRTC: Which to use for a Mobile App
https://www.red5.net/blog/rtsp-vs-webrtc-which-to-use-for-a-mobile-app/ | Wed, 15 May 2024

When we were designing the architecture for the Red5 Pro Mobile video streaming SDK, we had some critical early choices we needed to make. One of the biggest was what protocol we should base it on. After some initial research and experimenting, we decided on RTSP (Real Time Streaming Protocol).

But why did we choose RTSP and not WebRTC (Web Real Time Communication)? The choice really came down to the stability of the protocol at the time and fast connection times.

Selecting RTSP

When we first started our mobile SDK implementation in 2014, WebRTC was still a moving target, and it hadn’t yet made it to a final specification (this only happened in Sep 2017). On the other hand, RTSP had been around for years, and there were many stable implementations to reference. For example, RTSP streaming has long been used for security cameras, video surveillance applications, and a wide variety of other video feed use cases. Plus, RTSP and WebRTC shared the same underlying low latency transport technology.

The other major reason we decided against WebRTC for our mobile SDK streaming protocol is that connection times can be high (a few seconds) due to the need for signaling and ICE negotiation. WebRTC actually uses multiple steps before the media connection starts and video can begin to flow.

Conversely, RTSP takes just a fraction of a second to negotiate a connection because its handshake is done upon the first connection. We saw too many use cases that relied on fast connection times, and that was the major factor that pushed us to use RTSP. Slow connection times with WebRTC are less of an issue today, with WHIP and WHEP becoming a more efficient standard for negotiating the media connection and allowing a low latency stream to start very quickly.
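
For context on why WHIP speeds this up: it collapses WebRTC’s offer/answer exchange into a single HTTP POST. Here is a minimal TypeScript sketch of a WHIP publish from a browser; the endpoint URL is a placeholder, error handling is omitted, and a production client would typically use trickle ICE rather than waiting for gathering to finish.

```typescript
// Minimal WHIP publish sketch: one HTTP POST carries the SDP offer,
// and the response body is the SDP answer. The endpoint URL is a placeholder.
async function publishViaWhip(whipEndpoint: string): Promise<RTCPeerConnection> {
  const media = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const pc = new RTCPeerConnection();
  media.getTracks().forEach((track) => pc.addTrack(track, media));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // Wait for ICE gathering so the offer includes candidates
  // (a production client would use trickle ICE via PATCH instead).
  await new Promise<void>((resolve) => {
    if (pc.iceGatheringState === 'complete') return resolve();
    pc.addEventListener('icegatheringstatechange', () => {
      if (pc.iceGatheringState === 'complete') resolve();
    });
  });

  const response = await fetch(whipEndpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/sdp' },
    body: pc.localDescription!.sdp,
  });
  await pc.setRemoteDescription({ type: 'answer', sdp: await response.text() });
  return pc;
}

publishViaWhip('https://server.example.com/whip/endpoint/streamName');
```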

We also looked at other protocols for real-time communication, including RTMP, but RTMP had too many limitations, including limited codec support and being restricted to TCP (Transmission Control Protocol) only. We eventually settled on RTSP as our preferred choice.

What About Performance?

WebRTC is often touted as being designed as a low latency video streaming protocol. While that is certainly true, both WebRTC and RTSP employ the same underlying transport protocol for video and audio data streaming: RTP (or SRTP when encrypted). Due to this similarity, they both provide very low latency streaming. Video streams over both protocols can be encrypted using SRTP, and both employ packet retransmission mechanisms over UDP, such as NACK.
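
If you want to see that retransmission machinery at work on the WebRTC side, the standard browser statistics API exposes loss and NACK counts. A small sketch (works for any RTCPeerConnection; exact fields can vary slightly by browser):

```typescript
// Log packet loss and NACK counts for incoming video to observe RTP retransmission activity.
async function logNackCounts(pc: RTCPeerConnection): Promise<void> {
  const stats = await pc.getStats();
  stats.forEach((report: any) => {
    if (report.type === 'inbound-rtp' && report.kind === 'video') {
      console.log(
        `packets received: ${report.packetsReceived}, ` +
        `packets lost: ${report.packetsLost}, NACKs sent: ${report.nackCount}`
      );
    }
  });
}

// Poll every couple of seconds while a stream is playing.
// setInterval(() => logNackCounts(pc), 2000);
```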

Our mobile SDK performs quite similarly to WebRTC in terms of low latency, but on the server side, RTSP is simpler to implement. RTSP is a little lighter, which allows us to fit more RTSP connections per streaming server instance because we don’t have as much overhead from WHIP/WHEP/WebSocket signaling and ICE negotiation.

Thus, the threading model is vastly simplified, and overall our video streaming code is more optimized with RTSP, largely because the protocol is simpler to implement.

These simplicity and performance gains carry over to the client side as well. Compared to a laptop, mobile phones usually have less processing power, so the easier it is to send and receive streams, the better.

When to Use WebRTC

When we first wrote the Red5 Pro Mobile SDK, we only really cared about and used WebRTC when we were dealing with real-time communication in a browser client. The only low latency protocol that browsers support is WebRTC. Since we had to adhere to what the browsers give us, we had no other choice.

Alternatively, when writing the mobile SDK we had a lot more flexibility and thus chose RTSP for its simplicity and performance.

That all said, WebRTC has come a long way, and we’ve seen hardware encoders begin to support WebRTC streaming using WHIP as an ingest protocol, along with popular software implementations like OBS. Because WebRTC continues to improve and support more and more endpoints, we’ve begun adding WebRTC as an option in our low latency video streaming SDKs.

Check out the Mobile SDK combined with the Red5 Pro streaming platform for yourself and send any questions over to info@red5.net or let us show you a demo.

NAB 2024 Made the Case for WebRTC Streaming
https://www.red5.net/blog/nab-show-2024-made-the-case-for-webrtc-streaming/ | Tue, 07 May 2024

A lot of things are labeled historic these days, but it’s no exaggeration to suggest the M&E industry reached a milestone in April when every major theme at the National Association of Broadcasters Show spotlighted the need for affordable, massively scalable real-time streaming solutions.

Cost-conscious discourse dominating the exhibit halls and conference rooms focused on finding better approaches to monetizing content and services in the crowded video marketplace, especially when it comes to capitalizing on the shift to live programming in the streaming domain. The need for IP streaming technology supporting multidirectional transfers of video and other assets at real-time speeds was a given underlying the pursuit of these goals across all major topics of discussion, including:

  • cloud production and contribution playout to distributors,
  • ways to capitalize on the benefits of AI in metadata creation and use of stored data in feature enrichment,
  • personalization of user experiences,
  • socializing streamed services,
  • new service development across a vast range of targeted applications, from microbetting to interactive e-commerce, multiplayer gaming, and networked implementations of artificial, virtual, and mixed reality technologies.

A dazzling display of technological innovations in all these arenas left no doubt that a new era in engagement with consumers is in reach for an M&E industry that now finds itself anchored in internet technology. But all this progress in the supply chain left unanswered the question hanging over everything: How can we use these advances to maximum effect if we have no choice but to rely on the dominant streaming infrastructure or proprietary transport alternatives that weren’t designed for a real-time marketplace?

Offering Red5’s answer to that question kept us hopping. Over four days of interactions with visitors to our demonstrations and in executive meetings on and off the show floor we made it clear that the interactive real-time streaming support everyone needs is available from Red5’s WebRTC-based platform at costs that make it possible to put all these solutions to work with ROI outcomes everyone is looking for.

And it wasn’t just M&E folks who were searching for answers. Word that Red5’s real-time streaming solution is operating in multiple fields drew a team of government officials to our demonstrations. They asked us a lot of questions without revealing their use cases, but we had a pretty good idea of what they had in mind.

Our Experience Delivery Network (XDN) streaming platform is used by police and fire departments, public transportation and traffic administrators, and branches of the military to enable simultaneous analysis of multiple video feeds aggregated and delivered in real time to control centers from cameras arrayed over land, on drones, and under water. We had no problem providing our tight-lipped visitors the answers they were looking for.

Red5 PoC Demos Illuminated the Way Forward

More generally, our NAB Show presence was focused on making the case that, when all factors in the cost-benefit equations impacting top-of-mind M&E industry agendas are taken into account, the need for real-time interactive video streaming support should no longer be an impediment to meeting core goals.

The point was underscored by our appearance as a featured partner with two NAB exhibitors that provide solutions essential to moving production to the cloud: Nomad Media, supplier of an advanced asset management platform widely used in cloud migration, and Zixi, whose Software-Defined Video Platform (SDVP) has long been a mainstay in broadcast-quality video transport over IP networks that link dispersed participants in cloud production and connect output from production and playout centers to producers’ distribution affiliates.

We teamed with both partners in their booth spaces to show how customers can bring these industry-leading content management and transport capabilities into play with Red5 to support real-time video connectivity across all end points in production and postproduction workflows and in playout to distributors. The demonstrations also featured the compact, high-capacity encoding platform from Videon, another Red5 partner, which provided transcoding support for content ingested into Red5 streams.

We also demonstrated how our TrueTime Multiview solution makes it possible for anyone in distributed production workflows to instantly switch from one full screen rendering to another across any array of thumbnail displays, which we exemplified using a 4×6 matrix of stored content and live camera feeds from inside and outside the exhibit hall. This and many other real-time streaming capabilities, such as using advances in AI to find and pull data from any cloud storage location or linking remote commentators and influencers to live productions, are table stakes in the shift to cloud production.

Solidifying the Case for Live Cloud Production

When it comes to fully executing on the promise of cloud-based live content production, arguably the most dominant item on the industry’s to-do list, Red5’s inauguration of Red5 Cloud as a curated real-time platform service has opened what growing numbers of users and suppliers recognize as the most viable way forward. Red5 Cloud has significantly cut the costs of deploying XDN architecture by eliminating the time and staff resources that would otherwise be expended on calculating, configuring, and setting up the streaming infrastructure.

Red5 Cloud employs the intelligent orchestration processes of the XDN Stream Manager to execute those tasks in response to users’ input of basic use case requirements, such as how many streaming instances are involved, optimum bitrates, and number of end users. This gives customers immediate access to real-time streaming infrastructure matched to their needs with the ability to trigger adjustments in their set-ups with simple inputs on their Red5 Cloud dashboards.

With support for end-to-end video connectivity at sub-400ms speeds across all locations, Red5 Cloud removes any lingering doubts about the cost effectiveness of capturing the full benefits of real-time streaming. The cost/benefit case is especially strong in production and playout where fewer XDN points of ingestion (Origin Nodes), egress (Edge Nodes) and, if needed, midway points (Relay Nodes) are involved compared to the mass market distribution leg linking millions of end users. Notably, the real-time playout advantage applies to long-haul scenarios where producers of live content want to capitalize on IP transport as a low-cost alternative to satellite and dedicated fiber.

From the production perspective, the automated approach to capitalizing on the capabilities of XDN architecture is a much-needed breakthrough for sports and other live production scenarios. Without a cost-effective way to utilize real-time connectivity between cameras capturing on-field action and studios or other remote production locations, it’s been impossible for event producers to eliminate the need for big on-site production set-ups.

Now, with Red5 Cloud in play, that’s no longer the case. The migration to cloud production in sports is getting underway with a top-tier professional league we can’t name. They’re starting with activation of XDN architecture to support real-time sharing of archived video in multiple use cases tied to internal operations.

A Winning Cost/Benefit Equation at Any Scale

At the same time, that league and many other sports organizations have been working with Red5 to enable public in-venue user experiences, including multiviewing of action from different camera angles, user-activated replays on those feeds, and personalized feature enhancements. These producers are discovering that, with Red5’s real-time streaming support, they can deliver these benefits free of the audio echo effect that turns viewers off when sound from on-site speakers reaches their smartphones ahead of the streamed audio.

Critically, while real-time invigoration of on-site viewing experiences is important in its own right, the story doesn’t end there. We now know that the growing embrace of this strategy by service providers and sports producers is also serving as a launchpad for extensions of these viewing experiences to the entire online population.

Indeed, multiviewing as a live-event streaming feature was a major theme whenever sports services were discussed at the NAB Show. While various attempts to finally bring this long-envisioned benefit to end users were touted in Las Vegas, a close look at what can and can’t be done revealed that none measure up to the instantaneous access to an unbounded number of camera feeds enabled by TrueTime Multiview.

Another major NAB Show theme tied to the need for real-time interactive streaming centered on service providers’ efforts to socialize viewing experiences beyond the limitations imposed by text-based watch parties. Here again, the discussion was dominated by sports, where producers are searching for ways to increase Z-generation engagement.  

This and other use cases calling for advances in streaming support were explored at an exhibit hall conference session titled “Getting Real about Real-Time Streaming,” which drew a wide range of executives whose businesses depend on the prevailing one-way HTTP-based streaming infrastructure. The session marked the first time an NAB Show event has been devoted to this topic, but it won’t be the last. We understand plans are taking shape for a deeper dive into the implications of real-time streaming at next year’s show.

At this event, amid a diversity of opinions on practical and technical considerations, there was unanimous agreement on what session leaders described as baseline requirements for real-time streaming, including:

  • end-to-end latency at or below 500ms in all directions at any distance,
  • scalability to at least one million simultaneous viewers with support for interactive video engagement on the part of at least 1,000 audience members at any one time,
  • support for server-side ad insertions, feature personalization and other current streaming functionalities,
  • support for features and video interactivity beyond the reach of conventional streaming,
  • TV-caliber A/V quality with support for robust security (DRM, watermarking).

Nor was there any disagreement on the assertion that there’s no way at this point in time to meet these requirements without reliance on WebRTC-based streaming.

This year’s NAB Show made clear that the priorities on M&E strategists’ minds add up to a growing market demand for new streaming infrastructure. It came as no surprise that our message touting Red5 Cloud as a WebRTC-based platform that meets or surpasses all requirements was well received.

We’ll be making sure that message gets out to the fullest extent possible in the months ahead. To learn more about the tools and solutions available from Red5, set up a call or contact us at info@red5.net.

New WebRTC Tools Break Barriers to Multi-Camera Live Streaming
https://www.red5.net/blog/webrtc-multi-camera-live-stream/ | Tue, 30 Apr 2024

Why is multi-camera live streaming still missing from the vast majority of sports productions?

The answer is simple: efforts to shoehorn multiple simultaneous camera feeds into existing live streaming operations don’t work. As a result, searches for better approaches to this much-needed enhancement have reached the mission-critical stage across a broad range of consumer, government, and enterprise use cases.

While, in the M&E realm, producers have aspired to make watching live events from multiple camera angles a part of the viewer experience going back to the early days of cable TV, nothing has taken hold even at this late date in the over-the-top (OTT) era. Nor is support for multi-camera live streaming doing much to facilitate the shift to dispersed collaboration in high-value content production.

And when it comes to capitalizing on what real-time multi-camera streaming can mean to emergency responses, crowd control, traffic management, military operations, and other non-entertainment applications, there’s no solution to be found with reliance on traditional approaches to managing surveillance feeds. This is where mission critical translates to matters of life and death.

The Red5 TrueTime Breakthrough to Live Multi-Camera Streaming

Fortunately, there’s every reason to expect that this anomalous state of affairs will soon end. For anyone contemplating how to address the need for multi-camera streaming flexibility in any of these use cases, the good news is there’s no longer any reason to be stymied by these dead ends.

In fact, that’s been the case for some time, but it’s truer now than ever.

Over the past few years Red5 has leveraged its Experience Delivery Network (XDN) architecture to help customers in many fields support end users’ seamless view changes across multiple camera angles delivered simultaneously in real time. Now we’ve streamlined the process with the introduction of TrueTime MultiView™ toolsets designed for three categories of multi-camera live streaming use cases: fan engagement, live production monitoring, and surveillance.

As part of a suite of recently introduced TrueTime tools, MultiView leverages the combination of standards and Red5 innovations best suited to optimum performance in each of these market segments, with SDKs that enable rapid yet highly customizable implementations of multi-camera live streaming in all the leading OS environments, including web, Android, iOS, macOS, Windows, and Linux. Adding to the application versatility, all the tools in the TrueTime suite rely on a common standards-based foundation with multi-OS compatibility, which allows customers to put them to use in any combination they deem appropriate to creating market-moving solutions and services.

With more to come, the other toolsets in the TrueTime lineup include:

  • TrueTime DataSync™ – By ensuring frame-accurate data synchronization with live real-time video streaming, DataSync enables data overlays that can be used to enhance images, embed key information, and add predictive modeling.
  • TrueTime Studio™ – This is a production tool that facilitates creation of interactive video content and the ability to bring in remote guests and other collaborators in real-time streaming scenarios.
  • TrueTime WatchParty™ – Watch parties with the unlimited scalability and feature-rich functionalities enabled by XDN architecture are now easier to implement with this tool.

Live Multi-Camera Streaming in the Consumer Market

What a viable approach to multi-camera live streaming in the consumer market could mean to providers of streamed sports and other live events in their efforts to draw and retain audiences can’t be overstated. Such enhancements have become especially significant to engaging Z-generation users, whose comparatively lower interest in mainstream sports has become a major concern to rights holders.

The dimensions of the Gen Z drop-off in engagement were underscored by two surveys conducted by researcher Morning Consult. Findings in one case showed that 60% of adults born since the late ’90s watch live sports, compared to 72% of all adults, while the other survey found that only 53% of those Gen Z adults describe themselves as sports fans, compared to 69% of millennials.

Of course, multi-camera live streaming isn’t just a key element in the pursuit of Gen Z viewers; its appeal extends to people of all ages. In a 2023 survey of 3,000 U.S. sports fans aged 14 and over, Deloitte found strong demand for more immersive streamed sports viewing experiences, with over a third of respondents across all ages ranking control over multiple camera angles as one of the two most sought-after features, along with more advanced replay controls such as slow-motion activation. An earlier Verizon Media-commissioned survey of 5,000 sports fans in the U.S. and four European countries produced nearly identical results.

Yet support for multi-camera streaming is rare in live-stream sports and other productions, with occasional exceptions like NASCAR and Formula 1 races and some experimentation elsewhere. But major pro sports producers like Major League Baseball, the National Football League and the National Basketball Association that have worked hard to create innovative live streaming experiences have yet to make viewing from multiple camera angles widely available.

Here it’s important to distinguish between two scenarios: applications where a simultaneous display of thumbnail videos gives viewers full-screen access to entirely different live video streams, and applications where the options are different viewing angles provided by multiple cameras covering a single event. In the former case, a delay of a second or two in switching from one stream to another is tolerable, whereas a change of viewing angles within the same streamcast must occur without perceptible delay.

In either case, the problem with trying to support instantaneous access to multiple live streaming options on Hypertext Transfer Protocol (HTTP) streaming platforms stems from the fact that the only way to eliminate lag time is to deliver all the full-screen renderings together in a single stream. The resulting high levels of per-user bandwidth consumption can choke bandwidth availability across the local broadband service area, disrupting people’s quality of service irrespective of what they’re watching.
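To put rough numbers on it (the bitrates here are illustrative assumptions, not measurements from any particular deployment): a mosaic carrying six 1080p feeds encoded at about 5 Mbps each adds up to roughly 30 Mbps per viewer, whereas delivering one full-screen feed plus a thin thumbnail stream stays closer to 5–6 Mbps per viewer. Multiply that difference across a neighborhood of concurrent viewers and the strain on the local service area becomes obvious.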

But this isn’t the case when a service provider implements TrueTime MultiView for Fans on the XDN platform. In such instances, all the live video streams are ingested at XDN Origin Nodes and relayed directly or through intermediate Relay Nodes to intelligent Edge Nodes serving a given segment of the end user population, which varies in size depending on the proximity of each Edge Node to the service group. Users’ choices of camera angles or featured alternative programming are delivered over unicast live streams from the Edge Nodes at imperceptible latencies.

This real-time distribution allows whichever camera angle or separate game feed a viewer wants to access to be displayed in full-screen resolution as soon as the user clicks on one of the thumbnail videos that present the viewing options in an adjoining multi-screen window. These thumbnails, each streaming a low-resolution version of its feed at one frame per second, are packaged in a low-bitrate data stream that goes out to all session users with no meaningful impact on bandwidth consumption.

And unlike HTTP streaming-based multiview attempts that cap the options at two or three camera feeds to curtail bandwidth consumption, TrueTime MultiView for Fans places no limit on how many camera feeds can be supported. It’s just a matter of how many thumbnail videos a provider wants to squeeze into the MultiView selection space.
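For a concrete sense of the switching pattern, here is a minimal browser-side sketch built on standard WebRTC APIs and the WHEP handshake described later in this post. The endpoint URL, stream names, and element IDs are placeholders rather than the actual TrueTime MultiView SDK interface, and a production client would also need the ICE handling, authentication, and reconnection logic omitted here.

```typescript
// Minimal sketch: subscribe to one camera feed over WHEP and swap feeds when a
// thumbnail is clicked. URLs, stream names, and element IDs are hypothetical.
async function subscribeWhep(endpoint: string, video: HTMLVideoElement): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection();
  pc.addTransceiver('video', { direction: 'recvonly' });
  pc.addTransceiver('audio', { direction: 'recvonly' });
  pc.ontrack = (event) => {
    video.srcObject = event.streams[0]; // attach incoming media to the full-screen player
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // WHEP: POST the SDP offer; the SDP answer comes back in the response body.
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/sdp' },
    body: offer.sdp,
  });
  await pc.setRemoteDescription({ type: 'answer', sdp: await response.text() });
  return pc;
}

let current: RTCPeerConnection | null = null;

// Called when the viewer clicks one of the one-frame-per-second thumbnails.
async function switchCamera(streamName: string): Promise<void> {
  const video = document.getElementById('fullscreen-player') as HTMLVideoElement;
  current?.close(); // drop the previous unicast subscription
  current = await subscribeWhep(
    `https://edge.example.com/live/whep/endpoint/${streamName}`, // placeholder Edge Node URL
    video
  );
}
```

Because the previous subscription is simply closed and a new unicast stream is pulled from the nearest Edge Node, the switch completes as quickly as the WHEP negotiation allows, with no need to pre-deliver every full-resolution feed to the viewer.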

The audience-drawing benefits of multi-camera live streaming could also be introduced as a natural enhancement to in-venue smartphone viewing experiences on offer at an expanding array of sports centers worldwide. In the U.S., for example, Verizon, the leading provider of enhanced in-venue 5G services, now has in-venue 5G links operating in 25 NFL stadiums and 35 other sports and concert venues, according to the Groupe Speciale Mobile Association (GSMA).

There have been some attempts at supporting multi-camera live streaming over in-venue connections, both stateside and elsewhere. In Germany, for example, Vodafone is working with the Bundesliga, Germany’s largest soccer league, to deliver a multiview and replay app to Sky Sports customers attending league games. Other carriers experimenting with in-venue multi-camera streaming and other advanced features include Orange at a stadium testing site in France and U.K. MNO Three at Premier League games in London.

But a serious problem with delivering in-venue streamed viewing experiences, whether from one or multiple camera feeds, arises when audio from the venue’s sound systems reaches users ahead of the streamed audio, producing a disorienting echo effect. Red5 has been directly involved in tests of TrueTime MultiView over in-venue connections at recent major sports events which we’re not authorized to name.

These engagements have proved beyond any doubt that it’s now possible to offer in-venue multiviewing experiences in real-time sync with onsite sound systems. Stay tuned for what comes next as these trials move to commercial rollouts.

Multi-Camera Live Streaming in Production Operations

Meanwhile, there are other barriers imposed by conventional streaming that are holding things back in the evolving realm of production and postproduction. Here multi-camera streaming in real time is critical to the movement toward dispersed collaboration in live productions.

While the ability to switch from one camera angle to another has long been a part of the live TV production process, there’s now a need to lower live-content production costs by reducing workloads at event venues to camera crew operations while locating final production at core studios or dispersed locations. Moreover, there’s a need to include camera feeds from remotely positioned commentators and influencers in the live production process.

These requirements can’t be met without synchronized reception and shared workflow engagement in real time at all workstations wherever they might be. TrueTime MultiView for Production Monitoring makes this possible.

Here the primary difference from TrueTime MultiView for Fans is the inclusion of Red5’s Mixer Node technology in MultiView for Production Monitoring, which facilitates mixing any combination of audio and video feeds into the stream going out over the XDN infrastructure to end users. This gives producers collaborating in real time over any distance great latitude in determining what end users see from moment to moment, ranging from the content of a single A/V feed to split-screen displays to composites of multiple video streams or single image captures.

In cases where distributors make use of Red5’s transcoding support for delivering multiple bitrate profiles in emulation of the adaptive bitrate (ABR) approach to accommodating variations in bandwidth conditions, the Mixer output is fed into Red5’s Cauldron transcoder for multi-profile distribution. All of this is done while maintaining end-to-end latency at or below 400ms.

Multi-Camera Live Streaming in Surveillance Monitoring

As for exploiting the potential of synchronized multi-camera streaming in surveillance applications, there’s a need to assemble camera feeds across the broad sweep of unfolding emergencies, crowd scenes and traffic flows into perfectly synched matrices for real-time delivery to control centers. This applies as well to video feeds going from a battlefield to regional command centers.

Red5 has made it possible for surveillance operators in multiple use cases around the world to meet this challenge. For example, one of the latest video monitoring applications utilizing Red5’s real-time multi-directional XDN streaming architecture can be found in San Diego County, CA. There the sheriff’s department and partner agencies are operating a drone streaming command center where they can control and simultaneously view a virtually unlimited number of drone camera feeds through an easy-to-use interface provided through Red5 partner Nomad Media’s content management system.

While in most instances TrueTime MultiView for Surveillance relies on the XDN architecture’s ability to implement highly scalable uses of WebRTC, surveillance users can also take advantage of the fact that the Real-Time Streaming Protocol (RTSP) widely used in video surveillance cameras and smartphones delivers its media over the same Real-Time Transport Protocol (RTP) that underlies WebRTC. With no need to establish WebRTC connectivity, those cameras’ outputs can be fed directly into the Red5 Pro servers positioned at the multi-stream aggregation points.

These servers, consisting of Red5 software running on commodity appliances, ingest and package any number of camera feeds for synchronized distribution over XDN infrastructure to one or more monitoring posts, including any cloud locations where analytics engines are in play. Real-time performance can be achieved using the RTSP protocol end to end, provided that receiving devices are running a client player that supports the protocol.

If clients are not natively equipped to support RTSP, the XDN platform repackages content ingested from RTSP streams for distribution via WebRTC without adding latency. XDN infrastructure is also ideally suited to stream video from surveillance cameras that use Secure Reliable Transport (SRT), a UDP-based low-latency protocol that has gained traction as an open-source real-time alternative to RTSP for cameras streaming high-resolution video.

A Common Framework for Mixing and Matching TrueTime Applications

As the foregoing discussion makes clear, the different challenges associated with activating multi-camera live streaming in each use case category mean there are significant elements differentiating the TrueTime MultiView for Fans, Production Monitoring, and Surveillance toolsets. But they all rely on the unique capabilities that distinguish XDN architecture, not only from HTTP streaming architecture but also from other platforms that use WebRTC as the primary streaming mode.

WebRTC, the most commonly used XDN transport mode, is ideal because client-side support for the protocol has been implemented in all the major browsers, including Chrome, Edge, Firefox, Safari, and Opera, eliminating the need for plug-ins or purpose-built hardware. XDN architecture has been designed to overcome the scaling issues widely associated with WebRTC, enabling real-time streaming at any distance to any number of end users at end-to-end latencies below 400ms, and often below 200ms when transcontinental distances aren’t in play.

In addition, along with supporting real-time transport via RTSP and SRT as dictated by the types of devices in use, the XDN platform can ingest video formatted to other leading protocols used for video playout, including Zixi, Real-Time Messaging Protocol (RTMP), and MPEG-TS. RTMP and MPEG-TS streams can also be packaged for streaming on the RTP foundation, preserving the original encapsulations for clients that are compatible with those protocols but can’t tap browser support for WebRTC.

XDN instantiations can also be configured to hand off content for conventional streaming over HTTP Live Streaming (HLS) in the rare instances when there’s no other way to deliver the content to end user devices. And, as noted, XDN-based Cauldron transcoders can be used to replicate ABR profiles with the real-time streamed content, which applies to any XDN-supported transport protocol.

TrueTime MultiView, along with leveraging WebRTC as the transport foundation for building multiviewing applications in the consumer, production, and surveillance segments, takes advantage of other key standards to streamline implementations. These standards, which are used in all TrueTime toolsets, include:

WebRTC-HTTP Ingestion Protocol (WHIP) and WebRTC-HTTP Egress Protocol (WHEP) – These are the default modes used in TrueTime-enabled use cases to expedite, respectively, ingress and egress of content streams on the XDN platform. They greatly simplify mass scaling of WebRTC.

On the origination side, WHIP defines how to convey the Session Description Protocol (SDP) messaging that describes and sets up sessions allowing content streamed over WebRTC from individual devices to be ingested across a topology of media servers acting as relays to client receivers. On the distribution end, WHEP defines the SDP messaging that sets up the media server connections with recipient clients.
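To illustrate how lightweight the WHIP handshake is, here is a minimal publishing sketch using standard browser WebRTC APIs. The endpoint URL is a placeholder, and a real client would also handle ICE candidate gathering (or trickle ICE), authentication, and teardown of the returned session resource.

```typescript
// Minimal WHIP publish sketch: capture the camera, POST an SDP offer, apply the answer.
// The endpoint URL is hypothetical.
async function publishWhip(endpoint: string): Promise<RTCPeerConnection> {
  const media = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const pc = new RTCPeerConnection();
  media.getTracks().forEach((track) => pc.addTrack(track, media));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // WHIP: a single HTTP POST carries the offer; the answer comes back in the body.
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/sdp' },
    body: offer.sdp,
  });
  await pc.setRemoteDescription({ type: 'answer', sdp: await response.text() });

  // The Location header identifies the session resource, used later for DELETE (teardown).
  console.log('WHIP session resource:', response.headers.get('Location'));
  return pc;
}

// Example usage against a hypothetical XDN Origin Node ingest endpoint:
publishWhip('https://origin.example.com/live/whip/endpoint/camera1');
```

The WHEP exchange on the egress side mirrors this pattern, with the client sending a receive-only offer and attaching the returned tracks to a player, as sketched earlier in the fan-engagement section.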

Key-Length-Value (KLV) – KLV is a globally used SMPTE standard, activated in TrueTime DataSync to ensure synchronization between primary WebRTC-streamed content and ancillary data delivered over the standardized WebRTC data channel. KLV defines how the metadata describing each data element is formatted, thereby enabling automated, frame-accurate association of data with the relevant content.

Notably, with delivery of real-time video and metadata over a single connection, the automated tagging makes it easier to search and manage video and other repositories, including with the aid of AI, to ensure that time-sensitive contextual data is always displayed as intended with the video. This opens the way to an unlimited array of possibilities for building advanced features into real-time streaming applications, from personalization of user experiences and advertising in consumer services to telemetry streams that enrich collaborative capabilities in production, surveillance, and any other sphere of enterprise, institutional and government operations.
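To make the framing concrete, here is an illustrative sketch that unpacks generic KLV triplets (16-byte Universal Label key, BER-encoded length, value) from a data channel message. It assumes each message carries whole triplets and says nothing about the metadata dictionary that gives the keys meaning, which is application-specific; it is not drawn from the TrueTime DataSync SDK.

```typescript
// Generic KLV (SMPTE 336-style) framing: 16-byte key, BER length, then the value bytes.
interface KlvTriplet {
  key: Uint8Array;   // 16-byte Universal Label identifying the data element
  value: Uint8Array; // raw payload, interpreted per the relevant metadata dictionary
}

function parseKlv(buffer: ArrayBuffer): KlvTriplet[] {
  const bytes = new Uint8Array(buffer);
  const triplets: KlvTriplet[] = [];
  let offset = 0;

  while (offset + 17 <= bytes.length) {
    const key = bytes.slice(offset, offset + 16);
    offset += 16;

    // BER length: short form is a single byte; long form is 0x80 | n followed by n length bytes.
    let length = bytes[offset++];
    if (length & 0x80) {
      const numLengthBytes = length & 0x7f;
      length = 0;
      for (let i = 0; i < numLengthBytes; i++) {
        length = (length << 8) | bytes[offset++];
      }
    }

    triplets.push({ key, value: bytes.slice(offset, offset + length) });
    offset += length;
  }
  return triplets;
}

// Usage (channel setup omitted): set dataChannel.binaryType = 'arraybuffer', then
// dataChannel.onmessage = (e) => console.log(parseKlv(e.data as ArrayBuffer));
```

Because each triplet arrives frame-accurately alongside the video it describes, the receiving application can render overlays or log telemetry without a separate synchronization step.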

For the first time, it’s now possible to realize the full potential of multi-camera live streaming in all the use cases where its absence has become a growing source of concern. In fact, just what that full potential is has been greatly expanded with the introduction of TrueTime MultiView.

As immediate needs for live multiviewing capabilities drive the implementation of MultiView across multiple market segments, users of the technology will find they are well positioned to take advantage of XDN-based real-time multi-directional streaming for any use case at any scale over any distance. To learn more about the possibilities, contact us at info@red5.net or schedule a call.

The post New WebRTC Tools Break Barriers to Multi-Camera Live Streaming appeared first on Red5.

]]>
Server Release 12.4.0 https://www.red5.net/blog/server-release-12-4-0/ Tue, 09 Apr 2024 15:06:02 +0000 https://www.red5.net/?p=14225 Server Release 12.4.0 New Features Improvements Bug Fixes Not a Red5 customer yet? We can help you innovate with interactive streaming video. Reach out to info@red5.net or set up a call.

The post Server Release 12.4.0 appeared first on Red5.

]]>
Server Release 12.4.0

New Features

  • RTMP Pull

Improvements

  • Updated Java Spring Library
  • Chrome/Chromium WebRTC DTLS Elliptic Curve Support

Bug Fixes

  • Append recording for JSON metadata
  • Rare image freeze while recording

Not a Red5 customer yet? We can help you innovate with interactive streaming video. Reach out to info@red5.net or set up a call.

The post Server Release 12.4.0 appeared first on Red5.

]]>
Introducing AI-Powered Friends for TrueTime Watch Party: A Revolutionary Enhancement! https://www.red5.net/blog/introducing-ai-powered-friends-for-truetime-watch-party-a-revolutionary-enhancement/ Mon, 01 Apr 2024 09:25:00 +0000 https://www.red5.net/?p=14133 We’re thrilled to announce an extraordinary breakthrough at Red5, set to revolutionize your TrueTime Watch Party experience! Acknowledging feedback from our community, we identified a unique challenge: many users wanted to host larger watch parties for sporting events but found their guest lists a bit… lacking. But fret not! We’ve devised a sensational solution to… Continue reading Introducing AI-Powered Friends for TrueTime Watch Party: A Revolutionary Enhancement!

The post Introducing AI-Powered Friends for TrueTime Watch Party: A Revolutionary Enhancement! appeared first on Red5.

]]>
We’re thrilled to announce an extraordinary breakthrough at Red5, set to revolutionize your TrueTime Watch Party experience! Acknowledging feedback from our community, we identified a unique challenge: many users wanted to host larger watch parties for sporting events but found their guest lists a bit… lacking. But fret not! We’ve devised a sensational solution to ensure your watch parties are bustling with energy and excitement, regardless of your real-world social circle.

Meet Your New Virtual Companions!

Introducing our AI-based system, designed to infuse life into your watch parties by adding fully animated, realistic bots as friends. Yes, you read that right! These aren’t your average, run-of-the-mill bots. Our AI friends are crafted to be friendly, engaging, and unbelievably realistic – they might even outshine your real-life pals!

How We Did It

Leveraging the prowess of an advanced generative large language model (LLM), we’ve meticulously trained our AI to understand and participate in sports-related banter, react dynamically to live events, and provide insightful commentary, ensuring a vibrant and interactive viewing experience. These virtual companions are programmed to adapt to your interaction style, making your watch party experience uniquely personal and infinitely more enjoyable.

Breathing Life into AI with Live Video

The crux of making our AI friends realistic lies in the dynamic fusion of the LLM generative model with live video avatars. Each AI character is not just a static image but a fully animated entity that reacts and interacts in real-time. By leveraging cutting-edge animation and rendering techniques, these characters exhibit lifelike gestures and expressions, synchronized with their spoken responses, creating an illusion of genuine presence and engagement.

Real-Time Streaming with WebRTC

To ensure that these interactions occur in real time, we leveraged WebRTC and its ultra-low-latency video streaming capabilities. WebRTC enables us to stream the live video of the AI characters directly into the TrueTime Watch Party, ensuring that the virtual guests react and interact with near-zero delay, mirroring the immediacy of a real human participant.

Interactive Dialogue: From Audio to AI Response

The magic of interaction lies in the AI’s ability to respond aptly to the party attendees. 

Here’s how it works:

  • Audio Input Processing: As real participants speak, their audio is captured in real time on the Red5 Pro origin. This audio stream is then swiftly converted into text using advanced speech-to-text algorithms, ensuring that the essence and context of the conversation are preserved.
  • AI Processing: The converted text serves as input for the LLM, which then generates an appropriate response based on the dialogue’s context, the ongoing action in the watch party, and the AI character’s unique personality.
  • Voice Synthesis and Synchronization: The AI’s textual response is transformed into spoken word through a voice synthesis engine (we used our Cauldron media pipeline for this), matched with the character’s lip movements and facial expressions for a coherent and natural interaction. A simplified sketch of this loop follows below.
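Purely for illustration, here is a TypeScript-flavored sketch of that loop. Every helper named below is a hypothetical stand-in rather than a real Red5 or Cauldron API, and the production pipeline described in this post is implemented in C++.

```typescript
// Control-flow sketch only. speechToText, generateReply, synthesizeSpeech, and
// playIntoParty are hypothetical stand-ins, not real Red5 or Cauldron APIs.
declare function speechToText(audio: ArrayBuffer): Promise<string>;
declare function generateReply(input: { persona: string; context: string; userText: string }): Promise<string>;
declare function synthesizeSpeech(text: string, persona: string): Promise<ArrayBuffer>;
declare function playIntoParty(speech: ArrayBuffer): Promise<void>;

// One pass through the loop: a real attendee speaks, an AI friend answers.
async function handleUtterance(audioChunk: ArrayBuffer, persona: string, partyContext: string): Promise<void> {
  const transcript = await speechToText(audioChunk);                                           // 1. audio input processing
  const reply = await generateReply({ persona, context: partyContext, userText: transcript }); // 2. AI processing
  const speech = await synthesizeSpeech(reply, persona);                                       // 3. voice synthesis
  await playIntoParty(speech);                                                                 // lip-synced avatar goes out over WebRTC
}
```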

Fine-Tuning Latency for Seamless Interactions

Achieving a seamless interaction required meticulous optimization to minimize latency at every step of the process. Our team fine-tuned the entire pipeline—from audio capture, speech-to-text conversion, AI processing, to video streaming—ensuring that the latency was trimmed to the bare minimum (ranging from 20 to 80ms). The result is a conversational flow so smooth that participants forget they’re interacting with AI.

The integration of these technologies culminates in an experience where AI friends respond to live input with remarkable speed, maintaining the lively, dynamic atmosphere of a TrueTime Watch Party. Attendees can enjoy banter, insights, and reactions from their AI companions in real time, making each watch party an unforgettable event.

By harnessing the power of WebRTC and advanced AI, we’ve not only solved the challenge of sparse attendance at watch parties but also introduced a new dimension of interactivity and engagement, setting a new standard for social viewing experiences.

Open Source Magic

In the spirit of innovation and community, we’re excited to share that the entire codebase for this groundbreaking feature is available on GitHub, completely open-source! We believe in the power of collaboration and invite developers, enthusiasts, and dreamers alike to explore, modify, and enhance our AI friends, pushing the boundaries of what’s possible. You can find a link to the new branch and all the code here. Note that you will need a decent familiarity with C++ development to really dive in and change the AI engine and Cauldron Brews we created.

Solving the Loneliness Quotient

We recognized a common sentiment among TrueTime Watch Party users – the desire for more social interaction during their favorite live sporting events. Our solution? AI friends who are always available, always entertaining, and, let’s be honest, ridiculously good-looking. Now, you’ll never have to worry about sending out invites or facing low turnout. Your AI companions are just a click away, ready to make every event a memorable one.

Why They’re Better Than Your Regular Friends

  • Always Available: Your AI friends are never too busy, ensuring you have a full house for every watch party.
  • Sports Aficionados: They come pre-loaded with sports knowledge and enthusiasm, ready to dive into deep discussions or simply cheer along.
  • No Awkward Small Talk: Say goodbye to dull moments. Your AI friends are programmed for fun and engaging interactions.
  • They’re Model-Like: Yes, our AI friends are designed to be easy on the eyes, enhancing the aesthetic of your watch party.

Join The Revolution!

Ready to elevate your TrueTime Watch Party experience with our AI-powered friends? Dive into our GitHub repository, explore the code, and be among the first to welcome these virtual pals into your home. This is more than just an enhancement; it’s a new era for watch parties, blending the realms of technology, entertainment, and friendship in ways you’ve never imagined.

Remember, the future of social viewing is here, and it’s brighter (and more populated) than ever. Welcome to the party of tomorrow, today!

The post Introducing AI-Powered Friends for TrueTime Watch Party: A Revolutionary Enhancement! appeared first on Red5.

]]>