When discussing latency, a series of questions quickly arises. Are we talking about the latency of a camera's output or the end-to-end latency of a system? The latency when outputting a compressed or an uncompressed signal? Should latency be expressed in frames or in milliseconds? And so on. As explained below, there are various factors (hardware, software, signal type, protocol, system design...) that make the latency unique for each system. But let's start from the beginning:


What is latency?

In this context, latency is the time a video signal needs to travel through part of, or the whole of, the chain of devices handling that signal.

Some people consider a complete end-to-end video signal chain with less than 1 second of delay to be low latency; in our world, that is an eternity. At the other end of the scale there is ultra-low latency, where the delay is barely perceptible to the human senses.

Whereas some applications, such as live video at events, require ultra-low latency so that the on-site displays and audio match what is happening on stage, streaming the same concert to TV or YouTube is not subject to nearly the same latency requirements.

Whatever latency your application requires, feel free to reach out to our support team at support@avonic.com for advice on how to achieve the best result for your specific requirements.


Latency requirements are dictated by the demands of each individual application.


Latency Vs. Quality Vs. Bandwidth Vs. Distance

Generally speaking, latency and quality work against each other. The larger the distance you want to cover and the lower the bandwidth of the connection, the more latency or quality you will need to sacrifice. For example, say you want to cover a large distance and don't have a high-bandwidth connection: you can either opt for low latency, at the risk of packet loss leading to a loss in quality, or you can build in a long buffer at the receiving end, giving the decoder time to recover lost packets and thereby maintaining quality at the cost of latency.
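As a rough, hypothetical sketch of that trade-off: recovering a lost packet by retransmission costs at least one network round trip, so the receive buffer has to be at least that long. The round-trip time and number of retry attempts below are assumptions for illustration, not figures for any particular link.

# Minimal sketch: receive-side buffering needed to allow lost packets to be
# retransmitted before their playout deadline. All values are assumptions.

def min_buffer_ms(rtt_ms, retransmit_attempts, safety_margin_ms=10.0):
    # Each retransmission attempt costs roughly one round trip.
    return rtt_ms * retransmit_attempts + safety_margin_ms

# Example: a 40 ms round trip with room for two retransmission attempts
print(min_buffer_ms(rtt_ms=40, retransmit_attempts=2))  # 90.0 ms of added latency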



Every video processing device has its own latency.

Starting with the camera itself, there are several stages (sensor, image processing on the chipset...) at which some latency is added. Generally speaking, this results in a latency of 2 to 3 frames between what happens in the real world and what comes out of the camera's output. Cameras with even lower latency do exist, but they tend to be very expensive.


Then, a device like a TV screen can add between 1 and 2 frames of latency, depending not only on the model but also on its configuration (game mode, for instance, usually results in less input lag). PC monitors, by contrast, have much less latency, mostly between 2 and 8 ms, as they usually do not perform the kind of image processing that TV screens do. (Note that most TV models do offer lower-latency (<10 ms) options in their settings; read the documentation thoroughly to get the best result in this regard.)


Every millisecond counts: choose video processing devices with low latency and minimize the total system delay.

In general, the fewer devices and processing steps (video mixer, encoder/decoder, production...) between the two ends of the chain, the less latency your signal will accumulate. But there are still numerous other variables that can hurt latency, such as processors, cabling, and signal types (encoded/compressed IP streaming signals traditionally have more latency than uncompressed HDMI/SDI signals), which also depends on the protocol used and how well it is implemented.
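As a back-of-the-envelope illustration, the total delay is simply the sum of the per-stage delays. The stage values below are hypothetical assumptions in line with the orders of magnitude mentioned in this article, not measurements of any specific product.

# Back-of-the-envelope total latency for a simple chain.
# All per-stage values are hypothetical assumptions, not measured figures.

FRAME_RATE = 60                    # frames per second
MS_PER_FRAME = 1000 / FRAME_RATE   # about 16.67 ms at 60 fps

chain_ms = {
    "camera (sensor + processing, ~3 frames)": 3 * MS_PER_FRAME,
    "encode + network + decode (IP stream)": 60.0,
    "display input lag (TV, game mode off)": 40.0,
}

total = sum(chain_ms.values())
for stage, ms in chain_ms.items():
    print(f"{stage}: {ms:.1f} ms")
print(f"total end-to-end: {total:.1f} ms (~{total / MS_PER_FRAME:.1f} frames at {FRAME_RATE} fps)")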


Even when displaying the same signal, software and decoding hardware can show noticeably different performance. VLC, for example, adds a lot of latency as a decoder, 1 second by default, although this can be reduced to about 250 ms with some adjustments; even then, VLC is far from a perfect low-latency decoder.
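One such adjustment, as a sketch, is lowering VLC's network cache when launching it. The stream address below is a placeholder, and 250 ms mirrors the ballpark figure above; too small a cache may cause stuttering on less stable networks.

# Sketch: launch VLC with a reduced network cache (value in milliseconds).
# The RTSP address is a placeholder for your own stream.
import subprocess

stream_url = "rtsp://192.168.1.100/live/main"   # placeholder address
subprocess.run(["vlc", "--network-caching=250", stream_url])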

It is also good to remember that not only can VLC itself (which is FFmpeg based) be tuned, but the decoding hardware is a massive factor when it comes to latency; a decent computer with a good NVIDIA graphics card can make a big difference to how quickly the video output is decoded and displayed.

This is at least a good example of how, in some cases, you can tune your devices and software to minimize latency. Keep in mind that even online platforms like YouTube will add their own latency to your stream.


Frames or milliseconds?

Make sure not to confuse the two when reading about latency expressed in frames or in milliseconds. The duration of one frame depends on the frame rate: time per frame in milliseconds = 1000 / frames per second.


Frames per second    Time per frame in milliseconds
60                   16.67
50                   20
30                   33.33
25                   40
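The conversion behind this table is simple enough to script; a minimal sketch:

# Convert a frame rate to the duration of one frame, and a latency
# expressed in frames to milliseconds.

def ms_per_frame(fps):
    return 1000.0 / fps

def frames_to_ms(frames, fps):
    return frames * ms_per_frame(fps)

for fps in (60, 50, 30, 25):
    print(f"{fps} fps -> {ms_per_frame(fps):.2f} ms per frame")

print(frames_to_ms(3, 60))   # e.g. 3 frames of camera latency at 60 fps = 50.0 ms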


IP streaming

Choices like the codec (H.264, H.265, MJPEG), the transport/streaming protocol (RTSP, RTMP, SRT, NDI, multicast...) and especially the network infrastructure make the biggest difference to overall performance, but some camera settings also help slightly, such as using the full RTP packet size (limiting the number of packets needed for each frame) and limiting the amount of time between I-frames.
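Camera web interfaces expose these settings in their own way; purely as a software analogue, here is a sketch of the equivalent knobs on an FFmpeg/libx264 encoder. The input file, destination address and exact values are assumptions for illustration.

# Sketch: low-latency encoder settings on FFmpeg/libx264, as a software
# analogue of the camera settings discussed above. Values are assumptions.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "input.mp4",                           # placeholder source
    "-c:v", "libx264",
    "-preset", "veryfast",
    "-tune", "zerolatency",                      # drop look-ahead/B-frame buffering
    "-g", "30",                                  # one I-frame every 30 frames
    "-f", "mpegts", "udp://192.168.1.50:5000",   # placeholder destination
])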

But everything stands or falls with the use of the right decoder. A well-written software decoder that uses the right hardware (graphics card) can decode almost without latency (roughly 20-40 ms). Add to that the latency of the camera and the HDMI input lag of the screen (good monitors roughly 5-8 ms, TVs up to 30-50 ms). Modern hardware decoders are even better. Please note that H.264/H.265 are NOT slow codecs if implemented in the right way; an example of this is the in-room IP viewing in our NATO meeting case study.
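As a sketch of how much decoder-side tuning matters, FFmpeg's bundled player can be told to skip most of its buffering. The stream address is a placeholder, and the result still depends heavily on the hardware behind it.

# Sketch: ffplay with most of its input buffering disabled.
# The RTSP address is a placeholder for your own stream.
import subprocess

subprocess.run([
    "ffplay",
    "-fflags", "nobuffer",               # do not build up an input buffer
    "-flags", "low_delay",               # ask the decoder to minimise internal delay
    "-framedrop",                        # drop late frames instead of letting delay grow
    "rtsp://192.168.1.100/live/main",    # placeholder address
])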


Avonic cameras

Our CM70-IP, CM71-IP and CM73-IP cameras can achieve latency as low as 50-70 ms (3-4 frames) on the HDMI/SDI outputs and 80-100 ms (5-7 frames) on the IP outputs. This support for low-latency IP streaming (with a bitrate of up to 40 Mbit/s) makes them a perfect fit for, among other things, large conferencing applications (with partners such as Bosch, Arbor media, MVI, Televic, etc.).

Avonic cameras also support SRT (Secure Reliable Transport), which offers better latency than RTMP (the SRT latency buffer can be configured from 200 ms to 8000 ms, depending on network conditions and distance, to allow for packet retransmission). An even more efficient option is NDI, which optimises latency between the camera and third-party devices such as video mixers.
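As a hypothetical sketch of picking a value within that 200-8000 ms range: SRT needs enough buffer for retransmissions, and a multiple of the measured round-trip time is a commonly cited starting point. The multiplier and RTT values below are assumptions, not an Avonic recommendation; measure your own link.

# Sketch: choose an SRT latency buffer within the 200-8000 ms range above,
# based on a multiple of the measured round-trip time. Values are assumptions.

def suggested_srt_latency_ms(rtt_ms, multiplier=4):
    return int(min(max(multiplier * rtt_ms, 200), 8000))

print(suggested_srt_latency_ms(rtt_ms=35))    # short link -> clamped to the 200 ms floor
print(suggested_srt_latency_ms(rtt_ms=180))   # long link  -> 720 ms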

But it is not always all about latency: in applications like lecture capture, for instance, latency is not an issue, whereas storage optimization requirements do have to be met. To be continued...


For full details on the latency of Avonic cameras, we have designed and executed thorough latency testing to ensure client needs are met. You can download the full latency report here.