Whether you are building video-on-demand or real-time video, audio-video synchronization is an essential part of the work.
Contents
1. Audio-Video Synchronization Principles
2. VOD and Live-Streaming Players
3. Real-Time Video
4. WebRTC Audio-Video Sync Source Code Analysis
5. Summary
1. Audio-Video Synchronization Principles
Generally speaking, audio-video synchronization means synchronizing video to audio. When rendering, each video frame's timestamp is compared with the audio timestamp to decide whether the frame should be rendered immediately or held back. For example, suppose there is an audio sequence with timestamps A(0, 20, 40, 60, 80, 100, 120, ...) and a video sequence V(0, 40, 80, 120, ...). Synchronization proceeds as follows:
1) Take an audio frame A(0) and play it. Take a video frame V(0); its timestamp equals the audio timestamp, so the frame is rendered immediately.
2) Take an audio frame A(20) and play it. Take a video frame V(40); its timestamp is greater than the audio timestamp, so the video frame is too early and must wait.
3) Take an audio frame A(40) and play it. Take the next video frame, which is still V(40) from above; its timestamp equals the audio timestamp (in practice they are rarely exactly equal; if the absolute difference is within one frame interval, the timestamps can be treated as equal), so the frame is rendered immediately.
For video players and real-time video alike, the synchronization principle is exactly as described above: there is no escaping timestamp alignment, although the implementations may differ.
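The steps above can be sketched as a per-frame decision function. This is only an illustrative sketch; the names SyncAction and decide_render are made up for this example and are not part of any real player API:

```cpp
#include <cassert>
#include <cstdint>

// Possible outcomes when a video frame is compared against the audio clock.
enum class SyncAction { kRenderNow, kWait, kDrop };

// audio_ts_ms: timestamp of the audio frame currently being played.
// video_ts_ms: timestamp of the next video frame waiting to be rendered.
// tolerance_ms: timestamps whose difference is within this window are
// treated as equal (the article suggests up to one frame interval).
SyncAction decide_render(int64_t audio_ts_ms,
                         int64_t video_ts_ms,
                         int64_t tolerance_ms) {
  const int64_t diff = video_ts_ms - audio_ts_ms;
  const int64_t abs_diff = diff >= 0 ? diff : -diff;
  if (abs_diff <= tolerance_ms)
    return SyncAction::kRenderNow;  // Close enough: render immediately.
  if (diff > 0)
    return SyncAction::kWait;       // Video is early: hold the frame.
  return SyncAction::kDrop;         // Video is late: drop to catch up.
}
```

With a 10 ms tolerance, the A/V sequences from the example produce render at A(0)/V(0), wait at A(20)/V(40), and render again at A(40)/V(40).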
2. VOD and Live-Streaming Players
Whether you are new to video players or have years of audio/video development experience, ffplay.c in the ffmpeg source tree is a good reference for understanding audio-video synchronization. This article will not expand on it.
3. Real-Time Video
WeChat video calls and video conferencing are everyday examples of real-time video. The latency from capture to the remote viewer is generally at most around 400 ms. For faster transmission, UDP is usually chosen. But UDP is unreliable: packets are easily lost or reordered, so retransmission and reordering logic must be added on top of it, which introduces a jitter buffer. The synchronization principle for real-time video is still the one described in section 1; the implementation simply works by controlling the audio and video jitter buffers.
WebRTC is the best-known framework for real-time video, and this article analyzes the principle of audio-video synchronization against the WebRTC source code.
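To make the jitter buffer's reordering role concrete, here is a minimal sketch of a buffer that accepts packets in arrival order and releases them in sequence-number order. This is not WebRTC's actual NetEq or frame-buffer code; the class and method names are illustrative:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <optional>
#include <vector>

// Toy jitter buffer: packets may arrive out of order over UDP, but are
// popped strictly in sequence-number order.
class JitterBuffer {
 public:
  void Insert(uint16_t seq, std::vector<uint8_t> payload) {
    buffer_.emplace(seq, std::move(payload));
  }

  // Returns the next in-order packet, or nullopt if it has not arrived
  // yet (in a real system this gap would trigger a retransmission request).
  std::optional<std::vector<uint8_t>> PopNext() {
    auto it = buffer_.find(next_seq_);
    if (it == buffer_.end())
      return std::nullopt;
    std::vector<uint8_t> payload = std::move(it->second);
    buffer_.erase(it);
    ++next_seq_;
    return payload;
  }

 private:
  uint16_t next_seq_ = 0;
  std::map<uint16_t, std::vector<uint8_t>> buffer_;
};
```

If packet 1 arrives before packet 0, PopNext() returns nothing until packet 0 shows up, then releases both in order.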
4. WebRTC Audio-Video Sync Source Code Analysis
In the WebRTC source tree, video/stream_synchronization.cc implements audio-video synchronization:
void RtpStreamsSynchronizer::Process() {
  RTC_DCHECK_RUN_ON(&process_thread_checker_);
  last_sync_time_ = rtc::TimeNanos();
  rtc::CritScope lock(&crit_);
  if (!syncable_audio_) {
    return;
  }
  RTC_DCHECK(sync_.get());

  absl::optional<Syncable::Info> audio_info = syncable_audio_->GetInfo();
  if (!audio_info || !UpdateMeasurements(&audio_measurement_, *audio_info)) {
    return;
  }

  int64_t last_video_receive_ms = video_measurement_.latest_receive_time_ms;
  absl::optional<Syncable::Info> video_info = syncable_video_->GetInfo();
  if (!video_info || !UpdateMeasurements(&video_measurement_, *video_info)) {
    return;
  }

  if (last_video_receive_ms == video_measurement_.latest_receive_time_ms) {
    // No new video packet has been received since last update.
    return;
  }

  int relative_delay_ms;
  // Calculate how much later or earlier the audio stream is compared to video.
  if (!sync_->ComputeRelativeDelay(audio_measurement_, video_measurement_,
                                   &relative_delay_ms)) {
    return;
  }

  TRACE_COUNTER1("webrtc", "SyncCurrentVideoDelay", video_info->current_delay_ms);
  TRACE_COUNTER1("webrtc", "SyncCurrentAudioDelay", audio_info->current_delay_ms);
  TRACE_COUNTER1("webrtc", "SyncRelativeDelay", relative_delay_ms);

  int target_audio_delay_ms = 0;
  int target_video_delay_ms = video_info->current_delay_ms;
  // Calculate the necessary extra audio delay and desired total video
  // delay to get the streams in sync.
  if (!sync_->ComputeDelays(relative_delay_ms, audio_info->current_delay_ms,
                            &target_audio_delay_ms, &target_video_delay_ms)) {
    return;
  }

  syncable_audio_->SetMinimumPlayoutDelay(target_audio_delay_ms);
  syncable_video_->SetMinimumPlayoutDelay(target_video_delay_ms);
}
To build a good mental model of WebRTC's synchronization, let us first look at the last two lines of the Process method.
syncable_audio_->SetMinimumPlayoutDelay(target_audio_delay_ms);
syncable_video_->SetMinimumPlayoutDelay(target_video_delay_ms);
SetMinimumPlayoutDelay passes a minimum playout delay (target_xxx_delay_ms) all the way down to the audio or video jitter buffer, telling it: every frame rendered from now on must be delayed by at least target_xxx_delay_ms before it is output, until a new value is passed down. Note that in the latest WebRTC code, the jitter buffer has a new implementation in modules/video_coding/frame_buffer2.cc. Also note that target_xxx_delay_ms is only a lower bound: the real delay may be slightly larger, because the jitter buffer also computes a current_delay, which is the sum of the jitter delay, the render delay, and the required decode time. The total jitter buffer delay is therefore:
int actual_delay = std::max(current_delay_ms_, min_playout_delay_ms_);
The internals of the jitter buffer are beyond the scope of this article.
Coming back to the main question: given SetMinimumPlayoutDelay, how is audio-video sync controlled? The principle is:
1) If audio plays ahead of video, increase target_audio_delay_ms or decrease target_video_delay_ms;
2) If audio plays behind video, decrease target_audio_delay_ms or increase target_video_delay_ms;
3) If audio and video are in sync, make no adjustment.
To see how WebRTC decides which stream is ahead, and how it adjusts target_audio_delay_ms and target_video_delay_ms, let us continue with the code.
absl::optional<Syncable::Info> audio_info = syncable_audio_->GetInfo();
if (!audio_info || !UpdateMeasurements(&audio_measurement_, *audio_info)) {
  return;
}

int64_t last_video_receive_ms = video_measurement_.latest_receive_time_ms;
absl::optional<Syncable::Info> video_info = syncable_video_->GetInfo();
if (!video_info || !UpdateMeasurements(&video_measurement_, *video_info)) {
  return;
}

if (last_video_receive_ms == video_measurement_.latest_receive_time_ms) {
  // No new video packet has been received since last update.
  return;
}
UpdateMeasurements records the timestamp of the most recently received packet (latest_timestamp) and its arrival time (latest_receive_time_ms), for audio and video respectively in audio_measurement_ and video_measurement_.
Next,
int relative_delay_ms;
// Calculate how much later or earlier the audio stream is compared to video.
if (!sync_->ComputeRelativeDelay(audio_measurement_, video_measurement_,
                                 &relative_delay_ms)) {
  return;
}
ComputeRelativeDelay calculates relative_delay_ms: how many milliseconds earlier the audio arrived than the video over the network.
// Positive diff means that video_measurement is behind audio_measurement.
// relative_delay_ms means A - V.
*relative_delay_ms = video_measurement.latest_receive_time_ms -
                     audio_measurement.latest_receive_time_ms -
                     (video_last_capture_time_ms - audio_last_capture_time_ms);
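As a worked example of this formula, assume an audio and a video packet were captured at the same instant, the audio packet spent 100 ms in flight and the video packet 150 ms; the audio then arrived 50 ms earlier relative to video. The sketch below is an illustrative standalone function (not the WebRTC method itself) implementing the same arithmetic:

```cpp
#include <cassert>
#include <cstdint>

// Positive result: the video packet spent more time in flight than the
// audio packet, i.e. audio arrived relatively early.
int64_t ComputeRelativeDelayMs(int64_t audio_receive_ms,
                               int64_t audio_capture_ms,
                               int64_t video_receive_ms,
                               int64_t video_capture_ms) {
  return (video_receive_ms - audio_receive_ms) -
         (video_capture_ms - audio_capture_ms);
}
```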
The step above only computed how much earlier the audio packets arrived than the video packets over the network (the variable is named "delay", but it is really a head start). Next, target_audio_delay_ms and target_video_delay_ms are computed.
int target_audio_delay_ms = 0;
// Initialized to the video's current target delay; this value becomes
// current_video_delay_ms inside ComputeDelays.
int target_video_delay_ms = video_info->current_delay_ms;
// Calculate the necessary extra audio delay and desired total video
// delay to get the streams in sync.
if (!sync_->ComputeDelays(relative_delay_ms, audio_info->current_delay_ms,
                          &target_audio_delay_ms, &target_video_delay_ms)) {
  return;
}
ComputeDelays combines relative_delay_ms with the current audio and video delays to compute target_audio_delay_ms and target_video_delay_ms.
bool StreamSynchronization::ComputeDelays(int relative_delay_ms,
                                          int current_audio_delay_ms,
                                          int* total_audio_delay_target_ms,
                                          int* total_video_delay_target_ms) {
  assert(total_audio_delay_target_ms && total_video_delay_target_ms);

  int current_video_delay_ms = *total_video_delay_target_ms;
  RTC_LOG(LS_VERBOSE) << "Audio delay: " << current_audio_delay_ms
                      << " current diff: " << relative_delay_ms
                      << " for stream " << audio_stream_id_;
  // Calculate the difference between the lowest possible video delay and
  // the current audio delay.
  /* The playout-time difference between audio and video can be derived as
   * follows; a value greater than 0 means audio plays out ahead of video,
   * less than 0 means audio plays out behind video:
   *   A_playout_ts = relative_delay_ms - current_audio_delay_ms
   *   V_playout_ts = -current_video_delay_ms
   *   A-V_current_diff_ms = A_playout_ts - V_playout_ts
   */
  int current_diff_ms =
      current_video_delay_ms - current_audio_delay_ms + relative_delay_ms;

  // Smooth the value of current_diff_ms.
  avg_diff_ms_ =
      ((kFilterLength - 1) * avg_diff_ms_ + current_diff_ms) / kFilterLength;
  if (abs(avg_diff_ms_) < kMinDeltaMs) {
    // Don't adjust if the diff is within our margin.
    return false;
  }

  // Make sure we don't move too fast.
  int diff_ms = avg_diff_ms_ / 2;
  diff_ms = std::min(diff_ms, kMaxChangeMs);
  diff_ms = std::max(diff_ms, -kMaxChangeMs);

  // Reset the average after a move to prevent overshooting reaction.
  avg_diff_ms_ = 0;

  if (diff_ms > 0) {
    // The minimum video delay is longer than the current audio delay.
    // We need to decrease extra video delay, or add extra audio delay.
    if (channel_delay_.extra_video_delay_ms > base_target_delay_ms_) {
      // We have extra delay added to ViE. Reduce this delay before adding
      // extra delay to VoE.
      channel_delay_.extra_video_delay_ms -= diff_ms;
      channel_delay_.extra_audio_delay_ms = base_target_delay_ms_;
    } else {  // channel_delay_.extra_video_delay_ms > 0
      // We have no extra video delay to remove, increase the audio delay.
      channel_delay_.extra_audio_delay_ms += diff_ms;
      channel_delay_.extra_video_delay_ms = base_target_delay_ms_;
    }
  } else {  // if (diff_ms > 0)
    // The video delay is lower than the current audio delay.
    // We need to decrease extra audio delay, or add extra video delay.
    if (channel_delay_.extra_audio_delay_ms > base_target_delay_ms_) {
      // We have extra delay in VoiceEngine.
      // Start with decreasing the voice delay.
      // Note: diff_ms is negative; add the negative difference.
      channel_delay_.extra_audio_delay_ms += diff_ms;
      channel_delay_.extra_video_delay_ms = base_target_delay_ms_;
    } else {  // channel_delay_.extra_audio_delay_ms > base_target_delay_ms_
      // We have no extra delay in VoiceEngine, increase the video delay.
      // Note: diff_ms is negative; subtract the negative difference.
      channel_delay_.extra_video_delay_ms -= diff_ms;  // X - (-Y) = X + Y.
      channel_delay_.extra_audio_delay_ms = base_target_delay_ms_;
    }
  }

  // Make sure that video is never below our target.
  channel_delay_.extra_video_delay_ms =
      std::max(channel_delay_.extra_video_delay_ms, base_target_delay_ms_);

  int new_video_delay_ms;
  if (channel_delay_.extra_video_delay_ms > base_target_delay_ms_) {
    new_video_delay_ms = channel_delay_.extra_video_delay_ms;
  } else {
    // No change to the extra video delay. We are changing audio and we only
    // allow to change one at the time.
    new_video_delay_ms = channel_delay_.last_video_delay_ms;
  }

  // Make sure that we don't go below the extra video delay.
  new_video_delay_ms =
      std::max(new_video_delay_ms, channel_delay_.extra_video_delay_ms);

  // Verify we don't go above the maximum allowed video delay.
  new_video_delay_ms =
      std::min(new_video_delay_ms, base_target_delay_ms_ + kMaxDeltaDelayMs);

  int new_audio_delay_ms;
  if (channel_delay_.extra_audio_delay_ms > base_target_delay_ms_) {
    new_audio_delay_ms = channel_delay_.extra_audio_delay_ms;
  } else {
    // No change to the audio delay. We are changing video and we only
    // allow to change one at the time.
    new_audio_delay_ms = channel_delay_.last_audio_delay_ms;
  }

  // Make sure that we don't go below the extra audio delay.
  new_audio_delay_ms =
      std::max(new_audio_delay_ms, channel_delay_.extra_audio_delay_ms);

  // Verify we don't go above the maximum allowed audio delay.
  new_audio_delay_ms =
      std::min(new_audio_delay_ms, base_target_delay_ms_ + kMaxDeltaDelayMs);

  // Remember our last audio and video delays.
  channel_delay_.last_video_delay_ms = new_video_delay_ms;
  channel_delay_.last_audio_delay_ms = new_audio_delay_ms;

  RTC_LOG(LS_VERBOSE) << "Sync video delay " << new_video_delay_ms
                      << " for video stream " << video_stream_id_
                      << " and audio delay "
                      << channel_delay_.extra_audio_delay_ms
                      << " for audio stream " << audio_stream_id_;

  // Return values.
  *total_video_delay_target_ms = new_video_delay_ms;
  *total_audio_delay_target_ms = new_audio_delay_ms;
  return true;
}
The code above combines relative_delay_ms (how much earlier audio arrives over the network than video) with each stream's jitter buffer delay to compute current_diff_ms, the relative playout difference between audio and video, which decides which stream is ahead. When current_diff_ms > 0, audio plays out ahead of video; in other words video is delayed more than audio, so we can decrease the video delay or increase the audio delay. When current_diff_ms < 0, video plays out ahead of audio; in other words video is delayed less than audio, so we can decrease the audio delay or increase the video delay.
A few variables deserve explanation:
base_target_delay_ms_: the minimum required delay; both audio and video are delayed by at least this much. It can be changed through a provided interface.
extra_video_delay_ms: the extra video delay. The name is not very self-explanatory: it is initialized to base_target_delay_ms_, but during synchronization it may grow or shrink. After the correction logic runs, it is output as this round's video_delay_target_ms.
extra_audio_delay_ms: analogous to extra_video_delay_ms.
Let us walk through a simple case to understand the delay computation:
1) Initially, extra_video_delay_ms and extra_audio_delay_ms both equal base_target_delay_ms_.
2) On the first call to ComputeDelays, suppose diff_ms > 0, i.e. video is delayed more than audio. Since extra_video_delay_ms equals base_target_delay_ms_, the algorithm increases extra_audio_delay_ms instead: extra_audio_delay_ms += diff_ms. extra_audio_delay_ms is output as the new audio_delay_target_ms, and extra_video_delay_ms is output unchanged as the new video_delay_target_ms.
3) On the second call to ComputeDelays, suppose diff_ms > 0 again. Since extra_video_delay_ms still equals base_target_delay_ms_, extra_audio_delay_ms again accumulates another diff_ms. You might wonder why video is still delayed more than audio after the first adjustment. Two likely reasons: a) when computing diff_ms we halve it to avoid over-adjusting, so one step does not close the gap immediately; b) when the computed audio_delay_target_ms is applied to the jitter buffer, it only changes min_playout_delay_ms_; if that value is smaller than the jitter buffer's current_delay_ms_, the jitter buffer keeps using current_delay_ms_. After a few rounds, extra_audio_delay_ms approaches and eventually exceeds current_delay_ms_, at which point it takes effect.
4) On the third call to ComputeDelays, suppose diff_ms < 0, i.e. video is now delayed less than audio. Because the two previous adjustments pushed extra_audio_delay_ms above base_target_delay_ms_, the algorithm prefers decreasing extra_audio_delay_ms over increasing extra_video_delay_ms.
To summarize: the algorithm keeps extra_audio_delay_ms and extra_video_delay_ms at or above base_target_delay_ms_; they can only accumulate upward from that floor. But when one extra_xxx_delay_ms is above base_target_delay_ms_, the algorithm prefers to shrink it back down toward the base_target_delay_ms_ floor before growing the other side.
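The walkthrough above can be condensed into a simplified model of the adjustment step. Only the "which side gets the extra delay" decision is kept; the smoothing, clamping, and last_*_delay_ms bookkeeping of the real ComputeDelays are omitted, and the names ExtraDelays and Adjust are illustrative:

```cpp
#include <algorithm>
#include <cassert>

struct ExtraDelays {
  int audio_ms;
  int video_ms;
};

// diff_ms > 0: video is delayed more than audio, so shrink the extra video
// delay first, otherwise grow the extra audio delay. diff_ms < 0 is the
// mirror image.
ExtraDelays Adjust(ExtraDelays d, int diff_ms, int base_target_ms) {
  if (diff_ms > 0) {
    if (d.video_ms > base_target_ms) {
      d.video_ms -= diff_ms;
      d.audio_ms = base_target_ms;
    } else {
      d.audio_ms += diff_ms;
      d.video_ms = base_target_ms;
    }
  } else if (diff_ms < 0) {
    if (d.audio_ms > base_target_ms) {
      d.audio_ms += diff_ms;  // diff_ms is negative: this shrinks audio.
      d.video_ms = base_target_ms;
    } else {
      d.video_ms -= diff_ms;  // Subtracting a negative grows video delay.
      d.audio_ms = base_target_ms;
    }
  }
  // Neither side may drop below the base target.
  d.audio_ms = std::max(d.audio_ms, base_target_ms);
  d.video_ms = std::max(d.video_ms, base_target_ms);
  return d;
}
```

Replaying the walkthrough with base_target_ms = 0: two calls with diff_ms = 30 accumulate the extra audio delay to 60 ms, and a third call with diff_ms = -20 shrinks it back to 40 ms, never touching the video side.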
5. Summary
This concludes the analysis of WebRTC's audio-video synchronization. Without interfering with current_delay_ms_, the minimum delay the network jitter requires, WebRTC neatly achieves sync by controlling min_playout_delay_ms_ in the jitter buffers. The principle of audio-video sync is always the same, but analyzing WebRTC's implementation shows how real-time video differs from an on-demand player.