

C++ TimeRange::getEnd Method Code Examples

This article collects typical usage examples of the C++ TimeRange::getEnd method. If you are unsure what TimeRange::getEnd does, how to call it, or how it is used in practice, the selected code examples below may help. You can also explore further usage examples of the TimeRange class that this method belongs to.


The following shows 1 code example of the TimeRange::getEnd method; examples are sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better C++ code examples.

Example 1: progressMerger

JoiningBoundedTimeline<void> webRtcDetectVoiceActivity(const AudioClip& audioClip, ProgressSink& progressSink) {
    VadInst* vadHandle = WebRtcVad_Create();
    if (!vadHandle) throw runtime_error("Error creating WebRTC VAD handle.");

    auto freeHandle = gsl::finally([&]() {
        WebRtcVad_Free(vadHandle);
    });

    int error = WebRtcVad_Init(vadHandle);
    if (error) throw runtime_error("Error initializing WebRTC VAD handle.");

    const int aggressiveness = 2; // 0..3. The higher, the more is cut off.
    error = WebRtcVad_set_mode(vadHandle, aggressiveness);
    if (error) throw runtime_error("Error setting WebRTC VAD aggressiveness.");

    ProgressMerger progressMerger(progressSink);
    ProgressSink& pass1ProgressSink = progressMerger.addSink(1.0);
    ProgressSink& pass2ProgressSink = progressMerger.addSink(0.3);

    // Detect activity
    JoiningBoundedTimeline<void> activity(audioClip.getTruncatedRange());
    centiseconds time = 0_cs;
    const size_t bufferCapacity = audioClip.getSampleRate() / 100;
    auto processBuffer = [&](const vector<int16_t>& buffer) {
        // WebRTC is picky regarding buffer size
        if (buffer.size() < bufferCapacity) return;

        int result = WebRtcVad_Process(vadHandle, audioClip.getSampleRate(), buffer.data(), buffer.size());
        if (result == -1) throw runtime_error("Error processing audio buffer using WebRTC VAD.");

        bool isActive = result != 0;
        if (isActive) {
            activity.set(time, time + 1_cs);
        }
        time += 1_cs;
    };
    process16bitAudioClip(audioClip, processBuffer, bufferCapacity, pass1ProgressSink);

    // WebRTC adapts to the audio. This means results may not be correct at the very beginning.
    // It sometimes returns false activity at the very beginning, mistaking the background noise for speech.
    // So we delete the first recognized utterance and re-process the corresponding audio segment.
    if (!activity.empty()) {
        TimeRange firstActivity = activity.begin()->getTimeRange();
        activity.clear(firstActivity);
        unique_ptr<AudioClip> streamStart = audioClip.clone() | segment(TimeRange(0_cs, firstActivity.getEnd()));
        time = 0_cs;
        process16bitAudioClip(*streamStart, processBuffer, bufferCapacity, pass2ProgressSink);
    }

    return activity;
}
Developer: DanielSWolf, Project: rhubarb-lip-sync, Lines: 51, Source: voiceActivityDetection.cpp
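
For reference, here is a minimal, self-contained sketch of the getEnd usage pattern from the example above: the end of the first detected activity becomes the end point of a new prefix range that is then re-processed. The small TimeRange struct below is a hypothetical stand-in for the real class in rhubarb-lip-sync, not its actual implementation.

#include <cstdint>
#include <iostream>

// Hypothetical stand-in for the TimeRange class used above; the real class
// in rhubarb-lip-sync stores times as centiseconds and offers a richer API.
struct TimeRange {
    int64_t start; // centiseconds
    int64_t end;   // centiseconds

    int64_t getStart() const { return start; }
    int64_t getEnd() const { return end; }
    int64_t getLength() const { return end - start; }
};

int main() {
    // First detected utterance, e.g. from 12 cs to 87 cs.
    TimeRange firstActivity{12, 87};

    // Same pattern as the example: build a prefix range from the start of the
    // clip up to the end of the first activity, then re-process that segment.
    TimeRange streamStart{0, firstActivity.getEnd()};

    std::cout << "Re-processing the first " << streamStart.getLength()
              << " centiseconds of audio\n";
    return 0;
}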

