

Java Logging.w Method Code Examples

This article collects typical usage examples of the Java method org.webrtc.Logging.w. If you are unsure what Logging.w does, how to call it, or what real-world usage looks like, the curated examples below should help. You can also explore further usage examples of the enclosing class, org.webrtc.Logging.


The following presents 14 code examples of the Logging.w method, ordered by popularity by default.
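Logging.w follows the familiar android.util.Log.w call shape: a tag string plus a message, emitted at warning severity. As a rough, dependency-free illustration of that pattern (the class WarnDemo and its java.util.logging backend are stand-ins invented here, not part of org.webrtc):

```java
import java.util.logging.Logger;

// Stand-in for the org.webrtc.Logging.w(tag, message) call shape.
// WarnDemo and its java.util.logging backend are illustrative only.
public class WarnDemo {
    private static final Logger fallback = Logger.getLogger("WarnDemo");

    // Formats "tag: message" and emits it at WARNING severity,
    // mirroring how Logging.w pairs a tag with a message.
    static String w(String tag, String message) {
        String line = tag + ": " + message;
        fallback.warning(line);
        return line;
    }

    public static void main(String[] args) {
        int requestedRate = 44100;
        int nativeRate = 48000;
        // The same guard-and-warn idiom used throughout the examples below.
        if (requestedRate != nativeRate) {
            w("WebRtcAudioTrack", "Unable to use fast mode since requested sample rate is not native");
        }
    }
}
```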

Example 1: createAudioTrackOnLollipopOrHigher

import org.webrtc.Logging; // import the package/class this method depends on
@TargetApi(21)
private static AudioTrack createAudioTrackOnLollipopOrHigher(
    int sampleRateInHz, int channelConfig, int bufferSizeInBytes) {
  Logging.d(TAG, "createAudioTrackOnLollipopOrHigher");
  // TODO(henrika): use setPerformanceMode(int) with PERFORMANCE_MODE_LOW_LATENCY to control
  // performance when Android O is supported. Add some logging in the mean time.
  final int nativeOutputSampleRate =
      AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_VOICE_CALL);
  Logging.d(TAG, "nativeOutputSampleRate: " + nativeOutputSampleRate);
  if (sampleRateInHz != nativeOutputSampleRate) {
    Logging.w(TAG, "Unable to use fast mode since requested sample rate is not native");
  }
  if (usageAttribute != DEFAULT_USAGE) {
    Logging.w(TAG, "A non default usage attribute is used: " + usageAttribute);
  }
  // Create an audio track where the audio usage is for VoIP and the content type is speech.
  return new AudioTrack(
      new AudioAttributes.Builder()
          .setUsage(usageAttribute)
          .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH)
          .build(),
      new AudioFormat.Builder()
          .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
          .setSampleRate(sampleRateInHz)
          .setChannelMask(channelConfig)
          .build(),
      bufferSizeInBytes,
      AudioTrack.MODE_STREAM,
      AudioManager.AUDIO_SESSION_ID_GENERATE);
}
 
Author: Piasy, Project: AppRTC-Android, Lines: 31, Source: WebRtcAudioTrack.java

Example 2: setAEC

import org.webrtc.Logging; // import the package/class this method depends on
public boolean setAEC(boolean enable) {
  Logging.d(TAG, "setAEC(" + enable + ")");
  if (!canUseAcousticEchoCanceler()) {
    Logging.w(TAG, "Platform AEC is not supported");
    shouldEnableAec = false;
    return false;
  }
  if (aec != null && (enable != shouldEnableAec)) {
    Logging.e(TAG, "Platform AEC state can't be modified while recording");
    return false;
  }
  shouldEnableAec = enable;
  return true;
}
 
Author: Piasy, Project: AppRTC-Android, Lines: 15, Source: WebRtcAudioEffects.java

Example 3: setNS

import org.webrtc.Logging; // import the package/class this method depends on
public boolean setNS(boolean enable) {
  Logging.d(TAG, "setNS(" + enable + ")");
  if (!canUseNoiseSuppressor()) {
    Logging.w(TAG, "Platform NS is not supported");
    shouldEnableNs = false;
    return false;
  }
  if (ns != null && (enable != shouldEnableNs)) {
    Logging.e(TAG, "Platform NS state can't be modified while recording");
    return false;
  }
  shouldEnableNs = enable;
  return true;
}
 
Author: Piasy, Project: AppRTC-Android, Lines: 15, Source: WebRtcAudioEffects.java
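setAEC and setNS above share one guard pattern: warn and return false when the platform effect is unsupported, and refuse state changes once recording has started. A hypothetical, Android-free sketch of that state machine (EffectToggle is an invented name; the real classes use `aec != null` / `ns != null` as the "already recording" signal, for which a plain boolean stands in here):

```java
// Hypothetical distillation of the setAEC/setNS guard pattern.
public class EffectToggle {
    private final boolean supported;
    private boolean recording;
    private boolean shouldEnable;

    EffectToggle(boolean supported) {
        this.supported = supported;
    }

    // Returns false (and forces the flag off) when the effect is unsupported,
    // and rejects changes while a recording session is active.
    boolean set(boolean enable) {
        if (!supported) {
            System.out.println("W: Platform effect is not supported");
            shouldEnable = false;
            return false;
        }
        if (recording && enable != shouldEnable) {
            System.out.println("E: Effect state can't be modified while recording");
            return false;
        }
        shouldEnable = enable;
        return true;
    }

    void startRecording() {
        recording = true;
    }

    boolean isEnabled() {
        return shouldEnable;
    }
}
```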

Example 4: isAcousticEchoCancelerBlacklisted

import org.webrtc.Logging; // import the package/class this method depends on
public static boolean isAcousticEchoCancelerBlacklisted() {
  List<String> blackListedModels = WebRtcAudioUtils.getBlackListedModelsForAecUsage();
  boolean isBlacklisted = blackListedModels.contains(Build.MODEL);
  if (isBlacklisted) {
    Logging.w(TAG, Build.MODEL + " is blacklisted for HW AEC usage!");
  }
  return isBlacklisted;
}
 
Author: lgyjg, Project: AndroidRTC, Lines: 9, Source: WebRtcAudioEffects.java

Example 5: isNoiseSuppressorBlacklisted

import org.webrtc.Logging; // import the package/class this method depends on
public static boolean isNoiseSuppressorBlacklisted() {
  List<String> blackListedModels = WebRtcAudioUtils.getBlackListedModelsForNsUsage();
  boolean isBlacklisted = blackListedModels.contains(Build.MODEL);
  if (isBlacklisted) {
    Logging.w(TAG, Build.MODEL + " is blacklisted for HW NS usage!");
  }
  return isBlacklisted;
}
 
Author: lgyjg, Project: AndroidRTC, Lines: 9, Source: WebRtcAudioEffects.java
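Examples 4 and 5 are the same check with different lists: look the device model up in a blacklist and warn on a hit. A self-contained version with the model passed in explicitly, since Build.MODEL is only available on Android (the model names below are placeholders, not the real WebRTC blacklist):

```java
import java.util.Arrays;
import java.util.List;

public class BlacklistCheck {
    // Placeholder model names; the real lists come from
    // WebRtcAudioUtils.getBlackListedModelsForAecUsage() / -ForNsUsage().
    static final List<String> AEC_BLACKLIST = Arrays.asList("Nexus 4", "Nexus 5");

    // Mirrors isAcousticEchoCancelerBlacklisted(): a membership test
    // plus a warning when the model is found.
    static boolean isBlacklisted(String model, List<String> blacklist) {
        boolean hit = blacklist.contains(model);
        if (hit) {
            System.out.println("W: " + model + " is blacklisted for HW effect usage!");
        }
        return hit;
    }

    public static void main(String[] args) {
        System.out.println(isBlacklisted("Nexus 5", AEC_BLACKLIST)); // warning line, then "true"
    }
}
```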

Example 6: create

import org.webrtc.Logging; // import the package/class this method depends on
static WebRtcAudioEffects create() {
  // Return null if VoIP effects (AEC, AGC and NS) are not supported.
  if (!WebRtcAudioUtils.runningOnJellyBeanOrHigher()) {
    Logging.w(TAG, "API level 16 or higher is required!");
    return null;
  }
  return new WebRtcAudioEffects();
}
 
Author: lgyjg, Project: AndroidRTC, Lines: 9, Source: WebRtcAudioEffects.java

Example 7: setAudioTrackUsageAttribute

import org.webrtc.Logging; // import the package/class this method depends on
public static synchronized void setAudioTrackUsageAttribute(int usage) {
  Logging.w(TAG, "Default usage attribute is changed from: "
      + DEFAULT_USAGE + " to " + usage);
  usageAttribute = usage;
}
 
Author: Piasy, Project: AppRTC-Android, Lines: 6, Source: WebRtcAudioTrack.java

Example 8: setSpeakerMute

import org.webrtc.Logging; // import the package/class this method depends on
public static void setSpeakerMute(boolean mute) {
  Logging.w(TAG, "setSpeakerMute(" + mute + ")");
  speakerMute = mute;
}
 
Author: Piasy, Project: AppRTC-Android, Lines: 5, Source: WebRtcAudioTrack.java

Example 9: setMicrophoneMute

import org.webrtc.Logging; // import the package/class this method depends on
public static void setMicrophoneMute(boolean mute) {
  Logging.w(TAG, "setMicrophoneMute(" + mute + ")");
  microphoneMute = mute;
}
 
Author: lgyjg, Project: AndroidRTC, Lines: 5, Source: WebRtcAudioRecord.java

Example 10: setStereoInput

import org.webrtc.Logging; // import the package/class this method depends on
public static synchronized void setStereoInput(boolean enable) {
  Logging.w(TAG, "Overriding default input behavior: setStereoInput(" + enable + ')');
  useStereoInput = enable;
}
 
Author: lgyjg, Project: AndroidRTC, Lines: 5, Source: WebRtcAudioManager.java

Example 11: setStereoOutput

import org.webrtc.Logging; // import the package/class this method depends on
public static synchronized void setStereoOutput(boolean enable) {
  Logging.w(TAG, "Overriding default output behavior: setStereoOutput(" + enable + ')');
  useStereoOutput = enable;
}
 
Author: Piasy, Project: AppRTC-Android, Lines: 5, Source: WebRtcAudioManager.java

Example 12: setWebRtcBasedAutomaticGainControl

import org.webrtc.Logging; // import the package/class this method depends on
public static synchronized void setWebRtcBasedAutomaticGainControl(boolean enable) {
  // TODO(henrika): deprecated; remove when no longer used by any client.
  Logging.w(TAG, "setWebRtcBasedAutomaticGainControl() is deprecated");
}
 
Author: Piasy, Project: AppRTC-Android, Lines: 5, Source: WebRtcAudioUtils.java

Example 13: useWebRtcBasedAcousticEchoCanceler

import org.webrtc.Logging; // import the package/class this method depends on
public static synchronized boolean useWebRtcBasedAcousticEchoCanceler() {
  if (useWebRtcBasedAcousticEchoCanceler) {
    Logging.w(TAG, "Overriding default behavior; now using WebRTC AEC!");
  }
  return useWebRtcBasedAcousticEchoCanceler;
}
 
Author: Piasy, Project: AppRTC-Android, Lines: 7, Source: WebRtcAudioUtils.java

Example 14: useWebRtcBasedNoiseSuppressor

import org.webrtc.Logging; // import the package/class this method depends on
public static synchronized boolean useWebRtcBasedNoiseSuppressor() {
  if (useWebRtcBasedNoiseSuppressor) {
    Logging.w(TAG, "Overriding default behavior; now using WebRTC NS!");
  }
  return useWebRtcBasedNoiseSuppressor;
}
 
Author: Piasy, Project: AppRTC-Android, Lines: 7, Source: WebRtcAudioUtils.java


Note: The org.webrtc.Logging.w method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Consult each project's License before distributing or using the code, and do not republish without permission.