

Java AudioFormat.ENCODING_PCM_16BIT Field Code Examples

This article collects typical usage examples of the android.media.AudioFormat.ENCODING_PCM_16BIT field in Java. If you are wondering what AudioFormat.ENCODING_PCM_16BIT is for, how to use it, or what it looks like in real code, the curated examples below may help. You can also explore further usage examples of the enclosing class, android.media.AudioFormat.


The following presents 15 code examples of the AudioFormat.ENCODING_PCM_16BIT field, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
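Before diving into the curated examples, the snippet below is a minimal, self-contained sketch (not taken from any of the projects listed here) of the most common pattern: querying the minimum buffer size for 16-bit PCM and opening an AudioRecord with it. The 44100 Hz sample rate and mono channel mask are arbitrary choices for illustration.

// Minimal illustration of ENCODING_PCM_16BIT with AudioRecord; the rate and
// channel mask are example values, not a recommendation.
int sampleRate = 44100;
int minBufferSize = AudioRecord.getMinBufferSize(
        sampleRate,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(
        MediaRecorder.AudioSource.MIC,
        sampleRate,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        minBufferSize);
short[] pcmBuffer = new short[minBufferSize / 2];   // 16-bit samples are 2 bytes each
recorder.startRecording();
int samplesRead = recorder.read(pcmBuffer, 0, pcmBuffer.length);
recorder.stop();
recorder.release();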

Example 1: getMinBufferSize

private int getMinBufferSize(int sampleRate, int channelConfig, int audioFormat) {
    int numOfChannels, bitsPerSample;
    if (channelConfig == AudioFormat.CHANNEL_IN_MONO) {
        numOfChannels = 1;
    } else {
        numOfChannels = 2;
    }
    if (AudioFormat.ENCODING_PCM_16BIT == audioFormat) {
        bitsPerSample = 16;
    } else {
        bitsPerSample = 8;
    }
    int periodInFrames = sampleRate * TIMER_INTERVAL / 1000;    // the number of frames in one second equals the sample rate
    // refer to android/4.1.1/frameworks/av/media/libmedia/AudioRecord.cpp, AudioRecord::getMinFrameCount method
    // multiply by 2 for ping-pong use of the record buffer
    mMinBufferSize = periodInFrames * 2 * numOfChannels * bitsPerSample / 8;
    if (mMinBufferSize < AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat)) {
        // Make sure the buffer size is not smaller than the smallest allowed one
        mMinBufferSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);
        // Set frame period and timer interval accordingly
//        periodInFrames = mMinBufferSize / (2 * bitsPerSample * numOfChannels / 8);
    }

    return mMinBufferSize;
}
 
Developer: ThinkKeep, Project: EvilsLive, Lines: 25, Source: AudioCapture.java
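To make the arithmetic concrete: assuming a hypothetical TIMER_INTERVAL of 120 (milliseconds; the constant is not defined in the excerpt above) with 44100 Hz mono 16-bit input, periodInFrames = 44100 * 120 / 1000 = 5292, so mMinBufferSize = 5292 * 2 * 1 * 16 / 8 = 21168 bytes; the subsequent check then raises this to AudioRecord.getMinBufferSize(...) if the platform minimum happens to be larger.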

Example 2: playSound

/**
 * This method plays the sound data in the specified buffer.
 *
 * @param buffer specifies the sound data buffer.
 */
public void playSound(short[] buffer)
{
    final String funcName = "playSound";

    if (debugEnabled)
    {
        dbgTrace.traceEnter(funcName, TrcDbgTrace.TraceLevel.API);
        dbgTrace.traceExit(funcName, TrcDbgTrace.TraceLevel.API);
    }

    audioTrack = new AudioTrack(
            AudioManager.STREAM_MUSIC,
            sampleRate,
            AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT,
            buffer.length*2,    //buffer length in bytes
            AudioTrack.MODE_STATIC);
    audioTrack.write(buffer, 0, buffer.length);
    audioTrack.setNotificationMarkerPosition(buffer.length);
    audioTrack.setPlaybackPositionUpdateListener(this);
    audioTrack.play();
    playing = true;
}
 
Developer: trc492, Project: FtcSamples, Lines: 28, Source: FtcAndroidTone.java
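As a usage sketch (illustrative only, not part of the FtcSamples project), a caller could fill a short[] with one second of a 440 Hz sine tone and hand it to playSound(); the tone frequency and the assumption that sampleRate is the same field used by the AudioTrack above are made up for this example.

short[] tone = new short[sampleRate];                      // one second of mono samples
for (int i = 0; i < tone.length; i++) {
    double angle = 2.0 * Math.PI * 440.0 * i / sampleRate; // 440 Hz sine wave
    tone[i] = (short) (Math.sin(angle) * Short.MAX_VALUE);
}
playSound(tone);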

Example 3: AudioSink

/**
 * Constructor. Will create a new AudioSink.
 *
 * @param packetSize	size of the incoming packets
 * @param sampleRate	sample rate of the audio signal
 */
public AudioSink (int packetSize, int sampleRate) {
	this.packetSize = packetSize;
	this.sampleRate = sampleRate;

	// Create the queues and fill the output queue with empty packets
	this.inputQueue = new ArrayBlockingQueue<SamplePacket>(QUEUE_SIZE);
	this.outputQueue = new ArrayBlockingQueue<SamplePacket>(QUEUE_SIZE);
	for (int i = 0; i < QUEUE_SIZE; i++)
		this.outputQueue.offer(new SamplePacket(packetSize));

	// Create an instance of the AudioTrack class:
	int bufferSize = AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
	this.audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, AudioFormat.CHANNEL_OUT_MONO,
								AudioFormat.ENCODING_PCM_16BIT, bufferSize, AudioTrack.MODE_STREAM);

	// Create the audio filters:
	this.audioFilter1 = FirFilter.createLowPass(2, 1, 1, 0.1f, 0.15f, 30);
	Log.d(LOGTAG,"constructor: created audio filter 1 with " + audioFilter1.getNumberOfTaps() + " Taps.");
	this.audioFilter2 = FirFilter.createLowPass(4, 1, 1, 0.1f, 0.1f, 30);
	Log.d(LOGTAG,"constructor: created audio filter 2 with " + audioFilter2.getNumberOfTaps() + " Taps.");
	this.tmpAudioSamples = new SamplePacket(packetSize);
}
 
Developer: takyonxxx, Project: AndroidSdrRtlTuner, Lines: 28, Source: AudioSink.java

Example 4: PcmPlayer

public PcmPlayer(Context context, Handler handler) {
    this.mContext = context;
    this.audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, wBufferSize, AudioTrack.MODE_STREAM);
    this.handler = handler;
    audioTrack.setPlaybackPositionUpdateListener(this, handler);
    cacheDir = context.getExternalFilesDir(Environment.DIRECTORY_MUSIC);
}
 
Developer: LingjuAI, Project: AssistantBySDK, Lines: 7, Source: PcmPlayer.java

Example 5: findAudioRecord

public AudioRecord findAudioRecord() {

    for (int rate : mSampleRates) {
        for (short audioFormat : new short[] { AudioFormat.ENCODING_PCM_16BIT }) {
            for (short channelConfig : new short[] { AudioFormat.CHANNEL_IN_MONO }) {
                try {
                    Log.d("C.TAG", "Attempting rate " + rate + "Hz, bits: " + audioFormat + ", channel: " + channelConfig);
                    int bufferSize = AudioRecord.getMinBufferSize(rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);

                    if (bufferSize != AudioRecord.ERROR_BAD_VALUE) {
                        // Check whether this configuration can actually be instantiated;
                        // the candidate rate is used for both the buffer-size query and the recorder
                        AudioRecord recorder = new AudioRecord(AudioSource.MIC, rate, channelConfig, audioFormat, bufferSize);

                        if (recorder.getState() == AudioRecord.STATE_INITIALIZED)
                            return recorder;
                    }
                } catch (Exception e) {
                    Log.e("C.TAG", rate + " Exception, keep trying.", e);
                }
            }
        }
    }
    return null;
}
 
Developer: n8fr8, Project: LittleBitLouder, Lines: 25, Source: TOne.java
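A caller would typically keep the returned recorder only if it is non-null and release it when done; a minimal, illustrative usage (the read-buffer size is chosen arbitrarily for this sketch) might look like:

AudioRecord recorder = findAudioRecord();
if (recorder != null) {
    short[] pcm = new short[4096];          // arbitrary read buffer for this sketch
    recorder.startRecording();
    int read = recorder.read(pcm, 0, pcm.length);
    // ... process the 16-bit samples ...
    recorder.stop();
    recorder.release();
}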

Example 6: AudioRecorder

public AudioRecorder(LoudnessSensor sensor) {

    this.mSensor = sensor;

    int channel = AudioFormat.CHANNEL_IN_MONO;
    int mic = AudioSource.MIC;

    // Compute the buffer size
    int minAudioBuffer = AudioRecord.getMinBufferSize(
            COMMON_AUDIO_FREQUENCY,
            channel,
            AudioFormat.ENCODING_PCM_16BIT);
    int audioBuffer = minAudioBuffer * 6;

    // Create the recorder
    audioInput = new AudioRecord(
            mic,
            COMMON_AUDIO_FREQUENCY,
            channel,
            AudioFormat.ENCODING_PCM_16BIT,
            audioBuffer);
}
 
Developer: Telecooperation, Project: assistance-platform-client-sdk-android, Lines: 22, Source: LoudnessSensor.java

Example 7: getInstance

public static ExtAudioRecorder getInstance(Boolean recordingCompressed, VoiceCallback callback) {
	if (recordingCompressed) {
		result = new ExtAudioRecorder(false, AudioSource.MIC,
				sampleRates[3], AudioFormat.CHANNEL_CONFIGURATION_MONO,
				AudioFormat.ENCODING_PCM_16BIT, callback);
	} else {
		int i = 3;
		do {
			result = new ExtAudioRecorder(true, AudioSource.MIC,
					sampleRates[i], AudioFormat.CHANNEL_CONFIGURATION_MONO,
					AudioFormat.ENCODING_PCM_16BIT, callback);

		} while ((--i >= 0)
				&& !(result.getState() == ExtAudioRecorder.State.INITIALIZING));
	}
	return result;
}
 
Developer: entboost, Project: EntboostIM, Lines: 17, Source: ExtAudioRecorder.java

Example 8: start

public void start() {
    int minBufferSize = AudioRecord.getMinBufferSize(mSampleRate, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
    int targetSize = mSampleRate * mChannels;      // 1 second buffer size
    if (targetSize < minBufferSize) {
        targetSize = minBufferSize;
    }
    if (audioCapture == null) {
        try {
            audioCapture = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    mSampleRate,
                    AudioFormat.CHANNEL_IN_STEREO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    targetSize);
        } catch (IllegalArgumentException e) {
            audioCapture = null;
        }
    }

    LiveJniLib.native_audio_init(mSampleRate, mChannels);

    if (audioCapture != null) {
        audioCapture.startRecording();
        AudioEncoder audioEncoder = new AudioEncoder();
        audioEncoder.start();
    }
}
 
Developer: peterfuture, Project: dtlive_android, Lines: 26, Source: AudioCapture.java

Example 9: createAudioTrack

private AudioTrack createAudioTrack(int sampleRate, int channelCount) {
    int channelConfig = channelCount == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO;
    int bufferSize = ((sampleRate * 2) * channelCount / 100) * 8;    // buffer up to 80 ms of data
    return new AudioTrack(
            AudioManager.STREAM_MUSIC,
            sampleRate,
            channelConfig,
            AudioFormat.ENCODING_PCM_16BIT,
            bufferSize,
            AudioTrack.MODE_STREAM);
}
 
Developer: vipycm, Project: mao-android, Lines: 11, Source: AudioDecoderFragment.java
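At 44100 Hz stereo, for example, the formula gives ((44100 * 2) * 2 / 100) * 8 = 14112 bytes, which at 176400 bytes per second of 16-bit stereo PCM is indeed about 80 ms of audio.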

Example 10: writeWavHeader

/**
 * Writes the standard 44-byte RIFF/WAVE header to the given stream.
 * Two size fields are left at zero because the final stream size is not yet known.
 *
 * @param out The stream to write the header to
 * @param channelMask An AudioFormat.CHANNEL_* mask
 * @param sampleRate The sample rate in hertz
 * @param encoding An AudioFormat.ENCODING_PCM_* value
 * @throws IOException
 */
private void writeWavHeader(OutputStream out, int channelMask, int sampleRate, int encoding)
    throws IOException {
  short channels;
  switch (channelMask) {
    case AudioFormat.CHANNEL_IN_MONO:
      channels = 1;
      break;
    case AudioFormat.CHANNEL_IN_STEREO:
      channels = 2;
      break;
    default:
      throw new IllegalArgumentException("Unacceptable channel mask");
  }

  short bitDepth;
  switch (encoding) {
    case AudioFormat.ENCODING_PCM_8BIT:
      bitDepth = 8;
      break;
    case AudioFormat.ENCODING_PCM_16BIT:
      bitDepth = 16;
      break;
    case AudioFormat.ENCODING_PCM_FLOAT:
      bitDepth = 32;
      break;
    default:
      throw new IllegalArgumentException("Unacceptable encoding");
  }

  writeWavHeader(out, channels, sampleRate, bitDepth);
}
 
Developer: Arjun-sna, Project: Android-AudioRecorder-App, Lines: 41, Source: AudioSaveHelper.java
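The method above delegates to an overload that emits the actual 44 bytes, which is not included in this excerpt. As a rough sketch only (assembled here from the standard RIFF/WAVE layout, not taken from the Android-AudioRecorder-App source), such a writer could look like the following; note that it hard-codes format code 1 (integer PCM), so the ENCODING_PCM_FLOAT case would additionally need format code 3.

// Hypothetical sketch of the delegated overload (requires java.nio.ByteBuffer
// and java.nio.ByteOrder): writes a canonical 44-byte PCM WAV header, leaving
// the RIFF chunk size and the data chunk size as zero placeholders to be
// patched once the final stream length is known.
private void writeWavHeader(OutputStream out, short channels, int sampleRate, short bitDepth)
        throws IOException {
    ByteBuffer header = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
    header.put(new byte[]{'R', 'I', 'F', 'F'});
    header.putInt(0);                                       // RIFF chunk size, patched later
    header.put(new byte[]{'W', 'A', 'V', 'E'});
    header.put(new byte[]{'f', 'm', 't', ' '});
    header.putInt(16);                                      // fmt chunk size for PCM
    header.putShort((short) 1);                             // format code 1 = integer PCM
    header.putShort(channels);
    header.putInt(sampleRate);
    header.putInt(sampleRate * channels * bitDepth / 8);    // byte rate
    header.putShort((short) (channels * bitDepth / 8));     // block align
    header.putShort(bitDepth);
    header.put(new byte[]{'d', 'a', 't', 'a'});
    header.putInt(0);                                       // data chunk size, patched later
    out.write(header.array());
}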

Example 11: RecordAudioinBytes

public RecordAudioinBytes() throws IllegalArgumentException {
    super(RecorderConstants.AudioSource, RecorderConstants.mSampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, RecorderConstants.bufferSize);

    mOneSec = RESOLUTION_IN_BYTES * CHANNELS * RecorderConstants.mSampleRate;

    mRecording = new byte[mOneSec * 35];

    //   int bufferSize = getBufferSize();
    //   int framePeriod = bufferSize / (2 * RESOLUTION_IN_BYTES * CHANNELS);
    //   createRecorder(audioSource, sampleRateInHz, bufferSize);
    checkthingsforrecoder();
    mBuffer = new byte[RecorderConstants.framePeriod * RESOLUTION_IN_BYTES * CHANNELS];

}
 
Developer: raj10071997, Project: Alexa-Voice-Service, Lines: 14, Source: RecordAudioinBytes.java

Example 12: createAudioTrack

private void createAudioTrack() throws InitializationException {
    // The AudioTrack configurations parameters used here, are guaranteed to
    // be supported on all devices.

    // AudioFormat.CHANNEL_OUT_MONO should be used in place of deprecated
    // AudioFormat.CHANNEL_CONFIGURATION_MONO, but it is not available for
    // API level 3.

    // Output buffer for playing should be as short as possible, so
    // AudioBufferPlayed events are not invoked long before audio buffer is
    // actually played. Also, when AudioTrack is stopped, it is filled with
    // silence of length audioTrackBufferSizeInBytes. If the silence is too
    // long, it causes a delay before the next recorded data starts playing.
    audioTrackBufferSizeInBytes = AudioTrack.getMinBufferSize(
            SpeechTrainerConfig.SAMPLE_RATE_HZ,
            AudioFormat.CHANNEL_CONFIGURATION_MONO,
            AudioFormat.ENCODING_PCM_16BIT);
    if (audioTrackBufferSizeInBytes <= 0) {
        throw new InitializationException("Failed to initialize playback.");
    }

    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
            SpeechTrainerConfig.SAMPLE_RATE_HZ,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
            audioTrackBufferSizeInBytes,
            AudioTrack.MODE_STREAM);
    if (audioTrack.getState() != AudioTrack.STATE_INITIALIZED) {
        audioTrack = null;
        throw new InitializationException("Failed to initialize playback.");
    }
}
 
Developer: sdrausty, Project: buildAPKsApps, Lines: 31, Source: ControllerFactory.java

Example 13: audioTrackInit

public int audioTrackInit() {
//  Log.e("  ffff mediaplayer audiotrackinit start .  sampleRateInHz:=" + sampleRateInHz + " channels:=" + channels );
    audioTrackRelease();
    int channelConfig = channels >= 2 ? AudioFormat.CHANNEL_OUT_STEREO : AudioFormat.CHANNEL_OUT_MONO;
    try {
        mAudioTrackBufferSize = AudioTrack.getMinBufferSize(sampleRateInHz, channelConfig, AudioFormat.ENCODING_PCM_16BIT);
        mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRateInHz, channelConfig, AudioFormat.ENCODING_PCM_16BIT, mAudioTrackBufferSize, AudioTrack.MODE_STREAM);
    } catch (Exception e) {
        mAudioTrackBufferSize = 0;
        Log.e("audioTrackInit", e);
    }
    return mAudioTrackBufferSize;
}
 
Developer: Leavessilent, Project: QuanMinTV, Lines: 13, Source: MediaPlayer.java

Example 14: SpeechRecord

public SpeechRecord(int sampleRateInHz, int bufferSizeInBytes)
        throws IllegalArgumentException {

    this(
            MediaRecorder.AudioSource.VOICE_RECOGNITION,
            sampleRateInHz,
            AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT,
            bufferSizeInBytes,
            false,
            false,
            false
    );
}
 
Developer: vaibhavs4424, Project: AI-Powered-Intelligent-Banking-Platform, Lines: 14, Source: SpeechRecord.java

Example 15: onCreate

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    mImageButton = (ImageButton) findViewById(R.id.action_image);
    mImageButton.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            if (mRecorder == null) {
                return;
            }
            boolean recording = mRecorder.isRecording();
            if (recording) {
                ((ImageButton) v).setImageResource(R.drawable.record);
                mRecorder.stop();
            } else {
                ((ImageButton) v).setImageResource(R.drawable.pause);
                mRecorder.startRecording();
            }
        }
    });

    boolean result = createOutputFile();
    if (!result) {
        Toast.makeText(this, "Failed to create output file", Toast.LENGTH_SHORT).show();
    }

    mRecorder = new Recorder(44100,
            AudioFormat.CHANNEL_IN_MONO/*channel config*/,
            AudioFormat.ENCODING_PCM_16BIT/*audio format*/,
            MediaRecorder.AudioSource.MIC/*AudioSource*/,
            NUM_SAMPLES/*period*/,
            this/*onDataChangeListener*/);
    output = new byte[NUM_SAMPLES * 2];

}
 
Developer: lrannn, Project: SimpleRecorder, Lines: 37, Source: MainActivity.java


Note: The android.media.AudioFormat.ENCODING_PCM_16BIT examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by many developers; copyright remains with the original authors. Please consult the corresponding project's license before distributing or reusing the code, and do not reproduce this article without permission.