

Java AudioEncoding Class Code Examples

This article collects typical usage examples of the Java class com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding. If you are wondering what the AudioEncoding class is for, or how to use it in practice, the curated code examples below may help.


The AudioEncoding class belongs to the com.google.cloud.speech.v1.RecognitionConfig package. Seven code examples of the class are shown below, sorted by popularity by default.
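Every example below pairs an AudioEncoding constant with a matching file format (LINEAR16 for raw/WAV PCM, FLAC for .flac files). As a minimal, stdlib-only sketch of that pairing, the helper below maps common file extensions to the encoding constant name; note that `EncodingHint` and its table are a hypothetical illustration, not part of the Cloud Speech API:

```java
import java.util.Map;

public class EncodingHint {
  // Hypothetical lookup table: file extension -> name of the
  // AudioEncoding constant typically used for that format.
  static final Map<String, String> HINTS = Map.of(
      "raw", "LINEAR16",
      "wav", "LINEAR16",
      "flac", "FLAC",
      "ulaw", "MULAW");

  public static String hint(String fileName) {
    int dot = fileName.lastIndexOf('.');
    String ext = dot < 0 ? "" : fileName.substring(dot + 1).toLowerCase();
    // Fall back to the API's "unspecified" sentinel when the extension is unknown.
    return HINTS.getOrDefault(ext, "ENCODING_UNSPECIFIED");
  }

  public static void main(String[] args) {
    System.out.println(hint("audio.raw"));   // LINEAR16
    System.out.println(hint("audio.flac"));  // FLAC
  }
}
```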

Example 1: syncRecognizeFile

import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding; // import the required package/class
/**
 * Performs speech recognition on raw PCM audio and prints the transcription.
 *
 * @param fileName the path to a PCM audio file to transcribe.
 */
public static void syncRecognizeFile(String fileName) throws Exception {
  SpeechClient speech = SpeechClient.create();

  Path path = Paths.get(fileName);
  byte[] data = Files.readAllBytes(path);
  ByteString audioBytes = ByteString.copyFrom(data);

  // Configure request with local raw PCM audio
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(AudioEncoding.LINEAR16)
      .setLanguageCode("en-US")
      .setSampleRateHertz(16000)
      .build();
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setContent(audioBytes)
      .build();

  // Use blocking call to get audio transcript
  RecognizeResponse response = speech.recognize(config, audio);
  List<SpeechRecognitionResult> results = response.getResultsList();

  for (SpeechRecognitionResult result: results) {
    // There can be several alternative transcripts for a given chunk of speech. Just use the
    // first (most likely) one here.
    SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
    System.out.printf("Transcription: %s%n", alternative.getTranscript());
  }
  speech.close();
}
 
Developer: GoogleCloudPlatform, project: java-docs-samples, lines: 35, source: Recognize.java

Example 2: syncRecognizeWords

import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding; // import the required package/class
/**
 * Performs sync recognize and prints word time offsets.
 *
 * @param fileName the path to a PCM audio file to transcribe get offsets on.
 */
public static void syncRecognizeWords(String fileName) throws Exception {
  SpeechClient speech = SpeechClient.create();

  Path path = Paths.get(fileName);
  byte[] data = Files.readAllBytes(path);
  ByteString audioBytes = ByteString.copyFrom(data);

  // Configure request with local raw PCM audio
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(AudioEncoding.LINEAR16)
      .setLanguageCode("en-US")
      .setSampleRateHertz(16000)
      .setEnableWordTimeOffsets(true)
      .build();
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setContent(audioBytes)
      .build();

  // Use blocking call to get audio transcript
  RecognizeResponse response = speech.recognize(config, audio);
  List<SpeechRecognitionResult> results = response.getResultsList();

  for (SpeechRecognitionResult result: results) {
    // There can be several alternative transcripts for a given chunk of speech. Just use the
    // first (most likely) one here.
    SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
    System.out.printf("Transcription: %s%n", alternative.getTranscript());
    for (WordInfo wordInfo: alternative.getWordsList()) {
      System.out.println(wordInfo.getWord());
      System.out.printf("\t%s.%s sec - %s.%s sec\n",
          wordInfo.getStartTime().getSeconds(),
          wordInfo.getStartTime().getNanos() / 100000000,
          wordInfo.getEndTime().getSeconds(),
          wordInfo.getEndTime().getNanos() / 100000000);
    }
  }
  speech.close();
}
 
Developer: GoogleCloudPlatform, project: java-docs-samples, lines: 44, source: Recognize.java
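Note that the word-offset printing above divides `getNanos()` by 100000000, so only the first decimal digit of each timestamp is printed. A fuller formatting of the (seconds, nanos) pair returned by `WordInfo.getStartTime()`/`getEndTime()` can be sketched in plain Java, assuming millisecond precision is enough:

```java
public class TimeFmt {
  // Format a (seconds, nanos) pair, as carried by a protobuf Duration,
  // with millisecond precision instead of a single decimal digit.
  public static String fmt(long seconds, int nanos) {
    return String.format("%d.%03d sec", seconds, nanos / 1_000_000);
  }

  public static void main(String[] args) {
    System.out.println(fmt(1, 500_000_000)); // 1.500 sec
    System.out.println(fmt(0, 70_000_000));  // 0.070 sec
  }
}
```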

Example 3: syncRecognizeGcs

import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding; // import the required package/class
/**
 * Performs speech recognition on remote FLAC file and prints the transcription.
 *
 * @param gcsUri the path to the remote FLAC audio file to transcribe.
 */
public static void syncRecognizeGcs(String gcsUri) throws Exception {
  // Instantiates a client with GOOGLE_APPLICATION_CREDENTIALS
  SpeechClient speech = SpeechClient.create();

  // Builds the request for remote FLAC file
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(AudioEncoding.FLAC)
      .setLanguageCode("en-US")
      .setSampleRateHertz(16000)
      .build();
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setUri(gcsUri)
      .build();

  // Use blocking call for getting audio transcript
  RecognizeResponse response = speech.recognize(config, audio);
  List<SpeechRecognitionResult> results = response.getResultsList();

  for (SpeechRecognitionResult result: results) {
    // There can be several alternative transcripts for a given chunk of speech. Just use the
    // first (most likely) one here.
    SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
    System.out.printf("Transcription: %s%n", alternative.getTranscript());
  }
  speech.close();
}
 
Developer: GoogleCloudPlatform, project: java-docs-samples, lines: 32, source: Recognize.java

Example 4: asyncRecognizeFile

import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding; // import the required package/class
/**
 * Performs non-blocking speech recognition on a local raw PCM audio file and prints
 * the transcription.
 *
 * @param fileName the path to a PCM audio file to transcribe.
 */
public static void asyncRecognizeFile(String fileName) throws Exception {
  // Instantiates a client with GOOGLE_APPLICATION_CREDENTIALS
  SpeechClient speech = SpeechClient.create();

  Path path = Paths.get(fileName);
  byte[] data = Files.readAllBytes(path);
  ByteString audioBytes = ByteString.copyFrom(data);

  // Configure request with local raw PCM audio
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(AudioEncoding.LINEAR16)
      .setLanguageCode("en-US")
      .setSampleRateHertz(16000)
      .build();
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setContent(audioBytes)
      .build();

  // Use non-blocking call for getting file transcription
  OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> response =
      speech.longRunningRecognizeAsync(config, audio);

  while (!response.isDone()) {
    System.out.println("Waiting for response...");
    Thread.sleep(10000);
  }

  List<SpeechRecognitionResult> results = response.get().getResultsList();

  for (SpeechRecognitionResult result: results) {
    // There can be several alternative transcripts for a given chunk of speech. Just use the
    // first (most likely) one here.
    SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
    System.out.printf("Transcription: %s%n", alternative.getTranscript());
  }
  speech.close();
}
 
Developer: GoogleCloudPlatform, project: java-docs-samples, lines: 38, source: Recognize.java

Example 5: asyncRecognizeGcs

import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding; // import the required package/class
/**
 * Performs non-blocking speech recognition on remote FLAC file and prints
 * the transcription.
 *
 * @param gcsUri the path to the remote FLAC audio file to transcribe.
 */
public static void asyncRecognizeGcs(String gcsUri) throws Exception {
  // Instantiates a client with GOOGLE_APPLICATION_CREDENTIALS
  SpeechClient speech = SpeechClient.create();

  // Configure request for the remote FLAC file
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(AudioEncoding.FLAC)
      .setLanguageCode("en-US")
      .setSampleRateHertz(16000)
      .build();
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setUri(gcsUri)
      .build();

  // Use non-blocking call for getting file transcription
  OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> response =
      speech.longRunningRecognizeAsync(config, audio);
  while (!response.isDone()) {
    System.out.println("Waiting for response...");
    Thread.sleep(10000);
  }

  List<SpeechRecognitionResult> results = response.get().getResultsList();

  for (SpeechRecognitionResult result: results) {
    // There can be several alternative transcripts for a given chunk of speech. Just use the
    // first (most likely) one here.
    SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
    System.out.printf("Transcription: %s%n", alternative.getTranscript());
  }
  speech.close();
}
 
Developer: GoogleCloudPlatform, project: java-docs-samples, lines: 39, source: Recognize.java
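The `while (!response.isDone()) { Thread.sleep(10000); }` loop above polls the long-running operation. Since `OperationFuture` is a `Future`, the same wait can be expressed as a single blocking `get` with a timeout. A stdlib-only sketch of that pattern, using a plain `java.util.concurrent.Future` as a stand-in for the Speech client's future:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class AwaitDemo {
  // Block until the future completes (or the timeout expires),
  // instead of sleeping in an isDone() polling loop.
  public static String await(Future<String> response) throws Exception {
    return response.get(10, TimeUnit.SECONDS);
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    // Stand-in for speech.longRunningRecognizeAsync(config, audio).
    Future<String> response = pool.submit(() -> {
      Thread.sleep(200); // simulate the long-running recognition
      return "transcript";
    });
    System.out.println(await(response));
    pool.shutdown();
  }
}
```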

Example 6: main

import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding; // import the required package/class
public static void main(String... args) throws Exception {
  // Instantiates a client
  SpeechClient speech = SpeechClient.create();

  // The path to the audio file to transcribe
  String fileName = "./resources/audio.raw";

  // Reads the audio file into memory
  Path path = Paths.get(fileName);
  byte[] data = Files.readAllBytes(path);
  ByteString audioBytes = ByteString.copyFrom(data);

  // Builds the sync recognize request
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(AudioEncoding.LINEAR16)
      .setSampleRateHertz(16000)
      .setLanguageCode("en-US")
      .build();
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setContent(audioBytes)
      .build();

  // Performs speech recognition on the audio file
  RecognizeResponse response = speech.recognize(config, audio);
  List<SpeechRecognitionResult> results = response.getResultsList();

  for (SpeechRecognitionResult result: results) {
    // There can be several alternative transcripts for a given chunk of speech. Just use the
    // first (most likely) one here.
    SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
    System.out.printf("Transcription: %s%n", alternative.getTranscript());
  }
  speech.close();
}
 
Developer: GoogleCloudPlatform, project: java-docs-samples, lines: 35, source: QuickstartSample.java

Example 7: asyncRecognizeWords

import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding; // import the required package/class
/**
 * Performs non-blocking speech recognition on remote FLAC file and prints
 * the transcription as well as word time offsets.
 *
 * @param gcsUri the path to the remote FLAC audio file to transcribe.
 */
public static void asyncRecognizeWords(String gcsUri) throws Exception {
  // Instantiates a client with GOOGLE_APPLICATION_CREDENTIALS
  SpeechClient speech = SpeechClient.create();

  // Configure request for the remote FLAC file
  RecognitionConfig config = RecognitionConfig.newBuilder()
      .setEncoding(AudioEncoding.FLAC)
      .setLanguageCode("en-US")
      .setSampleRateHertz(16000)
      .setEnableWordTimeOffsets(true)
      .build();
  RecognitionAudio audio = RecognitionAudio.newBuilder()
      .setUri(gcsUri)
      .build();

  // Use non-blocking call for getting file transcription
  OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> response =
      speech.longRunningRecognizeAsync(config, audio);
  while (!response.isDone()) {
    System.out.println("Waiting for response...");
    Thread.sleep(10000);
  }

  List<SpeechRecognitionResult> results = response.get().getResultsList();

  for (SpeechRecognitionResult result: results) {
    // There can be several alternative transcripts for a given chunk of speech. Just use the
    // first (most likely) one here.
    SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
    System.out.printf("Transcription: %s%n", alternative.getTranscript());
    for (WordInfo wordInfo: alternative.getWordsList()) {
      System.out.println(wordInfo.getWord());
      System.out.printf("\t%s.%s sec - %s.%s sec\n",
          wordInfo.getStartTime().getSeconds(),
          wordInfo.getStartTime().getNanos() / 100000000,
          wordInfo.getEndTime().getSeconds(),
          wordInfo.getEndTime().getNanos() / 100000000);
    }
  }
  speech.close();
}
 
Developer: GoogleCloudPlatform, project: java-docs-samples, lines: 48, source: Recognize.java


Note: the com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding examples in this article were compiled by 纯净天空 from open-source code and documentation hosted on GitHub, MSDocs, and similar platforms. The snippets are taken from open-source projects contributed by their respective authors; copyright remains with the original authors, and use or redistribution is subject to each project's License. Please do not republish without permission.