

Python prediction_service_pb2.beta_create_PredictionService_stub Method Code Examples

This article collects typical usage examples of the Python method tensorflow_serving.apis.prediction_service_pb2.beta_create_PredictionService_stub. If you are wondering what this method does, how to call it, or what real-world usage looks like, the curated code examples below should help. You can also explore further usage examples from the containing module, tensorflow_serving.apis.prediction_service_pb2.


The sections below present 15 code examples of the prediction_service_pb2.beta_create_PredictionService_stub method, sorted by popularity.
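Before the individual examples, here is a minimal, self-contained sketch of the typical call pattern. It assumes TensorFlow 1.x and the (now-deprecated) beta gRPC API; the server address, model name, and input tensor name 'x' are placeholder assumptions, not taken from any example below:

import numpy as np
import tensorflow as tf
from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

# Placeholder server address and model details -- adjust to your deployment.
channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'my_model'  # assumed model name
request.model_spec.signature_name = 'serving_default'
request.inputs['x'].CopyFrom(  # 'x' is an assumed input tensor name
    tf.contrib.util.make_tensor_proto(np.zeros((1, 4), dtype=np.float32)))

result = stub.Predict(request, 10.0)  # 10-second timeout
print(result)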

Example 1: run

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def run(host, port, test_json, model_name, signature_name):

    # channel = grpc.insecure_channel('%s:%d' % (host, port))
    channel = implementations.insecure_channel(host, port)
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

    with open(test_json, "r") as frobj:
        content = json.load(frobj)
        print(len(content), "======")

    start = time.time()

    for i, input_dict in enumerate(content):
        request = prepare_grpc_request(model_name, signature_name, input_dict)
        result = stub.Predict(request, 10.0)
        print(result, i)

    end = time.time()
    time_diff = end - start
    print('time elapsed: {}'.format(time_diff))
Author: yyht, Project: BERT, Lines: 22, Source: test_grpc_serving.py

Example 2: main

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def main():
  # Generate inference data
  keys = numpy.asarray([1, 2, 3, 4])
  keys_tensor_proto = tf.contrib.util.make_tensor_proto(keys, dtype=tf.int32)
  features = numpy.asarray(
      [[1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 1, 1, 1, 1, 1, 1, 1, 1],
       [9, 8, 7, 6, 5, 4, 3, 2, 1], [9, 9, 9, 9, 9, 9, 9, 9, 9]])
  features_tensor_proto = tf.contrib.util.make_tensor_proto(
      features, dtype=tf.float32)

  # Create gRPC client
  channel = implementations.insecure_channel(FLAGS.host, FLAGS.port)
  stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
  request = predict_pb2.PredictRequest()
  request.model_spec.name = FLAGS.model_name
  if FLAGS.model_version > 0:
    request.model_spec.version.value = FLAGS.model_version
  if FLAGS.signature_name != "":
    request.model_spec.signature_name = FLAGS.signature_name
  request.inputs["keys"].CopyFrom(keys_tensor_proto)
  request.inputs["features"].CopyFrom(features_tensor_proto)

  # Send request
  result = stub.Predict(request, FLAGS.request_timeout)
  print(result) 
Author: tobegit3hub, Project: tensorflow_template_application, Lines: 27, Source: predict_client.py

Example 3: _do_local_inference

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def _do_local_inference(host, port, serialized_examples, model_name):
  """Performs inference on a model hosted by the host:port server."""

  channel = implementations.insecure_channel(host, int(port))
  stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

  request = predict_pb2.PredictRequest()
  # request.model_spec.name = 'chicago_taxi'
  request.model_spec.name = model_name
  request.model_spec.signature_name = 'predict'

  tfproto = tf.contrib.util.make_tensor_proto([serialized_examples],
                                              shape=[len(serialized_examples)],
                                              dtype=tf.string)
  # The name of the input tensor is 'examples' based on
  # https://github.com/tensorflow/tensorflow/blob/r1.9/tensorflow/python/estimator/export/export.py#L290
  request.inputs['examples'].CopyFrom(tfproto)
  print(stub.Predict(request, _LOCAL_INFERENCE_TIMEOUT_SECONDS)) 
Author: amygdala, Project: code-snippets, Lines: 20, Source: chicago_taxi_client.py
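The serialized_examples argument in Example 3 is a list of serialized tf.train.Example protos. A hypothetical way to build such a list (the feature name 'trip_miles' is an assumption, not taken from the project):

import tensorflow as tf

example = tf.train.Example(features=tf.train.Features(feature={
    'trip_miles': tf.train.Feature(float_list=tf.train.FloatList(value=[1.2])),
}))
serialized_examples = [example.SerializeToString()]  # list of serialized protos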

Example 4: _create_stub

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def _create_stub(server):
  host, port = server.split(":")
  channel = implementations.insecure_channel(host, int(port))
  # TODO(bgb): Migrate to GA API.
  return prediction_service_pb2.beta_create_PredictionService_stub(channel) 
Author: akzaidi, Project: fine-lm, Lines: 7, Source: serving_utils.py
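The TODO in Example 4 refers to the deprecation of the beta gRPC API. A sketch of the GA-API counterpart, assuming a tensorflow_serving release that ships prediction_service_pb2_grpc (this is not the project's own code):

import grpc
from tensorflow_serving.apis import prediction_service_pb2_grpc

def _create_stub_ga(server):
  # GA channels take a single 'host:port' address string.
  channel = grpc.insecure_channel(server)
  return prediction_service_pb2_grpc.PredictionServiceStub(channel)

The returned GA stub is called the same way, e.g. stub.Predict(request, timeout=10.0).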

Example 5: main

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def main(_):
    if not FLAGS.text:
        raise ValueError("No --text provided")
    host, port = FLAGS.server.split(':')
    channel = implementations.insecure_channel(host, int(port))
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
    request = Request(FLAGS.text, FLAGS.ngrams)
    result = stub.Classify(request, 10.0)  # 10 secs timeout
    print(result) 
Author: apcode, Project: tensorflow_fasttext, Lines: 11, Source: predictor_client.py

Example 6: do_inference

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def do_inference(num_tests, concurrency=1):
  channel = implementations.insecure_channel(host, int(port))
  stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

  coord = _Coordinator(num_tests, concurrency)

  for _ in range(num_tests):
    # dummy audio
    duration, sr = 4, 16000
    n_fft, win_length, hop_length = 512, 512, 128
    n_mels, max_db, min_db = 80, 35, -55
    filename = librosa.util.example_audio_file()
    wav = read_wav(filename, sr=sr, duration=duration)
    mel = wav2melspec_db(wav, sr, n_fft, win_length, hop_length, n_mels)
    mel = normalize_db(mel, max_db=max_db, min_db=min_db)
    mel = mel.astype(np.float32)
    mel = np.expand_dims(mel, axis=0)  # single batch
    n_timesteps = sr // hop_length * duration + 1  # integer division keeps the shape integral

    # build request
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'voice_vector'
    request.model_spec.signature_name = 'predict'
    request.inputs['x'].CopyFrom(tf.contrib.util.make_tensor_proto(mel, shape=[1, n_timesteps, n_mels]))

    coord.throttle()

    # send an asynchronous request (recommended)
    result_future = stub.Predict.future(request, 10.0)  # timeout
    result_future.add_done_callback(_create_rpc_callback(coord))

    # send a synchronous request (not recommended)
    # result = stub.Predict(request, 5.0)

  coord.wait_all_done() 
Author: andabi, Project: voice-vector, Lines: 35, Source: client.py
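The _create_rpc_callback helper in Example 6 is not shown in the snippet. A hypothetical minimal version, following the usual pattern for gRPC futures (coord.done() is an assumed coordinator method):

def _create_rpc_callback(coord):
  def _callback(result_future):
    # Log the error if the RPC failed; otherwise print the result.
    exception = result_future.exception()
    if exception:
      print(exception)
    else:
      print(result_future.result())
    coord.done()  # hypothetical: mark one in-flight request as finished
  return _callback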

Example 7: main

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def main(_):
    host, port = FLAGS.server.split(':')
    channel = implementations.insecure_channel(host, int(port))
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
    # Send request
    image = tf.gfile.FastGFile(FLAGS.image, 'rb').read()
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'tensorflow-serving'
    request.model_spec.signature_name = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
    request.inputs['image'].CopyFrom(tf.contrib.util.make_tensor_proto(image))
    #request.inputs['input'].CopyFrom()

    result = stub.Predict(request, 10.0)  # 10 secs timeout
    print(result) 
Author: microsoft, Project: MMdnn, Lines: 16, Source: client.py

Example 8: do_inference

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def do_inference(hostport, work_dir, concurrency, num_tests):
    """Tests PredictionService with concurrent requests.
    Args:
        hostport: Host:port address of the PredictionService.
        work_dir: The full path of working directory for test data set.
        concurrency: Maximum number of concurrent requests.
        num_tests: Number of test images to use.
    Returns:
        The classification error rate.
    Raises:
        IOError: An error occurred processing test data set.
    """
    test_data_set = mnist_input_data.read_data_sets(work_dir).test
    host, port = hostport.split(':')
    channel = implementations.insecure_channel(host, int(port))
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
    result_counter = _ResultCounter(num_tests, concurrency)
    for _ in range(num_tests):
        request = predict_pb2.PredictRequest()
        request.model_spec.name = 'mnist'
        request.model_spec.signature_name = 'predict_images'
        image, label = test_data_set.next_batch(1)
        request.inputs['images'].CopyFrom(
            tf.contrib.util.make_tensor_proto(image[0], shape=[1, image[0].size]))
        result_counter.throttle()
        result_future = stub.Predict.future(request, 5.0)  # 5 seconds
        result_future.add_done_callback(
            _create_rpc_callback(label[0], result_counter))
    return result_counter.get_error_rate() 
Author: Lapis-Hong, Project: wide_deep, Lines: 31, Source: client.py

Example 9: main

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def main(_):
    host, port = FLAGS.server.split(':')
    channel = implementations.insecure_channel(host, int(port))
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = FLAGS.model
    request.model_spec.signature_name = 'serving_default'
    # feature_dict = {'age': _float_feature(value=25),
    #               'capital_gain': _float_feature(value=0),
    #               'capital_loss': _float_feature(value=0),
    #               'education': _bytes_feature(value='11th'.encode()),
    #               'education_num': _float_feature(value=7),
    #               'gender': _bytes_feature(value='Male'.encode()),
    #               'hours_per_week': _float_feature(value=40),
    #               'native_country': _bytes_feature(value='United-States'.encode()),
    #               'occupation': _bytes_feature(value='Machine-op-inspct'.encode()),
    #               'relationship': _bytes_feature(value='Own-child'.encode()),
    #               'workclass': _bytes_feature(value='Private'.encode())}
    # label = 0
    data = _read_test_input()
    feature_dict = pred_input_fn(data)

    example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
    serialized = example.SerializeToString()

    request.inputs['inputs'].CopyFrom(
        tf.contrib.util.make_tensor_proto(serialized, shape=[1]))

    result_future = stub.Predict.future(request, 5.0)
    prediction = result_future.result().outputs['scores']

    # print('True label: ' + str(label))
    print('Prediction: ' + str(np.argmax(prediction.float_val))) 
Author: Lapis-Hong, Project: wide_deep, Lines: 36, Source: client.py

Example 10: do_inference

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def do_inference(self, output_dir, image_path=None, image_np=None):
    """Tests PredictionService with concurrent requests.

    Args:
      output_dir: Directory to output image.
      image_path: Path to image.
      image_np: Image in np format. Ignored when image_path is set.

    Returns:
      `output_dir`.
    """
    if image_path is None and image_np is None:
      raise ValueError('Either `image_np` or `image_path` must be specified.')

    if image_path:
      image_resized = util_io.imread(image_path, (self.image_hw, self.image_hw))
    else:
      image_resized = scipy.misc.imresize(image_np, (self.image_hw, self.image_hw))
    # TODO: do preprocessing in a separate function. Check whether image has already been preprocessed.
    image = np.expand_dims(image_resized / np.float32(255.0), 0)

    stub = prediction_service_pb2.beta_create_PredictionService_stub(self.channel)
    request = predict_pb2.PredictRequest()
    request.CopyFrom(self.request_template)
    self._request_set_input_image(request, image)
    result_future = stub.Predict.future(request, 5.0)  # 5 seconds
    result_future.add_done_callback(self._create_rpc_callback(output_dir))
    return output_dir 
Author: jerryli27, Project: TwinGAN, Lines: 30, Source: twingan_client.py

Example 11: predict

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def predict(image_data,
            model_name='inception',
            host='localhost',
            port=9000,
            timeout=10):
  """
  Arguments:
    image_data (list): A list of image data. The image data should either be the image bytes or
      float arrays.
    model_name (str): The name of the model to query (specified when you started the Server)
    host (str): The machine host identifier that the classifier is running on.
    port (int): The port that the classifier is listening on.
    timeout (int): Time in seconds before timing out.

  Returns:
    PredictResponse protocol buffer. See here: https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/predict.proto
  """

  if len(image_data) <= 0:
    return None

  channel = implementations.insecure_channel(host, int(port))
  stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
  request = predict_pb2.PredictRequest()
  request.model_spec.name = model_name

  if isinstance(image_data[0], str):
    request.model_spec.signature_name = 'predict_image_bytes'
    request.inputs['images'].CopyFrom(
        tf.contrib.util.make_tensor_proto(image_data, shape=[len(image_data)]))
  else:
    request.model_spec.signature_name = 'predict_image_array'
    request.inputs['images'].CopyFrom(
        tf.contrib.util.make_tensor_proto(image_data, shape=[len(image_data), len(image_data[0])]))

  result = stub.Predict(request, timeout)
  return result 
Author: visipedia, Project: tf_classification, Lines: 40, Source: tfserver.py
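A hypothetical call to this wrapper with raw image bytes (the file path and server details are assumptions; note that under Python 2, bytes read from a file are str, which selects the predict_image_bytes signature):

with open('image.jpg', 'rb') as f:  # assumed image path
  image_bytes = f.read()
response = predict([image_bytes], model_name='inception', host='localhost', port=9000)
if response is not None:
  print(response)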

Example 12: process_image

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def process_image(path, label_data, top_k=3):
    start_time = datetime.now()
    img = imread(path)

    host, port = "0.0.0.0:9000".split(":")
    channel = implementations.insecure_channel(host, int(port))
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "pet-model"
    request.model_spec.signature_name = "predict_images"

    request.inputs["images"].CopyFrom(
        tf.contrib.util.make_tensor_proto(
            img.astype(dtype=float),
            shape=img.shape, dtype=tf.float32
        )
    )

    result = stub.Predict(request, 20.)
    scores = tf.contrib.util.make_ndarray(result.outputs["scores"])[0]
    probs = softmax(scores)
    index = sorted(range(len(probs)), key=lambda x: probs[x], reverse=True)

    outputs = []
    for i in range(top_k):
        outputs.append(Output(score=float(probs[index[i]]), label=label_data[index[i]]))

    print(outputs)
    print("total time", (datetime.now() - start_time).total_seconds())
    return outputs 
Author: PacktPublishing, Project: Machine-Learning-with-TensorFlow-1.x, Lines: 33, Source: client.py

Example 13: get_prediction_service_stub

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def get_prediction_service_stub(host, port):
    channel = implementations.insecure_channel(host, port)
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
    return stub 
Author: PacktPublishing, Project: -Learn-Artificial-Intelligence-with-TensorFlow, Lines: 6, Source: predict.py

Example 14: main

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def main(_):
  host, port = FLAGS.server.split(':')
  channel = implementations.insecure_channel(host, int(port))
  stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
  # Send request
  with open(FLAGS.image, 'rb') as f:
    # See prediction_service.proto for gRPC request/response details.
    data = f.read()
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'inception'
    request.model_spec.signature_name = 'predict_images'
    request.inputs['images'].CopyFrom(
        tf.contrib.util.make_tensor_proto(data, shape=[1]))
    result = stub.Predict(request, 10.0)  # 10 secs timeout
    print(result) 
Author: PipelineAI, Project: models, Lines: 17, Source: inception_client.py

Example 15: get_prediction

# Required import: from tensorflow_serving.apis import prediction_service_pb2 [as alias]
# Or: from tensorflow_serving.apis.prediction_service_pb2 import beta_create_PredictionService_stub [as alias]
def get_prediction(image, server_host='127.0.0.1', server_port=9000,
                   server_name="server", timeout=10.0):
    """
    Retrieve a prediction from a TensorFlow model server

    :param image:       a MNIST image represented as a 1x784 array
    :param server_host: the address of the TensorFlow server
    :param server_port: the port used by the server
    :param server_name: the name of the server
    :param timeout:     the amount of time to wait for a prediction to complete
    :return 0:          the integer predicted in the MNIST image
    :return 1:          the confidence scores for all classes
    :return 2:          the version number of the model handling the request
    """

    print("connecting to:%s:%i" % (server_host, server_port))
    # initialize to server connection
    channel = implementations.insecure_channel(server_host, server_port)
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

    # build request
    request = predict_pb2.PredictRequest()
    request.model_spec.name = server_name
    request.model_spec.signature_name = 'predict_images'
    request.inputs['images'].CopyFrom(
        tf.contrib.util.make_tensor_proto(image, shape=image.shape))
 
    # retrieve results
    result = stub.Predict(request, timeout)
    resultVal = result.outputs['prediction'].int64_val
    scores = result.outputs['scores'].float_val
    version = result.outputs['model-version'].string_val
    return resultVal[0], scores, version[0] 
Author: googlecodelabs, Project: kubeflow-introduction, Lines: 35, Source: mnist_client.py
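A hypothetical invocation of get_prediction, assuming a flattened 1x784 MNIST image as a NumPy float array and a model served under the name 'mnist':

import numpy as np

image = np.zeros((1, 784), dtype=np.float32)  # placeholder image data
digit, scores, version = get_prediction(image, server_host='127.0.0.1',
                                        server_port=9000, server_name='mnist')
print('predicted digit: %d (model version %s)' % (digit, version))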


Note: The tensorflow_serving.apis.prediction_service_pb2.beta_create_PredictionService_stub examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their original authors, who retain copyright; consult each project's license before distributing or reusing the code. Please do not reproduce this article without permission.