This article collects typical usage examples of the Python method visualization_utils.visualize_boxes_and_labels_on_image_array. If you have been wondering how exactly to use visualization_utils.visualize_boxes_and_labels_on_image_array in Python, the curated code samples below may help. You can also explore the module visualization_utils itself for further usage examples.
Below are 2 code examples of visualization_utils.visualize_boxes_and_labels_on_image_array, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code samples.
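Before the examples, here is a minimal sketch of the call itself with dummy inputs. It assumes the TensorFlow Object Detection API's visualization_utils module is importable under that name; the image size, box, class id and label map below are placeholders, not values from the examples.

import numpy as np
import visualization_utils as vis_util

# Dummy inputs: one blank 480x640 RGB image and a single detection given in
# normalized [ymin, xmin, ymax, xmax] coordinates (placeholder values).
image_np = np.zeros((480, 640, 3), dtype=np.uint8)
boxes = np.array([[0.1, 0.1, 0.5, 0.5]], dtype=np.float32)
classes = np.array([1], dtype=np.int32)
scores = np.array([0.9], dtype=np.float32)
category_index = {1: {'id': 1, 'name': 'person'}}  # placeholder label map

# Draws the box and label onto image_np in place and returns the same array.
vis_util.visualize_boxes_and_labels_on_image_array(
    image_np,
    boxes,
    classes,
    scores,
    category_index,
    use_normalized_coordinates=True,
    min_score_thresh=0.5)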
Example 1: _draw_detections
# Required import: import visualization_utils [as alias]
# Or: from visualization_utils import visualize_boxes_and_labels_on_image_array [as alias]
def _draw_detections(image_np, detections, category_index):
  """Draws detections onto the image.

  Args:
    image_np: Image in the form of a uint8 numpy array.
    detections: a dictionary that contains the detection outputs.
    category_index: contains the mapping between indexes and category names.

  Returns:
    Does not return anything; the boxes are drawn onto image_np in place.
  """
  vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      detections['detection_boxes'],
      detections['detection_classes'],
      detections['detection_scores'],
      category_index,
      use_normalized_coordinates=True,
      max_boxes_to_draw=1000,
      min_score_thresh=.0,
      agnostic_mode=False)
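Both examples also assume a category_index that maps integer class ids to dicts with 'id' and 'name' keys. A hedged sketch of how it is typically built, assuming the full TF Object Detection API is installed and the label map path is a placeholder:

from object_detection.utils import label_map_util  # assumption: TF OD API install

# Build category_index from a label map file (placeholder path).
category_index = label_map_util.create_category_index_from_labelmap(
    'path/to/label_map.pbtxt', use_display_name=True)

# Or construct it by hand for a small, fixed set of classes.
category_index = {
    1: {'id': 1, 'name': 'person'},
    2: {'id': 2, 'name': 'bicycle'},
}

With that in place, _draw_detections(image_np, detections, category_index) annotates image_np in place; note that max_boxes_to_draw=1000 and min_score_thresh=.0 mean every returned detection is drawn, not only the high-confidence ones.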
Example 2: camThread
# Required import: import visualization_utils [as alias]
# Or: from visualization_utils import visualize_boxes_and_labels_on_image_array [as alias]
def camThread():
    # Wait for a coherent pair of frames: depth and color
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()
    if not depth_frame or not color_frame:
        return
    # Convert images to numpy arrays
    depth_image = np.asanyarray(depth_frame.get_data())
    color_image = np.asanyarray(color_frame.get_data())
    height = color_image.shape[0]
    width = color_image.shape[1]
    frame_expanded = np.expand_dims(color_image, axis=0)
    # Perform the actual detection by running the model with the image as input
    (boxes, scores, classes, num) = sess.run(
        [detection_boxes, detection_scores, detection_classes, num_detections],
        feed_dict={image_tensor: frame_expanded})
    # Draw the results of the detection (i.e. visualize the results).
    # Note: depth_frame, height and width are not arguments of the stock
    # TF Object Detection API function; this example relies on a customized
    # visualization_utils module.
    img = vis_util.visualize_boxes_and_labels_on_image_array(
        color_image,
        np.squeeze(boxes),
        np.squeeze(classes).astype(np.int32),
        np.squeeze(scores),
        category_index,
        use_normalized_coordinates=True,
        line_thickness=2,
        min_score_thresh=0.55,
        depth_frame=depth_frame,
        height=height,
        width=width)
    # Upload the annotated frame as an OpenGL texture, draw it on a
    # full-screen quad, then present the frame.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB,
                 GL_UNSIGNED_BYTE, cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glColor3f(1.0, 1.0, 1.0)
    glEnable(GL_TEXTURE_2D)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glBegin(GL_QUADS)
    glTexCoord2d(0.0, 1.0)
    glVertex3d(-1.0, -1.0, 0.0)
    glTexCoord2d(1.0, 1.0)
    glVertex3d(1.0, -1.0, 0.0)
    glTexCoord2d(1.0, 0.0)
    glVertex3d(1.0, 1.0, 0.0)
    glTexCoord2d(0.0, 0.0)
    glVertex3d(-1.0, 1.0, 0.0)
    glEnd()
    glFlush()
    glutSwapBuffers()
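camThread relies on several globals created elsewhere in the original script: a pyrealsense2 pipeline, a TF1 session over a frozen detection graph, the named output tensors, a category_index, and a GLUT window whose display callback is camThread. A minimal setup sketch under those assumptions (the model path, stream resolution, window size and label map below are placeholders):

import cv2
import numpy as np
import pyrealsense2 as rs
import tensorflow as tf  # TF1-style graph/session API, matching sess.run above
from OpenGL.GL import *
from OpenGL.GLUT import *
import visualization_utils as vis_util

# RealSense: stream color and depth frames (placeholder resolution/framerate).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Load a frozen TF1 detection graph (placeholder path).
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')
sess = tf.Session(graph=detection_graph)

# Tensor handles used by camThread's sess.run call.
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')

category_index = {1: {'id': 1, 'name': 'person'}}  # placeholder label map

# GLUT window that calls camThread as its display callback.
glutInit()
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH)
glutInitWindowSize(640, 480)
glutCreateWindow(b'detections')
glutDisplayFunc(camThread)
glutIdleFunc(glutPostRedisplay)  # keep redrawing with fresh camera frames
glutMainLoop()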