This article collects typical usage examples of the Python method action.Action.get_reference_point. If you have been wondering what Action.get_reference_point does, how to use it, or where to find examples of it in practice, the hand-picked code examples below may help. You can also explore the containing class action.Action for further usage examples.

The following shows 1 code example of the Action.get_reference_point method. Examples are sorted by popularity by default; you can upvote the ones you like or find useful, and your feedback helps the system recommend better Python code examples.
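For orientation before the full example, here is a minimal, hypothetical sketch of calling the method directly. The names Action and get_reference_point come from the example below; the assumption that the return value is a point (e.g. an (x, y) tuple, or None when no reference is set) is ours and is not confirmed by the source:

from action import Action

action = Action()
ref = action.get_reference_point()  # assumed: (x, y) tuple or None
if ref is not None:
    print("Current reference point: %s" % (ref,))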
Example 1: Tracker
# Required import: from action import Action [as alias]
# Or: from action.Action import get_reference_point [as alias]
# (The snippet below also relies on cv2 and on project-local modules:
#  Filters, Detector, KB, Gesture, Output.)
class Tracker(object):
    """
    This is the main program, which gives a high-level view
    of all the running subsystems. It connects camera input with
    output in the form of "actions" (such as keyboard shortcuts on the
    user's behalf). This is done by locating a hand in an image,
    detecting features such as the number of fingers, and trying to
    match that data with a known gesture.
    """
    def __init__(self):
        """
        Configuration
        """
        # Camera settings
        self.FRAME_WIDTH = 341
        self.FRAME_HEIGHT = 256
        self.flip_camera = True  # Mirror image
        self.camera = cv2.VideoCapture(1)
        # ...you can also use a test video for input
        #video = "/Users/matthiasendler/Code/snippets/python/tracker/final/assets/test_video/10.mov"
        #self.camera = cv2.VideoCapture(video)
        #self.skip_input(400) # Skip to an interesting part of the video
        if not self.camera.isOpened():
            print("Couldn't load webcam")
            return
        #self.camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, self.FRAME_WIDTH)
        #self.camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, self.FRAME_HEIGHT)
        self.filters_dir = "filters/"  # Filter settings in trackbar
        self.filters_file = "filters_default"
        # Load filter settings
        current_config = self.filters_dir + self.filters_file
        self.filters = Filters(current_config)
        # No actions will be triggered in test mode
        # (can be used to adjust settings at runtime)
        self.test_mode = False
        # Create a hand detector.
        # In fact, this is a wrapper around several detectors
        # to increase detection confidence.
        self.detector = Detector(self.filters.config)
        # Knowledge base shared by all detectors
        self.kb = KB()
        # Create the gesture recognizer.
        # A gesture consists of a motion and a hand state.
        self.gesture = Gesture()
        # The action module executes keyboard and mouse commands
        self.action = Action()
        # Show the output of the detectors
        self.output = Output()
        self.run()

    def run(self):
        """
        In each step: read the input image and keys,
        process them, and react (e.g. with an action).
        """
        while True:
            img = self.get_input()
            hand = self.process(img)
            # Ask the action module for its current reference point
            # (presumably the anchor used by the output overlay).
            ref = self.action.get_reference_point()
            self.output.show(img, hand, ref)

    def process(self, img):
        """
        Process input
        """
        # Run detection
        hand = self.detector.detect(img)
        # Store the result in the knowledge base
        self.kb.update(hand)
        if not self.test_mode:
            # Try to interpret the input as a gesture
            self.interprete(hand)
        return hand

    def interprete(self, hand):
        """
        Try to interpret the input as a gesture
        """
        self.gesture.add_hand(hand)
        operation = self.gesture.detect_gesture()
        self.action.execute(operation)

    def get_input(self):
        """
        Get input from camera and keyboard
        """
        self.get_key()
        _, img = self.camera.read()
#......... part of the code omitted here .........
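The snippet above comes from a larger project and is not runnable on its own: cv2 plus the project-local Filters, Detector, KB, Gesture, Output, and get_key are defined elsewhere, and the tail of get_input is omitted. To experiment with just the get_reference_point call in the display loop, a minimal stand-in for Action can be substituted; the stub below is purely hypothetical and only mimics the two methods the Tracker example calls:

class ActionStub(object):
    """Hypothetical stand-in for action.Action with the interface
    used by the Tracker example: execute() and get_reference_point()."""
    def __init__(self):
        self._reference_point = None  # assumed: (x, y) tuple or None

    def execute(self, operation):
        # A real Action would trigger keyboard/mouse commands here.
        print("Would execute: %r" % (operation,))

    def get_reference_point(self):
        # Returns the stored reference point; the real implementation
        # presumably derives this from the tracked hand/gesture state.
        return self._reference_point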