

Python dlib.net Method Code Examples

This article collects typical usage examples of the dlib.net method in Python. If you are wondering what the dlib.net method does or how to use it, the curated code examples below may help. You can also explore further usage examples from dlib, the module in which this method lives.


Four code examples of the dlib.net method are shown below, sorted by popularity by default.

Example 1: prepare_data

# Required module: import dlib [as alias]
# Or: from dlib import net [as alias]
# (This snippet also uses the standard-library glob module.)
def prepare_data(video_dir, output_dir, max_video_limit=1, screen_display=False):
	"""
	Args:
		1. video_dir:			Directory containing all videos to be processed.
		2. output_dir:			Directory where all mouth region images will be stored.
		3. max_video_limit:	 	Maximum number of videos to process.
		4. screen_display:		Whether to display each video on screen while it is processed.
	"""

	video_file_paths = sorted(glob.glob(video_dir + "*.mp4"))[:max_video_limit]

	load_trained_models()

	if not FACE_DETECTOR_MODEL:
		print "[ERROR]: Please ensure that you have dlib's landmarks predictor file " + \
			  "at data/dlib_data/. You can download it here: " + \
			  "http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2"
		return False

	for path in video_file_paths:
		extract_mouth_regions(path, output_dir, screen_display)

	return True 
Author: pandeydivesh15 | Project: AVSR-Deep-Speech | Source: data_preprocessing_autoencoder.py
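
For context, here is a minimal usage sketch (not from the original project): the directory paths and the limit value are illustrative, and load_trained_models / extract_mouth_regions are assumed to be the helpers defined alongside prepare_data in the same file.

# Hypothetical driver for prepare_data; paths are placeholders.
if __name__ == "__main__":
	ok = prepare_data(
		video_dir="data/videos/",          # must end with "/" since the function globs video_dir + "*.mp4"
		output_dir="data/mouth_regions/",
		max_video_limit=5,
		screen_display=False)
	if not ok:
		print("dlib model files missing; see the download URL in the error above.")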

Example 2: ensure_dlib_model

# Required module: import dlib [as alias]
# Or: from dlib import net [as alias]
# (This snippet also uses os and urllib.request.)
def ensure_dlib_model():
    if not os.path.isfile(predictor_path):
        import urllib.request
        urllib.request.urlretrieve("http://dlib.net/files/shape_predictor_5_face_landmarks.dat.bz2",
                                   filename="models/shape_predictor_5_face_landmarks.dat.bz2") 
Author: foamliu | Project: FaceNet | Source: pre_process.py
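
Note that this snippet only downloads the compressed .bz2 archive; the predictor file itself still needs to be extracted before dlib can load it. Below is a hedged follow-up sketch, assuming predictor_path points at models/shape_predictor_5_face_landmarks.dat as the snippet suggests.

# Sketch: extract the downloaded archive, then load the 5-point landmark model.
import bz2
import dlib

archive_path = "models/shape_predictor_5_face_landmarks.dat.bz2"
predictor_path = "models/shape_predictor_5_face_landmarks.dat"  # assumed value

ensure_dlib_model()                      # download the archive if needed
with bz2.BZ2File(archive_path) as bzf, open(predictor_path, "wb") as f:
    f.write(bzf.read())                  # decompress to the .dat file

predictor = dlib.shape_predictor(predictor_path)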

Example 3: compute_template

# Required module: import dlib [as alias]
# Or: from dlib import net [as alias]
# (This snippet also uses glob, numpy, skimage.io and skimage.transform.)
def compute_template(globspec='images/lfw_aegan/*/*.png',image_dims=[400,400],predictor_path='models/shape_predictor_68_face_landmarks.dat',center_crop=None,subsample=1):
  # Credit: http://dlib.net/face_landmark_detection.py.html
  detector=dlib.get_frontal_face_detector()
  predictor=dlib.shape_predictor(predictor_path)

  template=numpy.zeros((68,2),dtype=numpy.float64)
  count=0

  if center_crop is not None:
    center_crop=numpy.asarray(center_crop)
    cy,cx=(numpy.asarray(image_dims)-center_crop)//2

  # compute mean landmark locations
  S=sorted(glob.glob(globspec))
  S=S[::subsample]
  for ipath in S:
    print("Processing file: {}".format(ipath))
    img=(skimage.transform.resize(skimage.io.imread(ipath)/255.0,tuple(image_dims)+(3,),order=2,mode='nearest')*255).clip(0,255).astype(numpy.ubyte)
    if center_crop is not None:
      img=img[cy:cy+center_crop[0],cx:cx+center_crop[1]]  # crop height, then width

    upsample=0
    dets=detector(img,upsample)
    if len(dets)!=1: continue

    for k,d in enumerate(dets):
      shape=predictor(img, d)
      for i in range(68):
        template[i]+=(shape.part(i).y,shape.part(i).x)
      count+=1
  template/=float(count)
  return template
  # lfw_aegan 400x400 template map
  # [[ 251.58852868  201.50275826]  # 33 where nose meets upper-lip
  #  [ 172.69409809  168.66523086]  # 39 inner-corner of left eye
  #  [ 171.72236076  232.09718129]] # 42 inner-corner of right eye 
Author: paulu | Project: deepfeatinterp | Source: alignface.py
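
A minimal usage sketch, assuming the default lfw_aegan image layout and the 68-point predictor file are in place; it builds the template from every 10th image and prints the three reference landmarks listed in the comment above.

# Sketch: compute the mean-landmark template and inspect key points.
template = compute_template(
    globspec='images/lfw_aegan/*/*.png',
    image_dims=[400, 400],
    predictor_path='models/shape_predictor_68_face_landmarks.dat',
    subsample=10)                # every 10th image, for speed
for i in (33, 39, 42):           # nose base, inner corners of left/right eye
    print(i, template[i])        # each row is a (y, x) pair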

Example 4: _get_dlib_data_file

# Required module: import dlib [as alias]
# Or: from dlib import net [as alias]
# (This snippet also uses os, bz2, shutil and urllib.request.urlopen.)
def _get_dlib_data_file(dat_name):
    dat_dir = os.path.relpath('%s/../3rdparty' % os.path.dirname(__file__))  # 3rdparty dir beside this file's directory
    dat_path = '%s/%s' % (dat_dir, dat_name)
    if not os.path.isdir(dat_dir):
        os.mkdir(dat_dir)

    # Download trained shape detector
    if not os.path.isfile(dat_path):
        with urlopen('http://dlib.net/files/%s.bz2' % dat_name) as response:
            with bz2.BZ2File(response) as bzf, open(dat_path, 'wb') as f:
                shutil.copyfileobj(bzf, f)

    return dat_path 
Author: swook | Project: GazeML | Source: frames.py
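
A minimal usage sketch, assuming the 68-point landmark file name from dlib.net; the returned path can be passed straight to dlib.shape_predictor.

# Sketch: fetch the model file on first use, then build detector + predictor.
import dlib

dat_path = _get_dlib_data_file('shape_predictor_68_face_landmarks.dat')
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(dat_path)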


Note: The dlib.net method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects and remain the copyright of their original authors; consult each project's license before redistributing or using the code. Do not reproduce this article without permission.