

Python transforms.ColorJitter Code Examples

This article collects typical usage examples of ColorJitter from torchvision.transforms.transforms in Python. If you are wondering how to use transforms.ColorJitter, what it does in practice, or where to find working examples of it, the curated code samples below may help. You can also explore further usage examples from the torchvision.transforms.transforms module.


Two code examples of transforms.ColorJitter are shown below, sorted by popularity.
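Before the examples, here is a minimal, self-contained sketch of how ColorJitter is typically constructed and applied to a PIL image; the parameter values and image path are illustrative, not taken from the examples below.

from PIL import Image
from torchvision.transforms import transforms

# ColorJitter randomly perturbs brightness, contrast, saturation and hue
# within the given bounds each time the transform is applied.
color_jitter = transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1)

img = Image.open('example.jpg')   # illustrative path
jittered = color_jitter(img)      # returns a randomly jittered PIL image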

Example 1: call_image

# Required import: from torchvision.transforms import transforms [as alias]
# Or: from torchvision.transforms.transforms import ColorJitter [as alias]
def call_image(self, img):
    # Build a ColorJitter transform from the stored jitter bounds and apply it to the image.
    return torch_transforms.ColorJitter(self.brightness, self.contrast, self.saturation, self.hue)(img)
Developer: vacancy, Project: Jacinle, Lines of code: 4, Source file: transforms.py
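
A short usage sketch of this wrapper, assuming it is a method of a transform class that stores brightness, contrast, saturation and hue as attributes; the class name and constructor defaults below are hypothetical, not the actual Jacinle implementation.

from PIL import Image
from torchvision.transforms import transforms as torch_transforms

class RandomColorJitter:
    # Hypothetical wrapper that re-creates a ColorJitter transform on every call.
    def __init__(self, brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1):
        self.brightness = brightness
        self.contrast = contrast
        self.saturation = saturation
        self.hue = hue

    def call_image(self, img):
        return torch_transforms.ColorJitter(self.brightness, self.contrast, self.saturation, self.hue)(img)

# Apply the wrapper to a PIL image (path is illustrative).
jittered = RandomColorJitter().call_image(Image.open('example.jpg'))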

Example 2: preprocessImage

# Required import: from torchvision.transforms import transforms [as alias]
# Or: from torchvision.transforms.transforms import ColorJitter [as alias]
def preprocessImage(img, use_color_jitter, image_size_dict, img_norm_info, use_caffe_pretrained_model):
    # calculate target_size and scale_factor, target_size's format is (h, w)
    w_ori, h_ori = img.width, img.height
    if w_ori > h_ori:
        target_size = (image_size_dict.get('SHORT_SIDE'), image_size_dict.get('LONG_SIDE'))
    else:
        target_size = (image_size_dict.get('LONG_SIDE'), image_size_dict.get('SHORT_SIDE'))
    h_t, w_t = target_size
    scale_factor = min(w_t/w_ori, h_t/h_ori)
    target_size = (round(scale_factor*h_ori), round(scale_factor*w_ori))
    # define and do transform
    if use_caffe_pretrained_model:
        means_norm = img_norm_info['caffe'].get('mean_rgb')
        stds_norm = img_norm_info['caffe'].get('std_rgb')
        if use_color_jitter:
            transform = transforms.Compose([transforms.Resize(target_size),
                                            transforms.ColorJitter(brightness=0.5, contrast=0.5, saturation=0.5, hue=0.1),
                                            transforms.ToTensor(),
                                            transforms.Normalize(mean=means_norm, std=stds_norm)])
        else:
            transform = transforms.Compose([transforms.Resize(target_size),
                                            transforms.ToTensor(),
                                            transforms.Normalize(mean=means_norm, std=stds_norm)])
        # Caffe-pretrained backbones expect pixel values in [0, 255] and BGR channel order.
        img = transform(img) * 255
        img = img[(2, 1, 0), :, :]
    else:
        means_norm = img_norm_info['pytorch'].get('mean_rgb')
        stds_norm = img_norm_info['pytorch'].get('std_rgb')
        if use_color_jitter:
            transform = transforms.Compose([transforms.Resize(target_size),
                                            transforms.ColorJitter(brightness=0.5, contrast=0.5, saturation=0.5, hue=0.1),
                                            transforms.ToTensor(),
                                            transforms.Normalize(mean=means_norm, std=stds_norm)])
        else:
            transform = transforms.Compose([transforms.Resize(target_size),
                                            transforms.ToTensor(),
                                            transforms.Normalize(mean=means_norm, std=stds_norm)])
        img = transform(img)
    # return necessary data
    return img, scale_factor, target_size
Developer: DetectionBLWX, Project: FasterRCNN.pytorch, Lines of code: 42, Source file: COCODataset.py
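
A possible way to call preprocessImage, assuming the function above is available at module level together with the import from torchvision.transforms import transforms; the image path, size settings and normalization statistics below are illustrative placeholders rather than the values used in FasterRCNN.pytorch (the 'pytorch' entry uses the standard ImageNet mean/std).

from PIL import Image

# Illustrative configuration; the real values in FasterRCNN.pytorch may differ.
image_size_dict = {'SHORT_SIDE': 600, 'LONG_SIDE': 1000}
img_norm_info = {
    # Only the PyTorch-pretrained entry is filled in here (standard ImageNet statistics);
    # the 'caffe' entry is omitted because this call does not use it.
    'pytorch': {'mean_rgb': [0.485, 0.456, 0.406], 'std_rgb': [0.229, 0.224, 0.225]},
}

img = Image.open('example.jpg').convert('RGB')  # illustrative path
tensor, scale_factor, target_size = preprocessImage(
    img,
    use_color_jitter=True,
    image_size_dict=image_size_dict,
    img_norm_info=img_norm_info,
    use_caffe_pretrained_model=False,
)
print(tensor.shape, scale_factor, target_size)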


Note: The torchvision.transforms.transforms.ColorJitter examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets come from open-source projects contributed by their respective authors; copyright remains with the original authors, and any distribution or use should follow the corresponding project's license. Please do not reproduce without permission.