

Java ConvertBufferedImage.convertFrom Method Code Examples

This article collects typical usage examples of the Java method boofcv.io.image.ConvertBufferedImage.convertFrom. If you are wondering what ConvertBufferedImage.convertFrom does or how to use it, the curated code samples below may help. You can also explore other usage examples of the enclosing class, boofcv.io.image.ConvertBufferedImage.


The following shows 12 code examples of the ConvertBufferedImage.convertFrom method, sorted by popularity by default.
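Before the examples, a quick sketch of what ConvertBufferedImage.convertFrom does: it copies a BufferedImage's pixel data into a BoofCV image type such as GrayU8 or Planar<GrayF32>. The effect of the single-band gray conversion can be illustrated with plain JDK calls (an illustration only; BoofCV's real implementation is optimized and handles many raster formats, and the unweighted band average used here is an assumption about its gray conversion, not verified source):

```java
import java.awt.image.BufferedImage;

public class ConvertSketch {
    // Sketch of ConvertBufferedImage.convertFrom(image, (GrayU8) null) for the
    // gray case: each pixel's intensity lands in a flat row-major byte array,
    // which is how BoofCV's GrayU8 stores its data. The unweighted average of
    // the three color bands is an assumption, not BoofCV's verified source.
    static byte[] toGrayU8(BufferedImage image) {
        int w = image.getWidth(), h = image.getHeight();
        byte[] data = new byte[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = image.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                data[y * w + x] = (byte) ((r + g + b) / 3);
            }
        }
        return data;
    }
}
```

With the actual BoofCV dependency on the classpath, the one-liner seen throughout the examples below, `GrayU8 gray = ConvertBufferedImage.convertFrom(image, (GrayU8) null);`, replaces all of this.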

Example 1: coupledHueSat

import boofcv.io.image.ConvertBufferedImage; // import the required package/class
/**
 * HSV stores color information in Hue and Saturation while intensity is in Value.  This computes a 2D histogram
 * from hue and saturation only, which makes it lighting independent.
 */
public static double[] coupledHueSat(BufferedImage image) {
    Planar<GrayF32> rgb = new Planar<>(GrayF32.class, image.getWidth(), image.getHeight(), 3);
    Planar<GrayF32> hsv = new Planar<>(GrayF32.class, image.getWidth(), image.getHeight(), 3);

    ConvertBufferedImage.convertFrom(image, rgb, true);
    ColorHsv.rgbToHsv_F32(rgb, hsv);

    Planar<GrayF32> hs = hsv.partialSpectrum(0, 1);

    // The number of bins is an important parameter.  Try adjusting it
    Histogram_F64 histogram = new Histogram_F64(10, 10);
    histogram.setRange(0, 0, 2.0 * Math.PI); // range of hue is from 0 to 2PI
    histogram.setRange(1, 0, 1.0);         // range of saturation is from 0 to 1

    // Compute the histogram
    GHistogramFeatureOps.histogram(hs, histogram);
    histogram.value[0] = 0.0; // remove black

    UtilFeature.normalizeL2(histogram); // normalize so that image size doesn't matter

    return histogram.value;
}
 
Developer: tomwhite | Project: set-game | Lines: 27 | Source: ImageUtils.java
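Several of these examples finish with UtilFeature.normalizeL2 so that the descriptor's magnitude does not depend on image size. The underlying math can be sketched in plain Java (an illustration of L2 normalization, not BoofCV's source):

```java
public class NormalizeSketch {
    // Divide each element by the vector's Euclidean (L2) norm, in place,
    // so the sum of squares becomes 1. No-op for the all-zero vector.
    static void normalizeL2(double[] v) {
        double sumSq = 0;
        for (double x : v) sumSq += x * x;
        if (sumSq == 0) return;
        double norm = Math.sqrt(sumSq);
        for (int i = 0; i < v.length; i++) v[i] /= norm;
    }
}
```

After this step, two histograms from images of different sizes can be compared directly, e.g. with a dot product or Euclidean distance.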

Example 2: coupledRGB

import boofcv.io.image.ConvertBufferedImage; // import the required package/class
/**
 * Constructs a 3D histogram using RGB.  RGB is a popular color space, but the resulting histogram will
 * depend on lighting conditions and might not produce accurate results.
 */
public static double[] coupledRGB(BufferedImage image) {

    Planar<GrayF32> rgb = new Planar<>(GrayF32.class,1,1,3);

    rgb.reshape(image.getWidth(), image.getHeight());
    ConvertBufferedImage.convertFrom(image, rgb, true);

    // The number of bins is an important parameter.  Try adjusting it
    Histogram_F64 histogram = new Histogram_F64(5,5,5);
    histogram.setRange(0, 0, 255);
    histogram.setRange(1, 0, 255);
    histogram.setRange(2, 0, 255);

    GHistogramFeatureOps.histogram(rgb,histogram);
    histogram.value[0] = 0.0; // remove black

    UtilFeature.normalizeL2(histogram); // normalize so that image size doesn't matter

    return histogram.value;
}
 
Developer: tomwhite | Project: set-game | Lines: 25 | Source: ImageUtils.java

Example 3: getEdgePixels

import boofcv.io.image.ConvertBufferedImage; // import the required package/class
public static boolean[] getEdgePixels(MultiImage img, boolean[] out) {
	LOGGER.traceEntry();

	if (out == null || out.length != img.getWidth() * img.getHeight()) {
		out = new boolean[img.getWidth() * img.getHeight()];
	}

	GrayU8 gray = ConvertBufferedImage.convertFrom(img.getBufferedImage(), (GrayU8) null);

	if (!isSolid(gray)) {
		getCanny().process(gray, THRESHOLD_LOW, THRESHOLD_HIGH, gray);
	}

	for (int i = 0; i < gray.data.length; ++i) {
		out[i] = (gray.data[i] != 0);
	}

	LOGGER.traceExit();
	return out;
}
 
Developer: vitrivr | Project: cineast | Lines: 22 | Source: EdgeImg.java

Example 4: getCannyContours

import boofcv.io.image.ConvertBufferedImage; // import the required package/class
public static List<Contour> getCannyContours(BufferedImage image) {
	GrayU8 gray = ConvertBufferedImage.convertFrom(image, (GrayU8) null);
	GrayU8 edgeImage = gray.createSameShape();
	canny.process(gray, 0.1f, 0.3f, edgeImage);
	List<Contour> contours = BinaryImageOps.contour(edgeImage, ConnectRule.EIGHT, null);

	return contours;
}
 
Developer: ForOhForError | Project: MTG-Card-Recognizer | Lines: 9 | Source: FindCardCandidates.java

Example 5: independentHueSat

import boofcv.io.image.ConvertBufferedImage; // import the required package/class
/**
 * Computes two independent 1D histograms from hue and saturation.  Less affected by sparsity, but can produce
 * worse results, since the underlying assumption that hue and saturation are decoupled is usually false.
 */
public static double[] independentHueSat(BufferedImage image) {
    // The number of bins is an important parameter.  Try adjusting it
    TupleDesc_F64 histogramHue = new TupleDesc_F64(5);
    TupleDesc_F64 histogramValue = new TupleDesc_F64(5);

    List<TupleDesc_F64> histogramList = new ArrayList<>();
    histogramList.add(histogramHue); histogramList.add(histogramValue);

    Planar<GrayF32> rgb = new Planar<>(GrayF32.class,1,1,3);
    Planar<GrayF32> hsv = new Planar<>(GrayF32.class,1,1,3);

    rgb.reshape(image.getWidth(), image.getHeight());
    hsv.reshape(image.getWidth(), image.getHeight());
    ConvertBufferedImage.convertFrom(image, rgb, true);
    ColorHsv.rgbToHsv_F32(rgb, hsv);

    GHistogramFeatureOps.histogram(hsv.getBand(0), 0, 2*Math.PI,histogramHue);
    GHistogramFeatureOps.histogram(hsv.getBand(1), 0, 1, histogramValue);

    // need to combine them into a single descriptor for processing later on
    TupleDesc_F64 imageHist = UtilFeature.combine(histogramList,null);

    UtilFeature.normalizeL2(imageHist); // normalize so that image size doesn't matter

    return imageHist.value;
}
 
Developer: tomwhite | Project: set-game | Lines: 31 | Source: ImageUtils.java

Example 6: run

import boofcv.io.image.ConvertBufferedImage; // import the required package/class
private void run() throws IOException {
    BufferedImage image = UtilImageIO.loadImage(UtilIO.pathExample("C:\\development\\readySET\\deck\\1221.png"));

    GrayU8 gray = ConvertBufferedImage.convertFrom(image,(GrayU8)null);
    GrayU8 edgeImage = gray.createSameShape();

    // Create a canny edge detector which will dynamically compute the threshold based on maximum edge intensity
    // It has also been configured to save the trace as a graph.  This is the graph created while performing
    // hysteresis thresholding.
    CannyEdge<GrayU8,GrayS16> canny = FactoryEdgeDetectors.canny(2,true, true, GrayU8.class, GrayS16.class);

    // The edge image is actually an optional parameter.  If you don't need it just pass in null
    canny.process(gray,0.1f,0.3f,edgeImage);

    // First get the contour created by canny
    List<EdgeContour> edgeContours = canny.getContours();
    // The 'edgeContours' is a tree graph that can be difficult to process.  An alternative is to extract
    // the contours from the binary image, which will produce a single loop for each connected cluster of pixels.
    // Note that you are only interested in external contours.
    List<Contour> contours = BinaryImageOps.contour(edgeImage, ConnectRule.EIGHT, null);

    // display the results
    BufferedImage visualBinary = VisualizeBinaryData.renderBinary(edgeImage, false, null);
    BufferedImage visualCannyContour = VisualizeBinaryData.renderContours(edgeContours,null,
            gray.width,gray.height,null);
    BufferedImage visualEdgeContour = new BufferedImage(gray.width, gray.height,BufferedImage.TYPE_INT_RGB);
    VisualizeBinaryData.render(contours, (int[]) null, visualEdgeContour);

    ListDisplayPanel panel = new ListDisplayPanel();
    panel.addImage(visualBinary,"Binary Edges from Canny");
    panel.addImage(visualCannyContour, "Canny Trace Graph");
    panel.addImage(visualEdgeContour,"Contour from Canny Binary");
    ShowImages.showWindow(panel,"Canny Edge", true);
}
 
Developer: tuomilabs | Project: readySET | Lines: 35 | Source: Converter.java
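The comment in the example above mentions hysteresis thresholding, the step that makes Canny's two thresholds (0.1f and 0.3f here) cooperate: pixels whose edge intensity exceeds the high threshold seed edges, and connected pixels above the low threshold are retained. A simplified, self-contained sketch of that step (4-connectivity, float intensities; not BoofCV's implementation):

```java
import java.util.ArrayDeque;

public class HysteresisSketch {
    // Hysteresis thresholding: pixels >= high seed an edge; any pixel
    // 4-connected to an accepted pixel is kept if it is >= low; all
    // other pixels are discarded. Returns a boolean edge mask.
    static boolean[][] hysteresis(float[][] intensity, float low, float high) {
        int h = intensity.length, w = intensity[0].length;
        boolean[][] edge = new boolean[h][w];
        ArrayDeque<int[]> stack = new ArrayDeque<>();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (intensity[y][x] < high || edge[y][x]) continue;
                // seed a new edge and grow it into connected weak pixels
                edge[y][x] = true;
                stack.push(new int[]{x, y});
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    int[][] nbrs = {{p[0] + 1, p[1]}, {p[0] - 1, p[1]},
                                    {p[0], p[1] + 1}, {p[0], p[1] - 1}};
                    for (int[] n : nbrs) {
                        if (n[0] >= 0 && n[0] < w && n[1] >= 0 && n[1] < h
                                && !edge[n[1]][n[0]] && intensity[n[1]][n[0]] >= low) {
                            edge[n[1]][n[0]] = true;
                            stack.push(n);
                        }
                    }
                }
            }
        }
        return edge;
    }
}
```

This is why lone weak responses (noise) disappear while weak pixels along a genuine edge survive.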

Example 7: process

import boofcv.io.image.ConvertBufferedImage; // import the required package/class
protected static float[] process(MultiImage img, float[] hist) {
  GrayU8 gray = ConvertBufferedImage.convertFrom(img.getBufferedImage(), (GrayU8) null);
  int width = img.getWidth(), height = img.getHeight();
  for (int x = 0; x < 4; ++x) {
    for (int y = 0; y < 4; ++y) {
      GrayU8 subImage = gray
          .subimage(width * x / 4, height * y / 4, width * (x + 1) / 4, height * (y + 1) / 4,
              null);
      int count = 0;
      int[] tmp = new int[5];
      for (int xx = 0; xx < subImage.getWidth() - 1; xx += 2) {
        for (int yy = 0; yy < subImage.getHeight() - 1; yy += 2) {
          count++;
          int index = edgeType(
              subImage.unsafe_get(xx, yy),
              subImage.unsafe_get(xx + 1, yy),
              subImage.unsafe_get(xx, yy + 1),
              subImage.unsafe_get(xx + 1, yy + 1)
          );
          if (index > -1) {
            tmp[index]++;
          }
        }
      }
      int offset = (4 * x + y) * 5;
      for (int i = 0; i < 5; ++i) {
        hist[offset + i] += ((float) tmp[i]) / (float) count;
      }
    }
  }
  return hist;
}
 
Developer: vitrivr | Project: cineast | Lines: 33 | Source: EHD.java
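The edgeType helper called above is not included in this snippet. The class name (EHD) suggests the MPEG-7 Edge Histogram Descriptor, where each 2x2 block is matched against five directional filters (vertical, horizontal, 45 degrees, 135 degrees, non-directional) and the strongest response above a threshold wins. The following is a hypothetical reconstruction in that spirit; the filter coefficients and threshold are assumptions, not the project's actual values:

```java
public class EhdSketch {
    // Hypothetical edgeType(): a, b are the top-left/top-right samples and
    // c, d the bottom-left/bottom-right samples of a 2x2 block, matching the
    // unsafe_get call order in the example. Returns 0=vertical, 1=horizontal,
    // 2=45 degrees, 3=135 degrees, 4=non-directional, or -1 when the best
    // filter response stays below the (assumed) threshold.
    static int edgeType(int a, int b, int c, int d) {
        double[] resp = {
            Math.abs((a + c) - (b + d)),             // vertical filter {1,-1,1,-1}
            Math.abs((a + b) - (c + d)),             // horizontal filter {1,1,-1,-1}
            Math.abs(Math.sqrt(2) * (a - d)),        // 45-degree filter
            Math.abs(Math.sqrt(2) * (b - c)),        // 135-degree filter
            Math.abs(2 * a - 2 * b - 2 * c + 2 * d)  // non-directional filter {2,-2,-2,2}
        };
        int best = 0;
        for (int i = 1; i < resp.length; i++) {
            if (resp[i] > resp[best]) best = i;
        }
        return resp[best] >= 14 ? best : -1; // threshold value is an assumption
    }
}
```

With -1 meaning "no edge", the `if (index > -1)` guard in the example simply skips homogeneous blocks, which is why count still includes them in the normalization.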

Example 8: getEdgeImg

import boofcv.io.image.ConvertBufferedImage; // import the required package/class
public static MultiImage getEdgeImg(MultiImage img) {
	LOGGER.traceEntry();

	GrayU8 gray = ConvertBufferedImage.convertFrom(img.getBufferedImage(), (GrayU8) null);
	if(!isSolid(gray)){
		getCanny().process(gray, THRESHOLD_LOW, THRESHOLD_HIGH, gray);
	}

	BufferedImage bout = VisualizeBinaryData.renderBinary(gray, false, null);

	return LOGGER.traceExit(MultiImageFactory.newMultiImage(bout));
}
 
Developer: vitrivr | Project: cineast | Lines: 13 | Source: EdgeImg.java

Example 9: getEdgeList

import boofcv.io.image.ConvertBufferedImage; // import the required package/class
public static List<EdgeContour> getEdgeList(MultiImage img){
	LOGGER.traceEntry();
	BufferedImage withBackground = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_INT_RGB);
	Graphics g = withBackground.getGraphics();
	g.setColor(Color.white);
	g.fillRect(0, 0, img.getWidth(), img.getHeight());
	g.drawImage(img.getBufferedImage(), 0, 0, null);
	GrayU8 gray = ConvertBufferedImage.convertFrom(withBackground, (GrayU8) null);
	CannyEdge<GrayU8, GrayS16> canny = getCanny();
	canny.process(gray, THRESHOLD_LOW, THRESHOLD_HIGH, null);
	List<EdgeContour> _return = canny.getContours();
	LOGGER.traceExit();
	return _return;
}
 
Developer: vitrivr | Project: cineast | Lines: 15 | Source: EdgeList.java

Example 10: coupledHueSat

import boofcv.io.image.ConvertBufferedImage; // import the required package/class
/**
 * HSV stores color information in Hue and Saturation while intensity is in Value.  This computes a 2D histogram
 * from hue and saturation only, which makes it lighting independent.
 */
public double[] coupledHueSat(byte[] image) throws IOException {
	Planar<GrayF32> rgb = new Planar<GrayF32>(GrayF32.class,1,1,3);
	Planar<GrayF32> hsv = new Planar<GrayF32>(GrayF32.class,1,1,3);

	BufferedImage buffered = ImageIO.read(new ByteArrayInputStream(image));
	if (buffered == null) {
		throw new RuntimeException("Can't load image!");
	}

	rgb.reshape(buffered.getWidth(), buffered.getHeight());
	hsv.reshape(buffered.getWidth(), buffered.getHeight());

	ConvertBufferedImage.convertFrom(buffered, rgb, true);
	ColorHsv.rgbToHsv_F32(rgb, hsv);

	Planar<GrayF32> hs = hsv.partialSpectrum(0,1);

	// The number of bins is an important parameter.  Try adjusting it
	Histogram_F64 histogram = new Histogram_F64(12,12);
	histogram.setRange(0, 0, 2.0 * Math.PI); // range of hue is from 0 to 2PI
	histogram.setRange(1, 0, 1.0);         // range of saturation is from 0 to 1

	// Compute the histogram
	GHistogramFeatureOps.histogram(hs,histogram);

	UtilFeature.normalizeL2(histogram); // normalize so that image size doesn't matter

	return histogram.value;
}
 
Developer: BotLibre | Project: BotLibre | Lines: 34 | Source: Vision.java
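Both coupledHueSat examples rely on ColorHsv.rgbToHsv_F32, which produces hue in radians; that is why the hue histogram range is set to 0 to 2*PI. The standard per-pixel conversion can be sketched as follows (an illustration of the conventional formula; BoofCV's actual implementation may differ in edge-case handling):

```java
public class HsvSketch {
    // Standard RGB -> HSV for one pixel: hue in radians [0, 2*PI),
    // saturation in [0, 1], value = max channel. Gray pixels (delta == 0)
    // get hue 0 by convention here.
    static float[] rgbToHsv(float r, float g, float b) {
        float max = Math.max(r, Math.max(g, b));
        float min = Math.min(r, Math.min(g, b));
        float delta = max - min;
        float v = max;
        float s = max == 0 ? 0 : delta / max;
        float h;
        if (delta == 0)      h = 0;
        else if (max == r)   h = (g - b) / delta;
        else if (max == g)   h = 2 + (b - r) / delta;
        else                 h = 4 + (r - g) / delta;
        h *= Math.PI / 3;                 // sextant -> radians
        if (h < 0) h += 2 * Math.PI;
        return new float[]{h, s, v};
    }
}
```

Because hue and saturation ignore the value band, histograms built from them are largely insensitive to brightness changes, which is the point of the coupledHueSat descriptor.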

Example 11: getDensePaths

import boofcv.io.image.ConvertBufferedImage; // import the required package/class
public static LinkedList<Pair<Integer,ArrayList<AssociatedPair>>> getDensePaths(List<VideoFrame> videoFrames){
	if(videoFrames.size() < 2){
		return null;
	}

	PkltConfig configKlt = new PkltConfig(3, new int[] { 1, 2, 4 });
	configKlt.config.maxPerPixelError = 45;
	ImageGradient<GrayU8, GrayS16> gradient = FactoryDerivative.sobel(GrayU8.class, GrayS16.class);
	PyramidDiscrete<GrayU8> pyramidForeward = FactoryPyramid.discreteGaussian(configKlt.pyramidScaling,-1,2,true,GrayU8.class);
	PyramidDiscrete<GrayU8> pyramidBackward = FactoryPyramid.discreteGaussian(configKlt.pyramidScaling,-1,2,true,GrayU8.class);
	PyramidKltTracker<GrayU8, GrayS16> trackerForeward = FactoryTrackerAlg.kltPyramid(configKlt.config, GrayU8.class, null);
	PyramidKltTracker<GrayU8, GrayS16> trackerBackward = FactoryTrackerAlg.kltPyramid(configKlt.config, GrayU8.class, null);
	
	GrayS16[] derivX = null;
	GrayS16[] derivY = null;
	
	LinkedList<PyramidKltFeature> tracks = new LinkedList<PyramidKltFeature>();
	LinkedList<Pair<Integer,ArrayList<AssociatedPair>>> paths = new LinkedList<Pair<Integer,ArrayList<AssociatedPair>>>();
	
	GrayU8 gray = null;
	int frameIdx = 0;
	int cnt = 0;
	for (VideoFrame videoFrame : videoFrames){
		++frameIdx;
		
		if(cnt >= frameInterval){
			cnt = 0;
			continue;
		}
		cnt += 1;
		
		gray = ConvertBufferedImage.convertFrom(videoFrame.getImage().getBufferedImage(), gray);
		ArrayList<AssociatedPair> tracksPairs = new ArrayList<AssociatedPair>();
		
		if (frameIdx == 1){ // first frame (frameIdx was already incremented above): seed the tracks
			tracks = denseSampling(gray, derivX, derivY, samplingInterval, configKlt, gradient, pyramidBackward, trackerBackward);
		}
		else{
			tracking(gray, derivX, derivY, tracks, tracksPairs, gradient, pyramidForeward, pyramidBackward, trackerForeward, trackerBackward);
			tracks = denseSampling(gray, derivX, derivY, samplingInterval, configKlt, gradient, pyramidBackward, trackerBackward);
		}
		
		paths.add(new Pair<Integer,ArrayList<AssociatedPair>>(frameIdx,tracksPairs));
	}
	return paths;
}
 
Developer: vitrivr | Project: cineast | Lines: 47 | Source: PathList.java

Example 12: run

import boofcv.io.image.ConvertBufferedImage; // import the required package/class
/**
 * Invoke to start the main processing loop.
 */
public void run() {
  webcam = UtilWebcamCapture.openDefault(desiredWidth, desiredHeight);
  // Mapper mapperX = new Mapper(0,desiredWidth,0.0,1.0);

  // adjust the window size and let the GUI know it has changed
  Dimension actualSize = webcam.getViewSize();
  setPreferredSize(actualSize);
  setMinimumSize(actualSize);
  window.setMinimumSize(actualSize);
  window.setPreferredSize(actualSize);
  window.setVisible(true);

  // create an input image of the type the tracker expects
  T input = tracker.getImageType().createImage(actualSize.width, actualSize.height);

  workImage = new BufferedImage(input.getWidth(), input.getHeight(), BufferedImage.TYPE_INT_RGB);
  processing = true;

  while (processing) {
    BufferedImage buffered = webcam.getImage();

    ConvertBufferedImage.convertFrom(buffered, input, true); // convert the same frame we will draw

    // mode is read/written to by the GUI also
    int mode = this.mode;

    boolean success = false;
    if (mode == 2) {
      Rectangle2D_F64 rect = new Rectangle2D_F64();
      rect.set(point0.x, point0.y, point1.x, point1.y);
      UtilPolygons2D_F64.convert(rect, target);
      success = tracker.initialize(input, target);
      this.mode = success ? 3 : 0;
    } else if (mode == 3) {
      success = tracker.process(input, target);
    }

    synchronized (workImage) {
      // copy the latest image into the work buffer
      Graphics2D g2 = workImage.createGraphics();
      g2.drawImage(buffered, 0, 0, null);

      // visualize the current results
      if (mode == 1) {
        drawSelected(g2);
      } else if (mode == 3) {
        if (success) {
          drawTrack(g2);

        }
      }
    }

    repaint();
  }
}
 
Developer: MyRobotLab | Project: myrobotlab | Lines: 60 | Source: ObjectTracker.java


Note: the boofcv.io.image.ConvertBufferedImage.convertFrom examples in this article were collected from open-source code and documentation platforms such as GitHub/MSDocs. The snippets come from open-source projects contributed by their respective authors, and copyright remains with them. Consult each project's license before distributing or reusing the code; do not reproduce without permission.