

C++ vpPoint::get_y Method Code Examples

This article collects typical usage examples of the C++ vpPoint::get_y method. If you are wondering what vpPoint::get_y does, how to use it, or are looking for concrete examples, the hand-picked code samples below may help. You can also explore further usage examples of the vpPoint class that this method belongs to.


The following section shows five code examples of the vpPoint::get_y method, sorted by popularity by default.

Example 1: sqrt

/*!

  Initialize a point feature with polar coordinates
  \f$(\rho,\theta)\f$ using the coordinates of the point
  \f$(x,y,Z)\f$, where \f$(x,y)\f$ correspond to the perspective
  projection of the point in the image plane and \f$Z\f$ the 3D depth
  of the point in the camera frame. The values of \f$(x,y,Z)\f$ are
  expressed in meters.

  This function is intended to introduce noise into the conversion from
  cartesian to polar coordinates. The cartesian \f$(x,y)\f$ coordinates
  are first converted to pixel coordinates in the image using the \e
  goodCam camera parameters. Then, the pixel coordinates of the point
  are converted back to cartesian coordinates \f$(x^{'},y^{'})\f$ using
  the noisy camera parameters \e wrongCam. From these new coordinates
  in the image plane, the polar coordinates are computed as:

  \f[\rho = \sqrt{x^2+y^2}  \hbox{,}\; \; \theta = \arctan \frac{y}{x}\f]

  \param s : Visual feature \f$(\rho,\theta)\f$ and \f$Z\f$ to initialize.

  \param goodCam : Camera parameters used to convert the cartesian
  coordinates of the point \e p in the image plane into pixel
  coordinates.

  \param wrongCam : Noisy camera parameters used to convert the pixel
  coordinates of the point back into cartesian coordinates in the
  image plane.

  \param p : A point with \f$(x,y)\f$ cartesian coordinates in the
  image plane corresponding to the camera perspective projection, and
  with 3D depth \f$Z\f$.
*/
void
vpFeatureBuilder::create(vpFeaturePointPolar &s,
			 const vpCameraParameters &goodCam,
			 const vpCameraParameters &wrongCam,
			 const vpPoint &p)
{
  try {
    double x = p.get_x();
    double y = p.get_y();

    s.set_Z( p.get_Z() );

    double u=0, v=0;
    vpMeterPixelConversion::convertPoint(goodCam, x, y, u, v);
    vpPixelMeterConversion::convertPoint(wrongCam, u, v, x, y);

    double rho   = sqrt(x*x + y*y);
    double theta = atan2(y, x);

    s.set_rho(rho) ;
    s.set_theta(theta) ;
  }
  catch(...) {
    vpERROR_TRACE("Error caught") ;
    throw ;
  }
}
Developer ID: 976717326, Project: visp, Lines of code: 60, Source file: vpFeatureBuilderPointPolar.cpp
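
A minimal usage sketch of this builder (not part of the original example). All numeric values below (intrinsics, world coordinates, pose) are placeholders, and the <visp/...> include paths assume an older ViSP layout; newer releases use <visp3/...> instead.

#include <visp/vpPoint.h>
#include <visp/vpCameraParameters.h>
#include <visp/vpHomogeneousMatrix.h>
#include <visp/vpFeaturePointPolar.h>
#include <visp/vpFeatureBuilder.h>

int main()
{
  // "Good" and "wrong" intrinsics (px, py, u0, v0); their mismatch is what introduces the noise.
  vpCameraParameters goodCam(600.0, 600.0, 320.0, 240.0);
  vpCameraParameters wrongCam(590.0, 610.0, 325.0, 235.0);

  // 3D point defined in the object frame, expressed in the camera frame, then projected.
  vpPoint p;
  p.setWorldCoordinates(0.1, 0.2, 0.0);
  vpHomogeneousMatrix cMo(0.0, 0.0, 1.0, 0.0, 0.0, 0.0); // pose (tx, ty, tz, tux, tuy, tuz)
  p.changeFrame(cMo); // compute (X, Y, Z) in the camera frame
  p.project();        // perspective projection -> (x, y) in the image plane

  // Build the (rho, theta) feature and its depth Z with the noisy conversion described above.
  vpFeaturePointPolar s;
  vpFeatureBuilder::create(s, goodCam, wrongCam, p);
  return 0;
}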

Example 2: catch

/*!
  Initialize a vpFeatureVanishingPoint from a vpPoint.
  The vpFeatureVanishingPoint is initialized from the parameters of the point in the image plane.
  All the parameters are given in meters.

  \param s : Visual feature to initialize.

  \param t : The vpPoint used to create the vpFeatureVanishingPoint.
*/
void
vpFeatureBuilder::create(vpFeatureVanishingPoint &s, const vpPoint &t)
{
  try
  {
    s.set_x( t.get_x()) ;
    s.set_y( t.get_y()) ;
  }
  catch(...)
  {
    vpERROR_TRACE("Cannot create vanishing point feature") ;
    throw ;
  }
}
Developer ID: nttputus, Project: visp, Lines of code: 23, Source file: vpFeatureBuilderVanishingPoint.cpp
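
A correspondingly small usage sketch (again hypothetical, with arbitrary image-plane coordinates given in meters):

#include <visp/vpPoint.h>
#include <visp/vpFeatureVanishingPoint.h>
#include <visp/vpFeatureBuilder.h>

int main()
{
  // Point whose (x, y) coordinates are already expressed in the image plane, in meters.
  vpPoint t;
  t.set_x(0.02);
  t.set_y(-0.01);

  // Copy those coordinates into the vanishing point feature.
  vpFeatureVanishingPoint s;
  vpFeatureBuilder::create(s, t);
  return 0;
}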

Example 3: v

/*!
  From the coordinates of a point in image plane b and the homography between
  image planes a and b, computes the coordinates of the point in image plane a.

  \param b_P : 2D coordinates of the point in the image plane b.

  \return A point with 2D coordinates in the image plane a.
*/
vpPoint vpHomography::operator*(const vpPoint& b_P) const
{
  vpPoint a_P ;
  vpColVector v(3),v1(3) ;

  v[0] = b_P.get_x() ;
  v[1] = b_P.get_y() ;
  v[2] = b_P.get_w() ;

  v1[0] = (*this)[0][0]*v[0] + (*this)[0][1]*v[1]+ (*this)[0][2]*v[2] ;
  v1[1] = (*this)[1][0]*v[0] + (*this)[1][1]*v[1]+ (*this)[1][2]*v[2] ;
  v1[2] = (*this)[2][0]*v[0] + (*this)[2][1]*v[1]+ (*this)[2][2]*v[2] ;

  //  v1 = M*v ;
  a_P.set_x(v1[0]) ;
  a_P.set_y(v1[1]) ;
  a_P.set_w(v1[2]) ;

  return a_P ;
}
Developer ID: DaikiMaekawa, Project: visp, Lines of code: 28, Source file: vpHomography.cpp
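
A usage sketch for this operator (hypothetical values). Since the result is returned with homogeneous coordinates (x, y, w), it is divided by w to recover the image-plane coordinates. The default-constructed homography is assumed to be the identity; in practice it would be estimated, e.g. with vpHomography::HLM() as in Example 4 below.

#include <visp/vpPoint.h>
#include <visp/vpHomography.h>
#include <iostream>

int main()
{
  // Homography from image plane b to image plane a.
  vpHomography aHb;

  // Point in image plane b, in homogeneous (x, y, w) form.
  vpPoint b_P;
  b_P.set_x(0.05);
  b_P.set_y(0.03);
  b_P.set_w(1.0);

  // Transfer the point and normalize to get its image-plane coordinates in image a.
  vpPoint a_P = aHb * b_P;
  double xa = a_P.get_x() / a_P.get_w();
  double ya = a_P.get_y() / a_P.get_w();
  std::cout << "Point in image a: (" << xa << ", " << ya << ")" << std::endl;
  return 0;
}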

Example 4: H

/*!
\brief Computes the camera pose from the image of a rectangle and the
intrinsic parameters; the length along the x axis is known, but the
proportions of the rectangle are unknown.

This method is taken from "Markerless Tracking using Planar Structures
in the Scene" by Gilles Simon. The idea is to compute the homography H
mapping the rectangle corners, associated with the coordinates
(0,0), (1,0), (1,1/s), (0,1/s) (the rectangle lies in the Z=0 plane),
to their image points. If K is the intrinsic parameter matrix and h1, h2
are the first two columns of H, then s = ||K^-1 h1|| / ||K^-1 h2||; s
gives the proportions of the rectangle.

\param p1,p2,p3,p4 : the images of the corners of the rectangle
(respectively the images of (0,0), (lx,0), (lx,lx/s) and (0,lx/s)) (input)
\param cam : the camera used (input)
\param lx : the rectangle size along the x axis (input)
\param cMo : the camera pose (output)
\return The rectangle size along the y axis, lx/s.
*/
double
vpPose::poseFromRectangle(vpPoint &p1,vpPoint &p2,
                          vpPoint &p3,vpPoint &p4,
                          double lx, vpCameraParameters & cam,
                          vpHomogeneousMatrix & cMo)
{

  double rectx[4] ;
  double recty[4] ;
  rectx[0]= 0 ;
  recty[0]=0 ;
  rectx[1]=1 ;
  recty[1]=0 ;
  rectx[2]=1 ;
  recty[2]=1 ;
  rectx[3]=0 ;
  recty[3]=1 ;
  double irectx[4] ;
  double irecty[4] ;
  irectx[0]=(p1.get_x()) ;
  irecty[0]=(p1.get_y()) ;
  irectx[1]=(p2.get_x()) ;
  irecty[1]=(p2.get_y()) ;
  irectx[2]=(p3.get_x()) ;
  irecty[2]=(p3.get_y()) ;
  irectx[3]=(p4.get_x()) ;
  irecty[3]=(p4.get_y()) ;

  // compute the homography
  vpMatrix H(3,3);
  vpHomography hom;

  //  vpHomography::HartleyDLT(4,rectx,recty,irectx,irecty,hom);
  vpHomography::HLM(4,rectx,recty,irectx,irecty,1,hom);
  for (unsigned int i=0 ; i < 3 ; i++)
    for(unsigned int j=0 ; j < 3 ; j++)
      H[i][j] = hom[i][j] ;
  // compute s = ||Kinv h1|| / ||Kinv h2|| = ratio (length on x axis / length on y axis)
  vpColVector kh1(3);
  vpColVector kh2(3);
  vpMatrix K(3,3);
  K = cam.get_K();
  K.setIdentity();
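  // Note: K is overwritten with the identity right after being read, so Kinv*H below
  // effectively reduces to H (the corner coordinates are already expressed in the
  // normalized image plane, in meters).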
  vpMatrix Kinv =K.pseudoInverse();

  vpMatrix KinvH =Kinv*H;
  kh1=KinvH.column(1);
  kh2=KinvH.column(2);


  double s= sqrt(kh1.sumSquare())/sqrt(kh2.sumSquare());


  vpMatrix D(3,3);
  D.setIdentity();
  D[1][1]=1/s;
  vpMatrix cHo=H*D;

  // compute the rotation and the translation
  //  PoseFromRectangle(p1,p2,p3,p4,1/s,lx,cam,cMo );
  p1.setWorldCoordinates(0,0,0) ;
  p2.setWorldCoordinates(lx,0,0) ;
  p3.setWorldCoordinates(lx,lx/s,0) ;
  p4.setWorldCoordinates(0,lx/s,0) ;


  vpPose P ;
  P.addPoint(p1) ;
  P.addPoint(p2) ;
  P.addPoint(p3) ;
  P.addPoint(p4) ;


  P.computePose(vpPose::DEMENTHON_LOWE,cMo) ;
  return lx/s ;

}
Developer ID: GVallicrosa, Project: Armed-turtlebot, Lines of code: 96, Source file: vpPose.cpp
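
A hedged usage sketch: the corner coordinates are placeholders assumed to be already expressed in the image plane (in meters), e.g. obtained with vpPixelMeterConversion::convertPoint from detected pixel positions, and poseFromRectangle is called through a vpPose instance so the sketch does not depend on whether it is declared static.

#include <visp/vpPoint.h>
#include <visp/vpCameraParameters.h>
#include <visp/vpHomogeneousMatrix.h>
#include <visp/vpPose.h>
#include <iostream>

int main()
{
  // Images of the four rectangle corners in the image plane, in meters (placeholder values).
  vpPoint p1, p2, p3, p4;
  p1.set_x(-0.10); p1.set_y(-0.05);
  p2.set_x( 0.10); p2.set_y(-0.06);
  p3.set_x( 0.11); p3.set_y( 0.05);
  p4.set_x(-0.09); p4.set_y( 0.06);

  vpCameraParameters cam(600.0, 600.0, 320.0, 240.0); // (px, py, u0, v0), illustrative
  vpHomogeneousMatrix cMo;                             // output: camera pose
  double lx = 0.2;                                     // known rectangle size along x, in meters

  vpPose pose;
  double ly = pose.poseFromRectangle(p1, p2, p3, p4, lx, cam, cMo);
  std::cout << "Estimated size along y: " << ly << std::endl;
  std::cout << "cMo:" << std::endl << cMo << std::endl;
  return 0;
}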

Example 5: size

/*!
  Test the visibility of a line. The result is a subsampled version of the given line made of all its visible parts.

  \param a : First point of the line.
  \param b : Second point of the line.
  \param lines : List of line segments corresponding to the visible parts of the given line.
  \param displayResults : True if the results have to be displayed, false otherwise.
*/
void
vpMbScanLine::queryLineVisibility(vpPoint a,
                         vpPoint b,
                         std::vector<std::pair<vpPoint, vpPoint> > &lines, const bool &displayResults)
{
  vpColVector _a, _b;
  createVectorFromPoint(a, _a, K);
  createVectorFromPoint(b, _b, K);

  double x0 = _a[0] / _a[2];
  double y0 = _a[1] / _a[2];
  double z0 = _a[2];
  double x1 = _b[0] / _b[2];
  double y1 = _b[1] / _b[2];
  double z1 = _b[2];

  vpMbScanLineEdge edge = makeMbScanLineEdge(a, b);
  lines.clear();

  if(displayResults){
#if defined(VISP_HAVE_X11) && defined(DEBUG_DISP)
    double i1(0.0), j1(0.0), i2(0.0), j2(0.0);
    a.project();
    b.project();
    vpMeterPixelConversion::convertPoint(K,a.get_x(), a.get_y(),j1,i1);
    vpMeterPixelConversion::convertPoint(K,b.get_x(), b.get_y(),j2,i2);

    vpDisplay::displayLine(linedebugImg,i1,j1,i2,j2,vpColor::yellow,3);
#endif
  }

  if (!visibility_samples.count(edge))
      return;

  // Initialized as the biggest difference between the two points is on the X-axis
  double *v0(&x0), *w0(&z0);
  double *v1(&x1), *w1(&z1);
  unsigned int size(w);

  if (std::fabs(y0 - y1) > std::fabs(x0 - x1)) // Test if the biggest difference is on the Y-axis
  {
    v0 = &y0;
    v1 = &y1;
    size = h;
  }

  if (*v0 > *v1)
  {
      std::swap(v0, v1);
      std::swap(w0, w1);
      std::swap(a, b);
  }

  //if (*v0 >= size - 1 || *v1 < 0 || *v1 == *v0)
  if (*v0 >= size - 1 || *v1 < 0 || std::fabs(*v1 - *v0) <= std::numeric_limits<double>::epsilon())
      return;

  const int _v0 = std::max(0, int(std::ceil(*v0)));
  const int _v1 = std::min<int>((int)(size - 1), (int)(std::ceil(*v1) - 1));

  const std::set<int> &visible_samples = visibility_samples[edge];
  int last = _v0;
  vpPoint line_start;
  vpPoint line_end;
  bool b_line_started = false;
  for(std::set<int>::const_iterator it = visible_samples.begin() ; it != visible_samples.end() ; ++it)
  {
      const int v = *it;
      const double alpha = getAlpha(v, (*v0) * (*w0), (*w0), (*v1) * (*w1), (*w1));
      const vpPoint p = mix(a, b, alpha);
      if (last + 1 != v)
      {
          if(b_line_started)
            lines.push_back(std::make_pair(line_start, line_end));
          b_line_started = false;
      }
      if (v == _v0)
      {
          line_start = a;
          line_end = p;
          b_line_started = true;
      }
      else if (v == _v1)
      {
          line_end = b;
          if (!b_line_started)
              line_start = p;
          b_line_started = true;
      }
      else
      {
          line_end = p;
//......... part of the code is omitted here .........
Developer ID: tswang, Project: visp, Lines of code: 101, Source file: vpMbScanLine.cpp
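
Inside the loop above, a blending factor is obtained from getAlpha() (a helper not shown in this excerpt) and used to mix the endpoints a and b. The following self-contained sketch shows one way such a perspective-correct factor can be derived from the arguments of the call getAlpha(v, v0*w0, w0, v1*w1, w1); it is a generic derivation for illustration, not necessarily ViSP's exact implementation.

#include <cstdio>

// Perspective-correct blending factor: the 3D point (1 - alpha) * A + alpha * B projects to
//   v = ((1 - alpha) * v0 * w0 + alpha * v1 * w1) / ((1 - alpha) * w0 + alpha * w1),
// where v0, v1 are the projected coordinates of A and B and w0, w1 their depths.
// Solving this equation for alpha gives:
double perspectiveAlpha(double v, double v0w0, double w0, double v1w1, double w1)
{
  const double num = v * w0 - v0w0;
  const double den = (v1w1 - v0w0) - v * (w1 - w0);
  return num / den;
}

int main()
{
  // Endpoints projected at 10 and 110 along the dominant image axis, at depths 1 m and 2 m.
  const double v0 = 10.0, w0 = 1.0, v1 = 110.0, w1 = 2.0;
  const double alpha = perspectiveAlpha(60.0, v0 * w0, w0, v1 * w1, w1);
  std::printf("alpha at v = 60: %.3f\n", alpha); // 0.333: the sample is a third of the way along the 3D segment
  return 0;
}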


Note: The vpPoint::get_y method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; the copyright of the source code belongs to the original authors. For distribution and use, please refer to the corresponding project's license. Do not reproduce without permission.