OpenCV - How to get real world distance from a 2D image using a chessboard as reference
After checking several pieces of code, I took some pictures, found the chessboard corners and used them to get the camera matrix, distortion coefficients, rotation and translation vectors. Now, can someone tell me which Python OpenCV function I need to calculate the distance in the real world from the 2D image? projectPoints? For example, using the chessboard as a reference (see picture): if the tile size is 5 cm, the distance spanned by 4 tiles should come out as 20 cm. I have seen functions such as projectPoints, findHomography and solvePnP, but I am not sure which of them I need to solve my problem and obtain the transformation matrix between the camera world and the chessboard world. Setup: a single camera, the camera position is the same in all cases but not exactly above the chessboard, and the chessboard is placed on a flat object (a table).
```python
import glob
from os import path

import cv2
import numpy as np

# nx, ny: number of inner corners per chessboard row/column (defined elsewhere)
# calib_images_dir: directory containing the calibration images
# verbose: show the detected corners while processing

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)

# Arrays to store object points and image points from all the images.
objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane.

# Make a list of calibration images
images = glob.glob(path.join(calib_images_dir, 'calibration*.jpg'))
print(images)

# Step through the list and search for chessboard corners
for filename in images:
    img = cv2.imread(filename)
    imgScale = 0.5
    newX, newY = img.shape[1] * imgScale, img.shape[0] * imgScale
    res = cv2.resize(img, (int(newX), int(newY)))
    gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)

    # Find the chessboard corners
    pattern_found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)

    # If found, add object points, image points (after refining them)
    if pattern_found is True:
        objpoints.append(objp)

        # Increase accuracy using subpixel corner refinement
        cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1),
                         (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))
        imgpoints.append(corners)

        if verbose:
            # Draw and display the corners
            draw = cv2.drawChessboardCorners(res, (nx, ny), corners, pattern_found)
            cv2.imshow('img', draw)
            cv2.waitKey(500)

if verbose:
    cv2.destroyAllWindows()

# Now we have our object points and image points, we are ready to go for calibration
# Get the camera matrix, distortion coefficients, rotation and translation vectors
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)

print(mtx)
print(dist)
print('rvecs:', type(rvecs), ' ', len(rvecs), ' ', rvecs)
print('tvecs:', type(tvecs), ' ', len(tvecs), ' ', tvecs)

# Reprojection error
mean_error = 0
for i in range(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
    mean_error += error
print("total error:", mean_error / len(objpoints))

imagePoints, jacobian = cv2.projectPoints(objpoints[0], rvecs[0], tvecs[0], mtx, dist)
print('Image points: ', imagePoints)
```
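One detail worth noting before going further: `objp` above uses "one chessboard square" as its unit. If the real square size is known (5 cm here), scaling `objp` by it makes the translation vectors returned later by `calibrateCamera` and `solvePnP` come out directly in centimetres. A minimal sketch, with `nx` and `ny` as assumed example values:

```python
import numpy as np

nx, ny = 9, 6        # inner corners per row/column -- assumed example values
square_size = 5.0    # tile size in cm, as stated in the question

# Board model in real-world units: (0,0,0), (5,0,0), (10,0,0), ...
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)
objp *= square_size  # each corner coordinate is now in centimetres
```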
You are right, I think solvePnP is what you need here. (You can read more about the Perspective-n-Point problem at https://en.wikipedia.org/wiki/Perspective-n-Point.)
The Python OpenCV solvePnP function takes the following parameters and returns an output rotation and an output translation vector which convert the model coordinate system into the camera coordinate system:
```python
cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, flags]]]]) → retval, rvec, tvec
```
In your case, the image points will be the chessboard corners, so the call looks like:
```python
ret, rvec, tvec = cv2.solvePnP(objp, corners, mtx, dist)
```
With the returned translation vector you can calculate the distance from the camera to the chessboard. The translation output by solvePnP is in the same units as the ones used for objectPoints.
Finally, you can compute the real distance from the tvec as the Euclidean norm:
```python
d = math.sqrt(tx*tx + ty*ty + tz*tz)
```
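Putting the two steps together, here is a minimal sketch (a hypothetical helper, not an OpenCV built-in) that assumes `objp` has already been scaled to real units, e.g. the 5 cm tile size, so the returned distance comes out in those same units:

```python
import cv2
import numpy as np

def camera_to_board_distance(objp, corners, mtx, dist):
    """Distance from the camera centre to the chessboard origin.

    objp      -- (N, 3) board corner model, already scaled to real units (e.g. cm)
    corners   -- (N, 1, 2) refined image corners of the same board view
    mtx, dist -- camera matrix and distortion coefficients from calibrateCamera
    """
    ok, rvec, tvec = cv2.solvePnP(objp, corners, mtx, dist)
    if not ok:
        raise RuntimeError('solvePnP could not estimate the board pose')
    # tvec is the board origin expressed in camera coordinates, so its
    # Euclidean norm is the camera-to-board distance in the units of objp.
    return float(np.linalg.norm(tvec))
```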
Your problem is mainly about camera calibration, and in particular about how poorly camera distortion is corrected in OpenCV. You have to approximate the distortion function of your camera lens by taking several distance probes at different coordinates of the chessboard. A good approach is to start with a short distance near the lens centre, then a square a bit further out, and repeat outwards up to the borders; this gives you the coefficients of the distortion function. Matlab has its own library that solves this problem with very good precision, but unfortunately it is very expensive. Regarding:
Now, can someone tell me which python opencv function do I need to
calculate the distance in real world from the 2D image?
I think the post above explains the relevant set of Python OpenCV functions for producing real-world measurements well, and together with the distortion coefficients I described above you can get good precision. In any case, I do not think there is an open-source, ready-made function implementation along the lines of
```python
cv2.GetRealDistance(...)
```
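Indeed, there is no single OpenCV call for this. One way to combine the functions the question already mentions (undistortPoints, findHomography) is to map image points onto the board plane and measure there. The sketch below is only an illustration under the assumption that the points of interest lie on the chessboard/table plane; the helper name is made up:

```python
import cv2
import numpy as np

def board_plane_distance(p1, p2, corners, objp, mtx, dist):
    """Real-world distance between two image points assumed to lie on the board plane.

    p1, p2    -- (x, y) pixel coordinates of the two points to measure between
    corners   -- detected chessboard corners; objp -- board model in real units (e.g. cm)
    mtx, dist -- intrinsics and distortion coefficients from calibrateCamera
    """
    # Undistort the detected corners and fit a homography: image plane -> board plane
    und = cv2.undistortPoints(corners, mtx, dist, P=mtx).reshape(-1, 2)
    H, _ = cv2.findHomography(und, objp[:, :2])

    # Undistort the two query points and map them through the same homography
    pts = cv2.undistortPoints(np.float32([[p1], [p2]]), mtx, dist, P=mtx)
    mapped = cv2.perspectiveTransform(pts, H).reshape(-1, 2)

    # Euclidean distance in board units (e.g. centimetres)
    return float(np.linalg.norm(mapped[0] - mapped[1]))
```

The accuracy of such a measurement depends directly on how well the distortion coefficients model the lens, which is the point made above about calibrating carefully out to the image borders.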