OpenCV fisheye calibration cuts too much of the resulting image
I am calibrating images taken with a camera that has a fisheye lens, using OpenCV.
The functions I am using are:
- findChessboardCorners(...); to find the corners of the calibration pattern.
- cornerSubPix(...); to refine the found corners.
- fisheye::calibrate(...); to calibrate the camera matrix and the distortion coefficients.
- fisheye::undistortImage(...); to undistort the image using the camera info obtained from calibration.
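For illustration only (this sketch is mine, not from the question): putting those four calls together in Python with the cv2.fisheye module might look roughly like the following, assuming a 6x9 chessboard and calibration images matching calib_*.jpg, both of which are placeholders.

    import glob
    import cv2
    import numpy as np

    # Inner-corner count of the calibration pattern (assumed, adjust to yours).
    CHECKERBOARD = (6, 9)

    # 3D object points of the pattern; cv2.fisheye.calibrate wants shape (1, N, 3).
    objp = np.zeros((1, CHECKERBOARD[0] * CHECKERBOARD[1], 3), np.float32)
    objp[0, :, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)

    objpoints, imgpoints = [], []
    for fname in glob.glob('calib_*.jpg'):      # hypothetical file pattern
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, CHECKERBOARD)
        if not found:
            continue
        # Refine the detected corners to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (3, 3), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6))
        objpoints.append(objp)
        imgpoints.append(corners)

    K = np.zeros((3, 3))
    D = np.zeros((4, 1))
    rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
        objpoints, imgpoints, gray.shape[::-1], K, D,
        flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_FIX_SKEW)

    # Undistort with the calibrated parameters; by default K itself acts as the
    # new camera matrix, which is where the aggressive cropping comes from.
    undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=K)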
While the resulting image does look good (straight lines and so on), my problem is that the function cuts away too much of the image.
This is a real problem, because I am using four cameras mounted 90 degrees apart, and when so much of the sides is cut away there is no overlapping area between them, which I need in order to stitch the images.
I have looked into using …
The images below show my undistortion result, and an example of the kind of result I would ideally like.
Undistorted:
Example of the desired result:
I think I ran into the same kind of problem, looking for the fisheye equivalent of the "alpha" knob in getOptimalNewCameraMatrix.
Original photo:
I calibrated with cv2.fisheye.calibrate and got the K and D parameters:
    K = [[ 329.75951163    0.           422.36510555]
         [   0.           329.84897388  266.45855056]
         [   0.             0.            1.        ]]

    D = [[ 0.04004325]
         [ 0.00112638]
         [ 0.01004722]
         [-0.00593285]]
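For context (my note, not part of the answer): K is the usual pinhole intrinsic matrix (fx, fy, cx, cy) and D holds the four coefficients k1..k4 of OpenCV's fisheye model, which distort the angle theta between an incoming ray and the optical axis. A rough single-point sketch of that projection, ignoring skew (the function name is mine):

    import numpy as np

    def fisheye_project(point_3d, K, D):
        # Project one 3D point given in camera coordinates, following the
        # OpenCV fisheye model (roughly what cv2.fisheye.projectPoints does).
        x, y, z = point_3d
        r = np.hypot(x, y)
        theta = np.arctan2(r, z)                  # angle from the optical axis
        k1, k2, k3, k4 = np.asarray(D).ravel()
        theta_d = theta * (1 + k1*theta**2 + k2*theta**4 + k3*theta**6 + k4*theta**8)
        scale = theta_d / r if r > 1e-12 else 1.0 # avoid dividing by zero on-axis
        xd, yd = x * scale, y * scale             # distorted normalized coordinates
        u = K[0, 0] * xd + K[0, 2]                # fx * xd + cx
        v = K[1, 1] * yd + K[1, 2]                # fy * yd + cy
        return u, v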
This is what I get with:
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K, (800, 600), cv2.CV_16SC2)
    newImg = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
And I think it cuts away too much. I want to see the whole Rubik's cube.
So I used:
    nk = K.copy()
    nk[0,0] = K[0,0] / 2
    nk[1,1] = K[1,1] / 2
    # Just by scaling the matrix coefficients!

    map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), nk, (800, 600), cv2.CV_16SC2)  # Pass K in 1st parameter, nk in 4th parameter
    newImg = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
TADA!
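Why halving fx and fy helps (my explanation, not the answerer's): the remapped output is an ordinary perspective image, and for a fixed output size its field of view is set by the focal entries of the new camera matrix, roughly 2*atan(w / (2*fx)) horizontally. Plugging in the numbers from the K above and the 800x600 output used here:

    import numpy as np

    def rectilinear_hfov_deg(fx, cx, width):
        # Horizontal field of view of a perspective (undistorted) image with
        # focal length fx, principal point cx and the given width, in degrees.
        left = np.arctan(cx / fx)                 # angle covered left of the principal point
        right = np.arctan((width - cx) / fx)      # angle covered to the right of it
        return np.degrees(left + right)

    fx, cx, width = 329.76, 422.37, 800              # values taken from K above
    print(rectilinear_hfov_deg(fx,     cx, width))   # ~101 degrees with the original fx
    print(rectilinear_hfov_deg(fx / 2, cx, width))   # ~135 degrees with fx halved

So scaling the focal entries down packs a wider angular range into the same 800x600 frame, at the price of lower resolution per degree.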
As Paul Bourke explains here:
a fisheye projection is not a "distorted" image, and the process isn't a "dewarping". A fisheye like other projections is one of many ways of mapping a 3D world onto a 2D plane, it is no more or less "distorted" than other projections including a rectangular perspective projection.
To get a projection that does not crop the image (and given that your camera has a field of view of about 180 degrees), you can project the fisheye image onto a square like this:
Source code:
    #include <iostream>
    #include <sstream>
    #include <time.h>
    #include <stdio.h>
    #include <math.h>

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <opencv2/highgui/highgui.hpp>

    // - compile with:
    //     g++ -ggdb `pkg-config --cflags --libs opencv` fist2rect.cpp -o fist2rect
    // - execute:
    //     fist2rect input.jpg output.jpg

    using namespace std;
    using namespace cv;

    #define PI 3.1415926536

    Point2f getInputPoint(int x, int y, int srcwidth, int srcheight)
    {
        Point2f pfish;
        float theta, phi, r, r2;
        Point3f psph;
        float FOV  = (float)PI/180 * 180;
        float FOV2 = (float)PI/180 * 180;
        float width = srcwidth;
        float height = srcheight;

        // Polar angles
        theta = PI * (x / width - 0.5);   // -pi/2 to pi/2
        phi   = PI * (y / height - 0.5);  // -pi/2 to pi/2

        // Vector in 3D space
        psph.x = cos(phi) * sin(theta);
        psph.y = cos(phi) * cos(theta);
        psph.z = sin(phi) * cos(theta);

        // Calculate fisheye angle and radius
        theta = atan2(psph.z, psph.x);
        phi = atan2(sqrt(psph.x*psph.x + psph.z*psph.z), psph.y);
        r  = width  * phi / FOV;
        r2 = height * phi / FOV2;

        // Pixel in fisheye space
        pfish.x = 0.5 * width  + r  * cos(theta);
        pfish.y = 0.5 * height + r2 * sin(theta);
        return pfish;
    }

    int main(int argc, char **argv)
    {
        if(argc < 3)
            return 0;
        Mat orignalImage = imread(argv[1]);
        if(orignalImage.empty())
        {
            cout << "Empty image\n";
            return 0;
        }
        Mat outImage(orignalImage.rows, orignalImage.cols, CV_8UC3);

        namedWindow("result", CV_WINDOW_NORMAL);

        for(int i = 0; i < outImage.cols; i++)
        {
            for(int j = 0; j < outImage.rows; j++)
            {
                Point2f inP = getInputPoint(i, j, orignalImage.cols, orignalImage.rows);
                Point inP2((int)inP.x, (int)inP.y);

                if(inP2.x >= orignalImage.cols || inP2.y >= orignalImage.rows)
                    continue;
                if(inP2.x < 0 || inP2.y < 0)
                    continue;

                Vec3b color = orignalImage.at<Vec3b>(inP2);
                outImage.at<Vec3b>(Point(i, j)) = color;
            }
        }
        imwrite(argv[2], outImage);
    }
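A possible Python counterpart to the per-pixel loop above (my port of the same mapping, not tested against the C++; it builds the two coordinate maps with NumPy and lets cv2.remap do the sampling):

    import cv2
    import numpy as np

    def fisheye_to_square(img, fov_deg=180.0):
        # For every output pixel, compute the corresponding source pixel in the
        # fisheye image, exactly as getInputPoint() does, then remap in one call.
        h, w = img.shape[:2]
        fov = np.radians(fov_deg)

        x, y = np.meshgrid(np.arange(w, dtype=np.float32),
                           np.arange(h, dtype=np.float32))
        theta = np.pi * (x / w - 0.5)             # polar angles, -pi/2 to pi/2
        phi = np.pi * (y / h - 0.5)

        # Direction vector in 3D space.
        px = np.cos(phi) * np.sin(theta)
        py = np.cos(phi) * np.cos(theta)
        pz = np.sin(phi) * np.cos(theta)

        # Fisheye angle and radius.
        theta2 = np.arctan2(pz, px)
        phi2 = np.arctan2(np.sqrt(px * px + pz * pz), py)
        r = w * phi2 / fov
        r2 = h * phi2 / fov

        # Source pixel in the fisheye image.
        map_x = (0.5 * w + r * np.cos(theta2)).astype(np.float32)
        map_y = (0.5 * h + r2 * np.sin(theta2)).astype(np.float32)
        return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                         borderMode=cv2.BORDER_CONSTANT)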
You need to use cv2.fisheye.estimateNewCameraMatrixForUndistortRectify to compute a new camera matrix for the undistortion, controlled by the balance parameter:

    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, D, dim, np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), new_K, dim, cv2.CV_32FC1)
    # and then remap:
    undistorted_img = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
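To see what the balance parameter trades off (a sketch of mine, reusing the K and D and the 800x600 size from earlier in the thread; the output filenames are made up): roughly, balance=0.0 gives the tight crop, while balance=1.0 keeps all of the source pixels visible and fills the rest with the border colour.

    dim = (800, 600)
    for balance in (0.0, 0.5, 1.0):
        new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
            K, D, dim, np.eye(3), balance=balance)
        map1, map2 = cv2.fisheye.initUndistortRectifyMap(
            K, D, np.eye(3), new_K, dim, cv2.CV_32FC1)
        out = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR,
                        borderMode=cv2.BORDER_CONSTANT)
        cv2.imwrite('undistorted_balance_%.1f.jpg' % balance, out)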
I was stuck with the same problem. And if the camera's field of view is around 180 degrees, I don't think you will be able to undistort 100% of the initial image surface. I have put a more detailed explanation here.
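A back-of-the-envelope way to see that limit (my addition): a perspective undistortion places a ray that is theta degrees off the optical axis at radius r = f*tan(theta) from the principal point, and that radius blows up as theta approaches 90 degrees, so rays near the rim of a ~180-degree fisheye can never land inside a finite perspective image.

    import numpy as np

    f = 329.76                                    # focal length taken from K above
    for theta_deg in (45, 60, 75, 85, 89, 89.9):
        r = f * np.tan(np.radians(theta_deg))     # pixel radius in the undistorted image
        print('theta = %5.1f deg -> r = %10.1f px' % (theta_deg, r))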
It all works fine, you just have to use …