Screen Coordinates to World Coordinates
I want to convert the screen coordinates of a mouse click into world coordinates.
Here is my code:
```cpp
static void mouse_callback(GLFWwindow* window, int button, int action, int mods) {
    if (button == GLFW_MOUSE_BUTTON_LEFT) {
        if (GLFW_PRESS == action) {
            int height = 768, width = 1024;
            double xpos, ypos, zpos;
            glfwGetCursorPos(window, &xpos, &ypos);
            glReadPixels(xpos, ypos, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &zpos);

            glm::mat4 m_projection = glm::perspective(glm::radians(45.0f), (float)(1024 / 768), 0.1f, 1000.0f);
            glm::vec3 win(xpos, height - ypos, zpos);
            glm::vec4 viewport(0.0f, 0.0f, (float)width, (float)height);
            glm::vec3 world = glm::unProject(win, mesh.getView() * mesh.getTransform(), m_projection, viewport);

            std::cout << "screen " << xpos << " " << ypos << " " << zpos << std::endl;
            std::cout << "world " << world.x << " " << world.y << " " << world.z << std::endl;
        }
    }
}
```
Now I have two questions. The first is that the world coordinates I get back from glm::unProject do not seem to be correct.
The second question is that, as stated in the glm docs (https://glm.g-truc.net/0.9.8/api/a00169.html#ga82a558de3ce42cbeed0f6ec292a4e1b3), the result is returned in object coordinates. So in order to convert screen coordinates to world coordinates, I should use the transform matrix of one mesh. But what happens if I have many meshes and want to convert from screen coordinates to world coordinates? What model matrix should I multiply by the camera view matrix to form the ModelView matrix?
This sequence has several problems:
```cpp
glfwGetCursorPos(window, &xpos, &ypos);
glReadPixels(xpos, ypos, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &zpos);
[...]
glm::vec3 win(xpos, height - ypos, zpos);
```
Window space origin. OpenGL's window space has its origin at the bottom left, while GLFW reports the cursor position relative to the top left of the window. You flip y for glm::unProject, but you call glReadPixels with the unflipped position, so you read the depth value from the wrong pixel row.

Furthermore, your flip is wrong. Since pixel rows run from 0 to height - 1, the flip should be height - 1 - ypos, not height - ypos.
"Screen coordinates" vs. pixel coordinates. Your code assumes that the coordinates GLFW returns are in pixels. That is not the case. GLFW uses the concept of "virtual screen coordinates", which do not necessarily map to pixels:
Pixels and screen coordinates may map 1:1 on your machine, but they
won't on every other machine, for example on a Mac with a Retina
display. The ratio between screen coordinates and pixels may also
change at run-time depending on which monitor the window is currently
considered to be on.
GLFW therefore typically provides two different sizes for a window: glfwGetWindowSize returns the size in virtual screen coordinates, while glfwGetFramebufferSize returns the size in pixels, and you have to scale the cursor position by the ratio of the two.
Sub-pixel positions. glfwGetCursorPos returns doubles, so the cursor position may lie between pixels. While OpenGL's window space is continuous, glReadPixels addresses whole pixels, and the center of a pixel lies at half-integer window coordinates.
To put it all together, you can (conceptually) do:
```cpp
glfwGetWindowSize(win, &screen_w, &screen_h);     // better use the callback and cache the values
glfwGetFramebufferSize(win, &pixel_w, &pixel_h);  // better use the callback and cache the values

glfwGetCursorPos(window, &xpos, &ypos);
glm::vec2 screen_pos = glm::vec2(xpos, ypos);
glm::vec2 pixel_pos = screen_pos * glm::vec2(pixel_w, pixel_h) / glm::vec2(screen_w, screen_h); // note: not necessarily integer
pixel_pos = pixel_pos + glm::vec2(0.5f, 0.5f); // shift to GL's center convention
glm::vec3 win = glm::vec3(pixel_pos.x, pixel_h - 1 - pixel_pos.y, 0.0f);
glReadPixels((GLint)win.x, (GLint)win.y, ..., &win.z);
// ... unproject win
```
what model matrix should I multiply by camera view matrix to form ModelView matrix?
None at all. The basic coordinate transformation pipeline is:
```
object space -> {MODEL} -> World Space -> {VIEW} -> Eye Space -> {PROJ} -> Clip Space -> {perspective divide} -> NDC -> {Viewport/DepthRange} -> Window Space
```
No model matrix influences the way from world space to window space, so the inverse of that mapping does not depend on any model matrix either.
that as said in the glm docs (https://glm.g-truc.net/0.9.8/api/a00169.html#ga82a558de3ce42cbeed0f6ec292a4e1b3) the result is returned in object coordinates.
The math does not care which spaces you transform between. The documentation mentions object space, and the function uses a parameter named model, but which space the result lies in is determined solely by the matrices you actually pass in.
So in order to convert screen to world coordinates I should use a transform matrix from one mesh.
You could even do that. You can use any model matrix of any object, as long as the matrix is not singular, and as long as you use the same matrix for the unproject that you later use for going from that object space back to world space. You could even make up a random matrix, as long as you make sure it is regular (numerical problems might arise if the matrix is badly conditioned). The key point here is: when you specify (V*M) and P as the matrices for the unproject, it applies the inverse of the whole chain, M^-1 * V^-1 * P^-1, to the NDC position, which ends in object space. If you specify only V and P, the M^-1 step is gone and the result is directly in world space.
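In matrix terms (with P, V, M for the projection, view, and model matrices, and x_ndc for the window position mapped back to normalized device coordinates), the algebra behind this is:

```latex
(P V M)^{-1}\, x_{\mathrm{ndc}} = M^{-1} V^{-1} P^{-1}\, x_{\mathrm{ndc}}
  \quad \text{(lands in object space)}

(P V)^{-1}\, x_{\mathrm{ndc}} = V^{-1} P^{-1}\, x_{\mathrm{ndc}}
  \quad \text{(stops at world space)}
```

So passing only the view matrix as glm::unProject's model argument drops the M^-1 factor and yields world coordinates regardless of how many meshes the scene contains.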