Alain2131 Posted July 14, 2023

Hello!

I want to be able to manually compute what the Frame shortcut does in the viewport, in VEX.

I have this in Python:

node = hou.node("model")
geo = node.geometry()
bbox = geo.boundingBox()

sceneViewer = hou.ui.paneTabOfType(hou.paneTabType.SceneViewer)
viewport = sceneViewer.curViewport()
viewport.frameBoundingBox(bbox)

This frames the viewport on a specific bounding box, but I want to calculate the framing for multiple view angles at once. With the Python solution I would have to actually frame the viewport multiple times and record the resulting camera position each time.

Attached is a scene with the tests I made after searching online: manual camera framing exploration.hiplc

The main part of the scene is the code below (see the link in the code for the reference I used). It is a detail wrangle: input 1 is the geometry you want to frame (made into a convex hull), input 2 is another wrangle providing the projection matrix, and input 3 is yet another wrangle providing the camera parameters (see the scene for their contents).

// https://gamedev.stackexchange.com/questions/136396/how-do-you-make-a-camera-look-at-a-box-and-ensure-all-of-it-is-visible

// Function inputs
matrix proj_view        = 4@opinput2_proj_view;
float  aspect           = f@opinput3_aspect;
float  FOV              = f@opinput3_FOV;
vector camera_direction = v@opinput3_direction;

// Get the input points into camera space (clip space?)
vector projected_points[];
for(int i=0; i<npoints(1); i++)
{
    vector P = point(1, "P", i);
    // This doesn't make sense to me, I think it should be invert(proj_view) instead
    P = (proj_view) * P;
    //addpoint(0, P); // visualize points
    append(projected_points, P);
}

// Bounding box/rectangle/lines thing:
// find the widest pair of points in X and the tallest pair in Y
vector A1, B1, A2, B2;
float width = 0, height = 0;
for(int i=0; i<npoints(1)-1; i++)
{
    for(int j=i; j<npoints(1); j++)
    {
        float new_width  = abs(projected_points[i].x - projected_points[j].x);
        float new_height = abs(projected_points[i].y - projected_points[j].y);
        if(new_width > width)
        {
            A1 = projected_points[i];
            B1 = projected_points[j];
            width = new_width;
        }
        if(new_height > height)
        {
            A2 = projected_points[i];
            B2 = projected_points[j];
            height = new_height;
        }
    }
}

// Get the relevant diagonal and distance
vector A, B;
float dist = 0;
if(height * aspect > width)
{
    dist = height / 2 / tan(FOV / 2);
    // I feel like A1/B1 and A2/B2 are reversed here
    A = A1;
    B = B1;
}
else
{
    dist = width / 2 / tan(FOV * aspect / 2);
    // I feel like A1/B1 and A2/B2 are reversed here
    A = A2;
    B = B2;
}

// I don't understand the logic of offsetting from the camera-space center.
vector center = (A + B) / 2;
vector camera_pos = center - camera_direction * dist;

addpoint(0, camera_pos);

I like this method because it takes into account the camera FOV, aspect ratio, aperture, etc. But it does not give a camera position that makes sense, and there are a few parts of the code that don't make much sense to me (I copied the logic from the link above). The scene also contains a version where the code is split into three nodes to see the result of each step in the viewport.

If you have another solution, or can help me with this code, I would appreciate it! I feel like there might be something to do with toNDC()/fromNDC().

Thanks for reading!
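For reference, here is a minimal sketch of the toNDC()/fromNDC() idea mentioned above; it is not code from the attached scene. It assumes a detail wrangle with the points to frame on input 1 and a reference camera at the hypothetical path /obj/cam1. toNDC() maps a world-space position into that camera's normalized screen space, where x and y run from 0 to 1 across the view, so the span of the projected points tells you how much of the frame the geometry currently covers.

// Minimal sketch, not the scene's code. Assumes the points to frame are on
// input 1 and the reference camera lives at /obj/cam1 (hypothetical path).
string cam = "/obj/cam1";

vector ndc_min = {1e9, 1e9, 1e9};
vector ndc_max = {-1e9, -1e9, -1e9};
for(int i=0; i<npoints(1); i++)
{
    // toNDC() puts the point into the camera's normalized screen space,
    // where x and y run 0..1 across the visible frame.
    vector ndc = toNDC(cam, point(1, "P", i));
    ndc_min = min(ndc_min, ndc);
    ndc_max = max(ndc_max, ndc);
}

// Fraction of the frame the geometry covers in each axis; 1 means it exactly
// fills the view, anything above 1 spills outside the frustum.
f@coverage_x = ndc_max.x - ndc_min.x;
f@coverage_y = ndc_max.y - ndc_min.y;

This does not give a camera position by itself, but it is a convenient way to check whether a candidate position frames the geometry, independent of the projection-matrix questions above.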
Alain2131 Posted July 14, 2023

Here's an explanation of the current test.

Projected points

I think this makes more sense: the first screenshot shows the templated geometry in some weird shape and location, while in the second one it sits neatly around the origin. By inverting the proj_view matrix, the geometry goes into the -1 to 1 range, based on what is visible in the camera frustum. Notice how the tail is cut off in the first screenshot, which shows up in the second one as the tail going beyond 1 in X. That is why doing invert(proj_view) makes more sense to me; this looks a whole lot more like a camera-space representation of the geometry.

Bounding box/rectangle/diagonal thingy

This simply takes the two furthest points in X, and the two furthest points in Y.

The resulting camera position

In red is the reference camera; the only thing that should be retained from it is its orientation. Point 0 is the resulting camera. Hand-drawn in yellow are the reference camera's frustum and what the frustum would be at the resulting camera position. Notice how the resulting position is "looking away" from the templated geometry; the point should be on the same side as the camera in this case. (Note that "being on the same side" means nothing in general, since the camera could just as well sit on the opposite side looking away from the geometry. The point I want to drive home is that as long as the orientation is the same, the calculated position should be the same, no matter where the red camera is located.)
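On the point that only the reference camera's orientation should matter, here is a small sketch of one way to isolate the viewing direction from the camera transform while discarding its translation. The attribute name 4@opinput3_camera_xform is an assumption for illustration; the input-3 wrangle in the attached scene may already export something equivalent as v@opinput3_direction.

// Sketch only; 4@opinput3_camera_xform is an assumed attribute holding the
// reference camera's world transform, not a name taken from the scene file.
matrix cam_xform = 4@opinput3_camera_xform;

// Transform the camera-space origin and a point one unit down -Z (a Houdini
// camera looks along its local -Z axis), then subtract, so the translation
// cancels out and only the rotation is left in the result.
vector origin_ws  = set(0, 0,  0) * cam_xform;
vector forward_ws = set(0, 0, -1) * cam_xform;
vector camera_direction = normalize(forward_ws - origin_ws);

Any camera with the same orientation produces the same camera_direction, so a framed position computed from it is independent of where the red reference camera happens to sit.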
Alain2131 Posted July 19, 2023

I have made progress after sleeping on this.

I applied invert(camera_transform) to the geometry, which gave me the geometry in camera space. From there, I placed a point away from the geometry's center in the Z direction, at an arbitrary distance. I then re-applied the original camera_transform to that position, and it is almost working properly!

The only issue is the arbitrary distance, which needs to be determined automatically based on the geometry's size. I tried to compute it with this code from the same link:

// https://gamedev.stackexchange.com/questions/136396/how-do-you-make-a-camera-look-at-a-box-and-ensure-all-of-it-is-visible
vector size   = getbbox_size(1);
float  height = size.y;
float  width  = size.x;

float aspect = f@opinput2_aspect;
float FOV    = f@opinput2_FOV;

// Important code is here
if(height * aspect > width)
{
    f@dist = height / 2 / tan(FOV / 2);
}
else
{
    f@dist = width / 2 / tan(FOV * aspect / 2);
}

But either this code is incorrect in my situation, and/or the aspect and/or FOV I calculate below are wrong.

// https://www.sidefx.com/docs/houdini/ref/cameralenses.html
string camera = chs("camera");

int   resx  = chi(camera + "/resx");
int   resy  = chi(camera + "/resy");
float asp   = chf(camera + "/aspect");
float focal = chf(camera + "/focal");
float apx   = chf(camera + "/aperture");

float apy  = (resy*apx) / (resx*asp);
float fovx = 2 * atan( (apx/2) / focal );
float fovy = 2 * atan( (apy/2) / focal );

f@FOV    = fovy;
f@aspect = asp;

Here's an overview of the current result (screenshot) with the auto-distance code from above, along with the updated scene file: manual camera framing exploration.hiplc

The question has now changed a little bit: how can I compute the relevant distance, taking into consideration the various camera properties such as aspect ratio, focal length and aperture?

But honestly, if you have any other solution, I'm open to your ideas!

Thanks for reading!
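Not an answer taken from the scene file, but one conservative way to get a distance that sidesteps the silhouette logic above: fit a bounding sphere around the geometry and back the camera off until the sphere fits inside the narrower of the two fields of view. It over-frames slightly compared to an exact fit, but it only needs the horizontal and vertical FOV. The sketch below assumes a detail wrangle with the geometry on input 1; f@fovx and f@fovy are assumed attribute names holding the fovx/fovy values computed in the snippet above (in radians), and 4@camera_xform is an assumed attribute holding the reference camera's world transform.

// Sketch, not the scene's code. Assumed inputs:
//   input 1          - geometry to frame
//   f@fovx, f@fovy   - horizontal and vertical FOV in radians (assumed names)
//   4@camera_xform   - reference camera world transform (assumed name)
matrix cam_xform = 4@camera_xform;

// Bounding sphere around the geometry; conservative but orientation-independent.
vector bmin, bmax;
getbbox(1, bmin, bmax);
vector center = (bmin + bmax) * 0.5;
float  radius = length(bmax - bmin) * 0.5;

// A sphere of radius r fits inside a cone of half-angle a when dist = r / sin(a).
// Using the smaller of the two half-angles keeps it inside both frustum wedges.
float half_fov = min(f@fovx, f@fovy) * 0.5;
float dist     = radius / sin(half_fov);

// Work in camera space, as described in the post above: bring the center into
// the camera's frame, step back along +Z (the camera looks down -Z), then
// transform the result back to world space.
vector center_cam = center * invert(cam_xform);
vector cam_pos    = (center_cam + set(0, 0, dist)) * cam_xform;

addpoint(0, cam_pos);

For an exact fit to the silhouette rather than the sphere, the per-point toNDC() coverage idea from the first post could be used to refine dist afterwards.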