Showing results for tags 'photogrammetry'.
Hello, I'm a student just picking up Houdini for my final year of study. I want to create custom UVs for my models coming out of Reality Capture so the texture maps aren't as messy. However, there are two issues I'm having, and even after looking online and trying the solutions I found, I got no results sadly.

1) My first issue is with making custom UVs via PolyReduce and Auto UV, then transferring those new UVs onto the old high-res model. When I import into Reality Capture there appear to be missing polygons along the UV seams (shown in Fig.1), and quite a reduction in poly count. https://imgur.com/i7OFlYu

2) I'm not sure if this is related to Houdini or Reality Capture, but the textures look a little off as well (apart from the very obvious texture stretch on the far right in Fig.2; I know that's a UV unwrapping problem). It looks like the texture baked onto these new UVs is picking up faceted edge normals, even though I added a Normal node to smooth out the edges. https://imgur.com/CSgRBU7

Fig.3 shows the texture straight out of Reality Capture; compared to Fig.2 it looks a lot better. https://imgur.com/bbTDbSx

Let me know if the attachments worked, not sure if they did. Any help will be greatly appreciated.

Custom_UVs.hip
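For reference, the transfer step I'm doing is essentially a nearest-point copy of UVs from the reduced model back onto the high-res one. Here's that idea as a toy Python sketch (made-up point/UV data, just to show the logic; in Houdini the Attribute Transfer SOP does this for real):

```python
# Toy sketch of transferring UVs from a low-poly model to a high-poly one
# by nearest-point lookup. The point and UV tuples below are made-up
# stand-ins for real geometry, not actual Reality Capture data.

def transfer_uvs(high_points, low_points, low_uvs):
    """For each high-res point, copy the UV of the closest low-res point."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    out = []
    for p in high_points:
        nearest = min(range(len(low_points)), key=lambda i: dist2(p, low_points[i]))
        out.append(low_uvs[nearest])
    return out

low_pts  = [(0, 0, 0), (1, 0, 0)]
low_uvs  = [(0.0, 0.0), (1.0, 0.0)]
high_pts = [(0.1, 0, 0), (0.9, 0, 0), (0.4, 0, 0)]
print(transfer_uvs(high_pts, low_pts, low_uvs))
# → [(0.0, 0.0), (1.0, 0.0), (0.0, 0.0)]
```

One thing I suspect matters for the seam problem: if `uv` ends up as a point attribute, each point along a seam can only carry one UV value, so seams can't be represented cleanly; promoting `uv` to a vertex attribute is the usual way to handle that in Houdini.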
Hey guys, I thought this topic is probably too large and specialized for the WIP thread, so it fits in here better. This thread is about our attempt to design a well-rounded lighting and shading pipeline for our animated full-CG diploma movie with the working title "Helga" (a task that is coming up very soon). For more info about it, please have a look at the WIP thread.

Transitioning from Maya/Vray/Arnold/MentalRay
Additionally, this thread might become a good resource for people who are very familiar with other rendering pipelines (like the classic Maya/VRay pipe) and want to try something different, but struggle to find equivalents to some of their old tried-and-true workflows. That is basically a huge motivation for this thread as well.

1. Shading Assets
What we plan to do is deploy our shaders as shading HDAs. They are planned to be subnets that contain all the materials needed. You create them in a separate file and everybody uses them in their shots (automatic update for HDAs enabled). Assignment to the geometry is done as a "local edit" on the geo per shot, instead of having an Object Merge node inside the asset and assigning materials internally. We had our shading assets with Object Merge nodes and internal material assignment so far, but found that unpleasant once light linking and takes are added to the mix. What is the common practice for shading assets/HDAs in Houdini? The goal is to not have takes (renderlayers) for different AOV passes, only for different elements (like characters, foreground, midground etc.). An issue we have with the new setup is that Houdini often complains about the "F" variable of the Surface Model (image attached at the bottom).

2. Global standardized additional AOVs
I'm not talking about light passes here, since those are standardized.
I'm thinking about ambient occlusion, world position, and wide/small Fresnel passes that can be turned on/off globally for all Helga shaders, just like you would turn Render Elements on/off in VRay, for example. I understand that Houdini gives us the freedom to handle this ourselves, so how would we do it? Currently I envision a "standard_additional_AOVs" HDA embedded in our shading HDAs that holds all the parameters and exports. Additionally, a little Python UI would parse your scene for HDAs of this type and allow you to toggle/adjust them globally. You can then add them as image planes on your Mantra node (or maybe this happens automatically through the Python UI on toggle). Nested HDAs update recursively, right? So when you have a newer version of an HDA embedded in another HDA, it will update!?

3. Decoupling Displacement and Shading
Can we separate the displacement from the shader nodes (but keep it at render time), just like VRay displacement sets do in Maya? The reason is that we have displacement on almost every object (at least that's how it is at the moment). It would be nice to be able to do global material overrides for the whole scene without losing the displacement.

4. Blend materials
What is the common practice for blend materials in Houdini?

5. Alembic / packed primitives / Houdini geometry / crazy high-poly geo
How do you tend to bring in your Alembic files? As far as I understand the docs, when you bring Alembic geo into H13 (which we are using) as an Alembic Delayed Load Archive, it is super efficient. As we can solve our props at really high poly counts with PhotoScan to capture tiny surface detail, this is definitely an option if it doesn't blow up our workflow. So in general, is bringing Alembics into Houdini as delayed-load archives the same as having VRay or Arnold proxy nodes in your scene and loading high-poly geo at render time? Can it handle the same poly counts? (10 million upwards per asset, approx. x 30 assets...)
Will we put crazy traffic on the render farm network when we do this, since all the geo data needs to be loaded at render time? The alternative, as we do it right now, is to have moderate poly counts on all geometries and displace them with the luminance of the color texture.

6. Houdini LUT format issues
As we manage color through our entire pipeline with OCIO, we wanted to try using the Sony Pictures sRGB LUT for the show. (It has a slightly more contrasty curve than sRGB, which is meant to account for the dynamic-range loss of displays like monitors vs. reality.) However, I couldn't bake out a LUT in a format that Houdini understands (.lut and .itx) that works; somehow the highlights are always clipped. The LUT gives the same transformation in MPlay, the Houdini Render View and Nuke. When I bake the same transform out as .csp (Cinespace LUT), for example, it looks right in Nuke... but Houdini can't read it. That's a minor point though; standard sRGB will be equally fine.

Phew, that was a lot. Thanks for your attention, and I'm curious about your thoughts.
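To make the clipping in point 6 concrete: my guess is that a baked 1-D LUT only covers a fixed input domain (usually [0,1]), so any scene-linear value above 1.0 gets clamped to the last table entry. A tiny self-contained illustration, using a plain 2.2 gamma as a stand-in curve (not the actual Sony Pictures transform):

```python
# Why a baked 1-D LUT can clip highlights: the table only covers a fixed
# input domain, so everything above 1.0 collapses onto the last entry.
# The curve here is a plain gamma stand-in, not the real show LUT.

def bake_lut(curve, size=1024):
    """Sample a transfer curve into a 1-D table over [0, 1]."""
    return [curve(i / (size - 1)) for i in range(size)]

def apply_lut(lut, x):
    """Look up x in the table, clamping the input to the baked domain."""
    x = max(0.0, min(1.0, x))          # <-- this clamp is the clipping
    idx = round(x * (len(lut) - 1))
    return lut[idx]

lut = bake_lut(lambda v: v ** (1 / 2.2))
print(apply_lut(lut, 0.5))   # mid-grey comes through the curve as expected
print(apply_lut(lut, 4.0))   # any highlight above 1.0 maps to lut[-1] = 1.0
```

Formats that carry a pre-LUT shaper or an explicit input domain (like .csp) can avoid this, which would line up with the .csp version looking right in Nuke.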
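Coming back to point 2, here is roughly the parse-and-toggle logic I have in mind for the Python UI, as a stand-in sketch over made-up shader records (plain Python so it runs anywhere; inside Houdini you would instead iterate over the real HDA instances, e.g. via the node type's `instances()`, and set parameters with `Parm.set()`):

```python
# Stand-in sketch of the "toggle AOVs globally" idea from point 2.
# The scene list and the HDA type name below are hypothetical; in Houdini
# the real work would be done on actual node instances and their parms.

AOV_HDA_TYPE = "helga::standard_additional_AOVs"   # hypothetical type name

scene = [
    {"type": AOV_HDA_TYPE,    "name": "skin_aovs",  "parms": {"enable_ao": 0, "enable_Pworld": 0}},
    {"type": "some_other_hda", "name": "noise",     "parms": {}},
    {"type": AOV_HDA_TYPE,    "name": "cloth_aovs", "parms": {"enable_ao": 0, "enable_Pworld": 1}},
]

def find_aov_nodes(scene):
    """Parse the scene for all instances of the AOV HDA type."""
    return [n for n in scene if n["type"] == AOV_HDA_TYPE]

def toggle_globally(scene, parm, value):
    """Set one AOV parameter on every instance at once."""
    for node in find_aov_nodes(scene):
        node["parms"][parm] = value

toggle_globally(scene, "enable_ao", 1)
print([n["parms"]["enable_ao"] for n in find_aov_nodes(scene)])  # → [1, 1]
```

The UI would then just expose one checkbox per AOV parameter and call something like `toggle_globally` on change, and optionally add the matching image planes on the Mantra node in the same step.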