Showing results for tags 'Mantra'.

Found 237 results

  1. Mantra UV Pass

    Is there a way to add a UV AOV in a shader? I've tried binding uv to an output, but the UV pass comes out black. Any ideas? Thanks. untitled2.hipnc
  2. Hello guys! Can I ask you a question? I'm very new to Houdini. Can you help me please? I'm trying to render a short video of a volume (smoke) in Mantra, but I really don't know how to set the Mantra settings, because one frame takes Mantra 43 minutes, which is a ridiculously long time for 250 frames. Can you help me with the settings? I already increased the volume step size, but that's all I did (I read somewhere it's helpful). Is there any other way to make the render time per frame shorter? Thanks a lot for any advice! I'm looking forward to your replies! Bye! advice.hip
  3. Houdini Rendering Issue

    Hello, I need advice/solutions regarding my current pyro render in H17. I am doing a volcano plume with some billowy smoke; I have a pyro shader on the volume, which is brought in through an Object Merge for my Mantra render. I also have a directional light and an environment light with a 2k IBL. The sampling on my environment light is at 10 and my Mantra pixel samples are at 12x12, yet I can't seem to get rid of this flickering coming from my environment light. I have looked at all sorts of settings and even changed my HDR to a .rat, yet it still doesn't seem to remove the flicker. My camera is set at 1080 and does a slow pan over 300 frames. Would anyone be able to advise what this may be? Thanks, K
  4. Hi everyone, I am currently struggling with wedges in Houdini 17.0. My setup is the following: my intention is to run a wedge on a pyro sim, cache it out with the Geometry ROP (which works), and then render each of the cached wedge sequences out to an .exr sequence with the Mantra ROP. In whichever way I connect them though, I can't seem to find a way to make it work - it only works with the geo caches. I hope some of you can enlighten me a bit on the matter. Thanks!
  5. Hi Houdinists! I am trying to apply a Saturn ring texture in Mantra, but I failed to reproduce the texture mapping in the attachment (done in Blender). Could someone help? I tried to rotate the UVs by an angle of 90° on each face, but I don't know how to achieve this. Hip file with the Saturn ring texture attached. Thanks, Alex ring.hipnc
  6. Hey guys, I'm rendering a pyro sim of around 1-3 million voxels, using an environment light and a volume light emitting from the pyro shader. I'm rendering with PBR at pixel samples 3x3, volume limit 1, noise level 0.1, volume quality 0.1, volume shadow quality 0.5 and stochastic samples 8. I'm not even using motion blur, yet one frame takes almost 1.5 hours. Any ideas how to improve the performance - am I doing some no-nos that I've missed? The smoke is fairly transparent and the light from the fire is strong.
  7. [I think I'm probably more than a few days from understanding and being able to write the necessary recursive function, but I thought I'd tap the brains here to save time (or see if I'm on the wrong track and it's easier than I assume).] I want to build a list of render nodes that includes the nodes they depend on (see image). Something like: (node name, nodes it immediately depends on) (r, as & b), (as, cs), (b, ds & es), (cs, 0), (ds, 0), (es, 0) --- I know I can run hou.node("out/r").inputDependencies() to get the full list, but that doesn't give me the dependencies of each node in turn. I figure, like I said above, that I'll probably need to write a recursive function to spit these out - something that runs over each subsequent list of dependencies until there are none, then returns the list (see the sketch after this list). Somehow. But maybe not? I imagine there might be an internal-to-Houdini way to do this, no?
  8. I am using an IFD workflow, and I want to render an identical scene from 3+ different camera positions. I've come across several different ways of rendering these 3+ images sequentially, but that's not exactly what I want. I would like to make 1 IFD file that renders 3+ images from different camera positions. Is there a way to do that? I am working with a large dataset. For easy math, let's say that it takes 10 minutes to read/load, and then 10 minutes to render. Rather than doing three sets of load+render + load+render + load+render (60 minutes), I'd like to be able to do load+render+render+render (40 minutes). I have found that I can use the "Stereo Cam Template" (even though I am not rendering in stereo) to work with two of these cameras to do load+render+render in one IFD file. But how can I do this with more cameras?
  9. Hi guys, I had a problem while compositing my project. For some reason there is a transparent strip around my object in the pyro sim that does not look good. If I render each object separately, I get this empty space (screenshot1 and screenshot3); if the smoke and the object are in one Mantra render, then everything is fine (screenshot2). But I need to have them separately for compositing. Here are screenshots of my render settings (object - spaceship, and PyroFX - smoke). I would be happy for any advice; I see that there are a lot of cool guys here who know Houdini like the back of their hand. Please forgive my English.
  10. Render specific frames

    I am trying to render out specific frames from my timeline. Can this be automated and saved out to disk? I would like to have control over exactly which frames, or control via a pattern, for example rendering every 10th frame (see the sketch after this list).
  11. How would you render the furthest ray-hit surface? I'm not even sure where to start. Well, that's not entirely true. I have an altered shader using IsFrontFace to "turn off" the camera-facing polygon. The problem is that this only gives me the next surface behind the first, which is great if one is rendering a simple convex shape. But for a more complicated concave shape, like a wildly rippling blob of liquid, this isn't very useful for comp. notBackHelp.hip
  12. Hi guys, I'm not quite sure if I'm doing something wrong here, but starting with Houdini 17, whenever I'm using the surfaceexport node to connect e.g. a classiccore, I'm getting this warning: ‘Warning: /opt/hfs17.0/houdini/vex/include/physicalsss.h: Using uninitialized variable: nmax (247,26:29)’ - using computelighting doesn't yield that message… (tested under Windows and Linux). Anyone else experiencing this? cheers, flo
  13. Sulaiman - Compositing/Houdini FX

    Hi everyone! I am a VFX artist who recently graduated from 3dsense Media School and specialises in a wide range of skillsets. Below is a Vimeo link to my showreel as well as my submission to The Rookies! Do feel free to drop me a message, thank you! Contact me and check my channels: sulaimanwar18@gmail.com https://linkedin.com/in/sulaimanwar/ https://www.artstation.com/sulaimanwar =================================== https://vimeo.com/325603170 https://www.therookies.co/entries/427 FinalProject10.mov
  14. I am working on a big destruction job and I'm trying to optimise our workflow as much as possible. My usual workflow: - cache a single frame of my fractured geo as packed fragments - import the RBD sim and create a point per RBD piece (DOP Import). Lighting would then import these two caches and use Transform Pieces. This works well since we only cache a single frame of the fractured geo, and the cache for the RBD points is very small. However, the geo is still written into the IFD file every frame, which can be quite large. So, is it possible to specify the geo needed for the whole sequence and then have Mantra transform the geo at render time (the same as Transform Pieces)? This would make the IFDs tiny. An alternative method is to write each fractured piece out as a separate file, then copy empty packed disk primitives to the RBD points and use the unexpandedfilename intrinsic to expand it at render time (see the sketch after this list). This makes tiny and fast IFD files, which is great, but it seems quite slow to render - probably because it has to pull so many files from disk (1000s of pieces). Is it possible to do the render-time Transform Pieces approach, or does anybody have a better method? (The two I've mentioned are fine, I'm just trying to optimise!)
  15. Does anyone use mantra as the sole renderer for production? Let me rephrase: If I choose to use Mantra for production, am I missing something? am I limited? In which situation do you think I shouldn't be using Mantra? Obviously I didn't mention another renderer to avoid the A vs B. There is just A, Mantra.
  16. Hi, I started getting a black screen in my render view. Looking into where the issue could be coming from, I found suggestions online that I tried: - restarting Houdini - setting up new scenes - switching from ray tracing to PBR. None of them worked. Then I realized that if I start rendering and simultaneously try to open a new file and then cancel the file opening, Mantra starts rendering... Anyone have a hint on what I'm probably doing wrong? Thanks. Houdini FX Version 16.0.671, Win10, NVIDIA GeForce GTX 970 3 GB
  17. Hello! I'm working on a custom toon shader. As a basis I want to use the Lambert lighting model as described in the tutorial embedded below. To test this I created a very simple scene with a sphere placed at the origin, a point light, and a material built with a Material Builder applied to the sphere (see image below). Now, here's the problem: the shading of the sphere is incorrect as soon as the point light has negative coordinates. It seems as if the surface global's P and N are expressed in a different space compared to the point light's position. This happened with Mantra as well as Redshift (where I used the same setup as in the video below, apart from getting the light's position through a constant and channel references). Am I wrong to assume that P is the world position of the current point being shaded and N is its normalised normal? Is this the wrong way to get the light position into the shader? I spent some time testing and searching on the internet but I couldn't find anything that clarified these questions sufficiently for me. custom_lambert.hipnc
  18. Hey there, I'm rendering an ocean wave tank that I set up via the shelf tools and I'm getting very strange artifacts in my renders at various frames. Sometimes the tank completely disappears and I'm not sure why. There are also jagged shards of geo popping up every now and then and I can't for the life of me figure it out. I don't know if there's some discrepancy between the sizes of my bounding boxes that is causing this odd behavior. If anyone has time to take a look I'd really appreciate it! ocean_js.zip
  19. I'm learning about material layering in Mantra, and up until now I have always used the layermix VOP. This video shows another way of combining materials, chaining one node's "layer" output into another node's "base" input (at 33:22). Are these methods equivalent? The documentation has got me a bit confused. It makes it sound like the layermix VOP just averages the inputs and the bsdfs, whereas the chaining method also performs some sort of energy conservation: "The nodes take care of the physical aspects of combining the looks (fresnel components, energy conservation) automatically." Does that mean the layermix VOP does NOT perform these conservations? What exactly is happening in both of these cases? The fact that the video advises on the order in which you chain materials has gotten me especially confused. I am somewhat familiar with the layering techniques (averaging output pixel colour in the realtime world, performing inter-layer scattering in the offline world) but I have no idea how Mantra works in either of these two cases. Does anybody have insight into the layermix VOP vs chaining layers to base inputs?
  20. Hello, everyone. I have a rendering problem and can't figure out the reason. I'm doing a simple test rendering a box with Mantra, and there are hard edges in the render when the camera angle is changed (see the red highlighted area in the attachment). The light is just a default environment light and the camera is also a default one. Do you guys have any idea how this problem happens? Thank you! Wishing you all a happy new year!! cheers, Ricky
  21. Hi, I tried to recreate a gravel / terrazzo floor material, fully procedurally. As you can see in the attachment, I have white spot artefacts on the final output. Also, and I don't know why, I have a more obvious problem with displacement. I connected a Properties node with the Displacement Bound set higher than the Scale, so if I'm not wrong, that should not be the problem. The render is Mantra PBR, default settings. Does anyone have an idea why those nasty white spots appear and why the displacement breaks the polys?
  22. Why is Mantra so Slow

    Hi, I'm trying to learn shading and rendering in Houdini, but I've just set up a simple scene with just a sphere, and it's taking up to 20 minutes to render. Thanks
  23. Hello, I am working on an RBD simulation of a hotel collapsing and I have roughly 73,000 pieces in the simulation. I simulated with low-resolution pieces and I want to replace them with high-resolution pieces, but render times are ballooning / the render crashes when I swap them. My hi-res pieces are packed and have materials inside. I have the whole set of pieces in a single .bgeo.sc file (5 GB) and also each piece individually on disk (see graphic 1). I will refer to them as the Set of Pieces and the Individual Pieces respectively. I'm fairly new to Houdini, so go easy on me; here's the workflow that has gotten me the furthest: - I load the Set of high-res pieces and plug both them and the low-res pieces into a Transform Pieces SOP (see graphic 2). This allows me to copy the transforms of the low-res onto the hi-res pieces. Up to this part I have no problems. - When I start to render, Mantra takes a long 10-15 minutes generating the scene, using all of my RAM (64 GB). If I'm lucky it starts rendering by minute 15, but 90% of the time it just crashes. I'm using Mantra on Houdini 16.5.268, by the way. I know that Mantra needs a copy of the geometry when it renders, so the Set of Pieces is the problem here. I have also tried the Instance SOP, creating an instancefile attribute on the low-res pieces linking them to the hi-res pieces on disk, but it doesn't seem to be working; I must be missing something here but I can't find the solution (see graphic 3, and the sketch after this list). I have almost no knowledge of Mantra, so I have the default settings in the Mantra node. Is there a way of referencing/swapping/instancing the individual hi-res pieces into Mantra ONLY at render time while also having the transforms of the low-res pieces? Is there another way that I'm missing? Any way of reducing memory usage? Am I doing it completely wrong? Thank you in advance!
  24. I am trying to use the default beach tank and do a test render, but I found that some areas of the surface, like the water droplets, are a lot darker than the rest of the water. I tried increasing the refraction and reflection limits and the displacement bound, but it didn't help with the issue. It looks fine if the refraction limit is at 1, when everything is transparent, but that's not the look I was going for. Does anyone know how to fix it? shelf_beach_compare_shader_test.zip
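
For the dependency-listing question in item 7: a minimal Python sketch of the recursive walk described there, assuming a ROP at /out/r (hypothetical path). It only follows wired inputs, so anything inputDependencies() picks up through references is not covered.

    import hou

    def collect_dependencies(node, result=None):
        # Map each node's path to the paths of the nodes it immediately
        # depends on (its wired inputs), then recurse into those inputs.
        if result is None:
            result = {}
        if node.path() in result:
            return result
        inputs = [n for n in node.inputs() if n is not None]
        result[node.path()] = [n.path() for n in inputs]
        for n in inputs:
            collect_dependencies(n, result)
        return result

    deps = collect_dependencies(hou.node("/out/r"))
    for path, input_paths in deps.items():
        print("%s -> %s" % (path, input_paths or 0))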
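
For the specific-frames question in item 10: a minimal Python sketch, assuming a Mantra ROP at /out/mantra1 (hypothetical path), that renders every 10th frame, or an arbitrary list of frames, to whatever output path is set on the ROP.

    import hou

    rop = hou.node("/out/mantra1")

    # Render every 10th frame of a 1-250 range, one frame at a time.
    for frame in range(1, 251, 10):
        rop.render(frame_range=(frame, frame, 1))

    # Or render a hand-picked list of frames.
    for frame in (12, 47, 103):
        rop.render(frame_range=(frame, frame, 1))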
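
For the render-time expansion idea in item 14: a minimal Python SOP sketch, with hypothetical attribute and file names, that retargets existing packed disk primitives by setting their unexpandedfilename intrinsic so the heavy per-piece geometry is only pulled from disk at render time.

    import hou

    node = hou.pwd()
    geo = node.geometry()

    # Assumes each packed disk primitive carries a string "name" attribute
    # ("piece0", "piece1", ...) matching the per-piece files written to disk.
    for prim in geo.prims():
        piece = prim.attribValue("name")
        filepath = hou.expandString("$HIP") + "/geo/pieces/%s.bgeo.sc" % piece
        prim.setIntrinsicValue("unexpandedfilename", filepath)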
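
For the instancing question in item 23: a minimal Python SOP sketch, again with hypothetical attribute and file names, that adds the instancefile point attribute to the low-res points so Mantra loads the matching hi-res piece from disk only at render time, while the points keep carrying the sim transforms.

    import hou

    node = hou.pwd()
    geo = node.geometry()

    # Assumes one point per piece with a string "name" attribute
    # ("piece0", "piece1", ...) and one hi-res file per piece on disk.
    attrib = geo.addAttrib(hou.attribType.Point, "instancefile", "")
    for point in geo.points():
        piece = point.attribValue("name")
        filepath = hou.expandString("$HIP") + "/geo/hires/%s.bgeo.sc" % piece
        point.setAttribValue(attrib, filepath)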