Search the Community

Showing results for tags 'Mantra'.

Found 230 results

  1. I am using an IFD workflow, and I want to render an identical scene from 3+ different camera positions. I've come across several different ways of rendering these 3+ images sequentially, but that's not exactly what I want. I would like to make 1 IFD file that renders 3+ images from different camera positions. Is there a way to do that? I am working with a large dataset. For easy math, let's say that it takes 10 minutes to read/load, and then 10 minutes to render. Rather than doing three sets of load+render + load+render + load+render (60 minutes), I'd like to be able to do load+render+render+render (40 minutes). I have found that I can use the "Stereo Cam Template" (even though I am not rendering in stereo) to work with two of these cameras to do load+render+render in one IFD file. But how can I do this with more cameras?
  2. Hi guys, I ran into a problem while compositing my project. For some reason there is a transparent strip around my object in the pyro sim render that doesn't look good. If I render each object separately, I get this empty gap (screenshot1 and screenshot3); if the smoke and the object are in one Mantra render, everything is fine (screenshot2). But I need them rendered separately for compositing. Here are screenshots of my render settings (object - spaceship, and PyroFX - smoke). I'd be grateful for any advice - I can see there are a lot of people here who know Houdini like the back of their hand. Please forgive my English.
  3. Render specific frames

    I am trying to render out specific frames from my timeline. Can this be automated and saved out to disk? I would like control over exactly which frames get rendered, or control via a pattern, for example rendering every 10th frame.
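    One way to script this: a minimal sketch using the hou Python module, assuming a Mantra ROP at /out/mantra1 whose output picture parameter already writes to disk (e.g. with $F4 in the filename); the node path and the frame list are placeholders.

        import hou

        rop = hou.node("/out/mantra1")

        # Render an explicit list of frames, one frame per call:
        for f in [1, 25, 37, 102]:
            rop.render(frame_range=(f, f))

        # Or render by a pattern, e.g. every 10th frame of the playbar range:
        fr = hou.playbar.frameRange()
        for f in range(int(fr[0]), int(fr[1]) + 1, 10):
            rop.render(frame_range=(f, f))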
  4. How would you render the furthest ray-hit surface? I'm not even sure where to start. Well, that's not entirely true: I have an altered shader using IsFrontFace to "turn off" the camera-facing polygon. The problem is that this only gives me the next surface behind the first, which is great if one is rendering a simple convex shape, but for a more complicated concave shape, like a wildly rippling blob of liquid, this isn't very useful for comp. notBackHelp.hip
  5. Hi guys, I'm not quite sure if I'm doing something wrong here, but starting with Houdini 17, whenever I'm using the surfaceexport node to connect e.g. a classiccore, I get this warning: ‘Warning: /opt/hfs17.0/houdini/vex/include/physicalsss.h: Using uninitialized variable: nmax (247,26:29)’ - using computelighting doesn't yield that message… (tested under Windows and Linux). Anyone else experiencing this? cheers, flo
  6. Sulaiman - Compositing/Houdini FX

    Hi everyone! I am a VFX artist who recently graduated from 3dsense Media School and specialises in a wide range of skill sets. Below is a Vimeo link to my showreel as well as my submission to The Rookies! Do feel free to drop me a message, thank you! Contact me and check my channels: sulaimanwar18@gmail.com https://linkedin.com/in/sulaimanwar/ https://www.artstation.com/sulaimanwar =================================== https://vimeo.com/325603170 https://www.therookies.co/entries/427 FinalProject10.mov
  7. I am working on a big destruction job and I'm trying to optimise our workflow as much as possible. My usual workflow: cache a single frame of my fractured geo as packed fragments, then import the RBD sim and create a point per RBD piece (DOP Import). Lighting then imports these two caches and uses Transform Pieces. This works well since we only cache a single frame of the fractured geo, and the cache for the RBD points is very small. However, the geo is still written into the IFD file every frame, which can be quite large. So, is it possible to specify the geo needed for the whole sequence and then have Mantra transform the geo at render time (the same as Transform Pieces)? This would make the IFDs tiny. An alternative method is to write each fractured piece out as a separate file, then copy empty packed disk primitives to the RBD points and use the unexpandedfilename intrinsic to expand them at render time. This makes tiny and fast IFD files, which is great, but it seems quite slow to render - probably because it has to pull so many files from disk (1000s of pieces). Is it possible to do the render-time Transform Pieces approach, or does anybody have a better method? (The two I've mentioned are fine, I'm just trying to optimise!)
  8. Does anyone use Mantra as the sole renderer for production? Let me rephrase: if I choose to use Mantra for production, am I missing something? Am I limited? In which situations do you think I shouldn't be using Mantra? Obviously I didn't mention another renderer, to avoid the A vs B debate. There is just A, Mantra.
  9. Hi, I started getting a black screen in my render view. Looking into where the issue could be coming from, I found some suggestions online and tried: restarting Houdini, setting up new scenes, switching from ray tracing to PBR. None of them worked. Then I realized that if I start rendering and simultaneously try to open a new file and then cancel the file opening, Mantra starts rendering... Anyone have a hint on what I'm probably doing wrong? Thanks. Houdini FX 16.0.671, Windows 10, NVIDIA GeForce GTX 970 3 GB
  10. Hello! I'm working on a custom toon shader. As a basis I want to use the Lambert lighting model as described in the tutorial embedded below. To test this I created a very simple scene with a sphere placed at the origin, a point light, and a material with the Material Builder applied to the sphere (see image below). Now, here's the problem: the shading of the sphere is incorrect as soon as the point light has negative coordinates. It seems as if the surface_global's P and N are expressed in a different space compared to the point light's position. This happened with Mantra as well as Redshift (where I used the same setup as in the video below, apart from getting the light's position through a constant and channel references). Am I wrong to assume that P is the world position of the current point being shaded and N is its normalised normal? Is this the wrong way to get the light position into the shader? I spent some time testing and searching on the internet but I couldn't find anything that clarified these questions sufficiently for me. custom_lambert.hipnc
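    For reference, the Lambert term itself is just the clamped cosine between the surface normal and the direction from the shaded point to the light, and it only behaves as expected when P, N and the light position are expressed in the same space. In Mantra's shading contexts P and N arrive in camera ("current") space rather than world space, so a world-space light position generally needs to be transformed first (e.g. with a Transform VOP, or ptransform("space:world", "space:current", ...) in VEX). A plain-Python sketch of the math, with made-up values, just to pin the formula down:

        import numpy as np

        def lambert(P, N, light_pos):
            # Classic Lambert diffuse: cosine of the angle between the surface
            # normal and the direction to the light, clamped at zero.
            # All three inputs must be expressed in the same space.
            L = light_pos - P
            L = L / np.linalg.norm(L)
            N = N / np.linalg.norm(N)
            return max(0.0, float(np.dot(N, L)))

        # A point on a unit sphere lit from (-2, 1, 0); negative light
        # coordinates are fine as long as the spaces match.
        print(lambert(np.array([0.0, 0.0, 1.0]),
                      np.array([0.0, 0.0, 1.0]),
                      np.array([-2.0, 1.0, 0.0])))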
  11. Hey there, I'm rendering an ocean wave tank that I set up via the shelf tools and I'm getting very strange artifacts in my renders at various frames. Sometimes the tank completely disappears and I'm not sure why. There are also jagged shards of geo popping up every now and then and I can't for the life of me figure it out. I don't know if there's some discrepancy between the sizes of my bounding boxes that is causing this odd behavior. If anyone has time to take a look I'd really appreciate it! ocean_js.zip
  12. I'm learning about material layering in Mantra, and up until now I have always used the layermix VOP. This video shows another way of combining materials, chaining one node's "layer" output into another node's "base" input (at 33:22). Are these methods equivalent? The documentation has got me a bit confused. It makes it sound like the layermix VOP just averages the inputs and the BSDFs, whereas the chaining method also performs some sort of energy conservation: "The nodes take care of the physical aspects of combining the looks (fresnel components, energy conservation) automatically." Does that mean the layermix VOP does NOT perform these conservations? What exactly is happening in both of these cases? The fact that the video advises that the order in which you chain materials matters has got me especially confused. I am somewhat familiar with layering techniques (averaging output pixel colour in the realtime world, performing inter-layer scattering in the offline world) but I have no idea how Mantra works in either of these two cases. Does anybody have insight into the layermix VOP vs chaining layers to base inputs?
  13. Hello, everyone. I have a rendering problem and can't figure out the reason. I'm doing a simple test of rendering a box with Mantra, and there are hard edges in the render when the camera angle is changed (see the red highlighted area in the attachment). The light is just a default environment light and the camera is also a default one. Do you guys have any idea how this problem happens? Thank you! Wishing you a happy new year!! cheers, Ricky
  14. Hi, I tried to recreate a gravel/terrazzo floor material, fully procedurally. As you can see in the attachment, I have white spot artefacts in the final output. I also have a more obvious problem with displacement, and I don't know why. I connected a Properties node with the Displacement Bound set higher than the displacement scale, so if I'm not wrong, that should not be the problem. The render is Mantra PBR, default settings. Does anyone have an idea why those nasty white spots appear and why the displacement breaks the polygons?
  15. Why is Mantra so Slow

    Hi, I'm trying to learn shading and rendering in Houdini. I've set up a simple scene with just a sphere, and it's taking up to 20 minutes to render. Why is that? Thanks
  16. Hello, I am working on an RBD simulation of a hotel collapsing and I have roughly 73,000 pieces in the simulation. I simulated with low-resolution pieces and I want to replace them with high-resolution pieces, but render times balloon, or the render crashes, when I swap them. My hi-res pieces are packed and have materials inside. I have the whole set of pieces in a single .bgeo.sc file (5 GB) and also each piece individually on disk (see graphic 1). I will refer to them as the Set of Pieces and the Individual Pieces respectively. I'm fairly new to Houdini so go easy on me; here's the workflow that has gotten me the furthest: - I load the set of high-res pieces and plug both them and the low-res pieces into a Transform Pieces SOP (see graphic 2). This allows me to copy the transforms of the low-res onto the hi-res pieces. Up to this part I have no problems. - When I start to render, Mantra takes a long 10-15 minutes generating the scene, using all of my RAM (64 GB). If I'm lucky it starts rendering by minute 15, but 90% of the time it just crashes. I'm using Mantra on Houdini 16.5.268, by the way. I know that Mantra needs a copy of the geometry when it renders, so the Set of Pieces is the problem here. I have also tried the Instance SOP, creating an instancefile attribute on the low-res pieces linking them to the hi-res pieces on disk, but it doesn't seem to be working; I must be missing something here but I can't find the solution (see graphic 3). I have almost no knowledge of Mantra, so I have the default settings in the Mantra node. Is there a way of referencing/swapping/instancing the individual hi-res pieces into Mantra ONLY at render time, while also keeping the transforms of the low-res pieces? Is there another way that I'm missing? Any way of reducing memory usage? Am I doing it completely wrong? Thank you in advance!
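    One direction worth trying (a hedged sketch, not a tested setup): an Instance object with Point Instancing set to "Fast point instancing" only loads the referenced geometry at render time, driven by an instancefile string point attribute on the template points that points at each hi-res piece on disk, and the point transforms from the DOP import are applied to the instanced files. A minimal Python SOP sketch that builds the attribute follows; the "name" attribute and the piece_####.bgeo.sc path convention are assumptions to adapt to your own naming:

        node = hou.pwd()
        geo = node.geometry()

        # String point attribute that fast point instancing looks for.
        attrib = geo.findPointAttrib("instancefile")
        if attrib is None:
            attrib = geo.addAttrib(hou.attribType.Point, "instancefile", "")

        # Assumed location of the individually written hi-res pieces.
        base = hou.expandString("$HIP") + "/geo/hires"

        for pt in geo.points():
            piece = pt.attribValue("name")   # e.g. "piece123" from the DOP import
            idx = int(piece.replace("piece", ""))
            pt.setAttribValue(attrib, "%s/piece_%04d.bgeo.sc" % (base, idx))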
  17. I am trying to use the default beach tank and do a test render, but I found that some areas of the surface, like the water droplets, are a lot darker than the rest of the water. I tried increasing the refraction and reflection limits and the displacement bound, but it didn't help with the issue. It looks fine if the refraction is at 1, when everything is transparent, but that's not the look I was going for. Does anyone know how to fix it? shelf_beach_compare_shader_test.zip
  18. Hello all, I seem to have run into a weird problem on my most recent project. One of my objects doesn't render, with a material or without. Yes, it's in my force objects; yes, it's a polygon mesh (unless Vellum does something weird that I didn't know about). I tried switching the shader from the generic one to another material, but it doesn't matter. It will render my other polygon object, the ripples, and even the smoke sim later on in the sequence, but it won't render the ball, no matter what I do. I'm using 17, just in case it matters. This is literally the last step of my project and I can't wait to see it done, but this silly thing has to get fixed first. Thanks for your help.
  19. Hi guys, I'm trying to write a light path expression that excludes the default custom bsdf label "coat" To have only the coat layer is: lpe:C<...'coat'>.* To have all reflections but the coat I would expect it to be: lpe:C<RG[^...'coat']>.* This gives a syntax error and doesn't seem to work. Has anyone tried this before and knows the correct syntax or is this a bug? Cheers, Luca
  20. Hi, I am currently trying to use Mantra in distributed rendering mode (mantra -H machine1,machine2). It all works as expected, but as soon as I connect something into a cached shader (for example a constant into the Principled Shader), the error “No Code Generator Available for opshader_path_xyz” pops up and it doesn't render at all anymore. It also happens with custom cached shader assets. All works well in non-distributed rendering mode. If I unlock the asset, then it seems to work again, since the asset is then "in the file", but this is unfortunately not useful in production. Additional info: I am using Windows and the “Force VEX Shader Embedding” checkbox is on. Has anyone experienced this error before? If not, I'll submit a bug report. Cheers, Luca
  21. Hey, I am using a simple glass material with some tweaked parameters and I rendered out one single frame. It took 1.5 hours to render completely and I still got a decent amount of noise. I'm using an HDRI image for lighting and increased the samples on my light sources to reduce noise. Would anyone know how to reduce my noise? These are my render settings.
  22. Modulating Motion Blur

    Hello, you helpful people. I'm currently doing snow VFX and I'm a bit bothered by the tube-like look of Mantra's motion blur. I also use VRay quite a bit, which has some helpful features to make the motion blur look less sterile, especially the two settings shown below. The first setting allows me to move more samples towards the centre of the time interval, simulating a shutter that doesn't open and close instantly. The second setting allows me to move more samples towards the front or the back of the interval. Using the two settings combined, I have quite a lot of control over the look of the motion blur. Is there anything comparable in Houdini, or are there some tricks to emulate that look? I think I've diligently read what I could find about motion blur and Mantra, but I couldn't find anything comparable. Thanks in advance! Paul
  23. Hello! I just started dabbling in FLIP and have a little river scene; everything looks decent except for the whitewater. It takes forever to cache, but I have the rasterized particles cached out and the standard particles will be finished caching later this evening. I've played with the spray shader, but it looks kind of weird when applied to the rasterized particles, especially with the noise parameter that's on by default. I also read a few posts that suggested trying a billowy smoke shader instead. That made it look much better, but still not correct, so I figured I needed the regular particles cached in addition to the rasterized ones. If that's the case, they're caching now, but when they're finished, what do I do with them? Last time I had whitewater particles cached they just rendered as spheres because of the pscale. Am I supposed to mesh them like the FLIP and then layer them? I can post some screenshots once my cache has finished.
  24. Hey ODforce! I recently published the next installment of the Houdini For The New Artist series. This course is all about making Houdini easier to learn, and you can watch the first chapter for free! Here's the link: http://bit.ly/2okYpY4 Some of the main topics include:
      Chapter 1: How to understand and set up DOP networks; Using SOP attributes with the dynamic context; Forces; Particles; Life & Variance; How to use groups; Rendering Particles; Sub-steps, Sub-frames, & Time-Steps; Caching Sub-steps & Practical Applications
      Chapter 2: Texturing & Lighting Techniques; How to Integrate a Substance Painter Workflow with Houdini; Spot Lights, Area Lights, & Artistic Considerations When Placing Key Lights; Utilizing Light Attenuation; Gobos; Aim Constraints; Near-attenuation; Techniques for Reading & Emphasizing Form While Lighting; How to Make Light Blockers; Shadow Masking; Techniques for Controlling Light Placement; Component-based Lighting; Light Linking
      Chapter 3: HDAs, Velocity, Normals, & VOPs; What is an HDA? How do you make one?; Data Types & Adding User Parameters; Booleans, Switch Nodes, & Parameter Behaviors; Additional HDA Modifications; What is Velocity? What are Normals?; An Introduction to the Point Wrangle; Introduction to VOPs; How to Blend Materials; Basic Noise Parameters; Additional VOP Techniques
      Chapter 4: Geometry Types, Animation, & CHOPs; Geometry Types & Attributes; Introduction to Volumes; Animation Basics; Channel Groups, Animation Layers, & Motion Trails; Introduction to CHOPs; Curve Constraints, CHOPs with DOPs, & Additional Settings
      Chapter 5: Rendering & Compositing; Introduction to Multi-pass Compositing; Rendering Concepts; How to Use Takes; Rendering Multiple Jobs; De-noising Passes; Compositing Basics with Houdini
      Thanks for watching! - Tyler