
Jero3d

Members
  • Content count

    24
  • Joined

  • Last visited

  • Days Won

    3

Jero3d last won the day on October 27 2016

Jero3d had the most liked content!

Community Reputation

22 Excellent

2 Followers

About Jero3d

  • Rank
    Peon
  • Birthday 09/24/1994

Contact Methods

  • Website URL
    http://www.jrmaggi.com/

Personal Information

  • Name
    Jeronimo M
  • Location
    Vancouver

Recent Profile Visitors

1,388 profile views
  1. string groups[] = detailintrinsic(0, "primitivegroups");
     foreach(string s; groups){
         if (inprimgroup(0, s, @primnum)){
             i@zone = opdigits(s);
         }
     }

     There is no way to delete groups in VEX as far as I know. You will need to add a Group SOP, go to the Edit tab, then Delete, and type zone*
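     If the groups you care about share a naming convention (like the zone* pattern mentioned above), a slightly more defensive variant of the same snippet could filter by prefix first. This is a hedged sketch for a Primitive Wrangle; the chs("prefix") parameter and the "zone" default are illustrative additions, not part of the original post:

     // Tag each primitive with the numeric suffix of the group it belongs to.
     string prefix = chs("prefix");                      // e.g. "zone"
     string groups[] = detailintrinsic(0, "primitivegroups");
     foreach (string s; groups)
     {
         // only consider groups whose name starts with the chosen prefix
         if (startswith(s, prefix) && inprimgroup(0, s, @primnum))
             i@zone = opdigits(s);                       // trailing digits of the group name
     }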
  2. Houdini cooking every time and right click

    Foxel, try reinstalling your current graphics card driver if you can't update it. I had the exact same issue over the weekend on my Windows 10 laptop. After submitting a bug report, SideFX replied saying I should reinstall or upgrade my graphics card driver. I did that and it immediately worked again. I had tried EVERY possible solution before that: I uninstalled Houdini several times, deleted the houdini15.5 folder, disabled my antivirus, disabled/enabled the firewall for Houdini, and even tried to launch Houdini while offline, and nothing seemed to fix it. At one point I even suspected something could be wrong with my RAM, since it wouldn't go above 4 GB (out of 16) while cooking a big scene. No matter what I did in Houdini, the memory wouldn't go above 4 GB. I ran the Windows Memory Diagnostic tool and had no errors, so I knew it wasn't a hardware issue. The only thing that solved it was reinstalling the graphics card driver. If that doesn't help, I would submit a bug to SideFX!

    Edit: To reinstall the latest driver, I actually went to Nvidia's website and downloaded the driver myself instead of updating it from the Nvidia Control Panel. My guess is that some Windows 10 update might have corrupted the driver.
  3. Houdini cooking every time and right click

    I had similar problems over the weekend and updating my graphics card driver solved it for me
  4. Motion vector

    Make sure you copied the whole setup inside the Mantra Surface, including the null node that renames the variable to ndcvec. If you didn't copy the null node, you can also change the last line of the snippet to diff *= res; (provided you didn't add any nodes after the Subtract). If you did, just replace diff with the name of the data that is output from the node connected into the snippet.
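    In case it helps to see that part of the setup in code form, here is a hedged sketch of what it roughly boils down to. This is my own reconstruction, not the exact snippet from the thread; it assumes a Mantra shading context with motion blur enabled (so getblurP() can evaluate P at the shutter open and close times), and ndcvec is just the export name used in the setup above:

    vector res;
    renderstate("image:resolution", res);    // query the image resolution in pixels
    vector ndc_open  = toNDC(getblurP(0));   // P at shutter open, converted to NDC
    vector ndc_close = toNDC(getblurP(1));   // P at shutter close, converted to NDC
    vector diff = ndc_close - ndc_open;      // screen-space motion in NDC units
    diff *= res;                             // scale to pixel units, as suggested above
    // diff would then be exported (e.g. as ndcvec) to an extra image plane.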
  5. Motion vector

    I changed the way you were exporting the motion vectors to the setup I mentioned. You also have to enable Allow Motion Blur in Mantra, and to use the velocity of your object I enabled Geometry Velocity Blur under the Sampling tab of the geometry object. (If you only want transformation blur, you can instead increase the Xform Samples to 2 in Mantra.) Finally, you also have to assign the material to the object. motionvector_working.hipnc
  6. Motion vector

    Could you post the hip file? It sounds like the motion vector pass isn't being created. I would double-check that the attribute you are exporting is set to be a vector, and that the image plane has the correct attribute name and data type. Also make sure you have the setup on the shaders that your geometry is using, and use the IPR to see if there's anything there. I'm guessing you got the setup from the thread below, but there are 2 of them. I have used Matt's method and it works every time! I would give that a try, or you can post a hip file so it's easier to pinpoint the issue. Everything above is just a guess; I can't be sure without looking at it. This is the post with the working setup:
  7. I have a question that has been bugging me for some time and I couldn't find much information about it. Which is the best and most efficient way to render many polygons: using delayed load procedurals or using packed disk primitives? Or am I confused, and are they both doing the same thing with no difference between the two workflows? As far as I know, they both create instanced geometry. The documentation doesn't help much either; half of the things I read talk about optimizing a render using delayed load procedurals, and the other half about using packed primitives. I'm wondering if packed primitives are the new workflow and delayed load procedurals were the old way of doing it and are now obsolete. Here are the 2 workflows I'm talking about:

     Packed Disk Primitives
     Here I pack all my geometry and write it out to disk. I then load it back and change the load setting to "Packed Disk Primitives". Then I generate my IFDs, which now reference the geometry from disk instead of having to write it out (and the IFDs are a few KB or MB big). I then render using those IFDs. Here is what the documentation says about it: "Packed Primitives express a procedure to generate geometry at render time." "Because Packed Disk Primitives by their nature are geometry streamed from a file, similar to Alembic primitives, we don't have to use a special procedural to get smaller IFDs."

     Delayed Load Procedurals
     Here I write out my geometry (not packed) as bgeo, then make a Delayed Load Procedural shader and select the bgeo files I just wrote to disk. I then go to the Rendering -> Geometry tab of my object and load my procedural shader. I then create my IFDs and render them out. The documentation about delayed load procedurals talks about optimizing geometry this way.

     So I know there are these 2 ways, but are they both equally good, or is one of them better than the other? Which workflow do you use? Also, when using packed disk primitives, if the geometry you want to render is unique and can't be instanced (or there's just no point in doing it), do you still pack it (so it's only 1 packed prim) and save it out? Or do you use delayed load procedurals? Do you use any other workflow? Any advice on this would be greatly appreciated! Thanks
  8. Custom Group SOP in VEX

    Hi guys, here are the files I used for the last Vancouver Houdini User Group in case anyone wants to take a look. My presentation was about writing the Group SOP in VEX and trying to optimize it. I wrote everything in a wrangle, and it works great as a preset in a point/prim wrangle. I also made a Digital Asset, although it's not a VEX operator: I couldn't figure out how to make it work purely in VEX, because I need the group bindings from the wrangle and I'm not sure how to implement those in VEX. Anyway, the OTL is just the wrangle with the parameters promoted. In the presentation file you can look at the speed tests of the Group VEX vs the Group SOP. The performance is pretty similar if you have a few points/prims, but once you go above 1 million points the difference really starts to kick in, and the VEX version keeps pulling further ahead of the Group SOP as the point count grows. The difference is most visible when you group by bounding object and when you group by volume; a rough sketch of the volume mode follows below. I also made those 2 modes work with primitives. Here are the files in case anyone wants to take a look at the code. Feedback is greatly appreciated! group_vex_jeronimo_maggi.hipnc vhug_presentation_jeronimo_maggi.hipnc group_vex_jm.hdanc custom_group_sop.vfl
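    For anyone who can't open the hip files, here is a rough sketch of the idea behind the "group by volume" mode. This is my own hedged reconstruction, not the code from the presentation files; it assumes a signed distance volume (SDF/VDB) of the bounding shape wired into input 1 of a Point Wrangle, and the group name "inside" is just an example:

    // Put points that fall inside the bounding volume into a group.
    float d = volumesample(1, 0, @P);          // signed distance sampled at the point position
    if (d < 0)                                 // negative distance means the point is inside
        setpointgroup(0, "inside", @ptnum, 1);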
  9. Here is a chart from the docs of a previous Houdini version (not sure which one) that has helped me a lot. For some reason it's no longer in the docs, but you can hunt it down if you look at the documentation for H12 or H13. Another thing I have read over and over again in the docs is that you should never really go above 6 pixel samples (they mention 9 as being ridiculously high, and you should only go that high if you have very fine displacement detail that isn't being rendered properly). Almost everything can be fixed with the min and max ray samples and the noise level, and of course by using the quality and limit controls if you have a problem with reflections/refractions/GI. Following the chart solves all my problems most of the time! Hope it's useful
  10. Is this what you are trying to do? I animated a sphere and just timeshifted it with the new for loop system. timeshift.hipnc
  11. Kill isolated particles

    The reason it's not working is that you need to declare a variable called radius (or use a number instead of the word "radius"). You just need to add one line before that:

    float radius = 0.1;
    int handle = pcopen(0, "P", @P, radius, 5);
    if (pcnumfound(handle) == 1)
        removepoint(0, @ptnum);

    You could also set it to float radius = ch("radius"); to create a parameter and control it through a slider. I attached an example using Origin's code in case you want to see how to apply it. kill_isolated_particles.hipnc
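    For completeness, here is how the whole point wrangle might look with the slider-driven radius mentioned above. A minimal sketch only; the default radius of 0.1 and the point-cloud limit of 5 come from the snippet above and are just illustrative values:

    // Delete points that have no neighbours within the search radius.
    // ch("radius") creates a promotable parameter on the wrangle.
    float radius = ch("radius");                   // e.g. 0.1
    int handle = pcopen(0, "P", @P, radius, 5);    // open a point cloud lookup around this point
    if (pcnumfound(handle) == 1)                   // only found itself, so it is isolated
        removepoint(0, @ptnum);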
  12. Hey Drew, the problem is that you are plugging your bump map into the displacement output, which causes Mantra to evaluate the input P and N of the Displacement Output and then create the displaced geometry (even though in this case the position isn't displaced). To fix this, you can plug the output N from the Displace Along Normals into the Base N of your Principled Shader. I tried it out and the result is the same, but now Mantra doesn't print "generating displaced geometry...". You only need the N since you are dealing with bump, so you can ignore the output position from the Displace Along Normals.

      I'm not sure what the exact difference is between the displacement and surface variables. I was always told to use the surface variables for the inputs of any node going into the surface context and the displacement variables when dealing with the displacement context. I tried plugging both in and the result is the same, so I can't really answer this one.

      Finally, regarding rest: Houdini does generate a rest parameter, but if you dive into the OTL and look at the parameters of the rest node, it's set to export "never", which means it won't be sent to Mantra and it won't overwrite your original rest position if you export it. bump_issue_fix.hipnc
  13. Are you saving the renders locally or on a network drive? I once had problems like this when I was outputting renders directly to a network drive, and my guess was that sometimes the network speed was too slow and the file wouldn't get saved. It stopped happening once I started saving the files locally.
  14. Here's an article from FX Guide explaining how The Mill made the music video: https://www.fxguide.com/featured/so-just-how-was-that-chemical-brothers-video-made/
  15. Prepare UVs For Mari?

    In my experience I found it was usually better to go for more UV islands and less stretching, rather than having very big areas unwrapped together that end up with some stretching. Also, since I painted in Mari, the placement is almost irrelevant; very rarely do I go into the UVs to project. Here is a turntable of a helicopter I painted in Mari, and I will explain how I organized my UV layout, what I spent time manually unwrapping, and what I did automatically.

    The UVs are spread over around 6 tiles if I remember correctly (I don't have access to the file right now), and the way I approached unwrapping was the following: any big, important pieces I would manually unwrap. So the body of the helicopter is carefully unwrapped, with the seams placed where they wouldn't be so visible. The reason you need a balance between hundreds of UV patches and 1 UV patch is that there's a lot of procedural work in Mari. The helicopter has a tiled diffuse texture, for example (the procedural textures in Mari are applied in UV space, not object space, so you would see a very clear difference between every patch), and if the UVs had seams all over the place I would have to manually paint over them, which really kills the procedural nature of the process. I only had very few seams in this case, and it would only take 5 minutes to project my texture over a seam and make it look seamless.

    Now for all the small objects, I just did automatic projections in Maya (which would be the same as what you are doing in Houdini), not even caring about where the seams were. The reason for this is that it's very hard to spot the seams on those objects, and if any extra work is required you can again paint it out manually.

    Finally, for laying out the UVs, I organize my UV tiles based on material. I plan ahead roughly how many materials I will use, and I organize my layout that way. For example, I put all the objects that would have green paint in the same tiles. I also have a tile for glass, one for rubber, another one for chrome, and so on. In Mari it's then very easy to select that UV tile and give it a procedural texture. I also use automatic layout for each of these objects, so they are spread across the UV tile in a way that wouldn't make sense if you had to paint in Photoshop. I think you can barely see any seams this way, despite everything being arranged randomly. Every rivet and bolt that makes up each panel was manually painted using projections, and they are spread out along different UV patches and different tiles, but you can't tell the difference. They aren't arranged in any logical way, but it doesn't matter since you paint in the viewport in Mari. It took about 2-3 weeks to texture the helicopter.

    This is my workflow, though; it doesn't mean it's the right way, and I'm sure there's huge room for improvement, but I thought it would be useful to share how you can work in Mari.

    Edit: I will try to upload a picture of the UVs tonight