Welcome to od|forum


Mario Marengo

Members
  • Content count: 1,251
  • Joined
  • Last visited
  • Days Won: 7

Community Reputation: 26 Excellent

2 Followers

About Mario Marengo

  • Rank: Grand Master

Contact Methods

  • Website URL: http://www.axyzfx.com

Personal Information

  • Name: Mario
  • Location: Toronto, Canada
  • Interests: Music, Math, Mantra, Moudini (sorry, but it has to start with 'M' :)

Recent Profile Visitors

12,978 profile views
  1. Thanks for your thoughts^H^H^H^H^H^Hwrenches, Jim! :-) Yup, I agree with all of it. My new mission: to convert all these Maya heathens to Houdini! Cheers.
  2. The only difference between the two is the way in which they deal with total internal reflection (TIR). If you look at their output for the cases where the transmission is valid (kt > 0), you'll see that they're identical.

     When writing this kind of function, there's always the question of what to do with the transmission vector (T) when there is no valid transmission (which happens under TIR, since you end up with a divide-by-zero or the root of a negative number, and T is undefined). You could return a zero vector (possibly dangerous), or assume the user will always inspect kt and deal with the issue (not very reliable), or pick something arbitrary that's "incorrect" but not zero. Both functions choose the last approach, but they make different choices (a matter of legacy behaviour, I think). Under TIR, fresnel() returns T=I (direct transmission), and refract() returns T=R (mirror reflection) -- and that's why you see a difference (notice that the difference only happens in the black portions of the image above, i.e. under TIR).

     Knowing this, you can modify refract() to match fresnel() (or vice versa) by inspecting kt. Though of course, the moral of the story is not so much "here's how you make them match" as "here's where you have no business refracting at all". I've added a few parms to your shader so you can explore all of the combinations I mentioned above. HTH. mgm_refraction.hipnc
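The TIR fallback logic the post describes can be sketched outside of VEX. Below is an illustrative Python version (refract_dir and its tir_mode switch are made-up names for this sketch, not the actual VEX signatures), showing the two conventions being compared: falling back to mirror reflection (refract()-style) versus passing the incident direction straight through (fresnel()-style).

```python
import math

def refract_dir(I, N, eta, tir_mode="mirror"):
    """Refraction direction via Snell's law; I and N are unit-length 3-tuples,
    with N facing the incident side. Under total internal reflection there is
    no valid T: 'mirror' returns the reflection direction (like refract()),
    'direct' returns I unchanged (like fresnel())."""
    dot = sum(i * n for i, n in zip(I, N))
    k = 1.0 - eta * eta * (1.0 - dot * dot)
    R = tuple(i - 2.0 * dot * n for i, n in zip(I, N))  # mirror reflection
    if k < 0.0:                     # TIR: kt would be 0, T is undefined
        return R if tir_mode == "mirror" else I
    scale = eta * dot + math.sqrt(k)
    return tuple(eta * i - scale * n for i, n in zip(I, N))
```

At normal incidence the ray passes straight through regardless of eta; at grazing angles with eta > 1 the TIR branch kicks in, and the two modes return visibly different directions, which is exactly the discrepancy discussed above.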
  3. I think this might be what you're after: 1. Move the Pos null to move the center of influence 2. Switch "Noise Space" from "Fixed" to "Relative" to make the noise space "stick" to the center of influence. HTH. mgm_moving_ramp.hip
  4. Thanks, Eetu! Nice to be back.
  5. Here's one possible approach using vex (PointWrangle SOP). Comments in the code. cummulative_transfer.hip
  6. Thanks, Francis. Yes, I've just recently started looking into the guts of HouEngineForMaya and am in touch with SESI... so we'll see. I'm fairly certain now, however, that at least to start with, we'll tackle things in the traditional way (Maya->Fbx/Alembic/etc->Houdini->Mantra), just for practical reasons. But as I familiarize myself better with some of the components, I may decide to tackle a direct Maya->IFD solution (in which case I'd make it open source). The purpose behind this initial exploration was to A) find out if any such thing exists out there (to which the answer is clearly "no" -- at least not publicly), and B) failing a ready-made tool, find out what the scope of a roll-your-own solution would be (to which the answer seems to be "rather large"). ...but I'll continue to pull the string, of course :).
  7. Hello OdForce! Circumstances have recently forced me to explore the possibility of rendering directly to Mantra from Maya -- that is: generating an IFD directly from Maya. This is in contrast to the more typical exporting of scene elements to Houdini (via some intermediate format like Alembic/Fbx, say) and then rendering from Houdini, which I realize is still a valid avenue open to us. Instead, I'm looking into the possibility of a method that will allow Maya users to simply "press render" and have Mantra transparently service said renders behind the curtains.

     My uncertainty with the whole thing lies primarily on the Maya side, because while I'm quite comfortable with Mantra and IFDs, I'm very much *not* a Maya power user. I realize this is not at all a trivial task (and perhaps not even worth the effort in the end), and am also conversant with some of the individual components that may be put to use in a hypothetical solution:

       • Maya & Houdini C++/Python toolkits
       • IFDs
       • reference SOHO implementation
       • Houdini Engine for Maya
       • etc...

     But I'm curious, so I thought I'd tap into the vast Houdini brain-store here to see if anyone has had experience with this or can point me to existing partial/complete solutions (I'm aware of at least one partial attempt), or simply has travelled down the road enough to say "don't even think about it, you fool!" TIA!
  8. Yes, 4d->perlin->1d and 4d->perlin->3d included. Again, if you need to patch by hand right now, these additional two changes are: at lines 395-398, where it currently reads:

         #define ns_fperlin4 \
            nsdata ( "perlin" , 0.0168713 , 0.998413 , 0.324666 , 1 ) // +/- 0.0073
         #define ns_vperlin4 \
            nsdata ( "perlin" , 0.00576016 , 1.025 , 0.32356 , 1 ) // +/- 0.0037

     it should instead read:

         #define ns_fperlin4 \
            nsdata ( "perlin" , 0.0168713 , 0.998413 , 0.507642 , 1 ) // +/- 0.0073
         #define ns_vperlin4 \
            nsdata ( "perlin" , 0.00576016 , 1.025 , 0.518260 , 1 ) // +/- 0.0037

     The rest of the 4d stats look OK -- and it was the 4D batch that was compromised for some reason (maybe because I ran it in Windows?). As an aside, I'd recommend using simplex over perlin from now on, if you can. Thanks for catching these!
  9. Bug and fix submitted (ID:49911).
  10. Huh. Seems like there was a little hiccup when auto-generating the stats tables. Not sure what happened there, but the "running mean" value (which is not necessarily the same as the average of the minimum and maximum values encountered -- it is the mean over approx. 5 million samples) for the ns_fsimplex4 wrapper didn't get calculated properly, or got corrupted somehow (it got assigned a value of 0.294652, which is clearly wrong). These tables were generated automatically and are used by the algorithm to "normalize" the output to [0,1] (while spanning as much of that range as possible). The biasing you're seeing in that particular flavor of noise (simplex, 4d-IN, 1d-OUT) is due to this flawed entry in the tables.

      I haven't checked lately, so this may have been corrected already by SESI, but I'll post the BUG just in case anyway. In the meantime, if you need a fix "right now", you can change one value in the file $HH/vex/include/pyro_noise.h as follows (you may need to change permissions on that file to be able to edit it): at lines 403-404, where it currently reads:

          #define ns_fsimplex4 \
             nsdata ( "simplex" , 0.0943673 , 0.912882 , 0.294652 , 1 ) // +/- 0.0064

      it should instead read:

          #define ns_fsimplex4 \
             nsdata ( "simplex" , 0.0943673 , 0.912882 , 0.503625 , 1 ) // +/- 0.0064

      This new mean value of 0.503625 may not be super accurate (it's just the average of the min and max), but all simplex means hover around 0.5 anyway, so it couldn't be too far wrong either. Hope that helps.
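As a quick sanity check on the stand-in value (plain arithmetic, sketched in Python), the suggested replacement mean really is just the midpoint of that entry's min/max range:

```python
# nsdata entries hold (name, min, max, mean, ...), the mean taken over ~5M samples.
# When the stored mean is corrupt, the midpoint of the observed range is a
# reasonable stand-in, as the post suggests:
lo, hi = 0.0943673, 0.912882        # min/max from the ns_fsimplex4 entry
stand_in_mean = (lo + hi) / 2.0
print(round(stand_in_mean, 6))      # 0.503625
```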
  11. A somewhat related old thread that may be useful.
  12. All good suggestions, but I think the problem is not so much how to generate a "unique color" (however that's defined) per material, but how to decompose the result after the fact. Here's some interesting reading[1] on this topic (and much more). He uses principal component analysis (PCA) in various color spaces (finally landing in Lab/Luv, IIRC), along with some "fuzzy" algebra, to get very impressive results. But, you know, not exactly "simple", this.

      The obvious problem scenario that springs to mind is: what happens when one of your materials is semi-transparent/translucent? And what if it *is* translucent and 15 other materials are partially showing through it? Can you recover any one of them from the resulting color soup?

      [1] Arash Abadpour, "Color Image Processing Using Principal Component Analysis", Master's thesis, 2005
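To make the "color soup" point concrete, here is a small illustrative sketch (Python, not from the thread): once a semi-transparent material is composited over another, distinct layer pairs can produce the exact same final color, so the layers cannot be recovered from the composite alone.

```python
def over(fg, bg, alpha):
    """Scalar 'over' composite: fg covers bg with the given opacity."""
    return alpha * fg + (1.0 - alpha) * bg

# Two different material pairs, same 50% coverage, indistinguishable results:
a = over(0.6, 0.4, 0.5)    # material A over material B
b = over(0.9, 0.1, 0.5)    # material C over material D
assert abs(a - b) < 1e-12  # both composites land on the same pixel value
```

With 15 stacked translucent layers (the scenario the post warns about), the ambiguity only compounds: every composite collapses many distinct layer combinations onto one value.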
  13. Hi Alexey, I'm hoping to get to it this weekend... A couple of VEX functions had issues, and those have been fixed, which is good. The problem is that my original treatment of PC-based emission unwittingly relied on one of these flaws (and what was solving "fast" was actually fast, yes, but also wrong). Long story short: even though PC-based emission is now "fixed" (i.e. it doesn't crash or generate garbage point clouds), it is no longer "fast"... which is what remains to be fixed. Thanks for all your testing. I'll let you know when a fix is submitted. Cheers.
  14. There was a change introduced in VEX right after the release candidate was frozen. This had the unfortunate side effect (no pun intended) of breaking the PC-based scattering portion of the Pyro2 shader. Now that we again have access to daily builds, I'll try to address this problem. Unfortunately, PC-based scattering will remain pretty much unusable (for daily builds) until then. Sorry about that. Just unfortunate timing. I'll try to remember to update this thread once a fix is submitted.
  15. #pragma help is for writing help for the overall operator. To add "tooltip"-style popup help for a single parameter, use #pragma parmhelp.