
About meldrew

  1. gradient r&d

    Hey @Aizatulin & @srletak Sorry for the delay in getting back... Thanks for these solutions - especially the latest side-by-side, Aizatulin. I've been working on a setup on and off all week & I think I'm pretty much there now with the combination of what to do in VEX/VOPs. It got a little more complex as I've decided to try and implement displacement based on the above techniques, but with some separate controls, so I'm just tweaking & learning as I go. I'll see how far I can get over the weekend & with a little luck post some updates sometime soon. Thanks again to everyone who has replied!!
  2. gradient r&d

    @Aizatulin Thanks a bunch for this - as you mention, it is a great comparison of the 2 techniques in terms of drawing the shapes & the attrib transfer aspect. However I am still struggling to implement a point-sourcing function in the original VOP setup that @konstantin magnus shared last week. If I understand correctly, the following VEX code defines the points as the source, finding the nearest point on the grid & adding the normal direction?: int pt_near = nearpoint(1, v@P); vector pos_near = point(1, 'P', pt_near); float dist_near = distance(v@P, pos_near); vector dir = normalize(v@P - pos_near); What I would really like to do is apply this point sourcing through VOPs, so that I can have multiple wheels being generated & then run each wheel through its own colour ramp before a colormix node at the end. Apologies - I'm aware that I'm probably verging on being stupid here! As always, any help much appreciated. vop_point_sourcing.hip
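For reference, the nearest-point lookup in that wrangle is just distance math. A minimal sketch of the same logic in plain Python - the point lists and the zero-length guard here are illustrative, not taken from the original file:

```python
import math

# Hypothetical stand-ins for the wrangle's two inputs:
# grid points (input 0) and source points (input 1).
grid_pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 2.0)]
src_pts  = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)]

def nearest_source(p, sources):
    """Mimic nearpoint(): return the index of the closest source point."""
    return min(range(len(sources)),
               key=lambda i: math.dist(p, sources[i]))

for p in grid_pts:
    i = nearest_source(p, src_pts)
    pos_near = src_pts[i]                        # point(1, 'P', pt_near)
    dist_near = math.dist(p, pos_near)           # distance(v@P, pos_near)
    # normalize(v@P - pos_near), guarding the zero-length case:
    if dist_near > 0.0:
        direction = tuple((a - b) / dist_near for a, b in zip(p, pos_near))
    else:
        direction = (0.0, 0.0, 0.0)
```

Each grid point then carries its own distance & direction to the closest wheel source, which is the per-wheel data the ramps would feed on.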
  3. gradient r&d

    Hi! Sorry if the notes in the .hip file were unclear... The visual effect I want to achieve is accomplished mainly by the first response from @konstantin magnus, i.e. a wheel of colour that can be graded over its length. However, after seeing @Aizatulin's solutions, I would also like to implement some more control over the effect, namely: - Define wheel sources with points. - Define & control individual wheel radius (with just a float param or pscale attrib etc.) - Ramp over the soft min/max VOP node for each wheel. As I mention in my post above, I believe I understand the VEX mathematical principles to a far better degree than I did previously, however I am struggling to integrate these 3 elements of Aizatulin's wrangle approach with Konstantin's VOP approach. Hopefully my post here makes sense. Thanks!
  4. gradient r&d

    Thank you both again so much for the clear explanations! I think (hope) I have a much clearer understanding of the VEX workings at this point - in terms of definitions & some of the mathematics - as these examples are broken down in such helpful detail. However my SOP-based brain is still struggling a little to know exactly how to integrate the two VOP/VEX solutions. I've put together a version of the node tree I would like to achieve in this example file (a painfully incorrect implementation - however it's more to illustrate the order of operations I'd like to achieve). If someone would be able to take a look at this & outline a potential solution or direction, that would, as always, be a massive help. I feel like I am getting ever closer to the eureka moment, but as I've found it so helpful in the learning process to reverse-engineer previous examples, I thought I'd try my luck and ask one more time. Again, many thanks in advance if anyone does have the time to help me out. vex_vop_hybrid.hipnc
  5. gradient r&d

    Hi all, ok, so I've managed to get almost all of these suggestions working to some degree - however I'm still lacking a little understanding of the original suggestion provided by @Konstantin. 1. If I am using the arctangent - is it possible to create multiple 'colorwheels' on the same grid geometry? For example, I'm attempting to create a system where the source of each wheel would be a point projected onto the geo (using the Add SOP, or from a particle sim etc.) - so I was attempting to make a kind of hybrid of 'colorwheel.hipnc' & 'colorcurve.hipnc', however so far I've been totally unsuccessful. The transform matrix allows me to rotate the colour ramp, but as I understand it, the arctangent uses the dimensions of the grid to generate the curve? (Although if I alter the grid scales, the curve remains the same, which is what's throwing me off, I think.) 2. I've also been trying to implement the ramp parameter in the radius of the gradient from Aizatulin's file. But again to no avail - as I can't quite figure out where to wire that into the VOP setup. Apologies for being a little slow with this & as always any tips welcomed with open arms! Thanks!
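On question 1: the arctangent takes a direction, not a position, so the angular value it produces never changes with grid scale - only the wheel's centre matters. A small Python sketch of that remap; the 0..1 normalisation for a ramp input is an assumption about the wiring, not taken from the original file:

```python
import math

def wheel_value(x, z, cx=0.0, cz=0.0):
    """Angle around a wheel centre (cx, cz), remapped from atan2's
    (-pi, pi] range into 0..1 for use as a colour-ramp input."""
    angle = math.atan2(z - cz, x - cx)
    return (angle + math.pi) / (2.0 * math.pi)

# Scaling a point away from the centre leaves its angle unchanged,
# which is why resizing the grid does not resize the wheel:
same = wheel_value(1.0, 1.0) == wheel_value(5.0, 5.0)
```

Shifting the centre arguments per source point is one route to several independent wheels on the same grid.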
  6. gradient r&d

    Konstantin - thanks so much (again!) for taking the time to make these files up & share them! Before I open them & dissect fully, I'm going to spend some time trying to recreate your answer from the 'colorcurve' reply, to learn as much as I can along the way! I know there's no shame in using SOPs if it still gets the correct results, however I am always trying to advance in Houdini (whenever I get the time to really sit down with it) - so I also really appreciate everyone's responses here with varying methodology. The Odforce community is super helpful & supportive as always!
  7. gradient r&d

    Hi Aizatulin, wow, ok there's a whole lot for me to unpack there; the ramp for the radius is particularly helpful. Thanks a million! (My VEX is ashamedly almost non-existent... I have been looking for an exercise that forces me to practise it for quite some time - this seems like a good time to do that!) My loose plan is to build an asset where I am intersecting poly geo with a grid & then using those intersection edges to drive the gradients. Thanks again, the massive point in the right direction is greatly appreciated!
  8. gradient r&d

    Hi Konstantin, Thanks so much for taking the time to make that file up for me (with the node descriptions in the VOP!) - it is very much appreciated! In all honesty this technique is a little beyond me as a casual user - however I've rebuilt your setup & am getting the same results, so in that case it was a success! I do have a few questions though, if you'd be so kind as to maybe give me some hints? 1. The 'pinch' of the gradient at the end of the arc - is this a result of specifically using the arctangent / Cartesian function? If so, would there be a way around it to generate a fully soft gradient around the entire circumference? 2. What (if any) would be a suitable way to approach basing the gradient on any given piece of geometry? (i.e. draw a curve projected onto a grid, perhaps?) I assume the length-based calculations could still work - however I'm not sure where one would 'plug in' the geo to the VOP setup. 2b. The arctangent function is turning the 2D coordinates into a radial value. So my thinking is perhaps for a curve this node would be replaced with another function/pattern node? Again, thanks so much for your time! I have already learned a lot today just experimenting with various parameters & exploring the help docs with a little more direction! T.
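On 2/2b: when the gradient is driven by a curve rather than an angle, a common replacement for the arctangent is a normalised arc-length parameter along the curve - 0 at one end, 1 at the other - fed into the ramp instead. A hypothetical Python sketch of that measurement; the polyline points are illustrative:

```python
import math

def curve_u(points):
    """Normalised arc-length parameter along a polyline: 0.0 at the
    first point, 1.0 at the last, proportional to distance walked."""
    seg = [math.dist(a, b) for a, b in zip(points, points[1:])]
    total = sum(seg)
    u, walked = [0.0], 0.0
    for s in seg:
        walked += s
        u.append(walked / total)
    return u

# An uneven 3-point polyline: the middle point sits 1/3 of the way along.
params = curve_u([(0.0, 0.0), (1.0, 0.0), (3.0, 0.0)])
```

Unlike the angle, this value depends only on where along the curve a point falls, so there is no pinch where the range wraps around.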
  9. gradient r&d

    Hi all, I'm approaching something in H/Redshift which I've not really attempted before - so I was interested to see if there were any thoughts surrounding it. Basically I'm trying to recreate the soft/feathered gradient seen in the attached ref image. The inner hard edge I'm planning to achieve with some transformed geo floating above the grid below, however the super subtle feathering is challenging me a little. At first I thought of using a simple Attribute VOP on a grid > point Cd. However I'm not sure how to achieve the outer feather using that technique... Similarly, with an attrib transfer from geo > grid it seems not very feasible to get the extreme feather unless the mesh is crazy high-res? I'm continuing to experiment & look through forum threads, but if anyone has some suggestions/tut links/examples etc. it would be very much appreciated! (Worth noting system = H17.5.258 & RS 2.6.43.) Thanks in advance!
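One common route to an arbitrarily soft feather without a dense mesh is to compute a distance-based falloff per point rather than transferring attributes. Roughly, in Python - the inner/outer radii here are made-up parameters, not values from the scene:

```python
def smoothstep(edge0, edge1, x):
    """Hermite falloff (like VEX smooth()): 0 at edge0, 1 at edge1,
    clamped outside that range."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def feather(dist, inner=1.0, outer=3.0):
    """Full colour inside `inner`, fading smoothly to zero by `outer`."""
    return 1.0 - smoothstep(inner, outer, dist)
```

Because the falloff is evaluated per point from a distance, the softness is set by the inner/outer radii, not by mesh resolution.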
  10. Hi all, I recently found this toolset on Github allowing realtime input & recording of input from a Leap Motion controller. https://github.com/arqtiq/HouLEAP Unfortunately I'm having a little trouble getting it to run properly. In the readme it explains to 'simply copy the content of the **/houdini16.x** folder to your houdini home/hsite folder.' So my question is, where would be the correct place to put the Python scripts that the tool provides? (I'm not entirely sure what the 'hsite' folder is referring to?) Houdini sees the OTLs, however I get the attached error in what I assume is the Python scripting. Or perhaps I need to define LEAP in the .env file? Any tips much appreciated. EDIT: The error is reported from the example .hip contained in the Github repository linked in my post. Also, this was run in H 17.5.258.
  11. Multiple cameras, one cook?

    Hi Luke, thanks for the quick response - so if I understand you correctly, Mantra isn't thinking about rendering anything outside the frustum in any case? If so that's great (and what I assumed/hoped anyway!). I assume you mean saving geo out as .bgeo, caching sims etc.? Then pointing the IFD directly to these caches? I'll need to look more into this & packed geo, as I'm rendering on a cloud service... But that is another thread I suppose, ha. I always endeavour to cache things out as efficiently as I can, so I'll revisit this and see if there are any improvements I can make. It is a cube map of sorts, yes - however it's not for VR, so any spherical mapping/fish-eye lens type solutions will lead to distortion that is not wanted in this case. I'm rendering content for 3 walls & the floor of a box, all angles 90deg, with the POV at eye height - roughly centred within said box. Would there be a specific camera you would recommend? Thanks in advance for any additional pointers, I appreciate your time. I know these are fairly rudimentary questions.
  12. Hi all, Ok, so I have a scene/set of scenes where I need to render the same sequence from numerous cameras. Is there a way to render, say, 4 cameras at the same position, but facing different directions, while only calculating the lighting/reflections/shading etc. once? As far as I understand, .ifd files just include the instructions to render, not an actual cache of the scene itself. Time is a little tight, so any optimisation would be a benefit. (Cams are also various resolutions, not uniform, in case that has an impact.) Thanks!
  13. Calculating / matching orientation

    Hey all, I left this thread behind because the workflow got pretty intense! As an update - the solutions from @moneitor, @petz & @galagast all worked well - thank you so much! - but the issue I kept running into, which was eventually revealed, was that the tracking data was bad. So eventually I got it working with all 3 approaches once we re-tracked and got it done correctly. Thanks again everyone for your kind help - I learned a heck of a lot solving this in the end!
  14. Calculating / matching orientation

    Thanks @moneitor - that is exactly what I was trying to achieve, done in a way I would not even have thought of. I'll spend some time this evening/over the weekend going through it & trying to better understand the math that's going on in there, plus some VOP stuff that's pretty new to me. Thanks for the annotations as well, they're always very helpful!
  15. Calculating / matching orientation

    Essentially I am trying to access the results of whatever calculation(s) the 'extract transform' SOP does at object level... Does anyone have any idea how to do that?