grasshopper

Members
  • Content count

    179
  • Donations

    0.00 CAD 
  • Joined

  • Last visited

Community Reputation

9 Neutral

About grasshopper

  • Rank
    Initiate
  • Birthday 05/15/1968

Personal Information

  • Name
    John Hughes
  • Location
    Los Angeles
  1. Further thought: the easiest test would probably be just to isolate points in the whitecaps as your source, make sure they have velocity set, set the emit attribute to 1 on those points, and pipe that into the second input of the Whitewater Source as I described above to see what you get.
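    A minimal point wrangle sketch for that test (the velocity value here is just a placeholder -- keep the sim's own v if it came through from the cache):

        // Point Wrangle on the isolated whitecap points
        v@v = {0, 1, 0};   // placeholder velocity if none was cached
        f@emit = 1;        // the attribute the Whitewater Source reads for Extra Sources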
  2. When setting up your Whitewater Source SOP you can pipe the water (eg ocean) into the left input and any emitter you set up into the second input. I've only tried this one time, but I used the cached-out points from the water simulation rather than the surfaced data as the left input. For the second input you can use any geometry you want. Set an emit attribute on it, then in the Whitewater Source turn off all of the Emit from Curvature, Acceleration and Vorticity options. Instead use the Extra Sources (by enabling "Add Extra Sources"), which by default uses the emit attribute you already set up as the Emission Attribute. I used this method to generate whitewater from obstacles in the water (eg rocks) so I haven't tried it for what you want. However (guessing a little here), you should be able to use something like this for generating whitewater from the surface too, using traditional methods to isolate sources such as whitecaps. You'll basically need to grab the surface and make your own curvature, acceleration and vorticity calculations.
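    If you do go that route, the rough shape of the "make your own curvature" step could be as small as this point wrangle (the curvature attribute name, e.g. from a Measure SOP, and the threshold parameter are just assumptions for the sketch):

        // Point Wrangle run after your own curvature calculation
        // (e.g. a Measure SOP writing a "curvature" point attribute)
        float curv = f@curvature;
        f@emit = (curv > chf("curv_threshold")) ? 1 : 0;   // tune the threshold to taste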
  3. [SOLVED] Is there a way to animate Vellum constraint properties?

    Well, seeing as Noobini is challenging me, one answer is to set "Detach Point Chance" to "$F>70" on the Vellum Constraints node inside the Vellum Solver.
  4. Hi odForce! Haven't posted on here for years! I had to solve this exact problem on one of the Spider-Man movies where we needed a way for Spidey's webs to hit surfaces without getting twisted. We would project points representing the hit position of each thread in the web's terminating 'Eiffel tower' shape. You get a cloud of points scattered on the surface that isn't necessarily planar. The task is to fit a polygon to the points without getting twists. So I solved this by automatically calculating the best fit plane through the points and the centroid to give me an axis. Then I sorted the points by polar coordinates around this axis. If you then join the points you get an untwisted polygon. Pretty sure it worked every time.
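    If anyone wants to try something similar in SOPs, here is a rough sketch. I'm cheating on the best-fit plane part: a proper fit is a least-squares job (easy enough in a Python SOP with numpy), so here the axis is just assumed to be an averaged hit normal promoted to a detail attribute called "axis", and the bounding box centre stands in for the centroid. The wrangle only writes a polar angle per point; a Sort SOP on that attribute followed by an Add SOP does the joining.

        // Point Wrangle (run over points) on the projected hit points
        vector c = getbbox_center(0);      // stand-in for the true centroid

        // stand-in for the best-fit plane normal (placeholder detail attribute)
        vector axis = detail(0, "axis");
        axis = normalize(axis);

        // build two vectors spanning the plane perpendicular to the axis
        vector up = {0, 1, 0};
        if (abs(dot(axis, up)) > 0.99)     // axis nearly parallel to up
            up = {1, 0, 0};
        vector u = normalize(cross(axis, up));
        vector w = cross(axis, u);

        // polar angle of this point around the axis
        vector d = @P - c;
        f@angle = atan2(dot(d, w), dot(d, u));

        // downstream: Sort SOP by "angle", then an Add SOP (closed polygon)
        // joins the sorted points into one untwisted polygon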
  5. VFX Survey

    Asking for input from anyone, anywhere seems highly dubious to me. Costs of living vary greatly from place to place and pay has to reflect that to some extent. Remuneration varies dramatically depending on the experience needed for the position, time in the industry and a whole host of other reasons. I've worked in the UK, Canada and now the US, and pay structures have been very different in each country. Here in the US I get paid for every hour of overtime I do, whereas in the UK it was common to be pressured to work "for free". The British pound has just fallen almost 5% against the dollar in the last two weeks, meaning my UK-equivalent wages have gone up quite significantly (by ...err 5%!). Of course finding out as much as you can about what people earn is the only real way to negotiate from a position of strength, but you really need to find out what people doing a similar job to yours earn in the location where you are working.
  6. copy sop + pop and $LIFE.

    I guess you're looking for something like this: (1-$LIFE) * fit01(rand($ID*1.23),0.01,0.3) Another thing you can do is to add a Color POP to the Source POP to control the color of the particles as they age. Sometimes it's easier to work this way because you get feedback through using the colors. Use the Ramp tab to set colors according to $LIFE by adding $LIFE to the Lookup field. Then in your Point SOP you would reference $CR instead of $LIFE. This is a good way to manage particles growing then shrinking (or fading in and out or whatever) over their life span.
  7. The SSS Diaries

    Going out on a limb here regarding the flicker issue as I haven't played with any of this stuff, but has anyone tried using a "rest scatter" and attribute-transferring into it from the original scatter made on the deforming geometry? Basically, I mean getting a bounding volume that covers the entire 3D space of the deforming object throughout its animation and scattering a dense point cloud over it which stays constant. When you scatter points on the deforming object you attribute-transfer an attribute such as colour to the rest scatter. Then remove those points from the rest scatter that aren't close enough to the deforming scatter, based on the value of the transferred colour. You are left with a scatter that fills the volume of the deforming object but whose points stay fixed in 3D space, consistent from frame to frame in terms of position and density. You could get fancy and blur or fade in or out the areas where the object is growing or shrinking by comparing the scatter of the current frame to the ones around it.
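    In SOP terms the cull step after the Attribute Transfer could be as small as this point wrangle on the rest scatter (the threshold parameter and the use of the red channel are just placeholders for whatever attribute you transferred):

        // Point Wrangle on the rest scatter, after the Attribute Transfer
        // keep only points that picked up enough of the transferred colour
        if (@Cd.r < chf("threshold"))
            removepoint(0, @ptnum);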
  8. You can embed expressions within other expressions but you shouldn't use backticks. Try: chop("/ch/ch1/xMove"+stamp("../copy1",chanNum,1)+"/chan1") If that doesn't work post a hip and I'll take a look.
  9. MPC opens in LA - with Mark Tobin...

    Nice find. As far as I know MPC in London aren't exactly big on Houdini.
  10. different Birth Rates over time

    Well you could always use this: 1000 * fit($F,30,31,0.3,1) * fit($F,120,121,1,0) Since fit() clamps outside its range, that gives a birth rate of 300 up to frame 30, 1000 from frame 31 through 120, and 0 from frame 121 onwards.
  11. You can use $PT, but as it is a local variable it will evaluate in the local SOP, not in the SOP that is referenced in the expression. That may or may not be what you want.
  12. The varmap method isn't always available, for example when you are accessing the attribute from another network, so you can substitute the point, vertex, prim or detail expressions instead, e.g. something like point("/path/to/sop", $PT, "attrib_name", 0) with whatever node path and attribute name you're after.
  13. Alpha Test Of New Houdini Tutorials

    I like this format a lot actually. If this were a more in-depth tutorial that I was playing along with in Houdini then I would find the pauses helpful, so I could try stuff out without constantly losing my place and having to rewind. Replacing audio with text 'soundbites' is useful if you want to scrub through an old video to search for a snippet of information. It's much easier to find something that's written than spoken. The jump-to-section stuff is useful for this too. Even better would be an option to allow the user to save the text to make it even easier to search through. Thanks for sharing, Peter. John.
  14. Houdini 10 Wish List

    Hey Ed, if you really do want to investigate this a bit further I for one would be happy to share my thoughts off-line if you want to get in touch. Some points come to mind immediately though.... Your description of a possible implementation in POPs sounds reasonable but I think you would have difficulty ramping up to anything comparable to Massive without having the fuzzy logic implemented as its own AI context. That's because complex behavior requires very highly interconnected logic networks. Take your example of "hungry". You could have an attribute measure for hungriness as you suggest but it's more powerful to encapsulate it as a fuzzy network of logic states. That allows the 'measure' of hungriness to be different in different circumstances for different agents. So an agent might feel pretty hungry even if he's just eaten but happens to be right next to a food source. On the other hand, an agent who hasn't eaten for a long time wouldn't feel so hungry if there is a predator between it and the only food source! That sort of reasoning is expressed by logic interconnections and you wouldn't necessarily use a quantifiable measure for 'hungriness' at all. The Massive interface is horrible but it does have one or two interesting ideas. One of these addresses your issue about debugging the logic. Each node in the network comes with a little bar that indicates the fuzzy logic value being passed through for the currently selected agent. You quickly get into the habit of selecting various agents and watching the logic bars go up and down whilst playing the sim to see if the values are working as expected. It makes it easy to trace back where the behaviors are being triggered and to correct logic flaws. I'll stop hijacking this thread now! Like I said, shoot me a mail if you want to talk more. John.
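    To make the "hungry" example a bit more concrete, here is a hand-rolled sketch (not how Massive does it -- just min() as fuzzy AND and max() as fuzzy OR in a wrangle, with made-up attribute names) of how the same output can come from interconnected logic rather than a single measure:

        // Wrangle sketch: fuzzy logic with min() as AND, max() as OR
        // all inputs are memberships in 0..1 (made-up attribute names)
        float starving  = f@time_since_eaten;   // 0 = just ate, 1 = starving
        float food_near = f@food_proximity;     // 1 = standing on food
        float danger    = f@predator_between;   // 1 = a predator blocks the food

        // hungry if starving OR food is right there,
        // but suppressed when a predator is in the way
        f@hungry = min(max(starving, food_near), 1.0 - danger);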
  15. Houdini 10 Wish List

    Your argument could be used to block the development of almost any kind of new node because you 'could always roll your own with the HDK'!! No doubt you enjoy the benefits of many nodes that have been introduced in recent versions of Houdini that you could have implemented the hard way with vex or whatever. But you don't, because that would increase your development time and cost, would most likely not work as efficiently and would be much less easy to use. Any new built-in behaviour for Houdini potentially extends its market reach, especially when we are talking about functionality that is available in Massive at over $25,000 a license. Replicating the standard functionality that you get out-of-the-box in Massive using Houdini's current toolset together with python, vex and the HDK is no mean feat. From what I can see the actual fuzzy logic part is relatively straightforward but Massive comes bundled with lots of sensory modules that may prove trickier. For example, Massive has vision modules that can be used for avoidance behaviour (amongst other things). You could replicate that with a ray-marching system in vex but it would be pretty difficult to build and probably even harder to maintain. John.
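    For what it's worth, the bare bones of such a vision sensor in vex might start out something like the wrangle below (obstacle geometry assumed on the second input, parameter and attribute names are placeholders). The hard part Massive handles for you is everything after this: field of view, occlusion, weighting the hits and turning them into sensible steering.

        // POP Wrangle sketch: crude "vision" by fanning rays around the heading
        vector heading = normalize(v@v);
        float see_dist = chf("see_distance");
        int   nrays = 5;
        float fov = radians(90);

        float closest = see_dist;
        for (int i = 0; i < nrays; i++)
        {
            // spread the rays across the field of view, around the up axis
            float a = fit(i, 0, nrays - 1, -fov * 0.5, fov * 0.5);
            vector dir = qrotate(quaternion(a, {0, 1, 0}), heading);
            vector hitpos; float hu, hv;
            if (intersect(1, @P, dir * see_dist, hitpos, hu, hv) >= 0)
                closest = min(closest, distance(@P, hitpos));
        }
        // how "blocked" the view is; the steering logic would read this
        f@blocked = 1 - closest / see_dist;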