Popular Content

Showing most liked content on 11/18/2016 in all areas

  1. 9 points
    Short answer is no. I think you're missing a lot of computer history. Linux (1991) was influenced by Unix (1971). In fact, Linux (as the whole ecosystem, not just the kernel) is one of many variants descended from Unix. One other important Unix descendant was IRIX (1988), which ran on the graphics workstations made by Silicon Graphics (SGI) (1982). Houdini (1996) was the successor to PRISMS (1987). So all these particular commands that you're talking about are actually Unix commands, not Linux. PRISMS and Houdini used to run on IRIX on Silicon Graphics workstations, as did Maya (1998) and its predecessors.

    Today, most of the big 3D studios (like Pixar, Disney Animation, DreamWorks, Blue Sky, ILM, Framestore, etc.) all run Linux because they used to run IRIX. Windows (1993) as you know it today wasn't used much for 3D work until the computer games industry took off.

    CLI in general has been around since the dawn of operating systems. Interestingly, the "ls" Unix command was an abbreviation of the "list" Multics (1964) command, which itself was a shortened form of the CTSS (1961) "listf" command (according to A Brief History of the 'ls' Command).

    And to complete the OS history lecture, Windows can be seen as a descendant of VMS (1977), since it was largely designed by the same developers (who had been hired away from DEC by Microsoft). And VMS of course drew inspiration from Multics amongst other OSes.
  2. 2 points
    Like that? ground_extend.hipnc
  3. 2 points
    Here is the other half, after the break : https://www.youtube.com/watch?v=IRtrPaWoGFc
  4. 1 point
    The last two days I "wrangled" with a new solution to my render quest: Houdini -> Alembic -> Blender 2.78a -> Thea Render. Proof of concept: So while file sizes are a bit scary and Blender has been its usual unshaven pain in the ostrich's neck, this will at least enable me to get some animations done with what I have and love :-) Yay :-) Cheers, Tom
  5. 1 point
  6. 1 point
    @Martin / @rbowden -- you guys were right. As long as you keep the resolution down in the volume it works fine even with a larger sim. Thanks for the tip. This will work well for what I'm doing. Thanks again.
  7. 1 point
    For Redshift I always make my materials inside a Redshift_Network; you have yours outside a network. I have never built materials that way. It may work, but building them inside the VOP network certainly works. For glass, water, or see-through materials there is only Refraction/Transmission; set that weight to 1. Also review your objects: you have a material applied at the OBJ level and then also inside at the SOP level. Just use the inner SOP material for all objects and blank out the material field at the OBJ level. I have attached my revised version of your ROPNet and SHOPNet, without the geometry. So try dropping those into your scene and see if you get results similar to this image. ap_redshift_test.hiplc
  8. 1 point
    @f1480187 Thanks a lot once more for sharing your knowledge and your time, man; I really appreciate it. I love the result of this setup and will experiment with adapting it to other motion stuff. Btw, I just uploaded some shoe render practice: http://cargocollective.com/caskal/Inspired-by-Nature Cheers!
  9. 1 point
    Hey, sorry, I completely forgot to reply to this. Apparently the limitation is a MySQL limitation, or at least that's what the story was 7 years ago when I last looked ;). I can check again to see if anything has changed, or if there are better solutions. M
  10. 1 point
    The slowdown is being caused by the Points from Volume node. When you scale it up and keep the particle separation the same, the number of points inside your sphere object grows with the cube of the scale factor: doubling the size gives you roughly eight times the points.
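    The scaling above can be sanity-checked with rough arithmetic. A hedged sketch (hypothetical numbers, assuming points roughly fill the sphere's interior on a grid at the particle separation):

```python
from math import pi

def estimate_point_count(radius, particle_sep):
    """Approximate Points from Volume count: interior volume / cell volume."""
    volume = 4.0 / 3.0 * pi * radius ** 3
    return int(volume / particle_sep ** 3)

# Doubling the size at the same separation multiplies the count by ~8:
small = estimate_point_count(1.0, 0.05)
big = estimate_point_count(2.0, 0.05)
print(small, big, big / small)  # ratio is ~8
```

    Halving the particle separation has the same ~8x effect, which is why large sims blow up so quickly.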
  11. 1 point
    Here is an approach; there are some sticky notes inside for explanation. Take a look. vToPOPs.hipnc
  12. 1 point
    The contents of the VOPnet are quite random; it's merely adjusting the value. I start with only noise, and place nodes later, by observing the actual extrude result. @zscale and @opinput0_zscale are the same thing, but only if you didn't change @zscale in this wrangle. Otherwise, @opinput0_zscale will be the original @zscale value. The accumulate solver just remembers the maximum value from all previous frames. The blend wrangle makes it less strong, allowing areas to fade a bit if the noise goes away.
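    The accumulate step described above is just a running maximum. A minimal standalone Python sketch of the idea (not the actual wrangle):

```python
def accumulate_max(frames):
    """Running maximum over per-frame values: once raised, it never falls."""
    acc = float("-inf")
    history = []
    for value in frames:
        acc = max(acc, value)
        history.append(acc)
    return history

print(accumulate_max([0.2, 0.5, 0.3, 0.7, 0.1]))
# [0.2, 0.5, 0.5, 0.7, 0.7]
```

    The blend step then mixes this accumulated value back toward the current noise, which is what lets areas fade instead of staying at their peak forever.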
  13. 1 point
    Sorry for my late reply. The Masterclass, because of its nature, couldn't be streamed. The conference, however, will be; please check the link below.
  14. 1 point
    I find it much easier to do angular constraints with polar coordinates. Try replacing the code in your wrangle with this:

    vector toPolar(vector v) {
        float r = length(v);
        float th = acos(v.z / r);
        float phi = atan2(v.y, v.x);
        return set(r, th, phi);
    }

    vector toCartesian(vector v) {
        float x = v.x * sin(v.y) * cos(v.z);
        float y = v.x * sin(v.y) * sin(v.z);
        float z = v.x * cos(v.y);
        return set(x, y, z);
    }

    vector a = toPolar(v@padd);
    float ang = radians(90);       // constraint angle; could use different values for theta and phi
    a.y = ang * floor(a.y / ang);  // theta
    a.z = ang * floor(a.z / ang);  // phi
    a.x = 1;                       // comment out to keep velocity field magnitude
    v@padd = toCartesian(a);
    @P += v@padd * @TimeInc;

    This will constrain theta and phi to 90 degree angles, but it will work just as well for any angle.
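    For anyone wanting to check the math outside Houdini, here is the same snap logic as a standalone Python sketch (hypothetical helper names, plain spherical coordinates matching the wrangle above):

```python
from math import acos, atan2, cos, sin, floor, radians, sqrt

def to_polar(v):
    x, y, z = v
    r = sqrt(x * x + y * y + z * z)
    return (r, acos(z / r), atan2(y, x))  # (r, theta, phi)

def to_cartesian(p):
    r, th, phi = p
    return (r * sin(th) * cos(phi), r * sin(th) * sin(phi), r * cos(th))

def snap_direction(v, ang):
    """Quantize theta and phi to multiples of `ang`, normalize to length 1."""
    r, th, phi = to_polar(v)
    return to_cartesian((1.0, ang * floor(th / ang), ang * floor(phi / ang)))

# Here theta and phi both floor to 0, so the vector snaps onto +z:
print(snap_direction((0.9, 0.3, 0.2), radians(90)))  # (0.0, 0.0, 1.0)
```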
  15. 1 point
    A nice evolution of the concept. There is a little bit of additional setup to prepare a shape other than the torus. Initially when I used the font the line did not "cling" to the shape. But after centering it on the world origin and adding more faces with the Divide node, in bricker mode, the line eventually took hold. Because the resulting line is open we can measure the length and use that to drive the polywire radius, here the line is thinner at the start compared to the end. 1.0-$PT/($NPT-1)/100 And, of course, random sized Wire Radius. rand($PT)/50
  16. 1 point
    There was an error in the pop_too_close wrangle. It deleted both intersecting bubbles, not just the smaller one, drastically reducing the bubble count. Normally it should remove only degenerate bubbles almost enclosed by neighbours. It also seems that the whole loop can be replaced with a point wrangle. So, it cooks instantly now, retains topology, and scales better. The scattering and pscale setup really matters: you need to generate a good foam first, before doing intersections. The current setup should still be improved somehow. bubbles2.hipnc
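    A hedged sketch of the corrected cull logic described above, with hypothetical data (bubbles as ((x, y, z), radius) pairs, not the actual .hip setup): when a smaller bubble is almost enclosed by a larger neighbour, only the smaller one is dropped, never both.

```python
import math

def dist(a, b):
    """Distance between two center points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cull_enclosed(bubbles, overlap=0.9):
    """Drop each bubble that sits almost entirely inside a bigger one."""
    doomed = set()
    for i in range(len(bubbles)):
        for j in range(i + 1, len(bubbles)):
            (ca, ra), (cb, rb) = bubbles[i], bubbles[j]
            # "almost enclosed": center distance below the radius difference,
            # with `overlap` loosening the fully-enclosed test a little
            if dist(ca, cb) <= max(ra, rb) - overlap * min(ra, rb):
                doomed.add(i if ra < rb else j)  # mark only the smaller one
    return [b for k, b in enumerate(bubbles) if k not in doomed]

foam = [((0, 0, 0), 1.0), ((0.1, 0, 0), 0.3), ((5, 0, 0), 1.0)]
print(cull_enclosed(foam))  # the tiny middle bubble goes; both big ones stay
```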
  17. 1 point
    What I like best with this setup is I didn't even have to set it up myself; I just vaguely outlined the idea and some genius took it from there! Edit: Wow, after looking through the scene file, I must admit I thought the POP approach would work, but not that it would work this well. That's absolutely awesome! About surface control: nothing gives more control than converting your mesh to a volume, doing a volume lookup in VOPs, then just using the sampled value in a displace along normal (or setting it up manually) to shift the point position, using the gradient as the normal value. The cool thing about this technique is you can pipe additional values into the displacement, like having lower-numbered particles displace into the mesh while keeping higher numbers at the surface; you could basically fill an object with one of these setups that way, like Raphael Gadot did in this setup!
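    The volume-lookup trick above can be sketched in a few lines. A toy standalone version, assuming a true SDF (unit-length gradient), with an analytic sphere standing in for the mesh converted to a volume:

```python
import math

def sphere_sdf(p, radius=1.0):
    """Signed distance to a sphere at the origin: negative inside."""
    return math.sqrt(sum(c * c for c in p)) - radius

def gradient(sdf, p, eps=1e-4):
    """Central-difference gradient (the Volume Gradient VOP equivalent)."""
    g = []
    for i in range(3):
        hi, lo = list(p), list(p)
        hi[i] += eps
        lo[i] -= eps
        g.append((sdf(hi) - sdf(lo)) / (2 * eps))
    return g

def project_to_surface(p, sdf=sphere_sdf):
    """Shift a point along the gradient by the sampled distance value."""
    d = sdf(p)
    return [c - d * gi for c, gi in zip(p, gradient(sdf, p))]

q = project_to_surface([0.0, 0.0, 2.0])
print(q)  # lands on the sphere surface, where the SDF is ~0
```

    Scaling the sampled distance before the shift, or adding a per-particle offset to it, gives the kind of extra control the post describes.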
  18. 1 point
    Awesome stuff, guys. In order to constrain it to a surface, I used inverted surface normals as a velocity field (advect by volume POP) and the same geometry as a static collision object. It's not perfect, as steep or sharp parts of the geometry (ears) make the particles go wild. Still a lot of fun to play with.
  19. 1 point
    Hurray! That POP version is ace. I played with it for a while; it's a lot more stable than my solver version, and there's so much flexibility with all the POP forces. I also failed to make it stick to a surface, though. I feel there should be a POP node that just does this; I can't get POP Attract to work well. In my solver version, I was doing the resample wrong, and I switched to what you have in yours, resampling based on the size of the line. I got a bunch of stability back from just that change. Here it is again, with the resample fix and a bit of smoothing on the line. And some quick renders. 16_03_30_grow_line_SOLVER_01.hip
  20. 1 point
    OK, I'll bite here. I've been wanting to understand these effects for a while, so maybe this will spark some experimentation. Here's my initial idea for making it work. I'll spend a bit more time documenting the process tomorrow, but here are the basic steps. It's all done in a solver node:

    1 - resample a line, adding a point each frame (alterable with an attribute)
    2 - avoid_force - use a point cloud to sample all the nearby points and create a vector that pushes them away from each other
    3 - edge_force - measure each line segment and create a force which attempts to extend the line to a maximum distance. (This was difficult, as a totally straight line never gives you any interesting motion. My crap solution was to turn the direction vectors into quaternions and slerp between them.)
    4 - add up the edge force and the avoid force and move the points a little bit along that vector
    5 - use a ray SOP to make the points stick to a surface. As long as the movement is not too great, this isn't too bad.

    I've run out of time to tweak this tonight; hopefully I'll get back to it soon. This version barely works! I'd love to see other people's ideas for how to create this. sopsolver_growth.hip
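    As a companion to the outline above, here is a toy standalone Python sketch of one solver step (2D points and hypothetical constants; it covers steps 2-4 only and skips the resample and ray steps, and uses brute-force neighbor search instead of a point cloud):

```python
import math

REST_LEN = 0.1       # edge force tries to stretch segments to this length
AVOID_RADIUS = 0.15  # points closer than this push each other apart
STEP = 0.02          # how far points move along the summed forces

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def scale(a, s): return (a[0] * s, a[1] * s)

def norm(a):
    l = math.hypot(a[0], a[1])
    return (a[0] / l, a[1] / l) if l > 1e-9 else (0.0, 0.0)

def solver_step(pts):
    forces = [(0.0, 0.0)] * len(pts)
    # step 2: avoid force - nearby points push each other apart
    for i, p in enumerate(pts):
        for j, q in enumerate(pts):
            if i == j:
                continue
            d = sub(p, q)
            dist = math.hypot(d[0], d[1])
            if dist < AVOID_RADIUS:
                forces[i] = add(forces[i], scale(norm(d), AVOID_RADIUS - dist))
    # step 3: edge force - push each segment's endpoints toward the rest length
    for i in range(len(pts) - 1):
        d = sub(pts[i + 1], pts[i])
        push = scale(norm(d), REST_LEN - math.hypot(d[0], d[1]))
        forces[i + 1] = add(forces[i + 1], push)
        forces[i] = add(forces[i], scale(push, -1.0))
    # step 4: move every point a little along its summed force
    return [add(p, scale(f, STEP)) for p, f in zip(pts, forces)]

line = [(i * 0.05, 0.0) for i in range(10)]  # segments shorter than REST_LEN
line = solver_step(line)
print(line[0], line[-1])  # endpoints get pushed outward as segments extend
```

    Iterating solver_step while occasionally inserting midpoints on over-long segments gives the basic differential-growth behavior.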
  21. 1 point
    Heyy! Did you check out Andrew Schneider's presentation? http://www.guerrilla-games.com/publications.html Realtime stuff, but applicable.
  22. 1 point
    stand by for some news in the very near future... ok...back to eetu's crazy awesome work