
Popular Content

Showing most liked content since 12/18/2019 in all areas

  1. 8 points
    Hi Zunder, I think the second video you have posted shows almost the entire process: he is basically deforming a spiraling grid. flower.hipnc
  2. 7 points
    Let's suppose we want to imitate graffiti art pieces with minimal effort. Warp the lines into curvy curves. Map the direction to the closest curves with greyscale values. Colorize each segment. And e x p l o d e by color difference. curve_art.hipnc
  3. 7 points
    Hi, thought I'd share this in this section too: I wrote an article for the German “Digital Production” magazine about my free LYNX VFX toolset. For the article I made a couple of renderings using the LYNX fabric tools. Luckily it even made the cover. Here are my personal favorites; the rest of the images can be found on ArtStation. You can also find the complete scene on GitHub under the demo files. So now anyone can design an ugly Christmas sweater ;) Looking forward to seeing what you guys come up with, enjoy!
    Links:
    LYNX VFX Toolset odforce thread: https://forums.odforce.net/topic/42741-lynx-free-opensource-vfx-pipeline-tools/
    LYNX VFX Toolset (sweater scene file included): https://github.com/LucaScheller/VFX-LYNX
    ArtStation (high-res renderings): https://www.artstation.com/artwork/OyeY6g
    Digital Production magazine: https://www.digitalproduction.com/ausgabe/digital-production-01-2020/
    Alternatively, view the article in my latest blog post: https://www.lucascheller.de/vfx/2019/12/15/ynybp7wpiqtshoy/
  4. 5 points
    I didn't see much implementation of machine learning in Houdini, so I wanted to give it a shot. Still just starting down this rabbit hole, but I figured I'd post the progress. Maybe someone else out there is working on this too. First of all, I know most of this is super inefficient and there are faster ways to achieve the results, but that's not the point. The goal is to get as many machine learning basics functioning in Houdini as possible without Python libraries just glossing over the math. I want to create visual explanations of how this stuff works. It helps me ensure I understand what's going on, and maybe it will help someone else who learns visually.
    So... from the very bottom up, the first thing to understand is gradient descent, because that's the basic underlying function of a neural network. Can we create that in SOPs without Python? Sure we can, and it's crazy slow. On the left is plain gradient descent. Once you start to iterate over more than 30 data points this starts to chug. So on the right is a stochastic gradient descent hybrid which, using small random batches, fits the line using over 500 data points. It's a little jittery because my step size is too big, but hey, it works, so... small victories.
    Okay, so gradient descent works. Awesome, let's use it for some actual machine learning stuff, right? The hello world of machine learning is image recognition of handwritten digits using the MNIST dataset. MNIST is a collection of 60,000 28x28 pixel images of handwritten digits. Each one has a label of what it's supposed to be, so we can use it to train a network. The data is stored as binary files, so I had to use a bit of Python to interpret them, but here it is. Now that I can access the data, next is actually getting this thing to a trainable state. Still figuring this stuff out as I go, so I'll probably post updates over the holiday weekend. In the meantime, anyone else out there playing with this stuff?
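For reference, the gradient descent idea described above can be sketched in a few lines of plain Python: fit a line y = m*x + b by repeatedly stepping both parameters down the slope of the mean squared error. This is a minimal sketch of the general technique, not the poster's SOP network; names, data, and constants are illustrative.

```python
def gradient_descent(xs, ys, lr=0.01, steps=2000):
    """Fit y = m*x + b by gradient descent on the mean squared error."""
    m, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of MSE with respect to m and b.
        grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (m * x + b - y) for x, y in zip(xs, ys)) / n
        m -= lr * grad_m
        b -= lr * grad_b
    return m, b

# Noiseless data on the line y = 3x + 1 should be recovered closely.
xs = [x * 0.1 for x in range(30)]
ys = [3 * x + 1 for x in xs]
m, b = gradient_descent(xs, ys)
```

The stochastic variant mentioned in the post would simply compute `grad_m`/`grad_b` over a small random subset of `(xs, ys)` each step instead of the full dataset.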
  5. 5 points
    Hi, here is another approach inspired by the method in the first video. It looks like in this tool the petals are transformed onto each guide curve using something like a path deformer. The guide curves themselves can be created procedurally using incremental rotations. Using a foreach allows you to set control parameters using ramps etc. rose.hipnc roseA.hipnc
  6. 4 points
    I've tested RenderMan with spline rendering. It's much faster compared with baked geometry; the spline primitives are very good in RenderMan. I stopped the test with Mantra after 10 minutes, it is super slow even with optimized render settings. The RenderMan images rendered in 3:20 on a 6-core Xeon 2.7 GHz with the pxr pathtracer, at 4K resolution.
    geometry:
    spline primitives:
    Rendering splines directly saves a lot of disk space and meshing time, and interactive rendering was super responsive. Octane 2019.2 rendering is very fast; I used the geo to render it. 2 minutes, and as expected it's a lot faster than the CPU renderings. The spline rendering in Octane doesn't feature custom widths yet. Octane gives physically correct shading out of the box. In RenderMan the shading is extremely flexible with NPR renderings, which is hard to get in Octane. The IPR responsiveness was a little slow with shader tweaks.
  7. 4 points
    RenderMan 23 CPU vs Arnold 6 GPU with a 2 minute time limit. RenderMan 23 CPU: Arnold 6 GPU: The reflection differences come from normals; Arnold calculates its own normals, while RenderMan took the normals from the geometry. I've tried with Karma but I had issues and crashes with it.
  8. 4 points
    Made a cute little render with this technique toy_train_cam03_v001.mp4
  9. 4 points
    Hi guys, this is a case of waterfall effect breakdown. Hope you like it ~ https://vimeo.com/vfxgrace
  10. 3 points
    Hi all, As some of you may know I have recently started a youtube channel where I am sharing some techniques, setups and tips. My goal is to first cover some of the fundamental tools/setups and then build more elaborate setups. I wanted to share this with the odforce community too as this community will always have a special place in my heart :). Thank you for watching & have fun learning! The channel: https://www.youtube.com/channel/UCZMPkkgnAFghvffxaTh6CsA My first video:
  11. 3 points
    Hi Jiri, to get started I just wrote a script that turns a bunch of lines with three points into circles: it's basically calculating the intersection point and radius of perpendicular vectors starting from the lines' midpoints. To be run in a primitive wrangle:

    // 3D INTERSECTION
    // https://stackoverflow.com/questions/10551555/need-an-algorithm-for-3d-vectors-intersection
    function vector intersection(vector r1, r2, e1, e2){
        float u = dot(e1, e2);
        float t1 = dot(r2 - r1, e1);
        float t2 = dot(r2 - r1, e2);
        float d1 = (t1 - u * t2) / (1 - u * u);
        float d2 = (t2 - u * t1) / (u * u - 1);
        vector p1 = r1 + e1 * d1;
        vector p2 = r2 + e2 * d2;
        vector pos_center = (p1 + p2) * 0.5;
        return pos_center;
    }

    // INPUT POSITIONS
    int pt_0 = primpoint(0, i@primnum, 0);
    int pt_1 = primpoint(0, i@primnum, 1);
    int pt_2 = primpoint(0, i@primnum, 2);

    vector pos_A = point(0, 'P', pt_0);
    vector pos_B = point(0, 'P', pt_1);
    vector pos_C = point(0, 'P', pt_2);

    vector mid_AB = (pos_A + pos_B) * 0.5;
    vector mid_BC = (pos_B + pos_C) * 0.5;

    // DIRECTIONS
    vector dir_BA = normalize(pos_B - pos_A);
    vector dir_BC = normalize(pos_B - pos_C);
    vector dir_rect = normalize(cross(dir_BA, dir_BC));
    vector dir_BA_rect = normalize(cross(dir_BA, dir_rect));
    vector dir_BC_rect = normalize(cross(dir_BC, dir_rect));

    // ADD CIRCLE
    vector pos_center = intersection(mid_AB, mid_BC, dir_BA_rect, dir_BC_rect);
    float radius = distance(pos_center, pos_A);

    int pt_add = addpoint(0, pos_center);
    int circle_add = addprim(0, 'circle', pt_add);

    matrix3 xform_circle = dihedral({0,0,1}, dir_rect);
    scale(xform_circle, radius);
    setprimintrinsic(0, 'transform', circle_add, xform_circle);

    // REMOVE INPUT GEOMETRY
    removeprim(0, i@primnum, 1);

    circles_from_points.hipnc
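The same midpoint/perpendicular-bisector construction can be sanity-checked outside Houdini. Here is the wrangle's `intersection()` math ported to plain Python (the small vector helpers are mine, not VEX):

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def mul(a, s): return [x * s for x in a]
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
def normalize(a):
    l = math.sqrt(dot(a, a))
    return [x / l for x in a]

def intersection(r1, r2, e1, e2):
    """Closest point between lines r1 + d1*e1 and r2 + d2*e2 (unit directions)."""
    u = dot(e1, e2)
    t1 = dot(sub(r2, r1), e1)
    t2 = dot(sub(r2, r1), e2)
    d1 = (t1 - u * t2) / (1 - u * u)
    d2 = (t2 - u * t1) / (u * u - 1)
    p1 = add(r1, mul(e1, d1))
    p2 = add(r2, mul(e2, d2))
    return mul(add(p1, p2), 0.5)

def circumcenter(a, b, c):
    """Circle through three points: intersect the two perpendicular bisectors."""
    mid_ab = mul(add(a, b), 0.5)
    mid_bc = mul(add(b, c), 0.5)
    dir_ba = normalize(sub(b, a))
    dir_bc = normalize(sub(b, c))
    n = normalize(cross(dir_ba, dir_bc))
    return intersection(mid_ab, mid_bc, cross(dir_ba, n), cross(dir_bc, n))

# Three points on the unit circle in the XY plane: the center is the origin.
center = circumcenter([1.0, 0, 0], [0, 1.0, 0], [-1.0, 0, 0])
```

Since `dir_ba` and `n` are perpendicular unit vectors, their cross product is already unit length, which is why the bisector directions need no extra normalization.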
  12. 3 points
    Try opening Wacom Tablet Properties and under Grip Pen / Mapping uncheck "Use Windows Ink". That should fix it. If only there was a check box somewhere you could uncheck "Use Windows"
  13. 3 points
    See attached for an example .hip of a simple method to preserve UV seams in a flip sim: UV_to_FLIP.hipnc
  14. 3 points
    Just released some videos on the SDF (signed distance field).
  15. 3 points
    here's my go... it does not rely on existing (discrete) segments to make cuts, does not merge and therefore does no interpolation; the cuts are exactly where they are. It will work even with extremely thin cuts. Alright, alright, exact is a dirty word... who knows what Boolean is doing behind the scenes, but I'll place blind trust in Boolean to do its job. vu_RandomPieCuts.hiplc
  16. 3 points
    I documented a process like this on a short animation I made a couple years ago: When I did this I was stuck with FEM because Vellum didn't yet exist, but you could do this waaaaay faster using Vellum and some Vellum-friendly force attributes... you'd just have to translate a little bit from the FEM solver attributes I was using to generate force. The short answer is that I just randomly applied forces to areas of the leaves/petals until the rhythm of the impacts looked believable, and then I isolated those impacted positions in the simulation and used them to generate raindrops after the fact. The raindrop particles were POPs that I slid down the petals using the "gradient descent" algorithm (described in the following blog post), and then randomly released after a time. It was easier to do it this way and get the timing right than to actually try to collide the leaves with falling raindrop particles. This is a long post, scroll about halfway down and you'll start seeing how the impacts were done: https://www.toadstorm.com/blog/?p=557 There's a HIP file you can download linked at the beginning of the first post in the series: https://www.toadstorm.com/blog/?p=534
  17. 3 points
    I have recorded a short tutorial explaining this method a little:
  18. 3 points
    When you have time, share some snowflakes .. why not .. have fun pufko.rar
  19. 2 points
    Modeler 1.0 for Houdini released! Free for all the DM 2.* users. $70 for the DM 1.* users. https://gum.co/xKBKM
    What's new:
    1. DM is now renamed to Modeler 1.0
    2. New feature: the DM menu (Z hotkey) has been replaced with a new alignment menu where you can use tools for fast and accurate transformation. The menu includes a whole set of tools for working with a compass, geometry centring, quick flattening with gestures and many other transformation tools. Use the hotkeys for the tools of the old DM menu.
    3. New feature: Deform Menu (N hotkey) with lots of interactive deformation tools, including a new Lattice tool
    4. New feature: the MODELER_PICK_STYLE environment variable allows you to override Modeler's startup selection pick style. Add it to the houdini.env file with the value "Box", "Lasso", "Brush" or "Laser", then restart Houdini.
    5. New feature: the hard and soft boolean tools are now combined in a new menu called Boolean (J hotkey)
    6. New feature: a Fix Curves tool helps get rid of broken lines in open polygons. This helps when beveling corners of open polygons.
    7. New feature: a Select Curves tool helps to select open polygons (curves) in the model
    8. Improvement: some tools can now create curves and process them. For example, the Extrude tool can produce lines from selected points. The Collapse tool can flatten open polygons (curves). The Connect tool can be used to cut a segment between two selected points or connect two open faces. The Push tool now properly moves points in open faces.
    9. Improvement: the RMB menu of the Push tool has a new item, Toggle Connectivity, which allows you to move points while capturing the points of other closed pieces
    10. Improvement: the Push tool now works slightly faster
    11. Improvement: the Push tool can now slide points with Ctrl+MMB
    12. Improvement: the mouse and keyboard shortcuts of the Push tool have been completely redone
    13. Improvement: if nothing is selected, the Hose tool searches for all the curves in the current geometry
    14. Improvement: a Group parm was added to the Hose tool. It can be used in conjunction with a result of the Duplicate tool
    15. Improvement: Hose now creates a straight-edged tube if Resample Curve is set to zero
    16. Improvement: Geometry Library was renamed to KitBash and works only as the Python panel
    17. Improvement: the KitBash replace feature no longer updates the item icon
    18. Improvement: Tools Panel now has a new category, KitBash, with tools for working with the library items. Now you can create, save, overwrite and update icons faster, without actually working in the KitBash panel
    19. Improvement: volatile edge sliding no longer requires explicit movement of the mouse pointer to the edges
    20. Improvement: volatile edge sliding can now be used to slide points and faces
    21. Improvement: Fix Overlaps can now use face groups
    22. Improvement: Duplicate applied to edges now creates a curve in the current geometry object
    23. Improvement: the Resymmetry tool now works slightly better. The Tolerance parameter is no longer saved between nodes. This allows you to not change the position of the seam points.
    24. Improvement: mouse wheel manipulation in various tools has been improved
    25. Improvement: a new simple box type has been added to the QPrimitive HDA
    26. Improvement: Tools Panel now has a more logical structure for faster access to popular tools
    27. Improvement: the Modeler shelf was fully revisited
    28. Improvement: the Walk History Up and Walk History Down tools (Up and Down hotkeys) now work more interactively when traveling through nodes with more than one input or output
    29. Improvement: the Select By Shells tool was replaced with a new Convert To Shells tool (Ctrl+1)
    30. Improvement: double-clicking with LMB in the viewport is completely revisited. Now you can jump to the objects level by double-clicking LMB in empty space. Clicking on a geometry allows you to quickly switch between objects. If you are in a certain state, double-clicking activates the selection mode. All these improvements speed up the modeling process.
    31. Improvement: the deformation tools (Size, Ramp, View) now have a fixed blend feature. The transition between the deformed points and the undeformed part looks more correct.
    32. Fix: Hose now orients ring copies correctly
    33. Fix: the Slice, Mirror and Cut tools now set the correct geometry center on tool activation
    34. Fix: the JumpUp and JumpDown tools did not work when Compass was active
    35. Fix: QLight now works properly if you run it from an orthographic viewport
    36. Fix: sometimes camera movement with Alt did not work after a mouse click
    37. Lots of tools have changed hotkeys. Look at Tools Panel for more details.
    38. Python code has been revisited
    39. Documentation has become more detailed
    40. Overall speed improvement
    41. Other improvements
    Works only in Houdini 18. Use build >= 18.0.346
  20. 2 points
    Here's the beta. Give it a try (it's free). https://gum.co/SOpPT
  21. 2 points
    I've started testing Houdini 18 and Arnold 6. The first test was simple spline rendering: 250,000 splines instanced 25 times, loading a 140 MB Alembic file. Rendered on a 6-core Xeon CPU and an Nvidia Quadro RTX 5000 (Windows 10 Pro). The startup for Arnold GPU is slow; it renders faster, or so it seems, but clearing up the final image takes forever, or it just dropped/crashed, hard to tell on the GPU. The CPU is quite fast, but much slower than the GPU, if it ever would finish (adaptive sampling was on). As soon as Arnold finishes rendering the scene, it stops and does not refresh any more on parameter changes. So far I am not impressed with Arnold GPU rendering. Here is the same scene with Arnold CPU and only direct lighting (on my MacBook). Some tests with Arnold GPU: it performed much better with just direct lighting.
  22. 2 points
    I want to share a couple of my favorite splash screens with you. Just add HOUDINI_SPLASH_FILE = /path/to/file.jpg to your env file.
  23. 2 points
    Hi, there are probably many ways to do this. Here is my approach:
    - making a single prim curve by connecting the curves (reversing the u-value manually, if necessary)
    - defining a u-value (start) and an offset-value (width)
    - additionally, a speed modification can be used (to make the carves reach the ends at the same time)
    - creating 2 curves with u + speed1*width and u - speed2*width
    Carve curve on controlled PT_.hipnc
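The last two steps amount to simple interval arithmetic around the control u-value. A hypothetical Python sketch (function and parameter names are mine, clamped to the curve's 0-1 range):

```python
def carve_range(u, width, speed1=1.0, speed2=1.0):
    """Return the (start, end) u-interval carved around u.

    The two ends can move at different speeds so that they reach
    the curve ends (u=0 and u=1) at the same time.
    """
    start = max(0.0, u - speed2 * width)
    end = min(1.0, u + speed1 * width)
    return start, end

# A symmetric carve around the middle of the curve:
interval = carve_range(0.5, 0.1)
```

In Houdini the resulting `(start, end)` pair would drive the Carve SOP's first/second U parameters.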
  24. 2 points
    It does seem like the POP solution is causing the bad deformations. I disabled the POP section. I sorted the agents along the Z-axis. You may be able to automate triggering if they are in a linear order. Instead I took the hard-coded approach, which offers art direction. Place the Crowd Trigger node into Custom VEXpression mode. Use if statements to turn on the trigger, based upon individual point numbers.

    // setting i@trigger to 1 will enable
    // setting i@trigger to 0 will disable
    int result = 0;

    // Stagger transition to ragdoll.
    if((@Frame == 13) && (@ptnum==0)){result=1;}
    if((@Frame == 15) && (@ptnum==1)){result=1;}
    if((@Frame == 17) && (@ptnum==2)){result=1;}
    if((@Frame == 28) && (@ptnum==3)){result=1;}
    if((@Frame == 29) && (@ptnum==4)){result=1;}
    if((@Frame == 31) && (@ptnum==5)){result=1;}
    if((@Frame == 37) && (@ptnum==6)){result=1;}
    if((@Frame == 39) && (@ptnum==7)){result=1;}
    if((@Frame == 42) && (@ptnum==8)){result=1;}
    if((@Frame == 46) && (@ptnum==9)){result=1;}

    i@trigger = result;

    For the ragdoll state, I dropped down a PopSteerSeek to make the agents move in the downstream direction. You can adjust the force on this node to match the agent speed to the fluid motion. ap_crowdFLIPtest_v1.hiplc
  25. 2 points
    A test with the previs scene, but with Blender's realtime Eevee render. 2 million polygons, 10-20 fps on an RTX 5000 (live viewport below). The offline rendering took 3 seconds; not sure what causes that, probably loading time. Houdini OpenGL rendering: The PRMan 2-minute rendering for comparison: Realtime engines are on the rise! I will keep an eye on this!
  26. 2 points
    The way Houdini works is that it evaluates nodes from top to bottom of your node graph (well, in SOPs; not talking about DOPs or PDG...). Therefore, each frame, your graph is re-evaluated independently of the previous frame... until you use a SOP Solver node. The SOP Solver node allows you (when you dive inside it) to access (i) the incoming stream of nodes, evaluated at this particular frame, and (ii) the same stream of nodes, evaluated at the previous frame. Therefore, you can perform your code on the incoming stream, make your tests and create geometry, and then merge it with the geometry of the previous frame. It allows you to keep changes you made to the geometry on previous frames (because each frame you can merge what you are doing with the results of previous frames). Here are some docs: https://www.sidefx.com/docs/houdini/nodes/sop/solver.html
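Outside Houdini, the difference can be sketched in a few lines of Python (purely illustrative, not the Houdini API): a stateless graph recomputes from scratch each frame, while a solver-style evaluation feeds the previous frame's result back in:

```python
def stateless_eval(frame):
    """Recomputed from scratch every frame: history is lost."""
    return [frame]  # e.g. geometry created this frame only

def solver_eval(frame, prev_result):
    """Solver-style: merge this frame's new geometry with the previous frame's."""
    return prev_result + [frame]

# Simulate cooking frames 1..4.
state = []
for frame in range(1, 5):
    stateless = stateless_eval(frame)   # always just this frame's output
    state = solver_eval(frame, state)   # accumulates across frames
```

The `state` list plays the role of the SOP Solver's previous-frame geometry stream.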
  27. 2 points
    3D flame. Rendered with RenderMan 23: Rendered with Octane:
  28. 2 points
    If you want your ray to continue through all backfaces and hit something behind, you'd need to do something along the lines suggested in previous posts:

    vector t = chv("t");
    vector dir = normalize(chv("dir"));
    float maxlength = chf("maxlength");

    // get all intersections
    int prims[];
    vector hitPs[], primuvs[];
    intersect_all(1, t, dir*maxlength, hitPs, prims, primuvs, .001, .001);

    // filter out backface intersections and stop at first frontface
    vector outP = t + dir*maxlength;
    foreach(int i; vector hitP; hitPs){
        int prim = prims[i];
        vector primuv = primuvs[i];
        vector hitN = primuv(1, "N", prim, primuv);
        float dot = dot(dir, hitN);
        if (dot < 0){
            outP = hitP;
            break;
        }
    }

    // create line
    int pt0 = addpoint(0, t);
    int pt1 = addpoint(0, outP);
    addprim(0, "polyline", pt0, pt1);

    ts_intersect_ignore_backfaces.hip
  29. 2 points
    Keep in mind that I have a biased opinion because I made Vex Foundations I & II, but here's why I'd recommend my course alongside CG wiki. Matt Estela has done an incredible amount of work. CG Wiki is very comprehensive, accurate, and full of many of the same ideas I teach in Vex Foundations I & II. Because of that, every Houdini TD ought to check it out, because it's essentially $200-$300 worth of highly-skilled work provided for free. But - here's the catch - CG wiki is a collection of journal entries rather than a fluidly designed course. You don't get an awesome looking project, course files, feedback, and help if you get stuck. And that can be important if you're trying to learn something as difficult as coding in Vex. It's not just about what the content is - it's about how the content is taught. If you're someone with a computer science background, then Vex Foundations I and II is probably not for you. It's probably quicker and easier to just go through Matt's wiki, and then you can take what you already know about C-style languages from there. If you're a 3D artist though, Vex Foundations I and II is the most approachable introduction to vex that you'll find anywhere online. And that might be the difference between getting frustrated and giving up on it vs. getting past that initial learning curve. So my advice is to do both. Check out Matt's wiki after you go through Vex I and II. It'll be way easier to read through it, and by the end of that, you'll have been exposed to two different teachers teaching you concepts which aren't easy to get at first. Good luck! - Tyler
  30. 2 points
    Hi, I've found an older file. Here is a modification of it. three_point_arc.hipnc
  31. 2 points
    In my experience you get the best results if you handle it like real smoke: control the temperature. At the source it's quite hot, so buoyancy pushes the smoke up fast, and when it cools down the smoke slows down. If the temperature drops too fast in some areas, the smoke goes down like in your reference image.
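The hot-rises-then-slows behavior described here can be sketched as a toy 1D update loop in Python (all names and constants are illustrative, not a Houdini API; a real pyro solver couples temperature to buoyancy in a similar spirit):

```python
def step(y, vy, temp, ambient=0.0, buoyancy=2.0, cooling=0.1, dt=0.1):
    """One toy update: buoyant force scales with temperature above ambient,
    and the smoke parcel cools toward ambient each step."""
    vy += buoyancy * (temp - ambient) * dt   # hot parcel accelerates upward
    y += vy * dt
    temp += (ambient - temp) * cooling       # exponential cooling
    return y, vy, temp

# A hot parcel rises quickly at first; as it cools, the upward push fades.
y, vy, temp = 0.0, 0.0, 5.0
for _ in range(100):
    y, vy, temp = step(y, vy, temp)
```

Tuning `cooling` against `buoyancy` is exactly the balance the post describes: cool too fast and the upward force dies off early.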
  32. 2 points
    Since Polypath is just an HDA you can easily modify it to allow specifying such a group. If you look inside, it's just an addition of the green groupcombine node. polypathgroup_mod.hipnc
  33. 2 points
    Debugging
    - Fixed scaling for anything outside the unit circle.
    - Fixed curve framing with extreme angles by using dihedral to minimize rotation (thanks to Entagma for the theoretical insight on this one)
    Features
    - Added point sampling mode switch (cone cap or cone volume)
    - Added twist control
    - Added per-connected-mesh UVs using Toraku's "Get correct uvs with a sweep sop" fix - http://www.tokeru.com/cgwiki/index.php?title=Houdini#Get_correct_uvs_with_a_sweep_sop
    Some practical results with it here (and one of my first ever tutorials, haha). Give it a try and if you have suggestions, feel free. Cheers! grass_n_stuff_gen_18_4b.hiplc
  34. 2 points
    Maybe you should take a look at VEX. VEX and VOPs are really two flavors of the same thing; it's all about code. If you are comfortable with some language, then use VEX. If not, that's probably why you're not comfortable with VOPs. A VOP network can be built quickly, but if you have not created it yourself, it can be very difficult to "decrypt". VEX is much simpler (IMO), especially when it comes to a succession of arithmetic operations or conversions. A small illustration in the picture (the Pythagorean theorem). If you become comfortable with VEX, then you will be comfortable with VOPs, and you will choose to use them when it is advantageous. One of the things VOPs are best used for is when you want to play with noises. But even then, I usually just create a VOP SOP with the noise, export the result and process the data behind it in a VEX SOP. Well, that's just me. But as you understood, I'm not a fan of VOPs either.
  35. 2 points
    Try a Fuse Polyfill combo to seal up any holes in the original mesh.
  36. 2 points
    You're in luck! https://qiita.com/jhorikawa_err/items/24785f6255ccf19296b3
  37. 2 points
    There you go: aliased object IDs in COPs.

    // CAMERA
    float aspect_cam = YRES / float(XRES);
    vector pos_canvas = set(X - 0.5, (Y - 0.5) * aspect_cam, 0.0) * 0.036;

    matrix xform_cam = optransform(cam);
    vector pos_cam = cracktransform(0,0,0,0,0, xform_cam);
    vector pos_sensor = pos_canvas * xform_cam;
    vector pos_focal = set(0.0, 0.0, focal) * xform_cam;
    vector dir_sensor = normalize(pos_sensor - pos_focal);
    vector ray_sensor = dir_sensor * vector(range_cam);

    // TEXTURE
    vector pos_hit;
    vector st_hit;
    int prim_hit = intersect(geo, pos_sensor, ray_sensor, pos_hit, st_hit);
    int id = prim(geo, 'id', prim_hit);

    // OUTPUT
    vector color = rand(id);
    assign(R, G, B, color);

    COP_render_ID.hipnc
  38. 2 points
    I have Mega Exampel ... for education. #define TAU 6.283185307179586 #define PI 3.141592653589793 addpointattrib(geoself(), "ring_id", -1); addpointattrib(geoself(), "seg_id", -10); void segment(int res_lon; float start_theta; float end_theta; int res_lat; float start_phi; float end_phi; float r_min; float r_max; int ring_id; int seg_id) { string poly_type = chs("poly_type"); float theta_step = (end_theta - start_theta) / (res_lon - 1); float phi_step = (end_phi - start_phi) / (res_lat - 1); int pt; // Keep track of last created point. // Inner surface. for (int i = 0; i < res_lat ; i++) { for (int j = 0; j < res_lon; j++) { // Calculate longitude arc point position. float theta = start_theta + theta_step * j; float phi = start_phi + phi_step * i; matrix r = ident(); vector pos = set(cos(theta), 0, sin(theta)) * r_min; vector axis = cross(normalize(pos), {0, 1, 0}); rotate(r, phi, axis); pt = addpoint(geoself(), pos * r); setpointattrib(geoself(), "ring_id", pt, ring_id, "set"); setpointattrib(geoself(), "seg_id", pt, seg_id, "set"); if (i > 0 && j > 0) { // Create a new quad. int prim = addprim(geoself(), poly_type); addvertex(geoself(), prim, pt - res_lon); addvertex(geoself(), prim, pt - res_lon - 1); addvertex(geoself(), prim, pt - 1); addvertex(geoself(), prim, pt); } } } // Outer surface. Same except r_max used and reversed vertex order. for (int i = 0; i < res_lat ; i++) { for (int j = 0; j < res_lon; j++) { // Calculate longitude arc point position. float theta = start_theta + theta_step * j; float phi = start_phi + phi_step * i; matrix r = ident(); vector pos = set(cos(theta), 0, sin(theta)) * r_max; vector axis = cross(normalize(pos), {0, 1, 0}); rotate(r, phi, axis); pt = addpoint(geoself(), pos * r); setpointattrib(geoself(), "ring_id", pt, ring_id, "set"); setpointattrib(geoself(), "seg_id", pt, seg_id, "set"); if (i > 0 && j > 0) { // Create a new quad (reverse vertex order). 
int prim = addprim(geoself(), poly_type); addvertex(geoself(), prim, pt); addvertex(geoself(), prim, pt - 1); addvertex(geoself(), prim, pt - res_lon - 1); addvertex(geoself(), prim, pt - res_lon); } } } // Side surfaces. for (int i = 1; i < res_lon; i++) { int prim; int surface_ptnum = res_lon * res_lat; int start_pt = i + pt - surface_ptnum - surface_ptnum + 1; // Bottom. prim = addprim(geoself(), poly_type); addvertex(geoself(), prim, start_pt - 1); addvertex(geoself(), prim, start_pt); addvertex(geoself(), prim, start_pt + surface_ptnum); addvertex(geoself(), prim, start_pt - 1 + surface_ptnum); // Top. prim = addprim(geoself(), poly_type); addvertex(geoself(), prim, start_pt + surface_ptnum - res_lon); addvertex(geoself(), prim, start_pt - 1 + surface_ptnum - res_lon); addvertex(geoself(), prim, start_pt - 1 + surface_ptnum + surface_ptnum - res_lon); addvertex(geoself(), prim, start_pt + surface_ptnum + surface_ptnum - res_lon); } for (int i = 1; i < res_lat; i++) { int prim; int surface_ptnum = res_lon * res_lat; int start_pt = i * res_lon + pt - surface_ptnum - surface_ptnum + 1; // Side A. prim = addprim(geoself(), poly_type); addvertex(geoself(), prim, start_pt); addvertex(geoself(), prim, start_pt - res_lon); addvertex(geoself(), prim, start_pt - res_lon + surface_ptnum); addvertex(geoself(), prim, start_pt + surface_ptnum); // Side B. prim = addprim(geoself(), poly_type); addvertex(geoself(), prim, start_pt - 1); addvertex(geoself(), prim, start_pt - 1 + res_lon); addvertex(geoself(), prim, start_pt - 1 + res_lon + surface_ptnum); addvertex(geoself(), prim, start_pt - 1 + surface_ptnum); } } // RING FUNCTION void ring(float min_arc; float max_arc; float h_min; float h_max; float r_min; float r_max; float skip_chance; int ring_id; float ring_seed; int num_rings; float frame){ float fixed_res = chi("fixed_res"); float step_lon = ch("step_lon")+10000*fixed_res; // Longitude resolution in meters. 
float step_lat = ch("step_lat")+10000*fixed_res; // Latitude resolution in meters.
float theta = 0;
float in_theta = 0; // ch("test_theta_step"); //start_theta
float out_theta = theta + TAU;
float arc_width;
float arc_sep_min = radians(ch("arc_sep1"));
float arc_sep_max = radians(ch("arc_sep2"));
float phi = 0;
float arc_h;
float freq_min = ch("freq1"); //height change frequency
float freq_max = ch("freq2");
float sfreq_min = ch("sfreq1"); //spin frequency
float sfreq_max = ch("sfreq2");
float amp_min = ch("amp1");
float amp_max = ch("amp2");
//int seg_num = npoints(geoself()); // /(res_lon+1*res_lat+1)
int seg_id = 0;
float f = frame; //ch("t"); //time frame

//ring logic
while (1) {
    //seeds and mapping
    float seed = seg_id + ring_seed;
    float seed_big = fit(rand(seed+2153.5), 0, 1, 0, 50000);
    float nr_max = fit(rand(r_max), 0, 1, 0.2, 1);
    float ring_idf = fit(ring_id, 0, num_rings, 0, 1);
    arc_width = fit(rand(seed+223), 0, 1, min_arc, max_arc);
    float arc_sep = fit(rand(seed+536), 0, 1, arc_sep_min, arc_sep_max);

    //check end of circle doesn't overlap begin
    if (theta+arc_width >= TAU-arc_sep) {
        arc_width = TAU-theta-arc_sep*nr_max;
    }

    //noise fx for height
    float freq = chramp("freq_ramp", rand(seed_big+8+ring_idf));
    freq = fit(freq, 0, 1, freq_min, freq_max);
    float sample = (f/2)*freq;
    float noise = noise(sample);

    //height
    arc_h = fit(noise, 0.3, 1, h_min, h_max);
    float m = arc_h*ch("mirror"); //mirror below xz plane

    //noise fx for rotation
    int crazy_spin = chi("crazy_spin");
    //crazy_spin is seeded per segment, this causes segments overlapping
    float sfreq = chramp("spin_ramp", ring_idf+rand(seed_big+0.5)*crazy_spin); //spin frequency ramped on ring id
    sfreq = fit(sfreq, 0, 1, sfreq_min, sfreq_max);
    float sample1 = f*sfreq;
    float noise1 = noise(sample1);
    float spin_rnd = fit(noise1, 0.3, 1, 0, 1);

    //trig fx for rotation
    float spin_trig = sin(sample1);
    spin_trig = fit(spin_trig, -1, 1, 0, 1);

    //regular rotation
    float spin_reg = f*sfreq; //in this case sfreq is the slope of the linear curve

    //rotation
    float amp = fit(rand(ring_seed+1.25), 0, 1, amp_min, amp_max);
    int spin_dir = (rand(ring_seed+11) > 0.5) ? 1 : -1;
    amp *= spin_dir; //clockwise and counterclockwise turn

    float w;
    int spin_type = chi("spin_type");
    if (spin_type == 0) {
        w = amp*spin_reg/5; // /5 is for keeping rotational speed similar to other types
    }
    if (spin_type == 1) {
        w = amp*spin_rnd;
    }
    if (spin_type == 2) {
        w = amp*spin_trig;
    }
    in_theta = w+amp; //+amp gives an in_theta offset for each ring even when rotation is disabled

    // Proportional longitude resolution.
    float arc_width_length = r_max * radians(arc_width)*10;
    int res_lon = max((int) (arc_width_length / step_lon), chi("min_res1"));

    // Proportional latitude resolution.
    float arc_h_length = r_max * radians(arc_h)*10;
    int planar = chi("planar"); //flat rings
    int res_lat = max((int) (arc_h_length / step_lat), chi("min_res2"))*(1-planar)+planar;

    //creation
    if (rand(seed+1563.9) > skip_chance) {
        segment(res_lon, in_theta+theta, in_theta+theta+arc_width,
                res_lat, phi-m, phi+arc_h, r_min, r_max, ring_id, seg_id);
    }

    theta += arc_width+arc_sep; //arc separation
    seg_id += 1;
    if (theta >= out_theta) break;
}
}

float min_arc = radians(ch("arc1"));
float max_arc = radians(ch("arc2"));
float min_h = radians(ch("height1"));
float max_h = radians(ch("height2"));
float r_min = ch("r1");
float r_max = ch("r2");
float skip_chance = ch("skip_chance"); //chance to make a hole in ring
float frame = @Frame;

// ENTIRE RING SYSTEM
int num_rings = chi("ring_count");
float ring_seed = ch("seed_ring");
float r_depth_min = ch("r_depth1");
float r_depth_max = ch("r_depth2");
float r_depth_seed = ch("r_depth_seed");
float off_min = ch("ring_off1");
float off_max = ch("ring_off2");
float r_off_seed = ch("ring_off_seed");

for (int i = 0; i < num_rings; i++) {
    float r_depth = fit(rand(i+r_depth_seed+222), 0, 1, r_depth_min, r_depth_max);
    float r_off = fit(rand(i+r_off_seed+.5), 0, 1, off_min, off_max);
    ring(min_arc, max_arc, min_h, max_h, r_min, r_max,
         skip_chance, i, ring_seed, num_rings, frame);
    r_min = r_max+r_off;
    r_max = r_min+r_depth;
    ring_seed *= 1.235684;
}
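Nearly every parameter in the snippet above comes from the same pattern: a seeded rand() remapped into a range with fit(). For anyone following along outside Houdini, here is a minimal plain-Python stand-in for VEX's fit() (a sketch of the remap-and-clamp behavior, not SideFX's implementation):

```python
def fit(x, omin, omax, nmin, nmax):
    # Remap x from [omin, omax] to [nmin, nmax],
    # clamping to the new range like VEX's fit().
    t = (x - omin) / (omax - omin)
    t = max(0.0, min(1.0, t))
    return nmin + t * (nmax - nmin)

# e.g. a rand() value of 0.5 mapped into an amp range of [0.2, 1.0]
print(fit(0.5, 0, 1, 0.2, 1.0))  # -> 0.6
```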
  39. 2 points
    files = filter(lambda p: p.endswith('.sc'), [n.evalParm('file') for n in hou.nodeType('Sop/file').instances()])
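One caveat with the snippet above: in Python 3, filter() returns a lazy iterator, so wrap it in list() if you want to reuse the result. The filtering itself can be checked hou-free with stand-in paths (the node-gathering part obviously still needs hou):

```python
# Stand-in for the paths gathered from hou.nodeType('Sop/file').instances()
paths = ['/tmp/flip.sc', '/tmp/geo.bgeo.sc', '/tmp/render.exr']

# Same filter as the snippet; list() forces evaluation in Python 3
sc_files = list(filter(lambda p: p.endswith('.sc'), paths))
print(sc_files)  # -> ['/tmp/flip.sc', '/tmp/geo.bgeo.sc']
```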
  40. 2 points
Hey everyone, for all the German speaking peeps out there: LYNX Tools are featured in the 2020/01 issue of the “Digital Production” magazine. The article covers the basics to get you started as well as diving into advanced use cases and tips & tricks. I also had the honor of supplying the official cover. The scene was set up using only the LYNX fabric tools in Houdini and rendered in Mantra. The scene file is available for free in the GitHub repository. So now anyone can design an ugly Christmas Sweater ;) Looking forward to seeing what you guys come up with, enjoy! Digital Production Magazin: https://www.digitalproduction.com/ausgabe/digital-production-01-2020/ Alternatively view the article in my latest blog post: https://www.lucascheller.de/vfx/2019/12/15/ynybp7wpiqtshoy/ High-Res Renderings for the Cover/Article (Scene File Included in the GitHub Repo): https://www.artstation.com/artwork/OyeY6g LYNX GitHub Repository: https://github.com/LucaScheller/VFX-LYNX
  41. 2 points
    Hi Daniel, that's a complex shape which will require a fair amount of RnD. But to get started you could delete the top of a spherified cube and deform it with noise. The finer surface structures would probably be done with displacement textures. cell.hipnc
  42. 2 points
    Hello again! It's been a long time. Today with the release of Houdini 18 marks the first "official" release of MOPs: v1.00. This includes a ton of changes since the previous Stable release, and is now feature complete, barring any future bugfixes. Development of new features will now be focused on the upcoming commercial version of MOPs. The list is way too long to post here, so I'll just link to the Github release page: https://github.com/toadstorm/MOPS/releases/tag/v1.00 Please continue to post bug reports, feature requests, or any other feedback, either here, on GitHub, or in the MOPs forums! Thanks as always!
  43. 2 points
Sorry for the chaos inside. patt789.hipnc
  44. 2 points
I wrote a custom render engine in COPs today. While 'engine' is probably a bit far fetched, it's a little ray tracer experimentally supporting:
- Meshes with UV coordinates
- Shading on diffuse textures
- Multiple point lights (including color, intensity, size)
- Area shadows and light attenuation
- Ambient occlusion
- Specular highlights
- Reflections with varying roughness
The snippet basically transforms the pixel canvas to the camera position and shoots rays around using VEX functions like intersect() and primuv(). The rendering process only takes a few seconds. I still have to figure out the licensing fees, though. COP_render.hipnc
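In the post above, VEX's intersect() does the heavy lifting of the ray test. For anyone curious what that test looks like underneath, here is a minimal plain-Python sketch of the analytic ray-sphere intersection (an illustration of the general idea only; VEX's intersect() works against arbitrary geometry, not just spheres):

```python
import math

def ray_sphere(orig, dirn, center, radius):
    """Return the distance t along a normalized ray to the nearest
    sphere hit, or None if the ray misses. Solves the quadratic
    |orig + t*dirn - center|^2 = radius^2 with a == 1."""
    oc = [o - c for o, c in zip(orig, center)]
    b = 2.0 * sum(d * o for d, o in zip(dirn, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# camera at z=-5 looking down +z at a unit sphere on the origin
print(ray_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))  # -> 4.0
```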
  45. 2 points
Also, if you have bones that you don't want to deform but just want them to deform muscles, you can add them as external colliders to detangle. penetration_removal_detangle2.hip
  46. 2 points
Now I get it, just have to fine-adjust it and I'm gonna post the hipnc.
  47. 2 points
For a wireframe shader with consistent line widths, first create a primitive attribute that contains all point positions. Then inside a shader choose point pairs for ptlined() to determine the shortest distance to your shading position to check against a custom wire width.

// primitive attribute with point positions
int pts_prim[] = primpoints(0, @primnum);
v[]@pos = {};
foreach(int pt; pts_prim){
    vector p = point(0, 'P', pt);
    append(@pos, p);
}

// shader snippet feeding point positions into ptlined()
// determining shortest distance to check against wire width.
int num = len(pos);
float dist = 1e6;
for(int i = 0; i < num; i++){
    vector pos_0 = pos[i];
    vector pos_1 = pos[(i + 1) % num];
    float dist_curr = ptlined(pos_0, pos_1, P);
    if(dist_curr < dist){
        dist = dist_curr;
    }
}
stroke = dist <= stroke;

wireframe.hipnc
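For reference, the segment-distance query that ptlined() performs in the shader above can be sketched in plain Python (a hou-free illustration of the math, not SideFX's implementation):

```python
import math

def ptlined(a, b, p):
    """Shortest distance from point p to the line segment a-b,
    mirroring what VEX's ptlined() computes per point pair."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(x * x for x in ab)
    # parametric position of the closest point, clamped to the segment
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.sqrt(sum((p[i] - closest[i]) ** 2 for i in range(3)))

# point one unit above the middle of a unit segment along x
print(ptlined((0, 0, 0), (1, 0, 0), (0.5, 1, 0)))  # -> 1.0
```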
  48. 2 points
    Awesome! Thanks guys! I got both methods working. H15 file attached below. insideObject.hipnc
  49. 1 point
    Love it, here are some Pollocks made with a slight variation on your technique.
  50. 1 point
Hi Dave, following up from what I wrote to you by email:
******************************************
There is the cheap way of doing it and a more expensive way of doing it.
*) the cheap way would be to use a few NURBS cylinders and use their parametric coordinates to drive a noise pattern along them, which displaces the surface. Add a fresnel type shader (think x-ray), some glow and 2d distortion in comp and you will be 90% there.
*) the expensive way to do it would be with a fluid simulation. Think: "rocket lifting off". That would give you all the swirly detail and would allow for parts of the energy to break free. You could advect a ton of particles and render those with high motion blur to get fine streaking details.
This type of effect will be all about layering different elements together. Generally:
- a big overall beam effect
- some atmospherics (fog/dust clouds that are slowly being pushed by your beam... that is how you would read that the beam is affecting the environment) around it to make it sit in the environment
- some really small elements like sparks/embers/energy particles to help establish a sense of scale
- mixing the elements together in comp with glows, a darkening of the background where the effect is happening (kinda like changing f-stops on a camera -- if you were to look straight into the sun, everything else becomes dark because your eye/iris is trying to compensate), 2d distortion (this just makes it sit and look cool) -- this can be normal distortion or even chromatic distortion
Things to search for on the forums: "tornado", "solar flare", "ink", "beam", "pyroclastic clouds". There are a lot of scene files on the forums with pretty much all the components you require for your effect. So go for a search on odforce.
******************************************
*) scene file -> inexpensive way with particles example. -- you should cache some stuff out to disk. I had a limited amount of time, so I just rendered it directly. The VOPs should give you plenty of ideas to play with and to help control your particle sim. It needs some work, but you can add extra pieces of geometry as emission sources, make duplicates, combine several sims together, etc... Have fun learning.
cheers,
Peter
beam_01.avi beam_01.hip
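The cheap approach described above (drive a noise pattern along a cylinder's parametric coordinates and push points out along their radial normal) can be sketched hou-free in Python. The sine product here is a hypothetical stand-in for a real noise() lookup, just to show the displacement math:

```python
import math

def displace_cylinder(u, v, radius=1.0, amp=0.2, freq=4.0):
    """Displace a point on a unit-height cylinder.
    u in [0, 1) wraps around the circumference, v in [0, 1]
    runs along the length (the parametric coordinates)."""
    # cheap sine-based stand-in for a noise() call driven by (u, v)
    n = math.sin(freq * math.tau * u) * math.sin(freq * math.pi * v)
    r = radius + amp * n  # push the point in/out along its radial normal
    return (r * math.cos(math.tau * u), v, r * math.sin(math.tau * u))

# at u=0 the stand-in noise is zero, so the point sits on the base radius
print(displace_cylinder(0, 0.5))  # -> (1.0, 0.5, 0.0)
```

In Houdini you would do the same thing in a Point VOP or wrangle, feeding the cylinder's UVs into an anti-aliased noise and displacing along @N.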