Popular Content

Showing most liked content since 07/09/2020 in all areas

  1. 7 points
    I have a Houdini GitHub repo where, in addition to the code section (the Houdini pipeline for my personal projects), I store all my R&D notes on pipeline development and programming, organized as one wiki. The valuable part of this wiki is the VEX for Artists tutorial, where I record everything I have been able to understand about VEX in tutorial form, so it might be useful not only for me but for anybody else taking the same route of learning programming from scratch. It was built by a guy with an artistic background and no technical education or skills, so it should suit the same type of people: easy and clean, with a lot of simplification and a lot of explanation of the basics. The tutorial was just updated with a new section, Solving Problems with VEX, where we use the basic blocks studied earlier to create something meaningful for production. The first example we look into is creating a hanging wire between 2 points. For those who tried to learn VEX, or were even afraid to begin, and stopped because it was too hard: enjoy!
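    A wire hanging between two points is classically a catenary (a cosh curve). The tutorial's actual VEX is not reproduced here, but as a hedged sketch of the underlying idea, here is the same construction in Python: blend linearly between the endpoints and dip each sample with a normalized cosh profile (the sag control and helper name are my own assumptions, not from the tutorial).

```python
import math

def hanging_wire(p0, p1, sag, n=10):
    """Sample n points along a catenary-style hanging wire.

    p0, p1 : (x, y, z) endpoints; sag : how far the midpoint drops.
    The cosh profile is remapped to 0 at the ends and 1 at the middle.
    (Hypothetical helper, not taken from the tutorial itself.)
    """
    pts = []
    for i in range(n):
        t = i / (n - 1.0)                      # 0..1 along the wire
        # straight-line blend between the endpoints
        x = p0[0] + (p1[0] - p0[0]) * t
        y = p0[1] + (p1[1] - p0[1]) * t
        z = p0[2] + (p1[2] - p0[2]) * t
        # normalized cosh dip: 0 at t=0 and t=1, 1 at t=0.5
        dip = (math.cosh(2.0 * t - 1.0) - math.cosh(1.0)) / (1.0 - math.cosh(1.0))
        pts.append((x, y - sag * dip, z))
    return pts
```

    In a point wrangle the same math would run per point, with @ptnum and npoints() standing in for the loop index and count.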
  2. 6 points
    My latest reel, 2020. Collection of shows 2016-2019.
  3. 5 points
    Hello magicians. I want to share with you the result of studying l-systems. Behold the Moss Incredible aka Sphagnum! Redshift shading. And yes, it is growing. Available on https://gum.co/fZROZ https://gum.co/qmDmg Thanks! akdbra_moss_grow.mp4 akdbra_moss_var.mp4
  4. 3 points
    Hello! I created a few tools for a recent project for creating trees and thought I'd share them with the community. This is the first toolset I've ever created, so if you like it, consider donating a few bucks on Gumroad; I currently have it as a "pay what you want" product. You are more than welcome to try it out and come with suggestions for potential future updates. Hope you like it! https://gum.co/nEGYe
  5. 3 points
    Finished my first tutorial; hopefully you find it helpful for learning Houdini!
  6. 3 points
    yes there is. (you refuse to upload your file, I refuse to upload my file)
  7. 3 points
    To fill quadratic polygons with copies, just take the square root of the intrinsic primitive area as the point scale:

    int pt_add = addpoint(0, v@P);
    float area = primintrinsic(0, 'measuredarea', i@primnum);
    float scale = sqrt(area);
    setpointattrib(0, 'pscale', pt_add, scale, 'set');
    removeprim(0, i@primnum, 1);

    KM_recursive_subd_001.hipnc
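    Why the square root works: for a near-square quad of side s, the measured area is s squared, so sqrt(area) recovers the linear size that pscale needs. A quick Python check of that relationship (helper name is illustrative):

```python
import math

def pscale_from_area(area):
    """Linear copy scale for a near-square quad of the given area.

    A square of side s has area s * s, so sqrt(area) recovers s,
    letting a unit-sized copy fill the quad edge to edge.
    """
    return math.sqrt(area)
```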
  8. 2 points
    Hi all, I recently started uploading a large Houdini Python course/playlist on YouTube. In case you might be interested, there will be a section in there where I'll actively answer odforce Python questions. The series is called the 'Deep Dive Series', and this playlist contains a free course that will be approximately 31 to 32 hours long in total. The first few videos about programming with Python were just uploaded. This course consists of 5 steps:
    1 - (Part 1) The Big Picture
    2 - (Part 2a) The Fundamentals (13 videos)
    3 - (Part 3) Full Projects (multiple projects from beginning to end)
    4 - (Part 2b) Beyond the Fundamentals (more advanced tutorials to be used on a need-to-know basis)
    5 - (Part 3) Contextualization (masterclasses on all kinds of decisions you can make as a programmer, and interviews with fellow Houdini Python coders)
    You can watch it here - https://bit.ly/2Pu8gbI (the complete series will be free) Netinho da Costa Linkedin - linkedin.com/in/netinho-r-p-da-costa-b5326985
  9. 2 points
    In case anyone else needs to do this, I made a free HDA for it. You can add noise, rotate, scale, and transform randomly, and it is all done in /mat. Make sure to lay out UVs before using it. The HDA is included here, or you can get it from the link in the video on YouTube. uv_randomize_b.hdalc example.hiplc
  10. 2 points
    http://www.cgchannel.com/2020/07/download-the-free-public-beta-of-cascadeur/ what is this sorcery...I'm not an animator but would an animator say this is a Maya killer ?
  11. 2 points
    Here is a simple example with Python; I attached an example file:

    parm = hou.parm('/obj/geo1/sphere1/ty')

    # store all values in a list
    values = []
    for i in range(100):
        val = parm.evalAsFloatAtFrame(i)
        values.append(val)

    # remove the parm expression
    parm.deleteAllKeyframes()

    # set the stored values back as keyframes
    for i in range(100):
        myKey = hou.Keyframe()
        myKey.setFrame(i)
        myKey.setValue(values[i])
        parm.setKeyframe(myKey)

    bake_keyframes.hipnc
  12. 2 points
    You CAN write the VEX yourself every time if you want to; or, if you feel lazy, just use a Point SOP with the preset Morph to 2nd Input, and you have a controlled lerp amount.
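    For reference, the Morph preset is just a per-component linear interpolation between the two inputs' point positions; a minimal Python sketch of the same lerp (names are illustrative):

```python
def lerp(a, b, t):
    """Linear interpolation: t=0 gives a, t=1 gives b."""
    return a + (b - a) * t

def morph(p0, p1, amount):
    """Blend one point position toward another, per component,
    like the Point SOP's Morph preset with a controlled amount."""
    return tuple(lerp(c0, c1, amount) for c0, c1 in zip(p0, p1))
```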
  13. 2 points
    Crag Splits Face................crudely.
  14. 2 points
    This belongs to everyone in the Houdini community (for those that didn't know). Have fun with colors and make absolutely anything in any area: biology, particles, modeling, plotting, CNC, etc. Sorry for eventual errors; I use Houdini 16.5. Tips: save the channel data, and download qLib. HaveFunfinal2.hipnc
  15. 2 points
    Ok, so I have found a solution, which I think could be improved upon. This setup keeps a heartbeat running for 30 seconds, and each second you can generate new work items. You can obviously change the number of heartbeats and the wait time. This allows you to fetch live data from a database or folder, for example, and act accordingly. There's no logic for tracking what was processed; you need to build that yourself. But it's at least a way to keep Houdini polling. In the file there are also 2 Python generators which I think would be the proper solution, but I can't get them to work. I'd love to know if there's anything better or more robust. Hope it helps someone, and if you have feedback please let me know! PDG_while_loop_tests.hiplc
  16. 2 points
    We use 3DEqualizer at work. Although it doesn't have the best UI, it is apparently the best tracking software. I've also done some 3D tracks using Blender; the 3D solves have been pretty good and the tracking is pretty darn quick!
  17. 2 points
    Here's a non-VEX approach that doesn't even use any rotations. I'm applying a linear gradient across the top face, then using a Soft Transform to move the points up based on that mask. You do need Labs for the Gradient SOP. columns01_soft_transform.hiplc
  18. 2 points
    not quite sure if that’s what you are after but please take a look at the attached file: rotate_prim.hiplc
  19. 2 points
    Hey all! I just updated Simple Tree Tools to version 1.5.2. There is a lot of goodies in here! Download ----> https://gum.co/nEGYe Oh, and don't forget to read the "read me" to see what's new! Cheers!
  20. 2 points
    I feel like Get Vector Component needs an update to be useful. Currently it's sort of useful if you want the option to statically change the component as a promoted parameter, but in reality it has a few issues:
    1. there is no 4th component in the menu, even though it allows the vector4 signature
    2. it doesn't allow a dynamic choice of the component, since it has no component index input, so the better choice for that case is Vector to Float plus a Switch
    If updated, however, it could be useful for both of those cases.
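    What the requested dynamic component input might behave like, sketched in Python (this illustrates the feature being asked for, not the VOP's actual implementation):

```python
def get_vector_component(v, index):
    """Return one component of a vector by a runtime index.

    Works for vector2/vector/vector4 alike, including the 4th (w)
    component missing from the current menu. Hypothetical helper.
    """
    if not 0 <= index < len(v):
        raise IndexError("component %d out of range for a %d-vector" % (index, len(v)))
    return v[index]
```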
  21. 2 points
    Hi, it's more of a workflow thing. I personally use Vector to Float myself. I think Vector to Float might also be faster, as it modifies 3 parameters by reference versus calling vop_getcomp 3 times to get 3 values (if you need 3 values, that is).
  22. 2 points
    You are welcome. Submitting RFEs doesn't hurt. The C++ implementation is still much faster than laspy, but it's possible to do an easy and quick modification of the las import until the LIDAR Import SOP gets some improvements, without going the C++ route. I attached an example that adds classification as an attribute (and closes the las file to avoid memory issues). I added an updated file, pz_load_las_with_python_classification_attribute_memory_fix.hipnc, as it looks like the context management protocol is not implemented in laspy (a "with File():" block will not close the file at exit), so I switched to try/except/finally instead. It will not error on the node, so watch the Python console for exception logging.
  23. 2 points
    The classification works fine. But with inFile = inFile.points[I] you are overwriting the inFile object with multi-dimensional arrays of all attributes, so you can no longer get them by .x/.y/.z or other named properties. I uploaded a modified scene where you can set a classification and it returns only the subset of points that match it. inFile.points[I] returns the subset of points where I is True; inFile.Classification == 2 returns an array of True/False values determining which points are classified with id 2. Another approach would be adding Classification as an attribute to all points and then using VEX expressions, groups, or other partitioning mechanisms to separate the points. pz_load_las_with_python_classification.hipnc
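    The filtering pattern described above is plain numpy boolean masking, which you can try outside of laspy with made-up data:

```python
import numpy as np

# made-up classification ids for 6 points (2 = ground in the LAS spec)
classification = np.array([1, 2, 2, 5, 2, 1])
values = np.arange(6) * 10           # stand-in for any per-point data

mask = classification == 2           # array of True/False, one per point
ground = values[mask]                # keeps only points where mask is True
```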
  24. 2 points
    Hi, pretty neat library! Thank you for the tip. There is no need for CSV; you can do a lot with laspy and numpy themselves. Attached is an example scene that loads data from a las file. It seems that the Lidar Import SOP ignores scale and offset. To make it work (18.0.499, Python 2.7 branch) I cloned the https://github.com/laspy/laspy repository, then copied the content of the laspy folder to $HOME/houdini18.0/python2.7libs/laspy, so I have $HOME/houdini18.0/python2.7libs/laspy/__init__.py (and the rest of the library), and it's possible to load it into Houdini with import laspy in the Python shell. (Numpy is already included with Houdini.) I used the example file from the repository: https://github.com/laspy/laspy/blob/master/laspytest/data/simple.las

    import logging
    from laspy.file import File
    import numpy as np

    node = hou.pwd()
    geo = node.geometry()
    file_path = geo.attribValue("file_path")

    inFile = File(file_path, mode='r')
    try:
        # --- load point position
        coords = np.vstack((inFile.X, inFile.Y, inFile.Z)).transpose()
        scale = np.array(inFile.header.scale)
        offset = np.array(inFile.header.offset)
        # there is no offset in the simple.las example from the laspy library
        # offset = np.array([1000, 20000, 100000])  # just for testing that offset works

        # same as Lidar Import SOP - seems that it ignores scale (and offset?)
        # geo.setPointFloatAttribValues("P", np.concatenate(coords))
        geo.setPointFloatAttribValues("P", np.concatenate(coords * scale + offset))

        # --- load color
        color = np.vstack((inFile.red, inFile.green, inFile.blue)).transpose()
        geo.addAttrib(hou.attribType.Point, "Cd", (1.0, 1.0, 1.0), False, False)  # add color attribute
        geo.setPointFloatAttribValues("Cd", np.concatenate(color / 255.0))  # transform from 1-255 to 0.0-1.0 range
    except Exception:
        logging.exception("Processing lidar file failed")
    finally:
        inFile.close()

    pz_load_las_with_python.hipnc
  25. 2 points
    Here is a basic setup. It uses a second loop to give each primitive a unique name. Inside the loop, the area for each primitive is stored as an attribute. After the loop, pscale is derived from the area. Use the ramp and the multiplier to dial in the sizes. ap_recursive_subd_001.hiplc
  26. 2 points
    To import a specific script in Python you need to append the folder that contains it to the Python path (the list of all the locations Python searches for modules). You can do it using the sys module:

    import sys
    sys.path.append("/path/to/the/folder/where/my/script/is")
    import myscript

    And if you want to import just a specific function from your file and not the whole file:

    import sys
    sys.path.append("/path/to/the/folder/where/my/script/is")
    from myscript import myfunction

    Note that you append the folder, not the .py file itself. Cheers,
  27. 2 points
    I love this, but I still can't understand how to get that with CHOPs. I see a formula in your pic, Tesan, but it's not for a Point VOP; it's more of a regular expression, hmm... I'm not very friendly with CHOPs yet. I should learn to know that beast better. The only way I have used them was to stabilize motion from a sim like Vellum: measure the velocity, filter the animation, and blend the original animation with the slightly smoothed one wherever the velocity exceeds your input threshold. For now I have been playing with different procedural setups and concepts for a job, and some silliness like this... Doughnut with Covid creatureA.mp4
  28. 2 points
    Yes I do! I just created one Here you go!
  29. 2 points
    Hi @lobao, thanks for following up on the progress. Regarding a paid tutorial, I think a tutorial is not enough; it would have to be a masterclass or something more robust. This method is not a simple one to deal with, and the pipeline is made of many different stages that have to be explained in a nice way without overwhelming the attendants too much, so I'm trying to find the best way to do this: maybe a Patreon, or a collection of hips on Gumroad. A Patreon is a good idea, as I have many techniques and tools to show, so I think that approach would be nice. Or maybe people are looking for another way to learn. Who knows! Anyway, thanks again for being interested! Alejandro
  30. 2 points
    @vinyvince Progress between lessons: checking out more about modeling and UVs, learning from Len White and Node Flow.
  31. 1 point
    Wow! Thanks Aizatulin. While trying to fix this problem I had some OK results with the Redshift Round Corners node, but that would have been just a compromise. Your solution is so much better and pretty much what I was looking for. Thanks for the effort! You made my day. Have a nice weekend.
  32. 1 point
    Hey friends, found the solution. Just add a "HF Tile Split" and the grass is super dense Cheers, sant0s
  33. 1 point
  34. 1 point
    Here is a link to an HDA I made in the past. How about this? https://www.orbolt.com/asset/ynkr::polylinebevel
  35. 1 point
    alright, I did a real petal for ya...gotta appreciate my effort !!! vu_HairOnPetal.hiplc
  36. 1 point
    It's absolutely possible to export the foam as a texture with the same pattern/method used in the shader. How? Using COPs and VOPs. What I did was copy the foam part of the ocean shader into the VOPCOP generator, where you then convert the data from world position to UV space. It has a few pros and cons, though:
    - First of all, I have been using this method quite a lot to get fast feedback while doing a project with tons of water shots. It's not very fast to generate, but waiting for the render of even a small portion of the ocean is a lot slower.
    - You are exporting a texture, so you have to export a really hi-res image to keep all the details and cover the whole ocean. This also has an easy solution: in the tool I've developed there is the option to export an extra texture placed only near the camera, on the close-up sea. The further from the camera, the more this texture is blended with a low-resolution texture.
    - You need UVs on your ocean grid. This means it's going to be a little more complicated if you also want to use it with an ocean simulation.
    If I have the time I'll post a more complete and accurate version, but here is an example for now. Hope you guys find it useful: AS_convert_ocean_foam_to_texture.hip
  37. 1 point
  38. 1 point
    ah great @sant0s81 thanks for the link!
  39. 1 point
    For the random angles on the tops, in a for loop you can boolean-subtract randomly oriented cubes. For the gradient, before your Copy to Points, make a more or less random @scale.y attribute from Z, defined in a wrangle. Or, with the same wrangle, add a Transform inside the for loop and use the scale.y attribute in the Scale Y field.
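    One way to sketch that @scale.y idea outside Houdini: normalize Z into a 0..1 gradient across the bounds and add per-point jitter. The jitter range, seeding, and remap below are my assumptions, not from the post:

```python
import random

def scale_y_from_z(z, z_min, z_max, jitter=0.3, seed=0):
    """Height scale from a Z gradient plus per-point randomness.

    Normalizes z into 0..1 across the bounds, then offsets it by a
    random amount in [-jitter, jitter], clamped to stay non-negative.
    """
    t = (z - z_min) / (z_max - z_min)        # 0..1 gradient along Z
    rng = random.Random(seed)                # seed per point for repeatability
    return max(0.0, t + rng.uniform(-jitter, jitter))
```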
  40. 1 point
    Hi everyone, I have created a sun effect in Houdini. In this tutorial we will learn how to combine many types of noise to make the sun's surface. We will also learn how to create the sun's light rays using a hologram technique in Houdini. Please feel free to check it out here: https://www.cgcircuit.com/tutorial/the-sun-effect1 Regards, Nhan Vo
  41. 1 point
    While I've got that island comment in the file: you can also add a uvautoseam, output the island attrib, add an explodedview, and use island as the piece attrib... Bob's your uncle. (Still black boxes, though.)
  42. 1 point
    This looks like an unmitigated UX disaster.
  43. 1 point
    For H19, I would just like to see the current bugs fixed, and addressing the frequent crashes would be nice as well...
  44. 1 point
    I didn't see much implementation of machine learning in Houdini, so I wanted to give it a shot. I'm still just starting down this rabbit hole, but I figured I'd post the progress. Maybe someone else out there is working on this too. First of all, I know most of this is super inefficient and there are faster ways to achieve the results, but that's not the point. The goal is to get as many machine learning basics functioning in Houdini as possible without Python libraries glossing over the math. I want to create visual explanations of how this stuff works; it helps me ensure I understand what's going on, and maybe it will help someone else who learns visually. So... from the very bottom up, the first thing to understand is gradient descent, because that's the basic underlying mechanism of a neural network. Can we create that in SOPs without Python? Sure we can, and it's crazy slow. On the left is just normal gradient descent; once you iterate over more than 30 data points it starts to chug. On the right is a stochastic gradient descent hybrid which, using small random batches, fits the line over 500 data points. It's a little jittery because my step size is too big, but hey, it works, so... small victories. Okay, gradient descent works. Awesome, let's use it for some actual machine learning, right? The hello world of machine learning is recognizing handwritten digits with the MNIST dataset: a collection of 60 thousand 28 by 28 pixel images of handwritten digits, each labeled with what it's supposed to be, so we can use it to train a network. The data is stored as a binary file, so I had to use a bit of Python to interpret the files, but here it is. Now that I can access the data, next is actually getting this thing to a trainable state. Still figuring this stuff out as I go, so I'll probably post updates over the holiday weekend. In the meantime, anyone else out there playing with this stuff?
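    The plain gradient descent step described above, fitting a line y = m*x + b by following the mean-squared-error gradient, looks like this in Python. This is a minimal sketch of the algorithm, not the poster's SOP network; the learning rate and step count are illustrative:

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = m*x + b by batch gradient descent on mean squared error."""
    m, b = 0.0, 0.0
    n = float(len(xs))
    for _ in range(steps):
        # partial derivatives of MSE with respect to m and b
        grad_m = sum(2.0 * (m * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2.0 * (m * x + b - y) for x, y in zip(xs, ys)) / n
        m -= lr * grad_m                 # step against the gradient
        b -= lr * grad_b
    return m, b
```

    The stochastic variant replaces the full sums with a small random batch per step, which is why it jitters but scales to many more data points.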
  45. 1 point
    Play with this file ( combine different shapes ) solu2od.hipnc colODD.hipnc I like the arrangement in your file ...
  46. 1 point
    Here is a Redshift instancing ground cover setup. The meshes included come from cc0 sources. This mesh set contains grass and spring time plants but the system can scatter any mesh set across the target surface. (i.e. rocks, trees, plants etc...) This scene uses the image based volumetric spotlight to achieve colorization. ap_rs_instance_ground_cover_050517.hiplc
  47. 1 point
    in houdini.env you can set HOUDINI_SPLASH_FILE = /path/to/file.JPEG
  48. 1 point
    // Point wrangle.
    #define PI 3.1415926535897932384

    float u = fit01(@u, -PI, PI) * ch("revolutions");
    vector pos = set(sin(u), cos(u), 0) * ch("radius");
    matrix3 t = set(v@tangentu, v@N, v@tangentv);
    @P += pos * t;

    Where the tangents and normal were computed by the PolyFrame SOP, and @u is the point's 0..1 position along the curve. spiralize.hipnc
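    The same parametrization in Python, reduced to the circle-offset part (the tangent-frame multiplication is left out here, since it needs the PolyFrame attributes; the sampling helper is illustrative):

```python
import math

def spiral_offsets(n, revolutions, radius):
    """Offsets the wrangle adds to each curve point.

    u in 0..1 along the curve is remapped to -pi..pi (like fit01),
    multiplied by the revolution count, then swept around a circle.
    """
    pts = []
    for i in range(n):
        u01 = i / (n - 1.0)
        u = (-math.pi + u01 * 2.0 * math.pi) * revolutions
        pts.append((math.sin(u) * radius, math.cos(u) * radius, 0.0))
    return pts
```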
  49. 1 point
    May I suggest different approach. helix_curve.hipnc
  50. 1 point
    I'm pretty sure FLIP fluids don't work that way. You can activate a FLIP object - that is, the object that contains the entire particle stream, but the individual particles aren't handled as individual objects... it would make it impossibly heavy, and cause rather strange behaviour too I think. If you want particles to remain static, before being "activated" in the sim, you need to group just the points you want to activate at a particular frame from a SOPs point cloud, and use them as an emitter. If you want those particles to be present in the sim from the start - as in, collidable, but static, then it's a rather more difficult notion. About the best way I can think of is to give those particles a massive viscosity value, and maybe use a gravity mask to stop them falling under gravity. Basically, complicated. It may be better to try and re-think your approach. FLIP fluids behave like fluid, often even when you're trying everything possible to make them do otherwise... you can waste a LOT of time trying to invent ways to stop FLIP fluids being fluid :-)