Leaderboard
Popular Content
Showing content with the highest reputation since 03/04/2026 in Posts
-
Using AOV_v04 for my projects, no errors. Tricks: AOVs_H20_v05.hip AOVs_v04Srle.hiplc (2 points)
-
Hey Hannes, my experience with it has been a mix. Some people, usually the ones with less experience, want to copy the concept, idea, and design one-to-one from the generated image, while people who really understand the whole process, or have more knowledge of it, usually take ideas from the image but don't want to copy it, if that makes sense. Lately, though, more and more people are using it, and with the less experienced ones it's tough to make them understand that it's not as easy as they think; I have been in meetings with clients, and trying to explain the whole thing is tiring. That said, these types of clients are few; most of the people I work with, clients and co-workers, get the picture. As for where all this is going, I don't think anybody can really answer that yet. We can definitely see some benefits, but also negatives, about this whole AI thingy. (2 points)
-
Thank you so much @fencer! I really appreciate your help, that VOP setup for the orient is working perfectly. Exactly what I needed. (1 point)
-
6 GB is not enough; you should probably stick to a 1024x1024 resolution in COPs to work. (1 point)
-
Thank you, Konstantin! Big fan of your videos! (1 point)
-
The attribdelete is already set as the render node, so I'm not sure I understand what the issue is; Jump to Operator shows the exact material applied to the node. (1 point)
-
Nowadays you would rather use the Group Expand node, though: https://www.sidefx.com/docs/houdini/nodes/sop/groupexpand.html group_expand.hip (1 point)
-
Just wondering how people here in the forum think about this topic. As an FX artist at my company, I often hear clients asking for AI effects and cheaper timeframes. Currently we can use it in more and more cases, but they are still rare. What are your experiences? (1 point)
-
I've found them very helpful; they contribute greatly to research and study for my workflow. They allow me to quickly adjust my workflow in a hands-on way, avoid repeated mistakes, build more meaningful tools and methods, explore different frameworks, and arrive at clearer product definitions. I use NotebookLM to systematize my documents and notes, while employing GPT to explore frameworks and clarify scope, architecture, and pipelines. I also use Grok in a more conservative way, which I really appreciate: it helps me stay grounded and realistic by engaging with real-world data, surveys, audits, and measurements before moving into project planning. In that sense, Grok helps challenge assumptions and debate scope and definitions. Meanwhile, Claude is very useful for quietly assisting with hands-on tasks and helping produce practical tools that I can apply directly in manual workflows.

I've also noticed some mismatches online when people discuss AI in the context of VFX and animation. Many conversations confuse generative AI (for images or video) with AI used for automation, reasoning, or workflow support. These are very different applications. In production environments, AI is often more useful for speeding up processes, reducing repeated mistakes, and helping frame the correct scope of a project. For technical and visual production, it's important to clarify what type of AI we are actually referring to before discussing solutions. Doing so helps align both the questions and the answers, ensuring everyone is on the same page. This kind of clarity creates the constructive and energetic discussions that I always appreciate when engaging with the Houdini/technical-art community.

These things remind me of the Houdini community when I first tried to break in from scratch. There were many mysteries and untold details, sometimes things people preferred not to share, especially when there was no clear commercial benefit. Over time, I realized that sharing knowledge is actually a way for me to learn more myself. In reality, only a small number of people will truly understand these ideas and extract what is useful for their own workflows, allowing them to evolve. Meanwhile, AI is advancing rapidly, often faster than major corporations can fully realize. Because of that, I feel comfortable sharing new discoveries, while still valuing the importance of revisiting old topics, practicing fundamentals, and digging deeper into the foundations. In the end, strong fundamentals are what improve real understanding, not hype, marketing scams, or FOMO-driven trends that produce flashy ideas but vanish within weeks. (1 point)
-
KUVA is looking for a Senior Houdini FX TD and a 3D Layout Artist for a 3–4 month project starting immediately.

FX TD
Looking for a senior Houdini artist comfortable working in production pipelines.
Required:
- Strong particles / pyro / volumes
- Experience creating stylised or magical FX
- Solaris / USD workflow
- Rendering with Karma or Redshift
- Pipeline-friendly approach to scene organisation and optimisation
Bonus:
- Fulldome / immersive formats
- Gaussian splatting / point rendering workflows

Layout Artist
Required:
- Excellent 3D camera animation
- Strong cinematic composition
- Experience preparing scenes for lighting/render
Bonus:
- Fulldome / immersive formats

Remote friendly. Please send reel + day rate + availability to: steve@kuva.io (1 point)
-
Holy Cannolli!! This is a brilliantly elegant real-world solution! Much better than the 'fix it in post' hack job I was thinking of. This gives me a tremendous amount to work with. Let me try to implement some of this, and I may return with more questions. Thank you so very much for taking the time to build this out. Greatly appreciated! (1 point)
-
Thanks a lot, I really appreciate the feedback. You are right, there are a lot of things happening now around AI, ComfyUI, GitHub tools, and free stuff too. My main goal with Houdini AI Assistant is not just to make another AI tool, but to build something that feels native in Houdini and helps more directly in procedural workflows. AI texturing / projection in COPs is definitely an interesting direction, and yes, I agree it could make the product stronger. Right now I'm focusing on making the base solid first, then adding more practical, Houdini-specific features step by step. Thanks again for your thoughts. (1 point)
-
What if you could turn a simple prompt into a working Houdini HDA? In this demo, I use Houdini AI Assistant to generate a procedural Asteroid Generator in minutes. Houdini AI Assistant: https://rart.gumroad.com/l/HoudiniAIAssistant

Instead of spending hours building node networks from scratch, the assistant helps speed up the process by generating the structure, parameters, and procedural logic from a prompt. In this video, I show:
- Prompt-to-HDA workflow
- AI-generated procedural asteroid setup
- Node-by-node inspection of the generated network
- Parameter tweaks and improvements
- Final result after just a few minutes

This is not just a generic chatbot. Houdini AI Assistant works inside Houdini and helps with analyzing, debugging, generating, and automating procedural workflows. If you are learning Houdini, building tools, or trying to move faster in production, this is exactly the kind of workflow that can save a huge amount of time. Try Houdini AI Assistant here: https://rart.gumroad.com/l/HoudiniAIAssistant

Thank you for all the support and feedback; it really helps me improve the tool. More updates and demos are coming soon. (1 point)
-
Sony Pictures Imageworks is located on the unceded traditional territory of the Musqueam, Squamish, and Tsleil-Waututh First Nations. We are committed to respecting traditional lands, and working with communities towards reconciliation.

Sony Pictures Imageworks Canada Inc., 658 Homer St, Vancouver, BC V6B 0T5

Sony Pictures Imageworks is an Academy Award®-winning visual effects and animation studio known for photoreal live-action visual effects, dynamic creature and character animation, and full-CG features.

Role Overview: We are looking to expand our development team dedicated to enhancing our Creature Effects, Crowds, Environment, and FX tools and workflows. We are looking for a highly proficient Technical Engineer to focus on the development and implementation of robust, high-performance, domain-specific procedural toolsets using SideFX Houdini. This role requires strong technical leadership and the ability to work effectively with minimal supervision alongside a Lead Software Engineer or Architect. The ideal candidate thrives in a fast-paced production environment where priorities can shift quickly.

What You'll Be Doing:
- Design, develop, and implement scalable procedural systems and workflows within Houdini for various departments (FX, Creature, Crowds, Environments).
- Create, maintain, and optimize robust Houdini Digital Assets (OTLs), ensuring stability, ease of use, and adherence to production standards.
- Extend the core pipeline by developing, integrating, and supporting tools that manage data flow, asset publishing, and version control.
- Collaborate closely with artists (Creature, FX, Lighting) to gather requirements and troubleshoot technical issues, providing support for both in-house and commercial tools.
- Partner with engineers across other teams (Shading, Lighting, Rendering) to ensure seamless integration of procedural assets into the overall rendering and compositing pipeline.
- Document tools and workflows thoroughly for use by the wider artistic and engineering teams.

Required Technical Experience & Skills:
- 3-5+ years of professional experience developing procedural workflows and tools in a VFX, animation, or game production environment.
- Demonstrated knowledge of the Houdini environment, including tool creation, optimization, and workflow techniques.
- Strong proficiency in Python and VEX for writing efficient custom nodes and tool wrappers.
- Proven experience developing and supporting production pipeline tools and asset delivery.
- Experience leveraging the Houdini API and other DCCs like Katana.
- Strong grasp of 3D math, linear algebra, and data structures as applied to geometry processing and simulation.
- Strong verbal and written communication skills, with a collaborative approach to problem-solving.
- Capable of delivering on multiple competing priorities with little supervision.

Preferred Skills:
- Applies curiosity and judgment to identify broader or systemic issues and recommends creative approaches that address them.
- Documented experience with desktop application development using PyQt/PySide to create custom user interfaces.
- Experience with C++ for high-performance plugin development.
- Experience with data science, machine learning, or complex simulation techniques (e.g., fluid dynamics, cloth) in a production context.
- Bachelor's or Master's degree in Computer Science, Digital Media, or a related technical field.

The anticipated base salary for this position is within the range of $100,000.00/yr - $120,000.00/yr CAD (up to $150K for senior candidates). The final compensation package will be commensurate with the candidate's professional experience, technical interview performance, and specific alignment with our team's requirements. Benefits are per company policy, which includes healthcare, tuition reimbursement, RRSPs, sick and vacation leave, and standard increases as applicable. The actual base salary offered will depend on a variety of factors, including without limitation the qualifications of the individual applicant for the position, years of relevant experience, level of education attained, certifications or other professional licenses held, and, if applicable, the location of the position.

We value unique perspectives and want diverse, unique talent to work with us. We encourage candidates from all identities to apply. Sony Pictures Entertainment is an equal opportunity employer. We evaluate qualified applicants without regard to race, colour, religion, sex, national origin, disability, age, sexual orientation, gender identity, or other protected characteristics.

Job Posting Link: https://www.imageworks.com/job-postings/4376 (1 point)
-
Small update on minimal surfaces using the Poisson equation and some more modern nodes. minimal_surface.hip (1 point)
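For anyone curious about the underlying idea: a discrete minimal surface with a fixed boundary satisfies the Laplace equation, so each interior point relaxes toward the average of its neighbours. A pure-Python sketch of that relaxation on a regular grid (an assumed textbook simplification, not the actual setup in minimal_surface.hip):

```python
# Minimal-surface relaxation sketch: Jacobi-iterate interior heights of a
# 2D grid toward the average of their 4 neighbours, keeping pinned cells fixed.
def relax(grid, fixed, iters=500):
    rows, cols = len(grid), len(grid[0])
    for _ in range(iters):
        new = [row[:] for row in grid]
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                if not fixed[r][c]:
                    new[r][c] = 0.25 * (grid[r - 1][c] + grid[r + 1][c]
                                        + grid[r][c - 1] + grid[r][c + 1])
        grid = new
    return grid

# 3x3 example: boundary pinned, top edge at height 1, the rest at 0.
g = [[1.0, 1.0, 1.0],
     [0.0, 0.5, 0.0],
     [0.0, 0.0, 0.0]]
fixed = [[True] * 3, [True, False, True], [True] * 3]
out = relax(g, fixed)
# the single interior cell settles at the average of its four pinned
# neighbours: (1 + 0 + 0 + 0) / 4 = 0.25
```

Jacobi iteration converges slowly; in practice a direct Poisson solve (as the name of the .hip suggests) gets the same answer in one step.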
-
Try this:
1. Attribute Promote SOP: turn the uv attribute from vertex to point.
2. Attribute Blur SOP: blur the uv attribute; make sure the method is set to Edge Length and "Pin Border Points" is checked on.
3. Attribute Promote SOP: convert the uv attribute back to vertex.
UV_relax.hip (1 point)
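The blur step above is essentially Laplacian smoothing of the uv values with the border held fixed. A rough 1D sketch of what that does (an assumed simplification: the real SOP works over the mesh edge graph and can weight by edge length):

```python
# Smooth a chain of uv values toward their neighbours' average,
# leaving pinned (border) entries untouched.
def blur_pinned(vals, pinned, iters=10):
    out = list(vals)
    for _ in range(iters):
        nxt = list(out)
        for i in range(1, len(out) - 1):
            if not pinned[i]:
                nxt[i] = (out[i - 1] + out[i] + out[i + 1]) / 3.0
        out = nxt
    return out

uvs = [0.0, 0.9, 0.1, 1.0]            # uneven interior spacing
pinned = [True, False, False, True]   # borders stay put
relaxed = blur_pinned(uvs, pinned)
# interior values even out between the pinned ends: [0.0, 1/3, 2/3, 1.0]
```

Pinning the borders is what keeps the UV island outline intact while the interior relaxes.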
-
I worked through the spin particles tutorial. The thumbnail looks nothing like the tutorial result; I don't understand why authors do that. This file demonstrates how to make randomly rotating particles lie flat after contact. This technique can be used with RBD debris as well. ap_tut_spin_particles.hiplc (1 point)
-
That's cool, I didn't see the files! He's using the HORIZONS system, which is very accurate but needs data to be cached and interpolated, or fetched over the internet (which may cause issues on a farm). I implemented the less accurate, continuous offline version with the Keplerian formulae. It was very interesting diving into this; looking up at the stars at night has become a lot more exciting to me, for sure! By the way, for the stars there's a neat recent post on Erwan Leroy's blog! (1 point)
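The core of the offline Keplerian approach is solving Kepler's equation, M = E - e*sin(E), for the eccentric anomaly E; there is no closed form, so it is done iteratively. A small sketch of the standard Newton iteration (a generic textbook formulation, not necessarily what the poster's file does):

```python
import math

def eccentric_anomaly(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for E via Newton's method."""
    E = M if e < 0.8 else math.pi   # common starting guess
    for _ in range(50):
        # f(E) = E - e*sin(E) - M, f'(E) = 1 - e*cos(E)
        d = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= d
        if abs(d) < tol:
            break
    return E

# Zero eccentricity means a circular orbit, so E equals M directly.
E_circ = eccentric_anomaly(1.0, 0.0)

# Earth-like eccentricity: E then gives the position on the ellipse.
E_earth = eccentric_anomaly(0.5, 0.0167)
```

From E you get the true anomaly and radius, and from those the continuous position at any time without cached HORIZONS data.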
-
I found a solution using a Switch solver, which does exactly what I needed. (1 point)
-
Have fun... Thanks to @t_hasegawa on Twitter for the file, plus the reading below. In the second example, make the attributes like in the first; it works for me. https://github.com/d3/d3-force https://observablehq.com/@d3/force-directed-tree ForceDirectedGraph_v002t03(1).hipnc (1 point)
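At the heart of d3-force style layouts like the linked force-directed tree is a per-iteration link (spring) force. A tiny sketch of that one force (an assumed simplification: real d3-force also applies many-body repulsion, centering, and velocity decay):

```python
import math

def spring_step(pos, links, rest=1.0, k=0.1):
    """Pull each linked pair toward the rest distance; returns new positions."""
    new = {i: list(p) for i, p in pos.items()}
    for a, b in links:
        dx = pos[b][0] - pos[a][0]
        dy = pos[b][1] - pos[a][1]
        dist = math.hypot(dx, dy) or 1e-9
        f = k * (dist - rest) / dist   # positive pulls the pair together
        new[a][0] += f * dx; new[a][1] += f * dy
        new[b][0] -= f * dx; new[b][1] -= f * dy
    return new

# Two points starting 3 units apart settle toward the rest length of 1.
p = {0: [0.0, 0.0], 1: [3.0, 0.0]}
for _ in range(200):
    p = spring_step(p, [(0, 1)])
```

In Houdini the same loop maps naturally onto a SOP solver or a POP setup, with the links stored as polylines.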
-
To streak the UVs along the side faces of the extrusion, you can unlock Vellum Post Process and, inside, uncheck Generate Unwrapped Texture Coordinates for Sides on both PolyExtrude nodes. Or maybe extract that extrusion portion and wrap it into your own cloth-thickness asset that you can append after the post-process, so you don't have to have unlocked nodes in the scene. Also, you can submit an RFE to promote this option or make it the default. (1 point)
-
This is really cool... Looking at the very first part, I did a simple setup yesterday trying to replicate it, but I can't say I got very far before running into issues. The problem with the POP setup is that it really wants to create the coral patterns: even using a polyframe to calculate outward normals and using them to guide velocity outwards, you get to a point where it starts to fold into itself and create those traditional differential-curve-type patterns... So perhaps something to rethink, or perhaps someone here has an idea..? Oh, and just generally, there's SOOO much friggin cool stuff in this video, I can barely watch without freaking out, wanting to try to replicate all of them, hehe... (1 point)
-
Hope you don't mind, Karl, I coded a version of this in VEX after checking out your file. It's slightly different from yours, but uses the same principle. I put two options in there: one to do even divisions between particles, and another to use a step size for divisions, so you always get a somewhat even distribution of new points. Match this to your FLIP particle step size, and I think that will provide optimized results. I also approximate normals on the point cloud and provide a random spread along the surface tangents, which can further help fill in the gaps. There's an option in there to try to detect isolated particles and delete them, but that will slow things down a lot. HIP and OTL attached. splashExample.hip pc_gapfiller.hda (1 point)
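The two division modes described above can be sketched like this (my reading of the post, not the actual code in pc_gapfiller.hda): either split each gap into a fixed number of even segments, or insert a point every fixed step so spacing stays roughly constant regardless of gap size.

```python
import math

def fill_gap(a, b, step=None, divisions=None):
    """Return new points strictly between a and b.
    Pass `divisions` for an even split, or `step` for a fixed spacing
    (matching it to the FLIP particle separation keeps density uniform)."""
    dist = math.dist(a, b)
    n = divisions if divisions else int(dist // step)
    pts = []
    for i in range(1, n):
        t = i / n
        pts.append(tuple(a[k] + t * (b[k] - a[k]) for k in range(3)))
    return pts

a, b = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
even = fill_gap(a, b, divisions=4)   # 3 new points at 0.25 spacing
stepped = fill_gap(a, b, step=0.3)   # spacing tied to the step size
```

With the step-size mode, small gaps get no extra points at all, which is why it gives the "somewhat even distribution" the post mentions.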
-
1 point
-
It has been a long while! Today I had need of this, and decided to try again. I think I got it working! It's barebones and not tested extensively (it's waay past bedtime!); I would love to hear if there is something I missed! I got the solution from http://www.flipcode.com/documents/matrfaq.html#Q40 ("Q40. How do I use matrices to convert one coordinate system to another?"). extractTransformMatrix_v001.hipnc (1 point)
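The Q40 idea boils down to: if A and B are the matrices of two frames, the matrix carrying frame A onto frame B is inverse(A) times B. A minimal sketch (assuming orthonormal 3x3 frames, so the inverse is just the transpose, and Houdini's row-vector convention):

```python
def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Frame A: identity. Frame B: a 90-degree rotation about Z.
A = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
B = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]]

# For orthonormal frames, inverse(A) == transpose(A), so the transform
# from A's coordinate system into B's is:
M = matmul(transpose(A), B)
```

In VEX the same thing is `invert(A) * B` with `matrix3` values, which also handles non-orthonormal frames.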
-
Hi, another method: the group you created is your preselect group, so let's call it preselect. Now put down another Group SOP, set it to Points, turn off the Enable toggle on the first page, go to the Edge tab, and turn it on. Turn on Edge Depth, and in the Points Group field choose your preselect group, then adjust your edge depth accordingly. I use this all the time. (1 point)