StepbyStepVFX

Members
  • Content count

    143
  • Donations

    0.00 CAD 
  • Joined

  • Last visited

  • Days Won

    3

StepbyStepVFX last won the day on April 21

StepbyStepVFX had the most liked content!

Community Reputation

27 Excellent

1 Follower

About StepbyStepVFX

  • Rank
    Initiate

Contact Methods

  • Website URL
    https://vimeo.com/stepbystepvfx

Personal Information

  • Name
    JO
  • Location
    France
  • Interests
    VFX, Computer Graphics, AI, Deep Learning, Cinema and many more...
  1. POP COLLISION DETECT

    Hi, maybe it's the setup of your collider: use a volume when you work with particles. I am also wondering whether it is really necessary to use a Static Solver and a Static Object to get bounces: maybe you can use the POP Collision behavior? Not sure... By the way, Jeff Wagner talks about that in the first 20 minutes of this video: Hope this helps
  2. Add points to packed geo

    Indeed, it is in this kind of situation that you learn the most! Let me know if "divide and conquer" works on your problem.
  3. Add points to packed geo

    I understand better. You want to simulate all 10,000 feathers with the Wire Solver (each wire containing only a few points to represent a low-res version of each feather?), and instance and point-deform a packed version of a feather on each wire? If so, I would make the wire sim first, and afterwards loop over each wire (divide and conquer: you trade the memory problem for more space on disk), point-deform my high-res feather, save it to disk (.bgeo), and use it later as a packed disk sequence primitive (File SOP set to Packed Disk Sequence). And depending on how strong your computer is, to save time, I would do it in batches of 1,000 wires (you can handle 1,000 feathers at a time? 4 million points won't kill a good laptop), and you would end up with 10 sequences of 1,000 feathers that you just need to load later for rendering... If not, if you only want to animate one template feather (though I doubt it if your animal is moving), it is even easier: save the animation of one point-deformed feather to disk (.bgeo) and instance it on your animal through a File SOP set to Packed Disk Sequence. Apologies if I am wasting your time and not answering / understanding your problem correctly, but this topic is quite interesting :-)
  4. how to create Ink in water

    I remember a tutorial about that on cmiVFX: https://cmivfx.com/houdini-ink-fx Never bought it, never watched it, but it seems like what you are looking for. And there is this one from Entagma (you can use it as a starting point, although in their setup they use a thin layer of fluid that they reproject onto a plane, to focus on the nice turbulence - but the key is to have fluids with different densities; so to answer your question, I would use FLIP, give the particles one density, and change the density for the bunch of particles that are inside a drop of ink, for example, coloring them differently):
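    A minimal VEX sketch of that last idea (a Point Wrangle over the FLIP particles), assuming your FLIP setup is configured to read a per-particle density attribute, and approximating the ink drop with a sphere whose centre and radius are placeholder channel parameters:

        // Point Wrangle over the FLIP particles (sketch only).
        // Particles inside a hypothetical "ink drop" sphere get a different
        // density and color; everything else keeps the water density.
        vector drop_center = chv("drop_center");   // placeholder channels
        float  drop_radius = chf("drop_radius");

        if (distance(@P, drop_center) < drop_radius)
        {
            f@density = 1200.0;      // heavier "ink" particles (example value)
            @Cd = {0.1, 0.0, 0.3};   // tint them so the mixing is visible
        }
        else
        {
            f@density = 1000.0;      // plain water
            @Cd = {0.8, 0.9, 1.0};
        }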
  5. RBD fracture - id per set of constrained pieces

    You want your packed objects to have an attribute that is the same among them when a constraint is still alive between them at the end of the sim? I think you can manage that with a Wrangle inside a solver or multisolver, as in the sketch below. But how do you deal with the case where, say, you have three pieces, one constrained to the two others (one object appearing in two constraints), but the others having only one constraint each? You can't have a unique attribute... Can you explain more about what you want to achieve?
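    If the goal is a shared id per group of pieces that are still glued together, here is a minimal sketch of the label-propagation idea; the "cluster" attribute name, its initialisation, and the mapping back to the packed pieces are assumptions made for the example:

        // Primitive Wrangle over the constraint network, inside a SOP Solver
        // (sketch only). Assumes each constraint point carries an integer
        // "cluster" attribute initialised to its own point number on the
        // first frame; every surviving constraint then pulls both of its
        // ends down to the smaller value, so after a few solver iterations
        // all pieces that are still linked share one id. Mapping that id
        // back to the packed pieces (e.g. via the "name" attribute) is a
        // separate step.
        int a = primpoint(0, @primnum, 0);
        int b = primpoint(0, @primnum, 1);

        int ca = point(0, "cluster", a);
        int cb = point(0, "cluster", b);
        int c  = min(ca, cb);

        setpointattrib(0, "cluster", a, c, "set");
        setpointattrib(0, "cluster", b, c, "set");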
  6. Prism pipeline

    Is it a kind of "poor man's Shotgun" (i.e. asset management system + task follow-up)?
  7. Add points to packed geo

    Hello, packed primitives (if this is what you are using) are just a way to encapsulate your geometry so that, precisely, you only manipulate it like a solid object, using only one point (and one primitive containing that point) to carry all the attributes required for an RBD simulation, for example. So if you want to work on the base geometry again, you have to "unpack" your packed object with the Unpack node. Why are you using packed geometry? Is it for simulation purposes? For instancing? Depending on that, maybe it will be easier to work only on the unpacked mesh. You can also take your sim and the P, v and w attributes from your packed prims and apply them to meshes other than the packed ones (separating the animation of the feather from your sim). With more info on what you are trying to achieve, we may be able to help you more :-)
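    To illustrate that last point, a minimal Point Wrangle sketch that moves a replacement high-res mesh with its simulated packed piece; the "piece" attribute linking each point to its packed primitive is an assumption made for the example:

        // Point Wrangle on the replacement (high-res) geometry (sketch).
        // Second input: the simulated packed primitives.
        // Assumes each point has an integer "piece" attribute telling which
        // packed primitive drives it; we fetch that primitive's full
        // transform intrinsic and apply it to the rest position.
        int    piece = i@piece;
        matrix xform = primintrinsic(1, "packedfulltransform", piece);

        @P *= xform;            // move the point with its packed piece
        @N *= matrix3(xform);   // rotate the normal too (no translation)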
  8. Deform Bend Camera Ray?

    As far as I understand, the shaders only use built-in raytrace functions, so my guess is that to achieve such a result you would have either (i) to cheat (like what you did by stacking various planes that "diffract" or bend the rays), or (ii) to write your own renderer :-) I guess this is what they did for Interstellar. Just curious: are you trying something "physically based", that is to say taking into account the mass of surrounding objects to "bend" space (or bend the rays)? The cheat I would use, because I don't have the skills to write my own renderer or my own raytrace functions, would be to use particles, placed at time = 0 on each point of a grid (each pixel of a fake focal plane of your camera), give them a mass (photons don't have one, but this is to cheat the curvature of space around a massive object: light goes straight, but in a curved space...), throw them straight into the scene, apply force fields based on the surrounding objects (forces in 1/distance^2, or other exotic potential fields found in astrophysics books), and retrieve the color of the surfaces they hit... Once they all have colors, you can reconstruct the image knowing the initial position of each particle. You could even get more samples per pixel by increasing the resolution of your grid, but you would lose secondary rays, shadows etc... except if you cheat one more time, by "baking" the textures and lighting first and then just playing with the particles / pseudo-primary rays of this poor renderer to get the "gravitational lens" FX... Hope this helps :-)
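    A minimal POP Wrangle sketch of the inverse-square "gravity" cheat, assuming the massive objects are represented by points in the wrangle's second input carrying a hypothetical float "mass" attribute:

        // POP Wrangle (sketch). Second input: points standing in for the
        // massive objects, each with a hypothetical "mass" attribute.
        // Each pseudo-photon is pulled toward every mass with a 1/d^2
        // falloff, which is enough to fake the bending of its path.
        float G = chf("gravity_strength");   // placeholder constant

        int npts = npoints(1);
        for (int i = 0; i < npts; i++)
        {
            vector mpos = point(1, "P", i);
            float  m    = point(1, "mass", i);

            vector dir = mpos - @P;
            float  d   = max(length(dir), 0.01);   // avoid blowing up near the mass

            v@force += normalize(dir) * G * m / (d * d);
        }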
  9. Round the value in the font node

    Check this thread, there may be useful info for what you want to do:
  10. UV unwrap without holes

    By the way, there is this 3-minute tutorial that can help you (but instead of doing it manually like she does, you can try to select your seams using the "algorithm" I described above, to automate it based on a threshold on the size of your islands):
  11. retrieving attributes from objects within refractions

    Why don't you apply a constant shader to your objects, with the color you want them to have in the matte? For the shader of your sphere, just keep the refraction and kill the specular, and it should work, shouldn't it?
  12. UV unwrap without holes

    If the shapes are not too different after the remesh, you can indeed transfer UVs between the two geometries, like any other attribute. You can use the Attribute Transfer node, which takes two inputs and transfers based on distance, with options to average the values etc. You can even do your own version using xyzdist and primuv (the uv of each polygon) and transfer the UVs of vertices or points based on that. But concerning your initial question, I am wondering if there is not a way to process the UVs to fill the holes: use the Connectivity node and check "use UV connectivity"; then use the Measure node set to "area"; then inside a For Loop based on the class attribute from the Connectivity node, promote the area to detail, set to "sum" and not average, so that you get the area of each UV island, and promote it back to primitives. Then for each primitive you look at the area, and if it is lower than a certain threshold (i.e. it is an isolated polygon), you sew its UVs with its neighbours (I think there is a node that can do that? Or maybe loop over its points, get the list of neighbouring primitives, and create its UVs by averaging the UV values of its neighbours). And you loop that process as many times as necessary to fill the holes. I haven't tried it, that's just how I would explore it to find a solution... Hope this helps :-)
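    A small Primitive Wrangle sketch of the thresholding step, assuming the per-island area has already been promoted back onto the primitives under a hypothetical "island_area" attribute name:

        // Primitive Wrangle placed after the Connectivity / Measure / promote
        // steps (sketch). Assumes each primitive carries the total UV area
        // of its island in a float "island_area" attribute (the name is an
        // assumption for this example). Small islands get dropped into a
        // group you can then sew or re-average with their neighbours.
        float threshold = chf("min_island_area");

        if (f@island_area < threshold)
            setprimgroup(0, "small_islands", @primnum, 1, "set");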
  13. Indeed, arccosine cannot know whether your angle is x or 2*Pi - x, as both have the same cosine. To make it work fully, you need to build an up-vector, take the cross product between your up-vector and your "aim vector" (the direction the camera is pointing), and then take a dot product between this new vector and the "camera-to-point" vector (as before). If they are in the same direction, the dot product will be positive, and the arccosine gives you the right angle; if it is negative, then you have to take 2*Pi - arccosine. Just double-check that I haven't mixed up my trigonometry before implementing that, I am doing this quickly without verification :-)
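    A minimal Point Wrangle sketch of that disambiguation; the camera position/aim channels and the up-vector are assumptions for the example:

        // Point Wrangle (sketch). Computes a full 0..2*Pi angle between the
        // camera aim direction and the camera-to-point direction, resolving
        // the acos() ambiguity with an up-vector and a cross product.
        vector cam = chv("cam_pos");             // placeholder channels
        vector aim = normalize(chv("cam_aim"));
        vector up  = {0, 1, 0};                  // assumed up-vector

        vector cp  = normalize(@P - cam);
        float  ang = acos(clamp(dot(aim, cp), -1.0, 1.0));

        // "side" points to one side of the aim direction; its sign against
        // cp tells us whether to keep acos() or take the reflex angle.
        vector side = cross(up, aim);
        if (dot(side, cp) < 0)
            ang = 2 * acos(-1.0) - ang;          // 2*Pi - angle

        f@angle = ang;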
  14. Single Threaded vs Mulithreaded constraint solvers.

    I think in production they tend to freeze software updates when they start a project, to be sure the pipeline will remain stable. I think the only solution in your case is to see if they kept the old solver node in 16.5...
  15. I will try to open your file and post a result when I can access my computer (when will there be a Houdini version for iPad??? :-)