
Scatter Points Interpolation


sweetdude09


Hey guys,

 

Huge ask here, and I'm probably going to go into far too much detail with this description, so bear with me.

 

TL;DR: Trying to interpolate the points in a scatter sop, that are animated by a texture. 

 

Now let's get into it. To be honest, as a complete noob at this, it might not even be possible, or there might be infinitely smarter ways of doing this, so I could just be spinning my gears for no reason. Basically, what I've been trying to do over the past month (okay, a lot longer, but I'm super stubborn and didn't want to ask for help) is to have a Scatter SOP, driven by an image texture, not be so damn poppy. And sure, the relax iterations are great, but they don't actually help me, because rather than particles being generated every frame, I would prefer a field of particles to be generated once and then have their positions moved. This could be super simple if it weren't for the fact that point IDs don't really help here, due to how the Scatter SOP generates them. So rather than just being able to straight interpolate every N frames based on point ID, I can't, unless I want the points to totally ignore the image texture. I've tried tons of different methods, and I'm extremely close (I think), but instead of doing what I intended, I basically created a diffusion system instead...

 

So basically my goal right now is to come up with a way to give each particle its own unique particle pair, based on distance, that corresponds to the same sim N frames ahead. I've been diving into a ton of the point cloud documentation to see if it can be of any use, and it really has worked great. The issue is, I have no idea how to get each particle to have its own unique pair. I can totally get pairs with pcfilter based on position, but that gives me a ton of non-unique pairs (basically, point A goes to point B, but point C also goes to point B, instead of point D, and so on). I will attach a project file, so if this doesn't make sense, please refer to that.
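To make that concrete, the kind of thing I'm picturing is a brute-force Detail Wrangle that greedily pairs points, something like the sketch below. The input layout and the "pairpt" attribute are made-up placeholders, not anything from my file:

```
// Detail Wrangle sketch (Run Over: Detail), purely illustrative.
// Input 0: the current scatter, input 1: the scatter N frames ahead.
// Each point greedily claims its nearest unclaimed partner so that
// no two points end up mapped to the same target.
int ntgt = npoints(1);
int claimed[];
for (int i = 0; i < ntgt; i++)
    append(claimed, 0);

for (int pt = 0; pt < npoints(0); pt++)
{
    vector pos = point(0, "P", pt);
    // a handful of nearby candidates, already sorted by distance
    int cands[] = nearpoints(1, pos, 1e9, 32);
    int pair = -1;
    foreach (int cand; cands)
    {
        if (!claimed[cand])
        {
            pair = cand;
            claimed[cand] = 1;
            break;
        }
    }
    // "pairpt" is a made-up name; -1 means no free partner was nearby
    setpointattrib(0, "pairpt", pt, pair, "set");
}
```

Greedy matching like that obviously depends on the order the points are visited in, so it's only meant to illustrate the "unique pair" idea, not be the final answer.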

 

I don't really expect a simple solution, and I'm really open to any suggestions at this point. I haven't been able to focus on anything else because I'm so invested in this godforsaken project. It might turn out that I need to create a loop that checks for duplicates, but I can't seem to figure out the best way to go about that without having it fall apart on me...

 

Anyways, I hope some of what I said makes sense, and I apologize for this massive wall of text. Also, my project file probably makes no sense, and I'll happily explain any and all parts of my thought process behind each piece. This is also the latest iteration of this project, as I have tried like 500 different solutions. Thanks in advance for any and all help...

 

-Jake 

 

P.S. Totally open to the idea of using VEX for this as well; coding isn't alien to me by any means, so if it would be faster to execute this using VEX, I'm all for it.

PointInterpolation_ODFORCE_Package.zip


Maybe you should handle this in your source images. Rather than having a normal image sequence, generate delta images that add/remove from the current points. So the process would be: the red channel marks areas where points should be culled, and the green channel marks areas where points should be generated. Then run a simple VEX cull to randomly remove points in the red area and a scatter to add points from the green. The common areas would then not change/pop.
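For what it's worth, the cull half of that could be a tiny point wrangle, roughly like this (assuming the points carry uvs and the delta image path sits on a string parm; the names are placeholders):

```
// Point Wrangle sketch of the "cull in the red area" half of the idea.
// Assumes the points carry a uv attribute and the delta image path is
// on a string parm named "delta_map" (placeholder name).
string map = chs("delta_map");
vector clr = colormap(map, v@uv.x, v@uv.y);

// red marks regions where points should go away; remove them randomly
// so the dropout ramps with the channel value instead of hard-edging
if (clr.r > 0.5 && rand(@ptnum + @Frame * 0.137) < clr.r)
    removepoint(0, @ptnum);
```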


Sounds like you're trying to do this, right?

 

 

Not exactly. While that scene is very useful (I've only briefly looked into the Attribute Interpolate SOP, so thank you for showing me a scene with it), this is actually the inverse of the effect I'm looking for. Basically, I want a static grid with particles on it, and what I've been attempting to do is have the particles "advected" by an animated texture on the grid. Maybe there's a way to do it using the Attribute Interpolate SOP, but so far I haven't been able to make that work.

 

 

Maybe you should handle this in your source images. Rather than having a normal image sequence, generate delta images that add/remove from the current points. So the process would be: the red channel marks areas where points should be culled, and the green channel marks areas where points should be generated. Then run a simple VEX cull to randomly remove points in the red area and a scatter to add points from the green. The common areas would then not change/pop.

 

I tried something very similar to this! Though the way you're describing it is far simpler than what I did... I used a For Each to subdivide a mesh based on the luminance of an image texture, and then created points at the center of each primitive, which totally works. However, I realized that for the problem I was trying to solve, I wanted to keep the point count the same throughout the process, so I came to the conclusion that rather than culling, I just wanted position changes. Hence this mind-numbing problem... Maybe there's a way to combine the two thought processes and create a system kind of like Conway's "Game of Life"?

 

With that being said, I'm going to try it using the delta images like Fathom recommended (because it should keep the point count the same, so maybe it will work?), and see where that path takes me. My other thought now is to try the new grains system. Sort of like this video: 

 

Anyways, thank you both for your comments (I usually don't post on forums; it always feels selfish to me!).


Why do you want the point count to remain constant? And if you advect, how far do you advect each frame? Does it take longer than a single frame for the points to line up with the next image?

 

If the points move to positions defined by the current image being fed to the scatter, then it doesn't really matter how they move to get there, unless maybe you're motion blurring them. Some of this is bordering on optical flow territory, where you take two images and figure out a vector field that blends from one to the next. You might try looking into that, but honestly, it seems like overkill if you're just trying to avoid flickering.

 

 

Okay, wait. Here's another, even easier option: generate a constant field of points and then cull the ones not in the current image. Points won't move at all, they'll just turn on/off. The number of points will change each frame, but you shouldn't get much, if any, popping.
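A rough sketch of that on/off cull in a point wrangle (assuming the constant scatter carries uvs and the current image lives on a string parm; names are placeholders):

```
// Point Wrangle sketch of the constant-field on/off cull. Assumes the
// static scatter has a uv attribute and the current frame's image is
// on a string parm named "mask_map" (placeholder name).
string map = chs("mask_map");
float lum = luminance(colormap(map, v@uv.x, v@uv.y));

// keep points only where the image is bright enough
if (lum < chf("threshold"))
    removepoint(0, @ptnum);
```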


I agree with Miles. Still not totally sure what you're after in the end, but there's probably a simpler way to achieve it. Here's an example where the point count is constant, although the point numbers jump around from frame to frame. This is using animated UVs instead of a texture sequence, but same diff, y'know?

Nothing special needed for this: scatter a ton of points, read a texture onto them, and write a scalar value from that texture to sort by. The points are sorted by luminance (and reversed in order so point 0 has the highest luminance), then you can just delete all points over your desired number. Is this getting closer?
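The wrangle half of that is tiny; something along these lines writes the value to sort on (the parm and attribute names here are stand-ins, not necessarily what's in the attached file):

```
// Point Wrangle sketch of the sort attribute. Names are stand-ins and
// not necessarily what interp_tex.hip actually uses.
string map = chs("tex_map");
f@lum = luminance(colormap(map, v@uv.x, v@uv.y));

// Downstream: a Sort SOP by @lum (reversed so point 0 is brightest),
// then a Blast of points with @ptnum >= the desired count keeps the
// point count constant every frame.
```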

 

 

interp_tex.hip


First, I want to start by saying that both of you have been a huge help, especially in terms of thinking this whole thing through.

 

Alright, so my reasoning is this (and please, by all means, call me out if it's outrageous or just a piss-poor way of executing this idea): the reason I don't want particles to be deleted is that I would love to use either a Triangulate 2D or the points as the template for a Voronoi Fracture on the base grid, so that I can rebuild the original grid with polygons that flow according to the image texture being used to generate the points. So basically, where the points are less dense the polygons grow to fill out the area, and as the points shift towards an area of luminance the polygons get denser there, minus the terrible popping induced by using the Scatter to achieve this.

 

However, when points get culled, the "remeshing process," for lack of better articulation, gets pretty ugly, with popping and other artifacts. So instead of the polygons expanding and shrinking, they just pop from one area to another. Again, I might be spinning my gears for no real reason here. In hindsight, the whole "avoid flicker" phrasing was just silly on my part, apologies on that note. I just checked out the file you posted, Jesse, and it seems a lot closer to what I was thinking, and the setup is super interesting. Sorting by luminance seems pretty damn useful.


Okay, that makes more sense. Check this one out: inside the solver there's a Volume VOP node with two texture parms; you can use the current and next frame of the texture sequence there. This reads the textures into a volume and stores the delta values in voxels. That gives you a way to read the gradient of the difference between the images so you can advect your points. I did this in a crappy solver, but it should probably be done in DOPs and bundled with a Gas Particle Separate or something to keep them from bunching up.
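The advection step itself boils down to something like this inside the solver (a rough point-wrangle equivalent of the Volume VOP setup; the volume name and parm are assumptions):

```
// Point Wrangle sketch (inside a Solver SOP) of pushing points along
// the gradient of the texture-difference volume. Assumes input 1 holds
// a scalar volume named "delta" built from the current/next frames.
vector grad = volumegradient(1, "delta", @P);
float  amt  = chf("step");   // hypothetical step-size parm

if (length(grad) > 1e-6)
    @P += normalize(grad) * volumesample(1, "delta", @P) * amt;
```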

 

 

 

interp_tex.hip


Holy shit.

 

Dude, I honestly have no words. This file is exactly the kind of base I was looking for. Thank you so much, and thank you Miles as well. Next time I'm in the LA area (that's a very large area) I'm 100% buying you a beer. I'm going to try and set it up in DOPs either tonight or tomorrow morning so that I get a feel for that too. I actually just dove into DOPs for the first time a few nights ago, so it'll be interesting to try and apply it to this problem.

 

The gradient difference is a super smart solution; it feels like a couple of things I've done when comping during post. I'm honestly so floored, man. I literally never would have thought that you could use volumes like this.

 

Anyways, enough rambling. Going to dive into this project and see what kind of crazy stuff DOPs has in store for me :)

 

-Jake


Nice approach, tjeeds!

 

This one has a bit of a mod to give more interesting movement with some fake momentum! It's better to use the texture delta approach and gradient computation in conjunction with a smoke solver; that way the solver removes the divergence and you get even better momentum.

 

Cheers

 

interp_tex_vortx.hip

