
Bullet or RBD for fracturing/crumbling effects?


Scratch


Hey folks,

I'm currently working on a rock crumbling/fracturing effect and I am not sure which solver to use.

Bullet-Solver

+ fast

+ easy to set up

+ nice (artistic) control using glue-networks

- (and I mean a BIG MINUS): popping simulation behaviour, exploding on frame 1 because of interpenetrating convex hull collision geo (we all know the deal...)

RBD-Solver

- way slower than bullet

+ (and I mean a BIG PLUS): much better collision handling (volume based, so the shape, whether concave or convex, is more or less irrelevant)

Thoughts and actual approach:

I tried bullet: I fractured my geo using voronoi without clustering, hoping to get convex pieces, but some pieces still come out not entirely convex (due to my input geo, a rock), which causes interpenetrations in the sim once they are converted to convex hulls. I think I may not be able to avoid some pieces coming out somewhat concave, because I don't know how to prepare the geo in the modeling stage (ZBrush sculpted) so that it fractures into entirely convex pieces. RBD would solve all those problems, but I want control over my sim, so bullet with its glue networks would actually be my first choice, if it weren't for this damn (sorry) interpenetration -> popping simulation problem.

Do you guys have any ideas, tips or tricks in mind for me? Every bit of help is much appreciated! Thx in advance for your time! I'm looking forward to your answers!

Cheers from Austria

Philipp


Does anyone have an idea how to prevent my simulation from exploding on frame 1?

I know it has to do with convex vs. concave geometry, but I don't know how to ensure all my pieces are convex. Since a rock isn't a sphere, there will always be some areas that are somewhat concave. How can I deal with such geometry when using bullet?


I would go with bullet - and have been using it for a while in production now.

As you noticed, the main issue is making sure your pieces are convex and not interpenetrating. The key to getting around concave pieces is to constrain several convex pieces together to form the required concave shape.

You can perform a check to see if pieces are concave or convex. If they are too concave, you can cut them up into smaller chunks that will be constrained together.

An example would be a coffee cup, which is concave if you take it as a whole object. However, if you cut the cup into 10 pieces, each individual piece is a lot less concave and you get a more accurate representation of the cup.

In regards to testing the geo for concavity, you can measure the surface area of the object and attribpromote that to the detail level so you have the total surface area. Then compare that against the surface area of the convex hull of the object (use the tetrahedralize sop), again attribpromoting the measured surface area to detail.

*) The convex hull will always be the "perfect convex shape", which will have less surface area than any more concave version of the same object.

You can define the following ratio:

(convex hull area) / (geo surface area)
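In wrangle/VEX terms the comparison could look something like this (just a rough sketch: it assumes you have already run a measure sop on both the piece and its convex hull and attribpromoted the resulting "area" primitive attribute to the detail level with a sum, as described above):

[CODE]
// Detail wrangle, input 0 = the piece geo, input 1 = its convex hull
// (both carrying a summed "area" detail attribute from measure + attribpromote).
float geo_area  = detail(0, "area", 0);
float hull_area = detail(1, "area", 0);

// 1.0 means perfectly convex, lower values mean more concave.
f@convex_ratio = hull_area / geo_area;
[/CODE]

You can then test f@convex_ratio against your threshold to decide whether a piece needs another split.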

Based on that ratio you can set up a recursive voronoi split operation that runs until an acceptable concavity threshold has been reached. You could also perform the splits manually, as that would be more intelligent than the voronoi split, or take a semi-intelligent approach by scattering more fracture points where curvature is high.
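For the curvature-based scattering, a minimal sketch (assuming a float "curvature" point attribute has already been measured beforehand; the attribute names and remap range are just examples you would tune):

[CODE]
// Point wrangle before the scatter: turn curvature into a density bias
// so more voronoi points land on strongly curved (concave-prone) areas.
float maxcurv = chf("max_curvature");            // set to a sensible value for your geo
f@density = fit(f@curvature, 0.0, maxcurv, 0.1, 1.0);
[/CODE]

Feed that density attribute into the scatter sop that generates your voronoi points.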

You then keep the pieces together with a gluecluster and an rbd fracture object. Depending on the cleverness of your splitting you will end up with more or less pieces.

This works quite well for an intermediate number of pieces, generally fewer than 20,000, because of the dops overhead.

If you want to read up on some more general info on rbd dynamics this was an interesting discussion:

http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=20909

--> In regards to volumetric splitting, OpenVDB will revolutionize that when it is part of H12.5.


Wow! Thanks a lot for taking the time to answer! :) I really appreciate it! I have to digest and try everything you've said and will be back as soon as I have any reasonable results.

If there is more or something else to know, don't hesitate to throw it in.

Again thx, and see ya soon!


Hey again,

I dug through all your helpful stuff (pclaes' post, woodenduck's hint about the SideFX bullet tutorial) and I am happy to tell you that the sim works now! No explosions! YAY! :)

@pclaes:

I successfully implemented your method - which is genius by the way!

I fractured all simulation-relevant objects/rocks (not just the ones which are about to crumble) based on their curvature until the ratio of convex hull area / geo area matched within an error of about 1-4%. I ended up with rocks roughly divided into 500-1000 pieces. When bullet then went to create the convex hull collision geo, it did so for every single piece instead of for the whole object. All the separate pieces together represented the shape better than just one big, inaccurate convex hull collision piece. This method works even if your geo (like mine) has concave areas. Awesome!

What I noticed, though, is that bullet is now waaaaay slower. I guess that's because it now has to handle around 4000 RBD pieces instead of roughly 350 (when I only fracture the rock that is supposed to crumble and leave all other static meshes in one piece). Is that the reason, or did I mess something up here?

Furthermore, is bullet not multithreaded?! I only get 10-15% CPU utilisation, with most of my cores sitting around idle. Can't that be used more efficiently, so that it uses all the computational power I have?

Despite these new questions, I took a big step forward over the last couple of days, so I want to say thanks again for your great support! I don't take it for granted to get help from Method's Lead FX Artist! ;)

See you around!

Philipp


Hey Philipp,

Glad the method is working for you, and you have also found its weakness. Striking the balance between more pieces and a more accurate representation is the tricky part. The slowdown is coming from the increased number of pieces and constraints and the overhead in dops.

There are a few things you can do about this:

1 A) Bullet is polygon based, so if your source object has a high polygon count (subdivided geo), you really want to reduce the number of source polygons.

1 B) You can fracture the high res geometry alongside the low res and set up a lookup id which you can then use to look up the transformation attributes (P and orient) from the pointcloud coming from dops. -- This is way faster than writing the entire geometry to disk or copying every piece to the corresponding point.

1 C) Ideally the high res detail is added at rendertime through displacement maps. We used ptex on Wrath of the Titans to store that data - you might find this talk interesting: http://siggraphencore.myshopify.com/products/2012-tk145

( This one is interesting too -but unrelated to rbd - and uses a similar method for the displaced detail: http://siggraphencore.myshopify.com/products/2012-tk146 )

2) In regards to the cutting, sometimes it can be better to put in a few big cuts manually first (I tend to use lines or curves as input points for the voronoi as they allow me to slice geometry up in "bricks") and then feed that into your procedural split system; it can converge a lot quicker depending on the shape.

3) I don't know how much of the Houdini bullet implementation is multithreaded. The constraints also add quite a bit of overhead, so you want to limit the number of constraints per piece. The tetrahedralize sop that is often used for creating them tends to generate way too many, so I tend to filter them and make sure a single piece does not have more than 4 or 5 constraints (generally I keep the 5 shortest, as they represent the closest neighbours - you get a nice adaptive constraint network out of this), which makes sure all pieces are still connected (as opposed to deleting constraints based purely on their length). The lower number of constraints is also more manageable artistically when breaking them in a sopsolver or so (see the sketch after this list).

4) You can shrink the pieces with the shrinkwrap sop; this is what bullet uses internally to compute the "bullet geometry" in dops. I sometimes prefer to turn that off in dops and prepare my geometry first in sops, so I know exactly how big my gaps and offsets are - and so the shrinkwrap overhead does not have to be recomputed every time the sim restarts. -- Since you won't actually be using these pieces for rendering (you'll be using the transformation matrix), you can get away with quite a bit of a gap.

4 B) For example, for this spot: http://www.youtube.com/watch?v=686S_NcudLY

I had a lot of tiny pieces (40,000+), but the sim only had a few bigger chunks cut from a proxy geometry (around 800 or so). The transformation matrix from those 800 pieces was then transferred to the 40,000 small pieces. Running a bullet sim with 40,000 unique pieces, all constrained, and trying to art direct that is just way too hard and slow with the current implementation. The nice thing about running a sim with 800 pieces is that it is fast, and you can revisit the pieces that you want to fracture more in a second sim that basically fractures some of the hero pieces and makes them collide with the already existing sim. Granted, that does not always look correct in regards to mass, but it often works.

-- Also for that spot we had around 4 weeks start to finish. I was lead, managing 3 others ( 2 fx & 1 lighter: 1 dust, 1 secondary hero fracture pieces, 1 lighting - and me setting up the main sim).

5) I also really look forward to what can be done with the bullet sop that Milan is developing -- even though it is a bit more of a black box, it does not have the overhead of dops and can handle a ton more pieces (the convex/concavity setup would still work): http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=19952&start=350
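Regarding point 3, here is a rough sketch of the kind of constraint filtering I mean. It assumes a "glue adjacent" style network where each piece is represented by a single point and the constraints are two-point polylines between those points - adjust to however your network is actually built:

[CODE]
// Point wrangle over the glue network (one point per piece assumed).
int maxper = chi("max_constraints");             // e.g. 5

int prims[] = pointprims(0, @ptnum);             // constraints touching this piece
float lens[];
foreach (int pr; prims)
{
    int a = primpoint(0, pr, 0);
    int b = primpoint(0, pr, 1);
    append(lens, distance(point(0, "P", a), point(0, "P", b)));
}

if (len(lens) > maxper)
{
    float sorted[] = sort(lens);
    float cutoff = sorted[maxper - 1];           // length of the maxper-th shortest
    foreach (int i; int pr; prims)
        if (lens[i] > cutoff)
            setprimgroup(0, "too_long", pr, 1, "set");
}
[/CODE]

Blast the "too_long" group afterwards and you end up with roughly the N shortest constraints per piece.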

Good luck with it and happy learning - pass the knowledge along!


Hey Peter,

wow, another great post that gets my head spinning!

1) A/B and 4)- lowres geo -> sim -> position data to highres geo

I get the overall idea of what you say - an interesting and convenient approach! Could you maybe explain the process of simming lowres pieces and transferring their position data to the highres pieces using that lookup id in a bit more detail? (Maybe a quick, simple example file, if that saves you a lot of writing?) The concept makes a lot of sense, but at the moment I lack the knowledge to tell it to Houdini. I'd love to try that technique though! My sim is heavy, but not that mega-complex, and would also work without that trick if it had to. Even so, I am very interested to see how far I can optimize it. :)

4) I can understand that it must be possible to transfer position data for a matching number of pieces (with the approach described above), but for more pieces than you simulated? How is that done?

1) C - Displacement

I learned in a Digital Tutors video that if you displace based on normals, you will mess up the fractured geo and the pieces will not match 100% when they are packed together. It happens because the normals of the pieces face each other at the inside and contact areas; if you displace along them, they produce interpenetration. The solution shown in the video was simply to use a noise on the point position directly. Here is the link to the video: http://www.digitaltu...ng.php?vid=7108 (subscription required :( )
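If it helps, that "noise on point position" trick boils down to something like this (just a sketch; it assumes a rest attribute was stored, e.g. with a rest sop, before fracturing, and the attribute/parameter names are only examples):

[CODE]
// Point wrangle (or the same logic in a displacement shader):
// displace by a noise field evaluated at the rest position instead of along N,
// so both faces of a crack sample the same offset and keep matching.
float amp  = chf("amplitude");      // e.g. 0.05
float freq = chf("frequency");      // e.g. 4.0

vector n = noise(v@rest * freq);    // vector noise, roughly 0..1 per component
@P += amp * (n - 0.5);
[/CODE]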

It works, but it sounds a little odd to me, so I am wondering how to implement displacement correctly.

Ptex as you use it is for sure a great method, because it requires no uv's, and allows vector displacement. So far so good.

But what about the inside faces? If you paint a ptex texture for a rock, with a color and displacement map, and then you fracture it, you get areas that are new and untextured (the inside group).

2) Point scattering based on curvature completely did the trick for my rocks. But I'll keep your advice in mind if I encounter geometry where it makes sense to use this method!

3) Reducing the number of constraints to optimize performance

How can I limit the number of glue constraints in the glue network? The only parameter which seems to affect the network structure is the Points per Area slider. (I generate my glue network using the shelf tool "glue adjacent", selecting the whole fractured object.) I checked the tetrahedralize sop; it generates something similar to the glue network when choosing "connected polylines", but I don't understand how that relates to the glue network constraints.

5) Thanks for the hint! I gave it a quick read, but I guess at the moment I have my hands full just understanding the basic concepts of bullet in its standard implementation. I'll keep an eye on that thread though!

I can't stop saying thanks. That is so much useful and helpful information! As soon as I understand that stuff myself, I am happy to pass it along to whoever asks! :)

EDIT: I totally forgot to show you how it looks at the moment. There are no glue constraints yet:


Hey guys!

I found some lowres - highres RBD workflow explanations on the web. I'm not entirely done reading, but I wanted to share them instantly in case someone else is searching for this.

http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&p=123963&highlight=&sid=f56cd838fae24356ea49dc81d50d7915

I also found something related to that topic in the help documentation:

http://www.sidefx.com/docs/houdini12.1/nodes/sop/dopimport

Reading now :)


hey, Scratch. This thread is becoming more and more interesting. I've been following it from the beginning.

I'm personally interested in two things so far.

Firstly, could you show a quick sample of the recursive cutting with the curvature test? Any sample object (sphere or whatever) will do...

The second thing I would like to see is what Peter mentioned about using the lores transformation matrix on a hires object. I understand you have to use a VOP SOP, but I still haven't tried it yet.

Given enough time I plan to try both things myself, but I'm asking just in case anybody else has already done their own test that I can check, to save precious minutes of my increasingly reduced spare time. :unsure:


Hey Netvudu,

Glad you like it! It is a very interesting topic, and it's good to see that my questions are also helpful for you guys in the end :D

Regarding cutting with curvature test:

I didn't use a recursive (automated) setup. I just plugged a few sops together to get it working, so I am sure you can push my setup further and maybe make a digital asset out of it which you can re-use.

Anyway, I added an example to this post showing the technique I used (based on Peter's concept). It gave me good results.

lowres-highres workflow:

I am just reading through the help files. It seems there are example files shipping with Houdini that show the exact process. You can find them in the second link in my post above.

Hope that helps! We will figure this out together, guys! :)

fracture_based_on_curvature.zip


@Netvudu: Sure, no problem!

@All:

Update lowres-highres workflow:

I dug a bit further into this topic, and I think I now understand how to do it the traditional way: using rbd point clouds (a dopimport node in the mode "create points to represent objects", maybe cached to disk) and a copy sop (stamping the incoming pieces). It is perfectly explained in the example files that come with Houdini (I posted the link earlier in this thread). Great -> progress :) What I still don't get is how to implement Peter's method, using a lookup id attribute, especially when going from a lowres sim with a few pieces to highres geo with many more pieces. Can you explain that, Peter?

Update Displacement:

I found a guide which shows how to export ptex from Mudbox and implement the generated maps into Houdini. http://3dexport.com/...mudbox-houdini/

That's a good start, but what about the areas which are procedurally generated during the fracture? The inside areas of the pieces (the inside primitive group) aren't covered by the painted ptex texture. Do you use a different (procedural) displacement shader there? And do the cracks between the pieces (the facing areas) still match after applying the displacement? I think they should, but only when using vector displacement, not displacement along normals. Correct?

Still a lot of ground to cover! I'm preparing some tests to figure out the displacement question. Hopefully I can show you some results soon. I am a bit stuck on the other question (the lookup id etc.), but maybe you guys can shed some light on that :)

I'll keep you posted!


Hey, in regards to that lookup id attribute.

You do that using pointclouds to transfer the id from the low res to the high res.

Example: you have sphere A, which you fracture, and you end up with 10 pieces (let's say this is the low res fracture). Then you have that same sphere, but you fracture it with all the noises turned on and the high res subdivided version of the pieces, etc. This is sphere B, and it ends up being 14 pieces because of the deformation and the slice planes.

Either you use the "piece" attribute directly that comes from the voronoi -- or, if you have deleted/filtered away some pieces (like the crappy super tiny pieces), you just regenerate the "piece" and name attributes with the connectivity sop.

Now you need to create "center point" pointclouds for both of them (foreach piece - add sop, centroid("../each")), and for pointcloud A you need to make sure that the "piece" attribute is added to the point. Then you sort that pointcloud by the "piece" attribute -- now your point numbers are aligned with your piece numbers, which is what will allow the lookup in a vopsop later on.

You create the lookup attribute on the pointcloud for sphere A, then you transfer that to the pointcloud of sphere B. The lookup attribute for sphere A is simply the point number of the point after it has been sorted. After you have transferred it to sphere B, there will be some points on sphere B that have the same lookup value; this is fine -- those two or more pieces will just be clustered together and transformed by the same point.

Now you run the dop sim on the pieces from sphere A (after filtering or regenerating your piece and name attributes if you had to). The name attribute is what is used by dops; the "piece" primitive attribute is very useful to have on your pieces, as it allows you to do all kinds of other tricks and triggers, since it replaces the $OBJID variable. Anyway, that's another topic.

So when you bring your pieces back from dops with the dopimport as points, those points are numbered based off the names of the pieces.

Now let's first apply the transformation matrix to the pieces of sphere A (*transform 1):

*) attributepromote the "piece" primitive attribute to points

*) create a vopsop with two inputs: plug the pieces geometry into the first input and the dops points into the second.

*) use an attrib import to bring in the orient and P attributes, set the opinput to 1 so it reads those attribs from the 2nd input.

*) use an attrib import to bring in the "piece" point attribute (opinput 0) and plug this into the point numbers for the import orient and import P vops.

Now build and apply your transformation matrix as I describe here:

http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&p=124079#124079

Hope that works.
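For what it's worth, the same lookup and transform can also be written as a single point wrangle instead of the vopsop. This is only a sketch under the assumptions above: the dopimport points are numbered to match the piece values and carry P and orient, the dop objects were created from the same rest geometry (so orient starts out as identity), and the rest centroids (pointcloud A) are wired into a third input - the names are just examples:

[CODE]
// Point wrangle over the rest-position pieces of sphere A.
// Input 0: fractured rest geometry, "piece" promoted to points.
// Input 1: dopimport points (current P and orient, point number == piece number).
// Input 2: pointcloud A, the piece centroids at rest, sorted by piece.
int id = i@piece;

vector  simP   = point(1, "P", id);        // simulated piece position
vector4 orient = point(1, "orient", id);   // simulated piece orientation
vector  restC  = point(2, "P", id);        // piece centroid at rest

// rotate about the rest centroid, then translate to the simulated position
@P = qrotate(orient, @P - restC) + simP;
[/CODE]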

Now for sphere B there is a slight extra step in between.

You need to get the piece attribute from pointcloud A onto the pieces of sphere B.

1) After you create the pointcloud from sphere B (let's name the attribute on the points "piece_b" to make it less confusing), you sort the points by "piece_b".

2) you have the pointcloud from sphere A (with attribute piece_a -- and sorted by piece_a)

3) you have the pointcloud from sphere B (with attribute piece_b -- and sorted by piece_b)

4) attribtransfer "piece_a" from pointcloud A to pointcloud B

5) you have the pieces from sphere B, attributepromote the "piece" attribute to points and name it "piece_b"

6) create a vopsop with two inputs: first input the pieces geo from sphere B, second input the pointcloud for sphere B after the "piece_a" attribute from pointcloud A has been transferred onto it (step 4)

7) create an import attribute vopsop that imports piece_b (opinput 0)

8) create an import attribute vop that imports piece_a (opinput 1) using the imported piece_b as the point number

9) add an addattribute vop for piece_a

10) now that you have the piece_a attribute on the geo of sphere B, you can add the same transformation vopsop as in (*transform 1)
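Step 4 can likewise be done with a quick point wrangle instead of an attribtransfer, if you prefer (a sketch using the same naming as above):

[CODE]
// Point wrangle over pointcloud B (input 0), pointcloud A wired into input 1.
// Grab piece_a from the nearest low-res centroid point.
int nearest = nearpoint(1, @P);
i@piece_a = point(1, "piece_a", nearest);
[/CODE]

Steps 7 to 9 are then the same point-number indexing trick as in the sphere A setup, just going through piece_b first (import piece_a from input 1 using piece_b as the point number) before applying the transform in step 10.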

I hope this helps and wasn't too confusing -- I don't have that much time right now, but I will be creating some example files, digital assets and tutorials on this stuff at some point soon, as there are a bunch of other things I want to cover too.

Good luck!

Peter


Peter, what is the point of doing it this way?

Let's say we have 10 proxy chunks and 20 hipoly chunks. We refer each hipoly chunk to the appropriate proxy chunk, then simulate the proxies in dops and get 10 (== proxy count) unique transforms. Now we find the appropriate transform for every hipoly chunk via its stored reference to a proxy. But we still have only 10 unique trajectories (transforms), so some hipoly chunks will be clustered together. Why do we need more hipoly chunks than proxies if they still transform as one object?

The only answer I can find is to layer another sim on top in which you "destroy" these hipoly clusters. But then again it is better to use proxies for that sim too, so you would need a new proxy for every hipoly chunk.

thx in advance


We just refer hipoly to appropriate proxy chunks

That is what this setup does, but potentially on thousands of chunks and automated - and through pointclouds and vops so it is relatively speedy.

So how are you doing this step?

Now we are finding appropriate transform for every hipoly chunk via stored reference to proxy.

How are you finding which hipoly chunk(s) belongs to which low poly chunk?

It is in essence the same as what you are describing. It is an "upres" technique, or a beautification step.

But why we need to have more hipoly chunks then proxies if they are still transforming as one object?

*) You don't need to - I never said it is a requirement; it is a solution to the following issues:

1) Having a higher number of hi-poly chunks can be a by-product when using the voronoi on much higher resolution and displaced geometry

2) Ideally a 1-to-1 relationship would be nice, but sometimes you don't want to sim the super tiny pieces and you filter them out, thereby simplifying the low res sim and lowering the number of pieces that need to be simmed. But you can't "delete" them from the high res geo, so you clump them together with whatever big piece is nearby -- this can cause interpenetration, but the filtering is an optimization step for when you are dealing with heavy sims that need to convey large scale motion and you are less concerned about every tiny piece. (The tiny pieces will be simulated through particles and run as secondary or tertiary elements.) -- Think avalanche, or mountain collapsing.

3) Having a higher number of pieces might not be your choice. For example, in the wall video I posted there were a lot of brick shaped objects coming from the modeling department, but I did not want to sim every tiny brick in the wall. Therefore my proxy geometry was a box representing the wall, fractured with a voronoi fracture setup with more fracture points where I needed more detail -- and where I wanted to use constraints to break hero pieces apart.

The main point was to answer the question on how to set up the lookup attribute, and to show how the transform can be applied with a vopsop, which makes it a lot faster for previewing purposes than using a copy stamp setup -- or having to perform a delayed load instance type render.

Eventually you do bake the path to the different clusters of high res geometry onto the points of the low res geo and render it with "instancefile" attribute through an instancing setup.
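As a trivial sketch of that last baking step (the path pattern, the "piece" attribute and the parameter name are just examples - point the base directory at wherever you wrote out the clusters):

[CODE]
// Point wrangle over the low-res sim points: point each one at the baked
// bgeo of its high-res cluster for an instancefile-based render setup.
string base = chs("cluster_dir");                        // e.g. $HIP/geo/clusters/
s@instancefile = sprintf("%scluster_%d.bgeo", base, i@piece);
[/CODE]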

Running a secondary sim after a primary sim can be done too: all the other objects become passive, and you can fracture the high resolution cluster however you want, create proxy geo and apply the lookup id setup again.

Hope this answers some of your questions.


Thanks.

My bad. I'm not talking about your way of finding corresponding pieces between proxy and hipoly. It is simple and nice.

I'm curious why you have different numbers of proxies and hipolies.

So as you described:

1) It can happen accidentally when you fracture your proxy and hipoly with different settings, etc.

2) You already have a detailed object (a house of bricks from the modeling department) and just optimize the sim by simulating a smaller number of proxies.

As for the attribute transfer of the "cluster number" between proxy and hipoly:

Don't you have a problem where some hipolies "attach" to a far-away proxy (due to the spherical influence of the attribtransfer function)? Something tells me it works better with rounded proxy shapes than with elongated ones.


Question that just came up while working: How do I fracture a lowres and a highres geo exactly the same way? (getting the same fracture pattern in the end?)

My solution so far: I scattered points on the lowres geo and reused these exact same points on the highres. That gave me quite good results. The pattern matches, and the number of pieces is nearly identical (the difference is about 1-5 pieces). Is that a good (right) way to do it? Is there a way to get the exact same number of pieces?

EDIT: I figured that scattering points on the highres and reusing them on the lowres gives more accurate results than the other way round. (But that could also just be random, I don't know for sure.)

