Valent

about parallel processing in VEX


Hi

For some reason, it is hard for me to understand the multithreaded nature of VEX :blink: I mean, in most cases the fact that my code runs in parallel on each of the elements messes up my plans. So I usually write my code to run over 'detail' and set up all the processing loops manually.
Everything was fine until yesterday, when my wrangle cooked for a solid eight hours.)
So my questions are:
Is there a way to make a point (or primitive) wrangle process elements in a specific order, or do I have to use a detail wrangle when the order is crucial?
Can I somehow speed up a detail wrangle, e.g. divide all the elements into several batches or partitions and compute them in parallel?
Can you suggest something to read about making both my mind and my code more parallel-processing friendly?)

In general, yes: use point or primitive mode only if the order of processing is not important
(or you compensate for it in another way, such as calculating the same data again in the other points that need to access it,
although this may lower the speed by such an amount that running over detail may be faster anyway).

But there are other things you can do, like Skybar mentioned, or simply putting the wrangle in a for-each loop, potentially using the meta data block.

Also, don't be afraid to mix and match point and detail mode, and to divide your code over multiple wrangles, so you can have the best of both worlds.
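
To illustrate the mix-and-match idea, here is a minimal VEX sketch (the attribute names `dist` and `running_total` are made up for the example): a point wrangle precomputes an expensive per-point value in parallel, and a detail wrangle then consumes it in strict point order.

```
// Wrangle 1 -- Run Over: Points (multithreaded, order-independent):
// precompute an expensive per-point value once.
f@dist = distance(@P, point(1, "P", 0));

// Wrangle 2 -- Run Over: Detail (single-threaded, order-dependent):
// accumulate the precomputed values in strict point order.
float total = 0;
for (int pt = 0; pt < npoints(0); pt++) {
    total += point(0, "dist", pt);
    setpointattrib(0, "running_total", pt, total, "set");
}
```

Only the cheap, order-dependent accumulation stays single-threaded; the heavy part runs in parallel.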

Posted (edited)
On 7/29/2019 at 3:13 PM, Skybar said:

You can set the Attribute Wrangle to run over "Numbers". Then you can specify how many elements to process in each thread. There is some info on it here: https://www.sidefx.com/forum/topic/51525/

Thanks for pointing this out. Very interesting.
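
For anyone trying this: with Run Over set to "Numbers" the wrangle exposes `@elemnum` (the current number) and `@numelem` (the total count), and the Thread Job Size parameter controls how many numbers each thread processes per batch. A minimal sketch, assuming the number count is set to the point count of input 0:

```
// Attribute Wrangle -- Run Over: Numbers; Number Count = npoints(0);
// Thread Job Size = how many numbers each thread handles per batch.
int pt = @elemnum;                      // treat each number as a point index
float t = float(pt) / max(1, @numelem - 1);
// e.g. write a red-to-blue ramp across the points:
setpointattrib(0, "Cd", pt, lerp({1,0,0}, {0,0,1}, t));
```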

I don't quite understand the last post, where he addresses the idea of procedurally assigning the Thread Size so that each thread has the same number of iterations to deal with. To me it seems like very good practice.

Quote

This is dangerous as you might run into the same situation as before with regard to thread balancing. If your operation is known to be really expensive, you can just use a fixed smaller number like 128, that way balancing across any number of points. (for example, if the last half of your geometry has very sparse points that aren't within interaction distance, you might be able to skip those faster)

Why should this be dangerous?
If I run 1000 iterations and I have a 10-thread machine, it seems obvious to me that I want a Thread Size of 100, and no other number.

Could anybody clear this up, please?
Thanks

 

Edited by Andr1

@Skybar, @acey195 Thanks, very interesting! 

On 7/31/2019 at 9:20 PM, acey195 said:

or simply putting the wrangle in a for loop, potentially using the meta data block

I'm not sure that I understand this. Could you please explain how to use the 'meta data block'?

On 7/31/2019 at 9:20 PM, acey195 said:

Also, don't be afraid to mix and match point and detail mode, and divide your code over multiple wrangles, so you can have the best of both worlds

I sort of understand that, but most of the time I'm trying to do exactly the opposite: squeeze everything into one uber-wrangle :)

 

The meta data block (which you can generate with a button on the input of a for-each block)
gives you a detail attribute, "value", holding the current data the loop is running over, in case you are running over numbers,
as well as an "iteration" attribute you can use in other cases.
You can use those values to make sure your wrangle fetches the right data / does the right thing, depending on the iteration of the loop.
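
As a concrete sketch (the channel name `step` is just an example), a wrangle inside the loop can read the meta data by wiring the meta data node into its second input:

```
// Inside a For-Each block; the meta data node is wired into input 1.
int iter = detail(1, "iteration", 0);   // current pass of the loop
// a "value" attribute also exists when the loop runs over numbers/pieces
// Vary the behaviour per iteration, e.g. offset each pass upward:
@P.y += iter * chf("step");
```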

Also,
generally speaking, putting everything in a single wrangle only gives you a very slight performance increase in terms of overhead,
which is almost always outweighed by the multithreading benefit you get from using multiple nodes.
In addition, the overhead can be eliminated completely by using a compile block
(which only really starts to make sense with larger numbers of nodes, or if you are taking the loop approach).
