
Wrenninge book, Yujie Shu thesis, and Apophysis question


AtheneNoctua


Greetings--

 

I am an Apophysis IFS flame fractal artist and I'm trying to figure out how to do flame fractals (known to you as "wisps") in Houdini.

 

In researching the possibilities, I came across Yujie Shu's Master's thesis "3D Fractal Flame Wisps" written at Clemson under Dr. Tessendorf.

 

http://tigerprints.clemson.edu/all_theses/1704/

 

She and the thesis have been quite enlightening and I am grateful.

 

I've snagged a copy of Magnus Wrenninge's book PRODUCTION VOLUME RENDERING: DESIGN AND IMPLEMENTATION and am trying to learn the concepts as I work through the book.

 

While installing the libraries that Mr. Wrenninge notes at the beginning (frustrating process; learned a lot; would rather be making fractals than debugging), I noticed I'd been installing libraries that are already in Houdini.

 

How can I use what's already in Houdini and the HDK to follow along through Mr. Wrenninge's book? Has anyone written instructions on how to do this?

 

 

Also, as Yujie Shu has mentioned in her thesis, the coloration method in Apophysis is different from anything I've seen in Houdini yet (although admittedly I haven't seen much to date).

 

The gradient coloration in Apophysis is a freaky-awesome system that uses chaos weights to adjust a sequence of 256 colors in a Fractint color map. Used adeptly, it can yield some spectacular results. Not least among them: it creates results discernible to tetrachromats--those people who can see finer gradations of color than the rest of us can. (I think a few people so gifted have visited a couple of my art displays.)
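For anyone unfamiliar with the system: as I understand it from the original fractal flame paper, each transform carries its own palette coordinate, and a point's running color coordinate gets blended toward that coordinate at every iteration before a final lookup into the 256-entry map. Here is a minimal Python sketch of that idea (the palette below is a made-up stand-in, not a real Fractint map, and the blend speed is a simplification of the per-transform color weight):

```python
def make_palette():
    """A stand-in 256-entry color map (Fractint maps are 256 RGB rows)."""
    return [(i / 255.0, (255 - i) / 255.0, 0.5) for i in range(256)]

def blend_color_index(c, transform_c, speed=0.5):
    """Move the running color coordinate c (in 0..1) toward the chosen
    transform's palette coordinate; 'speed' stands in for the color weight."""
    return c * (1.0 - speed) + transform_c * speed

def lookup(palette, c):
    """Map a color coordinate in [0, 1] to one of the 256 palette entries."""
    return palette[min(255, max(0, int(c * 255)))]

palette = make_palette()
c = 0.0
# Each transform the chaos game picks nudges c toward its own coordinate,
# so the final color encodes the history of which transforms were visited.
for transform_c in [1.0, 0.25, 0.75]:
    c = blend_color_index(c, transform_c)
rgb = lookup(palette, c)
```

The nice property of this scheme is that the color coordinate converges geometrically toward the most recently visited transforms, which is what gives flame gradients their structure.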

 

Has anybody tried (or succeeded in) making a Houdini tool that uses that system--chaos-weighted Fractint color maps, Apophysis-style--to color wisps?

 

Thanks.

 

PS: Let me know if this question needs a better forum to hide in.


I love fractal flames and have always wanted to find some time to do them in Houdini but alas :(. I never knew about this thesis.

 

I think Houdini should have all the tools these days to do them, especially with OpenVDB (as noted in the thesis) since it has support for sparse, frustum volumes. So the "double grid" approach mentioned in the thesis is probably not necessary when using the OpenVDB data structure. Once you've generated the volume, there's VEX to easily shade it with Houdini's renderer, Mantra.

 

As for the HDK, I'd start with making sure you can compile a basic example following this: http://www.sidefx.com/docs/hdk13.0/_h_d_k__intro__getting_started.html

For OpenVDB, you can check out the source code for the OpenVDB geometry nodes (aka SOPs, or surface operators) in the GitHub repo, but that code is fairly optimized. So you should probably get your feet wet in the HDK with something simple like: http://forums.odforce.net/topic/21212-openvdb-hello-world/

 

Note that the HDK assumes familiarity with Houdini, so you might want to experiment with Houdini itself before diving in. There's already a fair bit you can do in Houdini with fractal flames if you don't care about performance yet. See, for example, other fractal work in Houdini: http://forums.odforce.net/topic/10207-3d-mandelbrot-primitive/


Not as fancy as actually creating flame fractals in Houdini--but I've created some volumes by stacking flame fractal animation frames in this old forum post. (I've taken it a bit further since; I should post stuff one of these days.)

 

Thanks for the paper reference, very inspiring, I'll be following this with a keen interest! :)


Hi, eetu, and thanks for pointing me to your flame test.  The rest of the tests are exciting as well!

 

 

Hmm.  Since this is the Education subforum, I'd like to confine comments to one question. Then I'd like to make a general IFS Flame Fractal "Wisp" thread in the Effects forum, since a lot of people including myself seem to be very interested in the topic. (Yay!)

 

On to Education.

I've gone through some of Ari Danesh's tutorials and a general Bag of Holding-full of other random materials, so I think I've got enough to make a start.

 

 

So here's the focus question, restated:

 

 

How can I use what's already in Houdini and the HDK to follow along through Mr. Wrenninge's book?

 

 

Because before I can participate with any intelligence in a general Houdini Wisp subforum, I need to get through that book.

 

Here's the roadblock I'm working through (and will do so myself, given time):

 

To go through the examples in the book, I need a Python binding to C++ libraries that render volumetrically. Magnus Wrenninge thoughtfully uploaded PVR to GitHub; the book then uses the Boost.Python library to drive the renderer from Python scripts that implement the examples.

 

Yujie Shu does the same thing, and she includes example code in her thesis (although mostly pseudocode), only she uses SWIG instead of Boost.

 

I spent an entire week trying to install both of these, including bushels of other libraries (debugging, debugging, failing), until I decided to check whether a Python binding/wrapper/etc. had already been installed in Houdini.

 

The hypothesis I now have is: I can skip Boost and SWIG entirely, just by opening a Python shell in Houdini (which exposes the built-in hou module) and merrily typing away at example code.

 

Or not.

 

Like if one of the example code lines is:

 

for (boost::python::ssize_t  i = don't_you_wish_you'd_installed_Boost_libraries_neener_neener)

 

 

Anyway, as edward said, my next step is "Hello World" familiarity with what Houdini already has.  I will report as soon as I've done this.

 

By the way, Magnus Wrenninge has put up the first part of his book in free PDF form.  This may give a better idea of what kind of education I am trying to put myself through, for whoever isn't sure.

 

http://magnuswrenninge.com/content/pubs/ProductionVolumeRenderingFundamentals2011.pdf

 

brb  and thanks again.


I don't have the book. What do those examples do?

 

Boost.Python is not shipped with Houdini, so I think you would at least need to compile that yourself if you want to make your own C++ methods available in Python.

 

In Houdini, you already have a production proven volume renderer and toolset. I would hazard a guess that you wouldn't need to do any C++ at all to do the examples mentioned in the .pdf.


I don't have the book. What do those examples do?

 

Boost.Python is not shipped with Houdini, so I think you would at least need to compile that yourself if you want to make your own C++ methods available in Python.

 

In Houdini, you already have a production proven volume renderer and toolset. I would hazard a guess that you wouldn't need to do any C++ at all to do the examples mentioned in the .pdf.

 

This book was written to be package-independent, I think. And the book and SIGGRAPH presentations came out before DreamWorks released OpenVDB. If you follow along with the book, you compile a renderer, called pvr, and then use Python commands via the bindings to send commands to the renderer. You're right that I probably won't have to do C++, because Mr. Wrenninge has provided all that on GitHub at:

 

https://github.com/pvrbook/pvr

 

But--

If I'm going to be using Mantra instead of PVR and the HDK instead of Boost/SWIG, then absolutely it will be much simpler to learn the concepts.  The catch for me is to figure out what in the HDK--heck, in the Houdini interface itself--corresponds to the book.

 

Now that I think about it, the early examples are ridiculously simple. Back then you evidently had to use Python code to set motion blur and shutter angle and to build a mesh (I just did that this afternoon in the HDK--that was the first example in HDK: Getting Started). Or maybe that was just to drive pvr.

 

So I know I can set that up graphically in Houdini.

 

Later on, the book's lessons get into Fractional Brownian Motion (fBm), octave gain, pyroclastic points, raymarchers, sparse grids (yep, OpenVDB stuff)--
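The fBm part, at least, boils down to something small: sum octaves of a noise function, each octave at a higher frequency (lacunarity) and lower amplitude (gain). A minimal, self-contained Python sketch of that structure--using a cheap hash-based value noise as a stand-in, not the book's actual Perlin noise:

```python
import math

def value_noise_1d(x):
    """Deterministic value-noise stand-in: hash integer lattice points to
    pseudo-random values in [-1, 1], smoothstep-interpolate between them."""
    def h(i):
        i = (i * 2654435761) & 0xFFFFFFFF
        return (i / 0xFFFFFFFF) * 2.0 - 1.0
    i0 = math.floor(x)
    t = x - i0
    s = t * t * (3.0 - 2.0 * t)  # smoothstep blend factor
    return h(int(i0)) * (1.0 - s) + h(int(i0) + 1) * s

def fbm(x, octaves=4, gain=0.5, lacunarity=2.0):
    """Fractional Brownian motion: sum octaves of noise, each octave at
    lacunarity-times the frequency and gain-times the amplitude."""
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * value_noise_1d(x * freq)
        amp *= gain
        freq *= lacunarity
    return total
```

With the defaults, the result is bounded by the geometric series of amplitudes (1 + 0.5 + 0.25 + 0.125 = 1.875), which is why "signed" fBm like this can go negative--relevant to the color issues mentioned later in this thread.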

 

At this point,  I'm guessing that my education is going to consist of going through the book, going through OpenVDB tutorials, seeing what information corresponds, and doing the happy-duck dance that I don't have to mess with installing 3rd-party libraries.

 

After that, though, I'm going to have to do things like program the base fractal flame class, the base wisp class, and see how well the rendered results correspond to what I make in Apophysis. 

 

Next I will need to code the 20 variations in Yujie Shu's paper plus the others I've collected over the years (I have about 200 now) and use hcustom or CMake to turn them into SOPs/plugins.

 

Then I will have to figure out how to do the gradients.

 

Wow. It all sounds almost possible now. Thanks.

 

So... if you don't find anything in the above guesses that sound absolutely headdesk-ridiculous, maybe I'll finish the book, and who knows, start posting results in an "IFS Flame Fractal" thread in the Effects subforum sometime soon.

 

Thanks for the help!

 

(Sig made in Apophysis)


I would learn how the pyroclastic stuff is done with CVEX, and then apply those fractal flame concepts to it. See the files posted by Serg on pyroclastic clouds in here: http://forums.odforce.net/topic/12923-pyroclastic-noise-demystified/page-4?hl=pyroclastic

 

I'm not sure that you need to code your variations in the HDK at all, vs using VEX/CVEX.


Just finished looking at the pyroclastic link and thinking about the possibility of doing the variations in VEX and got cautiously excited.

 

The best part:

 

Much of the Apophysis code is written in Delphi. From what I can tell, the Delphi code should translate to VEX without much trouble.

 

The plugins are written in C or C++, so I'm assuming that they need to be fast.

 

Pyroclastic noise:

There is some confusion that "flame fractal" = "gaseous advection look." Some of what I do has that look, and for those, Serg's pyroclastic information is going to be phenomenally useful once I wrap my head around it. "Wisps" use Perlin noise. That's one of the variations I can use in Apophysis, but I actually don't use it very often.

 

Here is the source code for a variation I use more often than Perlin--Larry Berlin's "Foci_3D" variation.

 


/*
    Apophysis Plugin

    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program; if not, write to the Free Software
    Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/

/*
     Modified by Larry Berlin
     September 2009
     http://apophysisrevealed.com
*/


// Must define this structure before we include apoplugin.h
typedef struct
{
} Variables;

#include "apoplugin.h"

// Set the name of this plugin
APO_PLUGIN("foci_3D");

// Define the Variables
APO_VARIABLES(
);

// You must call the argument "vp".
int PluginVarPrepare(Variation* vp)
{
    // Always return TRUE.
    return TRUE;
}

// You must call the argument "vp".
int PluginVarCalc(Variation* vp)
{
    double expx = exp(FTx) * 0.5;
    double expnx = 0.25 / expx;
    double siny, cosy, sinz, cosz;
    double kikr, boot;
    boot = FTz;
    kikr = atan2(FTy,FTx);
    if(boot==0.0)
    {
        boot = kikr;
    }
    
    fsincos(FTy, &siny, &cosy);
    fsincos(boot, &sinz, &cosz);                         
    double tmp = VVAR / (expx + expnx - (cosy * cosz));  
                                                         
    FPx += (expx - expnx) * tmp;
    FPy += siny * tmp;
    FPz += sinz * tmp;             
        // Always return TRUE.
    return TRUE;
}
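Since I'll eventually need these variations outside of Delphi/C anyway, here is the same foci_3D math as a plain Python function for reference--a direct translation of the plugin above, where FTx/FTy/FTz become the input point, VVAR becomes the weight parameter, and the returned tuple is what gets added to FPx/FPy/FPz:

```python
import math

def foci_3d(x, y, z, weight=1.0):
    """Direct Python translation of Larry Berlin's foci_3D variation."""
    expx = math.exp(x) * 0.5
    expnx = 0.25 / expx
    # If z is zero, fall back to the angle of (x, y), as in the plugin.
    boot = z if z != 0.0 else math.atan2(y, x)
    siny, cosy = math.sin(y), math.cos(y)
    sinz, cosz = math.sin(boot), math.cos(boot)
    tmp = weight / (expx + expnx - cosy * cosz)
    return ((expx - expnx) * tmp, siny * tmp, sinz * tmp)
```

Note that expx + expnx is just cosh(x) and expx - expnx is sinh(x), so the x component reduces to weight * sinh(x) / (cosh(x) - cos(y)*cos(boot))--handy for sanity-checking any VEX port against the original.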


 

 

Which leads to another issue: Apophysis and its plugins are released under the GNU GPL version 2 (or later). Which means that if/when I get all of this coded and working properly, I'm going to have to figure out where to upload it so the rest of you can get at it conveniently without breaking the terms of the license.

 

 

 

Linkies for the curious:

https://en.wikipedia.org/wiki/Apophysis_%28software%29

http://www.apophysis.org/

http://www.tmssoftware.com/site/scriptstudiopro.asp

 

Another example of what I do besides what's in the teeny profile pic.  It shows what you can do with the gradient-coloration system, when used on a wisp-type flame fractal.  Since I haven't seen anything like this on any of the wisps or fractals I've found, I'm going to assume that coloring will be a research project on its own.

[attached image: wisp-type flame fractal showing the gradient coloration]


I've been testing the Houdini 14 beta and so made a quick attempt at doing the thesis using only Houdini. The Volume Rasterize Particles SOP now supports stamping of any attribute, so it's quite a bit easier/faster than in Houdini 13. There were some details missing in the thesis, so I had to make some creative interpretations. I've attached my scene file, which shows what the structure looks like, but it won't work because the Volume Rasterize Particles SOP won't automatically stamp the color (Cd) attribute.

 

Anyhow, 100 (unoptimized) iterations of 1 million points randomly walking through the sample 4 fractal flame functions mentioned in the thesis, at a voxel size of 0.1, took about 3 min 40 s of computation on my Intel Xeon 3.2 GHz (4 cores, HT) using 14 GB of RAM. Rendering in Mantra then took about 21 seconds and 4.2 GB of RAM.
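For anyone curious what "points randomly walking through the functions" means concretely, here is a tiny Python sketch of the chaos-game iteration at the heart of all of this--using a classic Sierpinski-style affine IFS rather than the thesis's four flame functions (which I'm not reproducing here), and including the pre-roll idea of discarding the first iterations so points settle onto the attractor:

```python
import random

# Three affine maps of the Sierpinski triangle IFS: each halves the
# distance from the current point to one of three corners.
CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(n_points, n_iters, preroll=20, seed=1):
    """Random-walk n_points through the IFS, recording each point's
    position for n_iters iterations after a 'preroll' settling phase."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        for i in range(preroll + n_iters):
            cx, cy = rng.choice(CORNERS)        # pick a transform at random
            x, y = (x + cx) * 0.5, (y + cy) * 0.5
            if i >= preroll:                    # only record settled points
                out.append((x, y))
    return out

pts = chaos_game(n_points=100, n_iters=10)
```

In the real flame algorithm the transform choice is weighted, each transform applies an affine map plus a nonlinear variation, and each recorded point also carries a blended color coordinate--but the loop structure is the same.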

[attached image: render of the resulting fractal flame volume]


Here's the sample .hip file, but limited to 10 iterations. If you render, I've got a hack in the file right now that just visualizes the particles using red. As I mentioned before, the Volume Rasterize Particles SOP didn't stamp the color attribute, Cd. However, you can see basically how I've set things up. To make it work in H13, one would need to manually do the stamping of Cd using a variety of other nodes.

flame_h13_broken.hip


Spent some more time on this going through the original fractal flame paper again. Just in case anyone is interested, some thoughts for the night:

- Point color isn't being averaged across iterations in my file. Actually, I'm still not sure if color is being done correctly, because I'm using a signed fBm, which means I can get negative color values. It might be better to just use the original fractal flame formulation for the color, i.e. just average some pre-assigned color for each flame function.

- Doing a pre-roll (i.e. the loop) without stamping into a volume is much faster (as suggested in the paper) because you can stay in threaded code longer. So doing a pre-roll of 300 iterations is really fast and can get you really sharp-looking results off the bat.

- The slowest part is the stamping into the volumes, for which 100 iterations is really the limit with my current approach, given the corresponding voxel memory increase. Culling out points that are too far from the area of interest with a Delete SOP could be a potential benefit here. I haven't tried frustum volumes either, because Volume Rasterize Particles doesn't support them.

- The volumes I'm using right now are upwards of 25 M voxels each for the density and Cd fields. Make a few copies of that in SOPs and you easily find yourself in 20 GB memory land. This leads me to think that maybe I should just try more particles instead and directly render them. A post-process could composite multiple images together, which sounds like what eetu was doing.
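That compositing post-process is cheap compared to the volume route: render several independent particle batches separately, then average the resulting images. A toy Python sketch of the idea, on tiny grayscale "images" represented as nested lists (a real pipeline would use COPs or an image library instead):

```python
def composite_average(images):
    """Average a list of equal-sized grayscale images (rows of floats),
    e.g. separate renders of independent particle batches."""
    n = len(images)
    h, w = len(images[0]), len(images[0][0])
    return [[sum(img[y][x] for img in images) / n for x in range(w)]
            for y in range(h)]

# Two 2x2 "renders" of independent batches, averaged into one image.
a = [[0.0, 1.0], [0.2, 0.4]]
b = [[1.0, 1.0], [0.0, 0.6]]
out = composite_average([a, b])
```

Because each batch's point density estimate is independent, averaging N batch renders reduces noise the same way rendering N-times the particles in one pass would, without the single-pass memory cost.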


Some progress on using H13, with a more simplified approach:

- just iterate on the 1M particles a lot

- moving average with predefined colors for each iteration (gave up on figuring out the noise stuff)

- stamp once into volume and render

 

I'm probably still doing things wrong and if I find some time, I'll try just trailing the particles during the iterations.

flame_h13.hip

[attached image: render of the H13 result]


