
a shader each day


rdg


Day 16: there comes the light

This one is a light shader.

setup.63.jpg

Animated version

The light expands from the point source.

A use might be some strange explosions or even stranger weapons.

Currently it's rather uncontrollable - I need to dig more into the light shading to understand it.

For example I need to control the spread in units to be able to link any secondary effect to the light.

Like for example destruction of stuff inside the radius of the explosion.
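That link could be as simple as testing against the same expanding radius; a minimal Python sketch (all names and the linear growth are my assumptions, not taken from the shader):

```python
import math

def inside_blast(p, source, t, speed):
    # hypothetical secondary-effect test: the blast radius grows
    # linearly with time, matching the light's expanding front
    radius = speed * t
    return math.dist(p, source) <= radius
```

Anything for which this returns True - geometry, particles, whatever - could then be destroyed in sync with the light.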

I guess this might also look good inside some volume.


Day 17: The Shadows

setup_04_od.jpg

Animation: http://vimeo.com/2063377

I added a shadow shader that only casts shadows outside the range of the light.

As this looked rather odd I also changed the light to behave like an occluded ambient light inside the radius.

At least that's what I understood it to be after reading Rob's ambient light thread [1] again.

It's far more controllable by now, but still opens a lot of questions.

I did some post-processing to accentuate the shadows; they are quite faint in the beauty pass:

pre1.jpg

It's quite hard to rely on real-world references in this case, and I guess this is what it looks like ...

More likely there is an error in the shaders.

Anyway: What did I learn?

Why people use compositing packages instead of writing light/shadow shaders is no longer such a big question.

[1] http://forums.odforce.net/index.php?showto...l=ambient+light


Day 18: Deep Color

While converting a Technicolor HLSL Shader [1] into a VOP COP2 Filter I remembered that Houdini also supports OGL2 shaders in the viewport.

The possibility of having an OGL2/Mantra combo that allows WYSIWYG tweaking of shaders in the viewport was just too tempting.

As a first candidate I chose another shader of desaxismundi[2]: deepcolor[3].

deepColor_od.jpg

It basically normalizes Pz from a user defined range to blend two colors. It also has a roll off/gamma control to tweak it.
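The remapping he describes boils down to a clamp, a gamma, and a blend; a minimal Python sketch of that math (function and parameter names are hypothetical, not from the shader):

```python
def deep_color(pz, near, far, gamma, col_near, col_far):
    # normalize camera-space depth Pz over the user-defined range
    t = min(max((pz - near) / (far - near), 0.0), 1.0)
    t **= gamma  # the roll off/gamma control
    # blend the two colors by the remapped depth
    return tuple(a + (b - a) * t for a, b in zip(col_near, col_far))
```

The GLSL fragment shader does the same thing per fragment, with `mix()` doing the blend.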

It looked so easy and turned out to be a true adventure ...

Before you can create an OpenGL shader you need to understand GLSL and its related concepts such as vertex shaders and fragment shaders.

Slowly my rudimentary openGL/GLSL/HLSL knowledge returned and I even managed to transform the fragment into camera space, before losing my temper.

Evolution follows plagiarism and I added some outline controls.

The VEX Shader also supports an occlusion layer, but for some reason the shader just refuses to bind the controls:

WARNING[rdg_ogl2_deepcolor]: Failed to bind variable 'switch_occlusion'

To function properly the shader needs to be assigned to geometry that has UVs. Preferably something like Rows&Columns.

Maybe there is a way to utilise something like s and t in GLSL - I have to investigate this.

rdg_ogl2_deepcolor.tar.gz

This file contains a sample scene, the SHOP OTL and a gallery.

You need to copy the *.gal file into your HOUDINI_GALLERY_PATH - eg: ~/houdini9.5/gallery

The OTL goes into your HOUDINI_OTLSCAN_PATH - eg: ~/houdini9.5/otls

My Help currently just dumps core as soon as I try to display a custom OTL help ... so not much help when pressing F1.

[1] vvvv.org is currently inaccessible ... so no deep link

[2] http://desaxismundi.blogspot.com/

[3] vvvv.org is currently inaccessible ... so no deep link


Day 19: Spherical Harmonics pythonSOP

Spherical Harmonics as described at Paul Bourke's site [1]

Symek: I still don't know what to do with them besides eye candy <_<

These turned out to be quite Haeckel-esque.

sphericals_od.jpg

I guess it's time to de-dust the A3 printer.

The pythonSOP first creates all the needed points and then connects them into polygons using the createPolygon method.

It gets slow at higher resolutions, but development was more rapid than me trying to write another HDK massacre.

[1] http://local.wasp.uwa.edu.au/~pbourke/geometry/sphericalh/
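For reference, one common way of writing Bourke's formula as a plain-Python point generator (a sketch, not the actual pythonSOP; angle conventions vary between write-ups):

```python
import math

def harmonic_point(theta, phi, m):
    # r(theta, phi) as given on Paul Bourke's page; m holds the
    # eight integer parameters M0 ... M7
    r = (math.sin(m[0] * phi) ** m[1] +
         math.cos(m[2] * phi) ** m[3] +
         math.sin(m[4] * theta) ** m[5] +
         math.cos(m[6] * theta) ** m[7])
    x = r * math.sin(phi) * math.cos(theta)
    y = r * math.cos(phi)
    z = r * math.sin(phi) * math.sin(theta)
    return x, y, z

def harmonic_points(m, rows=32, cols=32):
    # sample the domain: phi in [0, pi], theta in [0, 2*pi];
    # the SOP would then stitch neighbouring samples into polygons
    pts = []
    for i in range(rows + 1):
        phi = math.pi * i / rows
        for j in range(cols + 1):
            theta = 2.0 * math.pi * j / cols
            pts.append(harmonic_point(theta, phi, m))
    return pts
```

With all eight parameters at 0 this degenerates to a sphere of radius 4, which is a handy sanity check.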


Day 20: Behind the scenes of "Art Forms of the Procedure: The Harmonic Atlas"

This one is rather dry - some unstructured notes on the progress.

I am working on a book called "Art Forms of the Procedure: The Harmonic Atlas" - a homage to Haeckel's masterpiece [1]

I set up a mantraROP-compositeROP to create the images for the book.

So far so good, but the chaining didn't work ... hopefully support addresses the non-cooking of the geometry.

In the meantime I worked on the further pipeline.

I generate a fair number of renderings of spherical harmonics, just like this:

sample.jpg

This is right after compositing.

The idea is now to store the parameters that generated that geometry in a machine readable way.

This allows applications further down the pipeline to access them and place them e.g. in the captions.

How to approach this?

A pre/post-render script for some reason refused to cook the right frame/parameter ... maybe this is part of the chaining issue .... <_<

But this is what SOHO was made for, isn't it?

What do we need?

1: A Database

I started writing per frame XML files but noticed that I am too lazy to parse them later - so I went for the whole menu.

A MySQL server and we are fine ...

2: A custom ROP

This allows us to modularise the approach and separate style and content.

And there we go.

The database needs a table to store the data - in this case 8 parameters called M0 ... M7 and the current frame.

#!/usr/bin/env python
import MySQLdb

# connect to MySQL DB
conn = MySQLdb.connect(host="localhost",
                       user="root",
                       passwd="",
                       db="preset")
cursor = conn.cursor()

query = """ CREATE TABLE `preset`.`sphericalHarmonics` (
`id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,
`M0` SMALLINT NOT NULL DEFAULT '0',
`M1` SMALLINT NOT NULL DEFAULT '0',
`M2` SMALLINT NOT NULL DEFAULT '0',
`M3` SMALLINT NOT NULL DEFAULT '0',
`M4` SMALLINT NOT NULL DEFAULT '0',
`M5` SMALLINT NOT NULL DEFAULT '0',
`M6` SMALLINT NOT NULL DEFAULT '0',
`M7` SMALLINT NOT NULL DEFAULT '0',
`frame` INT NOT NULL DEFAULT '0'
) ENGINE = MYISAM """

try:
	cursor.execute(query)
except MySQLdb.Error:
	print "Table already exists.\nSkipping."

Next we create an output driver HDA using the File menu. Through the Edit Render Parameters interface we add the parameters:

soho_program

soho_diskfile

soho_command

soho_outputmode

Some styling until it looks like this:

ropinterface.jpg

The ROP expects a path to a sphericalHarmonics pythonSOP.

Depending on the output mode it either creates an XML file or writes directly into the database.

The soho program looks like this and needs to be in the HOUDINI_SOHO_PATH:

# Produced by:
# 	Georg Duemlein
#	http://www.preset.de/
#
# 	Prints parameters of Spherical Harmonics to XML file ...
#


import soho
from soho import SohoParm
import hou
import time


controlParameters = {
	"now":      SohoParm("state:time",      "real",   [0], False, key="now"),
	"mode":     SohoParm("soho_outputmode", "int",    [0], False, key="mode"),
	"harmonic": SohoParm("harmonic_path",   "string", ["/obj/geo_setzkasten/rdg_sphericalharmonics1"], False, key="harmonic"),
}

parmlist = soho.evaluate(controlParameters)

FrameTime = parmlist["now"].Value[0]
mode = parmlist["mode"].Value[0]
harmonic = parmlist["harmonic"].Value[0]


if not soho.initialize(FrameTime):
	print "-1"
else:
	item = hou.node(harmonic)


	if mode == 1:
		print "<generic>"

	# output NULL for the auto-increment id column
	if mode == 1:
		print "\t<id>NULL</id>"
	else:
		print "NULL"

	# output the parameters
	for i in range(0, 8):
		parmvalue = str(item.parm("m" + str(i)).eval())
		if mode == 1:
			print "\t<ident id='%s'>%s</ident>" % (i, parmvalue)
		else:
			print parmvalue

	# generate and output the current frame number
	# mark (http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=11061&highlight=soho+frame)
	now = soho.getDefaultedFloat('state:time', [0])[0]	# the ROP evaluation time
	FPS = soho.getDefaultedFloat('$FPS', [24])[0]		# evaluate the $FPS variable
	frame = int(round(now * FPS + 1))					# compute the frame number
	if mode == 1:
		print "\t<frame>%s</frame>" % (frame)
	else:
		print frame

	if mode == 1:
		print "</generic>"

If we don't want to save into the database that's all we need.

The ROP saves a per-frame XML file with content similar to this:

<generic>
	<id>NULL</id>
	<ident id='0'>8</ident>
	<ident id='1'>0</ident>
	<ident id='2'>5</ident>
	<ident id='3'>2</ident>
	<ident id='4'>3</ident>
	<ident id='5'>5</ident>
	<ident id='6'>2</ident>
	<ident id='7'>4</ident>
	<frame>1</frame>
</generic>

The database is accessed through the soho_pipecmd.

The file needs to be placed somewhere in the path or referenced with an existing path prefix.

It also needs to be executable:

chmod a+x ./mapper

The data is "piped" into the command, not passed as arguments as I first expected.

It just collects everything into a list and passes this data to the database.

If there is an error accessing the database it returns an error value: 1: Cannot connect to DB or 2: Some Error in the query

#!/usr/bin/env python

# Georg Duemlein
# http://www.preset.de
# SOHO_pipecmd to store stuff in a database

import sys
import MySQLdb

content = []
for line in sys.stdin:
	content.append(line.strip())

if content[0] == '-1':
	print "No Harmonics SOP specified.\nSkipping."
else:

	data = ", ".join(map(str, content))

	try:
		conn = MySQLdb.connect(host="localhost",
							   user="root",
							   passwd="",
							   db="preset")
	except MySQLdb.Error:
		sys.exit(1)

	cursor = conn.cursor()
	# NULL for the id column, then the 8 parameters and the frame
	query = "INSERT INTO sphericalHarmonics VALUES (" + data + ")"
	try:
		cursor.execute(query)
	except MySQLdb.Error:
		sys.exit(2)
	conn.close()

If everything goes well, the database rapidly fills with numbers:

mysqladmin.jpg

I am not sure if there is a need for transactions in a farm context.

Anyway - the funny part is actually doing something with this data, like sorting and/or analysing and feeding it into InDesign or even directly into a PDF writer.
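A sketch of that "funny part", using sqlite3 as a self-contained stand-in for the MySQL table above (the real pipeline would query MySQLdb the same way; the caption format is made up):

```python
import sqlite3

# stand-in for the MySQL sphericalHarmonics table, in memory
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sphericalHarmonics
                (id INTEGER PRIMARY KEY, M0, M1, M2, M3, M4, M5, M6, M7, frame)""")
conn.execute("INSERT INTO sphericalHarmonics VALUES (NULL, 8,0,5,2,3,5,2,4, 1)")

def caption(row):
    # turn one record into a caption line for the book layout
    params = ", ".join(str(v) for v in row[1:9])
    return "Plate %d: m = (%s)" % (row[9], params)

captions = [caption(r) for r in conn.execute(
    "SELECT * FROM sphericalHarmonics ORDER BY frame")]
```

From here the strings could be fed to InDesign tagged text or a PDF library.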

That's it - I am still looking for some feature film experience

I wouldn't mind working on commercials, though. But I appreciate that this is print.

[1] http://www.amazon.com/Art-Forms-Ocean-Radi...s/dp/3791333275


Really cool work ! This thread is always an interesting read !

I'm experimenting with Clifford Attractors myself at the moment :). Using prman 14 which has a python api. I'm using ripoints to render 10 million points through a renderman procedural (inline seems to be the only way it works without bound errors).

Paul Bourke's site on Clifford attractors:

http://local.wasp.uwa.edu.au/~pbourke/fractals/clifford

I attached one of the images. I modified the formula to work in 3d. It's a shame they are a bit unstable when changing the control parameters drastically. I would have loved to interpolate between two attractors, but they collapse into a couple of points. At the moment I'm adding some additional noise to the points with the python cgkit module.

thanks for sharing the info! I'll probably put my own script online as soon as I clean it up and finish it :).
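For reference, the 2D iteration from Bourke's page looks like this in Python (the poster's 3D modification isn't published, so this sticks to the original formula):

```python
import math

def clifford(a, b, c, d, n, x=0.1, y=0.1):
    # iterate the 2D map from Bourke's page:
    #   x' = sin(a*y) + c*cos(a*x),  y' = sin(b*x) + d*cos(b*y)
    pts = []
    for _ in range(n):
        x, y = (math.sin(a * y) + c * math.cos(a * x),
                math.sin(b * x) + d * math.cos(b * y))
        pts.append((x, y))
    return pts
```

Note the iterates are always bounded by 1+|c| and 1+|d|, which is handy for setting the procedural's bounding box.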

post-1666-1225327368_thumb.jpg


Using prman 14 which has a python api.

This is cheating <_<

I attached one of the images. I modified the formula to work in 3d. It's a shame they are a bit unstable when changing the control parameters drastically.

This is a feature not a bug, though.

As I get it these algorithms are searching for stable attractors - by classifying them through their Lyapunov exponent [1].

Therefore one would first need to find some stable attractors, then sort them by some other attribute to interpolate between them.

Probably you'd even need to recreate them afterwards to handle point orders and positions.

Another - more realistic - approach would be to rotate the attractor while attracting:

Like in parameter VI by W:Blut [2]

[1] http://en.wikipedia.org/wiki/Lyapunov_exponent

[2] http://www.wblut.com/parameterVI.php



How does prman create 10 million points via the Python API (or C API)? Technically, at which moment of the rendering pipeline are these points created? Does PRMan source a file and create them in memory at once just before rendering? RIB is (AFAIK) a one-to-one mapping of the API methods, right? So the sequence goes: create object (bounds), a for() loop creating millions of points, then WorldEnd()? Just curious about the memory management in the PRMan workflow and how this can be mapped onto Houdini/Mantra.

Normally most tricks with rib filtering and such are pointless in the Houdini->mantra workflow, but here memory is always a constraint.

thinking loudly...


This is cheating <_<

Since so much of houdini's rendering and shading is similar to renderman I'm trying to get a good understanding of renderman first (also I'm at uni and only have access to prman 14 there - so trying to learn the most in the given time). Also if you take a closer look at the cgkit module it has a python wrapper for rib commands. Since you seem to like python a lot, it might be handy - on my laptop I'm using 3delight instead, but it has a slightly different syntax.

But yeah, bypassing houdini altogether is cheating :).

Another - more realistic - approach would be to rotate the attractor while attracting

You are correct! And I had not considered rotation while attracting, I will definitely try that out! Thanks.


Also if you take a closer look at the cgkit module it has a python wrapper for rib commands.

Thanks for this pointer!

And yes, monocultures aren't very promising ...


How does prman create 10 million points via the Python API (or C API)? [...]

I don't want to sidestep this thread too much but to answer your questions:

There are several ways to get all those points going, but I only got some of them to work. The two methods I used so far are by using renderman procedural geometry:

1) a python script creates the rib file; inside the rib file there is the following statement, which will call an external python script:

program = 'Procedural "RunProgram" ["./python/pointCreator.py" "%d %f %f %f"]' % (PARTICLE_COUNT, POINT_WIDTH, frame * 0.1, GLOBAL_SEED)
ri.ArchiveRecord(ri.VERBATIM, program)

This writes the above line straight into the rib file "as is".

The arguments to my python program are fed into the script as if they had been entered on the command line - but rather than coming from the command line, the input comes from the RIB Procedural call. The output of the pointCreator script is written to stdout, which is caught again by prman. -- It does not matter whether the Procedural calls a python or a C/C++ program or any other program; as long as that program works with stdin and stdout, you can use it.

In this way the rib files are generated first (fairly fast), then the rib files are rendered individually. The points are never stored on disk - which also means there is no dependency between them. They are generated every frame in memory when the Procedural is called and rendered immediately. (the Procedural is called between worldBegin() and worldEnd() )
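A minimal sketch of such a RunProgram helper (the argument layout follows the "%d %f %f %f" string above; the point generation itself is a placeholder, and per the RenderMan spec each reply is terminated by a 0377 byte and a flush):

```python
import random
import sys

def rib_points(count, width, seed):
    # build a RIB Points call for `count` random points; the real
    # script would iterate an attractor here instead
    rng = random.Random(seed)
    coords = " ".join("%g" % rng.uniform(-1.0, 1.0)
                      for _ in range(3 * count))
    return 'Points "P" [%s] "constantwidth" [%g]\n' % (coords, width)

def serve(stdin=sys.stdin, stdout=sys.stdout):
    # RunProgram protocol: prman pipes "<detail> <args>" request
    # lines to stdin; we answer with RIB on stdout, terminated by
    # a \377 byte, and flush before the next request arrives
    for line in stdin:
        detail, _, args = line.partition(" ")
        count, width, t, seed = args.split()
        stdout.write(rib_points(int(count), float(width), float(seed)))
        stdout.write("\377")
        stdout.flush()
```

Because the points only ever exist in this pipe, nothing is written to disk - matching the behaviour described above.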

2) Very similar to the first method, except that instead of calling an external python script, you call a function defined inside the same script. This uses the ri.Procedural method, which is part of the prman python module in prman 14.

ri.Procedural((ri,PARTICLE_COUNT, POINT_WIDTH, 0, 0, 0, A, B, C, D, noiseAmount, freq, offset), [-5, 5 ,-5 ,5 ,-5, 5],createNoiseCliffordAttractor,None)

The method createNoiseCliffordAttractor is defined inside of the script that calls ri.Procedural.

I am not writing rib files at all, and rendering straight to .exr on disk. The points are again generated only in memory every frame.

I've been able to push it up to about 30 million points with 2 gigs of ram on 32-bit ubuntu, single core 2.8ghz intel. It does become slow to render with that amount of points, like half an hour per frame or more depending on how many points are overlapping.

I prefer using the first method, but I got some strange bounding problems and had to go with the second method. I don't know yet how to go about doing this kind of stuff with mantra in houdini, but I would assume it is similar to how the fur procedural works. I think I might look into it more deeply in my second term after the new year.

I found some interesting information that got me started on this website:

http://www.hosok2.com/project/helper/helper.htm


Ok - thanks for that render p*rn ;)

I too often get distracted by implementation and technical details - at least I sometimes have that impression.

Support fixed the chaining issue in the 9.5.274 build.

Day 21: The Harmonic Browser

Here is a prototype of a harmonic browser: http://www.preset.de/2008/1030/

The setup creates the randomized harmonics and stores the parameters in the database.

The browser displays the images and parameters.

The prototype is clamped to 256 variations and is now offline.

But it could easily be 'live', in the sense of the setup rendering images all the time ... I guess 8 parameters create enough variations.

Reload the web page to see 4 different harmonics ... maybe ... as the selection is randomized.


I'm trying to get the 'Deep Color' shaders to appear in the viewport in realtime - how do I enable it? I tried setting HOUDINI_OGL_ENABLE_SHADERS, but I don't get anything.

I didn't have to set this variable. Sorry - no great help here.


Day 22: ASEtools - a color management solution

This is snapshot of a tool I am working on:

devsnapshot.jpg

It reads and writes Adobe Swatch Exchange files.

The question was: how to keep track of the colors used in a scene, being able to transfer them from scene to scene or even to another application.

I once wrote a Photoshop ACO reader/writer [1] for 3dsmax - but I seem to have lost it :(

The ASE format can be read by all CS3 applications and looks like a nice proprietary format to save colors in a design context.

It supports all kinds of color formats (RGB/LAB/HSB(?) and CMYK) in 16bit floats and allows colors to be stored in groups.

It's unfortunately not open, but there are some reverse engineered documentations available on the net.
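Based on those reverse-engineered notes, a minimal writer for a flat list of RGB swatches might look like this (the exact field layout is an assumption pieced together from the unofficial docs - verify against a file exported from a CS3 application):

```python
import struct

def write_ase_rgb(path, swatches):
    # minimal ASE writer, RGB only: big-endian throughout, block
    # type 0x0001 for a color entry, UTF-16BE null-terminated names
    blocks = b""
    for name, (r, g, b) in swatches:
        body = struct.pack(">H", len(name) + 1)         # name length in chars, incl. null
        body += (name + "\0").encode("utf-16-be")       # swatch name
        body += b"RGB " + struct.pack(">fff", r, g, b)  # color model + components
        body += struct.pack(">H", 2)                    # 2 = "normal" color type
        blocks += struct.pack(">HI", 0x0001, len(body)) + body
    header = b"ASEF" + struct.pack(">HHI", 1, 0, len(swatches))
    with open(path, "wb") as f:
        f.write(header + blocks)
```

Groups would add 0xc001/0xc002 start/end blocks around the color entries; LAB and CMYK just swap the model tag and component count.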

The final tool will not only read and write the swatch files, but also allow harmonic color palettes to be created from arbitrary images [2].


[1] http://proforma.preset.de/showpic.php?pic=...rdg_readACO.jpg

[2] http://www.preset.de/showpic.php?pic=/2008...g-takeColor.png

