
Houdini 10 Wish List


Jason


It's time to be vocal! Write early, write often.

Jason.

alrighty,

the missing_features list, in no particular order:

some of these may be obsolete/wrong - correct at your leisure.

i/o

bgeo sequence dlo/dlm for max.

needed to import topologically varying geometry into max, either as

geometryclass or particleclass. bgeo is preferred as it is fast, easy

to manage and easy to glitchfix. this could be part of a suite of tools

for helping houdini prove itself in hostile environments.

PC2 read/write.

fbx read/write.

rendering

easy point-attribute override of volume shader params

e.g. connecting $PSCALE to a shader's noise size is... difficult.

too difficult for day-to-day tweak 'n' render when afterburn and

pyrocluster can do this with a couple of clicks in the ui.
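
a rough sketch of the idea in plain python (not houdini or vex code - the attribute name and default are made up): the shader parameter should just pick up the point attribute when it's there and fall back to the ui value when it isn't.

    # conceptual sketch only: per-point override of a shader parameter.
    # if a point carries a "pscale" attribute, that drives the noise size;
    # otherwise the shader falls back to its ui default.

    DEFAULT_NOISE_SIZE = 1.0

    def noise_size_for_point(point_attribs, default=DEFAULT_NOISE_SIZE):
        """return the noise size for one point, preferring the point attribute."""
        return point_attribs.get("pscale", default)

    points = [
        {"pscale": 0.25},   # small puff -> fine noise
        {"pscale": 2.0},    # big puff  -> coarse noise
        {},                 # no attribute -> ui default
    ]

    for i, p in enumerate(points):
        print(i, noise_size_for_point(p))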

volume shader density gradients.

like those in afterburn. used to animate a user-defined gradient rolling

over the density for crisp fire->smoke transitions - for both colour

and self-illumination (constant shading).
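
a minimal sketch in plain python of what i mean by a gradient rolling over density (the keys and the per-frame roll are made-up numbers):

    # conceptual sketch only: remap a voxel's density through a user-defined
    # gradient whose position is animated over time, for a crisp fire -> smoke
    # transition in colour / self-illumination.

    def lerp(a, b, t):
        return a + (b - a) * t

    def sample_gradient(keys, x):
        """keys: sorted (position, value) pairs; piecewise-linear lookup."""
        if x <= keys[0][0]:
            return keys[0][1]
        for (x0, v0), (x1, v1) in zip(keys, keys[1:]):
            if x <= x1:
                return lerp(v0, v1, (x - x0) / (x1 - x0))
        return keys[-1][1]

    # self-illumination gradient: cold (0.0) at low density, hot (1.0) at high.
    illum_keys = [(0.0, 0.0), (0.4, 0.0), (0.6, 1.0), (1.0, 1.0)]

    def shade(density, frame, roll_per_frame=0.02):
        # rolling the gradient over time pushes the fire -> smoke edge along.
        offset = frame * roll_per_frame
        return sample_gradient(illum_keys, density - offset)

    for frame in (0, 10, 25):
        print(frame, [round(shade(d, frame), 2) for d in (0.2, 0.5, 0.8)])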

fast volume shader - must be faster than afterburn.

render curves as tubes.

faster raytracing.

I remember when both tops and shops were still in houdini, the old tops

raytrace shader was way faster than the new shops one. even now, on 3GHz

machines and with bounce limits, raytracing can be painfully slow.

progressive scan/spotcheck.

this saved me a lot of time back in the prisms days.

fast gpu-assisted gi renderer (addon).

sops

assimilate polycuspbevel by Simon Barrick.

poly bridge sop - builds a tunnel between 2 polys.

much faster smooth sop - refer to 'relax' in 3dsmax; it's many times faster.

faster minimum-distance mode in ray sop.

more equilateral default triangulation.

for capping, curve closing, knitting, etc. any op that builds faces

should do it sensibly.

join primitives (for curves).

presently impossible without destroying the curves and using add to

rebuild them - assuming you can get the point order sorted properly.

non-intersecting corners in polyextrude inset.

it's been asked for many times and has been done for lightwave, so it's doable.

non-indexed selection.

this can be spatial or some topologically inherited value.

this will enable sops to be used for procedural modelling.
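
a tiny plain-python sketch of the idea (made-up rule and points): the group is stored as a rule and re-evaluated on whatever geometry arrives, so it survives topology changes.

    # conceptual sketch only: a "non-indexed" selection as a spatial predicate
    # instead of a list of point numbers.

    def select_above(height):
        """return a predicate selecting any point whose y is above `height`."""
        def predicate(position):
            return position[1] > height
        return predicate

    def evaluate_group(points, predicate):
        return [i for i, p in enumerate(points) if predicate(p)]

    # two different topologies of "the same" shape; the rule still applies.
    points_v1 = [(0, 0, 0), (0, 1, 0), (0, 2, 0)]
    points_v2 = [(0, 0, 0), (0, 0.5, 0), (0, 1, 0), (0, 1.5, 0), (0, 2, 0)]

    top_half = select_above(0.9)
    print(evaluate_group(points_v1, top_half))  # [1, 2]
    print(evaluate_group(points_v2, top_half))  # [2, 3, 4]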

group falloff radius per element.

aka soft-selection. if a subsequent node can't use it then the soft part

is just ignored.
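
roughly what i mean, sketched in plain python (the falloff shape and numbers are made up):

    # conceptual sketch only: a group with a per-element falloff radius,
    # i.e. soft selection.  hard members carry weight 1.0; nearby points get
    # a weight that falls off linearly with distance.  a node that doesn't
    # understand weights keeps only the weight == 1.0 members.

    import math

    def soft_group(points, hard_members, radius):
        """return {point_index: weight} with a linear falloff around hard members."""
        weights = {i: 1.0 for i in hard_members}
        for i, p in enumerate(points):
            if i in weights:
                continue
            d = min(math.dist(p, points[m]) for m in hard_members)
            if d < radius:
                weights[i] = 1.0 - d / radius
        return weights

    points = [(0, 0, 0), (0.5, 0, 0), (1.5, 0, 0), (3.0, 0, 0)]
    weights = soft_group(points, hard_members=[0], radius=2.0)
    print(weights)                                      # soft-aware node uses all weights
    print([i for i, w in weights.items() if w == 1.0])  # legacy node: hard members only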

more automatic output groups.

like those in polyextrude - any node that alters topology should create

groups of the new/altered parts.

edge groups.

self-explanatory - we have edge ops but no edge groups...

resample by attribute.

i'd like to be able to bias the segment lengths with attributes.
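
for example, something like this plain-python sketch (attribute name and falloff are made up): a "detail" attribute between 0 and 1 shrinks the local segment length, so high-detail regions get more samples.

    # conceptual sketch only: resample a curve with the segment length biased
    # by a per-point attribute.

    def segment_length(base_len, detail, min_scale=0.2):
        """high detail -> short segments; detail = 0 keeps the base length."""
        return base_len * (1.0 - (1.0 - min_scale) * detail)

    def resample(total_length, detail_at, base_len=1.0):
        """walk along the curve, choosing each step from the local attribute."""
        positions = [0.0]
        u = 0.0
        while u < total_length:
            u = min(u + segment_length(base_len, detail_at(u)), total_length)
            positions.append(u)
        return positions

    # hypothetical attribute: more detail near the middle of a 10-unit curve.
    detail = lambda u: max(0.0, 1.0 - abs(u - 5.0) / 2.5)

    print([round(u, 2) for u in resample(10.0, detail)])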

sort by... then by...

nested sort options.

absolute toggle for group by normal.

local handle toggle.

here's a hint at the secret behind the mysterious new technology called

'local space' (maxscript, but you get the idea):

	   
    -- maxscript: build a local frame from a face and align a handle object to it.
    lmp = $handle                           -- the handle object to align
    obj = $                                 -- the (editable mesh) object you're working on
    fc = 1                                  -- index of the face to align the handle to
    pp = #()
    zz = normalize (getFaceNormal obj fc)   -- face normal becomes local z
    apos = [0,0,0]
    avert = getFace obj fc                  -- returns the face's 3 vertex ids
    pp[1] = obj.verts[avert[1]].pos
    apos += pp[1]
    pp[2] = obj.verts[avert[2]].pos
    apos += pp[2]
    pp[3] = obj.verts[avert[3]].pos
    apos += pp[3]
    apos = apos / 3                         -- face centre
    xx = normalize (cross zz obj.dir)       -- local x, perpendicular to the normal and the object's dir
    yy = normalize (cross xx zz)            -- local y completes the frame
    lmp.transform = matrix3 xx yy -zz apos  -- orient the handle and place it at the face centre

cops

improve general stability.

time offset + fcurve in binary cops (removed since ICE).

can be done now via 2 shift nodes and 2 animated BCs, but it was

faster the old way.

fast rotosplines.

fast 2D tracking.

multiple viewers (removed since H4).

presently one cannot have one viewer at half-res on red, and another

one at quarter-res on alpha - viewing another node - unless you use

floaters. this was doable with docked viewers and worked fine back in

houdini 4.

clean viewport refresh.

no blinking, flashing, rescaling, transforming, buckets, etc. all i

should see is the old picture instantly replaced by the new.

faster than DF.

camera view for the geometry cop.

vops

multiple outputs.

i don't want to have to instance-copy bunches of nodes across several

vopnets. make each output a 'shader' and allow multiple outputs per

network. compiling to assets would be done via right-click menu over

the output instead of over the network itself.

arrays.

let's hope. but be sure to allow for differencing options where one array

is used with another of a different size - the options would be just

like those of image sequences: hold, loop, reverse, zero, fail, user value.
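
a quick plain-python sketch of those policies (illustrative only):

    # conceptual sketch only: policies for reading element i from an array
    # shorter than the one it is paired with - the same choices image
    # sequences offer (hold, loop, reverse, zero, fail, user value).

    def read(arr, i, policy="hold", user_value=None):
        n = len(arr)
        if i < n:
            return arr[i]
        if policy == "hold":          # clamp to the last element
            return arr[-1]
        if policy == "loop":          # wrap around
            return arr[i % n]
        if policy == "reverse":       # ping-pong back and forth
            period = 2 * n - 2
            j = i % period
            return arr[j if j < n else period - j]
        if policy == "zero":
            return 0
        if policy == "user":
            return user_value
        raise IndexError(i)           # "fail"

    short = [10, 20, 30]
    for policy in ("hold", "loop", "reverse", "zero"):
        print(policy, [read(short, i, policy) for i in range(7)])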

procedural shaders displayable in the viewport.

pops

geometry goal.

like follow but uses live geometry instead. must be able to weight goal

location probability by attribute.
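
the weighting part, sketched in plain python (the "attract" attribute is a made-up name): goal points with a higher attribute value get picked more often.

    # conceptual sketch only: pick a goal point on live geometry with
    # probability proportional to a per-point attribute.

    import random

    def pick_goal(points, weights, rng=random):
        """weighted random choice of a goal point; weights come from an attribute."""
        total = sum(weights)
        x = rng.uniform(0.0, total)
        running = 0.0
        for p, w in zip(points, weights):
            running += w
            if x <= running:
                return p
        return points[-1]

    goal_points = [(0, 0, 0), (5, 0, 0), (10, 0, 0)]
    attract = [0.1, 1.0, 4.0]   # rightmost point is strongly preferred

    random.seed(0)
    picks = [pick_goal(goal_points, attract) for _ in range(1000)]
    print({p: picks.count(p) for p in goal_points})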

1-click 'stick to emitter' checkbox in source.

this would be used often enough to justify hard-coding it,

has to be fast.

direct access to inherited attributes/variables.

not having to use poppoint() all the time...

faster sliding/collision with deforming geometry.

ray pop - querying geometry attributes.

must be fast or don't bother.

dops

attribute handling consistent with the rest of the app.

stitch/tear cloth by attribute.

i should be able to pre-partition a garment and glue it all together

with attributes that can be animated off and back on whenever and wherever.

fast fire/smoke - must be as fast as fumefx, or i'll use fumefx.

fast fluids - must be as fast as realflow, or i'll use realflow.

fast cloth.

must be as fast as ncloth, or i'll... um... at least try to make it sim

at <1fps while interacting with a 250kp cached character.

an in-depth tips and tricks video for the new fluid/gas solvers.

would be most appreciated.

misc

type-what-you-see attribute/variable names.

at the moment finding an available attribute/variable and typing it

correctly is guesswork. the help/extended info shows the

attributes/variables in a form that's different to what has to be typed

in - leading to many a 'WTF' moment.

available attribute/variable lookup per param.

same problem as above - finding and typing the right name is guesswork,

so give each parameter a lookup of what's actually available to it.

generic f-curve ui widget/param in vops/spare.

x and y values would be mappable. must be fast.

generic gradient-ramp ui widget/param in vops/spare.

like the one in the colour pop. x value would be mappable.
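
something like this plain-python sketch (key positions and colours are made up): the same widget with scalar values doubles as the f-curve above.

    # conceptual sketch only: a generic ramp parameter evaluated at a mappable
    # x value - here x is a normalised particle age driving colour.

    def eval_ramp(keys, x):
        """keys: sorted (position, (r, g, b)) pairs; piecewise-linear lookup."""
        if x <= keys[0][0]:
            return keys[0][1]
        for (x0, c0), (x1, c1) in zip(keys, keys[1:]):
            if x <= x1:
                t = (x - x0) / (x1 - x0)
                return tuple(a + (b - a) * t for a, b in zip(c0, c1))
        return keys[-1][1]

    # white-hot at birth, fading through orange to grey smoke.
    ramp = [
        (0.0, (1.0, 1.0, 1.0)),
        (0.3, (1.0, 0.5, 0.1)),
        (1.0, (0.3, 0.3, 0.3)),
    ]

    for age in (0.0, 0.15, 0.3, 0.65, 1.0):
        print(age, tuple(round(c, 2) for c in eval_ramp(ramp, age)))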

mplay can quickly load a sequence via filemanager.

i double-click an image in the sequence,

mplay loads the sequence,

i watch the sequence.

faster, cleaner UI.

any skinz, iconz and other gimmicky crap that slows an application

down must be entirely optional. houdini is a work application,

not some game or lifestyle app you fug-around with instead of work -

erm, well i do... but if i had invested in a license and was using it to

earn a living i'd be pretty livid if houdini was *needlessly* eating

into my time and budget - even if only a small amount. it's just wrong.

image preview pane in file open dialogs.

at the moment i need a 3rd party image browser to see what file I'm

opening because the mplay preview is so bloody slow.

faster interaction with heavy scenes.

90 million polys is about the largest thing I've worked with in max,

and it handled just fine. when I tried importing it into houdini to compare,

it froze then segfaulted.

full path in the path bar.

presently i can't see what op i'm using while modelling fullscreen,

or use the bar to navigate to another op. it was fine in H8.

bookmarks. removed since h9.

...

in summary:

make it fast where it painfully isn't... & fast in comparison to competing apps.

fill the many and varied gaps in functionality/usability left behind in the

mad rush for dynamics.


Well, it's technically not a "pass" if it's a different camera. Passes are meant to be all from the same camera. So what do you mean by this?

I think what he means is something like a feature of PRMan 13.5. Essentially it's cache data reuse that speeds up rendering from multiple cameras (typically stereo).

My list:

1, Extending HOM to cover the entire hexpression/hscript toolset plus more. Then remove Hscript*.

2, Once everything settles down, documentation is key. Also, example files for every area are necessary (maybe production-like files as well). The areas introduced in H9 especially lack these: fluids, fur, muscles, pbr, etc.

3, Bug fixes.

4, Stronger shading toolkit: instead of providing useful VOPnet examples like the current material library, please build a modular shading toolkit consisting of larger blocks that users can combine quickly to build production shaders. Use an industry-like AOV breakdown and naming convention (see what Renderman Studio shaders do by default). Also, create 2-3 supershaders out of these building blocks that cover a majority of tasks in smaller studios. If the shading toolkit stays as it is, at least provide blending between the different surface shaders attached to a material.

5, Version management: It would be great to have another layer of version management over takes that lets the user reorganise the entire scene instead of just parameters, together with the necessary user documentation attached. Take management improvements: it would be cool to have a more flexible way to organise, combine, flatten, etc. takes.

6, GPU acceleration in DOPs and/or a network-parallel fluid solver.

7, Prepackaged fur workflow. Others have mentioned a grooming toolkit; I would add a CVEX fur ubershader that supports texture mapping and tweaking of common parameters (something on the level of Maya's fur description would already be a good step).

I have 3 left to fill later :)

Edited by kodiak

I want to add one more:

*/ Allow us to combine surface and displacement shaders into a single shader. Already we can optimize for producing F from PBR renders, so we should be able to solve P and Cf separately, allowing a single shader to define Displacement, Surface Colour and BSDF. This would solve the problem of having to manage displacement imports and also vastly simplify the complications that would arise if SESI one day allows us to chain multiple shaders together. It would be another step towards supporting the concept of Materials.

I second, third, fourth and fifth this one (I'm an admin, I'm allowed ;)). I can't believe I forgot about it; the one we wrote was instrumental on Spidey. This one shouldn't even be debatable, actually...

Yeah - and a fast sphere/sphere collider. Sounds boring, but being able to stack 500,000 spheres that solve in 30s a frame is very useful.
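
A minimal sketch (plain Python, illustrative only) of the usual trick that makes sphere/sphere collision scale: a uniform spatial hash as the broad phase, so each sphere is only tested against neighbours in nearby cells instead of all N spheres.

    # Conceptual sketch only: broad-phase spatial hashing for sphere/sphere
    # collision.  Cell size is the largest sphere diameter, so any colliding
    # pair is guaranteed to sit in the same or an adjacent cell.

    from collections import defaultdict
    import itertools

    def cell_of(p, cell_size):
        return tuple(int(c // cell_size) for c in p)

    def colliding_pairs(centers, radii):
        cell_size = 2.0 * max(radii)
        grid = defaultdict(list)
        for i, c in enumerate(centers):
            grid[cell_of(c, cell_size)].append(i)

        pairs = []
        offsets = list(itertools.product((-1, 0, 1), repeat=3))
        for i, c in enumerate(centers):
            cx, cy, cz = cell_of(c, cell_size)
            for ox, oy, oz in offsets:
                for j in grid.get((cx + ox, cy + oy, cz + oz), ()):
                    if j <= i:
                        continue
                    d2 = sum((a - b) ** 2 for a, b in zip(c, centers[j]))
                    if d2 < (radii[i] + radii[j]) ** 2:
                        pairs.append((i, j))
        return pairs

    centers = [(0.0, 0.0, 0.0), (0.9, 0.0, 0.0), (5.0, 0.0, 0.0)]
    radii = [0.5, 0.5, 0.5]
    print(colliding_pairs(centers, radii))    # [(0, 1)]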


Seems that I forgot a few very important things:

1. Free-form node layout (a la nuke), going either horizontal or vertical.

2. POPs should be integrated in DOPs and the POP context removed (or hidden for a start). A single unified multi-threaded dynamics environment.

3. GPU acceleration of everything possible (cooking -- especially DOPs), shading/rendering. Drop the Cell thingie, it's a waste of time and resources, and concentrate instead on where the industry is going.

4. Export to some scene format useful across different packages (I favor Collada). Long long overdue.

Dragos


And one more, discussed since the "What you wish for H9" days:

1. Ripple editing of keyframes in the DopeSheet (that is, if I select a range of keyframes and move them on the timeline, the ones that come after them also move - see the sketch after this list). As it is now, my range of movement is limited by the keyframe next to my selection.

2. Multiple selections in the dopesheet. XSI has this and it's very good. There is no reason the dopesheet should allow only one contiguous selection.
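
For item 1, a minimal sketch of the behaviour in plain Python (illustrative only): the selected keys and everything after them move by the same delta, so nothing downstream blocks the move.

    # Conceptual sketch only: "ripple" shifting of keyframe times.

    def ripple_shift(key_times, selection_start, dt):
        """Shift the selected keys and every key after them by dt (in frames)."""
        return [t + dt if t >= selection_start else t for t in key_times]

    keys = [1, 10, 20, 30, 40]
    print(ripple_shift(keys, selection_start=10, dt=15))   # [1, 25, 35, 45, 55]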

Dragos


I'm going out on a limb here with some stuff I haven't heard yet.

- More audio support and documentation. I hear you can make a synthesizer in Houdini, but I also hear the audio stuff is being phased out.. Nooooo.

- Color Ramp type for variables. Something like Maya's ramp node.


2. POPs should be integrated in DOPs and the POP context removed (or hidden for a start). A single unified multi-threaded dynamics environment.

Dragos

This is actually true, I think. Theoretically this should happen exactly this way. My fear is always that POPs, which are frustrating but powerful, when crossed with DOPs (which is pretty obscure at times), might produce a monster ;) The DOPs workflow needs to be made more intuitive somehow...

If this is done, all the vectorfield / collision requests from earlier (from Marc and me) can be ignored, probably.

I am 100% on board, though - if they can address the myriad of POP limitations upon a reimplementation in DOPs, let's do it! POPs is probably the most essential context alongside SOPs; it's the bread and butter of Houdini, and all FX artists spend part of their day dealing with it. It needs to develop further and be made better and easier to use. It needs to support more modern particle system features too, and also needs to be sped up to handle many millions of particles better than it does. Can anyone think of additional particle features they want to see?


This would be more useful and faithful to Mantra rendering than the introduction of VOP-based OpenGL2 shader authoring IMHO, although I can see game developers wanting this as a counter-measure to Mental Mill.

I could be wrong, but game developers mostly use Cg and HLSL shaders (probably except J. Carmack), so DX support could be more important. But anyway, GLSL in VOPs would be great.

Well, it's technically not a "pass" if it's a different camera. Passes are meant to be all from the same camera. So what do you mean by this?

I'm talking about the feature for faster rendering of stereoscopic pictures in the latest PRMan.

https://renderman.pixar.com/products/news/r....5_release.html

Among the many new features are stereo rendering from a multitude of camera viewpoints in far less time than it takes to render frames in multiple-passes, new shader authoring techniques including shader objects, co-shaders, resizable arrays, and enhancements to point cloud controlled photon emission and scattering that facilitate photon mapping for a wide variety of light sources and surfaces.

3. GPU acceleration of everything possible (cooking -- especially DOPs), shading/rendering. Drop the Cell thingie, it's a waste of time and resources, and concentrate instead on where the industry is going.

4. Export to some scene format useful across different packages (I favor Collada). Long long overdue.

3. Do you think that the GPU is a good solution for a render farm? And who knows where Cell and GPUs will be in the next two years...

4. Do you really need such monsters?

Edited by hoknamahn

- More audio support and documentation. I hear you can make a synthesizer in Houdini, but I also hear the audio stuff is being phased out.. Nooooo.

Yes, you can make lots of amazing synthesizer effects, but where on earth did you hear that they are phasing out the audio stuff?! :unsure: Hold tight on the audio documentation, I'm going to release an ebook for Houdini on this topic shortly (currently being reviewed by SESI). In terms of audio support, it seems they fixed a long-overdue bug in H9.1, which is the ability to export a full-quality .wav file. While this is small in scope, it shows that people up there haven't abandoned these features, although I wish the sound material parameter wasn't in the "H8 obsolete" category.

To my knowledge everything works perfectly as it should, CHOPs- and audio-wise, in the latest Houdini (hope it stays that way or gets better).

Edited by andrewlowell

I could be wrong, but game developers mostly use Cg and HLSL shaders (probably except J. Carmack), so DX support could be more important. But anyway, GLSL in VOPs would be great.

Don't get me wrong, I would LOVE glsl/cg hardware shader rendering, mostly for previs purposes; but I wonder which development would take us further: hlsl vops or VEX rendering of texture swatches? I'd think the latter, since it's the most applicable for all the phases after previs.

I'm talking about the feature for faster rendering of stereoscopic pictures in the latest PRMan.

https://renderman.pixar.com/products/news/r....5_release.html

Among the many new features are stereo rendering from a multitude of camera viewpoints in far less time than it takes to render frames in multiple-passes, new shader authoring techniques including shader objects, co-shaders, resizable arrays, and enhancements to point cloud controlled photon emission and scattering that facilitate photon mapping for a wide variety of light sources and surfaces.

I would not hesitate in believing that if you were to commit an animated feature to being rendered in Mantra, SESI would step up to the plate and do something like this. IMHO this is such an uncommon use of Mantra that I'd rather have more globally useful tools. Perhaps reusing IPR caches for the left/right eyes could be a good basis for stereoscopic optimization?

3. Do you think that the GPU is a good solution for a render farm? And who knows where Cell and GPUs will be in the next two years...

From what I hear, Intel is looking at 80-core CPUs in the not-so-distant future - possibly treading on GPU/Cell territory.

4. Do you really need such monsters?

I agree this is needed... (not for me, though). In fact I was certain SESI insinuated that one of these solutions was coming in 9.1, but apparently not.


I haven't been keeping up with audio for a decade now. Do sound cards support that these days? What about the state of integrated audio?

Correction, 352.8 kHz

Actually, I guess a good audio-mastering standard (high-range) would be 352.8 kHz. Here's what I'm basing my judgment on: when I worked at an experimental audio mastering/post house a few years ago we were using this software ..

http://www.merging.com/2002/html/pyramix.htm

which was the only software at the time that could readily convert and edit DSD audio, a non-editable format designed for mastering and archiving. There may be others now.

Once it was "down-converted" from DSD for editing, it came out as 352.8 kHz 32-bit audio. But I think it's somewhat common knowledge that people can't tell the difference between DSD and 352k (yes, you can hear the difference between 44100 and, say, 96k, just not on bad speakers).

352.8 kHz is probably about as high as you'd ever really need to go to get taken seriously as an audio application. Most consumer sound cards can't support this, but it's probably as good a mastering standard as any. For my uses I wouldn't need a sample rate this high, but I feel very limited with synthesis techniques working at 44100 - it really doesn't handle the low range very well (high-frequency oscillations are inaccurate and sum together to form low transients). A lot of low-end audio equipment these days goes to 96k, including quite a few units for linux. Personally I have a presonus firepod.
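
A small numerical illustration of that last point (numpy, illustrative numbers): a sine above Nyquist at 44.1 kHz produces exactly the same samples as a much lower frequency, which is why high-frequency synthesis at 44.1k folds down into spurious low content; at 352.8 kHz the same tone would be represented cleanly.

    # Conceptual sketch only: aliasing at 44.1 kHz.

    import numpy as np

    fs = 44100.0
    n = np.arange(32)
    f_high = 30000.0                  # above Nyquist (22050 Hz)
    f_alias = fs - f_high             # 14100 Hz - where it actually lands

    high = np.sin(2 * np.pi * f_high * n / fs)
    alias = np.sin(2 * np.pi * f_alias * n / fs)

    print(np.allclose(high, -alias))  # True: indistinguishable apart from sign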

In any event, CHOPs isn't just for audio - it's a generic data processor, so I'm sure there are other needs as well. If the sound card doesn't support it, the user could simply not go beyond 44100 (if they even know what that means). Maybe the slider should go to 44100, but you should be able to type in a number up to, say, 352800 Hz.

So, I'd like to be able to read and write .wav files at high sample rates, and work with dense streams of data if need be.

And, I think being able to mix the levels and pan in a high-quality surround recording with virtual microphones and sound objects would be QUITE cool.

Edited by andrewlowell

In any event, CHOPs isn't just for audio - it's a generic data processor, so I'm sure there are other needs as well. If the sound card doesn't support it, the user could simply not go beyond 44100 (if they even know what that means). Maybe the slider should go to 44100, but you should be able to type in a number up to, say, 352800 Hz.

I tried that with a Wave CHOP and it seems to work.

So, I'd like to be able to read and write .wav files at high sample rates, and work with dense streams of data if need be.

Offhand, I don't see a sample rate limitation. The data depth is somewhat problematic, since there's no architecture for letting the user specify audio save options like there is for images.

EDIT: Oops, looks like audiere is the limitation :(

