About sebkaine

  • Rank
    Houdini Master
  • Community Reputation
    255 Excellent
  • Last won the day
    June 23 2018

Personal Information

  • Name
    Emmanuel Mouillet
  • Location
  1. Houdini 19 Wishlist

    Bulletproof implicit UV for FLIP solver.
  2. Houdini 19 Wishlist

    Integrate into the auto-rig tool very clean base meshes for:
    - bipeds: male / female human
    - quadrupeds: felines / dogs / horses
    Each model should have integrated by default:
    - clean skin-mesh topology
    - clean, optimised muscle geometry
    - a clean skeleton
    Like this: https://www.3dscanstore.com/ecorche-3d-models/male-and-female-écorché-bundle https://gumroad.com/nicolasmorel
    - Create a restricted-feature muscular FEM solver where everything not needed is ditched from the solve; optimise interpenetration and self-collision between muscles to get Ziva-like FEM speed.
    - Have all the muscle constraints already set up on the default skeleton base mesh.
    - Push and simplify a workflow where muscles in the muscular FEM are based on modeling in ZBrush, instead of having to draw muscles with the Houdini muscle tool.
    - Ability to export poly geo of skin / muscles / skeleton to ZBrush and edit them directly with a GoZ-like real-time connection.
    - Ability to retarget those base meshes to different types of characters.
  3. uv snapshot

    I hope someone can prove me wrong, but I think this process cannot be automated with ROPs or PDG; if you want UV snapshots of 1000 objects, you have to do it manually. Maybe Python could do it? But I don't know how.
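    A hedged sketch of how Python might batch this instead of clicking through the UV viewport 1000 times, by driving a bake-style ROP per object. The ROP path and parameter names (`/out/baketexture1`, `vm_uvobject1`, `vm_uvoutputpicture1`) are assumptions and need checking against the actual Bake Texture ROP in your build; only the job-list helper is plain Python.

```python
def snapshot_jobs(obj_paths, out_dir):
    """Pair each object path with a destination image path."""
    jobs = []
    for path in obj_paths:
        name = path.rstrip("/").split("/")[-1]  # e.g. /obj/geo1 -> geo1
        jobs.append((path, "%s/%s_uv.png" % (out_dir, name)))
    return jobs

def render_all(jobs):
    """Render every job through one ROP (only works inside a Houdini session)."""
    import hou                                   # Houdini's Python module
    rop = hou.node("/out/baketexture1")          # assumed ROP node path
    for obj_path, image_path in jobs:
        rop.parm("vm_uvobject1").set(obj_path)            # assumed parm name
        rop.parm("vm_uvoutputpicture1").set(image_path)   # assumed parm name
        rop.render()
```

    The idea is just: build the object/output pairs once, then loop the ROP over them; no manual UV snapshotting per object.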
  4. Solaris Reveal

    I hope not! Competition is great for us; I don't want to see SESI become the new Foundry. When you have a full monopoly on your market, you always tend to slide slowly toward the dark side of the Force. It would, however, be a delight to see SESI compete against Foundry's monopoly on the high-end compositing software market.
  5. Houdini 18 Wishlist

    Better integration between Houdini Engine and the crowd system in Unreal Engine. Something like Anima, but with the firepower of H. https://www.youtube.com/watch?v=KyxXKA84xXY
  6. Houdini to Unreal Tutor Needed (Fee Updated)

    Not Houdini Engine related, but I strongly recommend all the Lighting Academy tutorials by Tilmann Milde, who works at DICE, and it's 100% free. These are extremely good tutorials. https://www.youtube.com/watch?v=grN5Yd55UIM He has a Patreon if you want to support him: https://www.patreon.com/Daedalus51
  7. I understand Adrian, but I will just give you my POV.

    - Working in VR by applying the traditional VFX way of doing things to the 360 world is, imo, the wrong way to do VR.
    - Importing tons of VDB assets / gigabytes of POP caches etc. is certainly not an option in UE.
    - Precalc sucks for 360; it is an awkward way of working, and 360 movies tend to be boring except when they are rendered in 8K stereo with a good haptics setup.

    I think you must also consider another option: shift your paradigm and accept that yes, real time will give you a lot of constraints, BUT you will get an iteration power over the whole creative process that no precalc engine in the world can match, not even RS with 4x 1080Ti.

    If you are making a 360 experience and will dedicate 1 or 2 years of your life to it, working in precalc with a VFX paradigm in mind is just the wrong way. But if you start to modify some of the art direction and story to take RT restrictions into account, you will be able to treat your project as software rather than a movie, and achieve something far more impressive than you expected. You will, for example, be able to pack an experience and then distribute it on various platforms:
    - an interactive room-scale immersive movie with a Rift or Vive
    - a wire-free interactive experience with an Oculus Quest
    - an 8K stereo movie for a wider audience

    10 min of 8K at 60 fps is 36,000 frames to render. If you render one 8K frame with RS in only 5 minutes, that's 180,000 minutes for the whole movie, so 3,000 hours, or about 125 days, for one iteration of the movie. With Unreal you will be able to render your movie in one day on a single machine.

    So before you jump into a clunky Maya + Houdini + precalc RS workflow, be sure to double-test the UE and RT workflow. If after that you are sure UE is not for you, that's OK, but at least you will have really pushed the UE option. Good luck!

    Cheers, E

    EDIT: and by the way, I agree with you that streaming the Houdini viewport to an Oculus, Vive or Valve Index would be great; this has already been asked for as an RFE, fingers crossed for H18...
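    The render-time arithmetic above, as a small helper. The numbers (10-minute movie, 60 fps, 5 minutes per frame) come straight from the post; everything else is just unit conversion.

```python
def precalc_render_cost(movie_minutes, fps, minutes_per_frame):
    """Return (frame count, total hours, total days) for one full iteration."""
    frames = movie_minutes * 60 * fps          # seconds of footage * fps
    total_minutes = frames * minutes_per_frame
    hours = total_minutes / 60.0
    days = hours / 24.0
    return frames, hours, days

frames, hours, days = precalc_render_cost(10, 60, 5)
# frames = 36000, hours = 3000.0, days = 125.0
```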
  8. What about Unreal Engine + HDA + Houdini Engine?
    - you get the interactivity
    - the FPS
    - a true real-time framework
    - no pain finding a custom solution

    In case you really want it in H, you could research:
    - how OpenVR encodes positional tracking info: https://github.com/ValveSoftware/openvr
    - how to translate that info into a usable format like OSC: https://github.com/BarakChamo/OpenVR-OSC
    - how to load the data contained in that format into CHOPs; for example, the OSC loader in the Touch Designer equivalent CHOP: https://docs.derivative.ca/OpenVR_CHOP
    - connecting your CHOP channels to your 360 camera
    - is it possible to apply the Vive lens shader directly in OpenGL inside Houdini? You need to double-check.
    - the hardest part, as said previously, will be opening a GL window that streams the Houdini viewport rendered through that camera.

    Getting 90 FPS out of that will be a real challenge/miracle that needs a serious programming background. Unless you are a Russian coder, and it appears from your avatar that you are not, I would spend that R&D time on Unreal Engine and Houdini Engine practice instead. Cheers, E
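    To make the OpenVR -> OSC -> CHOP step above concrete, here is a minimal sketch of encoding tracking data as an OSC message, following the OSC 1.0 wire format (null-terminated strings padded to 4 bytes, a type-tag string, then big-endian float32 arguments). The address `/hmd/pos` and the sample position are illustrative assumptions; a real setup would use whatever addresses the OpenVR-OSC bridge and the CHOP-side listener agree on.

```python
import struct

def _pad(b):
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_float_message(address, values):
    """Encode one OSC message carrying float32 arguments (OSC 1.0 format)."""
    msg = _pad(address.encode("ascii"))                    # address pattern
    msg += _pad(("," + "f" * len(values)).encode("ascii")) # type tags, e.g. ",fff"
    for v in values:
        msg += struct.pack(">f", v)                        # big-endian float32
    return msg

# e.g. stream an HMD head position toward an OSC listener feeding CHOPs
packet = osc_float_message("/hmd/pos", (0.0, 1.5, 0.0))
```

    The resulting bytes would be sent over UDP to whatever port the OSC CHOP (or equivalent) listens on.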
  9. Houdini 18 Wishlist

    This is a script coded by a friend: https://berniebernie.fr/wiki/Houdini_Python I think it would be great to have something like this by default in Houdini, in order to get a previz image of a certain part of the node graph. It could be very useful when sharing setups between people for complex FX / rig / lighting setups.
  10. render volume in mantra in short time

    Good read by Fathom; basically I start with that for volumes:
  11. Houdini 18 Wishlist

    - GPU rendering for Mantra and the Bake Texture ROP
    - split the Mantra render mode into render integrators like PRMan, or render kernels like in Octane
    - build a Mantra PBR integrator that can only run precompiled shaders, for faster rendering
    - add a Mantra post-process node for custom LUTs / bloom / camera FX, like in Octane / Maxwell
    - ability to output a UV snapshot with the Bake Texture ROP directly
    - new UI / interaction paradigm for manual tasks like modeling / UV layout / sculpting / weight painting / blendshapes / grooming / deformers
    - nodal / inline-code GLSL shader tool to control the shaders and lights in the OpenGL viewport
    - OpenGL viewport LUT to match the Unreal / Unity display
    - controllable GLSL viewport post-process: grain / vignette / bloom / LUT
    - perfect matching between the Mantra render and the OpenGL preview, with translation of VEX noises to their GLSL counterparts
    - display shader previews from external render engines in the shader palette
    - COPs on steroids
  12. Method to generate Implicit UV on Pyro Objects ?

    Thanks for the tips, Andrii! And sorry for the delayed answer. I will try to do that fully in DOPs, see how it goes, and post back the results if they are good enough! Cheers, E
  13. Fluid surface UV question.

    Hi Paul, this might be interesting: https://www.gerbertgosch.com/rndtextureflowpt1 Basically the default workflow in H is based on the combo of:
    - rest-position activation in the fluid solver
    - rest-position reading in the shader with the rest solver
    The problem is that the rest solver is Mantra-based. I think UE gives you 4 UV channels, so there might be a way to store rest in uv3/uv4 and interpret them in UE. You can access the code of the Mantra dual-rest solver in the asset; then you can try to reproduce that logic in the material editor of Unreal. It's not an easy task to get proper UVs on fluids; you will have to do your homework.
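    For anyone reproducing the dual-rest logic in another engine, the core of it is a cross-fade between two rest fields that are periodically re-seeded half a cycle out of phase, so the field currently being shown is never too stretched. This is a hedged sketch of just that blend-weight math, not the actual SESI implementation (which lives inside the dual-rest VEX asset); `period` is an assumed reseed interval in frames.

```python
def dual_rest_weights(frame, period):
    """Return (w_rest, w_rest2): triangle-wave cross-fade weights summing to 1.

    rest2 is assumed reseeded at frame % period == 0, rest half a period later,
    so each field carries full weight mid-way through the other's cycle.
    """
    phase = (frame % period) / float(period)  # 0..1 within one reseed cycle
    w2 = abs(2.0 * phase - 1.0)               # 1 at rest's reseed, 0 mid-cycle
    return 1.0 - w2, w2
```

    In a UE material you would evaluate the same triangle wave from time and use it to lerp between the two texture lookups driven by uv3/uv4.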
  14. Method to generate Implicit UV on Pyro Objects ?

    Well, I would like to find a way to create a proper rest position, to be able to post-process the grids in DOPs and then post-process the grids directly in SHOPs. Basically the goal is always the same: get the extra detail without having to pay the full price for it at sim time. I agree with you that computers are really fast now, and a brute-force approach might now be the best thing in 2019? But to upres things in H you basically have, by default:
    - rest / rest2 + rest solver -> which is not a convincing method imo, for the reasons described previously
    - upres -> which can be an efficient / convincing way, but is also very tedious: set up the proper noise, run a second solve, and lots of disk cache is needed to save the grids
    I think it would be possible to replace those two methods (upres + displace) with a proper rest position. Actually, many great pyro guys I have asked use the gradient stuff described by Andrii; I was just curious whether some have another approach. The point-based techniques described by Mainroad always push my interest cursor to the max, maybe because I still find their pyro the best I have ever seen in H.
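    The "proper rest position" idea above usually means carrying rest coordinates through the sim by backtracing each cell along the velocity field (a semi-Lagrangian step), rather than relying on the stock rest/rest2 cycling. A minimal 1D sketch of one such step, with grid size, velocity and timestep as illustrative assumptions:

```python
def advect_rest(rest, vel, dt, dx):
    """One 1D semi-Lagrangian step: sample rest at x - v*dt, linear interp."""
    n = len(rest)
    out = []
    for i in range(n):
        x = i - vel[i] * dt / dx           # backtrace position in cell units
        x = max(0.0, min(n - 1.0, x))      # clamp at the grid boundary
        i0 = int(x)
        i1 = min(i0 + 1, n - 1)
        t = x - i0
        out.append(rest[i0] * (1.0 - t) + rest[i1] * t)  # linear interpolation
    return out
```

    In a real DOPs setup the same thing would run per-voxel in 3D (e.g. in a Gas Field VOP), with the rest field periodically re-initialised before it stretches too far.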
  15. Method to generate Implicit UV on Pyro Objects ?

    Yes, sorry for my bad English; it's more of a pulsing effect, and in the end it's not so good. It works for fast wispy smoke, but you could get the exact same idea with displace and gradient. That's why I would classify the rest/rest2 trick among the bad tricks that can sometimes save you; it remains an awkward method imo. For POPs, you just nailed it. While it sounds easy at first to advect some points on pyro to store some attributes, the hard part starts when you want:
    - a coherent point cloud, with always a correct distance between each point, to get proper sampling
    - a way to reseed and manage points when density disappears, or when an area with high divergence in the previous frame has very few points to store the data on the current frame
    Coherent spacing between points + reseeding -> that starts to sound like a FLIP thing? This concept of blending MAC-grid / particle velocities to compute high detail with points is not new; in this 9-year-old article they already talk about it: Wrenninge-CapturingThinFeaturesInSmokeSimulations.pdf