Search the Community
Showing results for tags 'Volume'.
-
Hey all, I just encountered a very strange issue I cannot seem to find a solution to. I'm using an RTX 2080 Ti with a Ryzen 9 3950X, and when I try to render a volume in Karma, no matter the light source, as soon as I increase the volume limit above 0 my volume turns blue. Even with a basic distant white light, the bounces turn the volume blue. I'm using the default Karma cloud material and my scene is as basic as can be. The weird part: when I render the same scene on my old laptop with a 1070, it works fine, which leads me to believe the issue lies in my GPU. I have just reinstalled Windows and Houdini, left every setting at default, and updated all my drivers; nothing seems to work. Any insight into why this might be happening would be great. Thanks in advance. CPU render view on 2080 Ti workstation / XPU render view on 2080 Ti workstation / XPU render view on 1070 laptop
-
Hi, we are turning some animated geometry (Alembic, non-deforming animation) into clouds, but because we want the noise of the clouds to be in local space (not affected by the animation), I am trying to work out how to apply the animation after creating the clouds, referencing the intrinsic transform of the Alembic. Using an Attribute VOP, I've managed to copy the intrinsic:packedfulltransform of the animated Alembic onto the intrinsic:transform of the static Alembic model. I am still trying to get my head around the difference between matrix3 and matrix4 for these transforms. Is there a general rule for which matrix type is used in each case? intrinsic:packedfulltransform is a matrix4, intrinsic:transform on packed primitives is a matrix3, but intrinsic:transform on VDBs is a matrix4. So what I put together works for packed geometry or plain volumes, but doesn't work with VDBs unless I pack them beforehand. Also, my understanding is that intrinsic:packedfulltransform and intrinsic:packedlocaltransform are not writable and I can only change intrinsic:transform. Is that right? Thank you
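In case a mental model helps: under Houdini's row-vector convention, a matrix3 intrinsic:transform is just the upper-left 3x3 (rotation/scale/shear) of the full matrix4, while the translation lives in the fourth row and, for packed prims, is carried by the point position instead. A VDB's matrix4 intrinsic:transform keeps both parts together. A minimal numpy sketch of that split (the values here are made up for illustration):

```python
import numpy as np

# A 4x4 packed full transform: upper-left 3x3 holds rotation/scale/shear,
# and the fourth row holds the translation (Houdini row-vector convention).
full = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [-1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [2.0, 3.0, 4.0, 1.0],
])

m3 = full[:3, :3]        # what a matrix3 intrinsic:transform can hold
translate = full[3, :3]  # goes into the packed prim's position instead
```

So copying a packed full transform onto a matrix3 intrinsic means writing the 3x3 block to intrinsic:transform and the last row to P, whereas for a VDB you can write the whole matrix4 in one go.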
-
Hello people! I'm currently trying to render a pyro explosion with motion blur in Redshift, and I haven't succeeded with any of my approaches. I tried Convert to VDB and then Merge Volume in Velocity mode with motion blur ON, but nothing works yet. I found a few HIP examples, but none of them work either. Has anybody ever done this? Too bad there's nothing about it in the official Redshift documentation either. Please help
-
Hello everyone! I have a scene with an emissive, zero-density volume rendered in Karma which is being comped over footage. The issue: since it's emission over transparent pixels, I was unable to replicate the exact look in AE. Currently I'm ignoring the transparency and using the Screen blend mode. I have read about straight and premultiplied alpha, but couldn't put those findings into practice. I know it's impossible to have volumes emit light while not blocking any light, but it would save a ton of time to render without density. Has anyone stumbled on this and found an elegant, simple solution? Essentially I want at least some sort of see-through edges and a fully opaque volume inside dense areas. Thank you very much for any help. I might want something impossible, but that's where creativity comes in :D
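For what it's worth, the compositing math behind the straight-vs-premultiplied question above can be sketched in a few lines (plain Python, illustrative only): the premultiplied "over" operation is fg + bg * (1 - alpha), and with a purely emissive, zero-density volume the alpha goes to zero, so "over" degenerates into plain addition. That is why an Add / linear-dodge transfer mode over the footage usually matches the render more closely than Screen:

```python
def over(fg_rgb, fg_alpha, bg_rgb):
    """Premultiplied 'over' of one foreground pixel on a background pixel."""
    return [f + b * (1.0 - fg_alpha) for f, b in zip(fg_rgb, bg_rgb)]

# alpha = 0 (pure emission, no density): reduces to simple addition
edge = over([0.2, 0.1, 0.0], 0.0, [0.5, 0.5, 0.5])
# alpha near 1 (dense interior): the background is fully blocked
core = over([0.2, 0.1, 0.0], 1.0, [0.5, 0.5, 0.5])
```

So one route to "see-through edges, opaque core" is a render with some density (to get a usable alpha) comped with a premultiplied over, rather than a density-free render forced through a blend mode.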
-
-
Hello, I'm trying to set up a recursive cube system that has volume to it, like the attached image https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd.it%2Frecursive-cube-subdivision-and-translation-like-an-l-system-v0-8vn5j5nu8fwa1.jpg%3Fwidth%3D640%26crop%3Dsmart%26auto%3Dwebp%26s%3D0892e696e54816de174bb3644608f5a5a71d523c I also attached an attempt file; any suggestions would be appreciated. I'm sure it's simpler than I'm making it out to be... Thanks! jl_recursiveHelp_v001.hiplc
-
-
Hi guys, in a 2D shape I need to retrieve the distance of every point to the shape's boundary, so I tried to use a (2D) SDF, but it doesn't work properly. How can I fix it? Thanks for helping. Distance To Boundries.hip
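Not the SDF route, but as a sanity check against whatever the SDF returns, the "distance of every point to the boundary" quantity is easy to compute brute force (numpy sketch; `dist_to_boundary` and its arguments are made-up names, assuming the boundary is an ordered, closed 2D polyline):

```python
import numpy as np

def dist_to_boundary(points, boundary):
    """Distance from each point to the nearest boundary segment.
    points: (n, 2) array, boundary: (m, 2) ordered loop vertices."""
    a = boundary
    b = np.roll(boundary, -1, axis=0)   # each row of (a, b) is one segment
    ab = b - a
    dmin = np.empty(len(points))
    for idx, p in enumerate(points):
        ap = p - a
        # parameter of the closest point on each segment, clamped to [0, 1]
        t = np.clip((ap * ab).sum(1) / (ab * ab).sum(1), 0.0, 1.0)
        closest = a + t[:, None] * ab
        dmin[idx] = np.linalg.norm(p - closest, axis=1).min()
    return dmin
```

For example, for a unit square the center should come back as 0.5 and a point on an edge as 0. If the SDF disagrees with this on the interior, the SDF resolution or narrow-band settings are the likely culprit.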
-
The docs say: "Scatter channel to read from the volume object. This acts as a multiplier on the scatter color, to texture the color of the volume." https://docs.arnoldrenderer.com/display/A5AFMUG/Standard+Volume What multiplication is that? 1 (colour white) * density? There is a big mess at the bottom of the cloud, so I'd like to know the formula used: what's multiplied by what? If my volume is 9 times denser and the colour is 1 (white), does that mean the denser, the brighter? I've included a comparison (top: empty channel, bottom: channel set to density).
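Reading that docs sentence literally, the only multiplication it promises is scatter color times the sampled channel value, per channel; so with a white scatter color and density as the channel, a 9x denser voxel gets a 9x larger scatter coefficient, which does tend to read as brighter until absorption dominates. How Arnold combines this internally is not public, so the sketch below shows only the stated multiplier (illustrative Python, not Arnold code):

```python
def effective_scatter(scatter_color, channel_value):
    """Per-voxel scatter tint as the docs describe it: the sampled
    channel acts as a straight per-channel multiplier on scatter color."""
    return [c * channel_value for c in scatter_color]

white_times_density = effective_scatter([1.0, 1.0, 1.0], 9.0)
tinted = effective_scatter([0.5, 1.0, 0.0], 2.0)
```

The "mess" at the bottom of the cloud is consistent with that: wherever density is near zero, the scatter term collapses to black regardless of the color you set.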
-
The title is a bit misleading... I realise pyro volume simulations don't really have a birth and death point, but what I'm hoping to achieve is to colour or alter the transparency of my fire closer to the base or emission point. Currently my fire looks really nice, apart from the base where it first emits; I'd like to make this area more transparent. I'm using the Redshift volume shader, but unfortunately cannot post a .hip file due to work non-disclosure stuff. I can create a simplified version of the scene if need be.
-
Hi, what would be the best approach to create this type of simulation (machine-gun smoke)? Any ideas or scene-file strategies would be very helpful! Video file: M1A1 Abrams Firing From Hull-Down Positions_1.mp4 Here's a GIF preview
-
Hey folks, proud to present LYNX Tools, a collection of production-proven open-source tools to accelerate your workflows! All the tools are free to download via the links below and are licensed with a full HoudiniFX license. All Houdini assets have complete Houdini-native documentation available. Repository: https://github.com/LucaScheller/VFX-LYNX Latest stable release: https://github.com/LucaScheller/VFX-LYNX/releases Please give them a test drive, so we can improve them further. Roadmap | https://github.com/LucaScheller/VFX-LYNX/projects/1 So far there are three tools:
- LYNX_force_general | Tweak your sims with this all-purpose, intuitive force field. Built to mimic force fields in other DCC applications, but with support for more features and a natural Houdini user experience. https://www.lucascheller.de/vfx/2019/03/24/vfx-lynx-houdini-force-general-asset/
- LYNX_fabric | Create fabric/weave patterns with ease. Perfect for creating that holiday sweater you never wanted. https://www.lucascheller.de/vfx/2019/03/16/vfx-lynx-houdini-fabric-asset/
- LYNX_velocity | Get control of your velocities via an intuitive UI, or groom them for absolutely fine-tuned control. https://www.lucascheller.de/vfx/2018/11/01/vfx-lynx-houdini-velocity-asset/
More info on my website, including full release logs: https://www.lucascheller.de/blog/ Houdini User Group Munich presentation: https://vimeo.com/334887452
-
Hello, I've encountered an issue while rendering a pyro sim split into multiple domains. The best example is the shelf tool's Smoke Trail effect; I've added an additional velocity field and several microsolvers to it. smokeTrail_02.mp4 Is this a render glitch or a pyro issue? smokeTrail_02_RNDR_VRAY_Issue.mp4 Renderer: V-Ray. To check, I cached out a VDB and imported it into Blender, which was quite interesting, as the exported VDB cache only contained the first part of the (split) domain. The screenshot of the VDB is of the default Smoke Trail shelf tool with no customization; I did this to check whether my custom setup was causing the issue. If anyone has any idea how to fix either problem (the V-Ray rendering or the VDB caching), please help me out.
-
Hi, I am trying to render a volume that has been rasterized from a bunch of points, and that volume is in constant motion. The camera pans quickly from left to right, tracking the moving volume. I have a static test camera, and if I do a test render through it, the motion blur looks correct. But when I render through my shot's animated camera, the motion blur is messed up: the volume looks like a blurry blob with no definite shape. I tried asking people I know personally; everyone suggested reducing the camera's shutter time or setting it to 0, but that removes motion blur entirely. Is there a way to make Mantra pick up only the velocity blur and not the camera blur?
-
-
I am trying to use the setAllVoxelsFromString() method but it's not working. Can someone provide an example? What does the string need to look like? I want to fill a vector volume with values I have from an FGA file. The first 9 values are the volume size and bounds; the rest are the voxel values.

import hou

node = hou.pwd()
geo = node.geometry()
filepath = node.parm('file').eval()

with open(filepath, 'r') as f:
    data = [v.strip() for v in f.read().split(',') if v.strip()]

volumesize = [int(float(v)) for v in data[0:3]]
bboxmin = [float(v) for v in data[3:6]]
bboxmax = [float(v) for v in data[6:9]]
data = data[9:]

# setAllVoxelsFromString() expects space-separated floats for ONE scalar
# volume. An FGA stores interleaved x,y,z vectors, so de-interleave the
# components into the three volumes that make up the vector field.
vol_x, vol_y, vol_z = geo.prims()[0:3]
vol_x.setAllVoxelsFromString(' '.join(data[0::3]))
vol_y.setAllVoxelsFromString(' '.join(data[1::3]))
vol_z.setAllVoxelsFromString(' '.join(data[2::3]))

Vel_field_0000.fga fga_importer.hip
-
Hi, I've been playing around with the WDAS data set cloud, and I've noticed that we Houdini users (who are not lucky enough to have access to shaders developed by giant companies) lack an overall dense (almost pyroclastic) cloud shader that shows the distinct features of puffy cumulus clouds. After a thorough search of both the web and odforce, I've seen that a few people have inquired about the same subject, but it was inconclusive. I've decided to use this post as an idea dump to possibly implement a new shader and to collect your opinions on the subject.

During the search I realized that the distinct features of dense clouds are achievable either by using a very high number of internal light bounces or by faking them. Dark edges are the most prominent feature of dense clouds, since the other effects, such as transmittance and strong forward scattering, are already studied and implemented in shaders. To assess the current state of volume rendering other than Mantra, I did a couple of tests with several renderers in Maya and also tried the new Terragen 4.2. The most beautiful (closest to real cloud lighting) is the Hyperion render. Terragen 4 clouds are also very realistic and detailed (http://terragen4.com/first-look/). Arnold seems to be the best commercial option out there and is very good at the edge darkening, but fails at detail in dark areas (which is highly visible in Hyperion); I think those areas can be compensated for with additional lights. Redshift is blazing fast but nowhere near the same realism. In Houdini I used PBR with a volume limit of 50, 5x5 samples, and volume quality 2 with a standard volume cloud shader. I was just starting to see dark edges, but rendering even a very small portion of a 1920x1080 image took me about 10 minutes (2x E5-2680 v3, 48 cores total). For speed comparison, Redshift was the fastest at a couple of minutes, Arnold took about 10 minutes, and Terragen is said to take around an hour for a very heavy cumulus cloud. No information about Hyperion, but since WDAS has a 50,000-core farm, it shouldn't be a problem. Mantra was the worst, with a projected couple of hours of total render time.

Below are the sources I found, for future reference:
- Oliver also asked about a cloud shader, and Mike Lyndon says he has implemented one before (https://forums.odforce.net/topic/17900-cloud-shader/). This is the most prominent one and the one I will be implementing.
- The thesis of Antoine Bouthors that Mike says he implemented (http://evasion.imag.fr/~Antoine.Bouthors/research/phd/thesis/thesis.pdf).
- A beautiful Mie scattering solution by Matt, and the base for my shader (http://mattebb.com/weblog/rendering-clouds-with-the-mie-phase-function/).
- odforce user Andrea was also interested in this topic and has some ideas on it (https://forums.odforce.net/topic/24831-cloud-shader/).
- The modelling aspect of clouds (https://forums.odforce.net/topic/12923-pyroclastic-noise-demystified/?page=2).
- A SIGGRAPH presentation by Magnus Wrenninge (http://magnuswrenninge.com/content/pubs/VolumetricMethodsInVisualEffects2010.pdf).
- Real-time clouds for Horizon Zero Dawn by Guerrilla Games; real-time, but with some really nice implementations (horizon zero dawn real time clouds).
- A hefty paper from Hyperion developer Yining Karl Li (https://blog.yiningkarlli.com/2017/07/spectral-and-decomposition-tracking.html).

With the directions I gathered from Mike Lyndon's post, I have started implementing the photon-map approach. I have already implemented a Henyey-Greenstein phase function and added the contribution of radiance photons. Next I will try to implement the ideas presented in Antoine's thesis, with a strong focus on cumulus cloud rendering (ch. 7). Attached is the current state of the shader (cloud_shader.rar). I am also interested in GLSL implementations for viewport and cloud-modelling tools, but I guess this post will mostly be about a SHOP-type implementation. Anyway, here is some simple GLSL implementation work, and I am open to all feedback: https://forums.odforce.net/topic/32197-glsl-pipeline-in-houdini/?tab=comments#comment-190945 I am definitely not an expert on this subject, so all input, ideas, criticism, and sources are welcome. Thank you.

Hyperion render (wdas_cloud_hyperion_render.png is Copyright 2017 Disney Enterprises, Inc. and licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License). Terragen 4 render: terragen_4_render.tif. Houdini PBR render (the little part on top took 10 minutes to render, so I left it unfinished). Arnold render. Redshift render.
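Since the Henyey-Greenstein phase function comes up above, here is a minimal reference implementation for anyone following along (plain Python; the shader version is the same standard formula in VEX/GLSL):

```python
import math

def hg_phase(cos_theta, g):
    """Henyey-Greenstein phase function.
    g in (-1, 1): g > 0 is forward scattering, g = 0 is isotropic.
    cos_theta is the cosine of the angle between incoming and
    outgoing directions."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)
```

With g = 0 it is the constant 1/(4*pi); clouds are usually rendered with strongly forward-peaked g, and Bouthors' thesis and Matt's post discuss replacing or augmenting HG with the full Mie phase function.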
-
Hi guys, I have an issue with visible banding artifacts, both in a pyro sim and in a volume generated from polygons with IsoOffset. The artifacts appear as soon as I crank up the density. The source geo for the static volume is the same as the collider geo for the pyro sim; everything is at the same (high) voxel resolution and built from the same high-res polygon base geo. Has anybody here seen this issue?
-
I have a dataset that contains animated volumetric smoke density, stored as NPZ files (numpy array save files). How can I load the density of a saved smoke simulation as an animated volume to render it? For context about the float data in the volumes: this is what simulation number 70, snapshot 150 looks like in matplotlib with a viridis color map, but the data in each voxel are simple float64 values. The other snapshots and simulations look similar. This is how you load a single NPZ file, if you want to try it with the dataset. It just produces a generic numpy float array, so for testing you could also make a tiny numpy array filled with random data, e.g. np.random.rand(3, 3, 3).

import numpy as np

def npz_to_nparray(filepath):
    """
    Load an NPZ file and return the density array.
    :param filepath: string path to the file
    :return: numpy array
    """
    data = np.load(filepath)
    data = data[data.files[0]]  # the usual juggling to get at the actual array inside
    # turn the inner array layer outside and right side up
    data = np.transpose(data, (3, 1, 0, 2))
    return data[0]

What I have tried:
- I can load the NPZ files in Python and in Houdini's Python shell as a numpy array, but I don't know how to put the data into a Houdini volume, or which file format Houdini needs for import.
- I tried to translate the NPZ files to VDB with the pyopenvdb library for Python, but after multiple days of trying I am unable to import pyopenvdb. I built the pyopenvdb library on Windows with vcpkg and CMake, but I cannot import the built package. I set up an Ubuntu VM and tried importing pyopenvdb there, but could not import it either.

tl;dr: How do I import animated smoke voxel data that is currently stored as a 4D Python array?
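Not a full answer, but one route that avoids pyopenvdb entirely: in a Python SOP, push the array into a Volume with hou.Volume.setAllVoxels(), with a Volume SOP upstream sized to the array's resolution. The only subtle part is the flattening order; as far as I can tell, setAllVoxels() expects the x index to vary fastest, which is numpy's Fortran order. A small sketch of just that reordering (to_voxel_sequence is a made-up helper name):

```python
import numpy as np

def to_voxel_sequence(density):
    """Flatten an (nx, ny, nz) array so that linear index
    i + nx * (j + ny * k) holds density[i, j, k], i.e. x varies
    fastest -- the order hou.Volume.setAllVoxels() expects."""
    return density.astype(np.float64).ravel(order='F')
```

Inside the Python SOP it would then be something like vol = hou.pwd().geometry().prims()[0] followed by vol.setAllVoxels(tuple(to_voxel_sequence(arr))), re-evaluated per frame for the animation.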
-
Hi, I tried to build a tool that lets the user deform a volume with a lattice. Here you can find my results. I found many good example files on this forum, so I thought I would share this. Hope you like it! Juraj project.hipnc
-
At 13:52, this video briefly glosses over using simple geometry to add temperature to a specific area of an explosion: This is pretty much exactly what I want to achieve. The setup looks simple enough, except for an attribute wrangle where he doesn't show what's happening. Does anyone have any idea how he might be doing this? (Any other methods are welcome too!)
-
Hello! I am building a flocking sim, and I am trying to get the turn information to propagate across the flock like a wave. I am using particles as "birds", so each particle represents a bird. I am trying to figure out how to use a ripple solver that uses a volume to broadcast the velocity information of each particle at each time step out to the rest of the flock, so that the turn information moves through the flock in a wave-like fashion. LMK if that makes sense and if you have any suggestions! Right now I just don't know how to read the velocity information of a particle into the ripple solver, and then ripple that information out to the other particles. Thanks!
-
I am happy to announce that I'll be joining a host of talented Houdini teachers, like: Debra Isaac Kate Xagoraris Beck Selmes Christopher Rutledge Justin Dykhouse Jeffy Mathew Philip Nicholas Ralabate You can learn more about my Velocity Forces class here: https://www.houdini.school/courses/hs-204-forces-building-custom-velocity-setups Teaser video for my Velocity Forces live class: https://vimeo.com/577928177
-
Hi all, I generated a mesh, which I converted into a volume, then used a box with VDB Combine to limit my volume to that box. The problem is that near the bounds the density fades away. Is there a way to avoid this and keep the true density? Thank you, mrblimack
-
Hey folks. I've been following this amazing tutorial by Tim J, but I want to learn how to add an Attribute Paint or another noise to use as a mask, so that only certain areas are affected by the volumetric noise. I'm quite new to Houdini and have no idea how! Thanks in advance.
-
Hi guys. In a smoke simulation I want to use a thin deforming geometry (a sheet) as the collision object, but instead of creating a volume I want to use "Point Velocity for Collisions". Is that possible? Thanks for helping. Smoke_CollisionPoints.hip