Popular Content

Showing most liked content on 01/05/2016 in all areas

  1. 2 points
    It would be nice if the doc system were more open, like Wikipedia, so users could add text, images, and example hip files to enhance the docs, and SESI could approve the contributions. An open task list for the docs would also help show what's missing. Often there are discussions in the forum where the answer is written, but it never makes it into the docs.
  2. 1 point
    Lighting and rendering will really help make it look real. Being able to light, render, and comp is usually a big part of being an FX TD. Initially I would work on getting the elements to feel right beforehand.

    As for approaching this shot, I would probably shoot a plate to work with first. With a good model for the bottle, I would put a lot of work into getting the prefracture right; getting broken glass to look right is a great challenge in itself. Then you have two options: either animate the bottle being thrown, or use a sim to get the motion down. I personally would then focus on getting the RBD fracture sim to look great. When your fracture is looking good, I would bring in the FLIP sim and try to get that looking good using the RBD sim as a collision source. With that working, I would try to push it further and have the FLIP also affect the broken glass pieces. With all of that working, then comes the fire. In my experience, small detailed fire is often more difficult to get looking right than a big explosion, and it will take a lot of work to get the FLIP sim working as a fuel source.

    Now comes the lighting and shading. To get the shot to look great, I would shoot an HDR map at the same time as filming the plate; having an HDR will make getting a realistic look much easier. Since the fire will also be acting as a light, I personally would start with getting the fire to look great. Then it is up to you whether to tackle the water or the glass next. Both can be very difficult to get right with all of the refractions and reflections, but thankfully the odforce community can help you with any problems. When rendering, there are a couple of things to add that really help sell it: first, a shadow map to use on the plate, and second, a light map of the fire, as it will also light up the wall.

    Then comes the compositing. I will be honest and say that I am not the strongest compositor, but there are some great tutorials for it. As for tutorials, personally I have found that cgworkshop tutorials are the best, then FXPHD, then CMIVFX, and lastly Digital Tutors. I would also go through a bunch of Peter Quint's tutorials on Vimeo; they are very helpful. If you have any questions or need some feedback on your work, feel free to send me a message and I will try to help you as best I can.
  3. 1 point
    I would recommend trying something that requires two different effects working together, like a bottle of wine breaking: you would have the RBD sim of the glass and then the FLIP sim of the wine. A shot of this scale is great because it is challenging to get photoreal, but being of a smaller scale it lets you do faster iterations. One of the nicer student shots I saw was a molotov cocktail being thrown against a wall; it was great seeing the RBD, FLIP, and pyro all working together. There are lots of options, and if you need ideas I would look through the YouTube channel Slow Mo Guys. They have a wide variety of tests that would be great for a demo reel. One thing to keep in mind when putting together your demo reel: it is pass or fail. Either your shot works or it doesn't; employers don't care that you tried something big if in the end it didn't look great.
  4. 1 point
    I've done a simple setup: advecting particles by a pyro sim, computing the gradient of the density field and transferring the gradient to the particles, then normalizing the gradient attribute on the particles.

    SOP solver setup: Volume Analysis (compute gradient), Attribute from Volume (transfer gradient to particles), Attribute VOP (normalize).

    Gas microsolvers setup: Gas Match Field (create a grad field from density), Gas Analysis (compute gradient), Gas Field to Particle (transfer to particles), Geometry VOP (normalize).

    Simulation length: 50 frames. The SOP solver took 5.122 seconds; the gas microsolvers took 3.992 seconds. The difference in simulation time is not big, probably because of the small number of simulation frames, the low resolution of the volumes, and the low number of particles. But even now the microsolvers setup is a little bit faster, and if I cranked up all those settings the difference would be greater. Also, some microsolvers can run on the GPU, so I guess that would increase the difference too.

    Another thing crossed my mind: is it possible to modify fields in a SOP solver? I can import fields into a SOP solver using a DOP Import, but how can I export them back to the DOP network?
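    For reference, here is a minimal Python SOP sketch of the same gradient-transfer idea, using central differences instead of the Volume Analysis node. The volume path, the assumption that the first primitive is the density volume, and the "grad" attribute name are all hypothetical:

        # Minimal sketch: per-point density gradient via central differences.
        # Assumes the density volume is the first primitive of a node at
        # /obj/pyro_import/OUT (hypothetical path). hou is implicit in a Python SOP.
        geo = hou.pwd().geometry()

        density = hou.node("/obj/pyro_import/OUT").geometry().prims()[0]  # hou.Volume
        grad = geo.addAttrib(hou.attribType.Point, "grad", (0.0, 0.0, 0.0))

        eps = 0.01  # finite-difference step; roughly the voxel size
        for pt in geo.points():
            p = pt.position()
            # Central differences of the density field give the gradient direction.
            g = hou.Vector3(
                density.sample(p + hou.Vector3(eps, 0, 0)) - density.sample(p - hou.Vector3(eps, 0, 0)),
                density.sample(p + hou.Vector3(0, eps, 0)) - density.sample(p - hou.Vector3(0, eps, 0)),
                density.sample(p + hou.Vector3(0, 0, eps)) - density.sample(p - hou.Vector3(0, 0, eps)),
            )
            if g.length() > 0.0:
                g = g.normalized()
            pt.setAttribValue(grad, (g[0], g[1], g[2]))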
  5. 1 point
    Python expressions used in parameters are implicitly functions, so in your case you would have to "return 0/1" instead of "print":

        (...)
        if exist == 0:
            return 0
        else:
            return 1

    A couple of random thoughts about that.

    - I kind of dislike solving such a problem with a Python script embedded into parameters. Not sure if this is prejudice or a rational concern. First, at least as of 15.0.322 you will have a problem with refreshing, which I'm not sure how to solve. Putting hou.frame() into the code makes it time dependent, so it will refresh on time change, but it won't force a recook in place. Second, this is a non-procedural, hardcoded solution...

    - The oldschool way would be to use an Object Merge SOP with a /obj/startwith_key* parameter, and in the Switch SOP use an expression like the one below to check whether it has imported anything. This obviously doesn't work for many cases though...

        if(npoints("../objectmerge") > 0, 1, 0)

    Not to mention it also has a problem with refreshing once an import error occurs (which is new behaviour, as I am pretty sure that in some undefined old version of Houdini, Object Merge SOPs would nicely recover after an error)...

    - I think my main concern is that it seems inelegant to drive SOP-level parameters with OBJ-level queries about nodes' names. SOPs should worry about points/prims/attributes, not random objects' names. Perhaps I'm too pedantic, but such a setup usually indicates there is some design flaw in the scene.
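    For completeness, a minimal version of the whole parameter expression might look like this (the /obj location and the startwith_key* pattern are just the example from above):

        # Python parameter expression: the body is implicitly a function,
        # so "return" feeds the parameter its value.
        # hou.Node.glob() matches child nodes by pattern, mirroring the
        # /obj/startwith_key* example above.
        matches = hou.node("/obj").glob("startwith_key*")
        return 1 if len(matches) > 0 else 0

    The refresh caveat above still applies; this only tidies the expression.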
  6. 1 point
    This project turned into more than I wanted. I thought I could just pull down the data and view the surface of the Moon up close, but calculation times are long. For the LDEM_128 12.8x7.2 degree tile, it took my single-core Python script 40 minutes to calculate on my 4.4 GHz AMD machine. If I make the tile too big I can easily exceed my 24 GB memory limit, and then it really slows down into hours and hours of calculating the surface. The LDEM_4 data set calculates fairly quickly, however. Render times are not that bad.

    I am creating the surface from scratch in Python. I tried the displacement approach early on but found it took much longer to use a series of Houdini nodes than to simply contain it all in a single script. Yesterday I tried out a hybrid Python/VEX approach where I used Python only to read the DEM data and generate points, storing the height information as an attribute on those points. Then I dropped down an Attribute Wrangle and used VEX to scan the points and create the faces. While I did see an improvement in CPU usage (up from 14% to 60%), the overall time to create the geometry was about the same, so I dropped back to just using Python for the entire generation. I left the VEX code in the HIP file; if you want to play around with this technique, simply set projection_type=2 (bottom of the Python code) and activate the Attribute Wrangle.

    There are gigs and gigs of data to pull down from NASA if I wanted to assemble the entire surface, and I only have a small SSD drive at this time, so generating a complete tile set is still on the to-do list. I did manage to generate a complete Moon surface as a .bgeo model from the LDEM_16 data set. This resulted in a 537 MB model which would be good for any distant shot, but once you get too close to the surface, the detail is lost, as you can see in the planet-side shots from the LDEM_128 data above.

    The data sets LDEM_4_FLOAT, LDEM_8_FLOAT, LDEM_16_FLOAT, LDEM_64_FLOAT, and LDEM_128_FLOAT contain the entire surface of the Moon in a single data set. This is convenient for my current code because I can specify any latitude or longitude within a single file. The higher-resolution data sets are broken into sections of the Moon that cover only a portion of the latitude and longitude range, meaning my scanner, in its current form, cannot cross a boundary and fetch data from companion files yet. I do have a basic line scanner which allows you to view a small window into the highest-resolution data (256, 512, and 1024), but the area that fits within my computer is quite small (0.6 degrees by 0.3 degrees). Set projection_type=1 in the Python code if you want to use this experimental approach to viewing large data sets.

    If anyone wants to play around with the code, I am posting it here along with the LDEM_4 data set, which is the lowest-quality Moon data. For best results, start off with small sections of latitude and longitude and increase the range as you observe how long it takes to calculate any given range. Beware, there are rules for the latitude and longitude To/From parameters, and the code will break if these rules are broken. Additional higher-quality data sets that are compatible with this code can be downloaded here. Remember to download both the .IMG and .LBL files; the .LBL is the descriptor that informs the Python code how to read the .IMG file. Have fun, and post any Moon pictures you make with this!

    atoms_houdini_moon_scanner.zip
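    If you just want the core idea, here is a minimal Python SOP sketch of the read-and-plant-points step. The file path and the row/column counts are hypothetical placeholders (the real dimensions come from the .LBL descriptor), and it assumes a _FLOAT data set stored as little-endian 32-bit floats on a little-endian host:

        # Minimal sketch of the DEM-to-points step in a Python SOP.
        # IMG_PATH and ROWS/COLS are placeholders; read the real values
        # from the matching .LBL file. hou is implicit in a Python SOP.
        import array

        IMG_PATH = "LDEM_4_FLOAT.IMG"  # hypothetical local path
        ROWS, COLS = 720, 1440         # hypothetical; taken from the .LBL

        geo = hou.pwd().geometry()
        height = geo.addAttrib(hou.attribType.Point, "height", 0.0)

        data = array.array("f")
        with open(IMG_PATH, "rb") as f:
            data.fromfile(f, ROWS * COLS)

        # One point per sample on a flat grid; the height drives Y.
        for row in range(ROWS):
            for col in range(COLS):
                h = data[row * COLS + col]
                pt = geo.createPoint()
                pt.setPosition(hou.Vector3(float(col), h, float(row)))
                pt.setAttribValue(height, h)

    From there, the face-building pass (in Python or the VEX wrangle mentioned above) connects neighbouring points into quads.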
  7. 1 point
    OK, I found it, after 2 days without any reply. How to create an orient constraint in Houdini CHOPs:

    - Create 3 objects: A, B, and C.
    - Parent B to A (A is the parent and B is the child). C will control the rotation of B.
    - Go to the CHOP layout and create 2 nodes: an Object CHOP and an Export CHOP.
    - In the Object CHOP, put A in "Reference Object" and C in "Target Object".
    - Set whatever you want in "Compute"; in this case, "Rotation".
    - In the Export CHOP, put B in "Node" and "rx ry rz" (or "r?") in "Path".
    - Turn on the export flag of the Export CHOP, and that's it.

    Now just move A and you will see the results. Just play with the other modes of the Object CHOP to obtain different kinds of constraints. Hope you find it helpful!
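    The same network can also be built with a few lines of Python. A minimal sketch follows; the node type names are standard, but the parameter names are assumptions based on the UI labels, so middle-click each parameter label in your build to confirm the internal names:

        # Minimal sketch: build the orient-constraint CHOP setup with Python.
        # Parameter names below are ASSUMED from the UI labels; verify them
        # by middle-clicking the parameter labels in Houdini.
        obj = hou.node("/obj")
        a = obj.createNode("geo", "A")
        b = obj.createNode("geo", "B")
        c = obj.createNode("geo", "C")
        b.setFirstInput(a)  # parent B to A

        chopnet = obj.createNode("chopnet", "constraints")
        orient = chopnet.createNode("object", "orient")  # Object CHOP
        export = chopnet.createNode("export", "to_B")    # Export CHOP
        export.setFirstInput(orient)

        orient.parm("referenceobject").set(a.path())  # "Reference Object" (assumed name)
        orient.parm("targetobject").set(c.path())     # "Target Object" (assumed name)
        export.parm("nodepath").set(b.path())         # "Node" (assumed name)
        # Set "Path" to "r?" in the Export CHOP's UI, then enable its export flag:
        export.setExportFlag(True)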