Leaderboard

Popular Content

Showing content with the highest reputation on 09/06/2012 in all areas

  1. The different vector types do indeed behave differently, at least under transforms with non-uniform scaling. I'd guess they may also differ in UI behaviour/GUI widgets. Here's a small demonstration of how vectors behave when they are declared as different types. PS. Menoz should have an official word on this. vector_types.hip
    1 point
  2. Might be confusing to debug if a float3 sets a vector3 value, I'd think.
    1 point
  3. Uhmm, I have a vague recollection that it can matter in a shading context... Vectors can behave differently from 3-floats when you do maths on them: they are members of a vector space, though I'm not sure how far that applies to what goes on in Houdini. Basically I'd say it doesn't matter, but if you get weird and seemingly inexplicable behaviour, then check your types and if/how they transform. It's one of those gotchas when you write shaders.
    1 point
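A numeric sketch of the point above, in plain Python (a hypothetical standalone example, not Houdini's actual code and independent of the attached .hip file): under a non-uniform scale, a tangent direction transforms with the matrix itself, while a surface normal must use the matrix's inverse transpose to stay perpendicular. Three untyped floats, by contrast, would simply be left alone. This is the kind of type-dependent transform behaviour being discussed:

```python
# Why typed vectors matter under non-uniform scaling (illustrative only):
# a tangent transforms with M, a normal must use M's inverse transpose.

def mat_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Non-uniform scale: x doubled, y and z unchanged.
M = [[2.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
# Inverse transpose of M (diagonal matrix, so just reciprocals).
M_inv_T = [[0.5, 0.0, 0.0],
           [0.0, 1.0, 0.0],
           [0.0, 0.0, 1.0]]

tangent = [1.0, 1.0, 0.0]   # a direction lying in the surface
normal = [1.0, -1.0, 0.0]   # perpendicular to the tangent

# Transforming the normal like an ordinary vector breaks perpendicularity:
print(dot(mat_vec(M, tangent), mat_vec(M, normal)))        # 3.0, not 0
# Using the inverse transpose keeps it perpendicular:
print(dot(mat_vec(M, tangent), mat_vec(M_inv_T, normal)))  # 0.0
```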
  4. I think it does matter. "sizex" implies an axis, whereas "size1" does not. Also, "sizex" is easier to work with in numbered multiparameters; for example, "sizex#".
    1 point
  5. (Within my limited knowledge and experience) what I find interesting is their philosophy: bullet points of what artists mostly need today. Hardware power is on its way, while the workflow is crucial, IMHO. Also, smart memory and power management, recalculating only what is affected by changes (at least that's what they said), is the most a developer can give to an artist. The Clarisse makers seem to have their focus and ideas clear, but it is obvious that the product they are demoing is simply a scene manager and a renderer, with very limited instancing and variations compared to Houdini, basic shading, a pass manager, and who knows what else. What I find useful in seeing that, and in bringing it here, is the idea of bringing the compositing context of Houdini back to life and making it more tightly connected with the other contexts. Seeing what's happening in Clarisse, I believe it would be very useful if the compositing context were used heavily for preparing and testing passes before sending them to the render process. Let's say the scene and animations are finished, and the objects are separately textured and shaded. Now comes the final 'meeting', where all the things should make sense. Let's open IPR and the Pre-Compositor. This last might have bundles, a takes manager, light linkers, a huge list of general shaders ready to be used or created (mattes, AO, irradiance, shadow catchers, a collector of custom shaders' exported variables, etcetera), pre/post-processing/compositing networks (to test final results, apply filters if necessary, and recollect and build final shots), a master render properties director, a queue manager to throw passes to... and an integrated mp3 player. Now let me smoke another one for other even more 'brilliant' ideas =) Cheers
    1 point
  6. Hi! I don't know if I'm posting this in the right place, so if it's wrong, please move it to the right one. A year ago I was writing my Bachelor of Science thesis, "Lex Systems - specification of high-level character processing language". I have been working for a long time on this programming language, which can generate L-Systems very efficiently and is a lot more powerful than standard L-Systems processors. Technically, it allows for the definition of almost all languages of Chomsky's Type-2 and Type-3 grammars. The results I got were really good. I was planning to post them in a couple of months, because I wanted to start with my blog or a small website, but the reality was horrible: I am working as a technical director in one of the film studios in Poland and in parallel I'm developing several open and closed source projects, so I've got so much work that this website will probably not come alive for a long time. The following results are a year old, but that really does not matter. Lex Systems is a variant of formal grammar, but more universal than L-Systems. It even allows for writing standalone programs that perform computation, implement complex algorithms, and of course generate L-Systems. The most powerful feature is its syntax, which is very easy to learn (a lot more straightforward than L-Systems syntax, though similar) and gives users a lot more control over execution and data flow. This syntax is powerful and simple - rules in Lex-Systems can be as short as in L-Systems. What is really interesting is that my compiler of Lex-Systems is a LOT faster than Houdini's (I tested it against Houdini 9, 10, 11, 12 and 12.1 - almost the same results). It is written in C and optimized to get really good results and be stable even for large files. It seems that my solution has a much lower computational complexity than Houdini's. I, and a lot of the people who were reading and rating my work, have tested whether the results are correct.
They are (we tested whether the resulting geometry is the same; because of the size of the geometry, the tests were executed on a Dell workstation with 2x 3.2 GHz Xeons and 64 GB of RAM). So let's put some facts in this post! I was testing the generation of L-Systems, not the drawing process in the viewport! The tests were run against Houdini batch and Houdini Master (FX) without displaying the results in the viewport (for example, by telling only the node to cook). Some tests:

premise: FFFA
rules: A -> !"////////B
       B -> &FFFA

For the first 14 generations the results are similar, but the next generations give real feedback about what's going on:

generation 17 - Houdini: 0.84s, Lex-Systems: 0.29s
generation 18 - Houdini: 10.65s, Lex-Systems: 0.53s
generation 19 - Houdini: 24.25s, Lex-Systems: 0.87s
generation 20 - Houdini: 144.61s, Lex-Systems: 1.58s
generation 17 - Houdini: 345.49s, Lex-Systems: 2.58s
generation 21 - Houdini: 2,013.52s, Lex-Systems: 4.76s
generation 22 - Houdini: 3,887.33s, Lex-Systems: 7.28s
generation 23 - Houdini: crash, Lex-Systems: 14.28s

Please notice that the 22nd generation took Houdini about 1 hour to process, and Lex-Systems only about 7 seconds to get the same results!

Another example:

premise: F+F+F+F
rules: F -> F-F+F+F-F+F-F-F+F

generation 3 - Houdini: ~2*10^(-3)s, Lex-Systems: ~2*10^(-3)s
generation 4 - Houdini: 2.7*10^(-2)s, Lex-Systems: 4*10^(-3)s
generation 5 - Houdini: 1.01s, Lex-Systems: 2.4*10^(-2)s
generation 6 - Houdini: 371.5s, Lex-Systems: 0.2s
generation 7 - Houdini: after 7 hours we got 80% of the generation process, Lex-Systems: 1.76s
generation 8 - Houdini: impossible, Lex-Systems: 15.97s
generation 9 - Houdini: impossible, Lex-Systems: 145.77s

Additionally, I created 2 more tools: a plugin for Houdini, and a Python translator from the L-System language to the Lex-Systems language (it has less than 100 lines of code, because the basics of Lex-Systems are similar to L-Systems). As a result, all my previous tools that were using L-Systems now work A LOT faster and allow for animating L-Systems parameters really quickly. Due to some limitations in the license of my thesis I cannot make it public (I have to talk about it with my university, but I haven't got much time). I would be happy to hear some feedback! If you would like, I will post some more info! Thank you, Wojciech Daniło
    0 points
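For readers without Houdini at hand, a minimal string rewriter (a toy sketch of the classic parallel-rewriting algorithm, not the author's Lex-Systems compiler) makes the growth behind the benchmarks above concrete: with the rule F -> F-F+F+F-F+F-F-F+F, every F expands to nine F's, so the string length grows roughly ninefold per generation, which is why generations past ~7 become enormous for any implementation:

```python
# Toy L-system expander (illustrative only; NOT the Lex-Systems compiler
# from the post). Each generation, every symbol that has a production
# rule is replaced by its right-hand side, in parallel.

def rewrite(premise, rules, generations):
    """Expand the premise for the given number of generations."""
    s = premise
    for _ in range(generations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Second benchmark from the post: each F spawns nine F's per generation.
rules = {"F": "F-F+F+F-F+F-F-F+F"}
for g in range(5):
    print("generation", g, "length", len(rewrite("F+F+F+F", rules, g)))
# Lengths: 7, 71, 647, 5831, 52487 -- roughly x9 per generation.
```

The same function handles the first benchmark's two-rule grammar (A -> !"////////B, B -> &FFFA) by adding both rules to the dictionary; symbols without a rule, like F here, are copied through unchanged.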
  7. About killing Nuke - no. About what SESI could do, purely in terms of procedural modeling and rendering, over many years - maybe yes... let's say 5.
    -1 points
  8. I've come across a bunch of people who were former SESI interns. Seems like they make good TDs in the end.
    -1 points