Guest tar, October 25, 2015:
Have you tried the GL ROP in H15? The more serious issues with it have been resolved (transparent objects, material issues, background image support). Definitely appreciate the work done there; it was the second thing to check after the FEM improvements. Motion blur and DOF are just as critical to bring it up to the flipbook level.
malexander, October 25, 2015:
The current flipbook DOF and motion blur are built into the flipbooking system, not the viewport. They also predate any sort of GL shader work, so there are much better ways to do both (quality- and performance-wise). It's on the todo list; it lost out to onion skinning this time around.
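For context on what a shader-based DOF pass computes: this is not how Houdini's flipbook does it (the post above says that predates the GL shader work), but a typical post-process depth of field is driven by the thin-lens circle of confusion. A minimal sketch of that per-pixel blur-radius math, with all parameter names and values illustrative:

```python
# Hypothetical sketch: circle of confusion (CoC) for a post-process
# depth-of-field pass, using the thin-lens model. Units are scene
# units; the function name and parameters are illustrative only.

def circle_of_confusion(depth, focus_dist, focal_len, aperture):
    """Diameter of the blur circle for a point at `depth`.

    `aperture` is the lens diameter (focal length / f-stop).
    Thin-lens CoC: A * f * |d - s| / (d * (s - f)),
    where s is the focus distance and d the point's depth.
    """
    return abs(aperture * focal_len * (depth - focus_dist)
               / (depth * (focus_dist - focal_len)))

# Points at the focus distance are perfectly sharp...
print(circle_of_confusion(5.0, 5.0, 0.05, 0.025))  # 0.0
# ...and the blur circle grows as points move away from it,
# on both the near and far side of the focal plane.
near = circle_of_confusion(2.0, 5.0, 0.05, 0.025)
far = circle_of_confusion(20.0, 5.0, 0.05, 0.025)
print(near > 0.0 and far > 0.0)  # True
```

In a GL implementation this value would be computed per pixel from the depth buffer and used as the blur kernel radius in a separate gather pass.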
lisux, October 30, 2015:
If at some point it were possible to get a decent approximation of the Mantra render in the viewport, that would be great for blocking the lighting and approving it before fully rendering the shot in Mantra. This would be so useful.
Guest tar, October 30, 2015:
This is why a master class on the viewport/GLSL would be wicked: GLSL can do so much, and a kickstart would be great to get the community going.
sebkaine, October 30, 2015 (edited):
When you see what the Sketchfab guys can display in a WebGL viewport ( https://labs.sketchfab.com/siggraph2014/viewer.html?model=devastator ), and that even for transparency there are cheats ( http://madebyevan.com/webgl-water/ ), I guess you can definitely get some kickass accuracy between a Mantra PBR render and an OpenGL 3.2 PBR viewport, considering that the set of possibilities offered by OpenGL 3.2 beats the very limited WebGL function set hands down. And I agree that realtime previz would beat any GPU previz stuff hands down for efficiency.
lisux, November 2, 2015:
I hope H16 will have materials with a decent representation in the viewport compared to the final render in Mantra. If at least the specular BSDF shapes are similar, it will be very useful.
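Getting the specular lobe shapes to match between a GL viewport and an offline renderer largely comes down to both evaluating the same microfacet distribution. As an illustration (not a description of Mantra's or the viewport's actual shaders), here is the widely used GGX (Trowbridge-Reitz) normal distribution term; the function name and the roughness-to-alpha mapping are common conventions, not anything confirmed by this thread:

```python
import math

# Illustrative sketch: if the viewport shader and the offline renderer
# evaluate the same microfacet normal distribution D(h), their specular
# lobes will at least share the same shape. GGX is a common choice.

def ggx_ndf(cos_nh, roughness):
    """GGX (Trowbridge-Reitz) distribution term for N.H = cos_nh.

    Uses the common alpha = roughness^2 remapping, so a2 = roughness^4.
    """
    a2 = roughness ** 4
    denom = cos_nh * cos_nh * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# The lobe peaks where the half-vector aligns with the surface normal,
# and lower roughness gives a tighter, brighter peak.
sharp = ggx_ndf(1.0, 0.1)
broad = ggx_ndf(1.0, 0.5)
print(sharp > broad)  # True
```

A GLSL port of the same few lines is trivial, which is exactly why sharing the distribution between viewport and renderer is feasible.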
tinyparticle, November 4, 2015:
If we had a "mantragpu" with the functionality of Redshift, it would be more than enough for a massive number of users, and I think it would bring many more users to the camp.
Guest tar, November 5, 2015:
An omen perhaps... "Redshift wins CG Awards' new application award". This year's runners-up were Autodesk Memento, The Foundry's NUKE STUDIO and Houdini Indie.
https://www.redshift3d.com/blog/redshift-wins-cg-awards-new-application-award
Mandrake0, November 6, 2015 (edited):
http://www.creativebloq.com/3d/cg-awards-winners-2015-revealed-111517613
Under the 3D World Hall of Fame, Kim Davidson is on the list :-)
Erik_JE, November 6, 2015:
Good that Nuke Studio didn't win; bugfest galore.
Mandrake0, April 15, 2016:
An update about Xeon Phi:
Article: http://www.extremetech.com/extreme/226604-intels-next-generation-xeon-phi-knights-landing-now-shipping-to-developers
Intel Xeon Phi dev kit: http://dap.xeonphi.com/
Mandrake0, January 6, 2017:
Xeon Phi update:
Article (with video): https://www.servethehome.com/intel-xeon-phi-x200-knights-landing-boots-windows/
lukeiamyourfather, January 6, 2017:
So four Phi cards that cost around $10K (not including the rest of the computer) are on par with really old Xeon processors. I don't get why anyone would want to do this.
Mandrake0, January 7, 2017 (edited):
In the Cinebench run, the renderer wasn't optimized (compiled) for the Xeon Phi, and some optimization options are still open, so it didn't show the card's full power. On page 19 there is a comparison of two systems with the Embree renderer: https://embree.github.io/data/embree-siggraph-2016-final.pdf
If the renderer is optimized, it will be in the same speed range as a 2x E5-2699 (2.3 GHz / 18-core) system.
malexander, January 7, 2017:
That's the thing with these massively parallel hardware architectures: you pretty much have to re-express your algorithm on their terms, or you just don't see much improvement. Rewriting C++ as OpenCL is a good start.
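To make "re-expressing the algorithm" concrete: the serial form walks the whole array in one loop, while the data-parallel form writes the body for a single element and lets the runtime launch it over the index range. A small sketch (the Python loop below stands in for the parallel launch; the embedded OpenCL kernel is illustrative source text only and is not compiled or run here, since that needs an OpenCL runtime and device):

```python
# Serial, C++-style formulation: one thread of control does everything.
def saxpy_serial(a, xs, ys):
    out = []
    for x, y in zip(xs, ys):
        out.append(a * x + y)
    return out

# Data-parallel formulation: the body is written for ONE element;
# this is what each OpenCL work-item would execute.
def saxpy_kernel(i, a, xs, ys, out):
    out[i] = a * xs[i] + ys[i]

# The same kernel expressed in OpenCL C (illustrative, not executed).
SAXPY_CL = """
__kernel void saxpy(const float a,
                    __global const float* x,
                    __global const float* y,
                    __global float* out) {
    int i = get_global_id(0);   /* one work-item per element */
    out[i] = a * x[i] + y[i];
}
"""

xs, ys = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]
out = [0.0] * 3
for i in range(3):              # stand-in for the parallel launch
    saxpy_kernel(i, 2.0, xs, ys, out)
print(out)                      # [12.0, 24.0, 36.0]
print(out == saxpy_serial(2.0, xs, ys))  # True
```

The point of the reformulation is that each element's result is independent, which is exactly the property the massively parallel hardware needs to keep thousands of lanes busy.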