
gridMoth

Members
  • Content count

    8
  • Joined

  • Last visited

Community Reputation

0 Neutral

About gridMoth

  • Rank
    Peon

Personal Information

  • Name
    Ryan
  • Location
    Indiana

Recent Profile Visitors

164 profile views
  1. Upcoming Build- future AMD vs Intel

    Wanted to get back right quick to say thanks for the replies @malexander & @lukeiamyourfather. Have a couple follow-up questions myself, but will have to revisit this in a couple days. Might be able to narrow my focus anyhow after marty's question.
  2. Upcoming Build- future AMD vs Intel

    Thanks for the links & nuggets @marty. Good stuff. Reading up on the compiled blocks now, and looking forward to diving into all the AVX-512 material this weekend. Will also look into the PCIe lanes, but I think you may be partially right. Latency-type stuff is another area I really want to look at. Feeling pretty good about the direction I'm headed after your input. I have so many more questions that I hope you and other smart folks in the forum will help me with in the coming weeks/months. But I've done enough pestering for one day, I think. Super grateful.
  3. Upcoming Build- future AMD vs Intel

    For sure, I'm afraid Intel will drop prices the day after I purchase from them...but a win in the long run, hopefully. Thanks for the reply.
  4. Upcoming Build- future AMD vs Intel

    Must research this...thanks. Appreciate ya taking the time to respond—truly. It's all so conflicting in many ways. My goal is to have both GPU and CPU goodness. I have around a $10k budget and absolutely can't afford to go wrong here. Ideally I'd like to be able to render in either Arnold or Redshift, depending on the project/time constraints etc. I sorta thought Houdini was edging closer to more ops really taking advantage of multi-threading... I see your point with OpenCL and GPU moving forward. In some aspects, I want computationally heavier algorithmic stuff to become fluid in my workflow. I want R&D-type stuff to get results fast and not be bogged down, yet still have most of the generalized perks you speak of with higher single-core clock rates. So, I won't keep buggin' ya...but to clarify, given what I've mentioned: do you think it may be more beneficial to go with a good i7 and load up on GPUs instead of running dual Xeons & GPUs? I had planned on running dual E5-2690's, starting off with two 1080 Ti's and one 980 Ti... Not that I won't do my own homework, but if you have a good link or two on Houdini's future dev and/or OpenCL etc. that you think I should read, please drop me a line. Thanks again.
  5. Upcoming Build- future AMD vs Intel

    Oh wow, not sure if I glossed over that somehow. For some reason I was under the impression that information was not out yet. I know higher core counts on server-based chips typically come with slower clock speeds, but that is not at all what I thought Naples was going to be. Might be okay for a headless render box one day, but...back to sticking with my gut about Intel being the wise decision, I suppose. If others read this and can add any useful information in general about building a machine in 2017, add a bit of perspective on holding off for Intel to drop prices, or point out that the E5-2690 v4 may not be the best choice, please do. I could really use some sidefx opinions, so to speak. The 2690 seemed to me like a good balance or trade-off between clock speed (all-core boost of 3.2 GHz), core count, and thermal output for the $. [using this as a partial reference] http://bit.ly/2na7Gks
  6. Upcoming Build- future AMD vs Intel

    Okay, so let me start off by saying I know it is way too early to speak hypothetically about this stuff... but I'm just looking for some general knowledge/opinions for a build. For the past few months I've been targeting dual Xeon E5-2690's. Then came news of Naples from AMD. Is the general consensus that Intel is still going to perform better and be much more stable...say, running Arch Linux for example? All those cores with presumably a friendlier price point...just curious what someone with more experience thinks about this moving forward, as I'm looking to get into an expensive workstation. Like I said, I know we shouldn't really speculate, but if anyone has a bit of insight as to how Intel crunches data for simulations and whatnot, the stability/reliability of firmware updates vs. what may be attractive with all the Naples cores expected etc., I'd love to hear some thoughts on the architectures running Houdini, and overall workstations. Much appreciated, and apologies if this isn't in the right thread.
  7. Houdini Master Sluggish/Python Error

    Yeah, thanks Atom (I really like your work). I mean, I always have the web version open, but I'd still like to figure this out, as it can be very convenient while working with a deadline. Being a local connection, I'd think it'd be blazing fast. Also, considering sidefx just re-designed the help layout and structure in H15.5, I highly doubt it's intended to be this sluggish and to throw errors.
  8. Houdini Master Sluggish/Python Error

    Hey guys, so my Houdini help is super slow at times, other times not. I've set my PYTHON_LIB path to my Homebrew install at /usr/local/Cellar/python/2.7.11/bin/python. Setting the path to the Homebrew version seemed to work at first, as Houdini was tossing up errors in confusion with the system Python (I believe). Now I keep getting this broken pipe with the Socket Server: Attached... This is what happens on launch of the Help: May 23 16:17:07 rP-macPro.local houdinifx[50735] <Warning>: void CGSUpdateManager::log() const: conn 0x4b3ff: spurious update. Do I also need to set other env variables such as PYTHON_BIN, etc.? If anyone can help me resolve this, I'd be super appreciative... Screen Shot 2016-05-22 at 3.50.43 AM.tiff
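
    For anyone hitting the same thing, here is a minimal sketch of the environment setup described in the post. The paths come from the post itself; PYTHON_LIB is what the poster set, while PYTHON_BIN is only raised as a question there, so treat both variables as assumptions to verify against your own Houdini version and install rather than a confirmed fix.

    ```shell
    # Hedged sketch: point Houdini's help/Python at a Homebrew Python 2.7,
    # mirroring the setup described in the post above. Which variables a given
    # Houdini build actually honors may differ by version; check the docs.
    # These could be exported before launching Houdini, or placed in the
    # per-version houdini.env file (e.g. ~/houdini15.5/houdini.env on macOS).
    PYTHON_LIB="/usr/local/Cellar/python/2.7.11/bin/python"   # path from the post
    PYTHON_BIN="/usr/local/Cellar/python/2.7.11/bin"          # assumption: companion var the poster asks about
    export PYTHON_LIB PYTHON_BIN

    # Sanity-check what Houdini will see in its environment
    printf 'PYTHON_LIB=%s\n' "$PYTHON_LIB"
    printf 'PYTHON_BIN=%s\n' "$PYTHON_BIN"
    ```

    Adjust the Cellar version directory to whatever `brew info python` reports on your machine before relying on these paths.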