So I want to have a reet proper server, running on a bare-metal hypervisor... 1 permanent VM acting as a file server... then VMs for practising pen testing or forensics work going up and down as needed...
Thankfully hardware that's a couple to a few generations old is coming down in price an awful lot...
But that's left me in a quandary now, and I feel the need to ask the advice of the techy types.
Two chips are in a sweet spot price-wise; both will slot into an X99 motherboard, giving lots of space for oodles of RAM and enough drives that I won't run out soon...
The i7-5820K is 3.3 GHz (boosting to 3.6 GHz), 6 cores / 12 threads... and can be clocked to 4.5 GHz with ease.
The E5-2658 v3 is 2.2 GHz (boosting to 2.9 GHz), 12 cores / 24 threads... overclocking will at best take it to 3.2 GHz.
Normally I'd assume the more cores the better, but with about a 50% per-core speed advantage, and as I'm unlikely to stress all 12 cores fully... I'm also not sure the Xeon will make the most of an M.2 for a boot drive... the EP i7s have all the virtualisation extensions etc...
Or do I go ultra-low-budget and make a cluster of cheap quad-cores, creating a parallelised and distributed VM machine? With all the issues of having processing divided over the network, and stuff...
Comments
I have a dual-core machine with 32 GB of RAM and it runs Plex just fine with a whole bunch of other test VMs running. OK, these don't run anything intensive, but even with 4:1 over-provisioning of CPU you find you get very little slowdown.
Never over provision memory though.
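That rule of thumb can be sketched as a sanity check. The 4:1 figure and the helper names below are illustrative, not taken from any hypervisor's documentation:

```python
# Sketch of the over-provisioning rule of thumb above: vCPUs can be
# overcommitted (e.g. up to ~4:1), but the guests' combined RAM should
# never exceed what the host physically has. Illustrative only; the
# 4:1 ratio and helper names are assumptions, not hypervisor defaults.

def cpu_overcommit_ok(total_vcpus, physical_cores, max_ratio=4.0):
    return total_vcpus / physical_cores <= max_ratio

def ram_overcommit_ok(total_guest_gb, host_gb):
    # never over-provision memory
    return total_guest_gb <= host_gb

# e.g. a dual-core host with 32 GB RAM running a pile of small test VMs
vms = [(2, 4), (2, 4), (1, 2), (2, 8), (1, 2)]  # (vcpus, ram_gb) per VM
vcpus = sum(v for v, _ in vms)
ram = sum(r for _, r in vms)
print(cpu_overcommit_ok(vcpus, physical_cores=2))  # 8 vCPUs on 2 cores is 4:1, OK
print(ram_overcommit_ok(ram, host_gb=32))          # 20 GB of 32 GB, OK
```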
Personally I'd go down the old Xeon route (two of them in an HA cluster) with a NAS as shared storage, so you can do fancy things like vMotion between physical hosts.
What I mean is...don't over-spec just because you can. Can you honestly say that you'll be using 6 cores at any given moment? It's fairly unlikely, unless you're going to be running a render farm across your active virtual machines, at which point you'd be better off just running it all on a single machine natively. It's even less likely that you'll be wanting to use the full capacity of 12 cores.
Of course, you can look at it another way and think "what could I do with a lot of cores, and are those workloads better suited to fast single threads or lots of slower threads?". If you're running Plex on one of them with live transcoding, for example, lots of slower cores would be better unless you're using the VC-1 codec (which is single-threaded).
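That intuition about single-threaded codecs is basically Amdahl's law. A quick sketch of the formula (the 0.95 parallel fraction is just a made-up example of a well-threaded transcode):

```python
# Amdahl's law: the speedup from n cores when only a fraction p of the
# work can be parallelised. A single-threaded codec is the p = 0 case,
# where extra cores buy you nothing at all, whatever the core count.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

print(round(amdahl_speedup(0.95, 12), 2))  # well-parallelised transcode: 7.74x
print(amdahl_speedup(0.0, 12))             # single-threaded codec: 1.0x
```

The serial fraction dominates quickly, which is why "lots of slower cores" only pays off for genuinely parallel workloads.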
As I said...this would be an excellent project to practice your requirements analysis skills.
I'm personally a fan of getting laptops with knackered screens, putting an SSD and a decent amount of RAM in them and setting them to work. They don't use much power, don't kick out a lot of heat, effectively have a built-in UPS, and it's not wasteful in terms of landfill.
"Requirements analysis" two words sure to strike boredom into the very soul of anyone.
You're not wrong... but you could have called it something else so I didn't have flashbacks to lectures by someone with only a single tone of voice in his arsenal ... and a subject which only managed as interesting a topic as "real pictures" ...
Back to the topic...
I'm really not sure about the actual cpu usage of the VMs. In reality I'd probably not be utilising the file server much (if at all) while playing with pen testing or forensics VMs and the target VMs wouldn't be doing Prime95 levels of work...
And more cores would mean the ability to run a far more target rich environment to play with... without impacting performance on the active one...
Oh, and M.2 runs over PCIe lanes, it's just a different socket... read speeds of 1.4 GB/s should be easy... my only worry is whether old(ish, still socket 2011-v3) Xeons will be fully compatible (PCIe being at least partially managed by the CPU)
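The bandwidth side of that is easy to check on paper, assuming a PCIe 3.0 x4 link (the usual M.2 NVMe wiring):

```python
# Rough PCIe 3.0 bandwidth check for an M.2 NVMe boot drive. 8 GT/s per
# lane with 128b/130b line encoding works out to roughly 0.985 GB/s per
# lane, so an x4 link has ~3.94 GB/s of raw headroom. Assumes a PCIe 3.0
# x4 link; protocol overheads will shave a bit more off in practice.

GT_PER_LANE = 8          # PCIe 3.0 transfer rate, GT/s
ENCODING = 128 / 130     # 128b/130b line encoding efficiency
LANES = 4                # typical M.2 NVMe link width

gbps_per_lane = GT_PER_LANE * ENCODING / 8   # GB/s per lane (8 bits/byte)
x4_bandwidth = gbps_per_lane * LANES
print(round(x4_bandwidth, 2))   # ~3.94 GB/s
print(x4_bandwidth > 1.4)       # True: plenty of headroom for 1.4 GB/s reads
```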
And a Xeon isn't nearly as sexy as an enthusiast grade i7
My honest advice - since you're aiming for a career in this - would be to treat this as you would a project in a business. The difference is that you're both the project manager and the customer, so you should get a view of how an IT project should be run, given that what you're talking about isn't a trivial undertaking.
Not sexy, but probably quite eye-opening.
Well, I was referring to M.2 SATA in terms of how much work the CPU's going to be expected to handle (i.e. actually doing something with that amount of data, not whether the PCIe bus is saturated). When booting the OS, a PCIe-speed SSD will no longer be the bottleneck, given that the CPU has to do a lot of work other than loading files. That expectation is a hangover from the days when the CPU had a lot of idle time between loading files; I'd expect OS design to change in the next few years to account for the massively increased I/O bandwidth available.
Stick a sexy cooler on it, and nobody will know the difference
All this talking sense will wind up with me doing extra homework...
I mean... it all makes sense... and at least one point could make for ending up a better hacker... but... Maybe I want a shiny one...
That ought to keep you occupied for a while
GPU compute grids are the work of the devil...
http://www.ebay.co.uk/itm/nVidia-Quadro-4000-2GB-DDR5-DVI-Dual-DP-PCI-e-x16-Graphics-Card-38XNM-/201830234208?hash=item2efe04f860:g:GqoAAOSwfVpYrtVV wonder how many I could fit...
Get an AWS account. If you need a 60-core server with 200 GB of RAM for an afternoon, it's a couple of clicks away, and you can run a file server off any five-year-old piece of crap.
If I was getting into the industry now I'd be more interested in getting up to speed with AWS or Azure and learning a lot about DevOps, containers, serverless, etc.
A lot of huge companies don't even have a conventional IT department any more.
If you invest in iron then you are stuck with it, and it rapidly becomes almost completely worthless. The real thing to learn is what you describe about responding to a requirement. If you are in the business of physical servers, the servers themselves aren't actually all that interesting compared to things like secure power and cooling, fire zones and, when you get into high-availability stuff, load balancers, replication, failover and backup... but you can equally learn all that on a cloud provider without having to make a large capital outlay.

Installing a bare-metal hypervisor like VMware is a pretty trivial task that any mug could do with no training, but all the interesting enterprise-level stuff you might do is locked out unless you have some very costly software licences and at least three servers. (Once you do have all that, getting it failing over is again not much of an intellectual challenge.) You can get VMware, KVM, or Xen working on some ancient £100 POS if you just want to play about with it on a physical box.
It's worth getting into the distributed computation thing, as it is absolutely massive at the moment, but again you would be better off getting hadoop, storm, spark, etc running on a cloud provider.
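The programming model those frameworks implement can be sketched in plain Python: a word count split into map, shuffle and reduce phases. This is an illustration of the model only, not Hadoop's or Spark's actual API:

```python
from collections import defaultdict
from itertools import chain

# Toy MapReduce word count: the programming model Hadoop/Spark implement,
# in plain Python. Illustrative only; real frameworks distribute the map
# and reduce phases across a cluster and handle the shuffle for you.

def map_phase(line):
    # emit (word, 1) for every word in an input line
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # group all values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # sum the grouped counts per word
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data is big", "data is data"]
mapped = chain.from_iterable(map_phase(line) for line in lines)
counts = reduce_phase(shuffle(mapped))
print(counts)  # {'big': 2, 'data': 3, 'is': 2}
```

The point of the real frameworks is that the same three-phase structure scales out across machines without you touching the logic.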
And of course the hands-on labs are a great resource, just not so much for the base builds.
Do you know any good/similar resources for Hadoop, Storm, Spark, or the like?