Home virtualisation server...

So I want to have a reet proper server, running on a bare-metal hypervisor... one permanent VM acting as a file server, then VMs for practising pen testing or forensics work going up and down as needed...

Thankfully hardware that's a couple to a few generations old is coming down in price an awful lot...

But that's left me in a quandary now, and I feel the need to ask the advice of the techy types.

Two chips are in a sweet spot price-wise; both will slot into an X99 motherboard, giving lots of space for oodles of RAM and enough drives that I won't run out soon...

The i7-5820K is 3.3 GHz (boost to 3.6 GHz) with 6 cores / 12 threads... can be clocked to 4.5 GHz with ease.
The E5-2658 v3 is 2.2 GHz (boost to 2.9 GHz) with 12 cores / 24 threads... clocking at best will take it to 3.2 GHz.

Normally I'd assume the more cores the better, but with roughly a 50% per-core speed advantage, and as I'm unlikely to stress all 12 cores fully... I'm not sure if the Xeon will make the most of an M.2 for a boot drive... the EP i7s have all the virtualisation extensions etc...
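Putting rough numbers on that trade-off (a naive Python sketch, assuming throughput scales linearly with clock and core count, which real workloads rarely do):

```python
# Boost clocks and core counts as quoted above. "Aggregate GHz" is a
# deliberately crude proxy for total throughput; per-core speed is
# what a single busy VM actually feels.
cpus = {
    "i7-5820K": {"cores": 6, "clock_ghz": 3.6},
    "E5-2658 v3": {"cores": 12, "clock_ghz": 2.9},
}

for name, spec in cpus.items():
    aggregate = spec["cores"] * spec["clock_ghz"]
    print(f"{name}: {spec['clock_ghz']} GHz per core, "
          f"~{aggregate:.1f} GHz aggregate")
```

On that crude measure the Xeon has about 60% more aggregate grunt (~34.8 vs ~21.6), while the i7 is about 25% quicker per core; which wins depends on whether the load is lots of mostly idle VMs or one busy one.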

Or do I go ultra-low-budget and make a cluster of cheap quad-cores, creating a parallelised, distributed VM machine? With all the issues of having processing divided over the network, and stuff...

Comments

  • jellyroll Frets: 3073
    Rodney Marsh?
  • Your processing won't be distributed; a VM can only ever run on a single machine.
    I have a dual-core machine with 32GB of RAM and that runs Plex just fine with a whole bunch of other test VMs running. OK, these don't run anything intensive, but with 4:1 over-provisioning of CPU you find you get very little slowdown.
    Never over-provision memory, though.
    Personally I'd go down the old Xeon route (two of them in an HA cluster) with a NAS as shared storage, so you can do fancy things like vMotion between physical hosts.
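    That "4:1 on CPU, never on memory" rule can be sketched as a toy admission check (Python; the function name, ratio and figures are illustrative, not any hypervisor's actual API):

    ```python
    def can_place(vm_vcpus, vm_ram_gb, host_threads, host_ram_gb,
                  used_vcpus=0, used_ram_gb=0, cpu_ratio=4):
        """Toy admission check: allow up to cpu_ratio vCPUs per host
        thread, but never promise more RAM than physically exists."""
        cpu_ok = used_vcpus + vm_vcpus <= host_threads * cpu_ratio
        ram_ok = used_ram_gb + vm_ram_gb <= host_ram_gb
        return cpu_ok and ram_ok

    # A 12-thread, 32GB host at 4:1 happily takes a 4-vCPU/8GB VM...
    print(can_place(4, 8, host_threads=12, host_ram_gb=32))   # True
    # ...but refuses one wanting 64GB, however idle the CPUs are.
    print(can_place(4, 64, host_threads=12, host_ram_gb=32))  # False
    ```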
  • I'd agree with @Axe_meister - however, consider your actual usage here, just as you would in a production situation (this is a good exercise). How many of your virtual machines are likely to actually be doing work at any one time?

    What I mean is...don't over-spec just because you can. Can you honestly say that you'll be using 6 cores at any given moment? It's fairly unlikely, unless you're going to be running a render farm across your active virtual machines, at which point you'd be better off just running it all on a single machine natively. It's even less likely that you'll be wanting to use the full capacity of 12 cores.

    Of course, you can look at it another way and think "what could I do with a lot of cores, and are those workloads better suited to fast single threads or lots of slower threads?". If you're running Plex on one of them with live transcoding, for example, lots of slower cores would be better unless you're using the VC-1 codec (which is single-threaded).

    As I said...this would be an excellent project to practice your requirements analysis skills.
    <space for hire>
  • digitalscream Frets: 26729
    edited February 2017
    Oh, and just FYI - my dev database server is an old i3 laptop with a busted screen (which I removed) bolted to the back of a 19" monitor. It has a Samsung 850 SSD in it, and it hits the SSD's bandwidth limit quite regularly. If the M.2 is SATA then, believe me, that Xeon will easily be able to make full use of it. In fact, it probably will even if it's PCIe.

    I'm personally a fan of getting laptops with knackered screens, putting an SSD and a decent amount of RAM in them and setting them to work. They don't use much power, don't kick out a lot of heat, effectively have a built-in UPS, and it's not wasteful in terms of landfill.
    <space for hire>
  • Myranda Frets: 2940
    As I said...this would be an excellent project to practice your requirements analysis skills.
    Oi vey.  (Not autocorrect... not 'oil very' )

    "Requirements analysis" two words sure to strike boredom into the very soul of anyone.

    You're not wrong... but you could have called it something else so I didn't have flashbacks to lectures by someone with only a single tone of voice in his arsenal ... and a subject which only managed as interesting a topic as "real pictures" ...

    Back to the topic...

    I'm really not sure about the actual cpu usage of the VMs. In reality I'd probably not be utilising the file server much  (if at all) while playing with pen testing or forensics VMs and the target VMs wouldn't be doing Prime95 levels of work...

    And more cores would mean the ability to run a far more target rich environment to play with... without impacting performance on the active one... 

    Oh, and DS: M.2 runs over PCIe lanes - it's just a different socket... speeds of 1.4 GB/s read should be easy... my only worry is whether old(ish - still socket 2011-3) Xeons will be fully compatible (PCIe being at least partially managed by the CPU)
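    For reference, the theoretical ceiling on the usual M.2 NVMe link (a quick sketch assuming PCIe 3.0 x4, which is what X99-era boards and NVMe drives typically use):

    ```python
    # PCIe 3.0 runs at 8 GT/s per lane with 128b/130b line coding;
    # an M.2 NVMe drive typically gets four lanes.
    GT_PER_LANE = 8           # gigatransfers per second (PCIe 3.0)
    ENCODING = 128 / 130      # usable fraction after line coding
    LANES = 4

    usable_gb_s = GT_PER_LANE * ENCODING * LANES / 8  # bits -> bytes
    print(f"~{usable_gb_s:.2f} GB/s ceiling on x4")   # ~3.94 GB/s
    ```

    So 1.4 GB/s reads sit well inside the link's ~3.9 GB/s budget; arguably the compatibility worry on older boards is more about firmware support for booting from NVMe than about lane bandwidth.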

    And a Xeon isn't nearly as sexy as an enthusiast grade i7
  • quarky Frets: 2777
    Honestly, I almost never see CPU contention as an issue. Typically RAM and disk become issues long before the CPUs break much of a sweat, but I guess it depends a lot on your workloads. How many VMs are you running? Lots? Go for the maximum cores. Fewer, more demanding VMs? Go for the i7.
  • Myranda said:

    I'm really not sure about the actual cpu usage of the VMs. In reality I'd probably not be utilising the file server much  (if at all) while playing with pen testing or forensics VMs and the target VMs wouldn't be doing Prime95 levels of work...
    That depends. If you're chucking files about over an encrypted link, there can be quite a bit of CPU usage. That could be another good exercise for you. Then there's running a VPN server...

    My honest advice - since you're aiming for a career in this - would be to treat this as you would a project in a business. The difference is that you're both the project manager and the customer, so you should get a view of how an IT project should be run, given that what you're talking about isn't a trivial undertaking.

    Not sexy, but probably quite eye-opening.
    Myranda said:

    Oh and DS m.2 runs over PCIe lanes - different socket is all... speeds of 1.4GBs read should be easy... my only worry is if old(ish - still 2011 V3 socket) xeons will be fully compatible  (PCIe being at least partially managed by the cpu)
    Well, I was referring to M.2 SATA in terms of how much work the CPU's going to be expected to handle (i.e. actually doing something with that amount of data, not whether the PCIe bus is saturated). When booting the OS, a PCIe-speed SSD will no longer be the bottleneck, given that the CPU has to do a lot of work other than loading files. That's just a hangover from the days when the CPU had a lot of idle time between loading files; I'd expect OS design to change in the next few years to account for the massively increased I/O bandwidth available.
    Myranda said:

    And a Xeon isn't nearly as sexy as an enthusiast grade i7
    Stick a sexy cooler on it, and nobody will know the difference ;)
    <space for hire>
  • crunchman Frets: 11467
    Stick a sexy cooler on it, and nobody will know the difference ;)
    I do worry about people who think coolers are sexy
  • Myranda Frets: 2940

    crunchman said:
    Stick a sexy cooler on it, and nobody will know the difference ;)
    I do worry about people who think coolers are sexy
    And yet... shove a Bigsby on it or a violin burst finish and off your pants come ;) 

    All this talking sense will wind up with me doing extra homework... 

    I mean... it all makes sense... and at least one point could make for ending up a better hacker... but... Maybe I want a shiny one... :( 
  • OK, enough with the sense-talking. Get a bunch of cheap 5- or 6-series Nvidia GPUs, a load of old machines and build a GPU-assisted compute cluster.

    That ought to keep you occupied for a while :D
    <space for hire>
  • MrBump Frets: 1244
    OK, enough with the sense-talking. Get a bunch of cheap 5- or 6-series Nvidia GPUs, a load of old machines and build a GPU-assisted compute cluster.

    That ought to keep you occupied for a while :D

    GPU compute grids are the work of the devil...
    Mark de Manbey

    Trading feedback:  http://www.thefretboard.co.uk/discussion/72424/
  • Myranda Frets: 2940
    OK, enough with the sense-talking. Get a bunch of cheap 5- or 6-series Nvidia GPUs, a load of old machines and build a GPU-assisted compute cluster.

    That ought to keep you occupied for a while :D
    Awww yeah... now we're talking

    http://www.ebay.co.uk/itm/nVidia-Quadro-4000-2GB-DDR5-DVI-Dual-DP-PCI-e-x16-Graphics-Card-38XNM-/201830234208?hash=item2efe04f860:g:GqoAAOSwfVpYrtVV wonder how many I could fit... 
  • monquixote Frets: 17662
    tFB Trader
    I can't imagine wanting to bother with physical hardware.

    Get an AWS account. If you need a 60-core server with 200GB of RAM for an afternoon, it's a couple of clicks away, and you can run a file server off any five-year-old piece of crap.
  • I can't imagine wanting to bother with physical hardware.

    Get an AWS account. If you need a 60-core server with 200GB of RAM for an afternoon, it's a couple of clicks away, and you can run a file server off any five-year-old piece of crap.
    To be fair, I can understand the motivation here - you and I have been doing this for years (decades?), whereas @Myranda is learning and wanting to get into the industry. Dealing with hardware is exactly what she needs right now.
    <space for hire>
  • monquixote Frets: 17662
    tFB Trader
    I dunno.
    If I were getting into the industry now, I'd be more interested in getting up to speed with AWS or Azure and learning a lot about DevOps, containers, serverless, etc. 

    A lot of huge companies don't even have a conventional IT department any more. 

    If you invest in iron, you are stuck with it and it rapidly becomes almost completely worthless. The real thing to learn is what you describe about responding to a requirement.

    If you are in the business of physical servers, the servers themselves aren't actually all that interesting compared to things like secure power and cooling, fire zones and, when you get into high-availability stuff, load balancers, replication, failover and backup. But you can equally learn all that on a cloud provider without having to make a large capital outlay.

    Installing a bare-metal hypervisor like VMware is a pretty trivial task that any mug could do with no training, but all the interesting enterprise-level stuff you might do is locked out unless you have some very costly software licenses and at least three servers. (Once you do have all that, getting it failing over is again not much of an intellectual challenge.) You can get VMware, KVM or Xen working on some ancient £100 POS if you just want to play about with it on a physical box. 

    It's worth getting into the distributed computation thing, as it is absolutely massive at the moment, but again you would be better off getting Hadoop, Storm, Spark, etc. running on a cloud provider. 
  • quarky Frets: 2777
    edited February 2017
    Installing a bare-metal hypervisor like VMware is a pretty trivial task that any mug could do with no training, but all the interesting enterprise-level stuff you might do is locked out unless you have some very costly software licenses and at least three servers.

    Nesting your ESXi servers on ESXi is a great way to use a lot of those features without having to have too much hardware. A guest on a nested ESXi isn't going to run that well, but it can certainly run well enough for testing. 

    And of course the hands-on labs are a great resource, just not so much for the base builds.

    Do you know of any good, similar resources for Hadoop, Storm, Spark and the like?