Shinare
SEXNOCULAR


Joined: 17 Mar 2004
Posts: 13332
Location: Up your butt with a coconut!!
Free ESXi (vSphere) 6.0 server as production host?

Anyone have any experience with ESXi in a production environment? I have it running on bare metal in a test situation, and it seems to be working for that light-duty purpose.

Has anyone used it as a host for production servers that get much use? (SQL, Email, Share, AD servers?)
_________________
For with what measure you measure it will be measured to you.

Posted: Tue Jan 05, 2016 4:24 pm
Sevnn
Candy Cane King


Joined: 22 Mar 2003
Posts: 7711
Location: Kyrat

I'll have to check which version of ESX we're on to say for sure, but nearly our entire environment, 450+ users, runs on it. It was running great until recently, when a bug caused SQL Server to corrupt. It was very, very ugly, but we got it solved.

Posted: Wed Jan 06, 2016 4:08 am
Shinare
SEXNOCULAR


Joined: 17 Mar 2004
Posts: 13332
Location: Up your butt with a coconut!!

Yikes on the SQL corruption! Was that due to being a VMware VM (a bug with vSphere), or was it a Microsoft thing?

I'm running some things in the free version, which is pretty basic, but it gets the job done, and from what I can tell it gets it done well. Adding a "Standard" license and support subscription looks to be around $1,300 a year for 1 CPU, which isn't an awful lot, I guess. That adds some data protection options that I think I would like. (I still have a lot of learning to do on vSphere, and on server consolidation in general.)

I can get a 14-core/28-thread single-CPU server that fills my needs and kicks the ass right out of anything I've got currently, replete with 1.6TB enterprise SSDs and 64GB of memory, for about $30k. (Two-thirds of the cost is the SSDs.)

That server would easily replace about 8 of my current servers and be a huge performance increase across the board, at a cost savings of about half.

Like I said, I'm still in preliminary learning mode and have yet to figure everything out. I like the idea of that server having 2x 10Gb network cards, and that price includes a 48-port PoE switch with 2x 10Gb ports, which I assume would make a great connection to the server VMs.

Edit: Server is 14c/28t single processor, not 10c/20t.
_________________
For with what measure you measure it will be measured to you.

Posted: Wed Jan 06, 2016 1:35 pm
Fatal0E
pwn3d


Joined: 16 Dec 2004
Posts: 143
Location: OKC

We use ESXi for nearly everything: SQL, Oracle, Exchange, AD controllers, various random-purpose servers. I can't imagine only having one server if things are important. We have six servers in an HA cluster.
_________________
Core i7 4790K
16GB GSkill 1866
Geforce GTX 970

Posted: Wed Jan 06, 2016 6:05 pm
Sevnn
Candy Cane King


Joined: 22 Mar 2003
Posts: 7711
Location: Kyrat

Our issue came from the point-in-time backups of the full machine. It was a known issue for only a short time, and as soon as we reached out to them they identified it and supposedly fixed it. We were making those backups nightly, and unnecessarily. We've since reworked the schedule to reduce the chance of that technology biting us again.

We have a single primary badass server, dual 12-core procs I think, with 384GB of RAM and a big direct-attach array. On it we have 4 SQL servers, 6 web servers, various AD and file servers, Exchange, etc., and it doesn't break a sweat.

Posted: Thu Jan 07, 2016 12:55 am
detox
Naaaaaah. NaaaaaFLAC.


Joined: 20 Mar 2003
Posts: 4317
Location: Durant

I've been out of the IT admin side of the industry for almost 10 years, and this is purely a curiosity question:

Do you have a second identical server on standby in case that one croaks, or can you just move the VMs to non-identical server(s), as long as they can run the programs, albeit not as efficiently?
_________________
I7 2600K
EVGA 980ti FTW
16gigs
SSD
3x Dell U2412 Monitors

Posted: Thu Jan 07, 2016 9:21 am
Fatal0E
pwn3d


Joined: 16 Dec 2004
Posts: 143
Location: OKC

In our case we have six identical servers. The virtual machines are monitored by vCenter. It watches the load on the hosts and automatically moves VMs around to keep it spread evenly. It also monitors for potential problems, and under certain conditions will move all the VMs off a host if it predicts a failure. If a host fails unexpectedly, then depending on what happened, the VMs may be brought up automatically on the other hosts, or it may require manual intervention. All of our storage is centralized, nothing is on the host itself, so a host can melt to the ground and all should be well.
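
If you ever want to eyeball that spread yourself, it's all scriptable too. A minimal pyVmomi sketch that lists which host each VM is sitting on; the vCenter hostname and credentials are made-up placeholders, so adjust for your setup:

Code:
# Minimal pyVmomi sketch: list which host each VM is currently on.
# The vCenter hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; use real certs in prod
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    print(host.name)
    for vm in host.vm:  # VMs currently registered on this host
        print("   ", vm.name)
view.DestroyView()
Disconnect(si)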

You can make rules to help ensure systems stay online. We have rules to ensure the SQL cluster servers, AD controllers, etc., are not allowed on the same host.
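
Those separation rules can be set in the client or scripted. A rough, untested sketch of adding one, with placeholder cluster/VM names; assumes "si" is the connected ServiceInstance from the snippet above and the cluster has DRS enabled:

Code:
# Sketch: add a DRS anti-affinity rule so two SQL nodes never share a host.
# Cluster/VM names are placeholders; "si" is a connected ServiceInstance.
from pyVmomi import vim

def first_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with this name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

content = si.RetrieveContent()
cluster = first_by_name(content, vim.ClusterComputeResource, "Prod-Cluster")
sql1 = first_by_name(content, vim.VirtualMachine, "SQL-NODE1")
sql2 = first_by_name(content, vim.VirtualMachine, "SQL-NODE2")

rule = vim.cluster.AntiAffinityRuleSpec(
    name="keep-sql-apart", enabled=True, mandatory=True, vm=[sql1, sql2])
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)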

All of our data is also replicated to a second site with another cluster. Soon we'll be able to bring everything online from that site if the primary site is erased by a tornado.
_________________
Core i7 4790K
16GB GSkill 1866
Geforce GTX 970

Posted: Thu Jan 07, 2016 10:01 am
detox
Naaaaaah. NaaaaaFLAC.


Joined: 20 Mar 2003
Posts: 4317
Location: Durant

Awesome!
_________________
I7 2600K
EVGA 980ti FTW
16gigs
SSD
3x Dell U2412 Monitors

Posted: Thu Jan 07, 2016 11:11 am
Shinare
SEXNOCULAR


Joined: 17 Mar 2004
Posts: 13332
Location: Up your butt with a coconut!!

Extreme HA is not really needed in my situation, but of course uptime as close to 100% as reasonably possible is. I find it interesting, and excellent, that you can have two completely different servers and move a VM between the two.

For redundancy I think I could probably get away with a very cheap server with cut-down specs. For instance, just 4x 4TB SAS mechanical drives on a server with a processor and some memory. I think we could limp along on that if the super-massive server cratered and needed to be repaired.

I know I'm a few years late to the server virtualization party, but in my job, change happens on a geologic time scale. However, I'm really starting to see how extremely cool it all is. heh

So could the secondary server act as a hot spare/backup of the primary's VMs? Meaning, if the big guy failed, could I just walk into the server room and "flip over" to the crutch? Like real-time synchronization? Or would I need some kind of shared storage solution for that, like a SAN?
_________________
For with what measure you measure it will be measured to you.

Posted: Thu Jan 07, 2016 4:25 pm
Fatal0E
pwn3d


Joined: 16 Dec 2004
Posts: 143
Location: OKC

A second server could be a backup. I'm not sure of the details, but you'd need something to replicate the data from one to the other, possibly vSphere Replication, and you may need Site Recovery Manager.

You'll also want to make sure your VM compatibility settings are set to the lowest-spec server.
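
For reference, that compatibility setting is the virtual hardware version (vmx-11 on ESXi 6.0, vmx-10 on 5.5, vmx-08 on 5.0), and you can dump it across the board. A quick untested sketch, reusing the connection from my earlier snippet:

Code:
# Sketch: dump each VM's virtual hardware version ("compatibility").
# "si" is a connected ServiceInstance as in the earlier snippets.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.config:  # config can be None for inaccessible VMs
        # e.g. "vmx-11" = ESXi 6.0, "vmx-10" = 5.5, "vmx-08" = 5.0
        print(vm.name, vm.config.version)
view.DestroyView()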
_________________
Core i7 4790K
16GB GSkill 1866
Geforce GTX 970

Posted: Thu Jan 07, 2016 4:35 pm
Sevnn
Candy Cane King


Joined: 22 Mar 2003
Posts: 7711
Location: Kyrat

We have a few host machines: one badass one, one medium-powered one that's coming up for HA, one older one we put less critical VMs on, and a medium-powered offsite one that gets copies of the data. We're running local storage, so we can't vMotion the machines (move them while running) between hosts, but it doesn't take long to shut one down and copy the images over. We have a SAN in the works, and once that's done we'll be able to vMotion between hosts while the machines are still running and responding to requests. Our offsite gets a copy of the machines on a regular basis and can be brought up with a small amount of network and VM reconfig.

Shinare, the biggest issue you'll have with your plan is that the virtual disk images (the files that represent the hard drives of the guest OS) will have to be copied over if you need to move to the second machine. As long as the bad boy can hold out long enough for that to happen, you'll be OK with minimal downtime. If you need the ability to move over to the second machine while hot, you'll need a SAN. There are reasonably priced SAN options out there, and iSCSI might be an option as well. If I were building something like you're talking about, I would buy two medium-powered machines with a cheapish SAN solution. That way you can live-swap the images, you'll have hardware redundancy on the compute side, and you'll be able to separate domain controllers, etc. You'll be able to move machines around if load isn't balanced, and one machine should still be plenty bulky enough to run all the systems if the other needed repairs.
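
If you ever script that failover, a cold move through vCenter is basically power off + relocate. Something like this should work (untested, placeholder names, same pyVmomi setup and first_by_name() helper as the snippets above):

Code:
# Sketch: cold-migrate a VM to another host + datastore through vCenter.
# All names are placeholders; "si" is a connected ServiceInstance and
# first_by_name() is the lookup helper from the earlier snippet.
from pyVmomi import vim
from pyVim.task import WaitForTask

content = si.RetrieveContent()
vm = first_by_name(content, vim.VirtualMachine, "FILESERVER1")
dest = first_by_name(content, vim.HostSystem, "esxi-spare.example.local")
ds = first_by_name(content, vim.Datastore, "spare-local-ds")

if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
    WaitForTask(vm.PowerOffVM_Task())  # cold move, so the guest must be down
spec = vim.vm.RelocateSpec(host=dest, datastore=ds,
                           pool=dest.parent.resourcePool)
WaitForTask(vm.RelocateVM_Task(spec=spec))  # copies disks, re-registers VM
WaitForTask(vm.PowerOnVM_Task())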

Posted: Thu Jan 07, 2016 6:23 pm
Nikola
Hung Like a Flea


Joined: 22 Mar 2009
Posts: 790
Location: Edmond, OK

Veeam Replication will keep your VMs up to date on the backup server.
_________________
Failure is just success rounded down.

Posted: Sat Jan 09, 2016 2:45 am
Menos
Broke My Labia


Joined: 06 Jun 2003
Posts: 1125
Location: Oklahoma City

Are you limited to VMware? We run Xen in our datacenters for our VMs. I don't work on that side of the house, so I don't know all the details, but we handle several thousand machines across all of our sites.

Posted: Sun Jan 10, 2016 4:47 pm
Shinare
SEXNOCULAR


Joined: 17 Mar 2004
Posts: 13332
Location: Up your butt with a coconut!!

Yeah, semi-limited to VMware, only because a "virtual appliance" we've paid for requires it. And I have a WSUS server running in a VM on the same machine just because I wanted to. (Which I could easily move or remake somewhere else.)

I'm just really stoked about how easy vSphere is to use and manage, especially for a "freeware" bare-metal hypervisor. I was able to set it all up, AND the virtual appliance, in a matter of an hour or so. I'll take a look at Xen, but I'm not sure I need anything super special or sophisticated (if that's what it is); I seriously only have 5 major servers and 6-10 ancillary "servers" that do specialized things.

This may all be academic anyway, as my network is going to be assimilated by the greater collective soon. At that point my servers (and hopefully I) will be absorbed and relocated to a central data center full of VM servers. (Also why I'm trying to learn as much as I can about virtualization.)
_________________
For with what measure you measure it will be measured to you.

Posted: Wed Jan 13, 2016 11:57 am
LightningCrash
Smile like Bob, order your free LC today


Joined: 03 Apr 2003
Posts: 5020

I even run ESXi at home for my firewall and assorted stuff. It's just easy: SR-IOV for the HBAs for the file server, quad GbE over to the firewall VM, etc.

Live migrations to completely different servers in ESXi: it just depends on how different they are. If you have a Skylake server A and a Sandy Bridge server B, you could have some issues to work around; that's what EVC (Enhanced vMotion Compatibility) is for, masking the newer CPU features down to a common baseline. Not an insurmountable problem, though.
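
If you want to see what baseline your gear supports, something like this pyVmomi snippet should do it (untested, same connection setup as the earlier sketches in the thread):

Code:
# Sketch: check cluster EVC baselines ("si" as in the snippets above).
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    # e.g. "intel-sandybridge"; None means EVC isn't enabled
    print(cluster.name, "EVC mode:", cluster.summary.currentEVCModeKey)
    for host in cluster.host:
        print("   ", host.name, "max EVC:", host.summary.maxEVCModeKey)
view.DestroyView()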

Shared storage is almost a must for a redundant ESXi configuration. It doesn't necessarily have to be a SAN, but a SAN is great if you have the money. A sturdy DAS that supports multi-initiator would be fine for two small-ish hosts. The next step up from there would be a very small SAN tray, like one of the PowerVault models that end in "i" (PV3200i, etc.).
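
Whatever the box, the sanity check that matters is that every host sees the same datastore. Roughly (untested, same pyVmomi setup as above):

Code:
# Sketch: list the datastores each host can see; a shared datastore shows
# up under every host. "si" is a connected ServiceInstance as above.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    names = sorted(ds.summary.name for ds in host.datastore)
    print(host.name, "->", ", ".join(names))
view.DestroyView()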

Shins, are you referring to OMES? When is your agency's IT scheduled to go over? Hit me up on Lync.

Posted: Tue Jan 19, 2016 1:52 am
Shinare
SEXNOCULAR


Joined: 17 Mar 2004
Posts: 13332
Location: Up your butt with a coconut!!

Hey LC, I do have one of these on hand with 1TB drives in it, and I'm currently using it as the big-bucket storage for my test ESXi host. It's working flawlessly; however, it's fairly old and I don't see "multi-initiator" on it anywhere, hehe. I would probably go the "i" SAN route if it ever came time for me to look into a serious production setup.
_________________
For with what measure you measure it will be measured to you.

Posted: Wed Jan 20, 2016 6:01 pm
LightningCrash
Smile like Bob, order your free LC today


Joined: 03 Apr 2003
Posts: 5020

Shinare wrote: Hey LC, I do have one of these on hand with 1TB drives in it, and I'm currently using it as the big-bucket storage for my test ESXi host. It's working flawlessly; however, it's fairly old and I don't see "multi-initiator" on it anywhere, hehe. I would probably go the "i" SAN route if it ever came time for me to look into a serious production setup.


Didn't realize you were going back that far on connections. :) I'd set another server's SCSI card bus ID to something unused (your main server is probably configured as ID 7) and plug it in.

If you want something random to test it on before the main array, I have some old Sun MultiPack 711s that are SCSI you can try out.

Posted: Thu Jan 21, 2016 1:59 am
Shinare
SEXNOCULAR


Joined: 17 Mar 2004
Posts: 13332
Location: Up your butt with a coconut!!

LightningCrash wrote:
Shinare wrote: Hey LC, I do have one of these on hand with 1TB drives in it, and I'm currently using it as the big-bucket storage for my test ESXi host. It's working flawlessly; however, it's fairly old and I don't see "multi-initiator" on it anywhere, hehe. I would probably go the "i" SAN route if it ever came time for me to look into a serious production setup.


Didn't realize you were going back that far on connections. :) I'd set another server's SCSI card bus ID to something unused (your main server is probably configured as ID 7) and plug it in.

If you want something random to test it on before the main array, I have some old Sun MultiPack 711s that are SCSI you can try out.


LOL, well, funny story about that. Way back in 2005 we got an EMC CX300 FC SAN with two SilkWorm 200E Fibre Channel switches. That was connected to 5 servers, each with 2x FC HBAs for fault tolerance and teaming. The SAN was a single shelf with 10x 146GB 10k SCSI drives in it. Unfortunately, a few years after its purchase it was clear it was woefully undersized for our needs and we needed more space, in an almost-emergency way. The great thing touted to us about the SAN was that it was SUPER EASY to add a shelf of drives to immediately increase available space. So we had Dell quote us another shelf of drives, and it was going to be $20k for another 1TB of data. 1TB wasn't going to cut it, so we were going to need multiple shelves. Only two servers needed more storage, so I found the above-quoted 2-channel U320 DAS enclosure that could hold 16x 1TB SATA drives (the biggest available at the time) and could host one volume for one server on channel 1 and another volume for another server on channel 2 (each server with a single U320 card). The cost on that, fully populated with 1TB drives, was <$5k. So, $5k or >$80k... The choice was easy for this cheapass. :) :)

The 1TB of total space on that mothballed EMC SAN, plus its PCI-X FC HBAs (my current servers only have PCIe slots), is what's keeping me from going that route. *sigh*

Edit: Re: the 711s: thanks for the offer! However, if you had a couple of PCIe U320 HBAs collecting dust on a shelf somewhere that I could play with, I could maybe use that DAS in a similar way. :D
_________________
For with what measure you measure it will be measured to you.

Posted: Thu Jan 21, 2016 10:34 am
Shinare
SEXNOCULAR


Joined: 17 Mar 2004
Posts: 13332
Location: Up your butt with a coconut!!

Been reading up on FC zoning, SANs in general, and of course ESXi... Man, I really want to start tinkering with this kind of stuff, LOL! I might have to break out that old FC SAN after all and see what's what. Too bad the servers that have the PCI-X FC HBAs for that SAN are so old they probably don't support virtualization. heh

If I make a mess of that old stuff it doesn't matter; it's going to state surplus some day anyway, and it's currently on the shelf collecting dust.

This SAN stuff is all pretty darn trick.
_________________
For with what measure you measure it will be measured to you.

Posted: Fri Jan 22, 2016 1:19 pm