As a result of Hurricane Matthew, our business shut down all servers for just two days.

Among the servers was an ESXi host with an attached HP StorageWorks MSA60.

When we logged into the vSphere client, we noticed that none of our guest VMs are available (they’re all listed as “inaccessible”). When we look at the hardware status in vSphere, the array controller and all attached drives appear as “Normal”, but the drives all show up as “unconfigured disk”.
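
For reference, here are a couple of ESXi shell commands that should show whether the host still sees the MSA LUN and whether any VMFS datastore is still mounted on it (standard esxcli as far as I know; I'm listing them as a sketch, not output we've captured):

  esxcli storage core device list    # block devices the host can see, which should include the MSA LUN
  esxcli storage vmfs extent list    # VMFS datastores and the devices backing them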

We rebooted the host and tried going into the RAID config utility to see what things look like from there, but we received the following message:

“An invalid drive movement was reported during POST. Modifications to the array configuration after an invalid drive movement can result in loss of old configuration information and contents of the original logical drives.”

Of course, we are extremely confused by this because nothing was “moved”; nothing changed. We simply powered up the MSA and the host, and have been having this issue ever since.
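
(We haven't tried it yet, but my understanding is that the same configuration view can be pulled without rebooting using HPE's Smart Storage Administrator CLI, if it's installed: ssacli, or hpacucli on older installs. The slot number below is just a placeholder.)

  ssacli ctrl all show config      # each controller with its logical and physical drives
  ssacli ctrl slot=1 ld all show   # status of the logical drives on the controller in slot 1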

I have two main questions/concerns:

  1. Since we did nothing more than power the units off and back on, what could’ve caused this to happen? I of course have the option to rebuild the array and start over, but I’m leery about the risk of this happening again (especially since I have no idea what caused it).

  2. Is there a snowball’s chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

  1. Since we did nothing more than power the units off and back on, what could’ve caused this to happen? I of course have the option to rebuild the array and start over, but I’m leery about the risk of this happening again (especially since I have no idea what caused it).

A variety of things. Do you schedule reboots on all of your gear? If not, you should, if only for this reason. The one server we have, XS decided the array wasn’t ready in time and didn’t mount the main storage volume on boot. Always good to know these things in advance, right?
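
If you do want to put a schedule in place on boxes that support it, even a plain cron entry is enough; the timing below is just an example, and the MSA itself would need to be handled separately:

  # /etc/crontab style entry: reboot at 03:00 on the first day of every month
  0 3 1 * * root /sbin/shutdown -r now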

  2. Is there a snowball’s chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

Maybe, but I’ve never seen that particular error. We’re talking very limited experience here. Depending on which RAID controller the MSA is attached to, you may be able to read the array information from the drives on Linux using the md utilities, but at that point it’s faster just to restore from backups.
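
If you do go down that road, the inspection would look something like this on a Linux box that can see the raw disks (device names are placeholders; keep everything read-only):

  mdadm --examine /dev/sdb /dev/sdc     # look for RAID superblock metadata on each member disk
  mdadm --assemble --scan --readonly    # try to assemble whatever it recognizes without writing to the disks

That only helps if the controller wrote metadata the md tools understand, though, which a hardware RAID controller may not have.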

I actually rebooted this server multiple times about a month ago when I installed updates on it. The reboots went fine. I also completely powered that server down around the same time because I added more RAM to it. Again, after powering everything back on, the server and RAID array information was all intact.

Does your normal reboot routine for the server include a reboot of the MSA? Could it be that they were powered back on in the wrong order? MSAs are notoriously flaky; that’s likely where the issue is.

I’d call HPE support. The MSA is a flaky unit, but HPE support is great.

We unfortunately don’t have a “normal” reboot routine for any of our servers :-/.

I’m not really sure what the correct order is :-S. I would assume that the MSA would get powered on first, then the ESXi host. If that’s correct, we have already tried doing that since we first discovered this issue today, and the problem persists :(.
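
If the order does turn out to matter, my understanding is that a storage rescan from the ESXi shell after the MSA is fully up is the way to confirm whether anything came back, something along these lines (standard esxcli commands, as far as I know):

  esxcli storage core adapter rescan --all   # rescan all HBAs for devices and LUNs
  esxcli storage vmfs snapshot list          # VMFS volumes detected but not mounted (e.g. flagged as snapshots)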

We don’t have a support contract on this server or the attached MSA, and they’re most likely way out of warranty (a ProLiant DL360 G8 and a StorageWorks MSA60), so I’m not sure how much we’d have to spend to get HP to “help” us :-S.
