Running the Intel NUC headless with VMware ESXi

I recently acquired two Intel NUC5i5MYHE units for my home lab. This specific model features Intel AMT/vPro technology, which means you can connect to the NUC remotely over IP, independent of the operating system installed on it. This has several benefits:

– Power on, power off, and reset the NUC
– Hardware information
– Remote console access

[Screenshot]

My plan was to run the Intel NUCs headless, with no monitor or keyboard connected, so I needed the AMT/vPro technology. When I installed the NUCs the first time, they were both connected to my living room TV through HDMI. The NUC only has Mini DisplayPort outputs, so I had to use a Mini DisplayPort to HDMI adapter. This worked perfectly, and I also tested the AMT technology and got the remote console to work.

Once I put the NUCs in the closet and powered them on, the remote console was black. I did not get any picture when connecting, even though I had just seen it work when connected to my living room TV… Maybe that was the issue, I thought: maybe the NUC does not send anything out of the DisplayPort when it does not see a monitor connected. After some googling I found a product used to run Mac minis headless. It is called Fit-Headless: a small HDMI dongle you plug into an HDMI port, which makes the Mac mini / NUC think a monitor is connected, so the remote console will work.

[Screenshot: Fit-Headless HDMI dongle]

Of course I could not just connect this to the NUC, since it has no HDMI port. So I also bought an adapter like this:

[Screenshot: Mini DisplayPort to HDMI adapter]

The result was perfect. Once I rebooted my Intel NUCs with the Mini DisplayPort to HDMI adapter and Fit-Headless connected, the remote console came back and was no longer black. I use VNC Viewer Plus, which can connect directly to the AMT IP address.
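If you want a quick sanity check that AMT is reachable before launching VNC Viewer Plus, a simple port probe will do. The sketch below is a minimal example, assuming a default AMT configuration (16992/16993 for the web UI, 16994/16995 for redirection, 5900 for the KVM console); the host address is a placeholder for your NUC's AMT IP.

```python
import socket

# Placeholder: replace with the AMT IP address of your NUC.
AMT_HOST = "192.168.1.50"

# Default Intel AMT ports (assumes a standard configuration):
AMT_PORTS = {
    16992: "web UI (HTTP)",
    16993: "web UI (HTTPS)",
    16994: "redirection (SOL / IDE-R)",
    16995: "redirection (TLS)",
    5900:  "KVM / VNC remote console",
}

def probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port, description in AMT_PORTS.items():
    state = "open" if probe(AMT_HOST, port) else "closed/filtered"
    print(f"{AMT_HOST}:{port:<5}  {description:<28} {state}")
```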


[Screenshot: The remote console in VNC Viewer Plus]

So if you want to run the NUC headless and still use the Intel AMT/vPro remote console, you need either a monitor connected or a dongle that “fakes” one.

vFrank’s Home Lab running Intel NUC

I had been considering for some time what home lab hardware to get. Luckily there are plenty of home lab blog posts in the community to give you inspiration. For a home lab everybody wants unlimited CPU, RAM, and storage resources, but because it is a home lab (it runs in your home) you also have to consider space, power usage, noise, and price. Once you factor in these four constraints you will be on the road to finding the lab that fits you.

I wanted my home lab to run 24/7, so power usage and noise were two important factors for me. I also wanted to run the home lab headless (no monitor) but still be able to reach the console over IP.

After considering my options I ended up with the following configuration:

  • Intel NUC5i5MYHE with i5-5300U processor and Intel VPRO/AMT technology
  • 16 GB DDR3L Kingston Memory
  • Kingston E50 240GB SSD
  • 32 GB SanDisk Cruzer Fit USB key

I already had a two-disk Synology DS211+ NAS running at home, so that is used as an NFS datastore for the virtual machines. The Cruzer Fit USB key holds the ESXi installation, and the Kingston E50 240 GB SSD is used by PernixData for read and write acceleration in front of the Synology.
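For anyone scripting their lab builds, mounting the Synology NFS export as a datastore can be automated as well as done in the UI. The sketch below is one way to do it with pyVmomi; the host names, credentials, export path, and datastore name are placeholders, not my actual configuration.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - replace with your own environment.
ESXI_HOST = "esxi-nuc-01.lab.local"
USERNAME = "root"
PASSWORD = "changeme"

# Skip certificate verification for a lab host with a self-signed certificate.
ctx = ssl._create_unverified_context()
si = SmartConnect(host=ESXI_HOST, user=USERNAME, pwd=PASSWORD, sslContext=ctx)

try:
    # Connected directly to an ESXi host: single datacenter, single host.
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

    # Describe the NFS export on the Synology (placeholder values).
    spec = vim.host.NasVolume.Specification(
        remoteHost="synology.lab.local",
        remotePath="/volume1/vmware",
        localPath="synology-nfs",   # datastore name as seen in vSphere
        accessMode="readWrite",
        type="NFS",
    )
    datastore = host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print(f"Mounted datastore: {datastore.name}")
finally:
    Disconnect(si)
```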

With PernixData doing its magic, my Synology NAS is perfect for what it does: storing the virtual machines. Just look at the screenshot below. The NAS experienced 148 ms of latency, but the VM experienced 0.71 ms. That is the power of decoupling performance from capacity.

[Screenshot: FVP graph showing datastore latency vs. VM observed latency]

If you are looking at a home lab, the NUC can be a good choice, but you have to be able to live with only 16 GB of RAM per NUC and a dual-core processor. Luckily, with VMware I can always scale out and add more NUCs to the vSphere cluster when I run short of resources.

Right now this is what the home lab looks like.

[Screenshot: the current home lab]

PernixData FVP 2.5 GA and ready for download

PernixData FVP 2.5 was announced earlier this week. Your favorite acceleration software just got even better.

  • DFTM-Z – With Distributed Fault Tolerant Memory (DFTM), FVP is the only storage acceleration software that lets users cluster server RAM, providing the fastest acceleration with complete fault tolerance. DFTM-Z augments this capability with adaptive memory compression to enable the performance of RAM to be delivered at the price of flash.
  • Intelligent I/O Profiling – Provide virtualized applications with guaranteed excellent and predictable performance by identifying I/O profiles that are not good candidates for server-side acceleration and automatically bypassing them.
  • Role-Based Access Control (RBAC) – Increase visibility to the FVP environment by granting appropriate access to authorized individuals.
  • Remote network reads on NFS – After a vMotion, the accelerated data is still available for reads over the network.

You can download the software from the support portal at http://support.pernixdata.com (requires login)

If you don’t have a login and want to try PernixData FVP in your lab/demo/production environment, request a free trial at http://info.pernixdata.com/trialreg.


Running PernixData FVP in Monitor Mode

Back in March, Frank Denneman wrote an article about running PernixData FVP in Monitor Mode. I suggest that you read it before you read this post.

The great thing about Monitor Mode is that, without using flash or RAM for acceleration, you get a clear picture of how your storage array is performing. With the information gathered from this exercise you will know what to do next: either continue the POC with PernixData and start accelerating virtual machines, or conclude that your storage array already delivers the performance you need. No matter the outcome, you as an administrator will know more about your environment and will have learned about the I/O profiles of your virtual machines.

I often get the question “What graph should I look at in monitor mode?” 

A good place to start is the Summary graph at the cluster level. This shows latency information for all virtual machines running in that cluster. To start, you should add only two counters:

– Datastore Read
– Datastore Write

[Screenshot: FVP Summary graph with the Datastore Read and Datastore Write counters]

By looking at this graph you get a quick summary of the read and write latency the virtual machines in your environment are experiencing. At one data point in this graph we see a read latency of 105 ms and a write latency of 30 ms.

When you see this, there is no doubt that PernixData FVP will be able to give your virtual machines predictable low-latency performance.

The next step is to try accelerating your virtual machines. PernixData FVP can use SSDs, PCIe flash, or RAM for acceleration. When you choose, you have two factors to think about:

1. Performance of the acceleration media
2. Capacity

You want a good SSD or PCIe flash device that gives you predictable low latency for reads and writes, but you also want a device with the right capacity. If the capacity is too low, you will not get the hit ratio you are looking for. Capacity is a huge advantage for SSD/PCIe over RAM: RAM is faster, but I would not trade the capacity of a well-performing SSD for a smaller amount of faster RAM. A rough way to reason about this trade-off is sketched below.
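This is not how FVP calculates anything internally; it is just a back-of-the-envelope weighted-average model, with made-up example numbers, to show why hit ratio can matter more than raw device speed.

```python
def effective_read_latency(hit_ratio: float,
                           acceleration_latency_ms: float,
                           datastore_latency_ms: float) -> float:
    """Rough estimate of the read latency a VM would observe.

    hit_ratio is the fraction of reads served from the acceleration media.
    """
    return (hit_ratio * acceleration_latency_ms
            + (1.0 - hit_ratio) * datastore_latency_ms)

# Example numbers (assumptions, not measurements), in front of a 105 ms datastore:
datastore_ms = 105.0
# Small but very fast RAM tier that only achieves a 60% hit ratio:
print(effective_read_latency(0.60, 0.08, datastore_ms))   # ~42 ms
# Larger, slightly slower SSD that achieves a 95% hit ratio:
print(effective_read_latency(0.95, 0.50, datastore_ms))   # ~5.7 ms
```

In this example the bigger, slightly slower device wins simply because it keeps far more reads away from the datastore.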

If we move on and start to accelerate with RAM, SSD, or PCIe flash, we can then use the graphs again to show the difference.

This picture shows the Datastore Write latency and the Write latency. Write equals VM Observed Write, so that is the latency the VM is experiencing. At the highlighted data point we have a datastore write latency of 30 ms, but the VM is experiencing 1.39 ms because of local flash acceleration!

[Screenshot: FVP graph comparing Datastore Write latency and VM Observed Write latency]

The next picture focuses on read latency. The counters selected are Datastore Read and Read. The highlighted data point shows a datastore latency of 105 ms, but the VM is experiencing 5 ms because of local acceleration.

[Screenshot: FVP graph comparing Datastore Read latency and VM Observed Read latency]

Monitor Mode is a strong tool to use during a PernixData FVP POC, but it is not required. If you have flash or RAM available from the get-go, you can still use the graphs to show what is going on. The graphs are available right inside the vSphere Client or vSphere Web Client.

Once customers get FVP in their environment, they no longer use the vCenter performance graphs for storage performance. They go straight to the PernixData graphs instead.

If you are interested in finding out what storage read and write latency you are experiencing today, go and request a free trial of the PernixData FVP software here.


VMUG.DK Conference in the Bella Center on November 20

It has been almost a year since VMUG Denmark held their first conference in the Bella Center. It was such a success that it is coming back again this year. It will take place on Thursday, November 20.

Registration is free, and you can sign up here: http://www.vmug.com/p/cm/ld/fid=5239

I have looked at the list of speakers and it looks really good! Among others, we can look forward to seeing again:

Duncan Epping from VMware, speaking about what VMware will bring in future products (the man behind www.yellow-bricks.com).

Cormac Hogan from VMware, speaking about VSAN.

Kamau Wanguhu from VMware, speaking about NSX.

If you want to hear about other things as well, you can catch the latest from, among others, Veeam and PernixData.

Preben Berg will talk about Veeam and automating backup with vCO and the REST API.

Frank Denneman will talk about PernixData and the virtualization of flash and RAM to deliver lower I/O latency and more IOPS in front of an existing disk system.

I hope to see many of you out there that day. See the full agenda here: http://www.vmug.com/p/cm/ld/fid=5244

 


PernixData FVP 2.0 Now Available

It is here: PernixData FVP 2.0! During the last couple of months I have been gaining experience with it in my lab and at selected customers. In 2.0 we are introducing four new groundbreaking capabilities.

Distributed Fault Tolerant Memory (DFTM):

Now we support the use of RAM for read and write acceleration. We still support the use of flash, of course; we are just adding extra options for acceleration. From my experience in my lab, RAM is INSANELY fast. It did not matter what kind of I/O I threw at it; it just consistently performed with extremely low latency. Just imagine reading and writing data from RAM with 0.08 milliseconds of latency!

NFS support

This one does not require a lot of explanation: we now support NFS datastores. It is implemented in the same transparent fashion as block storage, so absolutely no changes are made to the VM or the NFS datastore.

User defined fault domains

With the use of RAM we also see the need to define fault domains. If you are using RAM for write acceleration, you will probably want a copy of the writes on a second host placed in another rack, blade chassis, or datacenter. With fault domains you can now define your physical boundaries and control exactly where the writes are replicated to. This of course also works with flash.

Adaptive Network Compression

When we send the writes over the PernixData network (the default is the vMotion network, but any VMkernel interface will work; you decide), we have seen that bandwidth can become a problem in 1 Gb environments. In FVP 2.0 we take a look at the data to be sent, and if it makes sense we compress it before sending it over the wire. This brings down the latency of the WB+1/+2 policies in a 1 Gb environment.
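To illustrate the general idea of compressing only when it pays off (this is just a conceptual sketch with zlib and a made-up threshold, not FVP's actual implementation):

```python
import os
import zlib

# Made-up threshold: only ship the compressed payload if it is at least
# 10% smaller than the original; otherwise the CPU time is not worth it.
MIN_SAVINGS = 0.10

def adaptive_compress(payload: bytes) -> tuple[bytes, bool]:
    """Return (data_to_send, was_compressed)."""
    candidate = zlib.compress(payload, level=1)  # fast, low-CPU compression
    if len(candidate) <= len(payload) * (1.0 - MIN_SAVINGS):
        return candidate, True
    return payload, False  # incompressible data goes over the wire as-is

# A highly compressible write vs. random (incompressible) data:
print(adaptive_compress(b"A" * 64 * 1024)[1])       # True
print(adaptive_compress(os.urandom(64 * 1024))[1])  # False
```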

You can read the full press release here:

http://www.pernixdata.com/press-release/pernixdata-introduces-new-enterprise-subscription-and-vdi-editions-fvp-software


PernixData Nordic Roadshow with Frank Denneman

If you are from Scandinavia, this post is for you. Please read on.

I am happy to announce that PernixData, together with Arrow ECS, will be holding morning seminars in Copenhagen, Oslo, Stockholm, and Helsinki. The topic is using flash in the datacenter and where it should belong. We have been lucky enough to have Frank Denneman visit the Nordics and share his insightful perspective.

The schedule for the week:

September 15th: Copenhagen – sign up
September 16th: Oslo – sign up
September 17th: Stockholm – sign up
September 18th: Helsinki – sign up

What is the seminar about?

Using storage arrays for both performance and capacity is regarded as a natural design by many. This is easy, but almost always introduces issues with storage performance, regardless of the environment size.

Flash storage is seen as the savior for storage I/O bottlenecks, but implementing flash can be confusing. Should it go in your SAN? Your servers? Both? Furthermore, what key features (e.g. write acceleration, clustering) are required to turn flash into an effective tool for accelerating storage performance across an entire data center?

Join us for this talk where Frank will highlight:

  • Pros and cons of various flash deployment methodologies
  • Best practices for using flash to accelerate storage performance
  • How to measure results and ROI

We look forward to seeing you there!