
1 click Self Service Restore? Yes, please.

I got the chance to test this feature out today at the request of a customer. It promised a much simpler configuration for their environment and removed the need for Microsoft DPM. Since I hadn't used the feature myself, I decided to take it for a spin. So what is it?

1 Click Self Service Restore allows users to restore their own files. It is built on Nutanix's native Data Protection and is extremely easy to set up; I had it running in a test environment in minutes. I believe it is a feature of AOS 5.0 or newer.

The first step is to ensure that your VM has Nutanix Guest Tools (NGT) enabled, with the Self Service Restore (SSR) feature checked.

Once NGT is enabled you will need to install it inside the guest. The installer mounts in the CD-ROM drive and may even autoplay.

After the installation we need to set up a Protection Domain under the Data Protection tab. Configure your PD with the schedule you require.
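If you prefer the command line, a protection domain can also be created and the VM added to it from a CVM with ncli. This is just a rough sketch; the PD name and VM name below are placeholders, and the snapshot schedule is still easiest to configure in Prism:

ncli pd create name="SSR-PD"

ncli pd protect name="SSR-PD" vm-names="MyFileServer"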

I created some files that would represent something I deleted and wanted to copy back.

Once the PD is created and has taken its first snapshot, it's time to run the SSR executable from within our VM.

 

 

Once you launch the executable, you will need to log in to the local portal. These are the credentials of the local system, not the Nutanix administrative portal.

After authentication you can see the snapshots that belong to this VM. You can click on each individual snapshot and see the associated disks. Click on any that you wish to restore from, then click Disk Action and Mount. You can then navigate through Windows Explorer and see the mounted drive.

 

Now you have access to your file!

Install Cisco ISE on AHV

Upload the Cisco ISE ISO to the image service. This will allow you to mount it to a virtual CD-ROM.

 

Now create the VM we will use to install Cisco ISE. I chose 4 vCPU, 16GB RAM, and a 200GB disk.
Connect the CD-ROM to the image service and choose the ISO we uploaded.

You will need to run the following aCLI command from a CVM to set the required flag:

acli vm.update vmnamehere extra_flags=enable_hyperv_clock=false
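For example, with a hypothetical VM named ise-01, the command run from a CVM would look like this:

acli vm.update ise-01 extra_flags=enable_hyperv_clock=false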

Now boot the VM and open the console. Choose option 1 to install.

 

 

The system will reboot and give the same prompt again. This time just press enter to boot from disk.

Type setup

Enter all the info:

 

That’s it!

 

Export an ISO from the Image Service on Nutanix AHV

Nutanix AHV has an image service built into PRISM that lets you upload ISO files and connect them to VMs.  Currently, however, the image service has no export function in PRISM.


However, it is not hard to export this ISO if you need to.

Step 1: Find the vmdisk_uuid for the Image

Log into a CVM and go into acli.

List the images using: image.list

Get the details for the image using image.get [Image Name].
Example: image.get Windows10

Note the vmdisk_uuid.
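For reference, both commands can also be run directly from the CVM bash prompt by prefixing them with acli, which is handy if you want to script the lookup. The image name below is just the example from above:

acli image.list

acli image.get Windows10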


 

Step 2: Use an SCP tool to copy the vDisk

Use an SCP tool like WinSCP to log into the CVM.  You need to log in with a PRISM user/pass on port 2222.  In this example I use the default login "admin".


 

The .acropolis folder is hidden, so use the Open Directory button to browse to the .acropolis/vmdisk folder.


 

Right click on the UUID and select download.


 

Type in a name for the ISO.  In this example I name it Windows10.iso.
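If you would rather skip the GUI, the same copy can usually be done from a Linux or Mac terminal with scp against port 2222. The CVM IP, container name, and output filename below are placeholders, and the exact path prefix may differ depending on where the SCP session lands:

scp -P 2222 admin@[cvm ip]:/[container]/.acropolis/vmdisk/[vmdisk_uuid] ./Windows10.iso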


Export a VM from Nutanix AHV to VMware ESXi

In this example I export a CentOS 7 template from AHV to ESXi.  There are a couple of ways to accomplish this task, depending on whether you need a thin provisioned file or a thick provisioned file.

Thin Provisioned

Step 1: Find UUID of the vDisk.

Connect to a CVM, enter aCLI and run the command vm.get [vm name]

Copy the vmdisk_uuid.  (Note the size of the VM under the STORAGE column in PRISM; that should be the size of the exported file, assuming the VM only has one vDisk.)

 

Step 2: Export the vDisk

vDisks of AHV VMs are located in a hidden folder on the container named .acropolis.  We use the qemu-img command to export the vDisk.  The vDisk is exported in a thin format and should match the size of the VM in PRISM.  If the disk is large, the command might take longer to complete than the timeout value of the SSH session.  To keep the conversion from being corrupted by the SSH session timing out, either use keep-alives or run the task in the background by adding an '&' at the end of the command.  In this example I will run the task in the background.

Make sure the VM is powered off, then run the following command:

qemu-img convert -O vmdk nfs://127.0.0.1/[container]/.acropolis/vmdisk/[UUID] nfs://127.0.0.1/[container]/[vmdisk].vmdk &

Example:
qemu-img convert -O vmdk nfs://127.0.0.1/Nutanix/.acropolis/vmdisk/fea6b382-43ec-4236-b521-edac7ac923cb nfs://127.0.0.1/Nutanix/CentOS_7.vmdk &

We can check that the task is still running using the command ps -A | grep qemu.  When the command returns nothing we know it has completed.
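If you would rather not depend on a background job surviving a dropped session, another option is to wrap the same command with nohup (a standard Linux utility) so the conversion keeps running even if the SSH connection closes. The paths are the same placeholders as above:

nohup qemu-img convert -O vmdk nfs://127.0.0.1/[container]/.acropolis/vmdisk/[UUID] nfs://127.0.0.1/[container]/[vmdisk].vmdk &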

 

 

Step 3: Copy the vDisk

Once the export completes, you can whitelist a Windows 2012 R2 server and simply browse to the container to copy the vDisk.  Alternatively, you can use an SCP tool by connecting with admin@[host]:2222.

 

Step 4: Create a new Virtual Machine and upload the VMDK to ESXi.

Here I create a new VM with no virtual disk, because I am going to upload the VMDK to the VM’s folder.

Use a SCP tool to connect to ESXi and upload the VMDK to the VM’s folder.

 

Step 5: Use vmkfstools to create the vmdk disk descriptor.

ESXi expects VMDKs to have a disk descriptor file that points to the flat data file.  We can use vmkfstools to clone the uploaded VMDK into that format using the following command:

vmkfstools -i [sourceVMDK] [destinationVMDK] -d thin

Example:
vmkfstools -i CentOS_7.vmdk CentOS7.vmdk -d thin

Once the disk descriptor is created you can delete the original file.

 

Step 6: Attach the VMDK to the VM and power it on

Thick Provisioned

Exporting a thick provisioned disk is similar to the process above, except we don’t need to use qemu.  We can just SCP the disk from the .acropolis directory.

Step 1: Find UUID of the vDisk.

Connect to a CVM, enter aCLI and run the command vm.get [vm name]

Copy the vmdisk_uuid.  (Note the size of the VM under the STORAGE column in PRISM; that should be the size of the exported file, assuming the VM only has one vDisk.)

 

Step 2: SCP the vdisk from the .acropolis/vmdisk directory.

Use an SCP tool to connect to the Nutanix CVM.  If you use WinSCP you will have to use the Open Directory button (CTRL+O) to open the hidden .acropolis/vmdisk directory.

Copy the vDisk that matches the UUID from vm.get. Then copy the file to ESXi.

 

Step 3: Create a new Virtual Machine and upload the VMDK to ESXi.

Here I create a new VM with no virtual disk, because I am going to upload the VMDK to the VM’s folder.

Use a SCP tool to connect to ESXi and upload the VMDK to the VM’s folder.

 

Step 4: Use vmkfstools to create a VMDK descriptor file, then replace the flat file with the exported vDisk.

Following the process in VMware KB 1002511 we recreate a vDisk descriptor file.  The file has to be created with the exact same size as the exported vDisk.  Use ls -l to check the size of the exported vDisk.  Then use the following command to create the vDisk descriptor file:

vmkfstools -c [vDisk size] [destination file] -d thin

Example:
vmkfstools -c 42949672960 CentOS_7.vmdk -d thin

Once the file is created, replace the -flat.vmdk file with the exported vDisk.

Example:
mv fea6b382-43ec-4236-b521-edac7ac923cb CentOS_7-flat.vmdk

 

Step 5: Attach the VMDK to the VM and power it on

Helpful Nutanix Commands Cheat Sheet

I’m pretty sure anyone who is either a Nutanix employee or a customer that uses the product on a daily basis has a list somewhere of the commands they use.  I decided to create a blog post to become a living document with the most used commands.  It should get expanded and be kept up to date over time.

AHV

Configure mgt IP address / network

vi /etc/sysconfig/network-scripts/ifcfg-br0

VLAN tag mgt network

ovs-vsctl set port br0 tag=####
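For example, to tag the management network with a hypothetical VLAN 101 (run on the AHV host):

ovs-vsctl set port br0 tag=101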

Show OVS configuration

ovs-vsctl show

Show configuration of bond0

ovs-appctl bond/show bond0

Show br0 configuration (for example to confirm VLAN tag)

ovs-vsctl list port br0

List VMs on host / find CVM

virsh list --all | grep CVM

Power On powered off CVM

virsh start [name of CVM from above command]

Increase RAM configuration of CVM

virsh setmaxmem [name of CVM from above command] --config --size ram_gbGiB

virsh setmem [name of CVM from above command] --config --size ram_gbGiB
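As a rough example, bumping a CVM named NTNX-A-CVM (a placeholder name; use the name returned by virsh list) to 32GB would look like this, with the change taking effect the next time the CVM is started:

virsh setmaxmem NTNX-A-CVM --config --size 32GiB

virsh setmem NTNX-A-CVM --config --size 32GiB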

 

ESXi

Show vSwitch configurations

esxcfg-vswitch -l

Show physical nic list

esxcfg-nics -l

Show vmkernel interfaces configured

esxcfg-vmknic -l

Remove vmnic from vSwitch0

esxcfg-vswitch -U vmnic# vSwitch0

Add vmnic to vSwitch0

esxcfg-vswitch -L vmnic# vSwitch0

Set VLAN for default VM portgroup

esxcfg-vswitch -v [vlan####] -p "VM Network" vSwitch0

Set VLAN for default management portgroup

esxcfg-vswitch -v [vlan id####] -p "Management Network" vSwitch0
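For example, with a hypothetical VLAN 101 for VMs and VLAN 102 for management:

esxcfg-vswitch -v 101 -p "VM Network" vSwitch0

esxcfg-vswitch -v 102 -p "Management Network" vSwitch0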

Set IP address for default management interface (vmk0)

esxcli network ip interface ipv4 set -i vmk0 -I [ip address] -N [netmask] -t static

Set default gateway

esxcfg-route [gateway ip]

List VMs on host/find CVM

vim-cmd vmsvc/getallvms | grep -i cvm

Power on powered off CVM

vim-cmd vmsvc/power.on [vm id# from above command]

 

CVM

VLAN tag CVM  (only for AHV or ESXi using VGT)

change_cvm_vlan ####

Show AHV host physical uplink configuration

manage_ovs show_uplinks

Remove 1gb pNICs from bond0 on AHV host

manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks

Configure mgt IP address / network

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Create cluster

cluster -s [cvmip1,cvmip2,cvmipN…] create
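For example, with three hypothetical CVM IPs:

cluster -s 10.0.0.31,10.0.0.32,10.0.0.33 create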

Get cluster status

cluster status

Get detailed status of the services local to the current CVM

genesis status

Restart specific service across entire cluster (example below:  cluster_health)

allssh genesis stop cluster_health; cluster start

Show Prism leader

curl localhost:2019/prism/leader

Stop cluster

cluster stop

Start a stopped cluster

cluster start

Destroy cluster

cluster destroy

Discover nodes

discover_nodes

Gracefully shutdown CVM

cvm_shutdown -P now

Upgrade non-cluster joined node from cluster CVM without expanding the cluster

cluster -u [remote node cvmip] upgrade_node

Check running AOS upgrade status for cluster

upgrade_status

Check running hypervisor upgrade status for cluster

host_upgrade_status

Get CVM AOS version

cat /etc/nutanix/release_version

Get cluster AOS version

ncli cluster version

Create Prism Central instance (should be run on the deployed PC VM, not a cluster CVM)

cluster --cluster_function_list multicluster -s [pcipaddress] create

Run all NCC health checks

ncc health_checks run_all

Export all logs (optionally scrubbed for IP info)

ncc log_collector --anonymize_output=true run_all

 

ipmitool (NX platform)

These commands are hypervisor agnostic; on ESXi a leading / is required (run /ipmitool instead of ipmitool).

Configure IPMI to use static ip

ipmitool lan set 1 ipsrc static

Configure IPMI IP address

ipmitool lan set 1 ipaddr [ip address]

Configure IPMI network mask

ipmitool lan set 1 netmask [netmask]

Configure IPMI default gateway

ipmitool lan set 1 defgw ipaddr [gateway ip]

Configure IPMI VLAN tag

ipmitool lan set 1 vlan id [####]

Remove IPMI VLAN tag

ipmitool lan set 1 vlan id off

Show current IPMI configuration

ipmitool lan print 1

Show IPMI mode (failover/dedicated)

ipmitool raw 0x30 0x70 0x0c 0

The result will be one of the following

  1. 00 = Dedicated
  2. 01 = Onboard / Shared
  3. 02 = Failover (default mode)

Get IPMI user list

ipmitool user list

Reset IPMI ADMIN user password back to factory (trailing ADMIN is the password)

ipmitool user set password [# of ADMIN user from command above] ADMIN
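For example, if the user list shows ADMIN as user ID 2 (common on NX/Supermicro BMCs, but verify against your own user list output first):

ipmitool user set password 2 ADMIN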

Reboot the BMC (reboot the IPMI only)

ipmitool mc reset cold

 

URLs

CVM built-in foundation

http://[cvmip]:8000/gui

Legacy cluster-init (should attempt redirect to foundation on newer AOS)

http://[cvmip]:2100/cluster_init.html

Get cluster status

http://[cvmip]:2100/cluster_status.html

Export a VM from AHV raw format to VMware Workstation

In this example I’m going to export my Windows 2012 R2 template from AHV to VMware Workstation.

Step 1: Find UUID of the vDisk.

Connect to a CVM, enter aCLI and run the command vm.get [vm name]

Copy the vmdisk_uuid.  (Note the size of the VM under the STORAGE column in PRISM; that should be the size of the exported file, assuming the VM only has one vDisk.)


 

Step 2: Export the vDisk

vDisks of AHV VMs are located in a hidden folder on the container named .acropolis.  We use the qemu-img command to export the vDisk.  The vdisk is exported in a thin format and should match the size of the VM in PRISM.

Make sure the VM is powered off, then run the following command:

qemu-img convert -O vmdk nfs://127.0.0.1/[container]/.acropolis/vmdisk/[UUID] nfs://127.0.0.1/[container]/[vmdisk].vmdk

Example:
qemu-img convert -O vmdk nfs://127.0.0.1/Nutanix/.acropolis/vmdisk/838950be-d0d8-4132-bf8f-e02411807cf2 nfs://127.0.0.1/Nutanix/win2012r2.vmdk

 

Step 3: Copy the vDisk

Once the export completes, you can now whitelist a Windows 2012 R2 server and simply browse to the container and copy the vDisk.


 

Step 4: Create a VM with the new vDisk in VMware Workstation and power it on.  Remember to install VMware Tools.


Configure a Network Switch in Prism to show latest network visualization!

With the release of Asterix (AOS 5.0) you can now view very detailed information about your switch. This is great news because visibility into the physical network layer on AHV was limited before.

Nutanix has supported adding a network switch configuration to view stats in Prism for some time now, but it wasn't until Asterix that you could see things such as the AHV host vSwitch config, detailed port information, and VM-to-host-to-port-to-physical-switch mappings.

Getting a switch configured with a basic config to pull into prism is pretty easy. Let’s begin.

Let’s start with the physical switch config. I am using an Arista 7050.
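The screenshot of my switch config did not make it here, but at a minimum the switch needs a read-only SNMP community that Prism can poll (LLDP is typically already enabled on Arista EOS). A minimal sketch, assuming a community string of nutanix:

snmp-server community nutanix ro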

 



Now let's configure Prism. Click the Network Switch option and add a Switch Configuration.

Click on the gear in Prism and click "Network Switch."

Using the basic information from our switch config, input the SNMP community name and username, and select the correct SNMP version from the drop-down. I used v2c. Don't forget to add your switch management IP address!

 

Now you can click on the network visualization from the menu.

 

 

 

Have fun playing around with the visibility.

Export A VM on Nutanix AHV

Step 1: Find UUID of the vDisk.

Connect to a CVM, enter aCLI and run the command vm.get [vm name]

Copy the vmdisk_uuid.


Step 2: Export the vDisk

vDisks of AHV VMs are located in a hidden folder on the container named .acropolis.  We use the qemu-img command to export the vDisk.  One cool thing is that the vDisk is exported in a thin format, so even if it is provisioned as a 100GB drive, it will only export the actual size used.

Make sure the VM is powered off, then run the following command:

qemu-img convert -c nfs://127.0.0.1/[container]/.acropolis/vmdisk/[UUID] -O qcow2 nfs://127.0.0.1/[container]/[vmdisk].qcow2

Example:
qemu-img convert -c nfs://127.0.0.1/Nutanix/.acropolis/vmdisk/5c0996b9-f114-475f-98c0-ea4d09e8e447 -O qcow2 nfs://127.0.0.1/Nutanix/export_me.qcow2

Step 3: Copy the vDisk

Once the export completes, you can now whitelist a Windows 2012 R2 server and simply browse to the container and copy the vDisk.


Nutanix Discovery on Cisco 7K Switches

Nutanix nodes are configured by default to use IPv6 link-local addresses and use IPv6 neighbor discovery in order to make the process of expanding clusters very easy.  Simply connect the new node to the network, and as long as the existing Nutanix cluster and the new node being added resides on the same layer 2 segment, you should be able to discover the new node without doing any other configuration.  This enables an administrator to configure the IPv4 addresses from their desk without having to stay in the datacenter with a crash cart.  Pretty nifty, right?

I have seen this feature become unavailable a couple of times because the necessary IPv6 traffic was not being allowed on the network, and the customers did not realize or even intend for this to be the case.  In all of these situations, the cause was OMF (optimized multicast flood) configured on the Cisco 7K switches.  This is the default configuration, so it will always be a problem if not addressed! For one customer, using Cisco 7K switches as their access layer, discovery would not work at all.  In the other case, the customer did not align their Nutanix clusters to the boundaries of the Cisco 5K switches in a leaf/spine configuration (a best practice, but not feasible for them).  Discovery partially worked, but only within the boundary of any one single Cisco 5K, which effectively segmented the traffic and caused confusion.

At first this can be daunting to figure out, because everything else works just as you would expect. Communication appears fine; pings, for example, respond just fine (once IPs are manually configured).  After the right level of google-fu, we came across this article online: https://www.tachyondynamics.com/nexus-7000-ipv6-configuration-pitfalls.  Long story short, the default configuration of a Cisco 7K blocks the IPv6 multicast traffic that neighbor discovery relies on, and you will need to disable OMF on the Cisco 7K.  The actual configuration item is: no ip igmp snooping optimised-multicast-flood.

Hopefully Google surfaces this page for the more common searches about Nutanix discovery not working.

Datastore Identification Migrating from Non-Nutanix to Nutanix with vSphere

**** UPDATE ****  A disclaimer is important here before doing the below.  This should ONLY be used for migrations on a temporary basis!  Long term there is no scalability or failover capability. *****

NOTE: For the below to work, the non-nutanix hosts and nutanix hosts must be on the same layer 2 network.

With ESXi, NFS datastores are identified by the IP address of the NFS server used to mount the volume. Nutanix hosts normally use a private network for the NFS mount, to keep read traffic off of the external network; they only reach out over the external network when data happens not to be local, or on writes, for redundancy to one or two other hosts based on the replication factor setting. This private network is 192.168.5.0/24, and the NFS server is always 192.168.5.2. The Nutanix platform allows you to whitelist other external IP addresses so that non-nutanix hosts can connect to the same container (an NFS-based datastore in vSphere), primarily for migrations to our platform.

whitelist

However, when these external hosts mount the volume, they do so against the external network of our Controller Virtual Machines (CVMs). This means that the datastore will show up like below; even trying to name it the same won't work. The screenshot below shows this in my lab environment. In the screenshots esxi00 is a non-nutanix ESXi host, and the nesxi## hosts are nutanix hosts in a single nutanix cluster.

datastore mismatch

The primary way this has been addressed in the past is to move data twice. First you mount the container from the Nutanix hosts both internally, per usual, and externally using a different datastore name. The external mount is also mounted by the non-nutanix hosts. It is then used to complete the first storage vMotion, followed by the compute vMotion, and ultimately a final storage vMotion to complete the migration. Not difficult, but a bit time consuming. However, I had a requirement to reduce these steps because another platform was accessing the vCenter server upstream, and the virtual machines could not easily be moved at the vCenter/vSphere layer directly. The upstream platform could trigger storage migrations, but doing so was extraordinarily tedious, and an idea occurred to me that could hopefully remedy this situation and make migrations easier in general.

During Acropolis upgrade processes, or when a CVM fails, a static route is injected into the Nutanix host that lost its connection to the offline CVM. This functionality is called autopathing (brief detail here, but a google search will find you blogs explaining it in depth). The route for 192.168.5.2/32 temporarily points to the external IP of another working CVM as the next hop. When the CVM comes back up on the host, the static route is removed. In this way, the host whose CVM is offline never loses connectivity to the datastore. I wondered if this could be used another way, where the non-nutanix hosts could have a static route for 192.168.5.2/32 configured with one of the CVM IP addresses as the next hop. If this were to work, the IP of the datastore would be the same for both non-nutanix and nutanix hosts, and the result would be that you could migrate data just one time.

In order to do this, log into the subject non-nutanix host. The command to set the static route would be esxcfg-route -a 192.168.5.2/32 <IP of a CVM or the Cluster IP>. You’ll get an error that says the IP isn’t valid (it believes it’s not on the same network), but it will still add the route.
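For example, with a hypothetical CVM IP of 10.10.10.31, the command would be:

esxcfg-route -a 192.168.5.2/32 10.10.10.31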

add static route

At this point, we can mount the volume as before on the non-nutanix host, but using 192.168.5.2 as the server rather than the external IP address of the CVM.

add datastore

After hitting the OK button, the container successfully mounts as a datastore; the IP addresses match, and vSphere identifies the datastore as the same one.

datastore matches

Now you can do a storage vMotion just once, followed by a compute migration.  Hopefully this saves you some time!