Acropolis.Ninja

Brandon Rice

Helpful Nutanix Commands Cheat Sheet

I'm pretty sure anyone who is either a Nutanix employee or a customer using the product on a daily basis keeps a list somewhere of the commands they use most. I decided to turn mine into a blog post and treat it as a living document of the most-used commands; it will be expanded and kept up to date over time.

AHV

Configure management IP address / network

vi /etc/sysconfig/network-scripts/ifcfg-br0

VLAN tag the management network

ovs-vsctl set port br0 tag=####

Show OVS configuration

ovs-vsctl show

Show configuration of bond0

ovs-appctl bond/show bond0

Show br0 configuration (for example to confirm VLAN tag)

ovs-vsctl list port br0

List VMs on host / find CVM

virsh list --all | grep CVM

Power On powered off CVM

virsh start [name of CVM from above command]

Increase RAM configuration of CVM

virsh setmaxmem [name of CVM from above command] --config --size ram_gbGiB

virsh setmem [name of CVM from above command] --config --size ram_gbGiB
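
A worked example of the pair, assuming a hypothetical CVM name of NTNX-A-CVM and a target of 32 GiB:

virsh setmaxmem NTNX-A-CVM --config --size 32GiB   # CVM name and size are examples only
virsh setmem NTNX-A-CVM --config --size 32GiB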

 

ESXi

Show vSwitch configurations

esxcfg-vswitch -l

Show physical nic list

esxcfg-nics -l

Show vmkernel interfaces configured

esxcfg-vmknic -l

Remove vmnic from vSwitch0

esxcfg-vswitch -U vmnic# vSwitch0

Add vmnic to vSwitch0

esxcfg-vswitch -L vmnic# vSwitch0

Set VLAN for default VM portgroup

esxcfg-vswitch -v [vlan####] -p "VM Network" vSwitch0

Set VLAN for default management portgroup

esxcfg-vswitch -v [vlan id####] -p "Management Network" vSwitch0
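
For example, tagging both default portgroups with a hypothetical VLAN 100 would look like this:

esxcfg-vswitch -v 100 -p "VM Network" vSwitch0   # VLAN 100 is an example value
esxcfg-vswitch -v 100 -p "Management Network" vSwitch0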

Set IP address for default management interface (vmk0)

esxcli network ip interface ipv4 set -i vmk0 -I [ip address] -N [netmask] -t static

Set default gateway

esxcfg-route [gateway ip]
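
Put together, a sketch of re-addressing the management interface with hypothetical values (10.0.0.31/24, gateway 10.0.0.1):

esxcli network ip interface ipv4 set -i vmk0 -I 10.0.0.31 -N 255.255.255.0 -t static   # example addresses
esxcfg-route 10.0.0.1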

List VMs on host/find CVM

vim-cmd vmsvc/getallvms | grep -i cvm

Power on powered off CVM

vim-cmd vmsvc/power.on [vm id# from above command]

 

CVM

VLAN tag CVM (only for AHV or ESXi using VGT)

change_cvm_vlan ####

Show AHV host physical uplink configuration

manage_ovs show_uplinks

Remove 1gb pNICs from bond0 on AHV host

manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks

Configure management IP address / network

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Create cluster

cluster -s [cvmip1,cvmip2,cvmipN…] create
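
For example, creating a three-node cluster with hypothetical CVM IPs:

cluster -s 10.0.0.31,10.0.0.32,10.0.0.33 create   # example CVM IPs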

Get cluster status

cluster status

Get detailed status of the services local to the current CVM

genesis status

Restart a specific service across the entire cluster (example below: cluster_health)

allssh genesis stop cluster_health; cluster start
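
The same pattern works for other services; for example, if you need to restart the prism service cluster-wide:

allssh genesis stop prism; cluster start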

Show Prism leader

curl localhost:2019/prism/leader

Stop cluster

cluster stop

Start a stopped cluster

cluster start

Destroy cluster

cluster destroy

Discover nodes

discover_nodes

Gracefully shutdown CVM

cvm_shutdown -P now

Upgrade a non-cluster-joined node from a cluster CVM without expanding the cluster

cluster -u [remote node cvmip] upgrade_node

Check running AOS upgrade status for cluster

upgrade_status

Check running hypervisor upgrade status for cluster

host_upgrade_status

Get CVM AOS version

cat /etc/nutanix/release_version

Get cluster AOS version

ncli cluster version

Create Prism Central instance (should be run on the deployed PC VM, not a cluster CVM)

cluster --cluster_function_list multicluster -s [pcipaddress] create

Run all NCC health checks

ncc health_checks run_all

Export all logs (optionally scrubbed for IP info)

ncc log_collector --anonymize_output=true run_all

 

ipmitool (NX platform)

These commands are hypervisor agnostic; on ESXi a leading slash is required (/ipmitool)

Configure IPMI to use static ip

ipmitool lan set 1 ipsrc static

Configure IPMI IP address

ipmitool lan set 1 ipaddr [ip address]

Configure IPMI network mask

ipmitool lan set 1 netmask [netmask]

Configure IPMI default gateway

ipmitool lan set 1 defgw ipaddr [gateway ip]

Configure IPMI VLAN tag

ipmitool lan set 1 vlan id [####]

Remove IPMI VLAN tag

ipmitool lan set 1 vlan id off

Show current IPMI configuration

ipmitool lan print 1
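
Put together, a sketch of a full static IPMI network setup using hypothetical addresses (prefix each command with / on ESXi):

ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 10.0.0.41       # example addresses only
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 10.0.0.1
ipmitool lan print 1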

Show IPMI mode (failover/dedicated)

ipmitool raw 0x30 0x70 0x0c 0

The result will be one of the following

  1. 00 = Dedicated
  2. 01 = Onboard / Shared
  3. 02 = Failover (default mode)

Get IPMI user list

ipmitool user list

Reset IPMI ADMIN user password back to factory (trailing ADMIN is the password)

ipmitool user set password [# of ADMIN user from command above] ADMIN

Reboot the BMC (reboot the IPMI only)

ipmitool mc reset cold

 

URLs

CVM built-in foundation

http://[cvmip]:8000/gui

Legacy cluster_init (should redirect to Foundation on newer AOS)

http://[cvmip]:2100/cluster_init.html

Get cluster status

http://[cvmip]:2100/cluster_status.html

Nutanix Discovery on Cisco 7K Switches

Nutanix nodes are configured by default to use IPv6 link-local addresses and IPv6 neighbor discovery in order to make expanding clusters very easy.  Simply connect the new node to the network, and as long as the existing Nutanix cluster and the new node being added reside on the same layer 2 segment, you should be able to discover the new node without any other configuration.  This lets an administrator configure the IPv4 addresses from their desk without having to stay in the datacenter with a crash cart.  Pretty nifty, right?
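
If you want to sanity-check that IPv6 link-local traffic is actually flowing, one quick unofficial test from a CVM is to ping the all-nodes link-local multicast address on eth0 and see whether other hosts on the segment answer:

ping6 -I eth0 ff02::1   # replies should come from other hosts on the same layer 2 segment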

I have seen this feature become unavailable a couple of times because the necessary IPv6 traffic was not being allowed on the network. In each case the customer did not realize, or even intend, that this was happening.  Every time, the cause was OMF (optimized multicast flood) configured on Cisco Nexus 7K switches.  Because this is the default configuration, it will always be a problem if not addressed! For one customer, who used Cisco 7K switches as their access layer, discovery would not work at all.  In the other case, the customer did not align their Nutanix clusters to the boundaries of the Cisco 5K switches in their leaf/spine design (a best practice, but not feasible for them).  Discovery partially worked, but only within the boundary of any single Cisco 5K, which effectively segmented the traffic and caused confusion.

At first this can be daunting to figure out because everything else you would expect to work works just fine. Communication appears healthy; for example, pings respond normally (once IPs are manually configured).  After the right levels of google-fu, we came across this article online: https://www.tachyondynamics.com/nexus-7000-ipv6-configuration-pitfalls.  Long story short, the default configuration of a Cisco 7K blocks the IPv6 multicast traffic that neighbor discovery relies on, and you will need to disable OMF on the Cisco 7K.  The actual configuration item is: no ip igmp snooping optimised-multicast-flood.
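
As I understand it, on the 7K this is applied per VLAN under vlan configuration mode; a rough sketch, assuming a hypothetical VLAN 100 (verify the exact mode and syntax against your NX-OS release):

! example only: disable OMF on VLAN 100 (substitute your VLAN ID)
vlan configuration 100
  no ip igmp snooping optimised-multicast-flood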

Hopefully Google surfaces this page for the more common searches about Nutanix discovery not working.

Datastore Identification Migrating from Non-Nutanix to Nutanix with vSphere

**** UPDATE **** A disclaimer is important here before doing any of the below.  This should ONLY be used for migrations, on a temporary basis!  Long term there is no scalability or failover capability. ****

NOTE: For the below to work, the non-Nutanix hosts and the Nutanix hosts must be on the same layer 2 network.

With ESXi, NFS datastores are identified by the IP address of the server they mount the volume from. Nutanix hosts normally use a private network for the NFS mount to keep read traffic off of the external network, reaching out over the external network only when data is not local, or on writes, where redundant copies go to one or two other hosts depending on the replication factor setting. This private network is 192.168.5.0/24, and the NFS server is always 192.168.5.2. The Nutanix platform lets you whitelist external IP addresses so that non-Nutanix hosts can mount the same container (an NFS-backed datastore in vSphere), primarily for migrations to the platform.

[Screenshot: filesystem whitelist]
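
The whitelist can also be managed from a CVM with ncli; a sketch, using a hypothetical host IP (verify the exact ncli syntax for your AOS version):

ncli cluster add-to-nfs-whitelist ip-subnet-masks=10.0.0.20/255.255.255.255   # example host IP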

However, when these external hosts mount the volume, they do so over the external network of our Controller Virtual Machines (CVMs). This means the datastore shows up as a different datastore, like below; even giving it the same name won't work. The screenshot below shows this in my lab environment, where esxi00 is a non-Nutanix ESXi host and the nesxi## hosts are Nutanix hosts in a single Nutanix cluster.

[Screenshot: datastore mismatch]

The primary way this has been addressed in the past is to move data twice. First you mount the container on the Nutanix hosts both internally as usual and externally under a different datastore name, and that external mount is also mounted by the non-Nutanix hosts. It is used to complete the first storage vMotion, followed by the compute vMotion, and then a final storage vMotion completes the migration. Not difficult, but a bit time consuming. However, I had a requirement to reduce these steps because another platform was accessing the vCenter server upstream, and the virtual machines could not easily be moved at the vCenter/vSphere layer directly. The upstream platform could trigger storage migrations, but doing so was extraordinarily tedious. An idea occurred to me that could remedy this situation and make migrations easier in general.

During Acropolis upgrades, or when a CVM fails, a static route is injected into the Nutanix host that lost its connection to the offline CVM. This functionality is called autopathing (brief detail here, but a Google search will find blogs explaining it in depth). The IP address 192.168.5.2/32 is temporarily routed to the external IP of another working CVM as the next hop, and when the local CVM comes back up, the static route is removed. In this way, the host whose CVM is offline never loses connectivity to the datastore. I wondered if this could be used another way: the non-Nutanix hosts could be given a static route for 192.168.5.2/32 with one of the CVM IP addresses as the next hop. If this worked, the IP of the datastore would be the same for both non-Nutanix and Nutanix hosts, and you would only have to migrate data once.

In order to do this, log into the non-Nutanix host in question. The command to set the static route is esxcfg-route -a 192.168.5.2/32 <IP of a CVM or the cluster IP>. You'll get an error saying the IP isn't valid (it believes it isn't on the same network), but the route is still added.

[Screenshot: add static route]
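
You can confirm the route took effect by listing the host's routing table:

esxcfg-route -l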

At this point, we can mount the volume on the non-Nutanix host as before, but using 192.168.5.2 as the server rather than the external IP address of a CVM.

[Screenshot: add datastore]
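
If you prefer the command line to the vSphere client, a rough equivalent from the host shell would be the following, where the container/datastore name is hypothetical:

esxcli storage nfs add -H 192.168.5.2 -s /ctr01 -v ctr01   # ctr01 is an example container name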

Once the container successfully mounts as a datastore (whether via the wizard's OK button or the command line), the IP addresses match and vSphere identifies it as the same datastore.

[Screenshot: datastore matches]

Now you can do a storage vMotion just once, followed by a compute migration. Hopefully this saves you some time!