Thursday, 16 June 2016

Cisco Nexus 1000v virtual switch

A software-based switch that spans multiple hosts running VMware ESX or ESXi.



VMware Networking
Each virtual machine has one or more vNICs (virtual NICs). These vNICs are connected to a virtual switch to provide network connectivity to the virtual machine. The guest OS sees each vNIC as a physical NIC (pNIC).

Hosts running VMware ESX have a virtual management port called vswif, sometimes called the service console interface. It is used for communication with VMware vCenter Server, for managing the host directly with the VMware vSphere Client, or for Secure Shell (SSH) logins to the host’s command-line interface (CLI). VMware ESXi hosts do not use vswif interfaces because they lack a service console OS.

Each host also has one or more virtual ports called virtual machine kernel NICs (vmknic) used by VMware ESX for Internet Small Computer Systems Interface (iSCSI) and Network File System (NFS) access, as well as by VMware vMotion. On a VMware ESXi system, a vmknic is also used for communication with VMware vCenter Server.

The pNICs on a VMware ESX host, called virtual machine NICs (VMNICs), are used as uplinks to the physical network infrastructure.

Each vNIC is connected to a standard vSwitch or vDS through a port group. Each port group belongs to a specific vSwitch or vDS and specifies a VLAN or set of VLANs that a VMNIC, vswif, or vmknic will use.
===
Nexus 1000v Components

VSM – Virtual Supervisor Module
VEM – Virtual Ethernet Module  // line cards
Features – vTracker (shows virtual machine and network info on the Cisco device), VCPlugin (shows network info in VMware vCenter)

The VSM is a virtual appliance that can be installed independently of the VEM; that is, the VSM can run on a VMware ESX server that does not have the VEM installed. The VEM is installed on each VMware ESX server to provide packet-forwarding capability.

VMware’s management hierarchy is divided into two main elements: a data center and a cluster. A data center contains all components of a VMware deployment, including hosts, virtual machines, and network switches such as the Cisco Nexus 1000V Series.

Within a VMware data center, the user can create one or more clusters. A cluster is a group of hosts and virtual machines that forms a pool of CPU and memory resources. A virtual machine in a cluster can be run on or migrated to any host in the cluster. Hosts and virtual machines do not need to be part of a cluster; they can exist on their own within the data center.

Port profiles create a virtual boundary between server and network administrators. Port profiles are network policies that are defined by the network administrator and exported to VMware vCenter Server. Within VMware vCenter Server, port profiles appear as VMware port groups in the same locations as traditional VMware port groups would. Port profiles are also used to configure the pNICs in a server; these, known as uplink port profiles, are assigned to the pNICs as part of the installation of the VEM on a VMware ESX host.

The management VLAN is used for system login and configuration, and corresponds to the mgmt0 interface, which appears as the mgmt0 port on a Cisco switch. Although the management interface is not used to exchange data between the VSM and VEM, it is used to establish and maintain the connection between the VSM and VMware vCenter Server. When the software virtual switch (SVS) domain mode for control communication between the VSM and VEM is set to Layer 3, the management interface can also be used for control traffic.

The control interface handles low-level control packets such as heartbeats, as well as any configuration and programming data that needs to be exchanged between the VSM and VEM. Because of the nature of the traffic it carries, the control interface is of the utmost importance in the Cisco Nexus 1000V Series solution.

The packet interface carries packets that need to be processed by the VSM, mainly Cisco Discovery Protocol and Internet Group Management Protocol (IGMP) control packets.
You can use the same VLAN for control, packet, and management, but if needed for flexibility, you can use separate VLANs.
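As a sketch, these VLANs are tied together under the SVS domain on the VSM (the domain ID and VLAN IDs below are placeholders):

! VSM SVS domain (sketch; IDs are examples)
svs-domain
  domain id 100
  control vlan 260
  packet vlan 261
  svs mode L2
! For Layer 3 control over the management interface, the VLANs are
! removed and the mode is changed:
!   no control vlan
!   no packet vlan
!   svs mode L3 interface mgmt0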


switch# show switch edition  // shows which edition is running (Essential or Advanced); the Essential license for up to 256 sockets is free. Licenses per VEM are sticky.


VEMs run in headless mode: if the VSM goes down, data continues to flow, but you can’t make configuration changes. Each vSphere host gets a VEM.

VSM

Deploy VSMs in pairs, just like supervisors in physical switches. One HA pair (or a single VSM if standalone) can manage up to 64 VEMs, so it behaves like a virtual 66-slot chassis (2 supervisors + 64 line cards).

VSM heartbeats between the primary and secondary VSM are Layer 2.

VSM control modes – L2 mode or L3 mode (more flexible; uses UDP port 4785)  // VSM controlling the VEMs
VSM mgmt0 is the default interface for L3 control.
The VSM connects to VMware vCenter using an SSL connection (see the sketch below).
The primary and secondary VSM must be in the same L2 domain; latency should be 5-10 ms max. It usually makes sense to use the control0 interface for L3. Back up your config early and often.
The VSM and VEM versions can differ by one, but try to keep them matched.
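A minimal sketch of the vCenter connection as defined on the VSM (the IP address and datacenter name are placeholders):

svs connection vcenter
  protocol vmware-vim            ! SSL connection to vCenter's VIM API
  remote ip address 10.1.1.20
  vmware dvs datacenter-name DC1
  connect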
#show module uptime
#show module vem
#show module vem counters
#show license usage
  





Backing up the VSM – the running-config alone isn’t enough to restore. Clone the VSM and save the clone. To clone, shut down the standby VSM first and clone that, as the VSM doesn’t have VMware Tools installed.


 

VEM

The VEM is installed on each VMware ESX host as a kernel component.
L3 control requires a vmkernel NIC (a virtual NIC on the ESX host, created by the VMware admin). Using the ESXi management vmkernel NIC is recommended, since ESXi does not have a VRF concept; this requires migrating the management interface to the VEM, but it doesn’t require static routes on the ESXi hosts.
VEM installation – via VMware Update Manager (VUM), or by installing the VIB file manually on ESXi, for example:
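(A sketch; the VIB file name varies by VEM and ESXi version.)

~ # esxcli software vib install -v /tmp/Cisco-VEM.vib
~ # vem status        # confirm the VEM module is loaded and running
~ # vem version       # check the VEM version against the VSM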


Port Types

The Cisco Nexus 1000V Series supports multiple switch-port types for internal and external connectivity: virtual Ethernet (vEth), Ethernet (Eth), and PortChannel (Po). The most common port type in a Cisco Nexus 1000V Series environment is the vEth interface, which represents the switch port connected to a virtual machine’s vNIC or to specialized interface types such as the vswif or vmknic interface. A vEth interface’s name does not indicate the module with which the port is associated; a vEth interface is simply notated as vEthY. This unique notation is designed to work transparently with VMware vMotion, keeping the interface name the same regardless of the location of the associated virtual machine.

The second characteristic that makes a vEth interface unique is its transient nature. A given vEth interface appears or disappears based on the status of the virtual machine connected to it. The mapping of a virtual machine’s vNIC to a vEth interface is static. When a new virtual machine is created, a vEth interface is also created for each of the virtual machine’s vNICs. The vEth interfaces will persist as long as the virtual machine exists.

The switch contains two interface types related to the VMNICs (pNICs) in a VMware ESX host. An Ethernet, or Eth, interface is the Cisco Nexus 1000V Series’ representation of a VMNIC. A PortChannel is an aggregation of multiple Eth interfaces on the same VEM.

How Nexus1000v Operates 

Unlike physical switches with a centralized forwarding engine, each VEM maintains a separate forwarding/MAC address table. There is no synchronization between forwarding tables on different VEMs. In addition, there is no concept of forwarding from a port on one VEM to a port on another VEM. Packets destined for a device not local to a VEM are forwarded to the external network, which in turn may forward the packets to a different VEM. Static entries are automatically generated for virtual machines running on the VEM; these entries do not time out. For devices not running on the VEM, the VEM can learn a MAC address dynamically, through the pNICs in the server.

Cisco Nexus 1000V Series does not run Spanning Tree Protocol. Because the Cisco Nexus 1000V Series does not participate in Spanning Tree Protocol, it does not respond to BPDU packets, nor does it generate them. BPDU packets that are received by Cisco Nexus 1000V Series Switches are dropped. N1kv prevents loops between the VEMs and the first-hop access switches without the use of Spanning Tree Protocol. PortFast is configured per interface and should be enabled on interfaces connected to a VEM, along with BPDU guard and BPDU filtering. Filtering BPDUs at the physical switch port will enhance VEM performance by avoiding unnecessary processing at the VEM uplink interfaces.
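On the upstream physical switch, a VEM-facing port would look roughly like this (a sketch in Catalyst IOS syntax; the interface and VLAN IDs are examples):

! Upstream switch port facing a VEM uplink (sketch)
interface GigabitEthernet1/0/1
 description VEM uplink (ESXi vmnic)
 switchport mode trunk
 switchport trunk allowed vlan 10,100-105,260-261
 spanning-tree portfast trunk      ! go straight to forwarding
 spanning-tree bpdufilter enable   ! don't send BPDUs to the VEM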

VEM to VSM Communication 

The VSM can communicate with the VEM over the Layer 2 or Layer 3 network. Layer 3 is the recommended mode for control and packet communication between the VSM and the VEM. The VEM uses vSwitch data provided by VMware vCenter Server to configure the control interfaces for VSM-to-VEM control and packet communication. The VEM then applies the correct uplink port profile to the control interfaces to establish communication with the VSM. In Layer 3 mode, you can specify whether to use the VSM management interface or the dedicated control0 interface for VSM-to-VEM control traffic. The port profile configured for Layer 3 (VSM-to-VEM) communication on the VSM needs to have capability l3control enabled. This process requires configuration of a VMware vmkernel interface on each VMware ESX host.

Layer 3 connectivity between VSM/VEM uses vmkernel ports. The VEM will use any vmkernel port in a port-profile tagged with “capability l3control” parameter.
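A sketch of such a port profile (the name and VLAN are placeholders; note that the L3 control VLAN must also be a system VLAN so it comes up before the VEM is programmed):

port-profile type vethernet L3-Control
  capability l3control             ! vmkernel ports in this profile carry VSM-VEM traffic
  vmware port-group
  switchport mode access
  switchport access vlan 10
  no shutdown
  system vlan 10
  state enabled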



Port Profiles
Live policy changes – changes to active port profiles are applied to each switch port that is using the profile.
Virtual Ethernet profiles – a vEth profile can be applied to virtual machines and to VMware virtual interfaces such as the VMware management, vMotion, or vmkernel iSCSI interfaces.

Ethernet or Uplink Profiles - To define a port profile for use on pNICs, the network administrator applies the ethernet keyword to the profile. When this option is used, the port profile is available only to the server administrator to apply to pNICs in a VMware ESX server.
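For example, a vEth profile for virtual machines and an uplink profile might look like this (names and VLAN IDs are placeholders):

port-profile type vethernet WebServers
  vmware port-group                ! exported to vCenter as a port group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled

port-profile type ethernet DataUplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,100-105,260-261
  no shutdown
  state enabled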

System VLANs – are defined by an optional parameter that can be added in a port profile. Interfaces that use the system port profile and that are members of one of the system VLANs defined are automatically enabled and forwarded when VMware ESX starts, even if the VEM does not have communication with the VSM. This behavior enables the use of critical host functions if the VMware ESX host starts and cannot communicate with the VSM.
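Continuing the sketch above, the uplink profile would mark the critical VLANs (management, control/packet, storage) as system VLANs:

port-profile type ethernet DataUplink
  system vlan 10,260-261           ! forwarded at boot, even without VSM connectivity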

VLAN consistency - Multiple VEMs require a physical Ethernet switch for inter-VEM connectivity. Each VEM needs consistent connectivity to all VLANs that are defined for the Cisco Nexus 1000V Series. Thus, any VLAN that is defined for the Cisco Nexus 1000V Series must also be defined for all upstream switches connected to each VEM. Each VLAN should be trunked to each VEM using IEEE 802.1q trunking.

#show module
#show interface brief

In the running config there are entries such as “vem 3” and “vem 4” – a new entry is added every time a new host is added. The “interface port-channel” configuration is dynamically created and added, and so is the “interface VethernetX” configuration.
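The dynamically generated entries look roughly like this (a sketch; module numbers and UUIDs are placeholders, and some releases use “host vmware id” instead of “host id”):

vem 3
  host id 422e1a5f-0000-0000-0000-000000000003   ! ESXi host UUID
vem 4
  host id 422e1a5f-0000-0000-0000-000000000004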





System VLANs enable interface connectivity before an interface is programmed. System VLANs address a chicken-and-egg problem → the VEM (line card) needs to be programmed, but it needs a working network (backplane) for that to happen. The VEM will load system port profiles and pass traffic even if the VSM is not up.

Port Channels
Three modes → LACP, vPC-MAC pinning, and vPC host mode (CDP/manual).
For LACP, on the VEM, use vemcmd show lacp.
On the VSM and the upstream switch, use show port-channel summary and show lacp counters/neighbor.
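With an LACP-capable upstream switch, the uplink profile might look like this (a sketch; feature lacp must be enabled first, and the name/VLANs are placeholders):

feature lacp
port-profile type ethernet LACP-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100-105
  channel-group auto mode active   ! one port channel is created automatically per VEM
  no shutdown
  state enabled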

vPC-MAC pinning works with any upstream switch. It allows pinning of vEths (VMs) to specific uplinks. Use it when the upstream switch does not support LACP or when using blade servers; most blade chassis cannot run LACP down to their servers.
When you assign a VMware physical NIC to the port profile, that NIC has a number such as vmnic0 or vmnic1, and that vmnic number becomes the sub-group ID (SGID) on the Nexus 1000V. This gives us the ability to pin traffic from a VM to an uplink based on the SGID. If all ESX hosts are identical this works well, but if they are not, the vmnic numbers could differ between hosts. To solve this, Cisco has relative mode → SGIDs are assigned based on the order in which the NICs are added to the Nexus 1000V.
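A sketch of MAC pinning with a statically pinned vEth profile (names, VLANs, and the SGID are placeholders):

port-profile type ethernet MacPin-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100-105
  channel-group auto mode on mac-pinning relative   ! SGIDs follow the order NICs were added
  no shutdown
  state enabled

port-profile type vethernet Backup-VMs
  vmware port-group
  switchport mode access
  switchport access vlan 105
  pinning id 1                     ! pin these vEths to sub-group 1
  no shutdown
  state enabled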



