Wednesday, 29 June 2016

Virtual Switching System - Cisco 4500

The VSS active and standby switches perform packet forwarding for ingress data traffic on their locally hosted interfaces. However, the VSS standby switch sends all control traffic to the VSS active switch for processing.
When you configure VSL, all existing configurations are removed from the interface except for specific allowed commands.

Redundancy and High Availability
In a VSS, supervisor engine redundancy operates between the VSS active and standby switches, using stateful switchover (SSO) and nonstop forwarding (NSF). The peer switches exchange configuration and state information across the VSL, and the VSS standby supervisor engine runs in SSO-HOT mode.

Packet Handling
Both switches perform packet forwarding for ingress traffic on their interfaces. If possible, ingress traffic is forwarded to an outgoing interface on the same switch to minimize data traffic that must traverse the VSL.
The VSS active supervisor engine acts as a single point of control for the VSS. The VSS standby switch runs a subset of system management tasks. For example, the VSS standby switch handles its own power management, linecard bringup, and other local hardware management.

Catalyst 4500 and Catalyst 4500-X VSS require the same supervisor engine type in both chassis. The chassis must contain the same number of slots, even if their linecards differ or their slots are empty.

Switch 1 receives module numbers from 1-10 and Switch 2 receives module numbers from 11-20, irrespective of the chassis type, supervisor type, or number of slots in a chassis. The show switch virtual slot-map command displays the virtual-to-physical slot mapping.

Configure at least two of the 10 Gigabit Ethernet/1 Gigabit Ethernet ports as VSL, selecting ports from different modules.

RPR and SSO Redundancy
A VSS operates with stateful switchover (SSO) redundancy if it meets the following requirements:
• Both supervisor engines must be running the same software version, unless the VSS is in the process of a software upgrade.
• VSL-related configuration in the two switches must match.
• SSO and nonstop forwarding (NSF) must be configured on each switch.
If a VSS does not meet the requirements for SSO redundancy, it is incapable of establishing a redundant relationship with the peer switch. Unlike the Catalyst 6500, the Catalyst 4500/4500-X series VSS does not support route processor redundancy (RPR) mode.

If the VSS active switch or supervisor engine fails, the VSS initiates a stateful switchover and the former VSS standby switch becomes the VSS active switch; the failed switch performs recovery action by reloading the supervisor engine. If the VSS standby switch or supervisor engine fails, no switchover is required; the failed switch performs recovery action by reloading the supervisor engine.

The VSL links are unavailable while the failed switch recovers. After the switch reloads, it becomes the new VSS standby switch and the VSS reinitializes the VSL links between the two switches. The switching modules on the failed switch are unavailable during recovery, so the VSS operates only with the MEC links that terminate on the VSS active switch. The bandwidth of the VSS is reduced until the failed switch has completed its recovery and become operational again. Any devices that are connected only to the failed switch experience an outage.

Ports on the VSS standby switch (while it boots) come up tens of seconds before the control plane is fully functional. This behavior causes a port to start working in independent mode and might cause traffic loss until the port is bundled.

If only the VSL has failed and the VSS active switch is still operational, the VSS standby switch cannot determine the state of its peer and takes over as VSS active, resulting in a dual-active scenario.

If you enter the reload command from the command console, the switch on which you issued the command reloads.
To reload only the VSS standby switch, use the redundancy reload peer command. 
To force a switchover from the VSS active to the standby supervisor engine, use the redundancy force-switchover command.
To reset both the VSS active and standby switch, use the redundancy reload shelf command.

Multichassis EtherChannels
In a VSS, an MEC is an EtherChannel with an additional capability: the VSS balances the load across the ports in each switch independently. For example, if traffic enters the VSS active switch, the VSS selects an MEC link on the VSS active switch. This MEC capability ensures that data traffic does not unnecessarily traverse the VSL.

PAgP and LACP run only on the VSS active switch. PAgP or LACP control packets destined for an MEC link on the VSS standby switch are sent across the VSL.

If all MEC links to the VSS active switch fail, data traffic terminating on the VSS active switch reaches the MEC by crossing the VSL to the VSS standby switch. Control protocols continue to run on the VSS active switch; protocol messages reach the MEC by crossing the VSL.

The VSS active switch runs Spanning Tree Protocol (STP). The VSS standby switch redirects STP BPDUs across the VSL to the VSS active switch.

If possible, to minimize data traffic that must traverse the VSL, ingress traffic is forwarded to an outgoing interface on the same switch. When software forwarding is required, packets are sent to the VSS active supervisor engine for processing.

The same router MAC address, assigned by the VSS active supervisor engine, is used for all Layer 3 interfaces on both VSS member switches. After a switchover, the original router MAC address is still used. The router MAC address is configurable. 

Diagnosis
Bootup diagnostics are run independently on both switches. Online diagnostics can be invoked on the basis of virtual slots, which provide accessibility to modules on both switches. Use the show switch virtual slot-map command to display the virtual to physical slot mapping.

Accessing the Remote Console on VSS
The remote console (the console on the VSS standby switch) can be accessed from the local (VSS active) switch: Switch# remote login module 11

When you copy a file to a bootflash on the active switch, it is not automatically copied to the standby bootflash. This means that when you perform an ISSU upgrade or downgrade, both switches must receive the files individually. This behavior matches that on a dual-supervisor standalone system.
Similarly, the removal of a file on one switch does not cause the removal of the same file on the other switch.

Dual-Active Detection

A dual-active scenario can have adverse effects on network stability, because both switches use the same IP addresses, SSH keys, and STP bridge ID. 
The VSS supports one method, enhanced PAgP, for detecting a dual-active scenario. PAgP uses messaging over the MEC links to communicate between the two switches through a neighbor switch. Enhanced PAgP requires a neighbor switch that supports the PAgP enhancements.

Dual-Active Detection Using Enhanced PAgP
In virtual switch mode, PAgP messages include a new type length value (TLV) which contains the ID of the VSS active switch. Only switches in virtual switch mode send the new TLV.
When the VSS standby switch detects VSL failure, it initiates SSO and becomes VSS active. Subsequent PAgP messages to the connected switch from the newly VSS active switch contain the new VSS active ID. The connected switch sends PAgP messages with the new VSS active ID to both VSS switches. If the formerly VSS active switch is still operational, it detects the dual-active scenario because the VSS active ID in the PAgP messages changes. This switch initiates recovery actions.

Recovery Actions
A VSS active switch that detects a dual-active condition shuts down (by err-disabling) all of its non-VSL interfaces to remove itself from the network, and waits in recovery mode until the VSL links have recovered. You might need to intervene directly to fix the VSL failure. When the switch in recovery mode detects that the VSL is operational again, it reloads and returns to service as the VSS standby switch. Loopback interfaces are also shut down in recovery mode. If the running configuration of the switch in recovery mode has been changed without saving, the switch does not automatically reload.

SSO Dependencies
For the VSS to operate with SSO redundancy, the VSS must meet the following conditions:
• Identical software versions (except during ISSU with compatible versions)
• VSL configuration consistency
During the startup sequence, the VSS standby switch sends virtual switch information from the
startup-config file to the VSS active switch.
The VSS active switch ensures that the following information matches correctly on both switches:
– Switch virtual domain
– Switch virtual node
– Switch priority (optional)
– VSL port channel: switch virtual link identifier
– VSL ports: channel-group number, shutdown, total number of VSL ports
• If the VSS detects a mismatch, it prints an error message on the VSS active switch console and the VSS standby switch does not boot up.
• SSO and NSF must be configured and enabled on both switches.
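The SSO/NSF requirement can be sketched as follows (a minimal sketch from standard Cisco IOS syntax; the OSPF process number 100 is only an example, so verify the commands against your release):

Switch(config)# redundancy
Switch(config-red)# mode sso
Switch(config-red)# exit
Switch(config)# router ospf 100
Switch(config-router)# nsf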

VSL Initialization
A VSS is formed when the two switches and the VSL link between them become operational. Because both switches need to be assigned their role (VSS active or VSS standby) before completing initialization, VSL is brought online before the rest of the system is initialized. The initialization
sequence is as follows:
1. The VSS initializes all cards with VSL ports, and then initializes the VSL ports.
2. The two switches communicate over the VSL to negotiate their roles (VSS active or VSS standby).
3. The VSS active switch completes the boot sequence, including the consistency check described in the “SSO Dependencies” section above.
4. If the consistency check completed successfully, the VSS standby switch comes up in SSO VSS standby mode. If the consistency check failed, the VSS standby switch comes up in RPR mode.
5. The VSS active switch synchronizes configuration and application data to the VSS standby switch. If the VSS is either forming for the first time or a mismatch exists between the VSL information sent by the standby switch and what is on the active switch, the new configuration is absorbed into the startup-config. This means that if the active switch was running before the standby switch and unsaved configurations existed, they are written to the startup-config when the standby switch sends mismatched VSL information.


System Initialization
If you boot both switches simultaneously, the switch configured as Switch 1 boots as VSS active and the switch configured as Switch 2 boots as VSS standby. If priorities are configured, the switch with the higher priority becomes VSS active.
If you boot only one switch, the VSL ports remain inactive, and the switch boots as VSS active. When you subsequently boot the other switch, the VSL links become active, and the new switch boots as the VSS standby switch.

Configuring a VSS
When you convert two standalone switches into one VSS, all non-VSL configuration settings on the VSS standby switch revert to the default configuration.

To convert two standalone switches into a VSS, you perform the following major activities:
• Save the standalone configuration files.
• Configure each switch for required VSS configurations.
• Convert to a VSS.
Step 1 Switch-1(config)# switch virtual domain 100 Configures the virtual switch domain on Switch 1.
Step 2 Switch-1(config-vs-domain)# switch 1 Configures Switch 1 as virtual switch number 1.
Step 3 Switch-1(config-vs-domain)# exit Exits config-vs-domain.

Step 1 Switch-2(config)# switch virtual domain 100 Configures the same virtual switch domain on Switch 2.
Step 2 Switch-2(config-vs-domain)# switch 2 Configures Switch 2 as virtual switch number 2.
Step 3 Switch-2(config-vs-domain)# exit Exits config-vs-domain.
The switch number is not stored in the startup or running configuration, because both switches use the same configuration file (but must not have the same switch number).


Configuring VSL Port Channel and Ports
The VSL is configured with a unique port channel on each switch. During the conversion, the VSS configures both port channels on the VSS active switch. 

Step 1 Switch-1(config)# interface port-channel 10 Configures port channel 10 on Switch 1.
Step 2 Switch-1(config-if)# switchport Converts the port channel to a Layer 2 port.
Step 3 Switch-1(config-if)# switch virtual link 1 Associates Switch 1 as owner of port channel 10.
Step 4 Switch-1(config-if)# no shutdown Activates the port channel.
Step 5 Switch-1(config-if)# exit Exits interface configuration.

Step 1 Switch-2(config)# interface port-channel 20 Configures port channel 20 on Switch 2.
Step 2 Switch-2(config-if)# switchport Converts the port channel to a Layer 2 port.
Step 3 Switch-2(config-if)# switch virtual link 2 Associates Switch 2 as owner of port channel 20.
Step 4 Switch-2(config-if)# no shutdown Activates the port channel.
Step 5 Switch-2(config-if)# exit Exits interface configuration.
For line redundancy, we recommend configuring at least two ports per switch for the VSL. For module redundancy, the two ports can be on different switching modules in each chassis.

Step 1 Switch-1(config)# interface range tengigabitethernet 3/1-2 Selects the VSL ports on Switch 1.
Step 2 Switch-1(config-if-range)# channel-group 10 mode on Adds the ports to VSL port channel 10.

Step 1 Switch-2(config)# interface range tengigabitethernet 3/1-2 Selects the VSL ports on Switch 2.
Step 2 Switch-2(config-if-range)# channel-group 20 mode on Adds the ports to VSL port channel 20.

Converting the Switch to Virtual Switch Mode
Conversion to virtual switch mode requires a restart for both switches. Prior to the restart, the VSS converts the startup configuration to use the switch/module/port convention. A backup copy of the startup configuration file is saved in bootflash.

Switch-1# switch convert mode virtual Converts Switch 1 to virtual switch mode. After you enter the command, you are prompted to confirm the action. Enter yes. The system creates a converted configuration file and saves it to bootflash.
Switch-2# switch convert mode virtual
When switches are being converted to a VSS, you should not set them to ignore the startup-config.


Displaying VSS Information
Switch# show switch virtual Displays the virtual switch domain number, and the switch number and role for each of the switches.
Switch# show switch virtual role Displays the role, switch number, and priority for each switch in the VSS.
Switch# show switch virtual link Displays the status of the VSL.
Switch# show switch virtual link port-channel Displays information about the VSL port channel.
Switch# show switch virtual link port Displays information about the VSL ports.

Configuring the Router MAC Address
On a VSS, all routing protocols are centralized on the VSS active supervisor engine. A common router MAC address is used for Layer 3 interfaces on both the active and standby switches. Additionally, to ensure nonstop forwarding, the same router MAC address is used after switchover to the standby switch, so that all Layer 3 peers see a consistent router MAC address. There are three ways to configure the router MAC address on a VSS:
• HHH.HHH.HHH—Manually set a router MAC address. Ensure that this MAC address is reserved for this usage.
• chassis—Use the MAC address range reserved for the chassis. This is the Cisco MAC address assigned to the chassis.
• use-virtual—Use the MAC address range reserved for the VSS. This is the reserved Cisco MAC address pool, which is derived from a base MAC address plus the VSS domain ID.
By default, the virtual-domain-based router MAC address is used. Any change to the router MAC address configuration requires a reboot of both VSS supervisor engines.
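As a sketch, the default use-virtual behavior corresponds to configuration like the following (assuming the mac-address subcommand under the virtual switch domain submode; verify the exact syntax against your IOS release):

Switch(config)# switch virtual domain 100
Switch(config-vs-domain)# mac-address use-virtual
Switch(config-vs-domain)# exit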


Configuring Dual-Active Detection
Configuring Enhanced PAgP Dual-Active Detection
By default, PAgP dual-active detection is enabled. However, the enhanced messages are only sent on port channels with trust mode enabled. Before changing PAgP dual-active detection configuration, ensure that all port channels with trust mode enabled are in administrative down state. To enable or disable PAgP dual-active detection, perform this task:
Switch(config)# switch virtual domain domain_id Enters virtual switch submode.
Switch(config-vs-domain)# dual-active detection pagp Enables sending of the enhanced PAgP messages.

To configure trust mode on a port channel:
Switch(config)# interface port-channel 20
Switch(config-if)# shutdown
Switch(config-if)# exit
Switch(config)# switch virtual domain 100
Switch(config-vs-domain)# dual-active detection pagp
Switch(config-vs-domain)# dual-active detection pagp trust channel-group 20
Switch(config-vs-domain)# exit
Switch(config)# interface port-channel 20
Switch(config-if)# no shutdown
Switch(config-if)# exit

Switch# show switch virtual dual-active summary Displays information about dual-active detection configuration and status.
Switch# show pagp dual-active Displays PAgP dual-active detection status.


In-Service Software Upgrade (ISSU) on a VSS
Prerequisites to Performing ISSU
The type of the pre- and post-upgrade images must match precisely. ISSU is not supported from a Universal_lite image to a Universal image, or vice versa. ISSU is also not supported from a k9 image to a non-k9 image, or vice versa.
• The VSS must be functionally in SSO mode.
• The pre- and post-upgrade Cisco IOS XE software image files must both be available in the local file systems (bootflash, SD card, or USB) of both the active and standby supervisor engines before you begin the ISSU process.
• ISSU in a VSS results in a loss of service on non-MEC links, and peers must be prepared for this. On links connected over MECs, nonstop forwarding (NSF) must be configured and working properly.
• Autoboot must be turned on, and the currently booted image must match the one specified in the BOOT environment variable.






Converting a VSS to Standalone Switch
Save the configuration file from the VSS active switch. 
When you convert the VSS active switch to standalone mode, it removes the provisioning and configuration information related to the VSL links and the peer chassis modules, saves the configuration file, and performs a reload. The switch comes up in standalone mode with only the configuration data relevant to the standalone system. We do not recommend converting a VSS to standalone mode in a live network.

Switch-1# switch convert mode stand-alone
To convert the peer switch to standalone, perform this task on the VSS standby switch
Switch-2# switch convert mode stand-alone

To configure the switch priority, perform this task:
Switch(config)# switch virtual domain 100
Switch(config-vs-domain)# switch [1 | 2] priority [priority_num]
Switch# show switch virtual role

The show switch virtual role command shows the operating and configured priority values. The new priority value takes effect only after you save the configuration and reload the VSS. You can manually make the VSS standby switch the VSS active switch using the redundancy force-switchover command.


Monday, 27 June 2016

Cisco Nexus 1000v useful show commands

Using the topology view below, refer to these useful show commands and their output.
YGNDC-1000V-ACC-01# show module
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
3    1022   Virtual Ethernet Module           NA                  ok
4    1022   Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw     
---  ------------------  ------------------------------------------------ 
1    5.2(1)SV3(1.15)     0.0                                             
2    5.2(1)SV3(1.15)     0.0                                              
3    5.2(1)SV3(1.15)     VMware ESXi 6.0.0 Releasebuild-3620759 (6.0)    
4    5.2(1)SV3(1.15)     VMware ESXi 6.0.0 Releasebuild-3620759 (6.0)    

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    10.55.26.7       NA                                    NA
2    10.55.26.7       NA                                    NA
3    10.55.26.244     c4c014c0-d421-11e5-a0f5-ab9f62f8bc24  BOTVMWESXP1041
4    10.55.26.249     2e734260-d43c-11e5-aadc-959f62f8bc24  BOTVMWESXP1043

* this terminal session



YGNDC-1000V-ACC-01# sh vtracker module-view pnic
--------------------------------------------------------------------------------
Mod  EthIf     Adapter  Mac-Address    Driver    DriverVer           FwVer          
               Description                                                     
--------------------------------------------------------------------------------
3    Eth3/2    vmnic1   0050.56d2.fcbf                                              
               Emulex Corporation Emulex OneConnect OCe15100 NIC
              
3    Eth3/3    vmnic2   0050.56d2.fd46                                              
               Emulex Corporation Emulex OneConnect OCe15100 NIC
              
3    Eth3/4    vmnic3   0050.56d2.fd47                                              
               Emulex Corporation Emulex OneConnect OCe15100 NIC
              
4    Eth4/2    vmnic1   0050.56d2.fc8d                                              
               Emulex Corporation Emulex OneConnect OCe15100 NIC
              
4    Eth4/3    vmnic2   0050.56d2.fd20                                              
               Emulex Corporation Emulex OneConnect OCe15100 NIC
              
4    Eth4/4    vmnic3   0050.56d2.fd21                                              
               Emulex Corporation Emulex OneConnect OCe15100 NIC
              
--------------------------------------------------------------------------------

 
To view which VMs are in each VLAN
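A likely command for this view (assuming the vtracker vlan view available in this release; the feature must first be enabled with "feature vtracker"):

YGNDC-1000V-ACC-01# show vtracker vlan-view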

-------------------------------------

To view info for each VM hosted on a specified VEM module
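A likely command for this (assuming the vm-view info form accepts a module filter; verify on your release):

YGNDC-1000V-ACC-01# show vtracker vm-view info module 3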

----------------------------------------------------------
To view the IP address of a VM, its Veth info, and the module it's on


In older versions, the output is like below. 


To find out which ESX host a VM (identified by its IP address) resides on, use this command to find the module, then use “show interface virtual”, “show module vem <num>”, or “show module”.


------------------------------------------------------------------------------

To view previous vMotion events
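A likely command for this (assuming the vtracker vMotion view; verify the exact filters on your release):

YGNDC-1000V-ACC-01# show vtracker vmotion-view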

To view which Veth resides on which host
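A likely command for this (assuming show interface virtual, which lists each Veth with its module and owning VM):

YGNDC-1000V-ACC-01# show interface virtual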


To view which module or Eth port is on which host
 


To view statistics for each vEthernet port

To view how port-profiles are being used and which ports are using them
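A likely command for this (assuming the standard port-profile usage view; verify on your release):

YGNDC-1000V-ACC-01# show port-profile usage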



Cisco Nexus 1000v Installation Steps and Logs


Steps to install VSM/VEM


  1. Import the OVA.
  2. You can use the same VLAN for management, control, and packet; only the management VLAN is used in Layer 3 mode.
  3. Use the manual install to go through the step-by-step config wizard once the N1kv is online.
  4. Use vrf management to test connectivity.
  5. Deploy the second VSM using the secondary option.
  6. After the second VSM reloads and comes online, the primary VSM detects it and HA is established (use terminal monitor on the primary VSM to see the events).
  7. To connect to vCenter, browse via https to the VSM management IP and download the cisco_nexus_1000v_extension.xml file.
  8. Go to vCenter --> Manage Plug-ins and install this plugin.
  9. Create VLANs for servers, VSM mgmt, etc.
  10. Configure port-profiles for the uplinks (trunks, port channels, etc.) and L3 control.
  11. Configure port-profiles for the VM VLANs (access).
  12. Create a vmkernel interface for the VEM to communicate with the VSM (or use the same mgmt interface as ESXi vmnic0).
  13. Install the VIB on the ESXi host --> SFTP the file over (check which VIB version is compatible with the ESXi version):  ~# esxcli software vib install -v /tmp/cross_cisco-vem....vib
  14. Check the VEM status on the ESXi host with vem status -v (to verify).

Connection to vCenter from N1000v can be done after step 8. Refer to the logs below.
ESX6_N1KV(config)# svs connection VS6_VC
ESX6_N1KV(config-svs-conn)# protocol vmware-vim
ESX6_N1KV(config-svs-conn)#  remote ip address 10.55.23.16 port 80
ESX6_N1KV(config-svs-conn)# vmware dvs datacenter-name YGN
ESX6_N1KV(config-svs-conn)# connect

2016 Jun 16 20:55:51 ESX6_N1KV vms[2713]: %VMS-5-CONN_CONNECT: Connection 'YGN_VC6' connected to the vCenter Server.
2016 Jun 16 20:55:52 ESX6_N1KV vms[2713]: %VMS-5-DVS_CREATE: dvswitch 'ESX6_N1KV' created on the vCenter Server.
Note: Command execution in progress..please wait
2016 Jun 16 20:55:54 ESX6_N1KV msp[2710]: %MSP-5-DOMAIN_CFG_SYNC_DONE: Domain config successfully pushed to the management server.
ESX6_N1KV(config-svs-conn)# 2016 Jun 16 20:55:54 ESX6_N1KV vshd[12445]: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by root on vsh.12445
2016 Jun 16 20:55:54 ESX6_N1KV vshd[12449]: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by root on vsh.12449
2016 Jun 16 20:55:54 ESX6_N1KV last message repeated 1 time
2016 Jun 16 20:55:54 ESX6_N1KV vshd[12445]: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by root on vsh.12445
2016 Jun 16 20:55:54 ESX6_N1KV vms[2713]: %VMS-5-DVPG_CREATE: created port-group 'Unused_Or_Quarantine_Uplink' on the vCenter Server.
2016 Jun 16 20:55:54 ESX6_N1KV vshd[12449]: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by root on vsh.12449
2016 Jun 16 20:55:54 ESX6_N1KV vshd[12445]: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by root on vsh.12445
2016 Jun 16 20:55:54 ESX6_N1KV vshd[12449]: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by root on vsh.12449
2016 Jun 16 20:55:54 ESX6_N1KV vshd[12445]: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by root on vsh.12445
2016 Jun 16 20:55:54 ESX6_N1KV vshd[12449]: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by root on vsh.12449
2016 Jun 16 20:55:54 ESX6_N1KV vshd[12445]: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by root on vsh.12445
2016 Jun 16 20:55:54 ESX6_N1KV vms[2713]: %VMS-5-DVPG_CREATE: created port-group 'Unused_Or_Quarantine_Veth' on the vCenter Server.
2016 Jun 16 20:55:56 ESX6_N1KV vms[2713]: %VMS-5-VMS_PPM_SYNC_TIME_EXP: Attempt #0 to resync Port-Profile Manager to local vCenter Server cache due to sync timer expiration
2016 Jun 16 20:55:56 ESX6_N1KV msp[2710]: %MSP-5-DOMAIN_CFG_SYNC_DONE: Domain config successfully pushed to the management server.
2016 Jun 16 20:56:01 ESX6_N1KV vms[2713]: %VMS-5-VMS_PPM_SYNC_TIME_EXP: Attempt #0 to resync Port-Profile Manager to local vCenter Server cache due to sync timer expiration
2016 Jun 16 20:56:01 ESX6_N1KV msp[2710]: %MSP-5-DOMAIN_CFG_SYNC_DONE: Domain config successfully pushed to the management server.
2016 Jun 16 20:56:06 ESX6_N1KV vms[2713]: %VMS-5-VMS_PPM_SYNC_TIME_EXP: Attempt #0 to resync Port-Profile Manager to local vCenter Server cache due to sync timer expiration
2016 Jun 16 20:56:06 ESX6_N1KV msp[2710]: %MSP-5-DOMAIN_CFG_SYNC_DONE: Domain config successfully pushed to the management server.
2016 Jun 16 20:56:11 ESX6_N1KV vms[2713]: %VMS-5-VMS_PPM_SYNC_TIME_EXP: Attempt #0 to resync Port-Profile Manager to local vCenter Server cache due to sync timer expiration
2016 Jun 16 20:56:11 ESX6_N1KV msp[2710]: %MSP-5-DOMAIN_CFG_SYNC_DONE: Domain config successfully pushed to the management server.

ESX6_N1KV(config-svs-conn)#




Logs during VSM migration to the 1000v VEM, and the ESXi mgmt interface joining the 1000v

YGNDC-1000V-ACC-01# sh module
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
2    0      Virtual Supervisor Module         Nexus1000V          active *

Mod  Sw                  Hw     
---  ------------------  ------------------------------------------------ 
2    5.2(1)SV3(1.15)     0.0                                             

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
2    10.55.23.254     NA                                    NA

* this terminal session
YGNDC-1000V-ACC-01#
2016 Jun 22 15:33:34 YGNDC-1000V-ACC-01 vem_mgr[2528]: %VEM_MGR-2-VEM_MGR_DETECTED: Host BOTVMWESXP1041 detected as module 3
2016 Jun 22 15:33:34 YGNDC-1000V-ACC-01 vns_agent[3294]: %VNS_AGENT-2-VNSA_LIC_NO_ADVANCED_LIC: VSM does not have Advanced licenses. May not be able to use VSG services. Please install Advanced licenses.
2016 Jun 22 15:33:34 YGNDC-1000V-ACC-01 vem_mgr[2528]: %VEM_MGR-2-MOD_ONLINE: Module 3 is online
2016 Jun 22 15:33:36 YGNDC-1000V-ACC-01 redun_mgr[2493]: %REDUN_MGR-2-AC_AC_DETECTED: Active-Active VSM detected.
2016 Jun 22 15:33:37 YGNDC-1000V-ACC-01 platform[2256]: %PLATFORM-2-MOD_DETECT: Module 1 detected (Serial number ) Module-Type Virtual Supervisor Module Model
2016 Jun 22 15:33:41 YGNDC-1000V-ACC-01 redun_mgr[2493]: %REDUN_MGR-2-AC_AC_ACTION: Primary VSM will be rebooted.

YGNDC-1000V-ACC-01#
YGNDC-1000V-ACC-01# sh module
2016 Jun 22 15:33:59 YGNDC-1000V-ACC-01 platform[2256]: %PLATFORM-2-MOD_REMOVE: Module 1 removed (Serial number)

Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
2    0      Virtual Supervisor Module         Nexus1000V          active *
3    1022   Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw     
---  ------------------  ------------------------------------------------ 
2    5.2(1)SV3(1.15)     0.0                                              
3    5.2(1)SV3(1.15)     VMware ESXi 6.0.0 Releasebuild-3620759 (6.0)    

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
2    10.55.23.254     NA                                    NA
3    10.55.26.244     c4c014c0-d421-11e5-a0f5-ab9f62f8bc24  BOTVMWESXP1041

* this terminal session
YGNDC-1000V-ACC-01#
2016 Jun 22 15:34:06 YGNDC-1000V-ACC-01 platform[2256]: %PLATFORM-2-MOD_DETECT: Module 1 detected (Serial number ) Module-Type Virtual Supervisor Module Model

YGNDC-1000V-ACC-01#
YGNDC-1000V-ACC-01# sh module
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module                             powered-up
2    0      Virtual Supervisor Module         Nexus1000V          active *
3    1022   Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw     
---  ------------------  ------------------------------------------------ 
2    5.2(1)SV3(1.15)     0.0                                              
3    5.2(1)SV3(1.15)     VMware ESXi 6.0.0 Releasebuild-3620759 (6.0)    

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
2    10.55.23.254     NA                                    NA
3    10.55.26.244     c4c014c0-d421-11e5-a0f5-ab9f62f8bc24  botvmwesxp1041.oml.com

* this terminal session
YGNDC-1000V-ACC-01#

  

Thursday, 16 June 2016

Cisco Nexus 1100 Series Virtual Services Appliances




The management VLAN is used for management of the Cisco Nexus 1100 Series VSA.
The control VLAN is a Layer 2 interface used for communication between the redundant Cisco Nexus 1100 Series appliances. This interface handles low-level control packets such as heartbeats as well as any configuration data that needs to be exchanged between the Cisco Nexus 1100 Series appliances.

Network Connectivity Options

The interfaces on the Cisco Nexus 1100 Series can be connected to the network in five ways.

Network Connection Option 1 uses the two LOM interfaces to carry all traffic types: management, control, packet, and data. In this configuration, each of the two uplinks connects to a different upstream switch to provide redundancy. Option 1 is preferred when customers are not using a Cisco NAM and therefore have little or no data traffic traversing the uplinks. It is commonly used when the Cisco Nexus 1100 Series hosts only VSMs; it is the simplest configuration and carries the lowest risk of misconfiguration. The LOM ports are active-standby only and cannot be part of a PortChannel or virtual PortChannel (vPC).




Option 2 uses the two LOM interfaces to carry management, control, and packet traffic, while the four interfaces on the PCI card carry only data traffic. Option 2 is well suited for customers who are deploying a Cisco NAM in the Cisco Nexus 1100 Series, because it provides the most dedicated bandwidth for Cisco NAM traffic. The 4-port network interface card (NIC) adapter supports PortChannel and vPC capabilities, which can provide added bandwidth and redundancy.








Option 3 uses the two LOM interfaces for management traffic only, and the four interfaces on the PCI card to carry control, packet, and data traffic. It is well suited for customers who are deploying a Cisco NAM or VSG in the Cisco Nexus 1100 Series but require a separate management network. Option 3 is recommended for most deployments because it provides the flexibility to handle both currently supported and future VSBs.




Option 4 uses the two LOM interfaces for management traffic, two of the four PCI interfaces for control and packet traffic, and the other two PCI interfaces for data traffic. It is well suited for customers who want to use the Cisco NAM but require separate data and control networks.




Option 5 lets users deploy their VSBs on the Cisco Nexus 1100 Series more flexibly. With this option, you do not need to specify which ports carry which types of traffic (management, control, or data). One of the main advantages of this option is that you can define a VSB to use a particular interface. It is an excellent choice for users who want more control over the design of their VSBs for optimized flexibility and redundancy.



Deployment Considerations

Typically, the Cisco Nexus 1100 Series is best deployed at the aggregation layer of the network so that it can host a larger set of servers. Connecting the Cisco Nexus 1100 Series through Cisco Nexus 2000 Series fabric extenders provides a large pool of servers behind a single point of management. Because the Cisco Nexus 1100 Series uses 1 Gigabit Ethernet interfaces to connect to the network, a fabric extender provides an optimal connectivity solution.

In the uplink type 1 topology, all traffic (management, control, and VSB data traffic) is switched out at an effective bandwidth of 1 Gbps. Both LOM ports on the Cisco Nexus 1100 Series, Ethernet interfaces 1 and 2, are teamed to form an active-standby pair. This uplink type is simple and does not require any PortChannel or LACP configuration on the upstream switches.


In the uplink type 2 topology below, management and control traffic is switched out of the first two Ethernet interfaces. Ethernet interfaces 1 and 2 are forwarding as an active-standby pair, just as in uplink type 1. However, VSB data traffic is carried out of Ethernet interfaces 3 through 6.


Uplink type 3 is physically identical to uplink type 2 because it uses all the Ethernet interfaces available. The difference is in the way that the traffic is carried across these interfaces. In this topology, management traffic is switched out of the first two Ethernet interfaces. Ethernet interfaces 1 and 2 are forwarding as an active-standby pair, just as in the other uplink types. However, both control and VSB data traffic is carried out of Ethernet interfaces 3 through 6.



In uplink type 4, the VSMs residing on the Cisco Nexus 1100 Series VSA and the hosts that are managed by the VSMs can be connected over Layer 2 or Layer 3, as explained in the previous sections.


Uplink type 5 is for the flexible network option and can be a combination of any of the other uplink types.




Cisco Nexus 1000V Series Backup and Restore Procedures


Here are the high-level steps for the VSM installed on the Cisco Nexus 1100 Series VSA.

Backup Procedure
1. Shut down the secondary or standby VSM VSB.
2. Export that VSB to remote storage.
3. Back up the running configuration of the Cisco Nexus 1000V Series VSA to a remote server or site.
a. Copy the running configuration often, or whenever the network configuration has changed.
4. Power back on the secondary or standby VSM.
Restore Procedure
1. Completely remove the Cisco Nexus 1000V Series VSB if it is still on the Cisco Nexus 1100 Series VSA.
2. Create a new Cisco Nexus 1000V Series VSB.
a. Import a backup Cisco Nexus 1000V Series instance to the new VSB.
b. Verify that the Cisco Nexus 1000V Series instance is operational.
3. Restore the backup network configuration as the running configuration.
a. Verify that the port profiles and configurations are correct.
b. Verify that the virtual machines are connected to the appropriate port profiles.
c. Create a backup configuration of the running configuration after the environment has stabilized.
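On the Cisco Nexus 1100 Series, the backup steps above can be sketched with NX-OS commands similar to the following. This is only a sketch: the VSB name (VSM-1), the export file name, and the FTP URLs are placeholders, not values from this setup.

```
switch# configure terminal
switch(config)# virtual-service-blade VSM-1
switch(config-vsb-config)# shutdown secondary      ! step 1: shut down the standby VSM VSB
switch(config-vsb-config)# export secondary        ! step 2: create the export file under bootflash:export-import/
switch(config-vsb-config)# exit
switch(config)# exit
switch# copy bootflash:export-import/1/Vdisk1.img.tar.00 ftp://10.0.0.1/backup/  ! copy the export to remote storage
switch# copy running-config ftp://10.0.0.1/backup/vsa-config.txt                 ! step 3: back up the running config
switch# configure terminal
switch(config)# virtual-service-blade VSM-1
switch(config-vsb-config)# no shutdown secondary   ! step 4: power the standby VSM back on
```

The restore side uses the same virtual-service-blade submode with the import command against the previously exported file.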

Cisco Nexus 1000v virtual switch

A software-based switch that spans multiple hosts running VMware ESX or ESXi.



VMware Networking
Each virtual machine has one or more virtual NICs (vNICs). These vNICs are connected to a virtual switch to provide network connectivity to the virtual machine. The guest OS sees the vNICs as physical NICs.

Hosts running VMware ESX have a virtual management port called vswif, sometimes called the service console interface. It is used for communication with VMware vCenter Server, to manage the device directly with the VMware vSphere client, or to log in to the host’s command-line interface (CLI) through Secure Shell (SSH). VMware ESXi hosts do not use vswif interfaces because the hosts lack a service console OS.

Each host also has one or more virtual ports called virtual machine kernel NICs (vmknics), used by VMware ESX for Internet Small Computer Systems Interface (iSCSI) and Network File System (NFS) access, as well as by VMware vMotion. On a VMware ESXi system, a vmknic is also used for communication with VMware vCenter Server.

The pNICs on a VMware ESX host, called virtual machine NICs (VMNIC), are used as uplinks to the physical network infrastructure.

Each vNIC is connected to a standard vSwitch or vDS through a port group. Each port group belongs to a specific vSwitch or vDS and specifies a VLAN or set of VLANs that a VMNIC, vswif, or vmknic will use.
Nexus 1000v Components

VSM – Virtual Supervisor Module
VEM – Virtual Ethernet Module  //linecards
Features – vTracker (shown on the Cisco device), VCPlugin (shows network info in VMware)

The VSM is a virtual appliance that can be installed independent of the VEM: that is, the VSM can run on a VMware ESX server that does not have the VEM installed. The VEM is installed on each VMware ESX server to provide packet-forwarding capability.

VMware’s management hierarchy is divided into two main elements: a data center and a cluster. A data center contains all components of a VMware deployment, including hosts, virtual machines, and network switches, including the Cisco Nexus 1000V Series.

Within a VMware data center, the user can create one or more clusters. A cluster is a group of hosts and virtual machines that forms a pool of CPU and memory resources. A virtual machine in a cluster can be run on or migrated to any host in the cluster. Hosts and virtual machines do not need to be part of a cluster; they can exist on their own within the data center.

Port profiles create a virtual boundary between server and network administrators. Port profiles are network policies that are defined by the network administrator and exported to VMware vCenter Server. Within VMware vCenter Server, port profiles appear as VMware port groups in the same locations as traditional VMware port groups would. Port profiles are also used to configure the pNICs in a server; these, known as uplink port profiles, are assigned to the pNICs as part of the installation of the VEM on a VMware ESX host.

The management VLAN is used for system login and configuration, and corresponds to the mgmt0 interface. The management interface appears as the mgmt0 port on a Cisco switch. Although the management interface is not used to exchange data between the VSM and VEM, it is used to establish and maintain the connection between the VSM and VMware vCenter Server. When the software virtual switch (SVS) domain mode for control communication between the VSM and VEM is set to Layer 3 mode, the management interface can also be used for control traffic.

The control interface handles low-level control packets such as heartbeats, as well as any configuration and programming data that needs to be exchanged between the VSM and VEM. Because of the nature of the traffic it carries, the control interface is of the utmost importance in the Cisco Nexus 1000V Series solution.

The packet interface is used to carry packets that need to be processed by the VSM; it is mainly used for Cisco Discovery Protocol and Internet Group Management Protocol (IGMP) control packets.
You can use the same VLAN for control, packet, and management, but if needed for flexibility, you can use separate VLANs.
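As a sketch, the control and packet VLANs are defined under the SVS domain on the VSM; the domain ID and VLAN numbers below are placeholder values:

```
switch# configure terminal
switch(config)# svs-domain
switch(config-svs-domain)# domain id 100       ! must be unique per VSM/VEM pair
switch(config-svs-domain)# control vlan 260    ! VSM-to-VEM control traffic
switch(config-svs-domain)# packet vlan 261     ! CDP/IGMP packets punted to the VSM
switch(config-svs-domain)# svs mode L2
```

In Layer 2 mode, these VLANs must also be carried on the uplinks between the VSM and every VEM.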


switch# show switch edition //shows whether the switch runs the Essential or Advanced edition. The Essential edition license is free for up to 256 sockets. Licenses per VEM are sticky.


VEMs run in headless mode: if the VSM goes down, data continues to flow, but you cannot make changes. Each vSphere host gets a VEM.

VSM

Deploy VSMs in pairs, just as with regular switches. One HA pair (or a single VSM if standalone) can manage 64 VEMs, so it is like a virtual 66-slot chassis.

VSM heartbeats between Pri and Sec VSM are layer 2.

VSM control modes – L2 mode or L3 mode (more flexible; uses udp-4785) //VSM controlling VEM
VSM mgmt0 is default interface for L3
VSM connects to vmware vCenter using SSL connection.
Primary and secondary VSMs must be in the same L2 domain. Latency should be 5-10 ms max. It usually makes sense to use the Control0 interface for L3. Back up your config early and often.
VSM and VEM versions can differ by one release, but try to match them.
#show module uptime
#show module vem
#show module vem counters
#show license usage
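As a rough sketch, switching the SVS domain to Layer 3 control over mgmt0 looks like the following (assuming an existing SVS domain; in L3 mode the control and packet VLANs are removed):

```
switch# configure terminal
switch(config)# svs-domain
switch(config-svs-domain)# no control vlan
switch(config-svs-domain)# no packet vlan
switch(config-svs-domain)# svs mode L3 interface mgmt0
```

Use svs mode L3 interface control0 instead if the dedicated Control0 interface carries VSM-to-VEM traffic.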
  





Backing up the VSM – the running-config alone isn’t enough to restore. Clone the VSM and save it. To clone, shut down the standby VSM first and then clone it, because the VSM doesn’t have VMware Tools installed.


 

VEM

The VEM is installed on each VMware ESX host as a kernel component.
L3 control requires a VMkernel NIC (a virtual NIC on the ESX host, created by the VMware admin). Using the ESXi management VMkernel NIC is recommended because VMware has no VRF concept; this requires migrating the management interface to the VEM, but it does not require static routes on the ESXi hosts.
VEM installation is done through VMware Update Manager (VUM), or by installing the VIB file directly on ESXi.


Port Types

Nexus 1000V Series supports multiple switch-port types for internal and external connectivity: virtual Ethernet (vEth), Ethernet (Eth), and PortChannel (Po). The most common port type in a Cisco Nexus 1000V Series environment is the vEth interface which represents the switch port connected to a virtual machine’s vNIC or connectivity to specialized interface types such as the vswif or vmknic interface. A vEth interface’s name does not indicate the module with which the port is associated; a vEth interface is notated like this: vEthY. This unique notation is designed to work transparently with VMware VMotion, keeping the interface name the same regardless of the location of the associated virtual machine.

The second characteristic that makes a vEth interface unique is its transient nature. A given vEth interface appears or disappears based on the status of the virtual machine connected to it. The mapping of a virtual machine’s vNIC to a vEth interface is static. When a new virtual machine is created, a vEth interface is also created for each of the virtual machine’s vNICs. The vEth interfaces will persist as long as the virtual machine exists.

The switch contains two interface types related to the VMNICs (pNICs) in a VMware ESX host. An Ethernet, or Eth, interface is the Cisco Nexus 1000V Series’ representation of a VMNIC. A PortChannel is an aggregation of multiple Eth interfaces on the same VEM.

How Nexus1000v Operates 

Unlike physical switches with a centralized forwarding engine, each VEM maintains a separate forwarding/MAC address table. There is no synchronization between forwarding tables on different VEMs. In addition, there is no concept of forwarding from a port on one VEM to a port on another VEM. Packets destined for a device not local to a VEM are forwarded to the external network, which in turn may forward the packets to a different VEM. Static entries are automatically generated for virtual machines running on the VEM; these entries do not time out. For devices not running on the VEM, the VEM can learn a MAC address dynamically, through the pNICs in the server.

Cisco Nexus 1000V Series does not run Spanning Tree Protocol. Because the Cisco Nexus 1000V Series does not participate in Spanning Tree Protocol, it does not respond to BPDU packets, nor does it generate them. BPDU packets that are received by Cisco Nexus 1000V Series Switches are dropped. N1kv prevents loops between the VEMs and the first-hop access switches without the use of Spanning Tree Protocol. PortFast is configured per interface and should be enabled on interfaces connected to a VEM, along with BPDU guard and BPDU filtering. Filtering BPDUs at the physical switch port will enhance VEM performance by avoiding unnecessary processing at the VEM uplink interfaces.
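On an upstream Cisco IOS access switch, the per-port recommendations above might be sketched as follows (the interface number and trunk settings are placeholders):

```
interface GigabitEthernet1/0/1
 description Uplink to N1kv VEM
 switchport mode trunk
 spanning-tree portfast trunk      ! port goes straight to forwarding on a trunk
 spanning-tree bpdufilter enable   ! do not send BPDUs toward the VEM, drop received ones
 spanning-tree bpduguard enable    ! errdisable the port if a BPDU somehow arrives
```

Because the VEM drops BPDUs anyway, filtering them at the physical port avoids wasted processing on the VEM uplinks.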

VEM to VSM Communication 

The VSM can communicate with the VEM over the Layer 2 or Layer 3 network. Layer 3 is the recommended mode for control and packet communication between the VSM and the VEM. The VEM uses vSwitch data provided by VMware vCenter Server to configure the control interfaces for VSM-to-VEM control and packet communication. The VEM then applies the correct uplink port profile to the control interfaces to establish communication with the VSM. In layer 3 mode, you can specify whether to use the VSM management interface or use the dedicated Control0 interface for VSM-to-VEM control traffic. The port profile configured for Layer 3 (VSM-to-VEM) communication on the VSM needs to have capability l3control enabled. This process requires configuration of a VMware vmkernel interface on each VMware ESX host.

Layer 3 connectivity between VSM/VEM uses vmkernel ports. The VEM will use any vmkernel port in a port-profile tagged with “capability l3control” parameter.
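A minimal sketch of a vEthernet port profile for Layer 3 control, to be attached to the vmkernel port on each host (the profile name and VLAN ID are placeholders):

```
switch(config)# port-profile type vethernet L3-CONTROL
switch(config-port-prof)# capability l3control      ! marks this profile for VSM-to-VEM L3 traffic
switch(config-port-prof)# vmware port-group
switch(config-port-prof)# switchport mode access
switch(config-port-prof)# switchport access vlan 10
switch(config-port-prof)# system vlan 10            ! must be a system VLAN so control comes up before the VSM
switch(config-port-prof)# no shutdown
switch(config-port-prof)# state enabled
```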



Port Profiles
Live policy changes – changes to active port profiles are applied to each switch port that is using the profile.
Virtual Ethernet profiles  - A vEth profile can be applied on virtual machines and on VMware virtual interfaces such as the VMware management, vMotion, or vmkernel iSCSI interface.

Ethernet or Uplink Profiles - To define a port profile for use on pNICs, the network administrator applies the ethernet keyword to the profile. When this option is used, the port profile is available only to the server administrator to apply to pNICs in a VMware ESX server.

System VLANs – are defined by an optional parameter that can be added in a port profile. Interfaces that use the system port profile and that are members of one of the system VLANs defined are automatically enabled and forwarded when VMware ESX starts, even if the VEM does not have communication with the VSM. This behavior enables the use of critical host functions if the VMware ESX host starts and cannot communicate with the VSM.

VLAN consistency - Multiple VEMs require a physical Ethernet switch for inter-VEM connectivity. Each VEM needs consistent connectivity to all VLANs that are defined for the Cisco Nexus 1000V Series. Thus, any VLAN that is defined for the Cisco Nexus 1000V Series must also be defined for all upstream switches connected to each VEM. Each VLAN should be trunked to each VEM using IEEE 802.1q trunking.
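A sketch of an uplink (Ethernet-type) port profile that trunks the Cisco Nexus 1000V Series VLANs to the VEM; the profile name and VLAN IDs are placeholders:

```
switch(config)# port-profile type ethernet SYSTEM-UPLINK
switch(config-port-prof)# vmware port-group
switch(config-port-prof)# switchport mode trunk
switch(config-port-prof)# switchport trunk allowed vlan 10,260-261
switch(config-port-prof)# system vlan 10,260-261   ! forwarded even before the VEM reaches the VSM
switch(config-port-prof)# no shutdown
switch(config-port-prof)# state enabled
```

The same allowed VLANs must be trunked on the upstream switch ports facing each VEM.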

#show module
#show interface brief

In the running config, there are entries such as “vem 3” and “vem 4” – a new line is added every time a new host is added. The “interface port-channel” configuration was dynamically created and added, as was the “interface VethernetX” configuration.





System VLANs enable interface connectivity before an interface is programmed. System VLANs address a chicken-and-egg problem: the VEM (line card) needs to be programmed, but it needs a working network (backplane) for this to happen. The VEM will load system port profiles and pass traffic even if the VSM is not up.

Port Channels
Three modes: LACP, vPC-MAC pinning, and vPC host mode (CDP/manual).
For LACP, on the VEM, use vemcmd show lacp
On the VSM and Upstream switch, use show port-channel summary, show lacp counters/neighbor

vPC-MAC pinning works with any upstream switch. It allows pinning of vEths (VMs) to specific links. Use it when the upstream switch does not support LACP or when using blade servers; most blade servers cannot run LACP down to their servers.
When you assign a VMware physical NIC to the port profile, that NIC has a number such as vmnic0 or vmnic1, and that vmnic number becomes the sub-group ID on the Nexus 1000V. This gives us the ability to pin traffic from a VM based on the sub-group ID. If all ESX hosts are identical this works well, but if they are not, the vmnic numbers could differ. To solve this, Cisco provides relative mode, which assigns the sub-group ID based on the order in which you add the NICs to the Nexus 1000V.
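A sketch of a MAC pinning setup (profile names, VLANs, and the pinning ID are placeholders; the relative keyword enables relative sub-group numbering):

```
switch(config)# port-profile type ethernet UPLINK-MACPIN
switch(config-port-prof)# vmware port-group
switch(config-port-prof)# switchport mode trunk
switch(config-port-prof)# switchport trunk allowed vlan 10-20
switch(config-port-prof)# channel-group auto mode on mac-pinning relative  ! sub-group IDs by add order
switch(config-port-prof)# no shutdown
switch(config-port-prof)# state enabled

switch(config)# port-profile type vethernet VM-DATA
switch(config-port-prof)# switchport access vlan 10
switch(config-port-prof)# pinning id 0     ! pin these vEths to sub-group 0
switch(config-port-prof)# no shutdown
switch(config-port-prof)# state enabled
```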