Tuesday, February 3, 2015

ACI Example - Fast deployment of infrastructure


Many people focus on ACI from the point of view of network virtualization. ACI indeed delivers a powerful network virtualization solution, through an integrated VXLAN overlay that can be used programmatically. These virtualization capabilities are built on a policy model that maps well to application definitions. This is where most people stop.

But the ACI policy model extends beyond providing application connectivity. The APIC also offers many functions that are useful to a network/fabric administrator: topology management, switch onboarding, policy-based configuration and so on.

This blog post is a simple example of what it takes to bring up a new rack. I will add a new ToR to an existing fabric and leverage the APIC policy model to provide connectivity for an ESXi host.

We start with a working fabric with only one leaf, as below:


Now I want to add another physical switch to that fabric. Imagine I have just racked a new ToR (leaf) and connected its uplink ports eth1/49 and eth1/50 at 40GE to the spines. I can already see that the APIC has discovered the switch:
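
By the way, the same information is available over the APIC REST API, not just in the GUI. Below is a minimal Python sketch that lists the switches the fabric has discovered. The APIC address and credentials are placeholders for my lab, and I am assuming the dhcpClient class is what backs the Fabric Membership view; verify the class name against your APIC version.

```python
# Minimal sketch: list the switches the APIC has discovered, via the REST API.
# The APIC URL and credentials below are placeholders for this lab.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
session = requests.Session()
session.verify = False              # lab only; use real certificates in production

# Authenticate; the APIC returns a session cookie that requests keeps for us
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# Discovered (not necessarily registered) nodes are assumed to be exposed
# by the dhcpClient class, which backs the Fabric Membership view
resp = session.get(f"{APIC}/api/node/class/dhcpClient.json")
resp.raise_for_status()
for obj in resp.json()["imdata"]:
    print(obj["dhcpClient"]["attributes"])
```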

The switch still has no management address, because we have not yet registered it with the fabric. But the network admin does not need to console into the switch, assign it a management address, or use any configuration management tool to provision it. All we have to do now is register the switch:

We give it an ID and a name, and that is it: the switch is now added to the Pod1 we are working with. Everything necessary for it to work with the fabric is taken care of. At the switch console we can see that the name has already changed as well:
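
The registration step can also be scripted. The sketch below posts a fabricNodeIdentP object, which to my understanding is the node identity policy the GUI creates when you register a switch; the serial number, node ID and name are placeholders from my lab.

```python
# Minimal sketch: register a discovered leaf by posting a node identity policy.
# Serial number, node ID and switch name are placeholders for this lab.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
session = requests.Session()
session.verify = False              # lab only

# Authenticate as in the earlier sketch
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

serial = "SAL12345678"              # serial of the discovered switch
payload = {
    "fabricNodeIdentP": {
        "attributes": {
            "dn": f"uni/controller/nodeidentpol/nodep-{serial}",
            "serial": serial,
            "nodeId": "102",
            "name": "leaf-102",
        }
    }
}
resp = session.post(f"{APIC}/api/mo/uni/controller/nodeidentpol.json", json=payload)
resp.raise_for_status()
```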


Because we already had a leaf working with connected servers, we had previously created a switch profile for it, with associated interface selector profiles. All we need to do is add the new switch to the switch selector for the right interface configuration to become available. In our case, this means setting a number of ports to GE with CDP enabled, and other ports to 10GE with CDP and LLDP, among other settings.
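
In REST terms, adding the new leaf to the existing switch selector amounts to appending a node block to the leaf selector inside the switch profile. A rough sketch follows; the profile and selector names are hypothetical, and the object layout (infraNodeP / infraLeafS / infraNodeBlk) should be checked against your APIC, for instance with the API Inspector.

```python
# Minimal sketch: add the new leaf (node 102) to an existing switch profile
# by appending a node block to its leaf selector. Names are hypothetical.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False              # lab only

# Authenticate as in the earlier sketches
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# Switch profile "ESXi-Leaves" with leaf selector "ESXi-Leaves-Sel" is assumed
# to exist already; we only add a block covering node 102
payload = {
    "infraNodeBlk": {
        "attributes": {
            "dn": ("uni/infra/nprof-ESXi-Leaves/"
                   "leaves-ESXi-Leaves-Sel-typ-range/nodeblk-leaf102"),
            "name": "leaf102",
            "from_": "102",
            "to_": "102",
        }
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=payload)
resp.raise_for_status()
```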

That is it. In this lab I have created two interface policies, both with single links: one for GE-connected ESXi hosts and another for 10GE-connected ESXi hosts (CDP, MTU and other settings are part of the profile). The same model can of course be applied when using vPCs. The right ports have already been configured for GE, with the proper VLANs from a pre-defined pool. As soon as we plug in the ESXi hosts and apply their configuration, they already show the leaf over CDP:
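
The same adjacency can be checked from the fabric side too. A small sketch, assuming the cdpAdjEp class holds the CDP adjacencies learned by the leaves:

```python
# Minimal sketch: list CDP adjacencies seen by the fabric, to confirm the
# ESXi hosts are visible. Assumes the cdpAdjEp class holds CDP neighbours.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False              # lab only

# Authenticate as in the earlier sketches
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

resp = session.get(f"{APIC}/api/node/class/cdpAdjEp.json")
resp.raise_for_status()
for obj in resp.json()["imdata"]:
    attrs = obj["cdpAdjEp"]["attributes"]
    # the dn encodes the local node and port; the other fields describe the neighbour
    print(attrs["dn"], attrs.get("devId"), attrs.get("portId"))
```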


The ESXi hosts were already configured as well, since they had previously been connected to a standalone Nexus 9K lab. ESXi infrastructure traffic is mapped to application profiles, with each kind of traffic in its own EPG (vMotion, VSAN, iSCSI, NFS ...). As soon as we add the EPG bindings, we see for instance all the iSCSI hosts (statically mapped here):
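
Those static bindings can be pushed over the API as well. The sketch below maps a leaf port into an iSCSI EPG with a static path binding; the tenant, application profile, EPG name, port and VLAN are placeholders from my lab, not a prescription, and the DN format is the one I have seen the API Inspector produce.

```python
# Minimal sketch: statically bind a leaf port into the iSCSI EPG.
# Tenant, application profile, EPG, port and VLAN are placeholders.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False              # lab only

# Authenticate as in the earlier sketches
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

payload = {
    "fvRsPathAtt": {
        "attributes": {
            "dn": ("uni/tn-vSphere-Infra/ap-ESXi-Infra/epg-iSCSI/"
                   "rspathAtt-[topology/pod-1/paths-102/pathep-[eth1/5]]"),
            "tDn": "topology/pod-1/paths-102/pathep-[eth1/5]",
            "encap": "vlan-3001",
        }
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=payload)
resp.raise_for_status()
```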

And we can then also benefit from immediate visibility into each of the vSphere traffic types, without adding any other tools (which can of course also be used!):
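
Even this visibility is exposed through the same API. As a rough illustration, the sketch below pulls the endpoints the fabric has learned under each EPG of the (hypothetical) ESXi infrastructure application profile:

```python
# Minimal sketch: show the endpoints learned in each EPG of the ESXi
# infrastructure application profile. Tenant and profile names are placeholders.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False              # lab only

# Authenticate as in the earlier sketches
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# fvCEp objects are the client endpoints the fabric has learned
url = (f"{APIC}/api/mo/uni/tn-vSphere-Infra/ap-ESXi-Infra.json"
       "?query-target=subtree&target-subtree-class=fvCEp")
resp = session.get(url)
resp.raise_for_status()
for obj in resp.json()["imdata"]:
    attrs = obj["fvCEp"]["attributes"]
    # the dn tells us which EPG the endpoint was learned in
    print(attrs["dn"], attrs.get("mac"), attrs.get("ip"), attrs.get("encap"))
```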

This is but a really basic example of how a fully programmable fabric is useful, beyond providing network virtualization …

Thanks to my good friend @alonso_Inigo for helping me ramp up on so many things! :)