The above diagram was taken from a Cisco Live presentation I attended on cloud networking.
The challenges of converting an NF (network function) into a VNF (virtualized network function), running on virtualized software and hardware, or even in a cloud, are the same for NSX and the growing list of similar products.
Some of the ever-growing list from Cisco can be found here.
Still borrowing from the presentation, take a look at the path a network packet (ok, or frame) generally follows from the switch to a virtual environment.
The network data comes into the NIC and gets assigned to memory...
Ok, then an interrupt request is raised and handled by the hypervisor, I will say...
Then a DMA copy into the kernel packet buffer...
Ah, then finally it gets to user space and, let's say, the virtual machine...
Ah, but then if the RX buffer space is used up, we hit dropped packets and run out of buffers.
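To make that last failure mode concrete, here is a minimal toy sketch of a fixed-size RX ring in C. This is not any real driver's code; the names (`rx_ring`, `nic_receive`, `consumer_poll`) are invented for illustration. The point is simply that when the consumer can't drain the ring as fast as frames arrive, new frames have nowhere to land and get dropped:

```c
#include <stdio.h>
#include <string.h>

/* Toy model of a NIC RX ring: a fixed number of descriptor slots.
 * If the consumer (hypervisor/kernel) can't drain the ring fast
 * enough, new frames have nowhere to land and are dropped -- the
 * "out of buffers" case described above. */

#define RX_RING_SIZE 8

struct rx_ring {
    int head;        /* next slot the NIC writes */
    int tail;        /* next slot the consumer reads */
    int count;       /* slots currently in use */
    long dropped;    /* frames dropped because the ring was full */
};

/* NIC side: place a frame into the ring, or drop it if no slot is free. */
static void nic_receive(struct rx_ring *r, int frame_id)
{
    if (r->count == RX_RING_SIZE) {
        r->dropped++;                 /* RX buffer exhausted: drop */
        return;
    }
    r->head = (r->head + 1) % RX_RING_SIZE;
    r->count++;
    printf("frame %d queued (ring now %d/%d)\n",
           frame_id, r->count, RX_RING_SIZE);
}

/* Consumer side: the interrupt path / hypervisor draining one frame. */
static void consumer_poll(struct rx_ring *r)
{
    if (r->count == 0)
        return;
    r->tail = (r->tail + 1) % RX_RING_SIZE;
    r->count--;
}

int main(void)
{
    struct rx_ring ring;
    memset(&ring, 0, sizeof(ring));

    /* Burst of 20 frames, but the consumer only drains every 2nd frame:
     * the ring fills up and the later frames are dropped. */
    for (int i = 0; i < 20; i++) {
        nic_receive(&ring, i);
        if (i % 2 == 0)
            consumer_poll(&ring);
    }
    printf("dropped %ld frames\n", ring.dropped);
    return 0;
}
```

Run it and you'll see the ring fill during the burst and the tail-end frames get dropped, which is exactly the behavior the presentation was warning about.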
It's amazing to me that, with all these steps, a virtual network appliance or NFV of some flavor is able to process data with the speed and low latency that they do, à la NSX-T today!
The proposed solution, at the time of the presentation, was to use the Vector Packet Processing (VPP) modules developed by fd.io.
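The core trick behind VPP is processing packets in batches (vectors) rather than one at a time, so each processing node's code stays hot in the instruction cache across the whole batch instead of being evicted between packets. Here is a minimal sketch of that idea in plain C; the node functions (`node_ip_ttl`, `node_count_bytes`) are invented for illustration and are not VPP's actual API:

```c
#include <stdio.h>

/* Sketch of the vector-processing idea: instead of pushing one packet
 * at a time through the full processing pipeline (scalar processing),
 * collect a vector of packets and run each processing node over the
 * whole batch. Each node's code stays in the instruction cache for
 * the duration of the batch. */

#define VECTOR_SIZE 256

struct packet { int len; int ttl; };

/* One "graph node": decrement TTL for the entire vector in one pass. */
static void node_ip_ttl(struct packet *vec, int n)
{
    for (int i = 0; i < n; i++)
        vec[i].ttl--;
}

/* Another node: byte accounting across the vector. */
static long node_count_bytes(const struct packet *vec, int n)
{
    long total = 0;
    for (int i = 0; i < n; i++)
        total += vec[i].len;
    return total;
}

int main(void)
{
    struct packet vec[VECTOR_SIZE];
    for (int i = 0; i < VECTOR_SIZE; i++) {
        vec[i].len = 64 + (i % 1400);
        vec[i].ttl = 64;
    }

    /* The vector flows node-to-node as a batch, not packet-by-packet. */
    node_ip_ttl(vec, VECTOR_SIZE);
    long bytes = node_count_bytes(vec, VECTOR_SIZE);

    printf("processed %d packets, %ld bytes, ttl now %d\n",
           VECTOR_SIZE, bytes, vec[0].ttl);
    return 0;
}
```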
If you're looking to tune an NSX-T implementation for higher performance, check out the Mellanox adapter info here.
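On the tuning front, the RX ring size is usually the first knob to inspect when you're chasing the out-of-buffers drops described above. As a sketch, here's how you can read the current and maximum ring sizes on Linux through the standard ethtool ioctl (the same numbers `ethtool -g` prints); "eth0" is a placeholder interface name, so substitute your own:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/* Query the NIC's RX/TX ring sizes via the ethtool ioctl.
 * "eth0" below is a placeholder; use your actual interface name. */
int main(void)
{
    struct ethtool_ringparam ring;
    struct ifreq ifr;
    int fd;

    memset(&ring, 0, sizeof(ring));
    memset(&ifr, 0, sizeof(ifr));
    ring.cmd = ETHTOOL_GRINGPARAM;                 /* "get ring params" */
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* placeholder device */
    ifr.ifr_data = (void *)&ring;

    fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
        perror("SIOCETHTOOL");
        close(fd);
        return 1;
    }
    printf("rx ring: %u in use, %u max\n", ring.rx_pending, ring.rx_max_pending);
    printf("tx ring: %u in use, %u max\n", ring.tx_pending, ring.tx_max_pending);
    close(fd);
    return 0;
}
```

If the in-use value is well below the hardware maximum and you're seeing RX drops, growing the ring (for example with `ethtool -G`) is a common first step before deeper tuning.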
You can find the Cisco Live presentation I have borrowed from for this post at:
Cloud Networking BRKCLD2013