At the recent RSA Conference 2016, it was apparent to me that networking plays a significant role in modern infrastructure security, providing visibility, context and enforcement for today's application workloads.
If the network is the nervous system of a data center, then better network visibility gives us better insight into its health and operation, much as electroencephalography (EEG) does with brain signals in medicine.
Of course we know that networks are critical for traditional uses: client/server communications, server/storage data transfer, and long-distance communications for branch or internet access. In these traditional uses, the computational workloads or storage tended to reside on one side of the connection, and the network was used to access the results. In more modern workloads, the computation and data are distributed. Consider micro-services, which split a program into services spanning many servers, in some cases combining services that run in the public cloud with those in a data center.
Rise of Network Glue
The network starts to take on a different role, acting as the glue for programs or workloads. It comes to resemble the role that dynamic memory has long served for sharing data within a single computer. In traditional programs, memory serves as a buffer to transfer input (or parameters in a stack frame) between devices, procedures or processes. We have long had programming techniques for network access, such as sockets or remote procedure calls, but programs were still structured around a central workload. Now, as programs get decomposed, the network increasingly ties the elements together with a common foundation. Sun Microsystems' ads once declared that "the network is the computer," and that is becoming ever more true.
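To make the "glue" idea concrete, here is a minimal sketch: a function that in a traditional program would receive its parameters through memory instead receives them over a socket. The word-count service and its JSON-over-TCP framing are hypothetical, chosen purely for illustration.

```python
# A minimal sketch of the "network as glue" idea: a call that once would have
# been in-process (parameters passed through memory) is instead made over a
# socket. The service and payload format here are hypothetical.
import json
import socket
import socketserver
import threading

class WordCountHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # The request arrives over the network instead of a stack frame.
        request = json.loads(self.rfile.readline())
        result = {"words": len(request["text"].split())}
        self.wfile.write((json.dumps(result) + "\n").encode())

def call_word_count(host, port, text):
    # The socket plays the role memory once played: it carries the
    # parameters to the code that does the work, wherever it runs.
    with socket.create_connection((host, port)) as conn:
        conn.sendall((json.dumps({"text": text}) + "\n").encode())
        return json.loads(conn.makefile().readline())

if __name__ == "__main__":
    server = socketserver.TCPServer(("127.0.0.1", 0), WordCountHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    host, port = server.server_address
    print(call_word_count(host, port, "the network is the computer"))
    server.shutdown()
```

The point is not the plumbing itself, but that the interesting behavior of the program now happens on the wire, which is exactly where visibility and control can be applied.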
By examining and controlling the network, we can place better controls on program behavior and gain visibility into programs' actions. Of course, we still need visibility within each computer, but we also need a better understanding of behavior across the network.
We have had network analysis tools for a long time: packet capture products, network packet brokers and related analysis tools. These need to evolve to better capture the traffic flowing within a data center (or between workloads), and to apply analysis and correlation so we can understand the behavior of the data center holistically, throughout the application stack.
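The raw material these tools work from is simple: packets aggregated into conversations. The sketch below, which assumes the scapy package and sufficient privileges to sniff on a host, shows the kind of per-conversation summary that real products build with taps, packet brokers, or switch telemetry rather than host-based capture.

```python
# A minimal sketch of flow-level visibility: capture packets and aggregate
# them into per-conversation byte counters. Assumes scapy is installed and
# that the script runs with capture privileges.
from collections import Counter
from scapy.all import sniff, IP

def summarize(packet_count=200):
    flows = Counter()

    def record(pkt):
        if pkt.haslayer(IP):
            # Key each packet by source/destination pair; a fuller tool would
            # use the full 5-tuple and correlate it with application context.
            flows[(pkt[IP].src, pkt[IP].dst)] += len(pkt)

    sniff(count=packet_count, prn=record, store=False)
    for (src, dst), nbytes in flows.most_common(10):
        print(f"{src} -> {dst}: {nbytes} bytes")

if __name__ == "__main__":
    summarize()
```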
A wide variety of companies offer technologies that supply the lower-level support, augmented with higher-level functions for analysis. Network testing companies like Ixia now offer solutions for visibility. Software- and hardware-based approaches from firms like APCON, Big Switch Networks (with its Big Monitoring Fabric), cPacket, Gigamon, Netscout, and Pluribus give insight. Higher-level functions provided by products from traditional networking vendors, like Cisco's Lancope or Juniper's Sky Threat protection, help complete the view. Open source projects, such as OpenStack's Tap as a Service (TaaS) extension to the Neutron network project (with contributions from programmers at companies like Ericsson), are also providing a community-based alternative. This is just a smattering of the solutions available out there.
New Network Standards
With such a variety of choices, standards are going to be important. We already have flow-record formats such as sFlow and IPFIX (based on Cisco's NetFlow), and they have been successful. Now we are looking at higher-level metadata derived from these low-level foundations, so that a variety of solutions can extract more meaning from the raw data.
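Here is a minimal sketch of what that derivation can look like. The record fields mirror common NetFlow/IPFIX elements (the 5-tuple plus counters); the classification rules are illustrative assumptions of mine, not part of either standard.

```python
# A minimal sketch of deriving higher-level metadata from low-level flow
# records. Fields mirror common NetFlow/IPFIX elements; the classification
# rules below are illustrative assumptions, not part of either standard.
import ipaddress
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_addr: str
    dst_addr: str
    src_port: int
    dst_port: int
    protocol: int      # e.g. 6 = TCP, 17 = UDP
    packets: int
    octets: int

def derive_metadata(flow: FlowRecord) -> dict:
    src_private = ipaddress.ip_address(flow.src_addr).is_private
    dst_private = ipaddress.ip_address(flow.dst_addr).is_private
    return {
        # East-west traffic stays inside the data center; north-south crosses its edge.
        "direction": "east-west" if src_private and dst_private else "north-south",
        # A crude service label from the well-known port; real tools use richer context.
        "service": {80: "http", 443: "https", 53: "dns"}.get(flow.dst_port, "other"),
        "avg_packet_size": flow.octets / max(flow.packets, 1),
    }

if __name__ == "__main__":
    flow = FlowRecord("10.0.1.5", "10.0.2.9", 51514, 443, 6, packets=42, octets=61344)
    print(derive_metadata(flow))
```

Standards matter here precisely because many different tools want to consume the same records and layer their own meaning on top.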
So call it what you want: the network is the new RAM, or the network is the new program glue. Either way, it will provide the visibility needed for security, for telemetry, and for insights when troubleshooting and analyzing programs.
For more information, I recorded a video of my thoughts on the conversations at the event. Check it out here.
Dan Conde is an analyst covering enterprise networking technologies for ESG. Read more ESG blogs here.