Monday, April 30, 2012

[Guest post]: Packet Signing

By Marc Tremblay*, contributed independently

Traffic classification inevitably brings to mind Deep Packet Inspection (DPI). DPI is typically used for two purposes: identifying the traffic flows crossing a network, and performing targeted extraction of protocol, metadata and very specific payload data from network sessions. DPI typically works on a duplicate copy of the network traffic obtained from in-band or out-of-band network elements.
  
The process is extremely resource-intensive for both the DPI and the switching equipment. DPI vendors have found it very challenging to keep up with the growing volumes of network traffic while simultaneously improving their packet inspection mechanisms and keeping those mechanisms up to date with the latest application signatures and patterns.
  
The explosion of consumer operating systems and applications, combined with the ever-increasing share of encrypted traffic, constantly challenges the viability of even the best DPI offerings. This is confirmed by the growing use of machine-learning-based approaches by DPI vendors in an attempt to keep up. A DPI solution that does not correctly categorize the vast majority of traffic is doomed.
   
The multiplication of DPI deployments in support of multiple applications is another growing source of pain for network operators. Beyond the costs and network complexity involved, the most mature organizations are beginning to run into misalignments between the DPI capabilities of various vendors, differences in volume computation, and divergent application classification taxonomies.
  
As organizations progress toward higher levels of maturity with DPI-based applications and information, they need to be able to rely on uniform vocabularies and consistent measurements across applications. For instance, a wireless operator that relies on DPI-based analytics to design tiered service plans will expect harmonized application taxonomies, data volume measurements and parameter extraction capabilities on both its PCC and analytics infrastructure.
  
Application-level authentication has been around since at least the very early 1960s. Authorization to use an application is almost always controlled, and once a user is cleared to go, every aspect of the use of the application is contextualized to the identity of that specific user: preferences, roles, transaction logging, configuration, etc.
  
Traffic transported by the network does not yet benefit from the same controls and context, at least not in a coherent and harmonized fashion. Layers 2 to 6 together form an unevenly policed and chaotic area where anyone and their code are anonymously admitted in a grand free-for-all party. At best, post-admission techniques such as port-based identification and DPI are used in an attempt to identify and police the traffic.
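To make the contrast concrete, the crudest of those post-admission techniques, port-based identification, boils down to something like the following minimal sketch. The port-to-label mapping is illustrative, not exhaustive:

```python
# Port-based identification: guess the application from the destination
# port alone. This breaks down as soon as traffic moves off well-known
# ports or is tunnelled over 443, which is precisely the DPI pain point.
WELL_KNOWN_PORTS = {
    25: "smtp",
    53: "dns",
    80: "http",
    443: "https",
}

def classify_by_port(dst_port: int) -> str:
    """Return a coarse application label for a destination port."""
    return WELL_KNOWN_PORTS.get(dst_port, "unknown")

print(classify_by_port(443))  # "https", but says nothing about which application
```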
  
Why is admission to the network not better controlled? Why is generated traffic not identified at the source, as is the case for application usage? Why is the identity of the traffic, the application and the end-user not preserved during the session lifetime? What would such an architecture look like?
  
The need to attribute an identity to traffic appears underserved by academics, commercial ventures and standards bodies. The concept is to attach a unique “license plate” at the network layers, resulting in a unique Packet Signature that would enable identification of the application that generated a given traffic flow. That traffic “license plate” could eventually be associated with an in-band or out-of-band “driver license”, enabling effective authentication of the end-user or organization behind a given flow.
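As a thought experiment, such a “license plate” could be modelled as a small record bound to a flow. The field names below are invented for illustration; nothing here reflects a published standard:

```python
# A minimal sketch, under invented field names, of what a per-flow
# "license plate" could carry. None of this is a published standard.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class PacketSignature:
    flow_id: Tuple[str, int, str, int, str]  # (src_ip, src_port, dst_ip, dst_port, proto)
    app_id: str           # registry identifier of the signed application
    taxonomy_code: str    # standardized classification, e.g. "media/streaming/video"
    signature: str        # hex signature binding the application identity to the flow
    user_token: Optional[str] = None  # optional "driver license" for end-user authentication
```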
  
The Packet Signature could be transported alongside individual packets or in a dedicated independent protocol. No elaboration on the implementation of such a protocol is provided here, but the latter appears more realistic.
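A minimal sketch of that out-of-band option: the signing client announces one signed flow descriptor to an SPI element over a control channel, while the data packets themselves travel unmodified. The message format, host name and port below are entirely hypothetical:

```python
import json
import socket

def announce_flow(flow_5tuple, app_id, taxonomy_code, signature_hex,
                  spi_host="spi.example.net", spi_port=9999):
    """Send one signed flow descriptor to an SPI element (hypothetical format)."""
    record = {
        "flow": list(flow_5tuple),  # (src_ip, src_port, dst_ip, dst_port, proto)
        "app": app_id,              # registry identifier of the application
        "taxonomy": taxonomy_code,  # standardized classification label
        "sig": signature_hex,       # signature binding the app identity to the flow
    }
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(json.dumps(record).encode(), (spi_host, spi_port))
```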
  
This thinking inevitably brings us to the concept of a unified registry for signed applications: a DNS-like repository hierarchy where the characteristics of individual software packages and releases could be registered and made available. At a minimum, the registry would contain classification information for applications; a standardized taxonomy would be required for that. As DPI applications multiply within the same or different organizations, this would enable a common vocabulary across DPI sources and DPI-based applications. Other characteristics could follow: author, vendor, coordinates for parameter extraction, security compliance, etc.
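A toy model of such a registry, with DNS-style hierarchical application names mapping to classification records; the entries and taxonomy labels are invented:

```python
# Toy applications registry: hierarchical names map to classification
# records, much as DNS maps names to addresses. All entries are invented.
REGISTRY = {
    "com.example.videoplayer": {
        "vendor": "Example Corp",
        "taxonomy": "media/streaming/video",
        "code_signing_cert": "<certificate fingerprint published at registration>",
    },
    "org.example.mailclient": {
        "vendor": "Example Org",
        "taxonomy": "communication/email",
        "code_signing_cert": "<certificate fingerprint published at registration>",
    },
}

def classify(app_id: str) -> str:
    """Map a signed application's identifier to its standardized taxonomy label."""
    entry = REGISTRY.get(app_id)
    return entry["taxonomy"] if entry else "unclassified"

print(classify("com.example.videoplayer"))  # media/streaming/video
```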

If we try to envision an architecture for Packet Signing, the following elements come to mind:

1. Packet Signing Client – Application-aware gatekeeper software, ideally deployed prior to network admission and close to the ever-changing applications, responsible for attaching unique Packet Signatures to the generated traffic; an end-to-end sketch follows this list. Network Access Control (NAC) and Trusted Network Connect (TNC) appear to be natural fits for this functionality.
  
2. Packet Signing Protocol – A protocol to transport the Signatures or associate them with the packets. It is anticipated that the application-specific signatures generated by Code Signing infrastructures would be an ideal basis for signing the packets.
  
3. Signed Packets Inspection – In this architectural vision, DPI is replaced or complemented by Signed Packets Inspection (SPI). Seen from the outside, for client applications, SPI has roughly the same functionality as DPI. The difference essentially resides in the reliance on Packet Signing for application classification (illustrated in the sketch after this list) and the use of a standardized taxonomy. As the standard evolves, SPI would implement standardized and secure methods to extract specific information from an application’s data flow when needed.
  
4. Applications Registry – A standardized taxonomy is the most basic element required to perform effective and uniform classification of applications. As the architecture evolves, other formats will need to be standardized, such as specifications for extracting particular parameters from a given application’s traffic streams. Hence the need for a distributed registry hierarchy to hold that information. The registration process already implemented for Code Signing appears to be an ideal entry point into the Applications Registry.
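Pulling the four elements together, here is an end-to-end sketch of the sign-then-verify loop referenced in items 1 and 3. An HMAC with a shared per-application key stands in for a real asymmetric, code-signing-derived signature; an actual deployment would verify against the certificate published in the Applications Registry rather than share a key:

```python
import hashlib
import hmac

# Hypothetical per-application key. A real system would use the key pair
# from the application's Code Signing certificate instead of a shared secret.
APP_KEY = b"issued-at-code-signing-time"

def sign_flow(app_id: str, flow_5tuple: tuple) -> str:
    """Packet Signing Client: bind the application's identity to a flow."""
    message = (app_id + "|" + repr(flow_5tuple)).encode()
    return hmac.new(APP_KEY, message, hashlib.sha256).hexdigest()

def verify_flow(app_id: str, flow_5tuple: tuple, signature: str) -> bool:
    """Signed Packets Inspection: trust the signature instead of parsing payload."""
    return hmac.compare_digest(signature, sign_flow(app_id, flow_5tuple))

flow = ("10.0.0.5", 51000, "203.0.113.7", 443, "tcp")
sig = sign_flow("com.example.videoplayer", flow)
assert verify_flow("com.example.videoplayer", flow, sig)
```

Classification then reduces to a registry lookup on the application identifier, as in the earlier registry sketch, instead of payload inspection.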

[Figure: Packet Signing Architecture]

Many aspects are left unaddressed here. Among the most important are Packet Authentication, the ability to associate packets with a specific end-user or organization, and the need for a fully Trusted Environment as a secure framework to ensure the integrity of the whole Packet Signing system. The Trusted Computing Architecture (TCA) appears to be a credible basis for the latter.

In addition to the motivations expressed earlier, there are other advantages to Packet Signing; notable ones are the complete elimination of rogue applications from networks and new authentication paradigms.

The following table attempts to summarize the technical pros and cons of DPI versus Packet Signing:


| | DPI | SPI |
| --- | --- | --- |
| Dependence on standardization and community alignment | Very low | Very high: requires alignment between Code Signing, networking equipment manufacturers, SPI manufacturers, taxonomy, etc. |
| Keeping up with application fingerprints | Scalability is an issue: it requires reverse-engineering millions of applications, and the task never ends. The process cannot remain economically viable for too many vendors. | None required |
| Detection and classification precision | Varies between DPI releases and vendors; highly dependent on reverse engineering of applications and on interpretation of standards. Getting lower every day as encryption becomes more widely used and mobile applications boom. | Close to 100% |
| Measurement precision | Varies between DPI releases and vendors; highly dependent on reverse engineering of applications and on interpretation of standards. | Close to 100% |
| Classification and measurement consistency across releases and vendors | Not perfect between releases from the same vendor; none between vendors. | Close to 100% |
| CPU and memory resource requirements | Server-side: very high. Client-side: not applicable. | Server-side: low. Client-side: expected to be low, but this aspect requires further research. |

Traction will be very unequal from one application to another. The following table is a very subjective attempt to gauge the level of desirability by application, an embryonic effort to foresee where Packet Signing will emerge first:

| Typical DPI-based Applications | Traction for Packet Signing | Sample Players and Organizations |
| --- | --- | --- |
| Content-Based Forwarding | High | OpenFlow, Cisco, IEEE |
| Copyright Enforcement | High | HADOPI, RIAA, MPAA |
| Lawful Interception | High | Siemens, Verint Systems, VeriSign, CALEA, RIPA |
| Tiered Services & Charging | High | Ericsson, Huawei, Amdocs, PCC |
| Analytics | Medium | Amethon, Guavus, IBM, NetCracker, Neuralitic, SDM |
| Security | Medium | Arbor Networks, Radware, Sonicwall |
| Service Assurance & Application Performance Management | Medium | BMC, CA, Compuware |
| Targeted Advertising | Medium | Kindsight, Openet |
| Network Management | Low | Tektronix, EXFO, Polystar |

It is going to take a long time before we get there. Packet Signing will most likely follow the same rocky path as Network Access Control (NAC), the Trusted Computing Architecture (TCA) and Code Signing. Initial experiments are happening now, on campus, alongside Content-Based Forwarding, IPv6 and OpenFlow.
   
Questions and objections related to Net Neutrality and vulnerability exposures, such as man-in-the-middle attacks, will need to be addressed as we go forward. And, without a doubt, Packet Signing is a controversial proposition.
  
If Packet Signing is ever deployed at large scale, the first real-life deployments are anticipated to happen in defence organizations, with the enterprise market to follow and Communications Service Providers last. Finally, there is no need to worry about conventional DPI, as it is expected to cohabit synergistically with Signed Packets Inspection for a long, long time.

  
_____________

*As CTO and Vice-President of Product Development at Neuralitic Systems, Marc Tremblay headed the development of the company’s big-data, DPI-based mobile analytics platform as well as its intellectual property strategy. Prior to that, he held executive positions at Cilys, the pioneering wireless data optimization start-up acquired by Openwave Systems, and at Sipro Lab Telecom/VoiceAge Corporation, as well as engineering management and product management positions at Openwave Systems.

Mr. Tremblay has contributed to multiple pending patents related to DPI, PCRF analytics, classification of web content, classification of encrypted traffic and converged analytics. Marc is based in Montréal.
