We are actively working on security protocols and methods for authentication and access control. To evaluate security properties, we also apply formal methods such as model checking. We work with new protocols and with modifications of existing protocols that use hardware security chips such as the Trusted Platform Module (TPM) or smart cards.
Privacy is another issue we are working on. This includes privacy in protocol design as well as anonymity on the Internet. Network security in general is related to many of our other research topics. Intrusion detection is strongly related to monitoring. As Peer-to-Peer systems are increasingly used to improve classic client/server systems, securing Peer-to-Peer and other self-organising systems is one of our focus areas.
Protection against DoS Attacks
Today, (Distributed) Denial of Service attacks are a major threat to the Internet. In the past years there have been attacks against the Internet infrastructure (e.g. DNS root servers), against various services and companies, and even against private individuals using specific services (e.g. Xbox Live). We are researching different ways to mitigate this threat.
Some networks are especially vulnerable to DoS attacks, for example if a core service that other services depend on has only limited capacity available. An attack against such a service also affects the dependent services. We are working on methods to check network and service topologies for such weaknesses.
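The core idea of such a topology check can be illustrated with a small sketch. The dependency model and the service names below are hypothetical; a real analysis would additionally weigh capacities and redundancy.

```python
# Sketch: finding the services that transitively fail when one
# core service is taken down by a DoS attack. The dependency graph
# and service names are made up for illustration.

from collections import defaultdict

def affected_services(dependencies, failed):
    """Return all services that (transitively) depend on `failed`.

    `dependencies` maps a service to the set of services it depends on.
    """
    # Invert the graph: who depends on whom.
    dependents = defaultdict(set)
    for svc, deps in dependencies.items():
        for d in deps:
            dependents[d].add(svc)
    # Breadth-first search over the dependents of the failed service.
    affected, frontier = set(), [failed]
    while frontier:
        cur = frontier.pop()
        for svc in dependents[cur]:
            if svc not in affected:
                affected.add(svc)
                frontier.append(svc)
    return affected

deps = {
    "web":  {"dns", "db"},
    "mail": {"dns"},
    "db":   set(),
    "dns":  set(),
}
# An attack on DNS takes both dependent services with it.
print(sorted(affected_services(deps, "dns")))   # → ['mail', 'web']
```

A service whose failure set is large relative to its capacity is exactly the kind of weakness such a check would flag.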
Defending against an ongoing attack is easier if the defender can flexibly re-configure the network topology. We are working on virtualization techniques that allow changing the network on the fly to limit the consequences of an attack.
Another research topic is the defense of HTTP servers by redirecting traffic between the client and multiple proxies. This way, an attacker has to spend more resources to cause load on the server.
Honeypots, Malware Analysis and Intrusion Detection
In order to protect networks against Distributed Denial of Service attacks, it is crucial to understand the mechanisms used to conduct them.
Our research activities therefore deal with the investigation of malware and botnets. We employ different kinds of honeypots to collect worms and other kinds of malware. All collected malware is automatically analyzed in sandbox environments in order to gain knowledge about its functionality and the botnets that are built with it. The results of this analysis enable us to enhance our traffic analysis and intrusion detection methods.
Network Access Control and Applications of Trusted Computing Technology
We work on authentication and authorization in various areas of networking. Peer-to-Peer networks and other self-organising systems, Web Services, and sensor networks are some examples. Especially in the context of (partially) self-organising systems, we investigate solutions that go beyond classic X.509 PKI or shared key infrastructures.
To this end, we develop cryptographic protocols, especially for authentication, and conduct security analyses. One way to do this is to apply model checking methods. We also adapt as-yet unprotected applications and services so that standardized state-of-the-art security solutions (TLS, IPsec, WS-Security, XACML, ...) can be used with them.
We also work on security solutions that use Trusted Platform Module (TPM) technology. One use case for the TPM is the secure storage of keys: users cannot interfere and copy keys to insecure locations, and the same is true for attackers who might want to get hold of a key in order to attack the network and its services. We also investigate Remote Attestation with the help of the TPM. Remote Attestation allows a computer to prove to another party that only a certain set of applications and a certain version of an operating system (OS) is running on it. The primary usage is to prevent worms, trojan horses, or users of the system from compromising its security by installing attack software. This is especially useful in business settings, where even privileged users could be attackers that need to be stopped.
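The mechanism underlying Remote Attestation is the TPM's hash-chain ("extend") operation on its Platform Configuration Registers (PCRs): a PCR cannot be set directly, only extended with a new measurement, so the final value fixes the whole sequence of measured components. A minimal sketch (the measured component names are placeholders; a real TPM additionally signs the PCR value with an attestation key):

```python
# Sketch of the PCR extend operation used by TPM-based attestation:
#   PCR_new = SHA-1(PCR_old || measurement)
# SHA-1 matches the TPM 1.2 generation; components are placeholders.

import hashlib

def extend(pcr, measurement):
    return hashlib.sha1(pcr + measurement).digest()

def expected_pcr(measurements):
    pcr = b"\x00" * 20          # PCRs start out all-zero at boot
    for m in measurements:
        pcr = extend(pcr, m)
    return pcr

boot_chain = [b"bios", b"bootloader", b"kernel"]
reported = expected_pcr(boot_chain)

# A verifier recomputes the value from a whitelisted chain and compares:
# any deviating component yields a completely different final value.
assert reported == expected_pcr([b"bios", b"bootloader", b"kernel"])
assert reported != expected_pcr([b"bios", b"rootkit", b"kernel"])
```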
Staff Members: Lothar Braun, Ali Fessi, Marc Fouquet, Ralph Holz, Holger Kinkelin, Heiko Niedermayer
Running Projects: SpoVNet, ResumeNet, AutHoNe
Traffic Measurement and Analysis
An important prerequisite for many network operation tasks today is the availability of traffic measurement functions that provide information about the current traffic characteristics with low latency. The resulting measurement data can then be analyzed and interpreted in order to classify the traffic into application classes, to detect malicious activities (e.g., worm outbreaks or botnet traffic), or to detect network malfunctions. Furthermore, communication patterns observed in a network allow inferring dependencies between different services, which is useful for identifying the most critical components and end systems in a network.
Our research work focuses on the development and evaluation of novel passive traffic measurement functions, in particular for real-time packet-level and flow-level measurements, as well as the analysis of packet and flow data for traffic classification and the detection of attacks and anomalies. Furthermore, we contribute to standardization bodies, especially to the IETF.
Packet and Flow-based Traffic Measurement
Packet-based traffic measurements deal with the capturing of traffic traces which contain packet header information and optionally parts of the payload. Typical systems performing packet-based traffic measurements are network analyzers and network-based intrusion detection systems, which analyze the captured packets directly. However, it is also possible to capture the traffic at routers and network monitors which export the resulting measurement data to a remote analysis system. A recent IETF standard for the export of packet reports to a remote collector is the PSAMP protocol specified in RFC5476.
Packet-based traffic measurements in high-speed networks require a lot of computational and memory resources. A less demanding alternative is flow-based traffic measurement, which gathers statistics about flows of packets sharing a set of common properties called flow keys. A typical set of flow keys consists of the IP quintuple of transport protocol, source IP address, destination IP address, source port, and destination port. The IETF standard for exporting flow records is the IPFIX protocol specified in RFC5101.
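The aggregation step can be sketched in a few lines. The packet records below are simplified stand-ins for captured headers; a real probe such as VERMONT additionally handles flow expiry and export.

```python
# Sketch: aggregating packets into flow records keyed by the
# IPFIX quintuple (protocol, src/dst IP, src/dst port).
# Packet dictionaries are simplified stand-ins for parsed headers.

from collections import defaultdict

FLOW_KEYS = ("proto", "src_ip", "dst_ip", "src_port", "dst_port")

def aggregate(packets):
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = tuple(pkt[k] for k in FLOW_KEYS)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["length"]
    return dict(flows)

packets = [
    {"proto": 6, "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 4321, "dst_port": 80, "length": 60},
    {"proto": 6, "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 4321, "dst_port": 80, "length": 1500},
]
flows = aggregate(packets)
# Both packets share the quintuple, so they collapse into one
# flow record with 2 packets and 1560 bytes.
```

Keeping only one counter pair per flow key instead of per packet is what makes flow-based measurement feasible at high line speeds.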
Our group is working on advanced monitoring and export functions for PSAMP and IPFIX compliant devices. For evaluation and practical deployment, we implement these advanced functions as software solutions, mainly in C and C++. Most of this implementation work takes place in the scope of the HISTORY project, which is a joint project with the University of Erlangen, aiming at the development of open-source software tools for high-speed network monitoring and analysis. The main software tool developed in this context is VERMONT, which is a modular monitoring probe supporting IPFIX and PSAMP export and collection.
Members of our group have been actively contributing to the standardization of IPFIX and PSAMP. In particular, we are working on a data model for configuring monitoring devices. Further standardization initiatives concern the secure and efficient transport of monitoring data using encryption and compression methods.
Attack and Anomaly Detection
The detection of harmful traffic caused by attacks, worms, or botnets remains an interesting research topic. Although abundant research work has been conducted in this area, the emergence of new security threats (e.g., flux and fast-flux botnets) and the ever-changing characteristics of benign network utilization (e.g., mobile Web 2.0 applications) require a continuous research effort.
One of our research activities in this area deals with the investigation of worm and botnet traffic. With the resulting knowledge, we develop innovative monitoring and detection functions which enable the detection of such malicious traffic with limited computational and memory resources. Furthermore, we work on methods for detecting traffic anomalies in flow data. Since many anomalies are the result of harmless traffic variations, the principal objective is to find appropriate traffic metrics and detection methods which are primarily sensitive to incidents of potential relevance for the network administrator.
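One example of such a traffic metric, used as an illustration here rather than as our specific method, is the entropy of a flow-feature distribution: a scanning worm that contacts many different destinations produces a much flatter destination-address distribution than normal traffic toward a few popular servers.

```python
# Sketch: Shannon entropy of the destination-address distribution
# in a measurement interval as an anomaly metric. Addresses and
# traffic mixes are made up for illustration.

import math
from collections import Counter

def entropy(values):
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# Normal traffic: most flows go to a few popular servers.
normal = ["10.0.0.2"] * 8 + ["10.0.0.3"] * 2
# Scanning traffic: one distinct target per flow.
scan = ["10.0.%d.%d" % (i, i) for i in range(10)]

# The scan's flat distribution yields a clearly higher entropy,
# which a detector can compare against a baseline threshold.
assert entropy(scan) > entropy(normal)
```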
Network operators are interested in identifying the traffic of different applications in order to monitor and control the utilization of the available network resources. Since the traffic of many new applications cannot be identified by specific port numbers, deep packet inspection (DPI) is the current technology of choice. However, DPI is very costly as it requires a lot of computational resources as well as up-to-date signatures of all relevant applications. Furthermore, DPI is limited to unencrypted traffic.
In order to overcome the limitations and drawbacks of port- and content-based traffic classification, the development of statistical classification methods has become an important area of research. As part of the LUPUS project, our goal is to find new traffic properties and metrics which can be derived from passive traffic measurements and which allow us to better distinguish between different protocols and applications. We concentrate on statistical methods which are easy to implement and to deploy in real networks.
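The principle behind such statistical classification can be sketched with a deliberately simple method, nearest-centroid classification on two flow statistics. The features (mean packet size, mean inter-arrival time), the class labels, and the training values are made up for illustration and do not represent the LUPUS metrics.

```python
# Sketch: nearest-centroid classification of flows based on
# simple statistics derived from passive measurements.
# Features and training data are hypothetical.

def centroid(samples):
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n
                 for i in range(len(samples[0])))

def classify(flow, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(flow, centroids[label]))

# (mean packet size in bytes, mean inter-arrival time in seconds)
training = {
    "bulk":        [(1400.0, 0.001), (1350.0, 0.002)],  # e.g. file transfer
    "interactive": [(80.0, 0.5), (120.0, 0.3)],         # e.g. SSH, chat
}
centroids = {label: centroid(flows) for label, flows in training.items()}

assert classify((1420.0, 0.001), centroids) == "bulk"
assert classify((100.0, 0.4), centroids) == "interactive"
```

Such a classifier works on encrypted traffic as well, since it never inspects payload, which is exactly the advantage over DPI noted above.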
Staff Members: Lothar Braun, Gerhard Münz
Completed Projects: FP6 DIADEM Firewall
The Internet was designed about 40 years ago, and initially was intended as a means of communication only for a relatively small group of people in academic and research contexts. As we all know, the Internet has meanwhile experienced an enormous growth; the number of hosts and thus the number of users has grown by several orders of magnitude. At the same time, some assumptions that drove the Internet's original design are no longer true today: An increasing number of end devices is mobile and thus frequently changes its location in the topology. Not only some users, but end hosts and even entire networks are mobile.
Running Projects: AutHoNe, ResumeNet, EADS Cabine Communication
Peer-to-Peer and Overlay Networks
Overlay networks restructure a network according to the application's needs: applications organize and manage their own networks. Peer-to-Peer overlays make it possible to utilize resources at the edges of the network -- resources from service providers as well as from home users. The decentralized nature of the Peer-to-Peer paradigm enables new ideas, but also leads to additional problems with respect to security and service quality. Our research covers improving resilience with Peer-to-Peer methods, security for overlay networks in general, spontaneous networks, and the optimization of overlay networks using cross-layer information and measurements.
Peer-to-Peer networks provide a diversity of nodes and links that is unknown in the classic client/server Internet. This is beneficial for all services that profit from diversity. In the ResumeNet project we work on improving the resilience of networked services in future networks. The use of Peer-to-Peer methods is our first choice.
We adapted and studied the use of the Kademlia/KAD DHT for service lookup. Even when many nodes fail, a lookup can still succeed. A future DNS service could also be made more resilient with this kind of service resilience. Network resilience is based on the idea of using additional routes besides traditional IP routing. In case of failures or triangle inequality violations, one may use overlay routes to improve performance or to recover from failures.
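The resilience of Kademlia lookups rests on its XOR distance metric: every node can rank all known peers by distance to a target and continue a lookup with the next-closest peers when some fail. A minimal sketch (node IDs are small integers here; KAD uses 128-bit identifiers):

```python
# Sketch: Kademlia's XOR metric and closest-node selection,
# the basis of resilient DHT lookups. IDs shortened for illustration.

def xor_distance(a, b):
    return a ^ b

def closest_nodes(target, nodes, k=3):
    """Return the k known nodes closest to `target` in XOR distance."""
    return sorted(nodes, key=lambda n: xor_distance(n, target))[:k]

nodes = [0b0001, 0b0010, 0b0111, 0b1000, 0b1110]

# A lookup for target 0b0011 would query these nodes first; if one
# of them fails, the next-closest candidates take its place.
print(closest_nodes(0b0011, nodes))   # → [2, 1, 7]
```

Because each lookup step can fall back to alternative candidates at similar distance, the failure of individual nodes rarely breaks the lookup as a whole.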
Security and Privacy
Authentication and authorization in Peer-to-Peer systems are usually delegated to a server. We developed new means to overcome this limitation while still providing reasonable security. The idea is to use the social structures of the humans behind the peers to form clusters of nodes that operate as one clique (or domain). The more scalable level of the cliques is used to build trust between the "servers" of different cliques. As trust establishment needs to deal with as-yet untrusted, potentially insecure cases, we propose to include a risk assessment in the authentication and authorization process. Applications can then decide whether to interact in order to build trust or to skip the communication.
We also study attacks on and defenses for Peer-to-Peer systems, in particular the Sybil and Eclipse attacks. The increasing combination of social networks and Peer-to-Peer systems is not only used for security, but is also studied as a means to preserve the privacy of users.
Spontaneous networks are formed on demand to provide a certain functionality for some time. Together with other partners, we developed an architecture for such networks in the SpoVNet project. We expect that future services will utilize service-specific networks in a Future Internet. Given enough diversity, spontaneous interactions of heterogeneous systems will be a building block of future networks.
Cross-Layer Measurement and Optimization
CLIO and UNISONO are our tools for collecting and measuring cross-layer information. UNISONO is a generic tool that operates within the system. CLIO adapts spontaneous overlays from the SpoVNet project to UNISONO. In SpoVNet, we use this to optimize multicast and video services.
Combining Server and P2P Infrastructures
The P2P paradigm has advantages and disadvantages, and so does the client/server paradigm. The idea is that we can benefit from the advantages of both if we combine server and P2P systems properly. The CoSIP project improves the resilience of VoIP signalling by using a server for performance and a P2P network for resilience when the server is unreachable. In other work, we study the interaction of Cloud Computing and Peer-to-Peer systems. This may allow normal home users to benefit from the advent of Cloud Computing and lead to new kinds of applications.
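The server-first / P2P-fallback idea can be sketched as follows. The `server` and `dht` callables below are hypothetical stand-ins for a SIP server and a DHT overlay, not the actual CoSIP interfaces.

```python
# Sketch: combining a server (fast path) with a P2P overlay
# (resilient path), as in the CoSIP idea. All names are stand-ins.

def resolve(name, server, dht):
    """Prefer the fast server path; fall back to the P2P overlay."""
    try:
        return server(name)          # fast path: central server
    except ConnectionError:
        return dht(name)             # resilient path: DHT lookup

server_db = {"alice": "10.0.0.5"}

def working_server(name):
    return server_db[name]

def failed_server(name):
    raise ConnectionError("server unreachable")

def dht_lookup(name):
    # The same mapping, replicated across the P2P overlay.
    return {"alice": "10.0.0.5"}[name]

# Normal operation uses the server; a server outage is masked by the DHT.
assert resolve("alice", working_server, dht_lookup) == "10.0.0.5"
assert resolve("alice", failed_server, dht_lookup) == "10.0.0.5"
```

The combination keeps the latency of the client/server path in the common case while the P2P path removes the server as a single point of failure.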
Staff Members: Heiko Niedermayer, Ali Fessi, Ralph Holz, Dirk Haage
Autonomic Networks / Self Management
Networks have become ubiquitous in our lives, and humanity depends on the functioning of a multitude of different networks. Even for experts, running these networks manually has become an increasingly difficult, almost impossible task. It is therefore indispensable to increase management automation up to a state of autonomy.
Not only large operator-controlled networks are important; smaller-scale networks are of growing importance as well. More and more devices in our homes and everyday lives have networking capabilities and offer advanced functionality.
We target management automation from several directions.
Content-Centric Management for Future Networks
We are currently developing a platform for secure, distributed, autonomic, content-centric management. Our aim is to contribute to the standardization of network management that meets today's requirements.
With the rising amount of technical equipment in our daily environments (e.g. at home), autonomic functionality becomes necessary to integrate new hardware automatically. The abstraction our platform provides makes new applications possible that make life more comfortable.
Besides the core architecture, our special research interests are remote access, trust mechanisms, and security, as well as services and applications for networks using our new autonomic mechanisms.
Large Multitechnology Operator controlled networks
Management of operator networks, especially mobile networks, has become very complicated for several reasons.
There is an increasing number of access technologies in use within a single network. Several generations of the same technology have to be seamlessly integrated to provide a uniform user experience, for example the parallel operation of 2G, 3G, and 3.5G networks. In the future, additional radio technologies such as LTE or WiMAX will be integrated in the same way. Operators not only have to handle the large number of network elements, but also have to provide a finely tuned configuration to enable seamless operation between different access networks. In order to handle these heterogeneous multi-vendor networks with their complex interdependencies, new management concepts are required. We focus on providing a system that offers a high degree of automation and aims at autonomic management while still being under full operator control. Operation and maintenance staff should be freed from time-consuming standard tasks so that they can focus on critical situations and the optimization of the network. In case the automated functions do not act as expected, the operator still has the possibility to overrule the system.
Such an autonomic management system requires a way to include operational experience and the possibility to dynamically adapt to the current context.
Staff Members: Marc-Oliver Pahl, Tobias Bandh, Andreas Müller, Holger Kinkelin
Running Projects: AutHoNe, SelfMan
Wireless networks have become ubiquitous. Ranging from large-scale public switched telephone networks down to wireless sensor networks, they have become part of our daily lives. The large number of different interconnected networks, network technologies, and devices leads to an unprecedented level of heterogeneity and complexity, with strong impacts on management and operation. For us, topics of special interest are autonomic configuration, efficient operation, and a high level of security.
Staff Members: Corinna Schmitt, Tobias Bandh
- Wireless Sensor Networks
- Autonomic Mobile Network Management
- Secure Access to Next Generation Mobile Networks