This webpage is a translation of part of a grant application to the Canada Foundation for Innovation (CFI).
The grant launched Dalhousie's Network Information Management and Security Group.
The objective of this research is to provide intelligent solutions to
the infrastructure protection problem as experienced on distributed
computer information systems. Given the all-pervasive nature of
computer networks, the concept of mission critical
infrastructure or data is widely held. At one extreme, governments
naturally attach great importance to maintaining national security and
stability, where this has implications for telecommunications, power
systems, banking, transportation and other public
utilities. International organizations and businesses as a whole
attach considerable significance to the protection of commercially
confidential information. Naturally, none of the affected parties are
particularly interested in having experiments conducted directly on
their current working architectures.
Business, government and military organizations are increasingly relying on the networking of computer systems for the seamless integration of distributed information systems. This has provided many advantages in productivity, transparency and the integration of computing. However, the capacity for the disruption of mission critical services has also significantly increased. Typical sources of threats include viruses (email and document borne), network services (distributed denial of service attacks) and hacking (attempts to gain privileged user status). Some of these problems can be addressed by more thorough implementation of good network management, but new approaches are also needed to respond to continued improvements in methods of attack, and these new approaches must integrate with those currently available. We propose a holistic yet distributed approach to the infrastructure protection problem using these three perspectives:
Current approaches to intrusion detection typically rely on off-line, generally centralized, techniques for intruder detection with an emphasis on data mining [1]. Our innovative proposal is to investigate the benefits of using a real-time distributed monitoring system. The principal objective of the monitoring process is to provide a set of sensors capable of collecting and reporting information in a timely manner. We propose to develop sensors that will enable the preemption of previously unseen threats.
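As a purely illustrative sketch of the kind of lightweight sensor agent we have in mind (the metric, the collector address and the reporting window below are placeholders, not design commitments), a sensor might periodically summarize locally observed events and forward only that small summary to a collector:

    # Illustrative sketch only: a minimal sensor agent that samples a local
    # metric and reports a compact summary in (near) real time.  The metric,
    # collector address and window length are placeholders.
    import json
    import socket
    import time
    from collections import deque

    COLLECTOR = ("127.0.0.1", 9999)   # hypothetical collector address
    WINDOW = 60                       # seconds of history to keep

    class Sensor:
        def __init__(self, name):
            self.name = name
            self.events = deque()     # timestamps of observed events

        def observe(self, timestamp):
            """Record one locally observed event (e.g. a new connection)."""
            self.events.append(timestamp)
            while self.events and self.events[0] < timestamp - WINDOW:
                self.events.popleft()   # discard events outside the window

        def report(self, sock):
            """Send a small JSON summary instead of raw data, keeping overhead low."""
            summary = {"sensor": self.name,
                       "time": time.time(),
                       "events_last_minute": len(self.events)}
            sock.sendto(json.dumps(summary).encode(), COLLECTOR)

    if __name__ == "__main__":
        sensor = Sensor("host-A")
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # In a real deployment the observations would come from packet capture
        # or audit logs; here we simply time-stamp a few synthetic events.
        for _ in range(5):
            sensor.observe(time.time())
        sensor.report(sock)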
Reactions to threats must be well planned and quickly executed. Current
systems only detect threats, and then often only after they have
occurred. We expect a mixture of reactive and interactive decision-making
will be needed to substantially improve current practice. Depending on
their nature, some attacks can be blocked as they happen, while others
require recovery after an attack has been detected. We assume that
network administrators will not necessarily be near their offices when
threats occur. Systems should be developed for aiding the formulation of
reactive behaviours to threats using portable as well as fixed
computing platforms. A prime goal of this research is developing
interfaces that will help network managers of tomorrow's distributed
mobile work environments to quickly and accurately react to network
security threats. The proposed infrastructure specifically supports the
development of innovative interface tools for interaction on pervasive
computing systems, facilitating real-time annotation of technical
problems in distributed group decision-making contexts.
There is a lack of serious benchmarking for this type of application. Initial data sets from the DARPA Intrusion Detection Initiative have been found wanting in several important areas [3]. In particular, they do not sufficiently represent modern distributed systems, in which multiple protocols and computing platforms may co-exist. Moreover, we are also interested in supporting the documentation of attacks aimed at gaining superuser status, for the development of suitable automated counter-measures (cf. [2]). An essential part of our proposal will be the development of a testing system that can realistically mimic the distributed systems that are increasingly becoming the norm. Our work will also emphasize real-time (on-line), as opposed to off-line, techniques for dealing with threats.
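To make the benchmarking point concrete, the following hedged sketch shows one way such a test harness could replay a labelled, multi-protocol event stream against an on-line detector and score it; the traffic mix, the labels and the trivial size-based detector are invented for illustration only:

    # Hedged illustration of the kind of test harness we envisage: a labelled,
    # multi-protocol event stream is replayed against an on-line detector and
    # scored.  The traffic mix, labels and trivial detector are invented.
    import random

    def synthetic_stream(n=1000, attack_rate=0.05, seed=0):
        rng = random.Random(seed)
        protocols = ["http", "smtp", "ssh", "dns"]
        for i in range(n):
            is_attack = rng.random() < attack_rate
            yield {"t": i,
                   "proto": rng.choice(protocols),
                   "bytes": rng.expovariate(1 / (5000 if is_attack else 800)),
                   "label": "attack" if is_attack else "benign"}

    def naive_detector(event):
        # Placeholder on-line rule: flag unusually large transfers.
        return event["bytes"] > 4000

    hits = false_alarms = attacks = 0
    for ev in synthetic_stream():
        flagged = naive_detector(ev)
        if ev["label"] == "attack":
            attacks += 1
            hits += flagged
        else:
            false_alarms += flagged
    print(f"detection rate {hits / max(attacks, 1):.2f}, "
          f"false alarms {false_alarms}")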
The following are innovative, unique and original to this project:
Each of these activities entails several advances to the state of the art; the rationale and methodology for each are summarized in the following sections.
Many of the potential threats will need to be dealt with by humans. The managers will need access to the relevant information quickly and without ambiguity or distraction. The managers may need to share their thoughts about the meaning of the data with each other and select a course of action even when they are not in the same room. The software will need the managers' feedback to learn how to deal with similar threats in the future. The interface must allow managers to remain in control, do their job quickly and effectively, and inform the software of the salient points of an attack (e.g. the systems that were targeted, the frequency of attacks) without these updates becoming a burden. We expect that network administrators will not be comfortable if they think they are seeing only filtered data. The interface must therefore allow them to examine any of the data. The interface must act as a support for the managers' work, never as a hindrance.
Generic packet-switched network routing is a distributed and dynamic
problem. Traffic experienced by networks is subject to widely varying
load conditions, making it impossible to design for typical network
conditions. Solutions to this problem should therefore be adaptive,
able to reason beyond local information (intelligent), and emphasize
co-evolutionary behaviour. In our published research, we have reported
that methodologies based on problem solving from nature and resource
management have the potential to address these problems. Additionally,
systems need to be built that can identify and respond to new attack
types without affecting legitimate users.
Problem solving from Nature emphasizes the use of natural metaphors to
solve problems in parallel settings. Several such schemes are
available; specific examples include Evolutionary Computation, Swarm
Intelligence and Artificial Immune Systems. The specific interest of
this work is in the development and application of such techniques to
solve distributed problems under local information constraints while
satisfying global objectives.
Co-evolutionary approaches can be more resilient and react to change faster than methods that rely on the collection of data at a single point. We are also interested in the application of techniques to detect unusual behaviour patterns in interactions between users and host systems. We are specifically interested in the use of hierarchical unsupervised neural nets in which a graphical summary of the network is also available.
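As one concrete possibility rather than a committed design, the sketch below trains a small self-organizing map (one form of unsupervised neural net whose two-dimensional grid provides a graphical summary; a flat map rather than the hierarchical variants mentioned above) on hypothetical per-session features, and uses distance from the learned prototypes as an anomaly score:

    # Sketch only: a small self-organizing map (SOM), an unsupervised neural
    # net whose 2-D grid doubles as a graphical summary of behaviour.  The
    # per-session features (packets, bytes, distinct ports) are hypothetical
    # and would normally be normalized before training.
    import numpy as np

    rng = np.random.default_rng(0)
    GRID = (8, 8)                       # map size
    DIM = 3                             # feature dimension
    weights = rng.random((*GRID, DIM))  # one prototype vector per map cell

    def best_matching_unit(x):
        d = np.linalg.norm(weights - x, axis=2)
        return np.unravel_index(np.argmin(d), GRID)

    def train(samples, epochs=20, lr0=0.5, radius0=3.0):
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)
            radius = max(radius0 * (1 - epoch / epochs), 0.5)
            for x in samples:
                bi, bj = best_matching_unit(x)
                ii, jj = np.indices(GRID)
                dist2 = (ii - bi) ** 2 + (jj - bj) ** 2
                h = np.exp(-dist2 / (2 * radius ** 2))[..., None]
                weights[...] += lr * h * (x - weights)  # pull neighbourhood toward x

    # Normal sessions cluster tightly; an anomalous session maps far from
    # every prototype, which we use as a simple anomaly score.
    normal = rng.normal(loc=[10, 800, 3], scale=[2, 100, 1], size=(200, DIM))
    train(normal)
    odd = np.array([200, 50000, 40])    # hypothetical scan-like session
    score = np.linalg.norm(weights - odd, axis=2).min()
    print("anomaly score:", round(float(score), 1))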
Defensive information operations and computer intrusion detection systems (IDS) are primarily designed to protect the availability, confidentiality, and integrity of critical networked information systems. These operations protect computer networks against denial-of-service (DoS) attacks, unauthorized disclosure of information, and the modification or destruction of data.
The automated detection and immediate reporting of these events
are required in order to provide a timely response to attacks. The two
main classes of intrusion detection systems are those that analyze
network traffic and those that analyze operating system audit
trails. These systems typically use either rule-based misuse detection
or anomaly detection. Rule-based misuse detection systems attempt to
recognize specific behaviours that represent known forms of abuse or
intrusion. Anomaly detection attempts to recognize abnormal
user behaviour. In all of these approaches, however, the amount of
monitoring data generated is extensive, thus incurring large
processing overheads. For instance, threatening behaviour templates,
as used by general rule-based systems, aim to search/match for any
known abnormal behaviour within the monitored data. This process is
often too inefficient to conduct without parallel hardware. In
addition, such systems cannot identify any new abnormal behaviour.
A statistical anomaly detection approach will identify the normal
behaviour by mining the monitored behaviour of each user (e.g., each
command that is typed by every user) so that abnormal behaviours can
be characterized. Such systems unfortunately further increase the
processing overheads. A balance between the use of resources and the
accuracy and timeliness of intrusion detection information is
needed. We therefore propose to use the systems perspective of an
artificial immune system augmented with learning systems to address
the automation of intrusion detection and network management
operations.
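The following sketch illustrates the negative-selection idea at the core of artificial immune systems: candidate detectors are generated at random and discarded if they match any "self" (normal) pattern, so the survivors respond only to novel, "non-self" activity. The bit-string encoding and r-contiguous matching rule are standard textbook choices used here purely for illustration, not a committed design:

    # Illustration only: negative selection, the core artificial-immune-system
    # idea.  Random candidate detectors are discarded if they match any "self"
    # (normal) pattern; the survivors then flag "non-self" activity.
    import random

    L, R = 16, 7                      # string length and match threshold
    rng = random.Random(1)

    def rand_string():
        return "".join(rng.choice("01") for _ in range(L))

    def matches(a, b, r=R):
        """True if a and b agree on at least r contiguous positions."""
        run = best = 0
        for x, y in zip(a, b):
            run = run + 1 if x == y else 0
            best = max(best, run)
        return best >= r

    self_set = {rand_string() for _ in range(50)}       # normal behaviour
    detectors = []
    while len(detectors) < 100:                         # censor candidates
        d = rand_string()
        if not any(matches(d, s) for s in self_set):
            detectors.append(d)

    def is_anomalous(pattern):
        return any(matches(d, pattern) for d in detectors)

    print(is_anomalous(rand_string()))                  # test an unseen pattern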
We propose to develop a system that can share information in a distributed network and on multiple platforms in order to inform managers of threats and to enable them to discuss system issues as though they were all in one room with full access to the system. Groupware systems currently support some of that behaviour, but issues of how to display the same data (text, graphics, etc.) on different platforms (e.g. desktop, tablet, PDA) so that users can usefully collaborate are far from resolved. Furthermore, it is not clear what type of interface network managers will work best with. Central to the principles of user-centred design and the ISO definitions of usability are considerations of how users interact with systems. It would be a major mistake to make assumptions about what these users need; yet we find no published studies of their needs.
Much research exists in related areas -- network management and load balancing on computer networks -- but without placing it in a more general context. Moreover, load balancing traditionally focuses on balancing tasks/jobs at the system or application level. Traditional network management focuses on monitoring the system to detect a fault or intrusion or to measure performance. Classical information retrieval research, on the other hand, assumes all of the above work well, and focuses on increasing the efficiency of retrieval. Our research takes a systems-oriented approach to studying load balancing and traffic management concepts from the perspective of security management and infrastructure protection in a distributed systems setting. The objective will be to develop a system capable of learning to dynamically change the location of distributed information for more efficient use of these features without recourse to centralized control. To do so, such a system will actively monitor network load profiles, provide timely reports and aid the identification of bottlenecks using a distributed set of agents. Management will then be in a position to identify whether reported conditions represent attacks or a network utilization problem, and to make appropriate recommendations.
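A minimal sketch, under stated assumptions, of the distributed monitoring agents described above: each node keeps a short history of its own utilisation and flags a possible bottleneck when it stays well above what its neighbours report. The node names, thresholds and load figures are invented:

    # Sketch: per-node agents keep a short load history and flag a likely
    # bottleneck when their own utilisation stays far above the average
    # reported by their neighbours.  All numbers here are illustrative.
    from collections import deque
    from statistics import mean

    class LoadAgent:
        def __init__(self, node, window=10, factor=2.0):
            self.node = node
            self.history = deque(maxlen=window)   # recent local utilisation
            self.factor = factor

        def sample(self, utilisation):
            self.history.append(utilisation)

        def check(self, neighbour_reports):
            """Compare local average load with neighbours' reports."""
            if not self.history or not neighbour_reports:
                return None
            local = mean(self.history)
            others = mean(neighbour_reports)
            if local > self.factor * max(others, 0.05):
                return f"{self.node}: possible bottleneck ({local:.2f} vs {others:.2f})"
            return None

    # Toy run: node C is loaded well above its neighbours.
    agents = {n: LoadAgent(n) for n in "ABC"}
    loads = {"A": 0.2, "B": 0.25, "C": 0.9}
    for _ in range(10):
        for n, a in agents.items():
            a.sample(loads[n])
    for n, a in agents.items():
        report = a.check([mean(agents[m].history) for m in agents if m != n])
        if report:
            print(report)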
Over the last decade the problem solving from Nature paradigm has
begun to provide several unique solutions to various distributed
problems applicable to computer networks. Landmark solutions of this
kind include network routing using a Social Insect Metaphor [4, 5, 6]
and resource balancing using Genetic Algorithms [7]. An important
property shared by such systems is the ability to provide a sufficient
working solution in real-time. However, current approaches often
suppose access to global sources of information that are not available
in practice. This assumption results in an over-reliance on global
sources of information about the environment studied (the network);
when only local information is available, such approaches lose much of
their usefulness and the agents perform poorly. The work proposed here
will increase the autonomy of the agents using evolutionary concepts
and therefore provide a much stronger capacity for problem
solving. Moreover, rather than attempting to evolve a single super
individual with the capacity to solve any form of problem -- as is
currently the case in genetic algorithms or neural networks -- we
emphasize shared problem solving, or co-evolution. Finally, this work
will provide working examples of the methodology, where previous
examples have relied on simulations whose information assumptions do
not hold true in practice.
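A toy example in the spirit of the ant-based routing work cited above [4, 5, 6] (the topology, delays and reinforcement constant are invented): each node keeps a per-destination probability table over its neighbours, and forward ants that experience shorter trips reinforce the neighbour they used, so probability mass shifts toward the faster path using purely local information:

    # Toy sketch in the spirit of ant-based routing [4, 5, 6]: each node keeps,
    # per destination, a probability ("pheromone") table over its neighbours.
    # Ants that experience a short trip reinforce the neighbour they used; the
    # other entries decay so the table stays normalised.  All figures invented.
    import random

    rng = random.Random(2)

    # pheromone[node][destination] -> {neighbour: probability}
    pheromone = {"A": {"D": {"B": 0.5, "C": 0.5}}}
    delay = {"B": 5.0, "C": 12.0}       # observed trip times via each neighbour

    def reinforce(node, dest, neighbour, trip_time, strength=0.1):
        table = pheromone[node][dest]
        r = strength / trip_time                     # shorter trips reinforce more
        table[neighbour] += r * (1 - table[neighbour])
        for other in table:
            if other != neighbour:
                table[other] *= (1 - r)              # implicit evaporation

    for _ in range(200):                             # launch forward "ants"
        nxt = rng.choices(list(pheromone["A"]["D"]),
                          weights=pheromone["A"]["D"].values())[0]
        observed = delay[nxt] * rng.uniform(0.9, 1.1)
        reinforce("A", "D", nxt, observed)

    print(pheromone["A"]["D"])   # probability mass shifts toward the faster path B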
Intrusion detection systems generally fall into two generic classes:
those that are able to respond to unseen attacks and those that are
not. Most commercially available systems rely on signature
verification, and hence are only able to identify intruders for which
previously recorded attack templates exist. Such systems share many
attributes with methods for virus detection: the need for frequent
updates to the database of templates and the increasing computational
cost of detection as the number of behaviours in the database
increases. Given the interactive and individualistic nature of
intruders, there has been an interest in developing systems able to
identify novel attacks. Such systems have the potential to avoid
waiting for a successful attack before being able to react to
it. Anomaly detection represents a widely used methodology, in which
statistical methods are generally employed to provide descriptions of
what represents typical user behaviour. More recent methods utilize
metaphors from the biological immune system [8] or neural nets. In
these latter cases, the emphasis has been on concentrating on specific
areas of activity in order to suitably constrain the search
process. Indeed, recent work has recommended viewing intruders as
attempting to perform activities that provide privileged access rights
[2]. In our approach, we view this as a process similar to document
summarization. We are interested in deriving learning systems able to
efficiently summarize the intent of word sequences, and then to
measure the difference from typical behaviour. Moreover, our work
indicates that unsupervised, as opposed to supervised, learning
systems are capable of performing such tasks. The use of unsupervised
learning methods is significant as it makes fewer assumptions
regarding the initial data.
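To make the document-summarization analogy concrete, a hedged sketch: a user's command stream is treated like text, an unsupervised profile of its common bigrams is built from ordinary sessions, and a new session is scored by the fraction of bigrams never seen before. The command history and the scoring rule are illustrative only:

    # Hedged sketch of the summarization analogy: treat a user's command
    # stream like text, build an unsupervised profile of its common bigrams,
    # and score a new session by the fraction of bigrams never seen before.
    from collections import Counter

    def bigrams(commands):
        return list(zip(commands, commands[1:]))

    # "Training": a profile built from ordinary sessions (no labels needed).
    history = ["ls", "cd", "ls", "vi", "make", "ls", "cd", "vi", "make", "ls"]
    profile = Counter(bigrams(history))

    def anomaly_score(session):
        grams = bigrams(session)
        if not grams:
            return 0.0
        unseen = sum(1 for g in grams if g not in profile)
        return unseen / len(grams)     # 0 = entirely familiar, 1 = all novel

    print(anomaly_score(["ls", "cd", "vi", "make"]))                 # low
    print(anomaly_score(["wget", "chmod", "gcc", "su", "passwd"]))   # high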
Dalhousie's Network Information Management and Security Group homepage.
Blueprint for a Computer Immune System, in Artificial Immune Systems and Their Applications, D. Dasgupta (Ed.), Springer-Verlag, pp. 242-261, 1998.
Detecting and Displaying Novel Computer Attacks with Macroscope, IEEE Transactions on Systems, Man, and Cybernetics, Part A, 31(4): 275-281, 2002.
Testing Intrusion Detection Systems: A Critique of the 1998 and 1999 DARPA Intrusion Detection System Evaluations as Performed by Lincoln Laboratory, ACM Transactions on Information System Security, 3(4): 262-294, 2000.
AntNet: Distributed Stigmergetic Control for Communications Networks, Journal of Artificial Intelligence Research, 9: 317-365, 1998.
Ant-based Load Balancing on Telecommunications Networks, Adaptive Behaviour, 5(2): 167-207, 1997.
An Adaptive Network Routing Algorithm Employing Path Genetic Operators, Proceedings of the 7th International Conference on Genetic Algorithms, Morgan Kaufmann, pp. 643-649, 1997.
Exploring Evolutionary Approaches to Distributed Database Management, in Telecommunications Optimization, D. Corne, M. J. Oates, G. D. Smith (Eds.), John Wiley & Sons, pp. 235-264, 2001.
An Immunological Approach to Change Detection: Algorithms, Analysis and Implications, IEEE Symposium on Security and Privacy, 1996.
Created on 08 October 2002 by J. Blustein.
Last updated on 07 August 2003 by J. Blustein.