Details
Start Date: October 2024
Deadline for applications: Friday 19th April 2024
Interview Date: To be confirmed
Description
Background: Network security remains a critical challenge in today's world, with the ever-evolving landscape of cyber threats demanding new and adaptable solutions. Traditional rule-based methods for anomaly detection in network traffic struggle to keep pace with the sophistication of attackers, often requiring manual intervention, and lacking the flexibility to adapt to novel attack vectors. This project seeks to address these limitations by leveraging the power of advanced statistical learning, specifically focusing on the field of Explainable Artificial Intelligence (XAI), to develop a real-time anomaly detection system for network Intrusion Detection Systems (IDS).
The challenge: AI techniques have shown great promise in anomaly detection, offering the potential to automatically learn patterns from historical data and identify deviations indicative of malicious activity. However, the sheer volume, velocity, and heterogeneity of network traffic data present significant challenges. Efficient and scalable algorithms are needed to process this data in real-time, while simultaneously ensuring the interpretability and explainability of the results. This is crucial in an IDS setting, where understanding the rationale behind anomaly detection is critical for effective decision-making and maintaining trust in the system.
Project outline: This PhD project aims to build a data-driven approach for real-time anomaly detection in network traffic using XAI techniques. We will begin with a comprehensive review of existing online anomaly detection algorithms, focusing primarily on state-of-the-art XAI techniques while concurrently investigating competing deep learning approaches. At the same time, to gain a deeper understanding of the problem space, we will conduct an in-depth study of the specific challenges and complexities associated with network traffic data, such as its high volume, velocity, and inherent heterogeneity (both of the data streams and of the anomalous events). The goal of the initial analysis will be to evaluate and understand the limitations of current approaches with network traffic anomaly detection.
Drawing upon the insights gained from the comparative analysis and the in-depth study, we will propose a novel methodology tailored specifically for real-time anomaly detection in network traffic. One way is to generalise our recent developments in Statistical Anomaly Detection to work with high-dimensional network data. Those approaches are based on well-defined fast and efficient optimizations that identify unexpected changes in data patterns to answer the question “Are we seeing something significantly different from what has been observed so far?”. This will allow the methodology to handle real-time data streams, while simultaneously ensuring the interpretability of its outputs for informed human decision-making.
The hope would be to evaluate the resulting procedure via a real-world IDS, via a collaboration with the Lancaster University's ISS department, refining the algorithm directly with help of the practitioners.
Broader outcomes: While the primary focus of this project will be on real-time anomaly detection for Lancaster University's IDS, the proposed approach, with appropriate adaptation and consideration for specific domain requirements and data characteristics, has the potential to be generalized to other network monitoring applications beyond intrusion detection and potentially larger-scale scenarios.
The candidate: The ability to work and research independently is highly valued. This project expects strong foundational knowledge in data science, with a specific emphasis on statistical learning, and general understanding of ML. Given the wide scope of the project, in addition to a solid theoretical background, the candidate should have knowledge of both R and python as well as the most popular data manipulation and ML libraries.
For informal enquiries about the project, please contact Gaetano Romano on (g.romano@lancaster.ac.uk) or Bill Oxbury on (w.oxbury@lancaster.ac.uk).
To apply, please send a CV and cover letter demonstrating your motivation for the post to dsi@lancaster.ac.uk.
Details
Start Date: As soon as possible
Deadline for application: Open (it is recommended you apply as soon as possible)
Interview: Rolling
Description
If you’re interested in protecting AI from rapidly emerging cyber threats and securing a technology that will define the coming decades, this PhD studentship is for you.
We are seeking candidates to join our AI security group at Lancaster University, and become part of this rapidly growing research field.
The adoption of Artificial Intelligence (AI) and prominent technologies such as Generative AI, LLMs, and Agentic AI systems is rapidly accelerating across both research and industry.
While there is considerable research activity on the application of AI for security, there has been less attention towards the security of AI itself. AI security focuses on addressing cyber security risks against the AI systems against a wide plethora of cyber attacks, spanning prompt injection, data leakage, jailbreaking, bypassing guardrails, model backdoors, and more. The emergence of such AI risks has drawn the attention of every nation and major business, however existing cyber security tools and methods are ineffective within AI systems due to the intrinsically random, complex, and opaque nature of neural networks. To date, how to secure today’s and tomorrow’s AI models and systems remains unsolved.
This project would provide you the skill and training necessary to become a researcher specializing in AI security – an area that is increasingly sought after in academia and industry.
Research Areas:
Topics of interest you could pursue include:
- Discover new types of cyber attacks / security vulnerabilities in AI and GenAI
- Create defence systems and countermeasures against AI cyber attacks
- Design run-time detection systems for prompt injection and jailbreaking
- Explore different cyber attack modalities (i.e. malicious instructions in images/audio)
- Build and develop cutting-edge LLM guardrails and firewalls
- Investigate hidden security characteristics within neural networks
- Identify ShadowAI – malicious AI systems hidden within an organization
- Uncover backdoor attacks and model hijacking within ML artefacts
What We Offer:
- A 3.5-year fully funded PhD studentship (including both tuition and stipend).
- Access to a large-scale GPU data centre entirely dedicated to our research lab.
- Comprehensive training in cutting-edge AI technology and cyber security techniques.
- Employment opportunities at Mindgard (https://mindgard.ai/), an award-winning AI security company founded at our lab, and now based in the heart of London.
- Collaboration opportunities with Nvidia, Mindgard, GCHQ’s National Cyber Security Centre, and NetSPI, amongst others.
- Opportunity to travel to conferences internationally to present your research.
Our Research Lab:
We are among the few labs globally specializing in AI security. You will be part of a new cohort of PhD students joining an established team of scientists and engineers. Founded in 2016, the research lab led by Prof. Peter Garraghan is internationally renowned in AI systems and security, publishing over 70 research papers, securing over £14M in external grant funding, the formation of Mindgard, and all research students to date securing positions in academia or industry R&D labs upon graduation.
About You:
- We highly value people who are kind, curious and believe in making a difference.
- A good background in Computer Science, ideally a BSc in Computer science (or equivalent) with a 2:1 classification and above.
- Interest in Artificial Intelligence, Cyber Security, Distributed Systems, or a combination of the above.
- Highly motivated, and capable of working both independently and as part of a team.
- Good communication, technical knowledge, and writing skills.
Get in Touch:
These positions are available now, thus candidates are strongly recommended to apply as early as possible.
For informal enquiries about these positions, please contact and share your CV with Prof. Peter Garraghan. To apply, please visit our school PhD opportunities page, which includes guidance on submission, and a link to the submission system.