QS World University Rankings by Subject 2024
Lancaster University has been ranked 51-70 for Data Science and Artificial Intelligence in the QS World University Rankings by Subject 2024.
World-Leading Research
Theoretical and Practical Study
Develop your skills and expertise in artificial intelligence (AI) and data science with our flexible Master's programme, which allows you to select modules, guided by pathways, to develop an enhanced understanding of modern data science technologies. Designed in co-operation with industry, the programme teaches you how data science and AI work together, through real industry-specific projects and guest lectures from professional data scientists. Upon graduation, you will have the confidence to apply data science techniques that enable companies to use artificial intelligence to gain insights and make better decisions. This Master's has been pivotal in launching the careers of hundreds of in-demand data scientists with high starting salaries.
How does it work?
Term 1, core data science modules
Study five core modules that span the breadth of data science, including the fundamentals of statistics and programming in Python and R, modern machine learning, and artificial intelligence. This term is essential in providing the foundations for you to advance your knowledge and technical skills in your chosen pathway.
Term 2, specialist modules
Study three modules in this term and choose the specialist pathway that aligns with your interests and career goals.
Term 3, industry placement
Apply the knowledge and skills you've gained in the previous two terms with a 14-week placement, either within industry or as part of an academic research project. Our students really value this experience: many are offered jobs at the end, and many find that it builds confidence and adds weight to their CV.
The placement is equivalent to a substantial research project or dissertation, so an academic specialist will supervise you as you develop your ability to formulate a project plan, gather and analyse data, interpret your results, and present findings in a professional environment. This research will be an opportunity to bring together everything you have learnt over the year, expand your problem-solving abilities and manage a significant project.
A 2:1 Hons degree (UK or equivalent) in any discipline, provided that the applicant has some experience of programming and has had exposure to quantitative methods such as statistics or mathematical modelling. Applicants with a 2:2 Hons degree (UK or equivalent) in any discipline along with relevant experience are welcome to apply and will be considered on a case-by-case basis.
Students have successfully completed the course with undergraduate degrees in Computer Science, Mathematics, Statistics, Engineering, Physics, Life Sciences, Economics, Finance, Linguistics, and others.
If you have studied outside of the UK, we would advise you to check our list of international qualifications before submitting your application.
We may ask you to provide a recognised English language qualification, dependent upon your nationality and where you have studied previously.
We normally require an IELTS (Academic) Test with an overall score of at least 6.5, and a minimum of 6.0 in each element of the test. We also consider other English language qualifications.
If your score is below our requirements, you may be eligible for one of our pre-sessional English language programmes.
Contact: Admissions Team +44 (0) 1524 592032 or email pgadmissions@lancaster.ac.uk
You will study a range of modules as part of your course, some examples of which are listed below.
Information contained on the website with respect to modules is correct at the time of publication, but changes may be necessary, for example as a result of student feedback, Professional Statutory and Regulatory Bodies' (PSRB) requirements, staff changes, and new research. Not all optional modules are available every year.
Students are provided with comprehensive coverage of the problems related to data representation, manipulation and processing, in terms of extracting information from data, including big data. They will apply their working understanding to the data primer, data processing and classification. They will also enhance their familiarity with dynamic data space partitioning, using evolving clustering and data clouds, and with monitoring the quality of the self-learning system online.
Students will also gain the ability to develop software scripts that implement advanced data representation and processing, and to demonstrate their impact on performance. In addition, they will develop a working knowledge of the trade-offs between performance and complexity in designing practical solutions for problems of data representation and processing, in terms of storage, time and computing power.
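For a flavour of the streaming, self-adapting style of data processing described above, here is a minimal sketch of an online (incremental) cluster-centre update. The distance threshold and update rule are our illustrative assumptions, not the module's own algorithms.

```python
import numpy as np

# Illustrative sketch: stream points one at a time, updating the nearest
# cluster centre as a running mean, or opening a new cluster if no centre
# is close enough. The radius threshold is an assumption for this demo.
def stream_cluster(points, radius=1.0):
    centres, counts = [], []
    for x in points:
        if centres:
            d = [np.linalg.norm(x - c) for c in centres]
            j = int(np.argmin(d))
            if d[j] < radius:
                counts[j] += 1
                centres[j] += (x - centres[j]) / counts[j]  # running mean
                continue
        centres.append(np.array(x, dtype=float))  # open a new cluster
        counts.append(1)
    return centres

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(3, 0.2, (50, 2))])
print(len(stream_cluster(data)))  # typically finds the 2 simulated clusters
```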
This module will help you understand what the data science role entails and how data scientists perform their job within an organisation on a day-to-day basis. You will look at how research is performed, in terms of formulating a hypothesis and the implications of research findings, and become aware of different research strategies and when these should be applied. You will gain an understanding of data processing, preparation and integration, and how this enables research to be performed. You will also learn how data science problems are tackled in an industrial setting, and how findings are communicated to people within the organisation.
A large part of the Master's involves completing the industry- or research-related project. Students select an industry or research partner, undertake a placement in June and July, and then submit a written dissertation of up to 20,000 words in early September.
This is primarily a self-study module designed to provide the foundation of the main dissertation, at a level considered to be of publishable quality. The project also offers students the opportunity to apply their technical skills and knowledge to current world-class research problems and to develop expert knowledge of a specific area.
The topic of the project will vary from student to student, depending on the data science specialism (e.g. a computing specialism may involve the design of a system, while specialisms in data analytics, health or the environment are likely to be more applied, perhaps focusing on inherent data structure and processes).
This module is designed both for students who are completely new to programming and for experienced programmers, bringing both groups to the skill level needed to handle complex data science problems. Beginner students will learn the fundamentals of programming, while experienced students will have the opportunity to sharpen and further develop their programming skills. Students will learn data-processing techniques, including visualisation and statistical data analysis. To provide a broad foundation for handling the most complex data science tasks, we will also cover problem solving and the development of graphical applications. Two open-source programming languages, R and Python, will be used.
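As a small taste of the kind of data processing and visualisation work covered, here is a minimal Python sketch; the column names and values are invented for illustration.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative data only: summarise a small table and draw a simple plot.
df = pd.DataFrame({
    "city": ["Lancaster", "Lancaster", "Lancaster", "York", "York", "York"],
    "temperature": [12.1, 13.4, 11.9, 11.8, 12.9, 12.2],
})

summary = df.groupby("city")["temperature"].agg(["mean", "std"])
print(summary)                                  # statistical summary by group

df.boxplot(column="temperature", by="city")     # quick visual comparison
plt.show()
```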
This module is for students who, given their mathematical background, require a graduate-level introduction to two core areas that are essential building blocks for further advanced study of statistical modelling, methodology and theory. Students will study either this module or the more advanced core module ‘Statistical Fundamentals II’, but not both.
This module will motivate the use of statistical modelling as a tool for making inferences about a population given a sample of data. Students will be introduced to the basic terminology of statistical modelling, and the similarities and differences between statistical and machine learning approaches will be discussed to lay the foundations for the development of both over the remaining core modules. Students will cover the concepts of sampling uncertainty, statistical inference and model fitting, with sampling uncertainty used to motivate the need for standard errors and confidence intervals. Once core concepts have been established, linear regression and generalised linear models will be introduced as essential statistical modelling tools. An understanding of these models will be obtained through implementation in the statistical software package R.
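To illustrate the ideas of model fitting, standard errors and confidence intervals described above, here is a minimal sketch in Python with simulated data (the module itself works in R; the data and true parameter values are our assumptions).

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: true intercept 2.0, true slope 0.5 (illustrative only).
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 100)

X = sm.add_constant(x)          # design matrix with an intercept column
fit = sm.OLS(y, X).fit()        # fit the linear regression model
print(fit.params)               # point estimates of intercept and slope
print(fit.bse)                  # standard errors (sampling uncertainty)
print(fit.conf_int())           # 95% confidence intervals
```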
This module is core only for those with the required mathematical background to complete it. Students who instead require a graduate-level introduction to the area will study the core module ‘Statistical Fundamentals I’. If you complete this module, you will not be required to take Statistical Fundamentals I.
The areas that will be covered are statistical inference using maximum likelihood and generalised linear models (GLMs). Building on an undergraduate-level understanding of mathematics, statistics (hypothesis testing and linear regression) and probability (univariate discrete and continuous distributions; expectations, variances and covariances; the multivariate normal distribution), this module will motivate the need for a generic method for model fitting and then demonstrate how maximum likelihood provides a solution to this. Following on from this, GLMs, a widely and routinely used family of statistical models, will be introduced as an extension of the linear regression model.
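The sketch below illustrates maximum likelihood as a generic fitting method, estimating a Poisson rate by minimising the negative log-likelihood numerically; the data and optimisation details are illustrative assumptions, not module material.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Illustrative count data (e.g. events per interval).
counts = np.array([3, 1, 4, 2, 5, 2, 3, 4])

def neg_log_lik(log_lam):
    lam = np.exp(log_lam)  # optimise on the log scale so lambda stays > 0
    return -np.sum(counts * np.log(lam) - lam - gammaln(counts + 1))

res = minimize(neg_log_lik, x0=0.0)
# For the Poisson model the MLE is the sample mean, so these should agree:
print(np.exp(res.x[0]), counts.mean())
```

The same principle, maximising a likelihood, underlies the fitting of the GLMs introduced in the module.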
This module provides an introduction to statistical learning. General topics covered include big data, missing data, biased samples and recency. Likelihood and cross-validation will be introduced as generic methods to fit and select statistical learning models. Cross-validation will require an understanding of splitting the sample into calibration, training and validation sets. The focus will then move to handling regression problems for large data sets via variable-reduction methods such as the Lasso and Elastic Net. A variety of classification methods will be covered, including logistic and multinomial logistic models, regression trees, random forests, bagging and boosting. Examination of classification methods will culminate in neural networks, which will be presented as extensions of generalised linear modelling. Unsupervised learning for big data is then covered, including K-means, PAM and CLARA, followed by mixture models and latent class analysis.
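As an illustration of variable reduction with the Lasso and penalty selection by cross-validation, both named above, here is a short sketch on simulated data (all settings are illustrative assumptions).

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.datasets import make_regression

# Simulated regression problem: 50 candidate features, only 5 informative.
X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=5.0, random_state=0)

# LassoCV picks the penalty strength by 5-fold cross-validation.
model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_)  # features with non-zero coefficients
print(f"penalty alpha={model.alpha_:.3f}, kept {selected.size} of 50 features")
```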
This module provides students with up-to-date information on current applications of data in both industry and research. Expanding on the module ‘Fundamentals of Data’, students will gain a more detailed level of understanding about how data is processed and applied on a large scale across a variety of different areas.
Students will develop knowledge in different areas of science and will recognise their relation to big data, in addition to understanding how large-scale challenges are being addressed with current state-of-the-art techniques. The module will cover recommendation on the Social Web and its roots in social network theory and analysis, as well as its adaptation and extension to large-scale problems, focusing on: user-generated content and crowd-sourced data; social networks (theories and analysis); and recommendation (collaborative filtering, content recommendation challenges, and friend recommendation/link prediction).
On completion of this module, students will be able to create scalable solutions to problems involving data from the semantic, social and scientific web, and to process networks and perform network analysis in order to identify key factors in information flow.
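For a concrete sense of the collaborative filtering mentioned above, here is a minimal user-based sketch: score an unseen item for a user from the ratings of similar users. The tiny ratings matrix is invented for illustration and the method is a deliberately naive baseline, not the module's approach.

```python
import numpy as np

# Rows are users, columns are items; 0 means "not yet rated" (toy data).
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

user, item = 0, 2  # predict user 0's rating for item 2
others = [v for v in range(len(R)) if v != user]
sims = np.array([cosine(R[user], R[v]) for v in others])
ratings = np.array([R[v, item] for v in others])
mask = ratings > 0  # use only neighbours who actually rated the item

# Similarity-weighted average of neighbours' ratings.
print(sims[mask] @ ratings[mask] / sims[mask].sum())
```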
In this module we explore the architectural approaches, techniques and technologies that underpin today's Big Data system infrastructure, particularly large-scale enterprise systems. It is one of two complementary modules that comprise the Systems stream of the Computer Science MSc. Together they provide a broad knowledge and context of systems architecture, enabling students to assess new systems technologies, to understand where technologies fit in the larger scheme of enterprise systems and state-of-the-art research thinking, and to know what to read to go deeper.
The principal ethos of the module is to focus on the principles of Big Data systems, and on applying those principles using state-of-the-art technology to engineer and lead data science projects. Detailed case studies and invited industrial speakers will be used to provide supporting real-world context and a basis for interactive seminar discussions.
Clinical trials are planned experiments on human beings designed to assess the relative benefits of one or more forms of treatment. For instance, we might be interested in studying whether aspirin reduces the incidence of pregnancy-induced hypertension, or we may wish to assess whether a new immunosuppressive drug improves the survival rate of transplant recipients.
This module combines the study of technical methodology with discussion of more general research issues, beginning with the relative advantages and disadvantages of different types of medical studies. The module covers the definition and estimation of treatment effects, as well as cross-over trials, issues of sample size determination, and equivalence trials. There is an introduction to flexible trial designs that allow sample size re-estimation during an ongoing trial. Finally, other relevant topics such as meta-analysis and accommodating confounding at the design stage are briefly discussed.
Students will gain knowledge of the basic elements of clinical trials. They will develop the ability to recognise and use principles of good study design, and will also be able to analyse and interpret study results to make correct scientific inferences.
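As a small illustration of the sample size determination mentioned above, the sketch below applies the standard approximate formula for comparing two proportions; the event rates, significance level and power are illustrative assumptions, not module content.

```python
from scipy.stats import norm

# Anticipated event rates under control and new treatment (assumed values).
p1, p2 = 0.20, 0.10
alpha, power = 0.05, 0.80       # two-sided 5% test, 80% power

z_a = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
z_b = norm.ppf(power)           # quantile corresponding to the target power

# Standard approximation for the required sample size per arm.
n = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
print(f"about {int(n) + 1} patients per arm")  # roughly 197 here
```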
Distributed artificial intelligence is fundamental to contemporary data analysis. Large volumes of data and computation call for multiple computers in problem solving, and being able to understand and use those resources efficiently is an important skill for a data scientist. A distributed approach is also important for fault tolerance and robustness, as the loss of a single component must not significantly compromise the whole system. Additionally, contemporary and future distributed systems go beyond computer clusters and networks: they often comprise multiple agents -- software, humans and/or robots -- that all interact in problem solving. As data scientists, we may have control of the full distributed system, or only of one piece, and we have to decide how that piece must behave in the face of the others in order to accomplish our goals.
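As a small stand-in for the multi-computer setting described above, this sketch distributes a computation over several worker processes on one machine; the task and worker count are illustrative assumptions only.

```python
from multiprocessing import Pool

# An arbitrary CPU-bound task to farm out to workers (illustrative).
def heavy_task(n: int) -> int:
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Distribute eight independent tasks across four worker processes.
    with Pool(processes=4) as pool:
        results = pool.map(heavy_task, [10**5] * 8)
    print(len(results), "tasks completed")
```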
Extreme Value Theory is an area of probability theory which describes the stochastic behaviour of events occurring in the tail of a distribution (e.g. block maxima). This course will cover both an overview of key theoretical results and the statistical modelling approaches motivated by these results. Theoretical results covered will include limiting distributions for block maxima and Peaks Over Threshold events, for both independent and time-series data. Modelling will involve the development of extreme value statistical models and their application to data sets from financial and environmental applications. The concept of risk will be explored, leading to an understanding of return levels and Value at Risk measures. The concept of extremal dependence will also be introduced.
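To make the block-maxima approach and return levels concrete, here is a minimal sketch: fit a generalised extreme value (GEV) distribution to simulated annual maxima and read off an estimated 100-year return level. The data are an illustrative assumption.

```python
import numpy as np
from scipy.stats import genextreme

# Simulated "annual maxima" (e.g. sea levels); parameters are assumptions.
rng = np.random.default_rng(0)
annual_maxima = rng.gumbel(loc=30.0, scale=5.0, size=50)

# Fit the GEV distribution by maximum likelihood.
shape, loc, scale = genextreme.fit(annual_maxima)

# The T-year return level is the (1 - 1/T) quantile of the fitted GEV.
r100 = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
print(f"estimated 100-year return level: {r100:.1f}")
```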
After introducing the topic of forecasting in business organisations, issues concerned with building forecasting models in regression and its extensions are presented, building on material covered earlier in the course. Extrapolative forecasting methods, in particular Exponential Smoothing, are then considered, as well as machine learning and artificial intelligence methods, in particular neural networks. All methods are embedded in a case study of forecasting in organisations. The course ends by analysing how forecasting is applied to operations and how forecasting can best be improved in an organisational context.
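For illustration of the extrapolative methods named above, this sketch fits a Holt-Winters exponential smoothing model to a simulated monthly series with trend and seasonality; the data and model settings are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Simulated four years of monthly "sales": trend + annual seasonality + noise.
rng = np.random.default_rng(0)
t = np.arange(48)
sales = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 48)

# Additive trend and additive seasonality with a 12-month period.
model = ExponentialSmoothing(sales, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
print(model.forecast(6))  # forecasts for the next six months
```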
This module introduces students to the fundamental principles of Geographical Information Systems (GIS) and Remote Sensing (RS) and shows how these complementary technologies may be used to capture/derive, manipulate, integrate, analyse and display different forms of spatially-referenced environmental data. The module is highly vocational with theory-based lectures complemented by hands-on practical sessions using state-of-the-art software (ArcGIS & ERDAS Imagine).
In addition to the subject-specific aims, the module provides students with a range of generic skills to synthesise geographical data, develop suitable approaches to problem-solving, undertake independent learning (including time management) and present the results of the analysis in novel graphical formats.
The module provides an introduction to the fundamental methods and approaches from the interrelated areas of data mining, statistical/machine learning, and intelligent data analysis. It covers the entire data analysis process, from the formulation of a project objective and developing an understanding of the available data and other resources, up to statistical modelling and performance assessment. The module focuses on classification and uses the R programming language.
Almost every set of data, whether it consists of field observations, data from laboratory experiments, clinical trial outcomes, or information from population surveys or longitudinal studies, has an element of missing data. For example, participants in a survey or clinical trial may drop out of the study, measurement instruments may fail, or human error may invalidate instrument readings. Missingness may or may not be related to the information being collected; for instance, drop-out may occur because a patient dislikes the side-effects of an experimental treatment, because they move out of the area, or because they no longer have the time to attend follow-up appointments. In this module you will learn about the different ways in which missing data can arise, and how these can be handled to mitigate the impact of the missingness on the data analysis. Topics covered include single imputation methods, Bayesian imputation, multiple imputation (Rubin's rules, chained equations and multivariate methods, as well as suitable diagnostics) and modelling dropout in longitudinal modelling.
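As a brief illustration of imputation by chained equations, one of the approaches listed above, here is a sketch in which each variable with missing values is modelled from the others; the toy data and missingness pattern are illustrative assumptions.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy data with two missing entries (np.nan); columns are related variables.
X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [5.0, 6.0, 11.0],
              [7.0, 8.0, 15.0]])

# Iteratively model each incomplete column from the remaining columns.
imputer = IterativeImputer(random_state=0)
print(imputer.fit_transform(X))  # missing entries replaced by predictions
```

A full multiple-imputation analysis, as covered in the module, would repeat the imputation several times and pool the resulting estimates using Rubin's rules.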
Optimisation, sometimes called mathematical programming, has applications in many fields, including operational research, computer science, statistics, finance, engineering and the physical sciences. Commercial optimisation software is now capable of solving many industrial-scale problems to proven optimality.
The module is designed to enable students to apply optimisation techniques to business problems. Building on the introduction to optimisation in the first term, students will be introduced to different problem formulations and algorithmic methods to guide decision making in business and other organisations.
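To show what an optimisation formulation of a business problem can look like, here is a minimal linear programme: maximise profit subject to resource limits. All coefficients are illustrative assumptions.

```python
from scipy.optimize import linprog

# Maximise profit 40x + 30y; linprog minimises, so negate the objective.
c = [-40, -30]
A_ub = [[1, 2],                # labour hours used per unit of x and y
        [3, 1]]                # material used per unit of x and y
b_ub = [40, 30]                # available labour and material

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)         # optimal production plan and maximum profit
```

For this toy instance the optimum produces 4 units of x and 18 of y, for a profit of 700.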
Introducing epidemiology, the study of the distribution and determinants of disease in human populations, this module presents its main principles and statistical methods. The module addresses the fundamental measures of disease, such as incidence, prevalence, risk and rates, including indices of morbidity and mortality.
Students will also develop awareness of epidemiologic study design, such as ecological studies, surveys, and cohort and case-control studies, in addition to diagnostic test studies. Epidemiological concepts will be addressed, such as bias and confounding, matching and stratification, and the module will also address the calculation of rates, standardisation and adjustment, as well as issues in screening.
This module provides students with a historical and general overview of epidemiology and related strategies for study design, and should enable students to apply appropriate methods of analysis for rates and risk of disease. Students will develop skills in critical appraisal of the literature and, in completing this module, will have developed an appreciation for epidemiology and an ability to describe the key statistical issues in the design of ecological studies, surveys, case-control studies, cohort studies and RCTs, whilst recognising their advantages and disadvantages.
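As a small worked example of the measures of disease mentioned above, here are the risk ratio and odds ratio computed from a 2x2 table of a cohort study; the counts are illustrative assumptions.

```python
# 2x2 table from a hypothetical cohort study (illustrative counts).
#                       diseased  healthy
exposed_d, exposed_h = 30, 970
unexposed_d, unexposed_h = 10, 990

risk_exposed = exposed_d / (exposed_d + exposed_h)       # 0.030
risk_unexposed = unexposed_d / (unexposed_d + unexposed_h)  # 0.010

risk_ratio = risk_exposed / risk_unexposed               # relative risk
odds_ratio = (exposed_d / exposed_h) / (unexposed_d / unexposed_h)
print(f"risk ratio {risk_ratio:.2f}, odds ratio {odds_ratio:.2f}")
```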
This module addresses a range of topics relating to survival data; censoring, hazard functions, Kaplan-Meier plots, parametric models and likelihood construction will be discussed in detail. Students will engage with the Cox proportional hazard model, partial likelihood, Nelson-Aalen estimation and survival time prediction and will also focus on counting processes, diagnostic methods, and frailty models and effects.
The module provides an understanding of the unique features and statistical challenges surrounding the analysis of survival and event history data, in addition to an understanding of how non-parametric methods can aid the identification of modelling strategies for time-to-event data, and recognition of the range and scope of survival techniques that can be implemented within standard statistical software.
General skills will be developed, including the ability to express scientific problems in a mathematical language, improved scientific writing skills, and an enhanced range of computing skills related to the manipulation and analysis of data.
On successful completion of this module, students will be able to apply a range of appropriate statistical techniques to survival and event history data using statistical software, to accurately interpret the output of statistical analyses using survival models fitted in standard software, and to construct and manipulate likelihood functions from parametric models for censored data. Students will also develop the ability to identify when particular models are appropriate, through the application of diagnostic checks and model-building strategies.
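To illustrate the Kaplan-Meier estimation and censoring described above, here is a sketch using the third-party lifelines package (the module does not prescribe this tool, and the toy data are illustrative assumptions).

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Toy survival data: event=1 is an observed event, event=0 is censored.
df = pd.DataFrame({
    "time": [5, 8, 12, 12, 15, 20, 21, 30],
    "event": [1, 1, 1, 0, 1, 0, 1, 1],
})

kmf = KaplanMeierFitter()
kmf.fit(df["time"], event_observed=df["event"])
print(kmf.survival_function_)      # estimated survival curve S(t)
print(kmf.median_survival_time_)   # estimated median survival time
```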
The course is designed to provide foundational knowledge in linear and non-linear time-series analysis by building awareness of a range of widely used time-series models. While the module focuses on univariate analysis, students will have time to read around the lecture notes and materials to extend their understanding of these methods. By the end of the course, students should understand both the theoretical and practical foundations of time-series analysis, and how to fit, and choose from, a range of models. They will understand different methods of evaluating time-series model performance and how these models can be used to produce forecasts.
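For a concrete sense of fitting a univariate model and forecasting from it, here is a minimal sketch with a simulated AR(1) series; the data-generating process and model order are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulate an AR(1) process with autoregressive coefficient 0.7.
rng = np.random.default_rng(0)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.7 * y[t - 1] + rng.normal()

fit = ARIMA(y, order=(1, 0, 0)).fit()  # fit an AR(1) model
print(fit.params)                      # estimated coefficient close to 0.7
print(fit.forecast(5))                 # five-step-ahead forecasts
```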
| Location | Full Time (per year) | Part Time (per year) |
| --- | --- | --- |
| Home | £13,600 | £6,800 |
| International | £29,150 | £14,575 |
There may be extra costs related to your course for items such as books, stationery, printing, photocopying, binding and general subsistence on trips and visits. Following graduation, you may need to pay a subscription to a professional body for some chosen careers.
Specific additional costs for studying at Lancaster are listed below.
Lancaster is proud to be one of only a handful of UK universities to have a collegiate system. Every student belongs to a college, and all students pay a small College Membership Fee which supports the running of college events and activities. Students on some distance-learning courses are not liable to pay a college fee.
For students starting in 2025, the fee is £40 for undergraduates and research students and £15 for students on one-year courses.
To support your studies, you will also require access to a computer, along with reliable internet access. You will be able to access a range of software and services from a Windows, Mac, Chromebook or Linux device. For certain degree programmes, you may need a specific device, or we may provide you with a laptop and appropriate software - details of which will be available on relevant programme pages. A dedicated IT support helpdesk is available in the event of any problems.
The University provides limited financial support to assist students who do not have the required IT equipment or broadband support in place.
For most taught postgraduate applications there is a non-refundable application fee of £40. We cannot consider applications until this fee has been paid, as advised on our online secure payment system. There is no application fee for postgraduate research applications.
For some of our courses you will need to pay a deposit to accept your offer and secure your place. We will let you know in your offer letter if a deposit is required and you will be given a deadline date when this is due to be paid.
The fee that you pay will depend on whether you are considered to be a home or international student. Read more about how we assign your fee status.
If you are studying on a programme of more than one year’s duration, tuition fees are reviewed annually and are not fixed for the duration of your studies. Read more about fees in subsequent years.
You may be eligible for funding opportunities, depending on your fee status and course. You will be automatically considered for our main scholarships and bursaries when you apply, so there's nothing extra that you need to do.
If you're considering postgraduate research you should look at our funded PhD opportunities.
We also have other, more specialised scholarships and bursaries - such as those for students from specific countries.
Browse Lancaster University's scholarships and bursaries.
Join us online and let us tell you about postgraduate study at Lancaster and how to apply.
Book our online event.

The information on this site relates primarily to 2025/2026 entry to the University and every effort has been taken to ensure the information is correct at the time of publication.
The University will use all reasonable effort to deliver the courses as described, but the University reserves the right to make changes to advertised courses. In exceptional circumstances that are beyond the University’s reasonable control (Force Majeure Events), we may need to amend the programmes and provision advertised. In this event, the University will take reasonable steps to minimise the disruption to your studies. If a course is withdrawn or if there are any fundamental changes to your course, we will give you reasonable notice and you will be entitled to request that you are considered for an alternative course or withdraw your application. You are advised to revisit our website for up-to-date course information before you submit your application.
More information on limits to the University’s liability can be found in our legal information.
We believe in the importance of a strong and productive partnership between our students and staff. In order to ensure your time at Lancaster is a positive experience we have worked with the Students’ Union to articulate this relationship and the standards to which the University and its students aspire. View our Charter and other policies.