At Lancaster University, we believe that AI should be safe to use. That's why we've created Safe AI at Lancaster (SAIL), a service with additional features for safer AI use, built with our students' safety as the top priority.
Notice
SAIL is currently running as a Proof of Value pilot for approximately four months, after which outcomes will be reviewed. Continuation or wider rollout will depend on evidence gathered during the pilot. Access is managed through sign-up and approval. Features and availability may change during this period.
About SAIL
About SAIL accordion
SAIL is an early-stage service being tested through a Proof of Value pilot to understand demand, value, and support needs. SAIL is Lancaster University’s secure environment for exploring and using approved generative AI tools. It provides access to:
Institution-approved large language models
A prompt library
A knowledge base
Document analysis tools
Guardrails that help users experiment safely and responsibly.
SAIL allows staff, students and researchers to work with AI in a controlled space that follows the university’s data protection, privacy and governance requirements.
SAIL has been created to give the university community a secure place to use AI without needing personal subscriptions. It reduces the risks of shadow AI, supports safe experimentation and aligns with the university’s AI governance and digital strategy. SAIL ensures that data is handled securely and remains under institutional control.
All staff, students and researchers with a valid Lancaster University account can request access to SAIL. Access is managed through sign-up and approval.
SAIL provides free, safer access to generative artificial intelligence (Gen AI) through large language models (LLMs). It offers students and staff at Lancaster University access to existing AI models, including OpenAI's ChatGPT, DeepSeek, and more, with added features to increase user safety and security.
Your SAIL chats are private, like your email inbox – lecturers do not have access to your conversation history.
How is it safer?
Unlike other AI services, SAIL ensures:
Your data is not shared with third parties or used to train the AI model.
Safety guardrails help prevent the AI from providing hateful or harmful responses.
Personally Identifiable Information (PII) protection gives you the option to shield sensitive personal information from AI model providers.
The servers where data is temporarily stored are based in Europe and are GDPR-compliant.
Increased privacy: conversations are automatically deleted after 30 days.
Dictation is performed on your device, so recordings of your voice are not sent to a server for processing, where they could be misused.
SAIL works in your normal web browser and does not require additional software. You can access SAIL at: www.lancaster.ac.uk/itpi/web/sail
To sign up, please request access using your standard university username and password. Once approved, access may take up to 12 hours to update across university systems.
Yes. Staff, students and researchers may see different AI models, knowledge bases or tools depending on their role. This ensures that access aligns with university governance, data sensitivity, and local needs.
Yes. SAIL is browser-based and available from any location, provided you sign in with your university credentials.
Features and Tools
Features and Tools accordion
You can chat with a selection of institution-approved models. All models run in no-training, no-log mode. Your data does not train any external systems, and model providers do not retain your prompts.
Different models specialise in different tasks, such as writing, analysis, reasoning, or creativity. The Model Switcher helps you choose the right one for your task. This keeps your work efficient and reduces unnecessary compute.
SAIL provides a safe space to learn how different models behave, so you can build confidence, understand limitations, and experiment responsibly.
You can upload files such as Word documents, PowerPoint slides, or PDFs and ask questions about their content. This is helpful for summarising, analysing, checking structure, or explaining material.
You may upload personal or sensitive information if it is part of your normal work. SAIL will highlight if it detects personal or sensitive data to help you stay aware and make an informed choice. These alerts do not block your work.
Uploaded content is stored securely for your session and deleted automatically after a short retention period.
ASK allows you to run lookups on a document such as a policy, handbook, or guidance note. You can ask for definitions, explanations, or quick clarifications based on approved content.
ASK is particularly useful for exploring long or complex documents, helping you find relevant sections without having to manually search through the text.
Knowledge bases contain reliable, university-specific information. When you enable a knowledge base, SAIL can answer questions using official material from the relevant service or department.
Knowledge bases are curated by teams across the university to ensure accuracy and consistency. Using them helps you work more effectively and reduces the risk of relying on out-of-date information.
Each knowledge base is linked to you based on your staff or student role, so you only see content that is appropriate for your needs.
The prompt library provides pre-approved prompts to support teaching, research, productivity and administrative tasks. Each prompt aligns with guidance on responsible AI use and has been tested to produce reliable and efficient results.
Using high quality prompts can reduce the number of attempts needed to get the outcome you want. This saves time and supports more sustainable use of AI by reducing repeated queries and unnecessary compute.
Prompts are reviewed and updated regularly based on user feedback and evolving best practice.
SAIL provides alerts if it detects personal data, sensitive information, or student assessment material. These alerts do not block your work. They are there to raise awareness and help you decide whether the information is appropriate to include.
Your conversations remain private and are not visible to anyone else. They are stored securely for a short period, then automatically deleted.
Some features, such as the Eduroam Helper, run in a simplified kiosk view. Kiosk mode combines a knowledge base with an approved prompt to support a specific task. It provides quick access to consistent and reliable help while keeping the workflow focused.
Kiosk mode is useful for guided tasks that benefit from structure and clarity, and ensures users receive correct information without needing to navigate the broader SAIL interface.
Promptions (prompts + options) combine what you type with dynamic, clickable controls that let you precisely guide AI responses. Instead of rewriting prompts repeatedly, you adjust settings such as detail level, tone, format, and focus area. The system automatically generates relevant options based on your specific question, making it faster and easier to get exactly the response you need.
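As a rough illustration of the promptions idea (not SAIL's actual implementation; the function name and option values below are hypothetical), a typed question can be combined with the currently selected option values to form the final prompt:

```python
# Hypothetical sketch: a "promption" is the user's question plus the
# values of clickable option controls, merged into one instruction.
def build_prompt(question: str, detail: str = "brief",
                 tone: str = "neutral", fmt: str = "paragraph") -> str:
    """Combine the typed question with the selected option values."""
    options = (f"Respond with {detail} detail, in a {tone} tone, "
               f"formatted as a {fmt}.")
    return f"{question}\n\n{options}"

# Changing an option rebuilds the prompt without retyping the question.
prompt = build_prompt("Explain photosynthesis",
                      detail="high", tone="friendly", fmt="bullet list")
```

Adjusting a control simply regenerates the combined instruction, which is why promptions avoid the repeated rewriting described above.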
Capture your meeting notes by dictating in real-time or pasting existing notes, then instantly choose a quick action:
Quick Summary (concise overview of key points)
Detailed Notes (comprehensive organised record with sections for topics, decisions, and next steps)
Compile Actions (extract all action items, tasks, and to-dos with assignees and deadlines into a clear list).
The Study Planner agent accesses your timetable to help you organise your academic schedule. It can view your classes and lectures to suggest optimal study times, identify free blocks for group work or assignments, and create personalised study schedules around your existing commitments. By analysing gaps in your timetable, it helps you balance your workload across the week, making it easier to manage your time without tracking everything manually.
Conversations are stored in a secure internal session store for a limited period to support your working session. They are automatically deleted after 30 days.
No. Your SAIL conversations are private to you, similar to your university email. They cannot be seen by other users.
No. SAIL connects only to institution-approved models in no-training, no-log mode. This means external providers do not retain prompts or outputs.
No. SAIL does not automatically access institutional datasets. You choose what you input, and you should avoid sensitive, confidential or assessment related content.
No, SAIL is designed to operate in a no-training, no-log mode with institutional controls, so prompts and outputs are not reused to improve models, and are not retained outside University control.
SAIL uses a short-term session store and is designed to purge cached session data automatically after 30 days by default.
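A minimal sketch of how a 30-day retention purge might work, assuming a simple in-memory session store keyed by creation time (SAIL's real storage design is not public, so this is purely illustrative):

```python
# Illustrative retention purge: drop any session older than 30 days.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

def purge_expired(sessions: dict, now: datetime = None) -> dict:
    """Keep only sessions created within the retention window."""
    now = now or datetime.now(timezone.utc)
    return {sid: created for sid, created in sessions.items()
            if now - created < RETENTION}
```

In practice such a purge would run on a schedule so that expired session data is removed automatically rather than on demand.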
You should avoid entering sensitive personal data unless it is necessary for your task. If you do choose to include personal data, keep it minimal and ensure you have a lawful basis to process it, especially if it relates to other people.
You must follow University policy and your local information governance rules. If your material includes personal data, confidential research, or assessment items, only proceed where policy allows, keep content to the minimum needed, and do not treat AI output as a final authority.
SAIL protects your privacy. Your chat history is private to you, similar to your email. No one else can see your conversations.
All models operate in no-training, no-log mode, so your data is not used to train external AI systems. SAIL stores information only for as long as needed for your session and deletes it automatically after a short retention period.
You may enter personal or sensitive data if it is part of your role. SAIL will notify you when it detects such information to help you make informed decisions.
Responsible Use, Academic Integrity and User Responsibilities
Responsible Use, Academic Integrity and User Responsibilities accordion
No. SAIL provides access to approved AI models, but the accuracy and suitability of any output depends on your critical evaluation. AI can produce incorrect, biased, or incomplete information. You must always check and verify content before using it in your studies or work.
AI tools produce text by predicting patterns in their training data. They are not subject experts and they do not understand context in the way a human does. The university cannot guarantee correctness and does not validate model outputs. This is similar to using the internet or other digital tools. You remain responsible for how you use the information.
Avoid entering sensitive data unless you have deliberately chosen to continue in PII-enabled mode.
AI outputs are a support tool. They can help with drafting, summarising, planning, and exploration. They cannot replace reading, analysis, or critical judgement. AI outputs are not academic evidence and must not be used as authoritative sources.
Red means AI must not be used in the assessment.
Amber means AI can be used in limited ways, for example planning or structure.
Green means AI-supported work is permitted with clear citation.
Each module and each assessment specifies the allowed level. If you are unsure, you must ask your tutor before using AI.
Yes. If you use AI in any part of an assessment where its use is allowed, you must acknowledge this clearly. Check with your department for the appropriate approach under the traffic light system.
In most academic work you cannot cite AI as a source of evidence. AI does not produce verifiable academic knowledge. If AI is allowed in your assessment, you should use it to support your thinking, but factual claims must be verified with peer reviewed or authoritative sources.
No. SAIL uses a no-training, no-log configuration for all approved models. Prompts and responses are kept only in a secure session store and are automatically deleted after 30 days.
If you choose to continue after the PII warning, your session will switch into PII enabled mode. You must still avoid entering unnecessary sensitive information. The data still follows the same no training and auto deletion rules.
Yes. Lancaster University expects students and staff to apply academic literacy skills. Always cross check important claims, statistics, citations and arguments with reliable sources. AI is a starting point, not a replacement for research.
SAIL provides safe access, clear guidance and model safeguards. Once the university provides these protections, responsibility shifts to you. This is the same principle used for internet searching, digital tools and reference managers. You are expected to use good judgement and follow academic rules.
Users must follow Lancaster University’s Responsible Use of AI guidance and the Acceptable Use Policy.
SAIL helps detect when you enter personal data or confidential information. It will warn you if it detects potential Personally Identifiable Information (PII), and you then have the choice to continue, mask the data, or stop. This raises awareness so that you understand what you are sharing when chatting with AI.
These alerts are designed to increase awareness, not prevent use.
You are responsible for deciding whether entering personal data is appropriate for your task. You should always follow the university principles for the responsible use of AI.
There are three levels of PII:
Low sensitivity - publicly accessible information with minimal risk, such as public job titles, postcodes, or general age ranges. This requires basic safeguards and poses a low risk if exposed.
Medium sensitivity - information that could be used for identity theft or cause inconvenience if exposed, such as names, email addresses, and home addresses. This requires strong security measures.
High sensitivity - information that, if improperly disclosed, could lead to severe harm, identity theft, or financial damage, such as National Insurance numbers, medical records, and financial account information. This requires the highest level of protection and the strictest security controls.
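As an illustration only, a simplified PII check might pair pattern detection with a sensitivity level and a masking option. The patterns, categories, and levels below are hypothetical examples, not SAIL's actual detection rules:

```python
# Illustrative PII handling: detect a few common patterns, tag each
# with a sensitivity level, and optionally mask matches before the
# text is sent to a model. Real-world detection is far more involved.
import re

PII_PATTERNS = {
    "email": (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "medium"),
    "uk_phone": (re.compile(r"\b0\d{10}\b"), "medium"),
    "nhs_number": (re.compile(r"\b\d{3} \d{3} \d{4}\b"), "high"),
}

def detect_pii(text: str) -> list:
    """Return (category, sensitivity) pairs for any PII found."""
    return [(name, level) for name, (pattern, level) in PII_PATTERNS.items()
            if pattern.search(text)]

def mask_pii(text: str) -> str:
    """Replace detected PII with a category placeholder."""
    for name, (pattern, _level) in PII_PATTERNS.items():
        text = pattern.sub(f"[{name.upper()}]", text)
    return text
```

Detection like this warns rather than blocks, mirroring SAIL's approach: the user sees what was found and chooses to continue, mask, or stop.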
PII alerts are designed to help you understand when you are working with sensitive information and to support good data handling. They help you pause and check that you are using AI appropriately, especially in situations where privacy or confidentiality is important.
No AI system is perfect. SAIL encourages safe experimentation while helping you learn the limitations of current AI tools. Always critically review outputs.
FAQs - Responsible Use, Academic Integrity and User Responsibilities - continued accordion
No. You must follow academic integrity rules. SAIL can help you understand a topic, plan, critique drafts, improve clarity, or test your understanding, but you remain responsible for producing your own work and citing sources appropriately.
AI can be wrong, incomplete, or overconfident. Treat outputs as a starting point, cross-check facts, and use primary sources for anything important, especially in research, policy, finance, legal, or health-related contexts.
Stop and do not reuse the output as-is. Reframe your prompt, try a different model, and report concerning results through the SAIL support route so the team can improve guardrails and guidance.
Using SAIL for Safe Experimentation
FAQs - Using SAIL for Safe Experimentation accordion
You can use SAIL to:
Explore how AI might support your work.
Learn prompt techniques.
Experiment with summarising or analysing documents.
Understand how AI models behave.
Test ideas while keeping data secure.
You should check:
Whether the information is necessary for the task.
Whether it can be redacted or reduced.
Whether the task falls under your normal job responsibilities.
Whether your department has any local restrictions.
That you are following the university responsible AI principles when working with personal or sensitive data.
Relying on outputs without checking accuracy.
Entering Personally Identifiable Information (PII), confidential data or assessments.
Using AI where prohibited by departmental rules.
Assuming the AI output is correct, authoritative or complete.
If you experience a technical problem, contact the ISS Helpdesk.
SAIL includes built-in request options for new models and enhancements. You can also submit ideas on the university Ideas Wall.
Technical Questions
Technical Questions accordion
Models are selected based on governance approval, security review, licensing and responsible AI considerations. Only models that meet university governance standards can be enabled.
Personal accounts may retain data, store prompts or breach assessment and data protection rules. SAIL provides a secure alternative under university governance controls.
To reduce risks related to personal data, assessments, sensitive information or policy non-compliance.
Roadmap and Development
Roadmap and Development accordion
Yes. SAIL is developed using feedback from the university community. New prompts, tools, models and knowledge bases will be added based on governance and user needs.
Use the built-in feedback options or contribute to the Ideas Wall.
Personally Identifiable Information (PII), Workload and User Choice
PII, Workload and User Choice accordion
Yes. SAIL does not block personal data and you may need to process PII as part of your normal workload. SAIL provides alerts to help you understand when personal or sensitive information has been detected. These alerts are designed to increase awareness, not prevent use.
You are responsible for deciding whether entering personal data is appropriate for your task. You should always follow the university principles for the responsible use of AI.
PII alerts are designed to help you understand when you are working with sensitive information and to support good data handling. They help you pause and check that you are using AI appropriately, especially in situations where privacy or confidentiality is important.
You should check:
Whether the information is necessary for the task
Whether it can be redacted or reduced
Whether the task falls under your normal job responsibilities
Whether your department has any local restrictions
Follow the university responsible AI principles when working with personal or sensitive data.
No. Your chat history is private to you. No one else can see your conversations. Session data is automatically deleted after 30 days.
Staff Use and Day-to-Day Work
Staff Use and Day-to-Day Work accordion
Yes. You can use SAIL to help with summarising, drafting, planning and improving productivity. Always check outputs carefully and follow responsible AI principles.
AI supports productivity and experimentation. Human judgement, expertise and decision making remain essential in all university roles.
Sustainability and Responsible Digital Practice
Sustainability and Responsible Digital Practice accordion
SAIL supports sustainability by consolidating AI use into a single institutional service. This reduces the environmental impact of many separate personal AI subscriptions and removes unnecessary duplication of compute workloads.
SAIL uses shared, institutionally governed infrastructure with monitored cloud efficiency. Centralisation helps reduce the overall footprint compared to thousands of separate external accounts running independently.
SAIL encourages users to build understanding of how AI models work so they can use them more effectively and reduce unnecessary compute. Better model literacy leads to more efficient use of AI and fewer wasted queries.
The prompt library helps you reach successful outcomes in fewer attempts. Well-crafted prompts reduce repeated queries and improve efficiency. This lowers compute consumption and contributes to more responsible digital practice.
Using any AI service consumes energy, so SAIL use has a footprint. The benefit is that SAIL helps the University manage that footprint through governance, monitoring, and encouraging appropriate model selection, rather than leaving usage scattered across personal accounts and unmonitored services.
SAIL provides shared prompt libraries, guidance, and training so users get better results with fewer attempts. It also supports a governed environment where usage patterns can be reviewed and policy controls can be applied if needed.
Use the smallest suitable model for the task, keep prompts focused, avoid re-running the same request repeatedly, and only upload the parts of a document you actually need analysed. If a task can be done with a short summary or bullet list, ask for that rather than a long response.
SAIL uses cloud infrastructure where the energy mix and efficiency are managed at data centre level. SAIL’s practical sustainability benefit is that it centralises AI access in a controlled service where the University can manage and report on usage rather than relying on ad hoc personal services.
SAIL is designed as a central service, which makes it feasible to track and review usage at a service level. This supports evidence-based decisions about demand management and platform optimisation.
Your Rights Under Data Protection Law
Your Rights Under Data Protection Law accordion
See the SAIL Privacy Notice to understand how personal data is processed when you access SAIL. SAIL operates within Lancaster University's data protection framework, and your rights under UK GDPR apply to the personal data processed within the platform. Below is a summary of how each right applies in the context of SAIL.
You can view your chat history and uploaded content directly within SAIL. This provides direct access to the personal data you have entered.
SAIL does not store institutional personal data beyond what you supply yourself, for example text you type, files you upload or preferences you set. Because SAIL does not alter or enrich your data, this right does not generally apply within SAIL.
This right may apply in specific cases. SAIL automatically deletes session data after 30 days. Where additional erasure is requested, it will be assessed against the university’s legal obligations and any legitimate reasons to retain the data.
Since SAIL processes only the data you choose to input, you can self-manage this right by choosing what to enter. SAIL does not process university held personal data on your behalf. Restriction is therefore controlled by your own usage.
This right applies only to data processed by automated means on the basis of consent or contract. SAIL processes data under the university's operational requirements for providing digital services, so data portability does not apply.
This right does not apply because SAIL operates under the legal basis of providing a university service that forms part of your digital account provision.
SAIL provides clear notices about data retention, privacy protections and PII detection to support responsible use.
You may contact the University Data Protection Officer if you have concerns about how your data is handled within SAIL.
Access, governance, and support
Access, governance, and support accordion
SAIL is designed as a secure, University-governed hub with approved model endpoints, auditability, and privacy controls, reducing the risks associated with shadow AI, unvetted terms, and uncontrolled retention.
SAIL is designed to support audit and governance at a service level. This helps manage misuse and protect the institution, while still operating with privacy-first controls around prompts.
Try a smaller, clearer prompt, reduce the size of your upload, or switch to a different model that better fits the task. If the issue persists, use the SAIL support route and include what you were trying to do, without pasting sensitive content.