Our Initiative

At Lancaster University, we believe that AI should be safe to use. That's why we've created Safe AI at Lancaster (SAIL), a service that adds extra safeguards for AI use, built with our students' safety as the top priority.

How does it work?

SAIL provides free, safer access to generative artificial intelligence (Gen AI) through large language models (LLMs). It offers students and staff at Lancaster University access to existing AI models, including OpenAI’s ChatGPT, Microsoft’s Phi and DeepSeek – with added features to increase user safety and security.

Please note that lecturers do not have access to your conversation history in SAIL – it is private, like your email inbox.

How is it safer?

Unlike many publicly available AI services, SAIL ensures that:
• Your data is not shared with third parties or used to train the AI model.
• Safety guardrails help prevent the AI from producing hateful or harmful responses.
• PII (personally identifiable information) protection gives you the option to shield sensitive personal information from AI model providers.
• The servers where data is temporarily stored are based in Europe and are GDPR-compliant.
• Increased privacy – conversations are automatically deleted after 30 days.
• Dictation is performed on your device, so recordings of your voice are never sent to a server for processing, where they could be misused.

Ready to get started?

Become one of our first testers! Sign up now to start using Safe AI at Lancaster: SAIL Tester Sign-up Form

Leave your feedback for SAIL here: SAIL Ideas Wall
