Cuppa Conundrums are informal, discussion-based sessions designed to prompt reflection and dialogue around contemporary issues in research and academia. Each session focuses on a specific topic and is structured around a series of guiding questions. The primary aim is to empower researchers to confidently use novel methodologies, knowing they have thoughtfully considered and reflected on their research practices.
Participants spend around 15–20 minutes discussing each question in turn, followed by a short open discussion to bring together key ideas before moving on to the next. A final group discussion at the end allows participants to reflect on overarching themes and insights.
Over the course of the project, we have run sessions on a variety of topics, both at Lancaster and elsewhere. The summaries below highlight some of these sessions, including the questions used and examples of the kinds of conversations they generated. They are intended to offer inspiration for those interested in running similar discussions or exploring these questions within their own research communities.
If you’d like to run a similar session yourself, a facilitator guide is available.
Guiding Questions:
What is the responsibility of the university given the climate crisis?
What are your responsibilities within your role — as a researcher, teacher, administrator, or technician?
Where do we go from here? How do we bridge the gap between aspiration and reality?
Discussion Snapshot:
This session invited participants to reflect on the role of universities in the climate crisis — not only as producers of knowledge, but as civic and moral institutions with responsibilities that extend beyond emissions targets and sustainability plans.
The opening discussion focused on what universities owe to society in a time of climate emergency. Participants questioned whether universities should remain neutral spaces for debate or take an explicit stance on climate action through their operations, teaching, and research priorities. Many felt that while declarations and targets have become common, action has often lagged behind, constrained by financial pressures and competing institutional demands. Others highlighted examples of progress — universities declaring climate emergencies, setting net-zero goals, and integrating sustainability into curricula — while noting the unevenness of these efforts across the sector.
The second discussion turned to individual responsibility. Participants reflected on what action looks like within their own roles and where personal agency meets institutional constraint. There was recognition that those most likely to attend such discussions are already motivated, raising the question of how to engage others and create a shared culture of responsibility. The conversation touched on burnout, institutional barriers, and how changes to frameworks such as the REF or funding criteria could embed sustainability across the system rather than leaving it to individual goodwill.
The final discussion considered how to move from commitment to meaningful change. Participants emphasised that staff and students want to act but often lack the structural support or security to do so. For universities to fulfil their climate responsibilities, sustainability needs to be embedded throughout — in governance, funding, research priorities, and working culture.
Emerging themes: moral and civic responsibility, empowerment and burnout, structural versus individual change, and the tension between aspiration and institutional constraint.
Guiding Questions:
How can researchers ensure that AI tools are used responsibly and do not replace critical human judgment in the research process?
To what extent should AI systems be allowed to make autonomous decisions in research (e.g., when reviewing literature, analysing data, etc.)?
What are the implications of using AI for data generation, synthesis, and analysis in terms of open science, reproducibility, and research integrity?
Discussion Snapshot:
This session invited participants to reflect on how AI is reshaping research practices and the responsibilities that come with its use. Discussion touched on concerns about disclosing AI use and how this might be perceived, the temptation to use AI for time-intensive tasks such as data mining, and the need to maintain a critical stance toward outputs — distinguishing when the technology supports, rather than substitutes for, human judgement.
When considering AI autonomy, participants debated whether AI should act as a facilitator or decision-maker, with some expressing unease about opaque “black box” systems and potential biases in the data that underpin them. The conversation also raised questions about who ultimately controls these tools and how that shapes research integrity.
In thinking about the broader implications for open science, participants discussed whether reliance on AI might lead to a loss of critical research skills or whether it might instead open up new spaces for creativity and collaboration.
Emerging themes: transparency and disclosure, trust and verification, bias and integrity, and the evolving balance between automation and human expertise.
Guiding Questions:
Is social media a public or private space?
Do we need to get consent when using social media data in research?
How might we be mindful of vulnerable groups when using their social media data in research?
Discussion Snapshot:
This session invited participants to reflect on the use of different types of social media data for a variety of research purposes, ranging from scraping large volumes of data from social media sites to anonymising individualised content.
The first discussion began by considering the analogy of the internet, and social media in particular, as a coffee shop: a public space within which private conversations take place. Participants questioned who decides what is public or private, and expressed concerns about consents hidden within the terms and conditions of the major social media platforms. Trust in what is posted online is also eroding, which poses conundrums for the authenticity of responses in online communities. Obtaining informed consent for content from closed forums is particularly difficult.
Would participants give consent, after the fact, for a post that might be personally or professionally compromising? The discussion centred on whether the feasibility of consent depends on the scale of the data being used: approaching each individual would not be possible when scraping thousands of tweets, for example. There is also the issue of deleted or inactive accounts, and accounts belonging to people who have since died. One barometer for effective practice could be reflecting on what other researchers in the area are doing and sharing best practice.
The final discussion started with an acknowledgement that participants may know they are being studied: if people know they are being observed, they may behave differently online. However, participants also considered the consequences of deceptive practices, which could fan the flames of distrust in academics and research more widely. The value of co-design, having gatekeepers to a community, and involving those you want to research throughout the process emerged as vital touchstones for successful research in this area.
Emerging themes: trust and deception, informed consent, authenticity, large versus small data sets, collaboration in best practice.
Guiding Questions:
Is the terminology of citizen science a barrier to participation?
Does the potential value of academic or community contribution outweigh the potential risks of this research practice?
Who owns the content/outcomes/legacy of a research project involving public participants?
Discussion Snapshot:
The first discussion began by considering the importance of definitions and terminology when working in this area, to ensure clarity for everyone involved, including researchers and the public. This then turned to a conversation on how we define the public, and whether the separation between public and researchers embedded in the terminology was itself problematic. One proposal suggested eradicating Citizen Science as a term and moving to something closer to community research.
The second discussion began with the view that the primary benefit comes from research being “done well”. Treating a community merely as a means to an end is harmful; communities should instead be integrated from the outset. Participants reflected on the potential harm to a community, and to the reputation of the university, if work were undertaken in an extractive manner. There are also tensions between short-term funding for community projects and the long-term labour required of everyone involved in projects of this kind. Changing this requires a shift in research culture: allocating workload to academics for community research and empowering individuals to prioritise working with communities.
The final discussion centred on the distinction between intellectual property and the empowerment of individuals when determining ownership. There was also a conversation around the conundrum of ownership versus access: who stores any data produced during a project, and who continues the community’s activities once the official funding for a project has concluded?
Emerging themes: ownership and access, terminology and definitions, co-creation and collaboration, defining roles, empowering researchers and “the public”.
Guiding Questions:
What do you see as the most exciting opportunities/pressing challenges from deepfake technology?
How can we achieve a balance between technological development and responsible and ethical innovation?
What regulation or policy changes could help mitigate the harms of tech development?
Discussion Snapshot:
The first discussion began with a general consensus that identifying positives of deepfake technology was much harder than identifying challenges. Positives included advances in criminal investigation, such as creating mock-ups of perpetrators, and the potential to reduce accessibility issues, particularly in health care for the elderly. The challenges included the replacement of the human element, increased scepticism, the inability to control the spread of misinformation, and the undermining of the creative industries.
The second discussion considered existing examples of bias built into technology development: phones designed for male hands and body sensors designed on Caucasian skin, for example. Participants suggested the need to involve a diverse range of people in testing, moving beyond the heteronormative. There was doubt about whether those in control of legislation have the expertise and training to appropriately anticipate and account for ethical innovation. An alignment of will and motivation is needed across all parties for development to be responsible and ethical. “There is no money in being ethical” was one provocation, so compliance enforced at a governmental level was perhaps the only way to make companies develop responsibly and ethically.
The final discussion covered a wide range of suggestions for policy changes in this area, such as introducing data auditing and educating policy makers on how AI works so that policy itself can be effective. Whether such training should be universal, and who would deliver it, was also considered without resolution. Another suggestion was to treat AI like the Data Protection Act: appoint AI and ethics officers to positions of influence in corporations and governments to regulate its use. Finally, participants weighed making everything Open Access, with the value of anyone being able to read your research set against the downside of anyone being able to make money from it.
Emerging themes: diversity in development, policy making and regulation, challenges outweighing the benefits, ethical and responsible development, in-built biases, accountability.
Guiding Questions:
Do scientists have an obligation to conduct sustainable research?
What are the trade-offs between research quality and environmental sustainability?
How can researchers innovate with AI while minimising the environmental footprint associated with data centres and computational resources?
Discussion Snapshot:
The first discussion began by considering individual researchers’ morals and how these can influence the way they undertake research: does sustainability align with their research priorities, for example? There can also be an absolving of responsibility on the grounds that one’s research is about the environment. The primary issue participants voiced in this area was a general inertia in making change; there is a lack of engagement in sustainability activities and initiatives, with the same small crowd of people repeatedly trying to make change. The crux of the conundrum is a shift in research culture: saying is one thing, doing is another.
The second discussion considered the balance early career researchers face in this conundrum. Attending international conferences, for example, can be extremely valuable for networking and for gaining experience in presenting and sharing research, so the environmental impact of long-haul flights must be weighed against the career opportunities for ECRs. Another discussion centred on the use of materials, where approaches differed depending on cost: when items are cheap they become disposable and are used excessively, whereas when they are expensive, sustainable and creative solutions are developed to extend their life. Participants also considered how the University as an institution could improve in this regard, by sharing resources and allowing more flexibility in procurement providers to keep costs down and minimise waste.
The final discussion centred on the changes needed to combat the environmental risks of AI. Participants called first for greater transparency in the data provided by tech companies on the energy AI uses in each instance. They also noted the need for further guidance from the institution on best practice for researchers, since a lack of education is currently the main missing component. Finally, the impact and usefulness of AI should be quantified for each project: does its use only save time, or can it produce something that human work could not, and what is the environmental cost of doing so?
Emerging themes: inertia and lack of engagement, research culture change, ECR career development, transparency, resourcefulness, guidance and training, challenging the status quo.
Podcast Series
We have also developed a podcast series as part of our research culture project, in which we unpack conundrums on a variety of topics relating to ethical tensions in research. Hosted by Dr Dan Craddock, in conversation with a new guest each week, each episode reflects on a different Cuppa Conundrum event held here at Lancaster University. Topics range from AI in research to climate action, sustainability and citizen science, deepfakes and social media data.
You can find our podcast episodes on Acast, Spotify and Apple Podcasts.
Dataveillance Workshop
We are pleased to invite you to a full-day workshop (9:30am - 4:30pm) on data privacy, digital profiling, and research ethics, developed and delivered by the Reimagining Research Practices team and held in the Data Immersion Suite. This workshop will explore critical issues surrounding the use of personal digital data in research, focusing on its ethical implications and best practices.
Through structured discussions and interactive activities, you will have the opportunity to:
Reflect on how digital data (e.g., social media, wearable devices) is generated and used.
Engage with peers to explore attitudes toward dataveillance.
Contribute to shaping a more ethically aware research culture.
This workshop is open to all staff and students, regardless of career stage or research background.
When: The event is running twice, on 22nd January and 27th January; see below to sign up for either date. Both days will follow the same timings (9:30am - 4:30pm).
Please note that, due to the dynamic displays and bright lighting in the decision theatre, the workshop may pose a risk for those with a history of epilepsy or sensitivity to bright or flashing lights.
Please read our participant information sheet and, if you agree to our terms, sign the consent form at the end of the information sheet before the event takes place.
DECIDE Framework
Dr Heather Shaw has collaborated with colleagues to produce the DECIDE Framework: a framework designed to encourage ethical reflection and discussion throughout all stages of the research process. It acknowledges that research routinely relies on digital data, supported by digital technologies and platforms. The framework addresses ethical considerations in behavioural data generated through human-technology interactions, such as social media activity, app usage, and sensor data. Designed for experimental, observational, primary, and secondary research, it adapts to new forms of digital data while focusing on privacy and ethical challenges unique to human-subject data.
Several resources have been created to support researchers with their ethical reflections and discussions, including the DECIDE Framework Spreadsheet, the DECIDE App, information documents, and flowcharts. We are continuing to improve the app as part of the Reimagining Research Practices project.