Care England’s Policy Officer, Sibel, reflects on her experience attending the Ethics in AI Summit in Oxford, which offered a thought-provoking exploration of the intersection between technology, care, and ethics.
The day commenced with Dr Caroline Green from the Institute for Ethics in AI, whose talk ranged from the application of generative AI in medicine to its broader implications for social care, raising important questions about both the benefits and challenges AI brings to the sector. One key point was the absence of official UK policy or guidance on the use of AI in social care: many AI systems operate outside medical device regulations and are therefore subject to less scrutiny. The Oxford Statement on the responsible use of generative AI in social care therefore arrives at an opportune moment.
In a virtual message, the Minister for Care, Stephen Kinnock, noted AI's rapid growth and its increasing role in improving accessibility and safety in care services. AI, he pointed out, is central to the government's vision for care, offering transformative benefits such as reducing falls by more than 50% and helping to recognise chronic pain through facial recognition technology. The aim is not to replace humans but to elevate and empower care workers, enabling people to maintain choice and dignity in their care.
Katie Thorn from the Digital Care Hub and Daniel Casson, Managing Director of Casson Consulting, followed with an interactive session on the importance of rewarding those who create AI solutions and making those solutions more accessible, emphasising the role of lived experience in building trust. Concerns were raised, however, that AI could deepen inequalities, and that too much focus on the technology itself risks overshadowing the human actions it is meant to support. The danger of cost-cutting in the name of efficiency was also raised, with questions from the audience stressing that care is inherently a human endeavour.
The panel session that followed, chaired by Dr Jane Townson OBE, Chief Executive of the Homecare Association and Chair of the Care Provider Alliance, explored the practical applications of AI in care. Speakers noted that providers expressed a strong desire to implement proven technologies, but there was consensus on the need for more investment in research and for staff to better understand both the benefits and risks of AI. Fear of the technology, coupled with a lack of digital skills, emerged as a barrier that further investment could help address. Issues such as consent and autonomy were also raised, alongside an international comparison with Australia, where commissioning of care has been removed.
The afternoon panel, chaired by Dr Donald Macaskill, Chief Executive of Scottish Care, delved into the ethical considerations surrounding generative AI in care. The discussions focused on the protections needed for care workers handling sensitive data and the risk of bias inherent in generative AI systems, which are shaped by the data they are trained on. The need for explainability and transparency in AI systems was stressed, as was the challenge of ensuring that individuals have the capacity to provide informed consent.
Robotics, and its potential to cause social exclusion despite its success in Japan, was also discussed, alongside environmental impacts, with Finland cited as an example of better practice.
Dr Samir Sinha, Dr Christoph Ellssel, and Silvia Perel-Levin brought a global perspective to the discussion, comparing AI practices across countries. They explored how AI is being used in care internationally, focusing on the ethical challenges and opportunities that arise in diverse cultural and regulatory contexts.
Ultimately, the Oxford Statement on the Responsible Use of Generative AI in Social Care highlights the importance of transparency, accountability, privacy, and inclusivity in using AI to enhance care services while protecting individuals’ rights. It stresses the need for human oversight, safety, and continuous collaboration among stakeholders to address ethical issues and ensure positive outcomes. As AI continues to shape social care, providers must adapt through training, ethical frameworks, and regulatory compliance.
Care England will continue to advocate for clear guidelines, tailoring training resources, and promoting collaboration between care providers, tech developers, and policymakers to ensure the responsible implementation of AI in social care.