Essential Guide: Using ChatGPT Safely in Social Care and Child Protection


Recent headlines have highlighted concerns about using ChatGPT in social care, especially after Sam Altman warned about the lack of legal confidentiality when using it as a therapeutic tool. As AI chatbots become more common in therapeutic and administrative roles, it's crucial for social care providers to understand the implications for confidentiality, ethics, and professional responsibility.
In a TechCrunch article, OpenAI CEO Sam Altman emphasized that conversations with ChatGPT are not legally protected. This is a significant consideration for social care professionals who might use AI as a supplementary tool. Information shared or generated through AI interactions could be subpoenaed or disclosed in legal proceedings.
Several court cases illustrate these risks. In Victoria, Australia, a child protection worker used ChatGPT to draft sensitive court reports. Unfortunately, the AI-generated report downplayed child safety concerns, leading to serious legal and ethical consequences, including a ban on AI use in that agency. In the UK, the Harber v. HMRC case revealed the dangers of AI-generated "hallucinations" when case summaries produced by ChatGPT were found to be fabricated. The tribunal highlighted the severe professional and legal risks of relying on unverified AI output.
These incidents demonstrate the potential harm that AI-generated inaccuracies or breaches of confidentiality can cause in social care settings. For social care providers, maintaining client confidentiality is not just ethical—it's a legal requirement. Unlike interactions with trained professionals, conversations with AI chatbots like ChatGPT are not legally protected. Therefore, professionals using ChatGPT should never input personally identifiable or sensitive client information, ensure AI-produced content is independently verified, and inform clients about the lack of confidentiality and limitations of AI-generated support.
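The first of those safeguards, never sending identifiable client data to a third-party chatbot, can be partially enforced mechanically before any text leaves the organisation. Below is a minimal, illustrative sketch of pre-submission redaction; the redact function, the placeholder labels, and the regex patterns are assumptions for illustration, not a vetted anonymisation tool, and no automated redaction should be treated as a substitute for professional judgement:

```python
import re

def redact(text, known_names):
    """Replace known client names and common identifier patterns
    with neutral placeholders before text is sent to any external AI tool.
    Illustrative only: real deployments need a properly reviewed
    anonymisation process, not a few regexes."""
    # Replace each known client name with a numbered placeholder.
    for i, name in enumerate(known_names, start=1):
        text = re.sub(re.escape(name), f"[CLIENT_{i}]", text, flags=re.IGNORECASE)
    # UK National Insurance number shape, e.g. AB123456C (simplified pattern).
    text = re.sub(r"\b[A-Z]{2}\d{6}[A-D]\b", "[NI_NUMBER]", text)
    # Dates of birth written as DD/MM/YYYY.
    text = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DOB]", text)
    return text

note = "Jane Doe (NI AB123456C, born 02/03/2010) missed her review meeting."
print(redact(note, ["Jane Doe"]))
# [CLIENT_1] (NI [NI_NUMBER], born [DOB]) missed her review meeting.
```

Even with redaction in place, the other safeguards still apply: the de-identified output must be independently verified before use, and clients should still be told when AI tools are involved.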
To address these challenges, social care providers should focus on education and training, ensuring staff understand the capabilities, limitations, and risks of AI chatbots. Developing robust internal policies to govern AI use, outlining ethical guidelines, confidentiality expectations, and verification protocols is essential. Transparency with clients is also crucial; they should be informed that AI chatbot interactions lack confidentiality protections, and their informed consent should be obtained. Human oversight is necessary to avoid reliance on unverified AI outputs.
As AI tools like ChatGPT become more prevalent in social care and child protection services, the need for caution and diligence increases. While AI offers significant advantages, such as efficiency and support, the risks to client safety, confidentiality, and organizational reputation can be substantial if not managed properly. Providers should stay informed about the evolving legal landscape to ensure all AI interactions remain ethical, compliant with confidentiality obligations, and professionally sound. The key takeaway is that AI chatbots can support, but should never replace, qualified human judgment in social care settings.
Written by Michael Hinett