Concerns Raised by Microsoft Worker Over AI Tool Generating "Inappropriate" Content


A Microsoft Corp. software engineer, Shane Jones, has alerted the company’s board, lawmakers, and the Federal Trade Commission (FTC) about potential risks associated with its AI image generation tool, Copilot Designer. In letters sent to these entities, Jones expressed concerns that Microsoft is inadequately safeguarding its AI tool, allowing it to produce abusive and violent content.

Jones identified a security vulnerability in OpenAI’s latest DALL-E image generator model, a component embedded in various Microsoft AI tools, including Copilot Designer. He reported the issue to Microsoft and urged the company to temporarily halt the public use of Copilot Designer until enhanced safeguards could be implemented.

Despite Microsoft publicly promoting Copilot Designer as a safe AI product for users of all ages, Jones claimed in his letter to the FTC that the company is well aware of systemic issues leading to the creation of harmful and inappropriate images. According to Jones, Copilot Designer lacks the necessary warnings or disclosures for consumers to be aware of these risks.

In his communication with the FTC, Jones disclosed that Copilot Designer had a tendency to randomly generate “inappropriate, sexually objectified images of women” and also produced harmful content in various categories, such as political bias, underage drinking and drug use, misuse of trademarks and copyrights, conspiracy theories, and religious content.

The FTC acknowledged receipt of the letter but declined to comment further. The incident adds to growing concerns about the capacity of AI tools to generate harmful and offensive content.

Microsoft recently faced reports of disturbing responses from its Copilot chatbot, prompting an investigation. In February, Alphabet Inc.’s Gemini, a flagship AI product, received criticism for generating historically inaccurate scenes in response to user prompts.

Jones also reached out to the Environmental, Social, and Public Policy Committee of Microsoft's board, emphasizing the importance of voluntarily and transparently disclosing known AI risks, especially when marketing products to children.

Microsoft responded with a commitment to addressing employee concerns in line with company policies and expressed gratitude for employees' efforts in testing and improving the safety of its latest technology. OpenAI, which develops the underlying AI model, did not respond to requests for comment.

Jones, who has been raising these concerns for the past three months, has also contacted Democratic Senators Patty Murray and Maria Cantwell, as well as House Representative Adam Smith, urging an investigation into the risks of AI image generation technologies and the corporate governance practices of the companies developing and marketing such products. Lawmakers have not yet responded to requests for comment.
