DeepSeek AI Faces Criticism Over Lack of Safeguards, Poses Potential Risks

By Alexis Wang

DeepSeek's artificial intelligence technology is under scrutiny as concerns mount over its lack of safeguards, despite its expansive global reach. Unlike Western counterparts such as OpenAI, Google, and Perplexity, DeepSeek operates without established safety guidelines or usage policies, raising alarms about potential misuse. Generative AI left unchecked can cause severe harm, and DeepSeek's current state presents a significant problem.

ActiveFence, a company specializing in content moderation and online safety, tested DeepSeek's V3 model against a set of dangerous prompts. The findings were alarming: the model returned harmful responses in 38% of cases, pointing to a substantial gap in its protective measures.
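The 38% figure implies a straightforward metric: the share of adversarial prompts that elicit a harmful completion. Below is a minimal, hypothetical Python sketch of how such a rate could be computed. The prompt set, `generate` callable, and `is_harmful` classifier are illustrative stand-ins, as ActiveFence has not published its test harness.

```python
# Hypothetical sketch of scoring a red-team evaluation like the one described.
# None of these names come from ActiveFence; they are placeholders.

def harmful_response_rate(prompts, generate, is_harmful):
    """Return the fraction of prompts that elicit a harmful response.

    prompts    -- iterable of adversarial prompt strings
    generate   -- callable mapping a prompt to the model's text response
    is_harmful -- callable labeling a response True if it violates policy
    """
    results = [is_harmful(generate(p)) for p in prompts]
    return sum(results) / len(results) if results else 0.0

# Illustrative usage with toy stand-ins (no real model calls):
if __name__ == "__main__":
    toy_prompts = ["adversarial prompt 1", "adversarial prompt 2"]
    toy_generate = lambda p: "refusal" if "1" in p else "unsafe content"
    toy_is_harmful = lambda r: r == "unsafe content"
    rate = harmful_response_rate(toy_prompts, toy_generate, toy_is_harmful)
    print(f"Harmful response rate: {rate:.0%}")  # prints 50% for these toy inputs
```

Under this framing, ActiveFence's reported result corresponds to a rate of 0.38 over its private prompt set.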

The absence of proper safeguards in DeepSeek's AI could lead to serious harm. Over the past year, bad actors exploited generative tools to create deepfakes of well-known personalities, which were then used to mislead the public and spread propaganda. Given DeepSeek's lack of protective measures, there is genuine concern that similar malicious activity could proliferate.

The possibility of criminals exploiting DeepSeek's services is another pressing concern. Without robust guidelines and policies, the technology could be used to orchestrate scams and manipulate public opinion at scale. The potential for such exploitation underscores the need for immediate attention and intervention.
