Source: Tech – South China Morning Post

Chinese artificial intelligence start-up DeepSeek has lifted the veil on how it filters data to train its models, raising red flags about "hallucination" and "abuse" risks.
In a document published on Monday, the Hangzhou-based start-up said it “has always prioritised AI security” and decided to make its disclosure to help people use its models, at a time when Beijing is ramping up oversight over the industry.
The company said data in the pre-training stage was "mainly" collected from publicly…