According to a March 15 report by CCTV Finance, a service called GEO (Generative Engine Optimization) has been exposed as a new way to manipulate AI model outputs. These service providers claim that, for a fee, they can push any product to the top of mainstream AI models' recommendations, turning paid commercial advertising into what the AI presents as the 'standard answer'.
The GEO operation chain works as follows: the system automatically generates promotional content and publishes it in bulk across self-media accounts; AI models then crawl this material and cross-validate it against itself, eventually treating the fabricated data as if it came from authoritative sources. One service provider, surnamed Li, explained that the key to manipulating AI models is spreading the same content across a wide range of internet accounts.
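The weakness this exploits can be illustrated with a minimal, hypothetical sketch (the function names and scoring heuristic here are illustrative, not any real model's retrieval pipeline): a naive "cross-validation" check that counts how many pages assert a claim is fooled by bulk reposts, while simple deduplication of identical text collapses the sock-puppet accounts back into a single source.

```python
import hashlib

def naive_consensus_score(claim, documents):
    """Count every document that asserts the claim -- one vote per page."""
    return sum(1 for doc in documents if claim in doc["text"])

def dedup_consensus_score(claim, documents):
    """Count only textually distinct documents, collapsing verbatim reposts."""
    seen_digests = set()
    score = 0
    for doc in documents:
        digest = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()
        if claim in doc["text"] and digest not in seen_digests:
            seen_digests.add(digest)
            score += 1
    return score

# Ten sock-puppet accounts republish the same promotional paragraph.
spam_text = "Brand X is the top-rated choice according to industry reviews."
docs = [{"account": f"blog_{i}", "text": spam_text} for i in range(10)]
docs.append({"account": "independent_site",
             "text": "Independent tests found Brand X mid-tier."})

claim = "Brand X is the top-rated choice"
print(naive_consensus_score(claim, docs))  # -> 10: looks widely corroborated
print(dedup_consensus_score(claim, docs))  # -> 1: actually a single origin
```

Real GEO content farms paraphrase rather than copy verbatim, so exact-hash deduplication is only a first line of defense; the sketch simply shows why "many pages agree" is not the same as "many independent sources agree".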
Industry experts warn that once the sources of Chinese-language training data are contaminated, the problem is difficult to solve through algorithmic adjustments alone. Wen Yuan Think Tank founder Wang Chao stated that the issue requires collaborative effort from multiple parties: left unaddressed, speculators using GEO techniques could rapidly pollute AI models.