
VANK (Voluntary Agency Network of Korea) announced that it has identified multiple types of errors, including factual inaccuracies, missing information, and distorted context, after reviewing the accuracy of information on Seoul's administration and policies, tourism, and cultural heritage provided by generative AI platforms. VANK said such errors could affect not only citizens' access to public policies but also broader perceptions of Seoul as a city.
The latest review builds on VANK's ongoing efforts to examine errors in generative AI systems at the regional level, including earlier checks focused on Gyeonggi Province, Gyeongju, and North Chungcheong Province. To assess how Korea's public information is learned and reproduced in the global AI environment, VANK selected Seoul as the focus of a city-level analysis. Seoul was chosen because it is the Korean city whose information is most frequently sought by both domestic and international users, meaning that AI-related errors could have a wide-ranging impact on policy awareness and overall trust in the city.
The review covered major global generative AI platforms with high user traffic, including ChatGPT, Gemini, Copilot, Grok, and Perplexity. VANK submitted questions related to Seoul and compared the AI-generated responses with official Seoul Metropolitan Government materials and public data to identify inaccuracies. The analysis found that the same or similar errors appeared repeatedly across administrative and policy information, tourism guidance, and cultural heritage content.
At the same time, the way citizens and users search for public information is changing rapidly. Rather than consulting city websites, reading official notices, or contacting call centers, many users now first encounter and evaluate policy, tourism, and cultural heritage information through generative AI. As a result, decisions about policy eligibility, travel planning, and perceptions of cultural heritage are increasingly based on AI-generated responses.
Against this backdrop, VANK said that errors in public information provided by generative AI could limit citizens’ access to their rights, create confusion in tourism and usage decisions, and distort perceptions of the city and its history. The organization explained that this study was conducted to comprehensively identify patterns and types of such errors.
When asked about Seoul’s administrative and policy programs, generative AI systems were often found to oversimplify eligibility requirements or omit key conditions. According to the findings, inaccurate explanations were repeatedly provided for major policies such as youth housing programs, youth allowances, the Climate Card, free school meals, and childbirth-related benefits. Although these policies involve complex criteria—including eligibility, income thresholds, application methods, and usage periods—some AI platforms presented only core elements or offered generalized explanations without reflecting exceptions, making it difficult for users to accurately assess whether they qualified.
Similar problems were observed in tourism and transportation information. In the case of the Climate Card, some AI systems incorrectly stated that one-day passes do not exist, that cash top-ups are not allowed, or that foreign visitors cannot use the card. These claims differ from actual operating rules and could directly disrupt travel planning and cost calculations for both domestic and international visitors to Seoul.
Errors were also frequently found in general tourism information. The review revealed numerous inaccuracies in basic details such as admission fees, operating hours, and usage guidelines for major attractions. For royal palaces such as Gyeongbokgung and Changdeokgung, some AI responses cited children's admission fees that do not actually exist or presented prices without distinguishing between domestic and international visitors or between age groups. Ticket prices for Lotte World were also reported incorrectly, including inaccurate discount rates for holders of the "Multi-Child Happiness Card."
Mistakes in operating hours were also common. In some cases, Changdeokgung's seasonal visiting hours were not reflected, while Changgyeonggung, whose hours remain the same year-round, was incorrectly described as operating on varying schedules. The analysis also found that recent updates, such as extended Friday hours at the Seoul Museum of History, were not reflected in AI-generated responses.
Problems extended to image generation related to tourism content. In one case, an image request for "Seouldal," a specific Seoul attraction, produced a generic moon scene in an urban setting, apparently because the name's "dal" (Korean for "moon") was interpreted literally. The result highlights how generative AI may fail to grasp the context and substance of tourism-related content.
Similar issues were repeatedly observed in descriptions and image generation related to Seoul’s cultural heritage. From basic facts about tangible and intangible heritage to historical context and symbolism, inaccurate explanations were provided, and in some cases cultural elements were mixed with styles from other regions. Image analysis showed repeated structural inaccuracies, such as Sungnyemun being depicted without its distinctive vertical signboard or being portrayed as a palace rather than a gate. There were also cases in which Gyeongbokgung was rendered in a style resembling China’s Forbidden City, raising concerns about confusion over cultural identity and origin.
Images of Seoul’s five major palaces were often generated in highly similar forms, making it difficult to distinguish their unique characteristics. VANK said this suggests that generative AI systems are producing simplified images without fully learning each site’s defining features and historical context. Errors were also found in written descriptions, including underreporting the number of Seoul-designated intangible cultural heritage items and incorrectly stating that the Hanyangdoseong City Wall has already been listed as a UNESCO World Heritage site. In some cases, AI systems generated information about things that do not exist, a phenomenon commonly referred to as hallucination.
Based on the Seoul study, VANK said it confirmed that errors in generative AI could lead to restricted access to citizens’ rights, declining trust in administrative and tourism policies, and distorted perceptions of the city and its cultural heritage. The organization stressed the importance of ensuring the accuracy and standardization of public data, as well as establishing systems for continuous updates, in an environment where AI use is expanding.
Kim Ye-rae, a youth researcher at VANK who participated in the project, said errors in AI-generated information on administration and policy are not simple mistakes in answering individual questions, but a structural problem in which policy frameworks and eligibility criteria are repeatedly learned and reproduced without proper verification. She noted that when incorrect information is provided about policies directly tied to daily life—such as housing, welfare, and childbirth—citizens may fail to recognize their rights and give up on accessing support altogether. “When policies exist but disappear in the process of information delivery, trust in public administration inevitably weakens,” she said, adding that there is an urgent need for government agencies and platforms to manage public policy information as clearly defined reference data and to continuously verify it in the AI era.
Baek Si-eun, another VANK youth researcher, said Seoul’s cultural heritage goes beyond historical assets and plays a central role in tourism, regional branding, and national image-building. “As a city that represents Korea, the accuracy of cultural heritage information provided in digital spaces becomes even more important as global interest in Seoul grows,” she said. She added that for foreign users with limited background knowledge, information encountered through generative AI can easily become their fixed understanding of history. “When distorted information is repeatedly circulated, it can blur the overall image of Seoul and Korea,” she said, expressing hope that the analysis would serve as a starting point for protecting both the public nature and accuracy of cultural heritage amid technological advancement.
Lee Sei-yeon, also a VANK youth researcher, said Seoul has already established itself as a global tourism city with high visitor numbers, repeat visits, and strong satisfaction ratings. “Now, how tourists experience and remember a city determines its competitiveness,” she said. She emphasized that the accuracy and consistency of tourism information directly affect not only visitor movement and spending, but also trust and overall city image. Lee added that Seoul must strengthen information management and verification systems so that its tourism appeal and strengths are conveyed accurately in the generative AI environment, thereby continuing to enhance its brand value as a global tourism destination.
VANK Director Park Gi-tae said generative AI has become a new public space where information is accumulated and reproduced, and that the study aimed to identify how distortions are spreading within that space. He warned that errors in public information related to administration, tourism, and cultural heritage could influence how the international community perceives Seoul and Korea. Park said systematic verification and management are essential to preserve the accuracy and context of public information in the AI environment, adding that VANK will continue to monitor distortions and errors related to Korean public information in generative AI and raise issues to promote a more responsible information ecosystem.
VANK said it plans to use the findings to request corrections from generative AI platforms and to expand discussions on citizen-participatory verification models for public information and stronger AI accountability in the future.