Ultra-Realists

Author: Max Hart

  • Datafication Laid Bare: Making sense of the Grok AI leaks

    What the Grok? Performance Promises, Cases of Controversy and Grok.

    On 22 August 2025, news headlines began to circulate on social media regarding X (formerly Twitter) owner and tech-billionaire Elon Musk’s Artificial Intelligence (AI) chatbot ‘Grok’. Initially launched in November 2023, Grok has evolved rapidly, with Grok-2 offering image-generation capability and Grok-3 advancing key features such as AI reasoning and reflection. Grok-4, launched in July 2025, claims to offer PhD-level reasoning (Business Today, 2025). Of course, newer premium pricing tiers (around $300 a month for ‘SuperGrok Heavy’) have also emerged (ibid.). However, the recent headlines concern controversy, not the usual celebrations of AI ambition or performance. Rather, they demonstrate only the latest structural example of datafication, algorithmic governance, and harmful asymmetry.

    Interestingly, this is not the first instance where Grok has been at the forefront of controversy. In July 2025, Grok experienced backlash for generating anti-Semitic material, with reports suggesting the bot had praised Hitler whilst referring to itself as “MechaHitler”. This sparked condemnation from watchdogs and resulted in the developers promising improvements to hate speech moderation (Speakman, 2025). Prior to this, concerns were raised regarding the bot’s safeguards and prompt design, as it was reported to have issued guidance on practical violence, offering advice to users on how to assault a public figure (Saeedy, 2025).

    Recent reports have revealed that over 370,000 chat transcripts between Grok and its users have been unintentionally published on the open web after being indexed by search engines such as Google, Bing and DuckDuckGo (Caswell, 2025; Dees, 2025). This was due to a technological oversight whereby neither no-index tags nor access-restriction commands were implemented, leaving unique shareable URLs unprotected and ultimately visible to search engine crawlers (Martin and White, 2025). All of this reportedly happened without any user knowledge, with many believing their chats were private (ibid.). The exposed content varies in sensitivity and legality. Reports include relatively benign uses, such as summarising journal articles or drafting tweets, alongside the sharing of highly sensitive information, including names, passwords, private medical and/or psychological queries, and confidential uploaded documents such as spreadsheets and images. Further to this, much more dangerous or illicit content has been reported. Instructions for making fentanyl, methamphetamines and bombs were found. There is also evidence of users instructing the bot to write malware, assist in planning suicides, and assist in assassination plots against figures such as Elon Musk himself (Kundaliya, 2025; Dees, 2025).
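    For readers unfamiliar with the mechanics referred to above, web pages are indexed by default unless a site explicitly opts out. The standard, widely supported exclusion signals a shared-chat page could carry are sketched below; this is an illustrative example of generic web conventions only, not xAI’s actual markup or server configuration:

```html
<!-- Illustrative sketch: a robots meta tag placed in the page's <head>
     asks crawlers not to index the page or follow its links -->
<meta name="robots" content="noindex, nofollow">

<!-- The same directive can instead be sent as an HTTP response header,
     which works for non-HTML resources too:
       X-Robots-Tag: noindex, nofollow
     Alternatively, access to the shared URLs could have been gated
     behind authentication, removing them from crawlers' reach entirely. -->
```

    Absent any such signal, a unique “shareable” URL that appears anywhere a crawler can find it is treated as ordinary public content, which is how the transcripts surfaced in search results.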

    It may be easy to understand these events as another example of a privacy accident or data breach resulting in erosion of user trust, akin to those we have seen since the development of the internet and its subsequent technological advancements (Singh, 2025). So we may call for better safeguards in future use of the AI bot. However, such understandings and actions fail to recognise that accountability and safety in AI, much like its technological predecessors, should not just be about technical fixes, but about confronting the political-economic and cultural structures within late capitalism that normalise such exposure and harm. To aid this, the remainder of this blog draws on recent critical criminological discussions of AI and harm (Hart et al., 2025 forthcoming); Ultra-Realist perspectives and critiques of the structural logics of late capitalism (Hall and Winlow, 2025); Kelly et al.’s (2023) ‘graze culture’; and Atkinson and Rodgers’ (2016) work on ‘zones of exception’ to outline how we can better make sense of the Grok leaks.

    Grok, Graze Culture, and Zones of Exception

    As Atkinson and Rodgers (2016) explain, society has witnessed a cultural re-positioning of our previously pseudo-pacified desires and “guilty” pleasures. We now engage with ‘enclosed screen spaces’ such as video games (or, in this instance, smartphones) to interact with sexual and violent desires under the assurance that they remain within these zones of cultural exception. As we move further into prosumer society (Ritzer and Jurgenson, 2010), AI technology has developed to allow users to access such zones with ‘AI girlfriends’ or through the creation of ‘AI deepfakes’ (Goodwin, 2024). In this current context, however, Grok and other AI chatbots form a conversational zone of exception where one can engage with violent, criminal or deviant content, or share personal and sensitive information, in what is perceived as a private space. However, as users click the ‘share’ button, their intimate exchanges become globally accessible artefacts.

    Ultimately, Grok did not just experience a technological flaw; it positioned its users in a permanent digital zone of objection: a more public space where private desires and sensitive information are laid bare for all netizens to consume. Essentially, as we seek out further virtual spaces to fulfil such pseudo-pacified desires, spaces offered to us in the form of commodified technological innovation, we willingly offer data to a political and economic order orientated towards extraction, optimisation and profit (Hart et al., 2025 forthcoming). Here, chats designed to feel safe become instruments of exposure and harm.

    AI bots such as Grok have further blurred the line between production and consumption, just as social media apps and similar technologies have done (Ritzer and Jurgenson, 2010). AI, by design, produces content from what it consumes; it is both a vehicle of and dependent upon prosumerism. In light of the Grok leaks, however, users were, whether knowingly or not, producing valuable cultural and emotional labour in the form of conversations and prompts. The flawed “share” button rebranded this labour as indexable content, unpaid and involuntary, commodifying intimacy and turning private exchanges into marketable data streams. Essentially, users became prosumers at the cost of their own exposure, creating and consuming simultaneously whilst corporations extracted surplus value. What may have been satire, experimentation or cathartic expression has now become a media spectacle.

    Kelly et al.’s (2023) ‘graze culture’ adds important depth here. They explain that society brushes up against the familiar (usually in the form of obvious subjective violence epitomised by the serial killer) in order to disavow its sense of lack and its experience of everyday structural violence, such as political inequality and global disasters. The implications here are twofold. First, it positions the leaked transcripts as fodder for our graze culture: content for journalists, readers, doom-scrollers and perhaps academic commentators to skim without context, disavowing their own realities. Secondly, it allows us to recognise that, whilst a technological fix may be offered, and we may raise alarms about the safeguards in place in such technology, we will ultimately disavow the realities of the system that creates such harms in the first instance. In essence, the outcomes of such data optimisation (exposure and embarrassment, for example) become the very dark matter we brush up against to banish the reason it happened in the first place.

    AI’s Logic of Harm and Grok

    Raymen’s (2023) work on telos tells us that, in order to fully understand harm, we must explore the end goal or purpose of an entity. In this respect, AI, once marketed as a force for human advancement, has been redirected to optimise surveillance and profit, and thus its telos is corrupted. This crucial point was raised at the recent Critical Criminology conference at Northumbria University, where I, alongside my colleagues Kyla Bavin and Adam Lynes, presented our forthcoming work exploring the harms of AI (see: Hart et al., 2025 forthcoming). As we explained, the elite’s implementation of AI technologies in the gig economy (Lynes and Wragg, 2023) demonstrates this corrupted telos, as well as the special liberty they enjoy (Hall, 2012).

    The Grok case demonstrates similar luxuries, as elites continue to profit from the infrastructure of surveillance and datafication whilst users absorb its costs. In Grok’s case, over 370,000 individuals have had their vulnerabilities laid bare whilst the corporation remains opaque and shielded from responsibility. Drawing upon Hart et al.’s (2025 forthcoming) critical typology of AI, we can understand the harm generated by Grok’s leaks as follows:

    Datafication harms: Personal conversations have been transformed into searchable, exploitable data points.

    Algorithmic governance harms: Platform designs of Grok (for example the “share” button and lack of privacy warnings) governed user behaviour invisibly, coercing them into unwanted exposure.

    Operational harms: Users may experience reputational damage, psychosocial stress, and the chilling effect of knowing that their private queries might circulate without consent.

    Existential harms: Trust in AI as a safe mediator of thought and dialogue is momentarily destabilised, leaving users disempowered and alienated as they brush back up against the very system that harms them in the first place.

    Ultimately, the Grok case demonstrates how AI infrastructures govern not through overt coercion, but by creating conditions of pacification and exception. Users feel free to share intimate thoughts because the interface appears safe. However, this freedom is illusory, as the act of sharing transports them into a digital zone of objection where they can be surveilled, indexed, and judged. This is a form of algorithmic pacification, whereby individuals are pacified into compliance, only to find that compliance itself generates new harms. Whilst we should not overlook the somewhat heinous prompts being inputted into Grok, seen critically, these leaks are not an isolated technical misstep but an exemplary case of how AI platforms embody the logic of late capitalism: the suspension of protections (zones of exception), the palatable fodder to brush up against in times of misery (graze culture), the corruption of emancipatory promises (telos), and the granting of unchecked freedoms to elites (special liberty). They highlight that criminology must move beyond narrow cybercrime framings to confront AI as a structure with extreme zemiogenic and criminogenic potential – a system whose very design can produce and reproduce harm, inequality, and disempowerment.

    References

    Atkinson, R., and Rodgers, T. (2016) Pleasure Zones and Murder Boxes: Online Pornography and Violent Video Games and Cultural Zones of Exception. British Journal of Criminology, 56(6), pp. 1291-1307.

    Business Today (2025) ‘The rise of Grok: Elon Musk’s foray into the AI chatbot landscape’, Business Today, 17 March. Available at: https://www.businesstoday.in/technology/news/story/the-rise-of-grok-elon-musks-foray-into-the-ai-chatbot-landscape-468150-2025-03-17 [Accessed: 26 August 2025].

    Caswell, A. (2025) ‘Hundreds of thousands of Grok chatbot conversations are showing up in Google Search — here’s what happened’, Tom’s Guide, 20 August. Available at: https://www.tomsguide.com/ai/hundreds-of-thousands-of-grok-chatbot-conversations-are-showing-up-in-google-search-heres-what-happened [Accessed: 26 August 2025].

    Dees, M. (2025) ‘Hundreds of thousands of Grok chats accidentally published’, Techzine, 22 August. Available at: https://www.techzine.eu/news/privacy-compliance/133998/hundreds-of-thousands-of-grok-chats-accidentally-published [Accessed: 26 August 2025].

    Goodwin, L. (2024) ‘Romance scammer duped £17k from me with deepfakes’, BBC News, 19 December. Available at: https://www.bbc.co.uk/news/articles/cdr0g1em52go [Accessed: 25 August 2025].

    Hall, S. (2012). Theorising Crime and Deviance: A New Perspective. London: Sage.

    Hall, S. and Winlow, S. (2025) Revitalizing Criminological Theory: Advances in Ultra-Realism. Abingdon: Routledge.

    Hart, M., Bavin, K. and Lynes, A. (2025 – forthcoming) Artificial Intelligence, Capitalism, and the Logic of Harm: Toward a Critical Criminology of AI. Critical Criminology.

    Kelly, C., Lynes, A. and Hart, M. (2023) ‘‘Graze Culture’ and serial murder: Brushing up against ‘familiar monsters’ in the wake of 9/11’, in Fanning, S.E. and O’Callaghan, C. (eds.) Serial Killing on Screen: Adaptation, True Crime and Popular Culture. Cham: Palgrave Macmillan, pp. 295–321. Available at: https://doi.org/10.1007/978-3-031-17812-2 [Accessed: 26 August 2025].

    Kundaliya, D. (2025) ‘Elon Musk’s xAI exposed hundreds of thousands of Grok conversations to Google search’, Computing. Available at: https://www.computing.co.uk/news/2025/security/elon-musk-s-xai-exposed-hundreds-of-thousands-of-grok-conversations-to-google-search [Accessed: 26 August 2025].

    Lynes, A. and Wragg, E. (2023) ‘“Smile for the camera”: Online warehouse tours as a form of dark tourism within the era of late capitalism’, Tourism and Hospitality Research, 24(4), pp. 615–629.

    Martin, I. and White, E. (2025) ‘Elon Musk’s xAI published hundreds of thousands of Grok chatbot conversations’, Forbes, 20 August. Available at: https://www.forbes.com/sites/iainmartin/2025/08/20/elon-musks-xai-published-hundreds-of-thousands-of-grok-chatbot-conversations/ [Accessed: 26 August 2025].

    Raymen, T. (2023) The Enigma of Social Harm: The Problem of Liberalism. Abingdon: Routledge.

    Ritzer, G. and Jurgenson, N. (2010) ‘Production, consumption, prosumption: The nature of capitalism in the age of the digital “prosumer”’, Journal of Consumer Culture, 10(1), pp. 13–36.

    Saeedy, A. (2025) ‘Why xAI’s Grok Went Rogue’, The Wall Street Journal, 10 July. Available at: https://www.wsj.com/tech/ai/why-xais-grok-went-rogue-a81841b0 [Accessed: 26 August 2025].

    Singh, A. (2025) From Past to Present: The Evolution of Data Breach Causes (2005–2025). LatIA, 3(333). Available at: https://doi.org/10.62486/latia2025333 [Accessed 26 August 2025].

    Speakman, K. (2025) ‘Elon Musk’s X Chatbot Praises Hitler While Sharing Multiple Antisemitic Posts’, People, 9 July. Available at: https://people.com/elon-musk-x-chatbot-praises-hitler-antisemitic-posts-11769138 [Accessed: 26 August 2025].