Ultra-Realists

Category: UR PGR/ECR Network

  • Datafication Laid Bare: Making sense of the Grok AI leaks

    What the Grok? Performance Promises and Cases of Controversy

    On the 22nd of August 2025, news headlines began to circulate on social media regarding X (formerly Twitter) owner and tech-billionaire Elon Musk’s Artificial Intelligence (AI) chatbot ‘Grok’. Initially launched in November 2023, Grok has evolved rapidly, with Grok-2 offering image-generation capability and Grok-3 advancing key features such as AI reasoning and reflection. Grok-4, launched in July 2025, claims to offer PhD-level reasoning (Business Today, 2025). Of course, new premium pricing tiers (around $300 a month for ‘SuperGrok Heavy’) have also emerged (ibid.). However, the recent headlines concern controversy, not the usual celebrations of AI ambition or performance. Rather, they demonstrate only the latest structural example of datafication, algorithmic governance, and harmful asymmetry.

    Interestingly, this is not the first instance in which Grok has been at the forefront of controversy. In July 2025, Grok experienced backlash for generating anti-Semitic material, with reports suggesting the bot had praised Hitler whilst referring to itself as “MechaHitler”. This sparked condemnation from watchdogs and resulted in the developers promising improvements to hate speech moderation (Speakman, 2025). Prior to this, concerns were raised regarding the bot’s safeguards and prompt design, as it was reported to have issued guidance on practical violence, offering users advice on how to assault a public figure (Saeedy, 2025).

    Recent reports have revealed that over 370,000 chat transcripts between Grok and its users have been unintentionally published on the open web after being indexed by search engines such as Google, Bing and DuckDuckGo (Caswell, 2025; Dees, 2025). This was due to a technological oversight whereby neither no-index tags nor access restrictions were applied, leaving unique shareable URLs unprotected – ultimately making them visible to search engine crawlers (Martin and White, 2025). All of this reportedly happened without users’ knowledge, with many believing their chats were private (ibid.). The exposed content varies in sensitivity and legality. Reports include relatively benign uses, such as summarising journal articles or drafting tweets, alongside the sharing of highly sensitive information, including names, passwords, private medical and/or psychological queries, and confidential uploaded documents such as spreadsheets and images. Further to this, much more dangerous or illicit content has been reported. Instructions for making fentanyl, methamphetamines and bombs were found. There is also evidence of users instructing the bot to write malware, assist in planning suicides, and plot the assassination of figures such as Elon Musk himself (Kundaliya, 2025; Dees, 2025).
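    For readers unfamiliar with the mechanism at issue, the sketch below illustrates (in simplified, hypothetical form – this is not xAI’s actual code) the two standard signals a page can send to tell search engine crawlers not to index it: an `X-Robots-Tag: noindex` HTTP header, or a `<meta name="robots" content="noindex">` tag in the page’s HTML. The reports above suggest the shared-chat pages carried neither.

    ```python
    # Illustrative sketch of crawler indexing rules. Hypothetical example only;
    # it does not represent xAI's actual implementation.

    def is_indexable(response_headers: dict, html: str) -> bool:
        """Return True if a rules-following crawler would index the page."""
        # HTTP-level directive: X-Robots-Tag: noindex
        robots_header = response_headers.get("X-Robots-Tag", "").lower()
        if "noindex" in robots_header:
            return False
        # HTML-level directive: <meta name="robots" content="noindex">
        if '<meta name="robots" content="noindex"' in html.lower():
            return False
        return True

    # A shared-chat page served with neither directive is visible to crawlers:
    page = "<html><head><title>Shared chat</title></head></html>"
    print(is_indexable({}, page))                           # True: indexed
    print(is_indexable({"X-Robots-Tag": "noindex"}, page))  # False: excluded
    ```

    Either directive alone would have kept the pages out of search results; the reported oversight was that the unique “share” URLs relied on obscurity rather than on any such signal or access control.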

    It may be easy to understand these events as another example of a privacy accident or data breach resulting in an erosion of user trust, akin to those we have seen since the development of the internet and its subsequent technological advancements (Singh, 2025). We might, accordingly, call for better safeguards in future use of the AI bot. However, such understandings and actions fail to recognise that accountability and safety in AI, much like in its technological predecessors, should not just be about technical fixes, but about confronting the political-economic and cultural structures within late capitalism that normalise such exposure and harm. To aid this, the remainder of this blog draws from recent critical criminological discussions of AI and harm (Hart et al., 2025, forthcoming); Ultra-Realist perspectives on, and critiques of, the structural logics of late capitalism (Hall and Winlow, 2025); Kelly et al.’s (2023) ‘graze culture’; and Atkinson and Rodgers’ (2016) work on ‘zones of exception’, to outline how we can better make sense of the Grok leaks.

    Grok, Graze Culture, and Zones of Exception

    As Atkinson and Rodgers (2016) explain, society has witnessed a cultural re-positioning of our previously pseudo-pacified desires and “guilty” pleasures. We now engage with ‘enclosed screen spaces’ such as video games (or, in this instance, smartphones) to interact with sexual and violent desires under the assurance that they remain within these zones of cultural exception. As we move further into prosumer society (Ritzer and Jurgenson, 2010), AI technology has developed to allow users to access such zones with ‘AI girlfriends’ or through the creation of ‘AI deepfakes’ (Goodwin, 2024). In this current context, however, Grok and other AI chatbots form a conversational zone of exception where one can engage with violent, criminal or deviant content, or share personal and sensitive information, in what is perceived as a private space. However, as users click the ‘share’ button, their intimate exchanges become globally accessible artifacts.

    Ultimately, Grok did not just experience a technological flaw; it positioned its users within a permanent digital zone of objection: a more public space where private desires and sensitive information are laid bare for all netizens to consume. Essentially, as we seek out further virtual spaces to fulfil such pseudo-pacified desires, spaces offered to us in the form of commodified technological innovation, we willingly offer data to a political and economic order orientated towards extraction, optimisation and profit (Hart et al., 2025, forthcoming). Here, chats designed to feel safe become instruments of exposure and harm.

    AI bots such as Grok have further blurred the line between production and consumption, just as social media apps and similar technologies have done (Ritzer and Jurgenson, 2010). AI, by design, produces content from what it consumes; it is both a vehicle of and dependent upon prosumerism. In light of the Grok leaks, however, users were, whether knowingly or not, producing valuable cultural and emotional labour in the form of conversations and prompts. The flawed “share” button rebranded this labour as indexable content, unpaid and involuntary – commodifying intimacy and turning private exchanges into marketable data streams. Essentially, users became prosumers at their own exposure – creating and consuming simultaneously whilst corporations extracted surplus value. What may have been satire, experimentation or cathartic expression has now become a media spectacle.

    Kelly et al.’s (2023) ‘graze culture’ adds important depth here. They explain that society brushes up against the familiar (usually in the form of obvious subjective violence, epitomised by the serial killer) in order to disavow its sense of lack and its experience of everyday structural violence, such as political inequality and global disasters. The implications here are twofold. First, it positions the leaked transcripts as fodder for our graze culture: content for journalists, readers, doom scrollers and perhaps academic commentators to skim without context, disavowing their own realities. Secondly, it allows us to recognise that, whilst a technological fix may be offered, and we may raise alarms about the safeguards in place in such technology, we will ultimately disavow the realities of the system that creates such harms in the first instance. In essence, the outcomes of such data optimisation (exposure and embarrassment, for example) become the very dark matter we brush up against to banish the reason it happened in the first place.

    AI’s Logic of Harm and Grok

    Raymen’s (2023) work on telos tells us that, in order to fully understand harm, we must explore the end goal or purpose of an entity. In this respect, AI, once marketed as a force for human advancement, has been redirected to optimise surveillance and profit, and thus its telos is corrupted. This crucial point was raised at the recent Critical Criminology conference at Northumbria University, where I, alongside my colleagues Kyla Bavin and Adam Lynes, presented our forthcoming work exploring the harms of AI (see: Hart et al., 2025, forthcoming). As we explained, the elite’s implementation of AI technologies in the gig economy (Lynes and Wragg, 2023) demonstrates this corrupted telos, as well as the special liberty they enjoy (Hall, 2012).

    The Grok case demonstrates similar luxuries, as elites continue to profit from the infrastructure of surveillance and datafication whilst users absorb its costs. In Grok’s case, over 370,000 individuals have had their vulnerabilities laid bare whilst the corporation remains opaque and shielded from responsibility. Drawing upon Hart et al.’s (2025, forthcoming) critical typology of AI, we can understand the harm generated by Grok’s leaks as follows:

    Datafication harms: Personal conversations have been transformed into searchable, exploitable data points.

    Algorithmic governance harms: Platform designs of Grok (for example the “share” button and lack of privacy warnings) governed user behaviour invisibly, coercing them into unwanted exposure.

    Operational harms: Users may experience reputational damage, psychosocial stress, and the chilling effect of knowing that their private queries might circulate without consent.

    Existential harms: Trust in AI as a safe mediator of thought and dialogue is momentarily destabilised, leaving users disempowered and alienated as they brush back up against the very system that harms them in the first place.

    Ultimately, the Grok case demonstrates how AI infrastructures govern not through overt coercion, but by creating conditions of pacification and exception. Users feel free to share intimate thoughts because the interface appears safe. However, this freedom is illusory, as the act of sharing transports them into a digital zone of objection where they can be surveilled, indexed, and judged. This is a form of algorithmic pacification, where individuals are pacified into compliance, only to find that compliance itself generates new harms. Whilst we should not overlook the somewhat heinous prompts being inputted into Grok, seen critically, these leaks are not an isolated technical misstep but an exemplary case of how AI platforms embody the logic of late capitalism: the suspension of protections (zones of exception), the palatable fodder to brush up against in times of misery (graze culture), the corruption of emancipatory promises (telos), and the granting of unchecked freedoms to elites (special liberty). They highlight that criminology must move beyond narrow cybercrime framings to confront AI as a structure with extreme zemiogenic and criminogenic potential – a system whose very design can produce and reproduce harm, inequality, and disempowerment.

    References

    Atkinson, R., and Rodgers, T. (2016) Pleasure Zones and Murder Boxes: Online Pornography and Violent Video Games and Cultural Zones of Exception. British Journal of Criminology, 56(6), pp. 1291-1307.

    Business Today (2025) ‘The rise of Grok: Elon Musk’s foray into the AI chatbot landscape’, Business Today, 17 March. Available at: https://www.businesstoday.in/technology/news/story/the-rise-of-grok-elon-musks-foray-into-the-ai-chatbot-landscape-468150-2025-03-17 [Accessed: 26 August 2025].

    Caswell, A. (2025) ‘Hundreds of thousands of Grok chatbot conversations are showing up in Google Search — here’s what happened’, Tom’s Guide, 20 August. Available at: https://www.tomsguide.com/ai/hundreds-of-thousands-of-grok-chatbot-conversations-are-showing-up-in-google-search-heres-what-happened [Accessed: 26 August 2025].

    Dees, M. (2025) ‘Hundreds of thousands of Grok chats accidentally published’, Techzine, 22 August. Available at: https://www.techzine.eu/news/privacy-compliance/133998/hundreds-of-thousands-of-grok-chats-accidentally-published [Accessed: 26 August 2025].

    Goodwin, L. (2024) ‘Romance scammer duped £17k from me with deepfakes’, BBC News, 19 December. Available at: https://www.bbc.co.uk/news/articles/cdr0g1em52go [Accessed: 25 August 2025].

    Hall, S. (2012). Theorising Crime and Deviance: A New Perspective. London: Sage.

    Hall, S. & Winlow, S. (2025). Revitalizing Criminological Theory: Advances in Ultra-Realism. Abingdon: Routledge.

    Hart, M., Bavin, K. and Lynes, A. (2025 – forthcoming) Artificial Intelligence, Capitalism, and the Logic of Harm: Toward a Critical Criminology of AI. Critical Criminology.

    Kelly, C., Lynes, A. and Hart, M. (2023) ‘“Graze Culture” and serial murder: Brushing up against “familiar monsters” in the wake of 9/11’, in Fanning, S.E. and O’Callaghan, C. (eds) Serial Killing on Screen: Adaptation, True Crime and Popular Culture. Cham: Palgrave Macmillan, pp. 295–321. Available at: https://doi.org/10.1007/978-3-031-17812-2 [Accessed 26 August 2025].

    Kundaliya, D. (2025) Elon Musk’s xAI exposed hundreds of thousands of Grok conversations to Google search. Computing. Available at: https://www.computing.co.uk/news/2025/security/elon-musk-s-xai-exposed-hundreds-of-thousands-of-grok-conversations-to-google-search [Accessed 26 August 2025].

    Lynes, A. and Wragg, E. (2023). “Smile for the camera”: Online warehouse tours as a form of dark tourism within the era of late capitalism. Tourism and Hospitality Research, 24(4), 615-629.

    Martin, I. and White, E. (2025). Elon Musk’s xAI published hundreds of thousands of Grok chatbot conversations. Forbes. Available at: https://www.forbes.com/sites/iainmartin/2025/08/20/elon-musks-xai-published-hundreds-of-thousands-of-grok-chatbot-conversations/ [Accessed 26 August 2025].

    Raymen, T. (2023). The Enigma of Social Harm: The Problem of Liberalism. Abingdon: Routledge.

    Ritzer, G. and Jurgenson, N. (2010) ‘Production, consumption, prosumption: The nature of capitalism in the age of the digital “prosumer”’, Journal of Consumer Culture, 10(1), pp. 13–36.

    Saeedy, A. (2025) ‘Why xAI’s Grok Went Rogue’, The Wall Street Journal, 10th July. Available at: https://www.wsj.com/tech/ai/why-xais-grok-went-rogue-a81841b0 [Accessed: 26 August 2025].

    Singh, A. (2025) From Past to Present: The Evolution of Data Breach Causes (2005–2025). LatIA, 3(333). Available at: https://doi.org/10.62486/latia2025333 [Accessed 26 August 2025].

    Speakman, K. (2025) ‘Elon Musk’s X Chatbot Praises Hitler While Sharing Multiple Antisemitic Posts’, People, 9 July. Available at: https://people.com/elon-musk-x-chatbot-praises-hitler-antisemitic-posts-11769138 [Accessed: 26 August 2025].

  • A Multi-Level Ultra-Realist Approach to Understanding Violence Against Women and Girls: An Introduction

    Emma Armstrong and Paul Alker

    Teesside University

    Despite being a global issue for centuries, violence against women and girls (VAWG) has often been dismissed by governments as a private or peripheral concern rather than a systemic problem (Andrews and Ellis, 2022). The term VAWG encapsulates a wide range of offences disproportionately perpetrated by men against women and girls, including domestic abuse, sexual violence, and so-called ‘honour-based’ abuse. Additionally, the rise of smartphone technology, social media and artificial intelligence has not only reshaped existing forms of violence and abuse, but has also given rise to new ones – what has been dubbed technology-facilitated sexual violence, which takes myriad forms. This includes the non-consensual sharing of intimate images (commonly known as revenge porn), cyber-flashing, cyber-stalking and the creation of deepfake pornography (Henry and Powell, 2018).
     
    While activists and scholars have long drawn attention to these harms, policy responses have historically been reactive and inconsistent. In recent years, a series of high-profile cases have forced the issue into the public consciousness, leading to widespread calls for change. In response, the Labour government declared VAWG a national emergency and pledged to halve its prevalence within a decade. While this commitment signals a shift in political will, the question remains: will such interventions address the root causes of VAWG, or continue tackling symptoms while systemic violence remains unchallenged?
     
    As scholars such as Treadwell (2013) and Yardley and Richards (2023) point out, crime and harm do not occur in a vacuum, but in a context influenced by social, economic, cultural and political factors. With this in mind, this blog series will examine VAWG through an integrated multi-level framework (Hall and Wilson, 2014; Lloyd, 2018) that provides us with the theoretical tools to explore VAWG from a uniquely parallax perspective, examining the issue at the macro, meso and micro levels. In adopting this approach, alongside drawing upon the ultra-realist theoretical framework, we aim to provide a comprehensive analysis of why such violence occurs and why it remains such a persistent issue. Ultra-realism is particularly well-suited to this task, not only for its return to the question of offender motivation, but also for its willingness to transcend analytical boundaries – bridging the personal and the structural. In the seminal ultra-realist book, Revitalizing Criminological Theory, Hall and Winlow (2015) note how the progress made by feminist and critical criminology laid crucial groundwork for understanding harm and power beyond legalistic definitions, yet argue that a deeper ontological and structural analysis is now required to account for the evolving conditions of late modernity. Ultra-realists have certainly embodied this goal in various contexts, such as Lloyd’s (2018) research into the harms of neoliberal work, or James’ (2020) application of the theory to the harms of hate. To our knowledge, however, this is the first attempt to apply ultra-realist theory specifically to the problem of VAWG.
     
    It is our contention that much of the research and commentary on VAWG, while illuminating, tells only part of the story. Simply analysing this violence and harm on an individualised micro level, as much of the mainstream media tends to do, is problematic in that it obscures the structural conditions that underpin such violence (Kelly et al., 2022). At the same time, broad explanations that attribute VAWG solely to the dominance of patriarchy or the presence of ‘toxic masculinity’ fail to account for the wider structural and cultural changes associated with neoliberalism and consumerism. These forces have fostered harmful subjectivities across the gender spectrum, imbuing the individual with a kind of ‘toxic sovereignty’ (Tudor, 2020): a form of individualism in which self-interest is prioritised at the expense of others. This is what Hall (2012) has termed special liberty – a libertine drive to satisfy one’s own interests regardless of the harm it may cause to others (Kotzé and Lloyd, 2022), for both expressive and instrumental purposes (Kotzé, 2024). As Yardley and Richards (2023) point out, this has intensified a sense of entitlement among the perpetrators of VAWG to inflict such harm. Much of the academic and cultural commentary has failed to acknowledge these factors. Additionally, the development of policing strategies, governmental promises to address the pandemic of VAWG, and calls for changes to legislation may have some impact, but will only serve to address the symptoms of what we believe is a far deeper issue.
     
    Ultimately, this blog series seeks to make sense of VAWG by providing a critical analysis of issues present at the macro, meso, and micro levels. We appreciate that there are nuances across the broad spectrum of offences encapsulated under this umbrella term, and across cases, that we will not have space to delineate here. However, we hope to offer a robust starting point which not only demonstrates the utility of the ultra-realist theoretical framework for understanding VAWG, but also contributes to what we believe to be some much-needed discussion on the level of intervention required to begin to meaningfully address the issue of violence against women and girls. Therefore, in the spirit of the ultra-realist return to the question of motivation, this blog series poses a fundamental question: why do individual men seek to inflict harm upon women in both symbolic and subjective forms? Once we have attempted to answer this question via three posts tackling each level, we will then consider, in the concluding post, what exactly is to be done to challenge VAWG in a meaningful way.
     
    References
     
    Andrews, S., and Ellis, A. (2022) ‘Incel masculinity.’ In Atkinson, R., and Ayres, T. (Eds). Shades of Deviance. (2nd Ed). Abingdon: Routledge.
     
    Hall, S. (2012) Theorizing Crime & Deviance: A New Perspective. London: Sage Publications.
     
    Hall, S., and Wilson, D. (2014) ‘New foundations: Pseudo-pacification and special liberty as potential cornerstones for a multi-level theory of homicide and serial murder.’ European Journal of Criminology. 11(5). Pp. 635-55.
     
    Hall, S., and Winlow, S. (2015) Revitalizing Criminological Theory: Towards a New Ultra-Realism. Abingdon: Routledge.
     
    Henry, N., and Powell, A. (2018) ‘Technology-facilitated sexual violence: A literature review of empirical research.’ Trauma, Violence, and Abuse. 19(2). Pp. 195-208.
     
    James, Z. (2020) ‘Gypsies’ and travellers’ lived experience of harm: A critical hate studies perspective.’ Theoretical Criminology. 24(3). Pp. 502-20.
     
    Kelly, C., Lynes, A., and Hart, M. (2022) ‘”Graze culture” and serial murder: Brushing up against “familiar monsters” in the wake of 9/11.’ In Fanning, S. E., and O’Callaghan, C. (Eds). Serial Killing on Screen. Cham: Palgrave Macmillan.
     
    Kotzé, J. (2024) ‘On special liberty and the motivation to harm.’ The British Journal of Criminology. 65(2). Pp. 314-27.
     
    Kotzé, J., and Lloyd, A. (2022) Making Sense of Ultra-Realism. Leeds: Emerald Publishing Ltd.
     
    Lloyd, A. (2018) Harms of Work: An Ultra-Realist Account of the Service Economy. Bristol: Bristol University Press.
     
    Treadwell, J. (2013) Criminology: The Essentials. London: Sage Publications.
     
    Tudor, K. (2020) ‘Toxic sovereignty: Understanding fraud as the expression of special liberty in late-capitalism.’ In Kuldova, T., Hall, S., and Horsley, M. (Eds). Crime, Harm, and Consumerism. Abingdon: Routledge.
     
    Yardley, E., and Richards, L. (2023) ‘The elephant in the room: Toward an integrated, feminist analysis of mass murder.’ Violence Against Women. 29(3-4). Pp. 752-72.