Authored By: Lesedi Manganye
Varsity College
Abstract.
The proliferation of artificial intelligence systems on social media platforms has given rise to new ethical and legal issues in regulation.1 A prime example of this trend is xAI’s generative artificial intelligence chatbot Grok, which has been integrated into the social media platform X.2 Grok is an early example of platform-embedded AI systems that offer creative capabilities but present serious governance challenges.3 It generates responses directly on the platform using real-time data.4 This article uses Grok as a case study to illustrate how rapid AI innovation can work around existing legal and ethical approaches.5 Using a qualitative and doctrinal methodology, the article explores Grok’s architecture, deployment, and the ethical challenges that followed its public release, including misinformation, non-consensual image production, and platform accountability.6 The article situates the legislative responses to Grok within existing digital services and content moderation legislation, notably the Digital Services Act (EU), and identifies the shortcomings of reactive governance structures. It contends that Grok exposes systemic failings in existing AI governance, most critically around transparency, accountability, and harm prevention. Although creativity remains at the heart of the AI revolution, Grok demonstrates that a flexible, law-based governance framework is necessary to address the distinctive threats posed by generative AI systems operating within global digital platforms.
Introduction.
Artificial intelligence has evolved from a tool of specialised technical expertise into the backbone of today’s digital infrastructure.7 Generative AI systems mark a significant shift in how humans interact with information, platforms, and the media, as they can produce text and visuals that closely resemble human-generated content.8 These systems greatly enhance the efficiency and creativity with which we share information, but they raise difficult moral and legal questions.9
Many concerns related to misinformation, privacy, consent, ethics, and accountability stem from the growing integration of AI into social media platforms.10 A recent example of this trend is Grok, the conversational AI system created by xAI and deployed on the X platform, where it generates responses within a live, real-time social media environment rather than as a standalone AI chatbot.11
Grok occupies a unique position in the emerging field of digital communication: it is designed to engage in online conversation in a way that is reactive, conversational, and often performative (and, by extension, socially desirable).12 However, this interconnectedness also exposes Grok to the risks of ungoverned and dynamic online content ecosystems.13
Grok caused widespread public debate when it was found to produce harmful and non-consensual content.14 These incidents raised questions as to whether existing legal frameworks can effectively regulate cutting-edge AI systems, and they attracted regulatory scrutiny under the platform liability principles accepted in cases such as Delfi AS v Estonia.15 The Grok case therefore provides an excellent opportunity to investigate the relationship between AI innovation and governance.16
Using Grok as a case study, this article investigates whether AI is advancing at the expense of established moral norms and the current legal landscape. It argues that Grok’s development highlights core weaknesses in modern AI governance systems, particularly their reliance on reactive enforcement rather than proactive risk management.17 By examining how Grok is built, the ethical issues it raises, and how authorities have responded, including responses grounded in the Digital Services Act and emerging obligations under the EU Artificial Intelligence Act, the article seeks to contribute to the debate on how AI should be regulated in an increasingly platform-driven digital world.18
AI Governance and Innovation.
AI innovation can be understood as the process of building, deploying, and integrating new systems that perform tasks once associated with innate human intelligence.19 In the era of generative AI, innovation is characterised by rapid prototyping, wide distribution, and the permeation of digital platform systems into everyday life across the globe.20
While this level of innovation improves accessibility and efficiency, it also increases the likelihood of harm, bias, and misinformation.21 AI governance refers to the ethical, legal, and institutional frameworks that guide and control the development and use of AI systems.22 Governance mechanisms treat transparency, safety, accountability, and respect for human rights as paramount; these principles are reflected in data protection regimes such as the General Data Protection Regulation (EU) 2016/679.23
One of the central obstacles is striking a balance between innovation and safeguards: regulation that is too strong will impede progress, while regulation that is too weak will expose individuals and entire societies to serious harm.24
History and Context of Grok.
Grok is the latest iteration of xAI’s large-scale effort to improve the effectiveness of artificial intelligence systems in assisting human connection.25 It is distinctive because it is linked directly to the social media platform X, giving it direct access to user-generated content and the ability to react to that content in real time.26
This feature differentiates Grok from typical generative AI systems, which rely on static or periodically updated data sets. In this respect, Grok represents an innovation among platform-embedded AI systems.27
Rather than being just another tool, Grok is built into X’s social and communicative infrastructure. Although this makes the service more user-centred and relevant, it also exposes the system to the misinformation, offensive content, and coordinated manipulation commonly found on social media platforms, raising legal questions comparable to those addressed in Glawischnig-Piesczek v Facebook Ireland Ltd.28
The lightning-fast pace of Grok’s deployment is another indication of the accelerated rate at which AI systems develop.29 New features and updates were added quickly, frequently in response to user pushback or competitive pressure.30
Such an approach can support innovation, but it leaves limited opportunity for thorough risk assessment and ethical oversight before deployment.31 As a result, several of Grok’s state-of-the-art features were put into use before proper protections were in place.32
Innovation in Grok’s Design and Deployment.
Grok’s primary innovation was its integration with real-time data, allowing it to adapt quickly to trending social media discussions and news events.33 Its deployment cycle is characterised by rapid iteration, with frequent updates to improve functionality.34 By merging Grok into an existing social media platform, xAI extended the system’s influence and reach to a broader scale.35 People can now interact with a generative artificial intelligence system embedded in everyday communication in an accessible, personalised manner. Yet this platform-based deployment also increased governance risk.36 Grok did not have the necessary precautions in place at the time of its release to stop it from generating harmful or non-consensual content, raising dignity and privacy concerns comparable to those articulated in NM v Smith.37 Its deployment therefore not only expanded the capacity and visibility of abusive outputs but also extended the scale of those harms.38
Ethical and Governance Challenges.
The ethical issues presented by Grok extend beyond isolated technical faults; generative AI systems are at heightened risk of becoming a public ethical hazard.39 The generation of harmful and non-consensual images was one of the major issues.40 These outputs raise serious ethical questions regarding human dignity, consent, and harm prevention, interests protected in constitutional jurisprudence and data protection law.41
Developers and platform operators bear elevated ethical obligations because they exercise greater control over system design and deployment.42 According to ethical AI approaches, the burden of responsibility should not lie with end users alone, even when abuse takes place; developers and platforms are expected to anticipate foreseeable risks and establish protections against harm. This mirrors judicial reasoning in cases such as Various Claimants v WM Morrison Supermarkets plc, where responsibility for data misuse was scrutinised.43
Transparency was also a major ethical issue. Most users cannot see how Grok creates outputs, what data it depends on, or how moderation decisions are made. Where such opacity prevails, accountability is ineffective and those affected find it difficult to obtain recourse.44 Fairness and responsibility are hard to implement unless the transparency principles articulated in Google Spain SL v AEPD are given effect.45
Legal and Regulatory Responses.
Regulatory reactions to Grok predominantly relied on digital platform and content moderation laws rather than dedicated AI legislation.46 In the European Union, the focus was on adherence to the Digital Services Act, which obliges platforms to tackle the spread of illegal and harmful content.47
These responses, which focus on containing harm rather than preventing it, reflect a reactive governance model that acts only after harm has occurred.48 In a similar vein, regulators in other jurisdictions relied on online safety and consumer protection rules to force platforms to limit or update Grok’s functionality.49 Although such measures show the flexibility of existing legal constructs, they also expose their limitations.50 Laws designed for content moderation may be effective in that domain, but they do not address the systemic risks inherent in deploying generative AI systems.51
The Grok case also highlights the absence of clear liability rules for harms produced by AI.52 Failing to explicitly assign fault among developers, platform operators, and users risks diluting enforcement and weakening deterrence, reinforcing the case for clearer, more proactive AI governance frameworks such as the EU Artificial Intelligence Act 2024.53
Conclusion.
Grok highlights both the innovation and the governance complexities surrounding generative artificial intelligence.54 As a platform-embedded AI system capable of real-time engagement, Grok represents the transformative potential of cutting-edge AI technologies.55 At the same time, its controversies expose important ethical and legal failings that arise when innovation outpaces governance.56 This article has argued that Grok is not an isolated instance of technical failure but a symptom of structural weaknesses in existing AI governance frameworks.57 Ethical principles alone are insufficient where the law is unenforceable and no authority oversees implementation, and reactive regulatory policies struggle to stop harm before it happens.58
Grok’s case calls for a shift toward a proactive AI governance agenda, with strengthened transparency obligations, clarity about who can be held accountable, pre-deployment measures requiring companies to undertake risk assessments, and regular reviews of high-impact AI systems.59 These measures would not deter innovation; rather, they would keep technological progress consistent with core ethical values and legal standards.60 The Grok case ultimately demonstrates that responsible AI development is not just about technological sophistication.61 It requires governance systems that can adapt to innovation so that AI systems operate in service of society rather than against it.62
References.
Primary Sources
Statutes
United Kingdom
Online Safety Act 2023.
South Africa (Comparative)
Protection of Personal Information Act 4 of 2013.
Films and Publications Act 65 of 1996 (as amended).
Legislation
European Union
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services (Digital Services Act) OJ L 277, 27 October 2022.
Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) OJ L, 2024.
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation) OJ L 119, 4 May 2016.
Case law.
European Court of Human Rights.
Delfi AS v Estonia [2015] 62 EHRR 6.
Sanchez v France [2023] 76 EHRR 19.
Court of Justice of the European Union
Google Spain SL v Agencia Española de Protección de Datos (AEPD) (C-131/12) EU:C:2014:317.
Glawischnig-Piesczek v Facebook Ireland Ltd (C-18/18) EU:C:2019:821.
United Kingdom
Various Claimants v WM Morrison Supermarkets plc [2020] UKSC 12.
South Africa
NM v Smith (Freedom of Expression Institute as Amicus Curiae) 2007 (5) SA 250 (CC).
Heroldt v Wills 2013 (2) SA 530 (GSJ).
Secondary Sources.
Books
Floridi L, The Ethics of Artificial Intelligence (Oxford University Press 2023).
Journal Articles
Binns R, ‘Human Judgement in Algorithmic Loops: Individual Justice and Automated Decision-Making’ (2018) 15 European Journal of Law and Technology 1.
Edwards L, ‘Regulating AI in Europe: Four Problems and Four Solutions’ (2022) 35 Harvard Journal of Law & Technology 1.
Kroll J and others, ‘Accountable Algorithms’ (2017) 165 University of Pennsylvania Law Review 633.
Veale M and Borgesius F, ‘Demystifying the Draft EU Artificial Intelligence Act’ (2021) 22 Computer Law Review International 97.
Institutional & Policy Reports
European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence COM (2021) 206 final.
OECD, Artificial Intelligence and Responsibility (OECD Publishing 2019).
UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021).
Credible Media & Expert Commentary (Used Contextually)
Vincent J, ‘Elon Musk’s Grok AI Sparks Controversy Over Content Moderation’ The Verge (2023).
Shead S, ‘EU Regulators Scrutinise X’s AI Chatbot Grok’ CNBC (2023).
Kharpal A, ‘Elon Musk’s X probed by EU over sexually explicit images on Grok’ CNBC (2026).
Financial Times Editorial Board, ‘Why AI Regulation Is Struggling to Keep Pace’ Financial Times (2024).
Website.
Reuters, ‘EU opens investigation into X over Grok’s sexualised imagery’ (26 January 2026) https://www.reuters.com/world/europe/eu-opens-investigation-into-x-over-groks-sexualised-imagery-lawmaker-says-2026-01-26/ accessed 30 January 2026.
1 Floridi L, The Ethics of Artificial Intelligence (Oxford University Press 2023).
2 Reuters, ‘EU opens investigation into X over Grok’s sexualised imagery’ (26 January 2026) https://www.reuters.com/world/europe/eu-opens-investigation-into-x-over-groks-sexualised-imagery-lawmaker-says-2026-01-26/ accessed 30 January 2026.
3 Floridi L, The Ethics of Artificial Intelligence (Oxford University Press 2023).
4 Reuters, ‘EU opens investigation into X over Grok’s sexualised imagery’ (26 January 2026).
5 Veale M and Borgesius F, ‘Demystifying the Draft EU Artificial Intelligence Act’ (2021) 22 Computer Law Review International 97.
6 NM v Smith (Freedom of Expression Institute as Amicus Curiae) 2007 (5) SA 250 (CC).
7 OECD, Artificial Intelligence and Responsibility (OECD Publishing 2019).
8 Kroll J and others, ‘Accountable Algorithms’ (2017) 165 University of Pennsylvania Law Review 633.
9 UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021).
10 Floridi L, The Ethics of Artificial Intelligence (Oxford University Press 2023).
11 Reuters, ‘EU opens investigation into X over Grok’s sexualised imagery’ (26 January 2026) https://www.reuters.com/world/europe/eu-opens-investigation-into-x-over-groks-sexualised-imagery-lawmaker-says-2026-01-26/ accessed 30 January 2026.
12 Floridi L, The Ethics of Artificial Intelligence (Oxford University Press 2023).
13 Ibid.
14 Reuters, ‘EU opens investigation into X over Grok’s sexualised imagery’ (26 January 2026) https://www.reuters.com/world/europe/eu-opens-investigation-into-x-over-groks-sexualised-imagery-lawmaker-says-2026-01-26/ accessed 30 January 2026.
15 Delfi AS v Estonia [2015] 62 EHRR 6.
16 Ibid.
17 Ibid.
18 Ibid.
19 Floridi L, The Ethics of Artificial Intelligence (Oxford University Press 2023).
20 Edwards (n above); European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence COM (2021) 206 final.
21 Ibid.
22 Floridi L, The Ethics of Artificial Intelligence (Oxford University Press 2023).
23 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation) OJ L 119, 4 May 2016.
24 Ibid.
25 Reuters, ‘EU opens investigation into X over Grok’s sexualised imagery’ (26 January 2026) https://www.reuters.com/world/europe/eu-opens-investigation-into-x-over-groks-sexualised-imagery-lawmaker-says-2026-01-26/ accessed 30 January 2026.
26 Ibid.
27 Ibid.
28 Glawischnig-Piesczek v Facebook Ireland Ltd (C-18/18) EU:C:2019:821.
29 Glawischnig-Piesczek v Facebook Ireland Ltd (C-18/18) EU:C:2019:821.
30 Ibid.
31 Ibid.
32 Ibid.
33 Reuters, ‘EU opens investigation into X over Grok’s sexualised imagery’ (26 January 2026) https://www.reuters.com/world/europe/eu-opens-investigation-into-x-over-groks-sexualised-imagery-lawmaker-says-2026-01-26/ accessed 30 January 2026.
34 Ibid.
35 Ibid.
36 Ibid.
37 NM v Smith (Freedom of Expression Institute as Amicus Curiae) 2007 (5) SA 250 (CC).
38 Ibid.
39 Ibid.
40 Ibid.
41 Ibid.
42 Various Claimants v WM Morrison Supermarkets plc [2020] UKSC 12.
43 Ibid.
44 Ibid.
45 Google Spain SL v Agencia Española de Protección de Datos (AEPD) (C-131/12) EU:C:2014:317.
46 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services (Digital Services Act) OJ L 277, 27 October 2022.
47 Ibid.
48 OECD, Artificial Intelligence and Responsibility (OECD Publishing 2019).
49 EU Artificial Intelligence Act 2024.
50 Ibid.
51 Ibid.
52 Ibid.
53 Ibid.
54 UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021).
55 Veale M and Borgesius F, ‘Demystifying the Draft EU Artificial Intelligence Act’ (2021) 22 Computer Law Review International 97.
56 Ibid.
57 Ibid.
58 Ibid.
59 EU Artificial Intelligence Act 2024.
60 Ibid.
61 Ibid.
62 Ibid.