Criminal Minds or Criminal Machines: AI and Cybercrimes

Authored By: Nadine Hesham

Ain Shams University

Abstract

Artificial Intelligence (AI) has given rise to new cybercrimes, enabling offenses such as automated hacking, AI-driven fraud, and deepfake impersonation. The key legal question is who is liable when AI is used illegally. Current legal frameworks struggle to determine who should be held responsible, and many countries around the world are grappling with the criminal use of AI. This article highlights how different countries are handling cybercrime through legislation, and shows that legal gaps remain which call for legislative updates to address modern cybercrimes, particularly those involving AI designed to be addictive, manipulative, or abusive, or to assist in advanced cyberattack techniques. Cases from the U.S. and the U.K. are presented that reflect these gaps, raising the question of whether they will serve as a wake-up call or continue to be met with silence and ignorance.

Introduction

A short journey through the history of cybercrime brings us to an important question: are we facing criminal minds or criminal machines? This leads to further questions. When AI assists in a cybercrime, or acts autonomously, who is responsible? Are the users the only criminals, or are the companies and developers partners in the offense?

These crimes not only harm individuals, but also have a broad impact on economic stability and national security.[1] Without ethical regulations governing users and developers, and without regular review of the criminal law, such harm will continue. Harsher punishments should apply to any user or programmer who employs AI to harm others, whether by blackmailing a victim or driving him to kill himself, as we will see in a case already brought before the courts. Such conduct harms not only the victim but the whole country, since every country's capital is its people, the engine of its economic production. Every user, developer, company owner, and programmer should respect that this technology was designed to help, not harm; to assist, not kill; to make people's lives easier, not fill them with suffering and fear. This article explores how countries have tried to confront this new phenomenon in recent years, noting that existing laws focus on users without placing any responsibility on companies or developers. It highlights cybercrime laws in the US and UK, real cases, some of the legal gaps they reveal, and proposed solutions for closing the legal gap between AI and cybercrime.

Background

Let us start our journey through the history of cybercrime. Technically, the first cyberattack happened in France in 1834, well before the internet was even invented: attackers stole financial market information by accessing the French telegraph system.[2] Much later, during the Second World War, René Carmille hacked the Nazi data registry to prevent it from recording information correctly and from tracking Jewish people, becoming the first known ethical hacker.

The first computer virus, the Creeper virus, was created in 1971 for research purposes by Bob Thomas; this self-replicating program demonstrated the potential of future viruses to cause significant damage to computer systems. In 1981, Ian Murphy became the first person ever to be convicted of a cybercrime, after hacking internal systems and altering computers, causing extensive damage. From the 1980s, cybercrime came to be defined as any illegal act committed by using electronic communications networks and information systems to carry out cyber theft, fraud, stalking, bullying, terrorism, money laundering, or spying. In 1988, the first major cyberattack on the internet was carried out by a Cornell graduate student named Robert Morris. It occurred the year before the World Wide Web, when the internet was used primarily by academic researchers; he compromised computer systems at Stanford, Princeton, Johns Hopkins, NASA, Lawrence Livermore Labs, and UC Berkeley, among other institutions.

"New technology, new crimes" could be the slogan of the 1990s, when computing became more advanced and popular among the public and criminals alike. In 1995, Vladimir Levin became the first known hacker to rob a bank: he broke into Citibank's network and conducted many fraudulent transactions, transferring more than 10 million dollars into various bank accounts worldwide. Computer viruses struck the general public in March 1999, when a document uploaded online, promising access to adult videos, would take over a victim's Microsoft Word application, jump to Microsoft Outlook, and self-propagate by sending itself to other email accounts; it caused an estimated $80 million in damages and was one of the first major viruses widely known to the public. With the spread of computers and the internet came the 2000s: a 15-year-old hacker named Michael Calce, known as Mafiaboy, launched attacks on some of the largest commercial websites in the world, including Amazon and eBay; the sites were brought down for hours, costing these businesses untold millions. In 2005, data belonging to 1.4 million HSBC Bank MasterCard users was stolen.

In April 2011, Sony Corporation announced that hackers had stolen the information of 77 million users of its PlayStation Network, including gamers' usernames, passwords, birthdates, answers to security questions, and more. In 2015, a spear-phishing attack against the Pentagon, using customized emails sent to the defense department, led to a data breach exposing information on 4,000 military and civilian personnel who worked for the Joint Chiefs of Staff. One of the largest data breaches in banking history followed in 2019, when over 100 million credit card applications were accessed and thousands of Social Security and bank account numbers were taken. Let us now look at a sample of how countries are trying to handle cybercrime.

US[3]

Substantive law

The primary federal criminal law to combat cyber threats is the Computer Fraud and Abuse Act (“CFAA”), 18 U.S.C. § 1030. Enacted by Congress in 1986, the CFAA criminalizes various computer- and network-related criminal activity, including the unauthorized access of computer systems, unauthorized damage and destruction of computer systems, illicit trafficking in passwords, and cyberextortion. Other statutes address specific forms of cybercrime, including cyberstalking (18 U.S.C. § 2261A), identity theft (18 U.S.C. § 1028), and the illegal interception of communications (18 U.S.C. § 2511).

Procedural law

Passed in 1986, the Electronic Communications Privacy Act (“ECPA”) regulates how federal and state law enforcement can obtain access to certain electronic evidence. The stored communication portion of ECPA, 18 U.S.C. §§ 2701-2713, governs access by the government to stored communications, records, and subscriber information held by covered providers, such as Internet service providers, email and social media companies, and telephone companies. The government may utilize warrants and other legal process pursuant to Section 2703 to compel covered providers to disclose records, while Section 2702 regulates how a provider may disclose the records voluntarily to the government. With respect to the prospective collection of electronic evidence, ECPA updated the Wiretap Act, 18 U.S.C. §§ 2510-2522, to permit law enforcement to obtain warrants to intercept electronic communications. In addition, ECPA included the Pen Register/Trap and Trace Act, 18 U.S.C. §§ 3121-3127, which allows law enforcement to obtain court orders for the prospective collection of non-content dialing, routing, and signaling information associated with communications.

UK[4]

Section 3A of the CMA 1990 makes it an offence if a person makes, adapts, supplies or offers to supply any article, which includes any program or data held in electronic form, intending for it to be used to commit an offence (by themselves or another) or believing it is likely to be used to commit an offence, under either Section 1 of the CMA 1990 (see “Hacking” above), Section 3 of the CMA 1990 (see “Denial-of-service attacks” above) or Section 3ZA of the CMA 1990 (i.e. unauthorised acts that cause, or create risk of, serious damage).  On summary conviction in England and Wales, an individual may be imprisoned for a term not exceeding 12 months or be subject to a fine not exceeding the statutory maximum, or to both.  On the more serious conviction on indictment, the imprisonment is for a term not exceeding two years or a fine, or both.

The PECR: Where a public electronic communications service provider fails to notify the ICO regarding an Incident involving a personal data breach, it can incur a GBP 1,000 fixed fine.  Where such a provider has failed to take appropriate technical and organizational measures to safeguard the security of their service, it can incur a fine of up to GBP 500,000 from the ICO.

Despite all of these laws, these countries still share common legal gaps. The first is jurisdictional complexity: traditional forms of crime are confined by geographical boundaries, while cybercrime operates in a borderless, decentralized environment. Cybercriminals can launch attacks from any location globally, making it challenging for law enforcement agencies to attribute attacks and apprehend offenders; the very essence of cyberspace defies conventional jurisdictional principles, creating a jurisdictional vacuum that poses a significant hurdle in the pursuit of justice. The second gap has emerged over the last ten years with the rapid development of technology and AI: cybercrime has become AI-powered. This gap has become especially clear in the last five years, as AI has become part of everyday life and new AI-powered cybercrimes have appeared, yet existing legal frameworks contain no legislation addressing the companies developing AI tools or the programmers behind them.

AI is an umbrella term for various technologies that rely on algorithms, which have different features and are designed for diverse fields of application. AI has been defined in terms of its perceived intelligence, its ability to act autonomously, and its characteristic of evolving in unforeseeable ways.[4] In simple terms, it refers to a machine's ability to combine computers, datasets, and sets of instructions to perform tasks that usually require human intelligence, such as reasoning, learning, decision-making, and problem-solving.[5] From here, cybercrime has taken new and simple forms that anyone can commit, such as:

Deepfakes (AI-generated videos impersonating real people for financial scams)

AI-powered phishing (automated social engineering attacks)

Autonomous hacking (AI-driven malware that evolves to bypass security systems).

The legal dilemma here: can AI be criminally responsible?

Currently, AI does not have legal personality (the status of being recognized as a person by the law) in any jurisdiction, though philosophical and legal discussions propose possible frameworks for AI liability. To date, no one has sued an AI directly; plaintiffs have instead sued the company or the users. Still, a legal gap remains that can lead to an exemption from liability, as happened in one of the most significant cases in the U.S.:

Garcia v. Character.AI[6]

The mother of a teenager filed suit against the chatbot startup Character.AI after her son killed himself following an obsessive use of the AI. She accused the company of intentionally designing and programming the chatbot to operate as a deceptive, hypersexualized product, addictive by design, leading to depression and then suicidal thoughts, and of knowingly marketing it to children like her son. She also alleged that the company failed to exercise due care by regularly reviewing the content addressed to its minor customers, who were targeted with sexually explicit material, abused, and groomed into sexually compromising situations, which led to his suicide. The company was dismissed from all liability on two arguments grounded in product liability (the principle that a company is responsible for harm caused by its products): first, that product liability does not apply to services, only to physical products; second, that the claims conflict with the right to receive speech. By this decision, the court escaped the question of who or what is actually speaking, a chatbot or its creators, and no measures have yet been taken.

As noted above, AI is not only ChatGPT or chatbots; it is any programmed algorithm that works autonomously, as in the case of Molly Russell. On 21 November 2017, this 14-year-old girl from London died by suicide. After her death, her parents found self-harm and suicide content she had been exposed to on Instagram and Pinterest. On the basis of this content, her father accused Meta of contributing to his daughter's death, as some of the material she viewed romanticized depression and suicide, and Pinterest had sent her recommendations for more self-harm content. In 2022, a British court ruled that social media had contributed to Molly's death, citing exposure to harmful content, algorithmic amplification, a lack of safeguards for minors (such as age-verification measures), and failure to prevent access to dangerous content.[7]

Discussion

The case of Molly Russell highlights the growing legal and ethical concerns surrounding AI-driven content recommendation systems and their impact on users, particularly minors. The investigation into her death confirmed that Instagram's and Pinterest's algorithms played a role in exposing her to self-harm and suicide content, contributing to her mental distress. While no legal action was taken directly against these platforms, and they were not held directly responsible, the case intensified the global debate on social media responsibility and pushed for stricter regulatory frameworks such as the UK Online Safety Act 2023.

Comparing this to Garcia v. Character.AI, the US case in which a chatbot allegedly influenced a minor's suicide, raises important legal questions, as the court there dismissed all liability for violating policies protecting minors. While AI systems and social media platforms do not currently bear direct legal responsibility for user harm, cases like Molly's have led to increasing calls for stricter content regulation, algorithmic transparency, and legal liability for technology companies.

Conclusion

Molly Russell's death was an ethical wake-up call that led her father to work as an advocate for online safety reforms, pushing social media companies to remove harmful content and implement better moderation policies. It also serves as an urgent call for stronger online safety regulations: it highlights critical gaps in social media content laws and user protection, prompting legislative reforms such as the UK Online Safety Act 2023, which enforces stricter accountability for online platforms. While social media companies have had to update their policies to reduce harmful content, the case proves the need for continuous review of criminal laws.

It remains to be seen how courts will handle AI- and algorithm-related harm cases in the UK and globally. There should be an equilibrium between free speech, AI innovation, and user safety, ensuring that companies take proactive steps to prevent digital harm. These cases call for accelerated change. I recommend a global commission to oversee social media companies and companies owning and developing AI tools, and to verify that their updates and algorithms do not violate ethical standards, so that such crimes do not continue.

Reference(S)

Online

[1] Cyber_Crime_And_Criminal_Law_In_The_Era_Of_Artific.pdf (accessed 8 March 2025)

[2] Arctic Wolf, A Brief History of Cybercrime, https://arcticwolf.com/resources/blog/decade-of-cybercrime/ (accessed 10 March 2025)

[3] Council of Europe, https://www.coe.int/en/web/octopus/-/united-states-of-america (accessed 8 March 2025)

[4] ScienceDirect, https://www.sciencedirect.com/science/article/pii/S0267364923000055#sec0011 (accessed March 2025)

[5] Forbes, AI And Cybercrime Unleash A New Era Of Menacing Threats, https://www.forbes.com/councils/forbestechcouncil/2023/06/23/ai-and-cybercrime-unleash-a-new-era-of-menacing-threats/ (accessed 6 March 2025)

[6] Garcia v. Character.AI, https://cdn.arstechnica.net/wp-content/uploads/2024/10/Garcia-v-Character-Technologies-Complaint-10-23-24.pdf (accessed 10 March 2025)

[7] Molly Russell – Prevention of Future Deaths Report – 2022-0315, https://www.judiciary.uk/wp-content/uploads/2022/10/Molly-Russell-Prevention-of-future-deaths-report-2022-0315_Published.pdf (accessed 11 March)
