DEEPFAKES AND AI-ENABLED CYBERSECURITY THREATS: ARE MALAYSIA’S EXISTING LEGAL FRAMEWORKS SUFFICIENT?
Lisshrina Subramaniam, Universiti Utara Malaysia
I. Introduction
The rapid growth of AI technologies such as deepfakes, generative AI, and AI-powered scams is undeniable, and with it the rise of cybersecurity threats in Malaysia has become increasingly apparent. Artificial intelligence has been adopted across various sectors in Malaysia, including finance, healthcare, education, e-commerce, and public administration, in line with the government’s digital transformation agenda. The Artificial Intelligence Roadmap (AI-RMAP 2021–2025) further demonstrates Malaysia’s commitment to positioning itself as a regional leader in AI development by outlining strategic initiatives to drive AI innovation, governance, and talent development.
However, alongside these advancements, reports of AI misuse have risen significantly. Recent news cases involving AI-generated deepfake pornography, voice-cloning scams impersonating public figures, and AI-powered online fraud schemes illustrate how emerging technologies are being exploited for criminal purposes. Statistical reports from enforcement agencies have also indicated a steady increase in online scam losses in recent years, reflecting the growing sophistication of cyber-enabled crimes.
Yet Malaysia has no standalone AI legislation, which raises the question of how courts are addressing AI-related harms. This article argues that Malaysian courts currently rely on traditional criminal and communications laws to regulate AI-driven misconduct, but that significant regulatory gaps remain. Accordingly, this article examines how Malaysian courts interpret and apply existing legal frameworks to AI-related offences, identifies the limitations of relying on conventional statutes, and assesses whether a more comprehensive regulatory approach is necessary to address emerging AI-driven harms effectively.
II. Understanding AI-Driven Cybersecurity Threats in Malaysia
First, deepfake technology, powered by artificial intelligence and machine learning systems, enables the creation of hyper-realistic synthetic images, videos, and audio recordings. In Malaysia, regulators have reported the large-scale removal of AI-generated and manipulated online content, including fabricated investment advertisements and digitally altered media circulating on social platforms (The Star, 2026). The Malaysian Communications and Multimedia Commission (MCMC) has actively sought the removal of tens of thousands of AI-generated disinformation posts, including deepfake scams, from social media to curb public deception (The Vibes, 2025). Government officials have also stated that Malaysia is considering mandatory labelling of AI-generated content under the Online Safety Act to help users identify potentially misleading material (Lee, 2025). One of the most concerning uses of deepfake technology is the creation of non-consensual explicit imagery and AI-generated sexually explicit material. The MCMC has stated that the repeated misuse of AI to generate obscene and non-consensual manipulated images, including those involving women and children, threatens privacy, dignity, and online safety, and constitutes a violation of national content laws (Free Malaysia Today, 2026). AI tools can replicate facial features and vocal patterns with increasing accuracy, allowing offenders to convincingly pose as trusted individuals. Malaysian authorities have recognised identity manipulation and AI-driven impersonation as emerging threats that complicate both prevention and enforcement efforts.
Second, artificial intelligence has significantly enhanced the sophistication and scale of cybercrime. Voice-cloning scams, for example, allow criminals to mimic the speech patterns of victims’ family members or colleagues to request urgent financial transfers. Malaysia’s Home Ministry has acknowledged that AI-enabled scams, including those that use synthetic audio and deepfake technologies to impersonate individuals and deceive victims, are a growing concern that current laws are ill-equipped to address (De Silva, 2025). Phishing operations have also evolved, with generative AI tools capable of producing grammatically precise and highly personalised scam messages. Unlike traditional phishing attempts, AI-generated communications are more difficult to detect, increasing the likelihood of successful fraud. Parliamentary discussions and digital policy reports indicate that AI-assisted scams and manipulated digital content are contributing to a rising number of cybersecurity incidents nationwide (Ministry of Digital Malaysia, 2025).
Cybersecurity threats such as fraud, identity theft, and misinformation have long existed in digital environments. However, AI amplifies these risks by increasing their speed, realism, and scalability. AI-generated content can be produced almost instantly, replicated across platforms, and continuously modified to evade detection systems. This technological advancement reduces the technical expertise required to commit sophisticated cyber offences, thereby lowering barriers to entry for malicious actors. Furthermore, AI-generated media blurs the distinction between authentic and fabricated evidence, complicating investigative and judicial processes. Authorities have highlighted that deepfake technology and AI-driven impersonation create new evidentiary challenges, particularly in proving authorship and intent. As a result, traditional cybersecurity risks become more dangerous in the AI era because they are no longer limited by human production capacity or technical skill, but are instead amplified by automated, adaptive systems.
III. Existing Malaysian Legal Framework
Malaysia’s existing legal framework governing AI-enabled cybersecurity threats relies primarily on technology-neutral statutes that were enacted before the rise of generative artificial intelligence. The Penal Code (Act 574) remains the principal criminal instrument used to prosecute AI-related harms. Section 292 on obscene materials has been applied to AI-generated deepfake pornography, focusing on the obscene nature of the content rather than the technological method of its creation. Similarly, the cheating and fraud provisions under Sections 415–420 are increasingly relevant where AI tools are used for voice cloning, phishing automation, or impersonation scams, while Section 503 on criminal intimidation may apply to deepfake blackmail or AI-assisted threats. Although these provisions are sufficiently broad to capture AI-enabled deception and harassment, they regulate the harmful outcome rather than the algorithmic process, creating doctrinal uncertainty in areas such as attribution, intent, and machine-assisted conduct.
The Communications and Multimedia Act 1998 (Act 588) further strengthens the regulatory framework by criminalising the improper use of network facilities under Section 233, which targets the transmission of obscene, false, or menacing content online. This provision enables enforcement against the dissemination of AI-generated deepfakes, misinformation, and automated scam communications. However, like the Penal Code, the CMA is content-focused and does not directly regulate AI systems, developers, or platform design. Complementing these statutes, the Sexual Offences Against Children Act 2017 (Act 792), in particular Section 5, provides enhanced protection where AI tools are used to generate exploitative content involving minors. Yet ambiguity persists as to whether fully synthetic child sexual abuse material created without depicting an actual child falls squarely within the statutory definitions, revealing interpretive challenges in the AI context.
The Personal Data Protection Act 2010 (Act 709) introduces a data governance dimension to AI regulation by imposing obligations on entities processing personal data in commercial transactions. Because many AI systems rely on biometric data, facial images, and large-scale data scraping for model training, the PDPA potentially governs aspects of deepfake creation and AI-driven identity manipulation where such activities involve “personal data” processed in the course of commercial transactions within the meaning of the Act. However, the PDPA applies only to personal data as statutorily defined and only in commercial contexts, and pursuant to section 3 it does not apply to the Federal Government or State Governments, thereby excluding public sector data processing from its regulatory scope. Nor does it explicitly address AI model training, automated decision-making, or algorithmic transparency. Taken together, Malaysia’s legal framework demonstrates adaptability but also strain: it addresses the consequences of AI misuse, such as obscenity, fraud, harassment, and data breaches, without directly regulating high-risk AI technologies. As a result, the current approach remains largely reactive, highlighting the need for clearer statutory definitions and AI-specific governance mechanisms.
IV. Judicial and Enforcement Responses
A. Case Study: AI Deepfake Prosecution (Johor Magistrate’s Court 2025)
In 2025, a Johor teen was charged under Section 292 of the Penal Code for allegedly creating and distributing AI-generated explicit images, reported as among the first publicly documented prosecutions involving deepfake pornography. The Magistrate’s Court relied on the traditional obscenity framework, focusing on whether the material was “obscene” rather than analysing the technological process used to generate it. This demonstrates the adaptability of Malaysian criminal law: the court treated AI-generated images as legally equivalent to conventional obscene materials. However, the reasoning remained technology-neutral, with no judicial articulation of standards for synthetic media, algorithmic manipulation, or AI-specific harm. The case thus reflects both the flexibility and the doctrinal limitations of relying on pre-digital statutes to regulate emerging AI misconduct (The Star, 2025).
B. Regulatory Actions by MCMC
Beyond the courts, the MCMC has taken an increasingly proactive role in addressing AI-generated harmful content. Investigations have been launched into deepfake dissemination, AI-assisted scams, and online misinformation, with authorities signalling that platforms may face legal consequences if they fail to remove or control such content. Under the Communications and Multimedia Act 1998 and the Online Safety Act 2025, enforcement efforts emphasise harmful content regulation and platform compliance rather than direct AI system oversight. The Online Safety Act, which came into force on 1 January 2026, imposes clear obligations on licensed Application Service Providers (ASPs), Content Application Service Providers (CASPs), and Network Service Providers (NSPs) to implement risk-based security measures, provide specific protections for children, and establish user reporting and assistance mechanisms, while also bringing large social media platforms under Malaysian jurisdiction via a deeming provision in the CMA 1998. Between January and late 2025, the MCMC acted on thousands of harmful posts, including fraud and harassment. The Act’s enforcement framework requires platforms to take decisive action against harmful online material, particularly to protect vulnerable groups such as children and families, including by removing harmful content and submitting online safety plans to demonstrate compliance.
This approach underscores a regulatory strategy centred on monitoring dissemination channels, although it stops short of imposing comprehensive AI-specific obligations on developers or service providers.
C. Cybersecurity & Platform Liability Cases
Emerging cybersecurity disputes suggest that Malaysian courts may increasingly assess whether platforms and digital service providers meet reasonable security standards in managing AI-related risks. Where AI systems enable large-scale scams, impersonation, or data misuse, principles of negligence could potentially apply if operators fail to implement adequate safeguards, monitoring mechanisms, or risk controls. However, it should be clarified that Malaysian courts have not yet imposed AI-based negligence liability, and the application of such principles in the AI context remains prospective rather than established. This raises complex questions regarding foreseeability, standard of care, and the allocation of liability between users and system designers. Although Malaysian jurisprudence has not yet developed a definitive AI liability doctrine, existing negligence principles provide a possible pathway for courts to evaluate accountability in cases involving autonomous or semi-autonomous AI systems (Kosmo, 2025).
V. Legal Gaps and Challenges
Despite these judicial and regulatory responses, substantial gaps remain in Malaysian law regarding AI and cybersecurity. First, there is no legal definition of “AI”. Existing statutes do not define artificial intelligence, making it difficult to delineate the scope of offences or liability clearly. Second, there are no deepfake-specific offences. While cases can be prosecuted under obscenity or fraud laws, there is no provision directly criminalising the creation or dissemination of AI-generated deepfakes. Third, evidentiary issues arise because proving authorship, intent, and AI-driven manipulation is technically complex, thereby complicating investigation and prosecution. Attribution of liability remains particularly challenging when content is automatically generated or disseminated by AI platforms. Fourth, cross-border enforcement difficulties arise because many AI tools and platforms operate internationally, creating jurisdictional challenges in investigating and prosecuting Malaysian cases. Lastly, regarding platform versus user liability, current laws do not clearly assign responsibility between AI platform providers and end-users, creating uncertainty in enforcement.
National discussions reflect these concerns. Former Deputy Minister Teo Nie Ching stated that amended CMA provisions and the Online Safety Act 2025 are “sufficient” for current threats, although civil society groups argue that AI-specific regulation is necessary to adequately address deepfake harms and prevent legal loopholes (Aliran, 2025). These debates underscore the urgent need for Malaysia to clarify both statutory definitions and liability frameworks in order to keep pace with the rapid evolution of AI-driven cybersecurity risks.
VI. The Proposed Cybercrime and AI Governance Bills
The Malaysian government has announced plans to introduce a comprehensive AI Governance Bill alongside a new Cybercrime Bill intended to modernise the existing digital legal framework. The Cybercrime Bill is expected to replace the Computer Crimes Act 1997, updating offences to address contemporary threats such as AI-enabled fraud, identity theft, automated phishing, and deepfake misuse. Concurrently, the proposed AI Governance Bill aims to regulate risks associated with artificial intelligence systems, including disinformation, synthetic media, and platform accountability. Reform discussions have also highlighted potential measures such as mandatory labelling of AI-generated content, enhanced investigative powers, and clearer compliance obligations for digital platforms (Malay Mail, 2026). These initiatives signal a legislative shift from purely technology-neutral enforcement toward more structured AI oversight.
Whether these reforms will fully resolve existing legal gaps depends on their design and implementation. If the bills introduce clear statutory definitions of AI, establish deepfake-specific offences, and clarify platform versus user liability, they could significantly strengthen Malaysia’s regulatory coherence. However, the effectiveness of the framework will hinge on balancing preventive regulation such as transparency requirements, risk assessments, and platform duties of care with reactive enforcement, including criminal prosecution after harm occurs. A purely reactive model risks perpetuating the current reliance on stretched traditional laws, while overly rigid preventive controls may inhibit innovation. Thus, the success of the proposed reforms will depend on whether they meaningfully integrate forward-looking risk management with enforceable accountability mechanisms (The Star, 2026).
VII. Recommendations
To strengthen Malaysia’s regulatory response to AI-enabled cybersecurity threats, three key reforms are particularly critical. First, Parliament should introduce AI-specific statutory definitions within the proposed AI Governance framework. Clear legal definitions of “artificial intelligence,” “generative AI,” and “deepfake” would reduce interpretive ambiguity and provide courts with doctrinal clarity when assessing liability. At present, enforcement relies on broad, technology-neutral statutes that do not distinguish between human-generated and algorithmically generated conduct. A statutory definition would ensure consistency in prosecution, regulatory oversight, and judicial reasoning while aligning Malaysia with emerging international standards (Malay Mail, 2026).
Second, Malaysia should enact a specific criminal offence targeting non-consensual deepfake creation and distribution, particularly involving minors or intimate imagery. While existing provisions under the Penal Code and Communications and Multimedia Act can address obscene or harmful content, they do not directly criminalise the manipulation of a person’s likeness through synthetic media. A targeted deepfake provision would shift the focus from general obscenity to consent, identity integrity, and digital exploitation. Such reform would close a significant doctrinal gap and reflect the evolving nature of AI-driven harms (The Star, 2026).
Third, legislation should impose a platform duty of care coupled with mandatory AI transparency requirements. Platforms deploying or hosting generative AI systems should be legally obligated to implement reasonable safeguards, risk assessments, content moderation mechanisms, and clear labelling of AI-generated material. This preventive approach would move Malaysia beyond a purely reactive enforcement model, distributing responsibility between users and service providers while enhancing accountability and traceability in cases of harm. Embedding transparency and oversight obligations within statutory law would provide regulators with clearer enforcement tools and strengthen overall cybersecurity governance (Malay Mail, 2025).
VIII. Conclusion
Malaysia is presently in a transitional phase in its regulation of artificial intelligence and AI-enabled cybersecurity threats. Courts have demonstrated adaptability by applying long-standing, technology-neutral statutes such as the Penal Code and the Communications and Multimedia Act 1998 to misconduct involving deepfakes, automated scams, and synthetic media. This judicial flexibility has allowed enforcement to proceed despite the absence of AI-specific legislation. However, reliance on pre-digital laws reveals structural strain: provisions drafted for conventional offences are being extended to govern algorithmic systems capable of generating harm at unprecedented scale and speed.
Sustainable AI governance in Malaysia will therefore require more than reactive interpretation. It demands legislative clarity through defined AI terminology, targeted offences for synthetic media abuse, structured platform accountability, and evidentiary standards capable of addressing algorithmic manipulation. At the same time, judicial development must keep pace, ensuring that principles of intent, causation, negligence, and liability evolve in response to semi-autonomous technologies. Ultimately, the question is no longer whether AI can commit harm, but whether the law can evolve quickly enough to respond.
References
Aliran. (2025). Why Malaysia needs stronger laws against AI-generated deepfakes. https://aliran.com/thinking-allowed-online/why-malaysia-needs-stronger-laws-against-ai-generated-deepfakes
Communications and Multimedia Act 1998 (Act 588) (Malaysia).
Computer Crimes Act 1997 (Act 563) (Malaysia).
De Silva, I. (2025, November 5). Urgent need to strengthen Malaysia’s legal framework against AI-driven scams. Penang Institute. https://penanginstitute.org/publications/issues/urgent-need-to-strengthen-malaysias-legal-framework-against-ai-driven-scams/
Free Malaysia Today. (2026, January 3). MCMC to call up X over AI-generated obscene images. https://www.freemalaysiatoday.com/category/nation/2026/01/03/mcmc-to-call-up-x-over-ai-generated-obscene-images
Kosmo. (2025, October 8). Guna AI untuk menyerang, menipu. https://www.kosmo.com.my/2025/10/08/guna-ai-untuk-menyerang-menipu/
Lee, B. (2025, July 13). Govt considering mandatory ‘AI generated’ label under Online Safety Act, says Fahmi. The Star. https://www.thestar.com.my/news/nation/2025/07/13/govt-considering-mandatory-039ai-generated039-label-under-online-safety-act-says-fahmi
Malay Mail. (2025, July 13). AI-generated labelling could become law by end-2025, communications minister says to curb scams, defamation and deepfakes. https://www.malaymail.com/news/malaysia/2025/07/13/ai-generated-labelling-could-become-law-by-end-2025-communications-minister-says-to-curb-scams-defamation-and-deepfakes/183830
Malay Mail. (2026, February 9). Digital minister confirms AI governance bill to combat deepfakes is in the works. https://www.malaymail.com/news/malaysia/2026/02/09/digital-minister-confirms-ai-governance-bill-to-combat-deepfakes-is-in-the-works/208639
Malaysia Artificial Intelligence Roadmap 2021–2025 (AI-RMAP). Ministry of Science, Technology and Innovation. https://mastic.mosti.gov.my/publication/artificial-intelligence-roadmap-2021-2025/
Ministry of Digital Malaysia. (2025, August 5). AI guidelines, training programmes among proactive measures by the Ministry of Digital to combat online scams. Digital Gov Malaysia. https://www.digital.gov.my/en-GB/siaran/Garis-Panduan-AI,-Program-Latihan-Antara-Langkah-Proaktif-Kementerian-Digital-Untuk-Memerangi-Online-Scam
Online Safety Act 2025 (Malaysia).
Penal Code (Act 574) (Malaysia).
Personal Data Protection Act 2010 (Act 709) (Malaysia).
Sexual Offences Against Children Act 2017 (Act 792) (Malaysia).
The Star. (2026, January 29). New AI laws to arrest deepfakes. https://www.thestar.com.my/news/nation/2026/01/29/new-ai-laws-to-arrest-deepfakes
The Star. (2025, April 23). AI deepfake porn case: Teen’s detention exceeded remand period, rep claims. https://www.thestar.com.my/news/nation/2025/04/23/ai-deepfake-porn-case-teen039s-detention-exceeded-remand-period-rep-claims
The Vibes. (2025, August 30). Malaysia removes over 40,000 AI-generated disinformation posts in three years. https://www.thevibes.com/articles/news/112177/malaysia-removes-over-40000-ai-generated-disinformation-posts-in-three-years
