Let’s start with an analogy to set the stage.
If you have watched the TV show Person of Interest, you know it features two intelligent surveillance systems. The Machine, built by Finch, is the ethical one and is considered the protagonist. The second system, Samaritan, was built by Arthur but deployed by Decima Technologies. That’s not what I want to talk about, though. I want to focus on their working philosophies.
The Machine is the ethical one. It focuses on giving everyone a chance and does not classify anyone as enemy or friend; it simply flags a subject as a person of interest, and it is up to its agents to investigate and determine where the subject falls in that dichotomy. This makes it the system everyone would prefer watching over them, or maybe just me.
Samaritan, by contrast, does things differently. It perceives everything in binary: good or evil. These labels are not permanent; someone deemed good can be reclassified as evil or irrelevant at any time, which triggers the target’s elimination. Samaritan can also take requests, meaning an agent can ask for a person to be found, and within seconds Samaritan locates the subject. This is not the case, at least at first, with the Machine. The access that lets Samaritan’s agents choose targets makes it dangerous, as it can be used for malicious purposes.
Immediately after I started reading the report, I began thinking about this show. Many of the functions of these surveillance systems remind me of how AI is being used today. According to the International AI Safety Report published in January 2025, various risks have been identified in the use of general-purpose AI.
Before we get to the risks…
What is general-purpose AI?
General-purpose AI can perform a wide variety of tasks: it can write computer programs, generate custom photorealistic images, and engage in extended open-ended conversations. Most of us have used this type of AI in one way or another.
Companies are rushing to invest in and develop general-purpose AI agents to compete and advance their standing in the market. AI agents are autonomous, general-purpose AI systems that can act, plan, and delegate; they are meant to achieve a goal with minimal or no human oversight. This rapid development, however, leaves little time to evaluate the risks such systems pose. The result is the evidence dilemma: policymakers cannot weigh the potential risks and benefits of these advancements because there is insufficient scientific evidence. Most of the risks identified below suffer from this dilemma, since the pace of growth has not allowed enough time to study the systems and evaluate their risks.
So, which are these risks, you ask?
Let’s dive in and look at General-purpose AI risks:
According to the report, the risks fall into three categories:
- Malicious use risks
- Risks from malfunctions, and
- Systemic risks
Several harms from general-purpose AI are already well established. Nonconsensual intimate imagery (NCII), child sexual abuse material (CSAM), biased outputs about people and viewpoints, reliability issues, and privacy violations are common issues accompanying the rapid rise of general-purpose AI. Additional risks continue to emerge as these systems demonstrate new capabilities. Let’s briefly look through the three categories mentioned above.

Malicious use risks
When we talk about malicious use, we mainly refer to malicious users. Given the capabilities of general-purpose AI today and in the future, malicious actors can use it to harm individuals, organizations, and society. Lately, there has been a surge in fake content created to embarrass individuals, especially celebrities, or portray them in a negative light. Cases of President Trump being portrayed as saying things he never said have been on the rise in the media (link). AI-generated content has also been used to harm individuals through nonconsensual deepfakes, and voice impersonation can be used for financial fraud and blackmail.
With the generative power of general-purpose AI, manipulative content can be created to sway public opinion: a single prompt can generate content at scale for manipulating political views. One mitigation is content watermarking, but this can be circumvented with simple edits such as cropping or other image manipulation.
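To make that fragility concrete, here is a minimal sketch in pure Python (a toy illustration, not any real watermarking scheme): a naive least-significant-bit (LSB) watermark survives a faithful copy of the “pixels,” but something as simple as a crop shifts the bit alignment and destroys the mark.

```python
# Toy LSB watermark on a flat list of pixel values (no image libraries).
# All names and values here are illustrative, not from the report.

def embed(pixels, mark_bits):
    """Write mark_bits into the LSBs of the first len(mark_bits) pixels."""
    out = list(pixels)
    for i, bit in enumerate(mark_bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(pixels, n_bits):
    """Read back the LSBs of the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

mark = [1, 0, 1, 1, 0, 0, 1, 0]                      # hypothetical 8-bit mark
image = [200, 13, 57, 91, 144, 6, 220, 34, 78, 99]   # toy "pixel" values

marked = embed(image, mark)
assert extract(marked, len(mark)) == mark   # faithful copy: mark recovered

cropped = marked[3:]                        # a simple crop drops leading pixels
assert extract(cropped, len(mark)) != mark  # alignment lost: mark destroyed
```

Real watermarking schemes are far more robust than this, but the underlying point stands: any mark embedded in the content itself can be attacked by transforming the content.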
Another, scarier risk is cyber offense. General-purpose AI can detect previously unknown bugs and security vulnerabilities in systems. This is both a benefit and a risk, depending on who is using it: a malicious attacker would exploit the vulnerability, while a defender would focus on patching it.
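As a toy illustration of this dual-use nature (hypothetical, not from the report), consider a trivial scanner that flags calls to the notoriously unsafe C function `gets()`, a classic buffer-overflow entry point. The very same finding tells a defender what to patch and an attacker where to strike.

```python
import re

# Toy "vulnerability scanner": flags calls to gets(), which performs no
# bounds checking. This is a stand-in for far more capable AI-driven tools.
def scan(source: str) -> list[int]:
    """Return 1-based line numbers containing a call to gets()."""
    return [
        i for i, line in enumerate(source.splitlines(), start=1)
        if re.search(r"\bgets\s*\(", line)
    ]

c_code = """\
#include <stdio.h>
int main(void) {
    char buf[16];
    gets(buf);      /* unsafe: no bounds check */
    return 0;
}
"""

print(scan(c_code))  # the flagged line serves patcher and attacker alike
```

The tool itself is neutral; whether its output hardens a system or compromises one depends entirely on who runs it.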
We also have the potential for biological and chemical attacks arising from the use of AI. According to the report, recent AI systems have displayed some ability to provide instructions and guidance for reproducing known biological and chemical weapons and to facilitate the design of novel toxic compounds. This is a major risk: malicious individuals using AI’s power to prepare or carry out such an attack could cause massive loss of life. One possible mitigation is to use AI to hypothesize antidotes to chemical and biological weapons, though by the same token the weapons themselves can be hypothesized. Perhaps watch Night Agent season 2 before doing this.

Risks from malfunctions
Even without malicious users, AI can malfunction and cause harm on its own. I am not talking about Terminator-style scenarios, but about the risks that can arise when general-purpose AI malfunctions. One of these is reliability. People currently use general-purpose AI for all sorts of tasks, including seeking medical or legal advice. Yes, we have gone from googling our symptoms to asking AI for a diagnosis!
These systems can generate false or misleading responses, which most people don’t take the time to verify. Often this is due to limited AI literacy: many users don’t know that AI responses can be wrong or that AI systems can hallucinate. Misleading advertising and miscommunication by AI developers also discourage verification.
AI systems can also amplify social and political biases, harming the affected groups. The result can be discriminatory resource allocation, reinforced stereotypes, and the neglect of some groups or viewpoints.
Remember, AI governance does not aim to thwart the development and advancement of AI systems, but to prevent harm resulting from that advancement.