A FREQUENTLY misunderstood expression in media theory is Marshall McLuhan’s “the medium is the message,” which is often taken literally. By message, McLuhan was not referring to content, which he said “blinds us to the character of the medium,” but to “the change of scale or pace or pattern that it introduces into human affairs.” The greater the change, the more it alters our lives, the more it defines our routines, the louder the message.
When he said this 60 years ago, the most disruptive medium was black-and-white TV, which changed a few things in our routines. Today, we have the smartphone, which has brought about every imaginable alteration in the way we do things.
What makes the smartphone seem smart has much to do with a rapidly growing technology we know as artificial intelligence. AI is involved in so many of the things that make our lives convenient that, for some people, a simple hardware or connectivity issue can feel existential.
McLuhan noted how new technology tends to eliminate jobs, but also to create roles. The same goes for AI. And maybe more.
In the latest Mission: Impossible installment, the global enemy is an advanced AI called “The Entity.” It is described as “very intelligent and cunning, creating plans on top of plans and manipulating humans to its advantage. It tries to be very mysterious, hiding its true intentions to anyone including its follower[s].”
In real life, we have heard the so-called Godfather of AI himself, the Nobel laureate Geoffrey Hinton, warning of the dire consequences — “human extinction within the next three decades” — of AI development “without intention and guardrails.”
This week, we note the nearly 30 AI-related proposals in the 20th Congress. We focus on 10 bills that seek to develop and regulate AI: House Bill Numbers 13, 57, 73, 252, 658, 659, 1196, 2827 and 3195, and Senate Bill No. 25. They go by titles such as the AI Development Act, AI Development and Regulation Act, AI Regulation Code, National AI Code, and AI Governance Act.
These bills did not materialize out of nowhere. Four years ago, UNESCO adopted 10 core principles guiding a human-rights approach to AI ethics. These are: Proportionality and Do No Harm; Safety and Security; Right to Privacy and Data Protection; Multi-Stakeholder and Adaptive Governance and Collaboration; Responsibility and Accountability; Transparency and Explainability; Human Oversight and Determination; Sustainability; Awareness and Literacy; and Fairness and Non-Discrimination.
Locally, expert groups like the Ambit Coalition have been clamoring for a national AI strategy by way of legislation, among other recommendations that reflect the UNESCO core principles.
These principles are reflected in the congressional bills, where they appear as declarations of principle, bills of rights, strategies, or prohibitions.
In addition to the principles, the bills call for policymaking and regulatory bodies like the Philippine Council on AI, AI Board, AI Bureau, AI Development Authority, Bureau of AI Systems, National AI Commission, and the National Center for AI Research.
Given the bills’ similarities and differences, we expect a refined substitute bill. And while we appreciate judicious deliberation, we must also recognize that we are lagging in adopting an official national AI policy, even within Southeast Asia.
Any approach to regulating AI should balance cynicism and naïveté, innovation and restraint, XY and XX tensions, Jekyll and Hyde, destruction and preservation, technophilia and technophobia.
Such a highly anticipated and badly needed top-level policy will strategically guide decisions across society, particularly in the various industries and in academe.
For instance, some Philippine newsrooms have either flatly rejected all AI tools that could otherwise benefit the editorial process, or do not bother to find out how their reporters and editors are using work-related apps. Others view AI skills as “mandatory” and follow a path of “careful experimentation.” Different work and learning environments have their own stories to tell, but too often in the absence of meaningful direction.
McLuhan ended his famous essay with a quote he attributed to Carl Jung: “Every Roman was surrounded by slaves. The slave and his psychology flooded ancient Italy, and every Roman became inwardly, and of course unwittingly, a slave. Because living constantly in the atmosphere of slaves, he became infected through the unconscious with their psychology. No one can shield himself from such an influence.”
We can reword this for the 21st century: today we are the Romans, swamped by AI-driven technology. Unthinking, unreflecting and uncritical, we risk becoming sleepwalking slaves of AI, the victims of a true-to-life Entity.
We have to engage with AI, learn it, and hopefully profit from the experience. At the same time, we need to be wary of its dangers. We learned how to deal with fire and air travel, and we are learning about renewable energy. There should be a way to make the most of AI without surrendering our humanity.
For Geoffrey Hinton, we can achieve this by training AI to be compassionate: “super-intelligent caring AI mothers” who “won’t want to get rid of the maternal instinct because they don’t want us to die.”
We look forward to a law that will promote smart technology that will, in turn, make us smarter, more productive, and more human and humane.