THIS afternoon, a former student of mine is conducting a session on using artificial intelligence to “supercharge” one’s job. Unfortunately for me, the job Nix Lañas will talk about is not in journalism or communication, which I still teach. Lañas is now a real-estate broker and an associate guru in another school. “AI won’t replace brokers,” her poster reads, “but it will replace the admin work that you hate.” Her message might just as well refer to other jobs, perhaps including yours.
“ChatGPT is great for pushing paperwork like emails, contracts, computations, and data-heavy documents. It also helps entrepreneurs articulate big ideas, especially those who struggle with writing or English,” she replied when I asked her if she allowed her students to use AI. She acknowledges the danger of “unoriginal and copied ideas” but balances that with AI’s ability to “handle heavy data mining and analysis. Imagine piles of receipts and handwritten notes instantly sorted into neat Excel sheets in minutes. You waste no time in forecasting, execution and iteration.”
I asked another former student, Mayk Juat, who runs a video-production outfit, how AI-literate college students should be. Not only college students, he said, but younger students too, “in understanding what AI can and cannot do, and in using it as a tool for research, productivity, and creativity.” A few months ago, Juat ran a module in an AI workshop for marketers. “It should help develop critical thinking and ethical responsibility, not encourage copy-paste shortcuts,” he said.
On another front, a news executive last year said that AI literacy was a “mandatory skill set.” Increasingly, I am encountering potential bosses who are not only encouraging AI use but expecting future workers to know how to put it to good use.
Yet we also hear more from the AI skeptics who fear that it will “dull critical thinking” or even put an end to humanity. If we allow it to go unregulated, perhaps.
Lañas, Juat and the news executive are just three of the professionals who favor a “careful experimentation” approach to using AI for productivity.
While we wait for Congress to pass AI-related legislation, students and young professionals are already using all sorts of AI tools for every conceivable academic (as well as non-academic) purpose. They are, after all, more adventurous, more tech-savvy and sometimes more desperate.
On the other hand, some of their teachers are fearful, suspicious and perhaps intimidated by uncharted waters. Fortunately, our top universities have already adopted general policies, most of them reflecting the United Nations’ 10 core principles of ethical AI. They emphasize privacy, transparency, accountability, fairness, safety and security, the common good, and human oversight. These same principles can also be found in the 30-something bills filed in the 20th Congress.
These schools are united on the need to balance potential benefits and opportunities against the risks and negative consequences. As one of them put it, the aim is to encourage the “responsible use of generative AI in teaching, learning, and research through augmentation, rather than replacement, of human output.”
Two of these schools require professors to specify their GenAI use policy in their course syllabi, including the extent to which it is permitted.
Where I teach, we are supposed to tell students for which requirements they are free to use GenAI, where it is allowed only in specific contexts, and where it is banned. I teach a CHED-mandated class on risk and disaster communication, where I make students write risk-monitoring reports. Our data come from the latest updates from official sources online, but as much as possible I make them write their drafts longhand on paper, under time pressure. Those conditions allow them to work with heretofore unpublished facts, and to compose copy in their own hand.
Why, technology, if not education itself, can be a double-edged sword. The expertise learned in medicine, the humanities, social science, law, the sciences, business, and engineering can all help make ours a better world. Yet all that mastery can be corrupted and used for evil. As a biting poser of late challenges us: “Kung edukasyon ang sagot sa kahirapan, bakit edukado ang nagnanakaw sa bayan?” (If education is the answer to poverty, why is it the educated who steal from the nation?)
But that doesn’t mean education and technology — in this case Generative Artificial Intelligence — are to be shunned.
Educators are open to technology because of its potential to make better persons of us. In closing, we are reminded of a speech delivered more than 60 years ago by then Representative Jovito Salonga celebrating “The Educated Man,” whom he imagined as someone able to discern what is right and defend it with all the resources at his command, with “a healthy sense of values, a breadth of outlook and the depth of compassion… with intelligence and fairness and understanding.” Characteristics that perhaps we can teach AI.