The recent barbs exchanged by youth leaders over allegations that the ruling party, Swapo, generated parts of its election manifesto using ChatGPT have stirred debate.
This commotion comes at a time when Namibia has just celebrated the eighth National Information and Communication Technology (ICT) Summit, centred on digital transformation and artificial intelligence.
There, ICT minister Emma Theofelus said ethical considerations and protection of rights need to be interrogated to ensure AI achieves its desired goals.
While Namibia and the world at large are still in the process of digital transformation, we are relying on new systems and approaches to adapt to the digital era, and AI is at our fingertips.
Not only does it create convenience, but it also saves time and provides information that, to the naked or untrained eye, could look accurate.
However, AI might be penetrating organisations faster than policies and strategies are established to ensure that as we embrace innovation, we maintain ethical standards and integrity.
It is equally important to distinguish between generative AI and computational systems that are meant to facilitate productivity or speed up processes.
Therefore, every sector and every institution must produce policies to guide and inform operators and the public on where to draw the line, ensuring that credibility and originality are maintained and ethical conduct is upheld.
Amid the uproar are also allegations that an academic used ChatGPT to conduct research. Whether or not this is true, it raises concern about the measures institutions have in place to detect the use of AI beyond plagiarism software such as Turnitin. Should AI-generated theses and dissertations slip through the cracks, the reputations of institutions, and those of their graduates, will also be called into question.
The purpose of education is to instil requisite skills in students so that they become change-makers, problem-solvers and employable members of society. With Namibia already grappling with high unemployment, the problem could be exacerbated if institutions produce unemployable graduates. It also reflects negatively on the economy, because some problems cannot be solved by ChatGPT.
Institutions should, therefore, be vigilant against this new form of academic dishonesty to restore people’s trust in graduates and in the qualifications they will hold after graduating.
This is, however, only possible if everyone understands that generating an assignment or thesis using ChatGPT is a form of theft, plagiarism and manipulation. Plagiarism is an act of fraud, and fraud is not just unethical; it is also illegal.
Equally, publishing AI-generated content without crediting the original source of the information is unethical, as it does not reflect one’s own thinking or point of view.
In February, OpenAI, the company that owns ChatGPT, completed a deal that values the San Francisco artificial intelligence company at more than US$80 billion.
What we should all know is that these giant Western companies, including Google, rarely consider their impact on the global south and make no effort to learn our way of doing things. In fact, their machines and bots mainly dredge up content that they themselves create, and whatever they present therefore carries a heavy Western slant or bias.
It is, therefore, important to strike a balance by using AI responsibly to inform thought or prompt ideas, and not the other way round. While we may think we should rely on AI, it in fact depends on us for prompts and programming. It can only generate as much as we feed it.
We should train our minds to be knowledgeable, intellectual and quick-thinking, able to respond authentically in conversation without having to reply with ChatGPT. For students, assessments train them to solve problems they might encounter in their daily lives or careers. Relying on AI might defeat this purpose.