Remarkably, we are unanimous in our assessment of the benefits and opportunities of Artificial Intelligence (AI): its capacity to optimize business processes, drive operational competitiveness, and increase productivity across several industries.

Even at this early stage of adoption, AI has already made verifiable impacts and shown great potential to become one of the most impactful technologies of the last decade, with business applications that are both universal and tailored to specific use cases. At the MTN CTIO Roundtable Africa event last month, speakers, panelists, and participants were convinced of its usefulness and agreed that, despite its associated risks, every business seeking to drive value should accelerate its adoption.

However, one concern left unresolved, not only at the roundtable but in the broader conversation about AI, is whether we need to regulate it now or in the future. On this question of when to regulate, two schools of thought have emerged strongly: regulate it now versus regulate it in the future.

In this article, I illustrate why I strongly support the latter school of thought and demonstrate how existing legal frameworks already provide regulatory safeguards and remedies for the concerns associated with AI adoption.

 

ARTIFICIAL INTELLIGENCE (AI), ITS USE CASES AND CONCERNS

In simple terms, “Artificial Intelligence” (AI) denotes the demonstration of “intelligence” by computers or machines in the performance of assigned tasks.

By design, human beings are the primary creatures gifted with the cognitive power to think, act, and produce outcomes in rational and logical forms. Over time, the exercise of such powers has proven human beings to be intelligent creatures, and that display of intelligence has accounted for all human inventions, innovations, and developments, including the making of computers.

Comparably, human beings have remained superior to any other creature or creation on the question of intelligence. Nonetheless, human ingenuity has also produced advanced computing devices whose outputs approach, and sometimes surpass, human intelligence and capacity.

The use of these powerful computers, with their underlying technologies, algorithms, and training on datasets, is producing outputs once achievable only through human intelligence. And because it is computers or machines exhibiting these human-comparable levels of intelligence, such displays are credited as “Artificial Intelligence” (AI).

In less than a decade, we have witnessed transformational advances in AI tools across almost every industry, making AI the fastest-evolving technology of our time. In the process, innovative ways of performing tasks and designing new products and services have emerged in manufacturing, healthcare, agriculture, financial services, telecommunications, and customer service, among other sectors. Countless use cases are demonstrating improved efficiency, quality of work, and productivity.

The overall impact of AI on businesses going forward will be comparable to the effect of the internet on business today (remember the internet downtime of Thursday, 14th March 2024?). AI will become to businesses what oxygen is to humans. The relevance of its subcategories, such as Generative AI, to everyday life and work is even more compelling. With AI, the cognitive powers of computers and machines have been unlocked to simplify work and life.

These benefits notwithstanding, there are raging concerns about its adoption and use. Forbes lists 15 such concerns: lack of transparency, bias and discrimination, privacy concerns, ethical dilemmas, security risks, concentration of power, dependence on AI, job displacement, economic inequality, legal and regulatory challenges, an AI arms race, loss of human connection, misinformation and manipulation, unintended consequences, and existential risks.

It is these concerns that are heightening the calls for the immediate regulation of AI.

 

THE CALLS FOR ITS IMMEDIATE REGULATION

It is instructive to note that the full potential of AI has not yet been realized. At best, we are just beginning to experience its experimental use cases and cannot yet fully understand its real scope, potential, and limitations.

Its rapid application across several industries, its remarkable outputs, and our inability to predict its full scope and limitations have produced some of the concerns listed above. In particular, the fear that AI systems may develop in ways that do not align with human values and priorities has intensified the call for regulation.

However, despite AI's widespread usage, such calls have resulted in only limited legislative initiatives: in Europe, the European Union (EU) AI Act; in the United States, the National Artificial Intelligence Initiative Act of 2020 (H.R. 6216); and at the United Nations, the General Assembly's recently adopted first resolution on AI.

While these legislative and regulatory interventions seek to ensure the ethical and responsible development and use of AI systems to advance human development, the biggest challenge remains our limited ability to fully understand and appreciate how AI technologies work: the “black box” syndrome.

It is practically impossible to regulate what one does not know or understand. By legislative design, we regulate what we know and understand, and largely prohibit what we do not. However, legislating to prohibit AI because we do not understand its workings is not a realistic option given its demonstrated usefulness so far; society stands to lose from any attempt at prohibition.

Therefore, the legislative approaches so far have been to set up legal frameworks that ensure safe, secure, and trustworthy AI systems. For example, according to the EU, the aim of its AI Act is to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. Additionally, the Act seeks to “foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing the risks of very powerful and impactful AI models.”

These objectives have been set from the standpoint of an appreciable understanding of how AI systems work. By defining four levels of risk for AI systems, namely unacceptable risk, high risk, limited risk, and minimal or no risk, the EU has been able to design a regulatory framework that provides clear guidelines for addressing each level.

The collective power of the EU, its technological resources, and its pool of AI developers, researchers, and innovators justify its regulatory approach. Many developing countries do not have such resources to respond boldly to the risks of AI systems; their capacity to understand AI systems, regulate them, and enforce compliance is limited. These limitations have compelled other governments, instead of legislating to regulate AI, to consider National AI Strategy Frameworks: comprehensive outlines of how AI use cases can be integrated and adopted in a transparent, ethical, and responsible manner.

 

EXISTING REGULATORY FRAMEWORKS AND AI RISK MANAGEMENT

Certainly, one form or another of AI is in use in many businesses in Ghana today. Equally, some government institutions may be relying on AI to optimize their service offerings, particularly in customer service and data analysis. These use cases are increasing across several industries despite the absence of AI regulation in Ghana. One reason we may not be immediately exposed to AI-related risks is that existing legislation already contains provisions capable of addressing such risks.

We have in place regulations covering intellectual property assets and rights, data protection and privacy, cybersecurity and fraud, electronic transactions, and organized crime, among others.

Although these existing regulations may predate the AI revolution, their general and specific applications can accommodate and fairly address some of the concerns with AI systems.

Specifically, there exist clear guidelines on asserting intellectual property rights over creations, inventions, and works, among others. The Copyright Act, the Patents Act, the Industrial Designs Act, the Trademarks Act, and the general acceptability of trade secret regimes will offer protection for the innovations underlying AI systems. The grey area will be ascertaining authorship and ownership of AI-generated outputs; in such circumstances, our courts may, on a case-by-case basis and depending on the degree of contribution by humans and AI systems, provide guidance and standards for such determinations.

For concerns related to privacy, data protection, and cybersecurity, there are provisions in the 1992 Constitution, the Data Protection Act, and the Cybersecurity Act, respectively, to deal with them. Additionally, the registration, certification, and compliance mandates that the Data Protection Act and the Cybersecurity Act impose on institutions and individuals engaged in data collection, use, storage, and sharing, and in the provision of cybersecurity services, reinforce the ability of the existing regulatory framework to deal with any immediate AI risks in these areas.

More such regulatory frameworks exist to provide civil and criminal remedies for some anticipated risks and their resulting breaches. For instance, through the cybercrime unit of the Ghana Police Service, the Economic and Organized Crime Office (EOCO), and the powers of specialized regulators such as the Bank of Ghana, the Securities and Exchange Commission (SEC), and the National Communications Authority (NCA), institutions and individuals who leverage AI unethically, misinform and manipulate citizens, or act for fraudulent purposes can be punished with civil or criminal sanctions.

Admittedly, existing regulatory frameworks do not fully provide for all AI-related risks. Risks such as bias and discrimination, lack of transparency, and ethical dilemmas arising from how AI systems are designed and developed are currently not regulated. These risks result primarily from the underlying datasets used in training the related AI systems and lie within the domain of the technology itself. To mitigate them, AI systems must be trained on accurate, verifiable, and quality local datasets to reduce the likelihood of bias, discrimination, and misinformation.

 

THE APPROACH FORWARD

At some point in the future, it will become imperative to regulate AI in Ghana as the EU has done. That moment will be informed by proven stakeholder understanding and appreciation of how AI systems work, the availability of local expertise to develop localized AI systems, strong evidence of collaboration between government and private institutions on AI adoption, and proven implementation and adaptation of regional and global AI commitments from institutions such as the UN, among others.

However, we must not adopt a “wait and see” attitude while anticipating such regulation. The immediate first step should be to develop a National AI Strategy Framework based on inputs from all stakeholders on the scope, considerations, and acceptable use cases of AI systems across all industries in Ghana.

Further, we must build the capacity of existing regulatory institutions to respond in a timely manner and to enforce compliance with the regulatory provisions that address some of the AI concerns discussed above. Investments must be channeled into people development, the procurement of leading technologies and devices, and the establishment of protocols to build institutional preparedness for dealing with AI risks. Regulatory institutions should also adopt collaborative work plans that permit knowledge sharing, resource leveraging, and proactive remedial responses.

Also, to improve AI adoption by businesses, pioneering adopters must be prepared to share their experiences, lessons, and pitfalls to help build best-case scenarios for AI adoption in Ghana. Such practical feedback will provide proven roadmaps for adopting relevant AI systems to improve business competitiveness and increase productivity.

 

CONCLUSION

The demonstrated potential of Artificial Intelligence (AI) to become an indispensable tech tool for improving work, optimizing operational competitiveness, and delivering accurate, tailored services, among others, is not without concerns. The related risks are informing calls for the immediate regulation of AI in many countries. But as discussed above, we must hasten slowly towards ultimate AI regulation and, in the meantime, adopt a National AI Strategy Framework while strengthening the institutional capacities of existing regulators to deal with some of the associated risks.

ABOUT THE AUTHOR

RICHARD NUNEKPEKU is a Technology Consultant and the Managing Partner of SUSTINERI ATTORNEYS PRUC (www.sustineriattorneys.com), a client-centric law firm specializing in transactions, corporate legal services, dispute resolution, and tax. Richard is a lawyer with an entrepreneurial mindset working at the intersection of law, technology, and business in Ghana. He welcomes views on this article and is reachable at richard@sustineriattorneys.com.