Ethics of AI: Principles, Rules, and the Way Forward

By: Husanjot Chahal - Observer Research Foundation
Syndicated By: GEO´ PRWire

In recent years, different research institutions, government bodies, and private entities across countries have issued principles and guidelines for the ethical use of Artificial Intelligence (AI). There is little consensus, however, over universal ethical principles and how to implement them. What are the similarities and differences in AI ethics discussions across geographies, and what are the existing gaps? Crucially, if the larger goal is the ethical development and deployment of AI, are efforts towards codifying and devising high-level ethical AI principles even a fruitful exercise?

The Current Landscape of AI Ethics

Artificial Intelligence (AI) is being deployed in ways that touch people’s lives, including in areas of healthcare, financial transactions, and delivery of justice. Advances in AI can have profound impacts across varied societal domains, and in recent years, this realisation has sparked ample debate about the values that should guide its development and use.

States and international organisations have reacted to these societal concerns in various ways. Some have formed ad-hoc committees tasked with deliberating and providing recommendations on the subject. Examples include the United States National Artificial Intelligence Advisory Committee (NAIAC), which advises the President and various federal officials; the expert group on AI at the Organisation for Economic Co-operation and Development (OECD); the High-Level Expert Group on AI formed by the European Commission; and the Select Committee on AI appointed by the UK Parliament’s House of Lords. These bodies have either drafted or are currently drafting policy documents on the ethical, economic, and social implications of advances in AI.

Similar efforts are underway in the private sector. Companies at the forefront of AI development, such as Google, IBM, Intel, Microsoft, and Sony, have released guidelines for developing ethical AI. Some analysts have argued that these private entities seek to shape the AI ethics domain in ways that either eschew regulation or serve their own business priorities. Meanwhile, non-profit organisations and professional associations, such as the Institute of Electrical and Electronics Engineers (IEEE), the Internet Society, OpenAI, and the World Economic Forum, have also issued declarations and recommendations on AI principles and policies.

The multitude of efforts across such diverse stakeholders reflects the need for guidance in AI development. Not only are the organisations that have produced ethical guidelines on AI diverse; the content of such documents is equally wide-ranging. Several empirical studies of AI ethical principles have attempted to examine the various topics under discussion across sectors and countries, and to propose how such principles can be implemented in practice. A review of the findings across these studies can offer insights into the scope and potential for a global agreement on the subject of AI ethics, as well as the disagreements therein.

Points of Convergence

Research shows that most of the available ethical guidelines adopted by states, international organisations, and private companies include a discussion of the following five ethical principles: transparency, justice and fairness, responsibility and accountability, privacy, and non-maleficence. These themes were referenced in at least half of the documents analysed across different studies and could indicate some convergence in global thinking on ethical AI.

Transparency. The principle of transparency, or the need for transparent processes in the development and design of AI algorithms, reflects a commitment to increase interpretability, explainability, or other acts of disclosure. It is one of the most prevalent principles in the current literature on AI.

Justice and fairness. This principle is expressed mainly in terms of fairness and the mitigation of unwanted bias, as a caution to the global community that AI may increase inequality and reinforce societal biases if these are not addressed adequately. (A brief illustrative sketch of one such bias measure appears at the end of this section.)

Responsibility and accountability. There are widespread references to “responsible AI,” although the concept of ‘responsibility’ is rarely defined. Recommendations centred on responsibility include clarifying legal liability, focusing on underlying processes that may cause potential harm, or whistleblowing in case of potential harm. Responsibility seems to be intertwined with the principles of transparency and justice such that promoting both these themes can increase responsibility and accountability by entities that develop and deploy AI.

Privacy. While often undefined, privacy is viewed both as a value to uphold and as a right to be protected in ethical AI, and is commonly presented in relation to data protection and data security.

Non-maleficence. Mentions of non-maleficence (encompassing calls for safety and security) exceed those of beneficence, indicating that the moral obligation to prevent harm takes precedence over the promotion of good. This could be due to a negativity bias in the characterisation of ethical values, which concentrates more on negative issues and events than on positive ones. For instance, existing guidelines do not generally discuss how ethical principles could be promoted through responsible innovation in AI.
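To make the earlier point about fairness and the mitigation of unwanted bias concrete, here is a minimal, purely illustrative Python sketch of one common fairness measure, the demographic parity difference: the gap in positive-outcome rates between two groups. The data, group labels, and loan-approval framing are invented for illustration and are not drawn from any of the guidelines discussed here.

```python
# Illustrative sketch: one simple way "unwanted bias" can be quantified.
# All data below is hypothetical.

def demographic_parity_difference(outcomes, groups, group_a, group_b):
    """Gap in positive-outcome rates between two groups (0 means parity)."""
    def positive_rate(g):
        member_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(member_outcomes) / len(member_outcomes)
    return positive_rate(group_a) - positive_rate(group_b)

# Hypothetical loan-approval decisions (1 = approved, 0 = denied).
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups, "A", "B")
print(f"Demographic parity difference: {gap:+.2f}")  # prints +0.20 here
```

A gap near zero indicates parity on this one metric; real audits weigh several such metrics, which can conflict with one another, a point that resurfaces in the divergences discussed below.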

Points of Divergence

There are substantive divergences across various ethical AI guidelines as analysed by scholars. Most of them relate to the following three main factors:

a. Interpretation

There are significant differences in how the same principles are interpreted across various guideline documents, and in which requirements are considered important for their realisation. For instance, the need for more datasets to “unbias” AI (ensuring that AI models are trained on representative data so as to avoid flawed or biased conclusions and recommendations) appears to conflict with the need to give individuals greater control over their data and ensure privacy. Some guidelines emphasise the need to balance risks and benefits in AI development, while others speak of avoiding harm at all costs.

b. Attribution

There are also divergences in attribution, that is, in interpreting which domain, actor, or issue these ethical principles pertain to. For instance, does the European guideline on privacy (encompassing the protection of individuals’ data from both state and commercial entities) also apply to China, where privacy guidelines target only private companies and citizens are accustomed to living in a protected society with high trust in their government? Different perspectives, interpretations, and priorities in ethical AI are of course to be expected, given that these documents are developed by a broad range of countries, international organisations, and companies. That said, such divergences could undermine attempts to develop a global ethical AI agenda, because varied perspectives (for example, in risk-benefit evaluations) will lead to different results depending on whose well-being they are developed for and on the actors involved in developing them.

c. Implementation

Finally, there are differing opinions on how ethical AI principles should be implemented—through government organisations, inter-governmental organisations, industry leaders, individual users or developers, or by harmonising AI agendas across the board. If harmonisation is a goal, then how does one account for moral pluralism and cultural diversity across countries, considering that AI is a general-purpose technology operating in varied contexts and cultures?

Persistent Gaps 

Discussions on the ethical development and use of AI are ongoing, and as such, there are gaps that remain unaddressed. For example, themes of sustainability and solidarity are sparsely referenced across documents. Sustainability appears more commonly in public sector documents than in those drafted by private or non-governmental organisations (NGOs). AI deployment today requires massive computational resources, and thus high energy consumption, and this need will only expand with time. This makes the broader underrepresentation of sustainability-related principles particularly concerning, and calls into question the possibility of harnessing the benefits of AI for the entire biosphere.

Solidarity, a concept mostly referenced in relation to the consequences of AI for the labour market, is also absent from most discussions. Very few guidelines pay attention to promoting solidarity by exploring how AI expertise could be used to distribute AI-driven gains in prosperity more widely and to solve socio-economic challenges such as job losses, inequality, and the unfair sharing of burdens. Sharing prosperity could mean, for example, compensating humans whose actions provide data for training AI models.

Integrity, meaning being explicit about best practices and disclosing errors, is another theme missing across guideline documents. Current documents place crucial focus on propagating the values of accountability and responsibility, but hardly any emphasise the duty of all stakeholders to develop and deploy AI with integrity. Similarly, the lack of diversity within the AI community is mostly absent from discussion, which is problematic because such a dearth of diverse thought could result in flawed AI systems that perpetuate gender and racial biases.

Several initiatives, particularly those offered by industry, are often criticised as mere virtue-signalling designed to keep the debate on abstract problems and delay regulation. Relatedly, many guidelines, especially those produced by the private sector, indicate that technical solutions exist for several of the identified issues, such as privacy and non-maleficence. Yet very few guidelines offer, or even acknowledge, technical explanations at all, and those that do are sparse. While one cannot expect guidelines to be exhaustive about every problem AI could cause, issues pertaining to the political abuse of AI systems (generating election fraud, fake news, and propaganda), widely acknowledged as critical problems of today, are also largely overlooked.

Furthermore, shifting the focus from principle-development to implementation is an important next step. However, existing discussions lack clarity on which ethical principles should be emphasised, how conflicts in interpretation and between the principles themselves can be resolved, and who should enforce oversight and ensure that researchers and institutions comply with the ensuing guidelines.

Factors Behind the Convergence, Divergence, and Gaps

The field of AI ethics is expanding. Convergence across the five ethical principles is understandable, as it could be a testimony to the significance of those principles; divergences likely reflect the diversity of viewpoints; and gaps could result because much of the work in this domain is still in progress. That said, it is crucial to consider other factors possibly influencing these results.

A significant question pertains to the equality of participation in the ongoing global discussion on AI ethics. Some scholars have indicated that the current AI ethics discourse is mostly dominated by countries in the Global North. Indeed, of the 506 AI-related documents listed in the Council of Europe’s data visualisation of AI initiatives (as of October 2022), only 10 percent come from countries outside Europe and North America. Moreover, research indicates that there is a dearth of reference to key terms associated with gender within AI ethics documents, and the ratio of female-to-male authors across these documents is a low 31 percent. Therefore, like other parts of AI research, the discourse on AI ethics is primarily shaped by men. The absence of an inclusive AI ethics landscape means that mainstream discussions reinforce certain viewpoints while possibly neglecting other risks and ethical considerations of importance to women and to countries beyond Europe and North America.

Consensus or dissensus among AI ethics documents could also result from the provenance of the literature. Different types of organisations (public, private, and NGO) have differing priorities, audiences, motivations, and scopes of responsibility. The public sector is known to emphasise questions related to unemployment and economic growth, while the private sector focuses more on ethical issues with technical fixes (such as transparency and algorithmic bias); for their part, NGOs address a broader range of topics, such as accountability and misinformation. In comparison to the private sector, NGOs and public sector entities are reportedly more similar to each other in their approach to AI ethics: they have more participatory processes in the creation of guidelines, greater engagement with issues of regulation and law, and more depth and ethical breadth. Consequently, depending on the corpus of documents and the types of organisations at hand, an assessment of AI ethics could indicate meaningful variations or similarities in the choice of topics.

The Way Forward

In AI ethics, what constitutes “AI for good” is being negotiated through dialogues among the people and organisations affected by AI development, as well as through intergovernmental initiatives. If calls for greater technology access and multi-stakeholder participation are heeded, the field is likely to become even more diverse. Narrower versions of the existing themes are likely to emerge for particular geographies and stakeholder groups. This strengthens the case for putting more effort into clarifying the variations that exist within themes, and for undertaking measures to resolve differences in interpretation or attribution where possible. If the goal is a better-articulated ethical AI landscape, the current discourse should be enriched through the evaluation of critical but underrepresented principles, such as sustainability and solidarity, which underline the social and ecological costs of AI.

Beyond a Principled Approach to AI Ethics

While ‘principlism’ has been the underlying framework for influencing the development of safe and beneficial AI, many have questioned its effectiveness. Some critics have pointed out that the field of AI ethics has produced largely vague and high-level principles and value statements. A 2018 study by McNamara et al. tested the assumption that ethical guidelines serve as a basis for ethical decisions made by developers. The study found the effectiveness of such guidelines to be almost negligible: they did not change the behaviour of students or technology professionals.

Relatedly, scholars have indicated that there are other reasons to be concerned about the future impact of AI ethical guidelines. Certain characteristics of AI development indicate that any principled efforts at ethics might not have significant impact on AI’s governance and design.

First, the fundamental aims of AI developers, users, and affected parties do not align, and the field does not yet have a unified regulatory framework that establishes clear fiduciary duties towards data subjects and users. This means that users cannot trust that developers will act in their best interests when implementing ethical principles in practice. Reputational risks may compel companies, and personal moral conviction may press individual AI developers, towards good behaviour. However, actions that place the public interest above the company’s and that run against company incentive structures remain unlikely.

Second, the situation is further complicated because AI development lacks a homogeneous professional culture and history, as well as shared moral obligations and professional standards of what it means to be a “good” AI developer. AI ethics initiatives try to address this gap by offering broadly acceptable guidelines for AI development across radically different contexts of use. But this results in principles and values that are abstract, based on vague concepts, not specific enough to guide action, and left to developers to interpret as they see fit.

Third, outside of academic contexts, a principled approach to AI ethics has no proven methods for transforming principles into practice. The field of medicine, for instance, has numerous professional societies, accreditation and licensing boards, ethics review bodies, codes of conduct, peer self-governance, and other mechanisms, reinforced by strong institutions, that ensure ethical conduct on a daily basis. AI development lacks comparable structures for translating guidelines into practice and ensuring that this technology, developed behind closed doors, is value-conscious.

Finally, a key weakness for AI is the relative lack of professional and legal accountability mechanisms to redress misbehaviour and ensure that standards are upheld. Research indicates that mere codes of ethics are not sufficient; they are often treated as “checklists” pursued in letter rather than in spirit. Broad guidelines and self-regulatory efforts alone cannot prevent failures or misuse in AI development, and existing norms and requirements will not be able to set matters right. Complicating matters further, setting up strong accountability mechanisms in AI appears unlikely, given that AI is not a unified profession operating in a single sector with a long history of harmonised aims. All of this calls into question the usefulness of high-level principles as a tool to effect change.

Conclusion

The plethora of national, international, and commercial AI guidelines issued in recent years has paved the way for some progress on a principles-led approach to AI. However, one should not celebrate a limited consensus on high-level guidelines that conceals deep normative and political disagreements. Instead, it is time to move forward by defining clear long-term pathways, setting explicit professional standards tailored to specific applications, and building accountability structures that are not only country-specific but also sector- and organisation-specific. Mechanisms should also be set up to license developers of applications with elevated risks, such as facial recognition tools or other systems trained on biometric data.

It will also be interesting to see future principles-based discussions geared toward particular applications of AI, such as autonomous vehicles, credit-scoring services, recruitment software, or other high-risk systems. Where ethically motivated efforts have been undertaken to improve AI systems, most have been in specific fields where technical solutions exist for particular problems. For example, many privacy-preserving techniques, such as homomorphic encryption, federated learning, and methods based on differential or stochastic privacy, have been developed for the use of data and learning algorithms. A deeper assessment of these context-specific cases to inform guidelines on AI principles could be a way forward.
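As a concrete illustration of one such technique, below is a minimal Python sketch of the Laplace mechanism, a standard building block of differential privacy. The counting query, the salary data, and the epsilon value are invented for illustration and are not drawn from the brief.

```python
import math
import random

def dp_count(values, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one record changes it
    by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Inverse-transform sample from the Laplace(0, 1/epsilon) distribution.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical salary records; privately release how many exceed 50,000.
salaries = [42_000, 58_000, 61_000, 39_000, 75_000]
print(dp_count(salaries, lambda s: s > 50_000, epsilon=0.5))
```

A smaller epsilon adds more noise, trading accuracy for stronger privacy; federated learning complements such mechanisms by keeping raw data on users’ devices in the first place.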

Admittedly, principles are difficult to translate into practice. However, they still play a crucial role in building awareness and acting as catalysts for a culture of responsibility and beneficence among AI developers. Internalised norms and values influence extrinsic measures, and how individual developers conceptualise, communicate, and enforce those measures will be crucial to their implementation. Principles alone cannot govern AI, but neither can rules and requirements. An effective AI governance strategy will require both: principles encouraging cultural change in the AI community, and explicit rules and regulations buttressing them.
(This brief was first published in Digital Debates 2022, ORF’s annual journal on technology and society.)


About Husanjot Chahal

Husanjot Chahal is a Research Analyst at Georgetown University’s Center for Security and Emerging Technology (CSET), where she focuses on producing data-driven research examining the security implications of emerging technologies such as artificial intelligence (AI). Prior to CSET, she worked in the World Bank’s Corporate Security division. Her research on AI has been featured by the European Parliament, the U.S. House of Representatives, and NATO, and in several leading publications, including The Diplomat, National Defense, Politico, Scientific American, and Fortune.

About ORF [Observer Research Foundation]

ORF began its journey in 1990 at the juncture of ideation tempered by pragmatism. During the period of India’s transition to a new engagement with the international economic order, several challenges emerged, evoking the need for an independent forum that could critically examine the problems facing the country and help develop coherent policy responses. ORF was thus formed, and it brought together, for the first time, leading Indian economists and policymakers to present an agenda for India’s economic reforms.

About GEO´ PRWire Channel

Our PR Wire Channel Management Team provides direct, immediate, highly cost-effective access to our entire Geopolitical contacts network, including our proprietary userbase of 132k individually named & profiled C-Suite influencers and policymakers across the Banking & Finance, Insurance, Manufacturing, Technology, Aviation and Maritime industries, as well as NGOs and Government Departments worldwide. A recent userbase survey revealed that they have a collective annual spending power in excess of €370 million.

We deliver just over 148k Geopolitical news emails every day across our entire userbase, as well as our media base of 28k subscribing Geo-centric Editors, Journalists, Influencers & Bloggers. We also post extensively via our Web 2.0 Network, affording us an additional reach of 36k subscribers across the leading article, knowledge, and bookmarking sites globally.

Our PRWire is available to bona fide corporations with a corporate message to share. Release syndications are available for single use; for organisations that produce a large volume of news and information to disseminate, we offer Quarterly, bi-Annual & Annual Campaigns with unlimited use of our Network, delivering outstanding ROI.

Attribution: Husanjot Chahal, “Ethics of AI: Principles, Rules, and the Way Forward,” ORF Issue Brief No. 589, November 2022, Observer Research Foundation.
