For the third time in my career as an academic librarian, we are facing a digital revolution that is radically and rapidly transforming our information ecosystem. The first came when web browsers made the internet broadly accessible. The second was the emergence of Web 2.0, with mobile devices and social media. The third—and current—revolution results from the increasing ubiquity of AI, especially generative AI.
Once again, I am hearing a combination of fear-based thinking alongside a rhetoric of inevitability and scoldings directed at critics whom AI proponents portray as “resistant to change.” I wish I were hearing more voices advocating for the benefits of specific uses of AI alongside clearheaded acknowledgment of the risks of AI in specific circumstances and an emphasis on risk mitigation. Academics should approach AI as a tool for specific interventions and then assess the ethics of those interventions.
Caution is warranted. The burden of building trust should be on the AI developers and corporations. While Web 2.0 delivered on its promise of a more interactive, collaborative experience on the web that centered user-generated content, the fulfillment of that promise was not without societal costs.
In retrospect, Web 2.0 arguably fails to meet the basic standard of beneficence. It is implicated in the global rise of authoritarianism, in the undermining of truth as a value, in promoting both polarization and extremism, in degrading the quality of our attention and thinking, in a growing and serious mental health crisis, and in the spread of an epidemic of loneliness. The information technology sector has earned our deep skepticism. We should do everything in our power to learn from the mistakes of our past and do what we can to prevent similar outcomes in the future.
We need to develop an ethical framework for assessing uses of new information technology—and specifically AI—that can guide individuals and institutions as they consider employing, promoting and licensing these tools for various functions. There are two main factors about AI that complicate ethical analysis. The first is that an interaction with AI frequently continues past the initial user-AI transaction: information from that transaction can become part of the system’s training set. The second is that there is often a significant lack of transparency about what the AI model is doing under the surface, making it difficult to assess. We should demand as much transparency as possible from tool providers.
Academia already has an agreed-upon set of ethical principles and processes for assessing potential interventions. The principles in “The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research” govern our approach to research with humans and can fruitfully be applied if we think of potential uses of AI as interventions. These principles not only benefit academia in making assessments about using AI but also provide a framework for technology developers thinking through their design requirements.
The Belmont Report articulates three primary ethical principles:
- Respect for persons
- Beneficence
- Justice
“Respect for persons,” as it’s been translated into U.S. code and practiced by institutional review boards (IRBs), has several facets, including autonomy, informed consent and privacy. Autonomy means that individuals should have the power to control their engagement and should not be coerced into engaging. Informed consent requires that people be given clear information so that they understand what they are consenting to. Privacy means a person should have control and choice about how their personal information is collected, stored, used and shared.
Following are some questions we might ask to assess whether a particular AI intervention honors autonomy.
- Is it obvious to users that they are interacting with AI? This becomes increasingly important as AI is integrated into other tools.
- Is it obvious when something was generated by AI?
- Can users control how their information is harvested by AI, or is the only option to not use the tool?
- Can users access essential services without engaging with AI? If not, that may be coercive.
- Can users control how information they produce is used by AI? This includes whether their content is used to train AI models.
- Is there a risk of overreliance, especially if there are design elements that encourage psychological dependency? From an educational perspective, is using an AI tool for a particular purpose likely to prevent users from learning foundational skills so that they become dependent on the model?
In relation to informed consent, is the information provided about what the model is doing both sufficient and in a form that a person who is neither a lawyer nor a technology developer can understand? It is imperative that users be given information about what data is going to be collected from which sources and what will happen to that data.
Privacy infringement happens either when someone’s personal data is revealed or used in an unintended way or when information thought to be private is correctly inferred. When there is sufficient data and computing power, re-identification of research subjects is a danger. Given that “de-identification of data” is one of the most common risk-mitigation strategies in human subjects research, and that there is an increasing emphasis on publishing data sets for the purposes of research reproducibility, this is an area of ethical concern that demands attention. Privacy emphasizes that individuals should have control over their private information, but how that private information is used should also be assessed in relation to the second major principle—beneficence.
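To make the re-identification danger concrete, consider a minimal sketch of a classic linkage attack, written in Python with invented data and hypothetical column names: a “de-identified” research data set, stripped of names but retaining quasi-identifiers such as ZIP code, birth year and sex, is joined to a public record that contains those same fields alongside names.

```python
# Minimal sketch of a linkage attack on a "de-identified" data set.
# All records and column names are invented for illustration only.
import pandas as pd

# A research data set with names removed but quasi-identifiers retained.
study = pd.DataFrame({
    "zip_code":   ["02139", "02139", "60637"],
    "birth_year": [1985, 1990, 1985],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["condition_a", "condition_b", "condition_c"],
})

# A public record (e.g., a voter roll) that pairs names with the same fields.
public = pd.DataFrame({
    "name":       ["Alice Smith", "Carol Doe"],
    "zip_code":   ["02139", "60637"],
    "birth_year": [1985, 1985],
    "sex":        ["F", "F"],
})

# Joining on the shared quasi-identifiers restores names to "anonymous" rows.
reidentified = study.merge(public, on=["zip_code", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Even this toy example shows why simply dropping names is not, on its own, a sufficient mitigation once quasi-identifiers remain and auxiliary data sets are plentiful.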
Beneficence is the general principle that says that the benefits should outweigh the risks of harm and that risks should be mitigated as much as possible. Beneficence should be assessed on multiple levels—both the individual and the systemic. The principle of beneficence demands that we pay particularly careful attention to those who are vulnerable because they lack full autonomy, such as minors.
Even when making personal decisions, we need to think about potential systemic harms. For example, some vendors offer tools that allow researchers to share their personal information in order to generate highly personalized search results—increasing research efficiency. As the tool builds a picture of the researcher, it will presumably continue to refine results with the goal of not showing things that it does not believe are useful to the researcher. This may benefit the individual researcher. However, on a systemic level, if such practices become ubiquitous, will the boundaries between various discourses harden? Will researchers doing similar scholarship get shown an increasingly narrow view of the world, focused on research and outlooks that are similar to each other, while researchers in a different discourse are shown a separate view of the world? If so, would this disempower interdisciplinary or radically novel research or exacerbate disciplinary confirmation bias? Can such risks be mitigated? We need to develop a habit of thinking about potential impacts beyond the individual in order to create mitigations.
There are many potential benefits to certain uses of AI. There are real possibilities it can rapidly advance medicine and science—see, for example, the stunning successes of AlphaFold, the protein structure prediction system, and its associated public database of predicted structures. There are corresponding potentialities for swift advances in technology that can serve the common good, including in our fight against the climate crisis. The potential benefits are transformative, and a good ethical framework should encourage them. The principle of beneficence does not demand that there be no risks, but that we identify uses where the benefits are significant and that we mitigate the risks, both individual and systemic. Risks can be minimized by improving the tools, such as work to prevent them from hallucinating, propagating toxic or misleading content, or delivering inappropriate advice.
Questions of beneficence also require attention to environmental impacts of generative AI models. Because the models require vast amounts of computing power and, therefore, electricity, using them taxes our collective infrastructure and contributes to pollution. When analyzing a particular use through the ethical lens of beneficence, we should ask whether the proposed use provides enough likely benefit to justify the environmental harm. Use of AI for trivial purposes arguably fails the test for beneficence.
The principle of justice demands that the people and populations who bear the risks should also receive the benefits. With AI, there are significant equity concerns. For example, generative AI may be trained on data that includes our biases, both current and historic. Models must be rigorously tested to see if they create prejudicial or misleading content. Similarly, AI tools should be closely interrogated to ensure that they do not work better for some groups than for others. Inequities impact the calculations of beneficence and, depending on the stakes of the use case, could make the use unethical.
Another consideration in relation to the principle of justice and AI is the issue of fair compensation and attribution. It is important that AI does not undermine creative economies. Additionally, scholars are important content producers, and citations are the academic coin of the realm. Content creators have a right to expect that their work will be used with integrity and cited, and that they will be remunerated appropriately. As part of autonomy, content creators should also be able to control whether their material is used in a training set, and this should, at least going forward, be part of author negotiations. Similarly, the use of AI tools in research should be cited in the scholarly product; we need to develop standards about what is appropriate to include in methodology sections and citations, and possibly about when an AI model should be granted co-authorial status.
The principles outlined above from the Belmont Report are, I believe, sufficiently flexible to allow for further and rapid developments in the field. Academia has a long history of using them as guidance to make ethical assessments. They give us a shared foundation from which we can ethically promote the use of AI to be of benefit to the world while simultaneously avoiding the types of harms that can poison the promise.