Equity, Privacy and Other Concerns With Generative AI
Teaching and learning strategies, including those that involve AI, have both benefits and potential problems. Below are high-level overviews of some of the concerns surrounding generative AI tools. Each item is a deep topic in its own right, and a full treatment is beyond the scope of this article; fortunately, many experts have published resources, online and elsewhere, that you can explore further.
Accuracy
Generative AI tools learn from patterns in their training data, and their outputs are generally predictions of the most likely continuation of your input. Because these tools have no real understanding of the material they generate and no sense of true or false, their outputs may be inaccurate. Worse, inaccurate outputs can still sound plausible and convincing. Users of generative AI tools therefore need a baseline understanding of the topics they bring to these tools so they can critically evaluate the outputs.
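To make the "most likely pattern" idea concrete, here is a minimal, purely illustrative sketch of a frequency-based next-word predictor. This is not how any real generative AI tool is built, and the tiny corpus is invented for the example; the point is only that the prediction follows frequency, not truth.

```python
# Toy illustration (not any real AI tool): a frequency-based "next word"
# predictor that returns the continuation it has seen most often,
# with no notion of whether that continuation is true.
from collections import Counter, defaultdict

# Invented training snippets; the factually wrong sentence appears most
# often, so it dominates the statistics.
corpus = [
    "the capital of australia is canberra",
    "the capital of australia is canberra",
    "the capital of australia is sydney",
    "the capital of australia is sydney",
    "the capital of australia is sydney",
]

# Count which word follows each word in the training text.
continuations = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        continuations[prev][nxt] += 1

# The model confidently predicts the most frequent pattern -- here the
# inaccurate one -- because frequency, not truth, drives the prediction.
print(continuations["is"].most_common(1))  # [('sydney', 3)]
```

Real tools operate on vastly larger data and far more sophisticated models, but the core behavior (predicting what is statistically likely rather than what is verified to be true) is the concern described above.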
Bias
Because of how generative AI gathers information and learns, there are several ways it can reinforce or amplify bias. At a basic level, generative AI is a prediction model built on a large set of data. It leverages frequently occurring patterns in that data, and some patterns occur less often simply because there is less data about them (for example, certain populations may be underrepresented or missing entirely). This can lead to gaps in the results or to incorrect assumptions. If the tool draws on sources that reflect biased assumptions or lack diversity, that bias will be reflected in the output.
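As a rough illustration of how underrepresentation in training data can surface as bias, here is another toy sketch; the counts and example words are invented, and real systems are far more complex. A "most likely pattern" prediction simply echoes whatever the data over-represents.

```python
# Toy illustration (not any real AI tool): when one group appears far more
# often in the training data than another, a "pick the most likely pattern"
# model reproduces only the majority pattern.
from collections import Counter

# Invented training data: 95 examples pair "nurse" with "she",
# only 5 pair it with "he".
training_examples = ["she"] * 95 + ["he"] * 5

counts = Counter(training_examples)

# A most-likely-pattern prediction always returns the over-represented
# answer, so the under-represented group disappears from the output.
print(counts.most_common(1))  # [('she', 95)]
```

In a real model the effect is statistical rather than absolute, but the tendency is the same: less data about a group means that group is less likely to appear, or more likely to be described incorrectly, in the output.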
Privacy
There are also potential privacy concerns involving personal data, intellectual property, and copyrighted material. Generative AI is trained on data, and for some tools the prompts you enter are used by the tool's developer to train the model further. Content you put into the tool may become part of the tool: once retrained, the AI could later present that information, or something very similar, to another user without attribution to the original owner.
Sustainability
Generative AI models require substantial computing power both to train and to run. The energy consumption, water use, and greenhouse gas emissions involved in developing and using these tools are central topics of conversation in this area.
Students with Disabilities
Some articles in the higher education press have suggested that assigning in-class, handwritten, or oral work is the most effective way to bolster academic integrity in the context of generative AI. However, relying exclusively or excessively on these low-tech, time-limited approaches may prevent non-native English speakers, deaf and hard-of-hearing learners, and students whose accommodations from RIT's Disability Services Office include laptop access during class or other supports from fully demonstrating their learning.
Last Updated: 4/15/2024