ChatGPT in Learning and Teaching

Educator considerations for ChatGPT
This page provides a brief overview for educators seeking to learn more about the capabilities, limitations, and considerations for using ChatGPT for teaching and learning. While this page focuses on ChatGPT and the OpenAI AI text classifier, many of these considerations are also relevant to the use of language models for teaching and learning more broadly.

This page is not intended to be a comprehensive set of best practices, but rather a starting point for discussion among education professionals and language model providers for the use and impact of AI on education.

What is ChatGPT?
ChatGPT is an AI system created by OpenAI that is trained to interact in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, challenge incorrect premises, and reject inappropriate requests, though it is not infallible. ChatGPT is built on top of a large language model (an AI system trained to generate text) called GPT-3.5, which has been available since March 2022.

Who is ChatGPT available to?
Today, ChatGPT is available for free online in supported countries, subject to our Terms of Use. At the time of writing, we are also in the early stages of testing a paid version of ChatGPT.

While distinct from our ChatGPT system, OpenAI's models can also be accessed through a variety of applications built by independent developers using OpenAI's application programming interface, or "API," which enables developers to build applications powered by the language models we develop. These applications are built for a range of users and use contexts, are subject to OpenAI's Terms of Use and Usage Policies, and may include different features, designs, and mitigations from those of ChatGPT.
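Concretely, an application built on the API sends structured requests to a model on the developer's behalf. A minimal sketch of what such a request might look like (the endpoint is omitted, and the model name and field layout here are illustrative assumptions, not an exact API reference):

```python
import json

def build_chat_request(user_message, model="gpt-3.5-turbo"):
    """Assemble an illustrative JSON payload asking a model to act as a tutor.

    The field names below mirror a typical chat-style API request; a real
    application would send this payload to the provider's endpoint with
    its own API key.
    """
    return {
        "model": model,
        "messages": [
            # A "system" message lets the developer shape the app's behavior.
            {"role": "system", "content": "You are a patient math tutor."},
            # The "user" message carries the student's actual question.
            {"role": "user", "content": user_message},
        ],
    }

payload = json.dumps(build_chat_request("Explain why 1/3 equals 0.333..."))
```

This is part of why API-based applications can differ from ChatGPT: each developer chooses its own system instructions, features, and mitigations around the same underlying models.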

Examples of education-related risks and opportunities
We are still in the early days of understanding how this technology will be used and what kinds of applications people might explore using it. While we are excited about many applications of generative AI within educational contexts, we think it’s important that, like any technology, it be introduced into the classroom under the supervision of educators. We also understand that many educators have questions about what the technology is capable of and what its limitations are.

We present a broad overview below. While this list is not comprehensive, we hope it sparks further discussion and input on what to consider when employing this technology. We invite feedback on these considerations in the attached input form.

Streamlined and personalized teaching
Some examples of ways we have seen educators explore teaching and learning with tools like ChatGPT:

Drafting and brainstorming lesson plans and other activities
Helping design quiz questions or other exercises
Experimenting with custom tutoring tools
Customizing materials for different preferences (simplifying language, adjusting to different reading levels, creating tailored activities for different interests)
Providing grammatical or structural feedback on portions of writing
Supporting upskilling activities in areas like writing and coding (debugging code, revising writing, asking for explanations)
Critiquing AI-generated text
While several of the above draw on ChatGPT’s potential to be explored as a tool for personalization, there are risks associated with such personalization as well, including student privacy, biased treatment, and development of unhealthy habits. Before students use tools that offer these services without direct supervision, they and their educators should understand the limitations of the tools outlined below.

Similarly, while teachers have reported success in getting the model to help create assignments or to provide comments on student essays, ChatGPT should not be trusted as an assessment tool in and of itself. Rather, teachers should carefully review both the inputs and outputs, and should disclose where they have used or relied on an AI system, as outlined in our Usage policies.

Academic dishonesty and plagiarism detection
We recognize that many school districts and higher education institutions do not currently account for generative AI in their policies on academic dishonesty. We also understand that many students have used these tools for assignments without disclosing their use of AI. Each institution will address these gaps in a way and on a timeline that makes sense for its educators and students. We do, however, caution against taking punitive measures against students for using these technologies if proper expectations have not been set ahead of time for what uses are and are not allowed.

Classifiers such as the OpenAI AI text classifier can help detect AI-generated content, but they are far from foolproof. These tools produce both false negatives, where they fail to identify AI-generated content as such, and false positives, where they flag human-written content as AI-generated. Students may also quickly learn to evade detection by modifying a few words or clauses in generated content. Finally, the OpenAI AI text classifier is narrow in scope: it is not a tool for detecting other forms of misconduct, such as plagiarism of text copied from the internet or other sources.
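A small arithmetic sketch shows why detector output alone is weak evidence. The rates below are hypothetical, chosen only for illustration; they are not measured figures for any particular classifier:

```python
def detector_outcomes(n_human, n_ai, false_positive_rate, true_positive_rate):
    """Return (honest students wrongly flagged, AI uses correctly caught)."""
    false_positives = round(n_human * false_positive_rate)
    true_positives = round(n_ai * true_positive_rate)
    return false_positives, true_positives

# Hypothetical class of 100 essays: 95 human-written, 5 AI-generated.
# Assume the detector wrongly flags 9% of human text and catches 26% of AI text.
wrong, caught = detector_outcomes(95, 5, 0.09, 0.26)
# wrong == 9 honest students flagged; caught == 1 of the 5 AI uses detected
```

Because human-written work usually far outnumbers AI-generated work, even a modest false-positive rate can flag more honest students than it catches actual misuse, which is why a detector result should never be treated as proof on its own.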

For these reasons, classifiers or detectors should be treated as only one factor among many in an investigation into a piece of content's source, and in any holistic assessment of academic dishonesty or plagiarism. Setting clear expectations for students up front is crucial, so they understand what is and is not allowed on a given assignment and know the potential consequences of using model-generated content in their work.

AI ethics and literacy
Ultimately, we believe it will be necessary for students to learn how to navigate a world where tools like ChatGPT are commonplace. This includes potentially learning new kinds of skills, like how to effectively use a language model, as well as about the general limitations and failure modes that these models exhibit.

Some of this is STEM education, but much of it also draws on students’ understanding of ethics, media literacy, ability to verify information from different sources, and other skills from the arts, social sciences, and humanities.

Truthfulness
While tools like ChatGPT can often generate answers that sound reasonable, they cannot be relied upon to be accurate consistently or across every domain. Sometimes the model will offer an argument that doesn't make sense or is wrong. Other times it may fabricate source names, direct quotations, citations, and other details. Additionally, on some topics the model may distort the truth, for example by asserting there is one answer when there isn't, or by misrepresenting the relative strength of two opposing arguments. For these reasons, it's crucial that students know how to evaluate the trustworthiness of information using external, reliable sources.

One example of why ChatGPT may not always provide accurate answers is that its training data cuts off in 2021. This means that it is unaware of current events, trends, or anything that happened after that point in time. It will not be able to respond appropriately to questions or topics that require up-to-date knowledge or information. For example, it may not know who the current president of the United States is or what day it is.

ChatGPT has no external capabilities and cannot look things up in external sources. This means that it cannot access the internet, search engines, databases, or any other sources of information outside of the current chat. It cannot verify facts, provide references, or perform calculations or translations. It can only generate responses based on the context it has (user-provided information and its training data). Web browsing capabilities and improved factual accuracy are open research areas; you can learn more in our blog post on WebGPT.

ChatGPT may not perform well on complex problems in science or the humanities and has limited mathematical and computational abilities. Moreover, ChatGPT may not perform well on subjects that do not appear frequently in public, online discourse or in its training data. While the model may appear to give confident and reasonable-sounding answers, these limitations are important to keep in mind.

Harmful content, biases and stereotypes
ChatGPT may produce content that perpetuates harmful biases and stereotypes, sometimes in subtle ways. This includes generating biased or stereotypical portrayals of groups of people, which can be harmful, particularly in a context where those biases are being taught, learned or otherwise reinforced. The model is generally skewed towards content that reflects Western perspectives and people. One example of this is that the models perform best in English, and some measures that we have taken to prevent harmful content have only been evaluated in English. The dialogue nature of the model also has the potential to introduce or reinforce user biases and preferences over the course of interaction with the model.

While we have taken measures to limit generations of undesirable content, ChatGPT may produce output that is not appropriate for all audiences, and educators should be mindful of this when using it with children or in classroom contexts.

Assessment
It is inadvisable and against our Usage Policies to rely on models for assessment purposes. Models today are subject to biases and inaccuracies, and they are unable to capture the full complexity of a student or an educational context. Consequently, using these models to make decisions about a student is not appropriate.

Overreliance
Overreliance on AI can take a variety of forms. A common example is that of a user accepting an AI recommendation without understanding or verifying whether the recommendation is correct. Verifying AI recommendations often requires a high degree of expertise, demonstrating precisely why using AI is not a substitute for student learning. Learning where and how it is appropriate to use an AI system is a key step educators can take to mitigate the potential for harm.

Equity and access
The increasing ubiquity of technologies like ChatGPT makes it important for students to have equal levels of access to these tools and to learn how to use them effectively. While ChatGPT has the potential to exacerbate existing inequities, particularly those related to the digital divide, it also offers opportunities to address some of them. For example, for students who struggle with writing or for whom English is a second language, ChatGPT can help reduce misspellings and improve communication.

However, issues with disparate performance of the model (including bias, performance in other languages, and inclusion of non-Western perspectives) may also negatively impact the opportunities for equitable outcomes for students. In addition, costs and geographic access restrictions associated with ChatGPT may impact accessibility for students and educators.

Job opportunities and outlooks
AI will likely have a significant impact on the world, affecting many aspects of students’ lives and futures. For example, the types of job opportunities students look toward may change, and students may need to develop more skepticism of information sources, given the potential for AI to assist in the spread of inaccurate content. If not managed well, these changes could present new challenges for students as they face an uncertain future. Educators will need to help students grapple with these questions.

We consider the economic impacts of language models specifically, and AI generally, to be highly uncertain. To date we have seen instances of productivity improvements that transform jobs, of job displacement, and of job creation, but both the near- and long-term net effects are unclear. While efforts to more accurately forecast such impacts and to shape them constructively through policy are important, and we plan to publish more on these topics, for now we suggest humility regarding our ability to anticipate future labor demand. Fortunately, many of the aims of education (e.g. fostering critical thinking) are not tied to preparation for specific jobs, and we encourage greater investment in studying the non-economic effects of different educational interventions.

Disclosing the use of ChatGPT
We are working on functionality that will allow students to export their ChatGPT use and share it with educators. Currently students can do this with third-party browser extensions.

Educators should also disclose the use of ChatGPT in generating learning materials, and ask students to do so when they incorporate the use of ChatGPT in assignments or activities.

Students can cite their ChatGPT use in Bibtex format as shown below:
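The original BibTeX example did not survive in this copy; one illustrative @misc entry (the year, URL, and access date are placeholders for students to adapt to their own use):

```bibtex
@misc{chatgpt,
  author       = {OpenAI},
  title        = {ChatGPT},
  year         = {2023},
  howpublished = {\url{https://chat.openai.com}},
  note         = {Accessed: 2023-02-01}
}
```

Students following other citation styles (e.g. MLA or APA) should check whether their institution has issued its own guidance for citing AI tools.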

Educator input
We are engaging with educators to learn what they are seeing in their classrooms and to discuss ChatGPT’s capabilities and limitations. In our outreach, we are beginning with a focus on educators in the US, where we are headquartered, and will continue to broaden as we learn.

We welcome additional perspectives on what people impacted by these issues are seeing, including but not limited to teachers, administrators, parents, students, and education service providers. As part of this effort, we invite educators and others to share any feedback they have on our feedback form as well as any resources that they are developing or have found helpful (e.g. course guidelines, honor code and policy updates, interactive tools, AI literacy programs, etc).

Acknowledgements
We solicited feedback from educators in developing this resource. Participation in the feedback process is not an endorsement of the deployment plans of OpenAI or OpenAI’s policies.

Francine Berman, Director of Public Interest Technology, UMass Amherst
Jack Cushman, Director of the Library Innovation Lab
Sarah Cooper, Associate Head of School and 8th Grade History & Civics Teacher, Flintridge Preparatory School
Sue Hendrickson, Executive Director of the Berkman Klein Center for Internet and Society
Elijah Milgram, Professor of Philosophy, University of Utah
Anna Mills, English Instructor, College of Marin
Hollis Robbins, University of Utah
Michelle Roslosnik, California Educator
Bonnie Villegas, retired English teacher and current substitute teacher for Stockton Unified School District
Tom Zick, Research Assistant at the Berkman Klein Center for Internet and Society
Jonathan Zittrain, Co-Founder of the Berkman Klein Center for Internet and Society