Artificial Intelligence (AI)

Advantages and disadvantages of AI

The advantages of AI and large language models are many. One of the main advantages is their ability to process and analyse large amounts of data much faster than a human. This means that AI can help solve complex problems and make decisions based on objective data and facts.

Another advantage is that large language models can be used to automate and facilitate communication between humans and machines. By understanding and generating natural language, these models can improve customer service and translation services, and enable smart assistants that support us in everyday life.

A further advantage is that AI and large language models can be used to create personalised and tailored experiences. By communicating with an AI tool, you can ask follow-up questions and thus learn more about a topic. For teachers, there are many positive aspects to using AI as an aid when planning teaching and examinations, for example when designing quizzes.

Despite their advantages, however, AI and large language models also have disadvantages. One of the biggest challenges is that these systems depend on large amounts of data to function well. Without enough accurate and representative data, the results can be unreliable or completely misleading.

AI generates answers based on statistical probability rather than on what is actually true, which means that it can omit important or even crucial information. There is also a risk of bias, as the models can reflect and amplify inequities and biases present in the data sets they are trained on.
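To make the point about probability concrete, here is a minimal, purely illustrative sketch in Python. The words and probabilities are invented for the example and do not come from any real model; the sketch only shows that a system which samples from a probability distribution can produce a fluent but incorrect answer.

    # Purely illustrative sketch (hypothetical numbers, not from any real model):
    # a language model chooses the next word by sampling from a probability
    # distribution over candidates, not by looking up a verified fact.
    import random

    # Hypothetical probabilities for the word following "The capital of Australia is"
    next_word_probs = {
        "Canberra": 0.55,   # correct answer
        "Sydney": 0.35,     # fluent but wrong
        "Melbourne": 0.10,  # fluent but wrong
    }

    words = list(next_word_probs.keys())
    weights = list(next_word_probs.values())

    # Sampling by probability means a wrong but plausible answer appears
    # a substantial share of the time.
    print(random.choices(words, weights=weights, k=1)[0])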

Another disadvantage is that AI and large language models can require significant resources. Training and maintaining these models requires large amounts of data, computing power, and storage space. This can be a challenge for smaller organisations or developers with limited resources.

Security and privacy aspects also need to be considered. Because AI systems and language models can be complex and hard to fully understand, it can be difficult to determine whether the decisions they make are ethically sound or safe. There is also a risk that these technologies can be misused to spread misinformation or to manipulate people.

Things to think about when planning teaching and examinations

When planning a course, the course coordinator, together with the examiner, needs to decide whether and, if so, how AI may be used by the students. It is of the utmost importance that the students receive clear instructions about what applies before each course and each examination. This information should be given to the students in writing.

When AI is permitted in connection with an examination, the students should be clearly informed about how its use should be referenced.

Considering that the course syllabus usually specifies learning outcomes concerning the student's competencies, skills, and abilities, the use of AI should be limited. If the basic idea of a course, for example, is that the student should develop certain abilities, the use of AI may mean that what the student submits in connection with an examination does not reflect those abilities. It also means that the teacher cannot make an adequate assessment of what the student has actually achieved.

It has been said that today's use of AI can be compared to the introduction of calculators in mathematics education in the 20th century (Digital competences in 2035. Future analysis for the competence supply of cutting-edge digital competences, Report 2022:2, the Swedish Agency for Economic and Regional Growth and the Swedish Higher Education Authority, p. 48). Such a comparison may help in working out which approach to take and what instructions to give.

Unless otherwise stated, there is a risk that the data you share with an AI tool becomes the property of the company providing the tool. Students therefore need to be informed that they should not share personal data (the university is responsible for all processing of personal data) or any other data that they do not want disseminated. Everything a large language model receives may potentially resurface in some form.

Checklist

  1. On the basis of the course syllabus, what should the students know and be able to do?
  2. How do we measure students' knowledge, skills, and abilities?
  3. In light of points 1 and 2, should AI use be allowed in the course/examination?
  4. If yes to point 3, for what purpose and to what extent can AI be used in the course/examination?
  5. If AI use is permitted, which form of examination is appropriate?
  6. If AI use is allowed, how should the student reference it?
  7. How is it ensured that the students receive clear information about what applies in each course?
  8. Where and when is the information provided?

Information for students

This webpage contains general information about AI and large language models for students.

Disciplinary offences

Unauthorised use of AI tools in connection with examinations must be reported as a disciplinary case in accordance with Chapter 10 of the Higher Education Ordinance. At present there are no fully developed tools for detecting the use of AI tools, and even as new ones are developed, methods for circumventing them will probably continue to develop as well. From a disciplinary perspective, these systems are therefore not effective.

Tools for detecting AI-written text may be used as part of an investigation, but they will need to be supplemented with other material or evidence in order to form a basis for a disciplinary case.