Generative artificial intelligence (AI) tools create content such as text, images, code, music or video in just a few seconds, based on patterns and structures in the training data used.
Depending on the type of content, different data are used to train generative AI algorithms. One example is large language models (LLMs), which specialise in textual analysis, text processing and text generation and are trained on very large datasets. The output generated is based on plausibility and probability; generative AI does not, however, understand categories such as “real” vs “fake” or “true” vs “false”. As a result, the output is often inaccurate, biased or even made up (known as hallucinations or confabulations). While the potential of generative AI, which is becoming ever more powerful, can be harnessed for constructive and beneficial ends, it can just as easily be used for destructive and harmful purposes.
The responsible and informed use of generative AI tools as assistance systems can also be very helpful in a scientific/academic context, e.g. for brainstorming, identifying topics and refining research questions, getting an initial overview of a subject, generating code, illustrations, tables or presentations, or for troubleshooting and editing.
However, the rapid, ongoing development of a growing number of generative AI tools presents various challenges and risks in both teaching and research. These primarily concern the concept of authorship (and thus also copyright); new forms of scientific misconduct involving the undeclared or non-transparent use of generative AI tools; the credibility of science and scientific publications in general; the significant potential for misuse in (e.g. anti-scientific) disinformation campaigns; and the perpetuation of biases.
Against this backdrop, it is clear that responsible use of generative AI tools also requires knowledge of their limitations.
Informed use of generative AI in line with academic integrity requires not only practice with the relevant tools and knowledge of their limitations, but also a deep understanding of the academic field in question: users must be able to check and assess the factual accuracy of all statements independently of AI.