Course Info

Module [Module Number] Schwerpunktmodul Seminar Information Systems I [1277SMSI01]
Schwerpunktmodul Seminar Information Systems II [1277SMSI02]
Regular Cycle Summer Term
Teaching Form Seminar
Teaching Language English
Instructor Dr. Karl Werder
KLIPS Summer Term 2020 (First Registration Phase)
Syllabus Download



Today’s abundance of data, in conjunction with technological progress in machine learning and artificial intelligence (AI), has given rise to an entirely new data labelling industry (Murgia, 2019). Data labelling refers to the process of marking data with a specific code; these codes are used to train algorithms to correctly predict the code, or label, from other input data. Market research companies expect the market for third-party data labelling solutions to grow from 150 million USD in 2018 to 1 billion USD by 2023 (Cognilytica Research, 2019). Data-driven organizations have recognized the importance of, and the value added by, data labelling for their AI-based systems, as the acquisition of Mighty AI by Uber suggests (Soper, 2019). However, we suggest that current data labelling practices (i.e., hiring cheap labor to perform labelling tasks, e.g., through crowd-based platforms; Murali, 2019) threaten the quality of AI-based recommendations in expert systems, since an algorithm’s output will only be as good as the data it is given.
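To make the labelling-to-prediction link above concrete, the following is a minimal sketch (with invented toy data, not course material) of how human-assigned labels become the target a supervised algorithm learns to predict, here via a simple nearest-neighbour vote:

```python
from collections import Counter

# Hypothetical labelled dataset: feature vectors paired with human-assigned labels.
labelled_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((4.0, 4.2), "dog"),
    ((3.8, 4.0), "dog"),
]

def predict(point, data, k=3):
    """Predict a label by majority vote among the k nearest labelled examples."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(data, key=lambda item: dist(point, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(predict((1.1, 1.0), labelled_data))  # prints "cat"
```

If the crowd workers who produced `labelled_data` had mislabelled the examples, the same algorithm would reproduce those errors, which is exactly the quality risk described above.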

Hence, the quality of the data fed into an AI algorithm is of utmost importance, as it largely determines whether or not users trust the algorithm’s recommendations (W. Wang & Benbasat, 2016). Generally speaking, data quality spans many different aspects, for example completeness, accuracy, consistency, timeliness, duplication, validity, availability and provenance (Burt, Leong, & Shirrell, 2018). While data quality is a broad term that has been researched extensively (Strong, Lee, & Wang, 1997; Wand & Wang, 1996; R. Y. Wang & Strong, 1996), we suggest that in the context of data labelling, understanding data provenance is increasingly important. Data provenance is one aspect of data quality (Burt et al., 2018) that describes the data’s process flow in order to establish the credibility and trustworthiness of the data (Alkhalil & Ramadan, 2017). It is defined “as a record that describes the people, institutions, entities, and activities involved in producing, influencing, or delivering a piece of data” (Belhajjame et al., 2013). Consequently, the data an AI algorithm relies on, and its processing, are made transparent. The quality of the data an algorithm uses influences the quality of its recommendations (Stvilia, Gasser, Twidale, & Smith, 2007). Relying on incorrect recommendations can have disastrous effects, such as the incorrect treatment of a patient in healthcare (Holzinger, Langs, Denk, Zatloukal, & Müller, 2019), or an unjust sentence in the legal system (FRA, 2019). Hence, we need a better understanding of how data provenance affects the effective use of AI algorithms.
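The cited definition of data provenance (people, institutions, entities, activities) maps naturally onto the W3C PROV data model. As a hypothetical illustration (all field names and values invented here), a provenance record for a single labelled data point might look like this:

```python
# A hypothetical provenance record, loosely following the W3C PROV data model
# (entity, activity, agent), for one labelled data point.
provenance = {
    "entity": "image_00421.jpg",                # the piece of data
    "activity": {
        "type": "data_labelling",               # what was done to it
        "label_assigned": "pedestrian",
        "started_at": "2020-03-01T10:15:00Z",
    },
    "agent": {
        "type": "crowd_worker",                 # who produced the label
        "platform": "example-crowd-platform",   # hypothetical platform name
    },
}

def is_traceable(record):
    """A record is traceable if it names the data, the labelling step and the agent."""
    return all(key in record for key in ("entity", "activity", "agent"))

print(is_traceable(provenance))  # prints True
```

A downstream AI system could require such a record for every training example, making the process flow behind its recommendations transparent in the sense described above.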

This seminar therefore seeks to understand the effect of artificial intelligence on user behavior, as well as the influence of data labelling and data quality.

In this seminar, students will learn to identify, plan and conduct their own research project. The projects use secondary data to answer the research questions the students develop. Given the explosion of information in today’s society, the ability to extract, transform and analyze data from secondary sources is an important business skill in our knowledge society. While many types of data collection method exist, this seminar focuses on secondary data in order to assure data access for the later analysis.
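The extract–transform–analyze steps mentioned above can be sketched in a few lines. This toy example (with made-up study data standing in for a real secondary source) uses only the Python standard library:

```python
import csv
import io

# Stand-in for a real secondary data source (e.g., a downloaded CSV file).
raw = io.StringIO(
    "study,year,sample_size\n"
    "A,2018,120\n"
    "B,2019,300\n"
    "C,2019,80\n"
)

rows = list(csv.DictReader(raw))             # extract: parse the source
for row in rows:                             # transform: cast types
    row["sample_size"] = int(row["sample_size"])
total = sum(r["sample_size"] for r in rows)  # analyze: aggregate

print(total)  # prints 500
```

In a real seminar project the `raw` source would be an external dataset, and the transform and analysis steps would follow the study protocol developed in phase 3.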

(Please see the syllabus for the list of references)

Course Design

The seminar work consists of five main phases:

  1. The students acquire the basics of conducting scientific work via the Flipped Classroom.
  2. The students learn the fundamentals concerning AI research in IS and secondary data collection and analysis.
  3. The students plan their seminar project and develop a study protocol that is submitted and discussed.
  4. The improved study protocol guides the students as they collect their data and assists them in their analysis: relevant data sources are identified, and data is collected and processed to develop the key deliverable of the seminar project.
  5. The seminar project is documented in a seminar paper. 

Learning Objectives

Students …
  • search, interpret, systematise and present material for an academic presentation on a specifically defined topic.
  • develop and, in the case of an advanced seminar that is project-based or in the style of a case study, assess approaches and solutions for a specifically defined assignment, based on literature and their own work and in a limited amount of time.
  • present findings and defend them in critical discussion with fellow students.
  • engage in academic discourse.


  • 6th April 2020, 11:00-17:00: Classroom session on Scientific Work (not necessary if you have attended before)
  • 7th April 2020, 09:00-10:00: Kick-off (Introduction to Seminar; Organization)
  • 14th April 2020, 09:00-11:00: Discussing AI-System Characteristics
  • 21st April 2020, 09:00-11:00: Discussing Algorithm Aversion
  • 28th April 2020, 09:00-11:00: Discussing Explainable AI
  • 12th May 2020, 09:00-10:30 & 11:00-12:30: Study protocols: Discussions and feedback
  • 7th July 2020, Submission of final seminar paper 

Venue: Online sessions (e.g., zoom). Details will be communicated through ILIAS.


The course grading is threefold:

  • Paper Summary (20%) - you are expected to write a clear and concise one-page summary of the article assigned to you. In addition, you are expected to read two more papers within your topic domain so that you can lead an online discussion, and to read the summaries or papers of the other topic domains in this course so that you can participate in the online discussions.
  • Study Protocol (30%) - you are expected to develop and write a study protocol (3-5 pages). You will also be assigned two study protocols of your peers that you review, so that you can lead and contribute to online discussions.
  • Seminar paper (50%) - departing from your initial study protocol and the feedback received, you are expected to hand in a seminar research paper. This work contains (1) a clear and concise introduction that motivates the research, (2) a review of the state of the literature that defines central terms, (3) a transparent yet concise documentation of your research approach, (4) a presentation and discussion of your results, and (5) an outlook on future research needs.

Selected Readings


Adadi, A., & Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138–52160.

Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-Dependent Algorithm Aversion. Journal of Marketing Research, 56(5), 809–825.

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126.

Guidotti, R., Monreale, A., Ruggieri, S., Pedreschi, D., Turini, F., & Giannotti, F. (2018). Local Rule-Based Explanations of Black Box Decision Systems. (May). Retrieved from

Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Philosophy and Technology, 31(4), 611–627.

Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103.

Pedreschi, D., Giannotti, F., Guidotti, R., Monreale, A., Ruggieri, S., & Turini, F. (2019). Meaningful Explanations of Black Box AI Decision Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 9780–9784.

Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277–284.

Wang, W., & Benbasat, I. (2016). Empirical Assessment of Alternative Designs for Enhancing Different Types of Trusting Beliefs in Online Recommendation Agents. Journal of Management Information Systems, 33(3), 744–775.