USING MOODLE AS A RESPONSE SYSTEM FOR HIGHER EDUCATION IN POLAND: BENEFITS AND CHALLENGES

The paper explores learners' responses to a new course format introduced in the Faculty of Philology at the University of Łódź. The course contained an interactive component with online tasks conducted in real time in the classroom. Instead of using dedicated Response Systems, we adopted the Moodle LMS to verify whether it can be successfully used in that capacity. It may be concluded that, despite its dated UI, this system is still a viable option for offering interactive in-class instruction. The results of the student survey indicate that the course was well received, although not all of its components were equally appreciated by the students. In particular, there was a significant difference between the reception of open-ended and closed-ended questions. This difference might be explored from various perspectives, e.g. convenience vs learning effort, or memorization vs acquiring skills. The paper offers pedagogical implications which may help deal with the aforementioned problems.


Introduction
The use of ICT tools in Polish universities has been beneficial for students and teachers alike. However, the benefits mostly manifest themselves in the improved accessibility and user-friendliness of teaching practices guided by the principles of expository methods. Despite a number of ambitious projects, providing access to materials and automated quizzes remains the most popular way of using technological tools (Turula, 2014). In this context, many challenges which were anticipated to disappear owing to technology are still present in Polish academia.
One of the major issues which still needs addressing is the underuse of teaching tools that promote interactivity on a technical and pedagogical level (Tanner & Jones, 2007). While one cause of this problem might be related to teachers' insistence on using the 'old' paradigms, it is equally important to consider the impact of technology. As mentioned by Turula (2014), most Polish universities rely on open-source e-learning platforms which do not always meet the expectations of the academic community. Moodle, by far the most widely-adopted platform (Madej, Faron, & Maciejewski, 2016; Redlarski & Granik, 2014), might be perceived as counter-intuitive (Redlarski & Granik, 2014; Hasan, 2018), and may lack essential features necessary to fully support achieving learning outcomes (Turula, 2014).
It is also worth mentioning that, in order to support and improve learning, numerous centres worldwide use the aforementioned LMSs as well as Dokeos, Claroline, etc. (Azouzi, Ghannouchi, & Brahmi, 2017). However, a distinction should be made between cloud-based solutions (with considerably low startup costs) and open-source ones (requiring installation and setup). A cloud-based LMS, hosted on a server by the provider, ensures data security and maintenance. According to a recent ranking (Pappas, 2020), Moodle is placed at the top of the group of open-source LMSs due to its affordability, its customization options, as well as the extensive tutorials and the professional community that helps to develop Moodle.
In addition, the use of many other free tools, such as Response Systems (RSs) like Kahoot or Plickers, is discouraged if these systems collect sensitive user data outside the university (EU Regulation 2016/679). As a result, the aforementioned RSs are safe if used anonymously, but should not be employed to track learners' progress or record their activity throughout the semester.
Another source of problems can be the learners' unwillingness to participate in interactive activities. This is mostly due to the norms and expectations formed at previous levels of education. As described by Srokowski (2015) and Fazlagić et al. (2017), Polish primary and secondary schools suffer from a deficit of active learning. This attitude, if transferred to academia, might have a negative impact on learners' willingness to engage in interactive learning. Such a phenomenon can be explained by situated learning theory, which stresses the importance of the cultural context for successful learning (Lave & Wenger, 1991).
The authors found that students of English at the University of Łódź who attended their course displayed a lack of interest in non-technological interactive activities conducted in the form of group work, pair work or discussion. One of the major issues was the fact that a select few monopolized the classroom discourse, thus 'protecting' the rest from being actively involved in the learning process.
Therefore, it was decided to use the technology-based tools offered by the official university e-learning platform in order to facilitate interactive in-class learning.
The decision to focus on promoting pedagogical interactivity with digital tools corresponds to some of the major trends in higher education predicted in the Educause Horizon Report. This paper describes both the technological challenges of using Moodle to achieve the intended interactivity, and the results of the post-course survey, in which students shared their opinions concerning technology-based interactive teaching.

Response systems
Among the most popular tools for conducting interactive classes are various response systems (RS). In the literature, a number of studies can be found which are related to this concept, notably Audience Response Systems (Chien, Chang, & Chang, 2016; Papadopoulos, Natsis, Obwegeser, & Weinberger, 2018), Interactive Response Systems (Auras & Bix, 2007; Wang, 2018), Student Response Systems (Arnesen, Sivertsen Korpås, Hennissen, & Birger Stav, 2013; Hwang, Wong, Lam, & Lam, 2015), Personal Response Systems (Guthrie & Carlin, 2004) and Classroom Response Systems (Fotaris, Mastoras, Leinfellner, & Rosunally, 2016). The variety seems to reflect the fact that in many cases the systems differ in terms of functionality or the way in which they are used. For instance, Papadopoulos et al. (2018) define response systems as synonymous with clickers, pointing to the closed-ended nature of the tasks, while Wang (2018) mentions open-ended questions as an essential component of their course. Another difference is the size of the audience, which might vary from a classroom (mentioned in Hwang et al., 2015) to a lecture hall (e.g. Fotaris et al., 2016).
Regardless of the variation in naming and modes of use, response systems are primarily used in order to make learning more interactive for participants. Their implementation in the classroom is based on two assumptions (Guthrie & Carlin, 2004), namely that students must process a question and try to answer it in order to learn (Thalheimer, 2003), and that immediate feedback is more beneficial than delayed feedback (Kulik & Kulik, 1988). Offering student-centred, stimulating learning with immediate feedback is the core purpose of response systems, which concurs with the general goals of using any educational technology in the classroom (Firmin & Genesi, 2013).
From the educator's perspective, RS can be used to considerably reduce the teaching workload. This goal can be achieved by saving and re-cycling questions (Stav, Nielsen, Hansen-Nygård, & Thorseth, 2010), by instantaneously accessing and analysing learners' responses in class (Stav et al., 2010) or by providing data for learner assessment and course design analysis (Hwang, Wong, Lam, & Lam, 2015).
However, the most important benefits in the Polish educational context seem to be related to reducing disengagement in the classroom (Papadopoulos et al., 2018). Hwang et al. (2015) cite a number of studies which show that the issue of 'inactivity' is largely attributed to shyness or fear of being wrong. They propose that psychological safety might be restored by using RS, and research shows that anonymous feedback might indeed contribute to more active participation. For instance, Barr (2017, p. 630) witnessed increased participation among students using anonymous RS, which they attribute to "(…) the decrease in fear, pressure, anxiety and embarrassment associated with answering questions in class", while Bojinova and Oigara (2013, p. 154) state that RS help avoid ridicule and "public humiliation".
In the latest meta-analysis of RS studies, Chien, Chang and Chang (2016) found that the effectiveness of response systems was attributed by researchers to one or more of the following factors:
• novelty effect,
• unequal item-exposure effects,
• testing effect,
• feedback-intervention effect,
• (self-)explanation effect.
The results of the analysis show that the benefits for learner performance are most likely due to feedback-intervention, defined as the ability to see relevant feedback and to respond to it, thus creating a "performance-feedback-adjustment" loop (Chien, Chang, & Chang, 2016, p. 5). In this context, the aforementioned psychological safety seems to play a crucial role, as learners responding via RS are more likely to perceive feedback as focused on the subject matter rather than themselves (Hattie & Timperley, 2007).
Most publications which describe the use of RS in the classroom focus on providing instruction to large groups of students (e.g. Arnesen, Sivertsen Korpås, Hennissen, & Birger Stav, 2013;Fotaris et al., 2016;Stav et al., 2010). The key benefit of this approach is replacing traditional lectures with activities which are more "stimulating, enjoyable and engaging" (Stav et al., 2010, p. 179) in an environment in which the lecturer can devote little time to each individual separately. Most interaction is, therefore, based on a model in which -in its most basic form -information is presented to the learner, then a question is asked, followed by some intermediate or delayed feedback (Chien et al., 2016). In the case of delayed feedback, learners might be asked to reflect on the answers, discuss them, and modify their final response.
Finally, one of the latest trends in the field is giving more attention to multi-step problem-solving activities (Fuad, Deb, Etim, & Gloster, 2018). This functionality is aimed at building deeper and more thoughtful user engagement with interactive elements such as branching, navigating between questions, etc. One important characteristic of such activities is that they cannot be easily implemented with standard RS apps or tools. Fuad et al. (2018) designed a custom Mobile Response System (MRS) to achieve this goal, but it can also be successfully replicated with other tools, notably an LMS-based activity.

Background: teaching procedure for using LMS tools as RS
While most RS described in the literature function as standalone solutions, it should be noted that Polish higher education chiefly uses Learning Management Systems (such as Moodle or Blackboard) in order to impart knowledge, track progress and record activity.
Therefore, it seems natural that reporting students' in-class answers in a synchronous manner should be included as part of this toolset. In the case of the University of Łódź, this was both a challenge and an opportunity. While the standard Moodle installation used by the University does not contain any RS plugins, the wide variety of Moodle Quiz tools could be used as an RS in the classroom.
In order to replicate the functions of response systems, the following procedure was used. Prompts indicating that learners would need to open their Moodle page were included as separate slides in every presentation; thus, the teacher ensured that the beginning of every digital activity was signalled both verbally and visually. Due to the low automatic refresh rate of the university Moodle website, it was necessary to ask learners to refresh their pages to see the tasks uploaded by the teacher.
The tasks were divided into three categories: 1) questions used to make learners voice their opinions/propose definitions or examples, 2) questions used to make learners predict the correct answer, 3) questions used to consolidate knowledge and reinforce skills. The questions in the aforementioned categories were created as open-ended (OE) or closed-ended tasks (CE), built with native Moodle Quiz (Qz) and Questionnaire (Qn) activities. Examples of all types and categories are presented in Table 1.
During the instruction, Categories 1 and 2 were mostly used as introductory tasks, whose purpose was to help produce data which could later serve as a starting point for a discussion. Later in the lesson, Categories 1 and 2 were also used as prediction activities, i.e. exercises in which the students use their knowledge to guess the correct answer. In the consolidation phase, Category 3 questions were used as the principal method of self-assessment. As no personal results were displayed in front of the group, the feedback can be considered 'safe'.

Research questions
The following research questions were formulated:
Q1: Are learners satisfied with the new format of the course?
Q2: How do learners assess OE questions?
Q3: How do learners assess CE questions?
Upon completing the course, the students were asked to submit an online Moodle questionnaire form which consisted of four questions. In the first one, the subjects were asked to rate items on a four-point Likert scale, from strongly disagree to strongly agree (Wagner, 2015, p. 88). The items included statements about the subjects' previous experience with online tools (a), statements related to their perception of the course (b), and statements referring to possible further directions of development for the course (c). The list of statements is presented in Table 2. For the sake of clarity, the statements are grouped into sections (a), (b) and (c); it should be noted, however, that they were dispersed in the original form.
The first question was followed by two open-ended questions, which asked students to assess the quality of OE and CE tasks and to provide suggestions for improvement (coded d and e, respectively). In the last question (f), the subjects were asked for any other remarks and suggestions.

Results
In total, 58 responses were collected. One learner did not answer the open-ended questions. In section (a), 36 (61%) of the respondents answered that they had never participated in a blended/technology-enhanced course before. The responses in section (b) are presented in Table 3. The highest number of positive responses ("agree" + "strongly agree") was observed in b.2 and b.7, which had no negative answers. This indicates that the majority of learners find technology-enhanced learning to be more interesting and that they appreciated the design of the course materials. The highest number of "strongly disagree" answers was registered in items b.4 and b.6, which concerned enjoying completing online tasks and the user-friendliness of the Moodle platform, respectively; b.4 also had the highest number of negative responses. Finally, the results of the reverse-coded item b.5 showed a pattern which is relatively consistent with the other answers, with 7% of respondents assessing online tasks to be boring.
No statistically significant correlation was found between students' previous experience (a.1) and most items from section (b). It was, however, found that there was a statistically significant negative correlation between the respondents' experience with technology-enhanced instruction (a.1) and item b.9, i.e. their belief that more courses ought to contain in-class online exercises (rs = −0.45, p (2-tailed) < 0.05). It was also found that the more experienced learners tended to show a higher level of boredom while completing online tasks (b.5), with rs = 0.31, p (2-tailed) = 0.02. Therefore, it can be concluded that previous experience did not affect the general assessment of the course, but it could have contributed to the perception that certain tasks are boring.
Moreover, the more experienced learners tend to think that an increased presence of online education is not necessarily desirable in a classroom-based system. Since this result was rather unexpected, it was decided to investigate further and check whether there was any correlation between b.5 and b.9. The result proved to be statistically significant (rs = −0.43, p (2-tailed) < 0.001), i.e. students who showed a higher level of boredom seem to be less likely to support the idea of including an online component in more classes.
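The rank correlations reported above can be reproduced with a short, dependency-free computation. The sketch below is illustrative only: the Likert responses are hypothetical values invented for demonstration (not the study's raw data), and the item labels b.5 and b.9 are used merely to mirror the survey's coding. The function implements Spearman's rs as Pearson's r computed on average ranks, which is the standard way of handling the ties that four-point Likert data inevitably produce.

```python
def ranks(xs):
    """Assign ranks 1..n, giving tied values the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rank correlation: Pearson's r on the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical four-point Likert answers (1 = strongly disagree .. 4 = strongly agree)
boredom = [1, 2, 4, 3, 1, 2, 4, 3, 1, 2]        # item b.5: online tasks were boring
more_online = [4, 3, 1, 2, 4, 4, 2, 1, 3, 4]    # item b.9: more in-class online tasks wanted

print(f"rs = {spearman(boredom, more_online):.2f}")
print(spearman([1, 2, 3, 4], [4, 3, 2, 1]))  # → -1.0 (perfect inverse ranking)
```

Note that this sketch yields only the coefficient; obtaining the two-tailed p-values reported in the study would additionally require a significance test, e.g. the one bundled with `scipy.stats.spearmanr`.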
In section (c), the learners were asked to answer questions related to future ways of enhancing the course. The results are presented in Table 4. The responses to items c.10 and c.11 show that the majority of learners would not like to increase (66%) or decrease (84%) the number of online tasks used during the course. As regards the choices of the minority, more supported the idea of increasing the number of such tasks (34%) than decreasing it (16%). In addition, over half of the respondents (62%) wanted to see the course offered as distance-learning classes, while only 44% supported the idea of using smartphones instead of computers in university settings.
For items d and e, all the positive and negative comments were counted and grouped according to the aspects of technology-enhanced instruction to which they refer. In the case of comments which referred to multiple aspects, the occurrence of each aspect was counted separately; the data presented in Table 5 therefore show the number of mentions of each particular aspect. All the items mentioned in the tables are simplified to best represent the variety of answers that referred to a single aspect. Comments on quantity or quality were added whenever necessary (too little, clear…, etc.), but some statements remain neutral. In such cases, the header determines the meaning, e.g. the comment "mental effort required to complete the task" in the "negative comments" section represents a statement in which the learner clearly expressed their dissatisfaction with this particular aspect. "General" comments represent expressions of (dis)like which do not explain why a given person thinks in a certain way, such as "I liked it".
The results for item d (OE questions) are presented in Table 5. The number of negative and positive comments was relatively similar (48% and 52%, respectively). The most frequently quoted concern was the lack of clear instructions, while the highest number of positive comments were general statements. Interestingly, some comments were contradictory, such as "unclear instructions" (14.4%) vs "clear instructions" (6.7%), or the complaint that expressing one's opinion without being assessed by the teacher is pointless (4.4%) vis-à-vis the positive comment about self-assessment without explicit teacher feedback (1.1%).
While the number of positive and negative comments concerning OE tasks and questions was relatively similar, the answers for item e, presented in Table 6, show a clear preference for CE activities. The most important positive aspects mentioned by the respondents were enhanced knowledge retention and the ability to complete the task quickly and without too much cognitive effort. This perceived ease in comprehending and completing the task without major obstacles could, however, lead to the perception that the task is not challenging enough. Although in this case the number of positive comments exceeded the number of negative opinions (18.6% as compared to 4.7%), it should be noted that the lack of challenge was the most common complaint.
In item f the answers were not required, so the total number of comments was lower (41). Upon filtering out responses which pertain to faculty-specific matters, such as the time slots allotted for the course, three general types of comments emerged. The first one contains general positive comments similar to the ones in the previous categories, such as "Please spread this approach to teaching in the faculty!" (S6); in total, 22 respondents submitted such comments. Another important trend was complaints about the amount of time devoted to each task. In general, the teacher waited for everyone to submit their answers, which made some people feel bored during the lesson, as evidenced by the comment from S21: "While I understand that the class is created for the whole group, I do believe the overall difficulty of the course was very easy and it should give a chance to tackle more challenging aspects of the topic whenever that's possible. Sometimes the tasks could be easily done and were indeed done by some people in 2-3 minutes when it was necessary to wait 10 minutes for the whole group to finish." In total, two students voiced such complaints. Finally, another two respondents suggested that more online pair-work might increase the quality of the course.

Discussion
Overall, the course might be assessed as moderately successful. The majority of students found it to be interesting and helpful in terms of information retention. In addition, there was no evidence of a 'pushback' against the idea of having everyone work during the class without being able to delegate the work to the most active students. Most respondents also seem to have accepted the ratio of online-to-offline teaching, which was evidenced by the fact that they did not want to increase or decrease the number of online tasks used during the course.
This overall positive assessment is nonetheless less obvious when OE and CE tasks are analyzed separately. In the case of the former, the number of positive and negative comments was nearly identical. Even if one excludes the comments about unclear instructions, which point to problems in execution rather than to some inherent flaw of the task, it is still true that the number of negative comments for OE questions exceeds that for CE questions by over 100%. Unfortunately, the most frequent type of negative response, i.e. a general comment, does not provide enough information to determine one specific factor which might have contributed to the problem. Instead, information from the less frequent comments might be used to formulate hypotheses about the key factors which led to the mixed reception of OE questions.
The first type of comment which might explain the aforementioned phenomenon is the one about the lack of precise feedback. This thought was clearly expressed by S21, who wrote, "Honestly, I don't see much point in questions which strictly concern our opinion, I'd rather focus on tasks which can be right or wrong." Two difficulties arise in this case: firstly, it seems impractical for the teacher to give synchronous personalized feedback to every learner due to time constraints; secondly, it is sometimes impossible to design a task in which there is only one correct answer. This is especially evident in the case of questions about interpretation and in the case of creative tasks, which were both frequently used during the course. The proposed solution is to use a comparison task as a self-feedback tool. In addition to discussing selected patterns of answers, as was done during the course, the learners might be asked to compare their work with a checklist containing the minimum requirements for creative tasks, or to contrast their responses with the ones provided by the experts upon answering the analysis-based question.
Another important aspect is the time allotted for each task. In order to make it clear that all the learners were supposed to complete their tasks, the teacher waited for everyone to finish. This approach forced everyone to focus on the task, but one of its consequences was the fact that many students had to wait for the slower learners to finish. The waiting time was especially long in the case of OE Category 3 tasks (cf. Table 1), which often required performing a number of activities. While it seems relatively difficult to offer a single solution to this problem, the pedagogy of mixed-ability classes might suggest assigning the more advanced students another task to complete. These extra tasks should be checked automatically (CE) so as to avoid an excessive workload for the teacher, who ought to stay focused on the main task. Since every such attempt is automatically graded in Moodle, it would be relatively easy to record and reward all the extra work done by learners.
The mixed reception of OE questions seems to be counterbalanced by a more positive opinion about their CE counterparts. Some respondents explicitly contrasted them with OE questions by mentioning short completion time and immediate feedback. In addition to commenting on the ease of use of CE questions, learners also mentioned one long-term effect, namely enhanced information retention (CE Category 3). No explicit attention was given in the comments to the questions which asked respondents to predict the outcome of some task or to propose their own solutions. This might indicate that the learners are more concerned with being able to memorize certain information than with gaining skills related to predicting the outcome or analyzing the data. From this perspective, CE questions seem to be mostly a technique for effectively absorbing knowledge. In a relatively frank comment, S46 remarked that this effectiveness might be further enhanced by using the BYOD approach: "I believe that using our own devices might be more efficient. It could be harder to use the same device to distract ourselves with other things considering that there might be a task to complete at any given moment - we would have to keep on our toes."
The picture of CE questions that emerges from the aforementioned analysis can be condensed to the statement that they constitute a "quick and easy" method of learning, which can compete for learners' attention with their smartphones. However, some other students saw this lack of challenge as something negative, as mentioned by S54: "It is pretty nice, I cannot say that I like it as I like open-ended tasks, because here we have a limited number of answers but I understand that sometimes it is the only way to be precious (probably misspelled 'precise')." Again, it seems that the interests of both groups are hard to reconcile; it might, however, be possible to do so by using the same strategy as the one mentioned in the case of OE questions, i.e. by administering additional tasks with automated feedback.
Another aspect related to teaching mixed-ability classes may be the issue of learners' differences in their IT skills, rather than in their ability to answer questions about the subject matter. While declared e-learning experience as a measure of these skills should be approached with a certain caution, it seems to have some impact on the perception of the course. Subjects with a higher level of previous experience were less open to the idea of more in-class online activities, while also reporting a higher degree of boredom during classes. This might be due to the fact that the experienced learners were less prone to the novelty effect. Alternatively, it is also possible that the learners who have more developed IT skills are also the ones who perform better in academic tasks, which means that they complete them earlier and experience boredom while waiting for the rest of the class to finish. This assumption is partly supported by the statistical analysis presented in Section 5, which suggests a correlation between the level of boredom and the unwillingness to participate in more RS-based classes. Nevertheless, given the amount of data collected, it would be unwarranted to assume that addressing the issues related to learners' mixed ability in dealing with academic tasks would have any impact on the perception of the course among the more IT-advanced learners.
In terms of the feasibility of using Moodle as an RS, it should be noted that the majority of the students assessed this platform as intuitive and easy to use (89% in item b.6). As regards positive comments, two aspects were mentioned, namely the ability to see the results displayed after the task, and the variety of activities, as illustrated by the comment by S50: "Quite liked those as well, definitely more interesting to drag and drop on a diagram than have to write it out. Nothing to complain about." Interestingly, variety in tasks is not always a feature of more modern RS, which are better at visualizing responses by means of word clouds or interactive lists, but do not offer a vast array of tools for automated-feedback tasks. From this perspective, Moodle bears a resemblance to digital worksheets, in which displaying and analysing learner responses is just one out of many available modes of teaching.
Despite all these advantages, in some instances Moodle might look subjectively 'aged' in comparison to more modern tools, which could lead to problems with user experience. Complaints about this aspect appeared in sections d and e, but they exclusively concerned the Attendance module, which is not relevant to the RS-oriented use of Moodle. To sum up, it can be assumed that Moodle, despite its shortcomings, can still offer a valuable alternative to other RS.
Despite all the aforementioned advantages, there exists one caveat, namely the fact that the positive reception of Moodle in this context is restricted to an environment in which a PC serves as the terminal for communicating with the RS. By contrast, the majority of learners stressed that they would not like to work in the BYOD mode. In terms of the UX factors that might cause this attitude, there seem to be two explanations for this phenomenon. Firstly, OE questions which entail typing longer texts are not mobile-friendly regardless of the RS; secondly, Moodle tasks such as drag-and-drop might cause difficulties on small touch screens. One middle-of-the-road solution which would bring the benefits of mobile technologies to the classroom without abandoning versatile Moodle tasks could be the use of larger mobile devices, such as tablets.

Conclusions and pedagogical implications
Moodle used as an RS can be an effective and enjoyable way to teach certain academic subjects. Despite its shortcomings, the learners generally seem to enjoy using it. In addition, they appreciate the enhanced information retention which they consider to be the result of using CE questions aimed at consolidating knowledge.
One of the potential challenges for the teacher is the fact that learners show a clear preference for CE questions, which they perceive as more accessible and less time-consuming. On the other hand, for some learners this type of question is not challenging enough, which might lead to negative consequences, such as experiencing boredom during classes. Designing the course to include elements of a mixed-ability approach to teaching could be a solution to this problem, but further research is required to confirm this claim. Nevertheless, the authors would like to postulate that tasks which are considered more challenging and/or time-consuming can still be valuable in terms of achieving the Intended Learning Outcomes.
OE questions seem to be received with more caution than CE questions, and their use in the course seems to divide the learners. While the data did not provide a definitive answer as to why this is the case, some responses suggest that OE questions should be more feedback-oriented. Therefore, a simple follow-up discussion might not be enough to satisfy learners; instead, it is suggested that a short self-assessment post-task activity, with rubrics or points for comparison, might help increase the acceptance rate for this type of question. In addition, extra automatically-checked tasks should be included in the course to help learners who complete OE questions first avoid longer periods of inactivity.