Between 2001 and 2020 I taught at the Institute for Multimedia and Interactive Systems at the University of Lübeck in the fields of Interaction Design – Extended Reality (XR) – Design and Media Theory – Media Philosophy – Computer and Media Art. I was also responsible for research and implementation of body- and space-related digitally enriched learning environments.
Following the very successful three-year (2001-2003) research project Theory and Practice of Integrated Arts, Design and Computer Science in Education (ArtDeCom), in which scientists at the University of Lübeck and the Christian-Albrechts-University in Kiel, together with educators at schools, researched and implemented digitally enriched learning environments by means of body- and space-related media in the context of regular teaching, the initiative Kids in Media and Motion (KiMM) was launched at the University of Lübeck in 2004. It ran for 17 years, until my regular research activity at the university ended with my retirement.
KiMM comprised various research and research-transfer projects in which, in addition to university scientists and school-based teachers, scientists from the field of teacher education were also significantly involved. The goal was to transfer scientifically tested and evaluated teaching models so that new types of digital media expand the school learning environment in all school subjects, and especially across subject boundaries.
Pedagogical Foundations
At the end of the 20th century and now in the 21st, pedagogy and didactics mostly follow interactionist-constructivist (e.g., Kersten Reich, 2011) and systemic-constructivist approaches (e.g., Rolf Arnold, 2014).* Contemporary didactic approaches assume that learning is an active construction process in which a learner creates an individual aesthetic and algorithmic (functional) representation of the world. However, this process is not purely subjective, as early constructivists like Humberto Maturana and Francisco Varela (1992) hypothesized, since each subject stands in relationships with other subjects within different communication communities. Within such communication systems, any statement about reality is subject to viability: it shifts with interests, power relations, and the formation of social, economic, cultural, and symbolic capital in the sense of Pierre Bourdieu (1991). Learning thus depends strongly on individual prior knowledge and on the social, natural, and technical environment in which learning takes place. Individual aesthetic expression and communication drive creative co-construction processes. Even if we have to question Jean Piaget's rigid step-by-step development model (1977) because of the social, societal, and cultural influences on learning, his fundamental approach to learning theory remains groundbreaking: it is not possible to transfer knowledge from one person to another; instead, each person must construct it by him- or herself, depending on his or her previous knowledge, skill level, and attitudes, contextualized by challenges and learning contexts. As a result, learning is not passive storage but the active and creative construction of knowledge, which should be supported by learning environments. In contemporary constructivist models, the role of a teacher is not to impart knowledge but to support learners in their individual learning processes through a balanced measure of instruction.
Learners should deal with learning content independently, for example through content selection, the discovery of relations, and the algorithmic combination of chunks of already available knowledge. Comparable to the interactionist- and systemic-constructivist approaches, but not as prominent, is the theory of Expansive Learning in the sense of Yrjö Engeström. His pedagogical approaches follow the so-called cultural-historical theory of activity, founded in the 1920s by researchers such as Lew Semjonowitsch Vygotskij (2012) and Alexei Nikolajewitsch Leont'ev (1978) and further differentiated in Critical Psychology for Self-Determined Learning by Klaus Holzkamp, as discussed by Engeström (2014). According to Critical Psychology, learning in general means the appropriation of an object meaning by a learning subject, not the achievement of a normative educational ideal. In addition to concrete things, this also includes abstract and symbolic references. Thus, Expansive Learning addresses individual or collective learning processes with the goal of extending action possibilities, competencies, and self-determination within processes of co-construction.
* see Publications: Ambient Learning Spaces: Systemic Learning in Physical-Digital Interactive Spaces
Design of the KiMM learning software for Ambient Learning Spaces
Starting in 2004, many teaching environments were designed and evaluated in which existing free or open-source digital systems were used in new and different ways. However, these existing systems were only partially suitable for pedagogically meaningful use in the context of contemporary pedagogical approaches. For this reason, new types of software were developed, initially only in the context of qualification work by students at IMIS.
From 2008 on, the research project Ambient Learning Spaces (ALS) ran within the framework of KiMM, leading to the targeted development of novel, now purely web-based software based on the pedagogical approaches of the previous KiMM projects that had been tested and evaluated up to that point. While the German Research Foundation (Deutsche Forschungsgemeinschaft / DFG) provided significant financial support for the technological development of the novel learning applications within the framework of ALS, the KiMM initiative made these functioning software prototypes available to schools and non-school charitable institutions pursuing corresponding contemporary didactic approaches. The learning environments enriched with these novel media were evaluated, further optimized, and partially integrated into teacher training. The special feature of the ALS project as of 2008 is the development of digitally networked learning systems that enable complex, so-called ambient learning environments, combining physical space with a digital information and action space in a special way.
We called the technological basis of this system Network Environment for Multimedia Objects (NEMO). NEMO consists of
– various frontends (a multitude of user interfaces),
– the middleware, which centrally stores the logic of the many learning applications and the program modules they share, along with various modules for media preparation and conversion and for user authentication,
– as well as the backend for the semantic data storage.
The multimedia content, in the form of text, image, audio, video, or 3D files, is represented as NEMO-MediaFileObjects. These are semantically annotated, adjusted in a person- or location-specific way, and adapted for device-specific use before they are stored as NEMO-MediaObjects (MO). Categorization in various ontologies plays a special role here. In addition, images are automatically assigned to specific categories via an image-analysis component.
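As an illustration of this storage pipeline, the following Python sketch models how a raw media file might be annotated and categorized before storage. All class, field, and function names here are assumptions for illustration, not NEMO's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class MediaFileObject:
    """A raw uploaded media file (text, image, audio, video, or 3D)."""
    filename: str
    media_type: str          # e.g. "image", "video", "3d"
    data: bytes = b""

@dataclass
class MediaObject:
    """A stored media object: the file plus semantic annotations."""
    source: MediaFileObject
    categories: list = field(default_factory=list)   # ontology categories
    annotations: dict = field(default_factory=dict)  # person/location context

def classify_image(mfo: MediaFileObject) -> list:
    """Stand-in for the image-analysis component that assigns categories."""
    # A real system would run a classifier; here we key off the filename.
    return ["schoolyard"] if "yard" in mfo.filename else ["uncategorized"]

def store_media_object(mfo: MediaFileObject, location: str) -> MediaObject:
    """Annotate and categorize a MediaFileObject before storage."""
    categories = classify_image(mfo) if mfo.media_type == "image" else []
    return MediaObject(source=mfo,
                       categories=categories,
                       annotations={"location": location})

mo = store_media_object(MediaFileObject("yard_photo.jpg", "image"), "foyer")
```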
Through the frontends of the ALS learning applications developed and used at IMIS, NEMO is accessed from different contexts, enabling users to generate media content and make it available for the enrichment of the learning environment. NEMO filters media output based on user and device information and takes user preferences into account (for example, translation into another language or the alternative use of simple language). In addition, the system interacts with localization data, accesses usage histories, has an integrated narrator, and can provide cross-device interaction (XDI) and disability-friendly output in the context of ALS learning applications.
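The device- and user-dependent filtering described above can be sketched as follows; the field names and filter rules are invented for illustration and do not reflect NEMO's real interface:

```python
# Hypothetical sketch of device- and user-aware media filtering;
# all field names and rules are assumptions for illustration.

def select_output(media_items, user_prefs, device):
    """Filter media variants for one user on one device."""
    result = []
    for item in media_items:
        # Keep only variants in the user's preferred language.
        if item["lang"] != user_prefs.get("lang"):
            continue
        # Honor a preference for simple language where the variant is flagged.
        if user_prefs.get("simple_language") and not item["simple_language"]:
            continue
        # Skip media that needs a larger screen than the device offers.
        if item["min_screen"] > device["screen_inches"]:
            continue
        result.append(item)
    return result

# Example: a German-speaking user who prefers simple language, on a 6-inch phone.
items = [
    {"id": 1, "lang": "de", "simple_language": True, "min_screen": 5},
    {"id": 2, "lang": "en", "simple_language": True, "min_screen": 5},
    {"id": 3, "lang": "de", "simple_language": False, "min_screen": 5},
    {"id": 4, "lang": "de", "simple_language": True, "min_screen": 10},
]
filtered = select_output(items, {"lang": "de", "simple_language": True},
                         {"screen_inches": 6})
```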
All ALS applications, as well as uploaded media, are centrally managed by their users through the ALS Portal. After logging in, individual views with different rights are shown to the different user groups. In addition to the learning applications, there are further applications, for example, to control the InteractiveWall (see below), to edit videos or 3D objects, to create narrative paths (for example, for the AR app InfoGrid), or to add specific annotations to the media.
InteractiveWall (IW) is a hypermedia platform for playful, incidental, and discovery learning, but also for displaying internal school information and the social life of the school. Among other things, students can create multimedia content for the ALS learning applications that are integrated into the IW. The IW runs on at least one, usually several, multi-touch screens placed in an easily accessible location, for example, the school's foyer. There, the IW is accessible to all and can be used in the context of all subjects. By logging into the IW, students can have specific content displayed individually.
The applications most used on the IW are Announcements, MediaGallery, HyperVid, TimeLine, and SemCor, as well as various EduGames developed at IMIS that can also be edited by students.
Announcements is an area of the InteractiveWall. Here educators or the student council have the possibility to share general information in the form of announcements with everyone in the school via the ALS Portal. An announcement is visible to everyone for a set period of time and is displayed on the main view of the InteractiveWall.
MediaGallery provides a place where photos, graphics, and videos reflect school life. The media with their descriptions show successful student work, school sports events, visits to the partner school, and much more. Educators can fill MediaGallery with new content, mostly generated by students, at any time through the ALS Portal. Videos for MediaGallery can be edited by students using ALS’ browser-based VideoEdit software (see below).
TimeLine is software developed for historical contexts, in which media created by students, but also media from DBpedia or other semantically annotated databases, are visualized in temporal correlation with each other or with various events. For this purpose, several timelines, for example, a geographical and a political-social timeline, can be displayed one below the other, and various filter functions can be used. The software is primarily part of the InteractiveWall, but can also be used without it, for example on the screens in the classrooms.
SemCor is used for the visualization and playful exploration of information in the Semantic Web. Different topics can be explored in a self-expanding graph. After a term has been entered in the start mask and various filters have been set, further "nodes" are formed around a central "start node" in colored topic fields (person, place, etc.), each containing a small image. These nodes are connected by lines (so-called edges), which are labeled so as to explain the relationships between the nodes (e.g., "influenced by" or "born in"). If one of the nodes is touched and held, an information overlay with (also zoomable) detailed information slides into view from the right edge of the screen.
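Conceptually, SemCor's self-expanding graph can be sketched as a lookup over semantic triples. The triples and function below are invented examples for illustration, not actual DBpedia data or the real SemCor implementation:

```python
# Semantic triples: (subject, relation, object), as a semantically
# annotated database might return them. Example data is invented.
TRIPLES = [
    ("Goethe", "born in", "Frankfurt"),
    ("Goethe", "influenced by", "Spinoza"),
    ("Frankfurt", "located in", "Germany"),
]

def expand(start, triples, topic_filter=None):
    """Return the labeled edges around a start node.

    topic_filter, if given, restricts expansion to certain relation
    labels, mirroring the filters set in the start mask.
    """
    edges = []
    for subj, relation, obj in triples:
        if start in (subj, obj):
            if topic_filter and relation not in topic_filter:
                continue
            edges.append((subj, relation, obj))
    return edges

# Expanding the start node "Goethe" yields two labeled edges;
# touching "Frankfurt" next would expand the graph further.
goethe_edges = expand("Goethe", TRIPLES)
```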
With the help of HyperVid, self-created video fragments can easily be linked into a hypervideo. The foundation for this is a net-like structure of individual video fragments, which HyperVid visualizes vividly and makes intuitively designable. In this way, multi-perspective, non-linear thinking is promoted in collaborative learning with time-based multimedia. The video fragments can be edited by the students using the browser-based application VideoEdit in the ALS Portal.
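The net-like structure of video fragments can be sketched as a small directed graph; the class and method names below are illustrative assumptions, not HyperVid's actual data model:

```python
# Hypothetical sketch of a hypervideo as a directed graph of fragments.

class HyperVideo:
    def __init__(self):
        self.fragments = {}   # fragment id -> media reference (e.g. file name)
        self.links = {}       # fragment id -> list of follow-up fragment ids

    def add_fragment(self, frag_id, media_ref):
        self.fragments[frag_id] = media_ref
        self.links.setdefault(frag_id, [])

    def link(self, src, dst):
        """Connect two fragments; a fragment may have several follow-ups,
        which is what makes playback non-linear."""
        self.links[src].append(dst)

    def next_options(self, frag_id):
        """Choices the viewer can take after the current fragment."""
        return self.links.get(frag_id, [])

hv = HyperVideo()
hv.add_fragment("intro", "intro.mp4")
hv.add_fragment("history", "history.mp4")
hv.add_fragment("today", "today.mp4")
hv.link("intro", "history")
hv.link("intro", "today")
```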
VideoEdit is a very easy-to-use web-based tool that can be used to create videos from one or more videos, photos, and sound files. Since VideoEdit is part of NEMO, the resulting videos can be used directly in all ALS applications, which are also accessible via the ALS Portal.
With InfoGrid, texts, photos, videos, or digital 3D objects can be selected web-based in the ALS Portal on a PC, laptop, or tablet and then played back as augmented reality (AR) on mobile devices such as smartphones or tablets, as well as on VR glasses and even in 360° 3D dome projections. For AR experience paths, photographic images of real physical locations are created as "targets." The easiest way to create media for an AR experience is via MoLES (see below), but media can also be entered via the ALS Portal. Media for InfoGrid can also be arranged as an indoor or outdoor Narrative Tour using the Narrator in the ALS Portal. Anyone (including the public) who has the InfoGrid app installed on a mobile device can interactively view what students have created, also using VR glasses or in a dome projection. The InfoGrid app, which takes up little memory, can be downloaded free of charge from the Google Play Store or Apple's App Store and installed on smartphones or tablets to receive the media created by students interactively and user-specifically.
MoLES (Mobile Learning Exploration System) was developed to support learning at out-of-school locations. In the ALS Portal, educators or even students create tasks that are solved on subsequent field trips by creating media. Field trips are carried out using the mobile, web-based MoLES app on students' smartphones. The media created by the students can consist of texts, images, sound recordings, or videos. Digital 3D objects can be generated automatically from the images and videos created with MoLES.
Using 3DEdit in the ALS Portal, the automatically generated 3D objects can be cleaned of remaining artifacts, and their alignment can be adjusted. All media generated with MoLES can be reused in all ALS learning applications.
Educational games developed at IMIS as part of KiMM are referred to as EduGames. For the primary level, three co-learning games (AlgoFrogs, SpelLit, and CollMath) are available for children to learn together in front of a larger screen (for example, the IW) using mobile devices.
AlgoFrogs is a mobile, body- and space-based, tangible co-learning game that promotes cognitive processes coupled with social and gross-motor activities in elementary-school-aged students. Using any mobile device with a browser and another device, possibly with a somewhat larger screen, elementary school children can learn early forms of programming and algorithmic thinking in an interactive learning experience. While children up to second grade only play ready-made game rounds together, children from third grade on also learn by "programming" new game rounds with the GameCreator in the ALS Portal.
SpelLit is a mobile, body- and space-based learning game for three to five children, with which they learn to write together. At the same time, it promotes gross-motor and social activities. Even though SpelLit is based on Reichen's (1982) "reading by writing" method, learning to write with SpelLit does not allow errors to become entrenched. First, the children are shown a picture on a mobile device, and the term is read aloud. This is particularly helpful if, for example in families with a migration background, German is not spoken at home. Then they each select an initial or final sound, which is also displayed as a picture, and the corresponding letter(s) are read aloud. Each child takes over one sound. Together, standing in front of a larger screen, they then consider the order in which the letters must be put together to form the word. To write a new word, the children run to a mobile device placed slightly away to unlock another round of the game. To prevent the children from joining letters by trial and error, they must also run to the remote mobile device after an incorrect entry. With the GameCreator, educators can enter new words into the system and determine the difficulty level of the game rounds. For example, they can enter the names of the children with their pictures, which the children often do not spell correctly.
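The core check behind assembling a word from the children's sounds can be sketched minimally as follows, under the assumed rule that the joined sequence of contributed sounds must spell the target word; the function name is an illustration, not SpelLit's real code:

```python
# Minimal sketch of a SpelLit-style word-assembly check.
# Assumed rule: each child contributes one sound, and the sounds,
# joined in the chosen order, must spell the target word exactly.

def check_word(target, contributed_sounds):
    """Return True only if the sounds, in order, form the target word."""
    return "".join(contributed_sounds) == target

# Three children assemble the word "Hund" ("dog") from its sounds.
correct = check_word("Hund", ["H", "u", "nd"])
# A wrong order fails, which in the game sends the children running
# back to the remote device before they may try again.
wrong_order = check_word("Hund", ["u", "H", "nd"])
```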
CollMath is a mobile, interactive, body- and space-based collaborative learning game that makes it easier for students to learn mathematical basics and imparts advanced knowledge of mathematics. Using GameCreator, teachers can set new mathematical tasks, which can then be solved independently and playfully by small groups of students. For example, they move around the room with their mobile devices in hand and gather together for a short time in front of a screen (e.g., the InteractiveWall) to solve a mathematical problem.
For older students, up to the high school level, there are further learning games, for example EcoFootprint or MysteryGame, which are likewise filled with new content via the GameCreator in the ALS Portal.
EcoFootprint is an educational game for children in grades four and up. It was developed for the InteractiveWall. Again, new content is created by students under teacher supervision in the GameCreator of the ALS Portal.
MysteryGame is designed for use on a multi-touch table. At least four learners stand at the four sides of the table to jointly develop problem-solving strategies for complex problems and tasks, for example in the context of ecological issues. However, it can also be played on the IW. If it is played on a smaller, normal screen, the players of the learning group additionally each use their cell phone.
ActeMotion enables the creation of interactive stage performances. Via a web-based input mask (in the ALS Portal), the recognition of specific gestures can be set (e.g., right hand on the head, right hand forms a fist) to play specific media through a video projector and/or loudspeakers. If one of the body gestures is recognized, for example by a Kinect V2, images, sounds, videos, or special visual effects are played. Additionally or alternatively, the sensors built into smartphones can be used to detect jumps, rotational movements of the body, and the like, when a performing person carries a smartphone, for example in a trouser pocket.
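The gesture-to-media mapping configured via the input mask can be sketched as a simple dispatch table; the gesture names and function below are assumptions for illustration, not ActeMotion's real interface:

```python
# Hypothetical gesture -> media mapping, as it might be configured
# via the web-based input mask. Names are invented for illustration.
GESTURE_MAP = {
    "right_hand_on_head": {"play": "thunder.wav"},
    "right_hand_fist": {"play": "strobe_effect.mp4"},
}

def on_gesture_recognized(gesture, gesture_map):
    """Called when the sensor (e.g. a Kinect V2) reports a recognized
    gesture. Returns the media to play, or None if the gesture is unmapped."""
    action = gesture_map.get(gesture)
    return action["play"] if action else None

# A recognized fist gesture triggers the configured visual effect.
triggered = on_gesture_recognized("right_hand_fist", GESTURE_MAP)
```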
CTsound allows the positions of several people to be captured by Microsoft's Kinect hardware, which is mounted above a "stage." A video projector mounted next to it projects a point of light about 10 cm in size onto the stage. This point appears to detect people one after the other from above and to follow them. When the white dot hits a person, it "explodes" into many small white dots that light up briefly before disappearing. The white dot then reappears on the floor and searches for another person. For the "search process" as well as for the "explosion," electronically generated sounds are played over a loudspeaker system. The software can easily be calibrated on-site on a laptop. The tracking, as well as the playback of the sound and image media, happens in real time. The sounds can also be modified using CTsound.
SmartFashion stands for self-designed smart clothing and smart jewelry. Like smartwatches or AR glasses, SmartFashion belongs to the family of wearables. Possible scenarios are interactive reactions by means of actuators integrated into the wearables (for example, small colored LED lights). Sensors in the wearables can thus respond interactively to environmental influences and, via Bluetooth, establish communication between self-designed physical objects or with other end devices. A very popular method is to sew microcontrollers, sensors, and actuators, along with mini-batteries, into everyday clothing or jewelry using conductive yarn. In this way, pants, jackets, hats, and bracelets are enhanced with a variety of interactive functions. For stage performances, a connection to ActeMotion allows the technology embedded in the wearables to be used for interactive sound, image, or video playback.
Written publication permissions have been obtained for all persons depicted on this website in the context of providing information about the KiMM initiative.
Detailed information about my research in the context of KiMM can be found on the Publications page of this website.