
Archive for the ‘4-skills’ Category

Pedagogical rationale of timestretching audio for differentiating instruction

  1. Context: Higher education in the UK has made considerable investments in digital lab infrastructure to improve second language instruction at a time of deteriorating language take-up in the secondary sector, and to widen participation. Digital language labs, however, suffer from a lack of custom-made teaching materials (beyond generic digital media) that take advantage of the lab's distinctive pedagogic feature: grouping for personalization of teaching and learning. Pedagogical integration and development are needed to achieve the original intentions. A project to timestretch audio language learning materials for the digital audio lab promises integration software, pedagogical materials and, above all, a model of effective digital language lab use in teaching.
  2. Problem: At a time of uneven language provision at the secondary school level and of shrinking language programme sizes in HE, language teachers increasingly find themselves confronted with uneven language proficiency in their courses. Digital lab technology can help them overcome the "one size fits all" approach and personalize the students' learning experience, for greater inclusiveness in language programmes and an increased proficiency boost for both the below- and above-average proficiency student groups.
  3. During my work with the language programmes at an English university, I witnessed – and had to record – how the least proficient students, confronted with what was nowhere near "comprehensible input" (Krashen) for them, not only let the communication break down but became so distressed that they started to curse and swear in their native tongue, despite being fully aware that their language output was being recorded as an assessment for the teacher to evaluate – while at the same time the upper portion of the class breezed through the exercise without any apparent difficulty.
  4. Proposed Solution:
    1. Technology to the rescue: Slowing down digital audio – without pitch alteration – has been a popular benefit of digital technology in the language learning field for several years now (cf. e.g. CALICO 2004), even if it is not a perfectly accurate representation of natural slow speech output; I have experimented with it myself in the digital audio lab (model imitation and question–response exercises) and in publications (cf. Plagwitz, Karaoke in the Digital Audio Lab (2006)).
    2. What seems lacking are
      1. both an application that automates the slowing down (and speeding up) of audio (e.g. in 5% increments, from 70% to 120% of the original input) for instructors who are too time-pressed to produce materials, or even to seek out recordable on-air sources – by monitoring one of the network share directories that are part of the digital lab system – and
      2. a model implementation in the digital audio lab (using dynamic grouping of students through the digital lab software) that creates exercises that can benefit from this approach (and can be shared), applies them in a number of suitable modules (interpreting, ab initio language learning), and assesses the proficiency improvement with this approach (using the outcome exam and a control group).
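The automated slowing down/speeding up described above could be sketched as follows – a minimal Python sketch, assuming ffmpeg (whose atempo filter changes tempo without altering pitch) is available on the lab machines; the factor ladder and the file-naming scheme are illustrative, not the project's actual code:

```python
import subprocess
from pathlib import Path

def stretch_factors(start=0.70, stop=1.20, step=0.05):
    """Tempo factors from 70% to 120% of the original, in 5% increments."""
    n = round((stop - start) / step) + 1
    return [round(start + i * step, 2) for i in range(n)]

def output_name(src, factor):
    """e.g. dialog.wav -> dialog_085pct.wav for an 85% tempo factor."""
    return src.with_name(f"{src.stem}_{int(round(factor * 100)):03d}pct{src.suffix}")

def timestretch_all(src):
    """Render one timestretched copy per factor; tempo changes, pitch stays,
    via ffmpeg's atempo filter (valid for factors between 0.5 and 2.0)."""
    for factor in stretch_factors():
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(src),
             "-filter:a", f"atempo={factor}", str(output_name(src, factor))],
            check=True,
        )
```

A watcher process would then call `timestretch_all` on every new file appearing in the monitored network share, e.g. by polling the directory listing every few seconds.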
  5. Benefits: Greater fluency of both the least and the most proficient students is to be expected after exposure – as deemed fit by their instructors – to slowed-down/sped-up exercises: ca. 20 exercises in the ab initio language learning module (practicing a small set of suitable new structures and vocabulary, compared against 2 control groups) and five interpreting rounds of 20–30 minutes. We will operationalize the assessment by reusing regular assignment grading and using a control group, also of module size, which must also use the digital audio lab, but with "one size fits all" audio.

Example 8: Auralog Tell-Me-More Speech Recognition Test

2008/08/29

How usable is the Auralog Speech Recognition for language learning? This test, by a non-native speaker of English, gives some authentic data points.

[Image: screenshot of the Auralog Tell-Me-More speech recognition test]

The test shows: Auralog Speech Recognition

  1. can easily be tripped up – however, by errors that a non-native language learner would not normally make;
  2. more concerning is that the built-in AI, instead of escalating to additional feedback or help – such as the pronunciation waveforms (which themselves seem to encourage only repeated attempts to mimic a given intonation, while not being fine-grained enough to spot mispronunciations at the word, let alone letter, level) – lowers the requirements when a speaker repeatedly fails, which in the extreme seems to amount to "waving through" any utterance;
  3. the preset dialogue contains only a few exercises with wrong-answer options; most exercises test only a comprehensible pronunciation of a given reading text, which makes the exercise much easier for the built-in speech recognition, but also much less realistic and useful for a language learner (more of a reading exercise).
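The observed lowering of requirements is consistent with a simple adaptive acceptance threshold. The following Python sketch is purely illustrative of that behavior – it is in no way Auralog's actual implementation, and all names and numbers are invented:

```python
class AdaptiveRecognizer:
    """Illustration only: relax the acceptance threshold after each failed
    attempt, which in the extreme 'waves through' any utterance."""

    def __init__(self, threshold=0.8, decay=0.2, floor=0.0):
        self.initial = threshold   # requirement restored after a success
        self.threshold = threshold
        self.decay = decay         # how much each failure relaxes the bar
        self.floor = floor         # the bar never drops below this

    def judge(self, score):
        if score >= self.threshold:
            self.threshold = self.initial   # reset on success
            return True
        # on failure, lower the requirement for the next attempt
        self.threshold = max(self.floor, self.threshold - self.decay)
        return False
```

A mediocre utterance scoring 0.5 would be rejected twice, then accepted on the third attempt once the bar has dropped below it – exactly the "waving through" pattern observed in the test.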

Automating Auralog Tell-Me-More with AutoIt. Presentation at EUROCALL 2008

Auralog Tell-Me-More is a leading language learning software system which provides a vast amount of content in an advanced technical infrastructure, but one that we found lacking in usability within a higher education language learning environment.

AutoIt is a programming language for GUI automation which I used to better integrate the Auralog software into the higher education language learning process, including

  1. programmatic creation of courses and accounts
  2. programmatic extraction and digital repository management for over 30,000 learning units. Click to view a work sample from my portfolio
  3. programmatic creation of 10,000s of learning paths.

Results were presented (screencast) at EUROCALL 2008: “Automating Auralog (pdf)”:

  1. course and account creation: creates 100s of courses, creates and enrols up to 2,000 student accounts every term;
  2. content extraction: produces files for adding search, and a spreadsheet for sort/filter functionality;
  3. learning path creation.

More detailed background information here: plagwitz_auralog_accounts_project_pub.pdf, plagwitz_auralog_project_pub.pdf
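In outline, the account-creation automation consumes a registrar roster and replays each record against the Tell-Me-More GUI. The data-preparation half can be sketched in Python – the `last,first,course` roster format and the username scheme are invented for illustration; the actual driving of the GUI was done with AutoIt:

```python
import csv
import io

def accounts_from_roster(roster_csv, term):
    """Turn a registrar roster (CSV with last,first,course columns) into
    the per-student records a GUI-automation script can replay."""
    records = []
    for row in csv.DictReader(io.StringIO(roster_csv)):
        # hypothetical username scheme: first initial + last name
        username = (row["first"][0] + row["last"]).lower()
        records.append({"username": username,
                        "course": f"{row['course']}_{term}"})
    return records
```

The AutoIt layer would then iterate over these records, typing each into the admin dialog fields and confirming – the part that depends on the Tell-Me-More window layout and is not reproduced here.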

Learning material production with NLP for reading comprehension and vocabulary acquisition: gloss and track

Auto-Glossing and Lookup-Tracking in a Personal Corpus for Vocabulary Acquisition
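In outline, gloss-and-track pairs a dictionary lookup with a per-learner lookup log. A minimal Python sketch (the toy glossary and class name are invented for illustration; this is not the actual NLP pipeline):

```python
from collections import Counter

class Glosser:
    """Gloss target-language words from a dictionary and track every lookup,
    so that frequently looked-up words can seed vocabulary review."""

    def __init__(self, glossary):
        self.glossary = glossary
        self.lookups = Counter()   # the learner's personal lookup corpus

    def gloss(self, word):
        self.lookups[word.lower()] += 1
        return self.glossary.get(word.lower())

    def review_candidates(self, min_lookups=2):
        # words the learner had to look up repeatedly
        return [w for w, n in self.lookups.most_common() if n >= min_lookups]
```

Words the learner looks up more than once surface as review candidates, turning reading behavior into personalized vocabulary acquisition material.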

Sanako Lab300 Final exam: Movie listening comprehension with grammar, vocabulary cloze

2006/04/04

Here is a raw (unedited) video of a final exam in a German 202 class.

It was delivered with Sanako Lab 300 in a synchronous face-to-face teaching environment.

Students (re)viewed a movie (Lola rennt) while completing self-developed, target-language subtitle-based fill-in-the-gap exercises (MS Word templates using VBA) on grammar and vocabulary – i.e. listening comprehension.
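The original exercises were MS Word templates with VBA; the gap-generation step they rely on can be sketched in Python (the `make_cloze` helper and the sample subtitle line are invented for illustration):

```python
import re

def make_cloze(subtitle, targets):
    """Blank out target vocabulary in a subtitle line; return the gapped
    text plus the answer key in order of appearance."""
    answers = []

    def blank(match):
        word = match.group(0)
        if word.lower() in targets:
            answers.append(word)
            return "_" * len(word)   # gap as wide as the hidden word
        return word

    return re.sub(r"\w+", blank, subtitle), answers
```

The answer key makes immediate, local checking of student input possible – the "AI feedback" described below.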

Apart from the teacher managing the exam distribution on the Sanako Lab 300 Teacher computer, you can see the teacher watching the students taking the exams – each thumbnail with subtitle text in the Sanako Mosaic window represents one student computer.

The students get the benefit of AI: lookup of internet resources (enabled through VBA – double-clicking a word in a subtitle opens the default dictionary, in this case set to http://dict.leo.org), as well as a dropdown menu with more advanced dictionaries and encyclopedias.

The students also get the benefit of immediate AI feedback on their input – a better basis for learning than receiving a corrected homework or exam completely disconnected, time-wise, from the learning activity (and the feedback is faster than if it were web-based, since it is local to the client computer).

The teacher gets the benefit of an easy overview of student learning, of routine corrections being performed by AI in the exercise template, and – where s/he finds additional guidance is needed, if not in this outcome exam situation then during similar preparatory face-to-face activities – can, with the help of the Sanako audio and student computer remote control system, immediately connect to a student for additional instruction at "teachable moments" (example here).

Collaborative timeline activity for face-to-face classes on history

  1. An easily produced and repeatable classroom activity, originally developed for listening comprehension and speaking practice in language classes, based on collaboratively filling out a timeline spreadsheet in the digital audio lab:
    1. Listen and process/write:
      1. An advanced German class listens to segments of an authentic cultural history documentary from the German TV series "100 deutsche Jahre" (which follows a single topic throughout 20th-century German history).
      2. Each student enters notable summaries of events, with their time of occurrence, into a spreadsheet that the teacher
      3. has distributed to each individual student at the beginning of the activity, using the digital audio lab's file management features,
      4. collects from the students after listening and merges – with student author data or an anonymous student identifier (for corrections) – into an Excel timeline spreadsheet,
      5. and visualizes as an easily collated timeline on the projector for the entire class.
    2. Speaking: Discuss!
      1. Identify the gravity points for the class's comprehension of the video: why are these events deemed important?
      2. What are the outliers? Criticism? Justification?
      3. Also correct language errors in the student output.
    3. nexus_timeline_excel_100_deutsche_jahre
      1. In early 2006, there was no Excel web app – collaboration has likely become simpler now:
        1. launch a link to a publicly editable spreadsheet for the class
        2. visualize using Excel web app charts
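The teacher's collect-and-merge step could be sketched as follows – a minimal Python sketch with invented student identifiers, standing in for the Excel merging described above:

```python
def merge_timelines(submissions):
    """Merge per-student (year, event) entries into one chronologically
    sorted class timeline, keeping an identifier per row so corrections
    can be traced back to their author."""
    merged = [
        (year, event, student)
        for student, rows in submissions.items()
        for year, event in rows
    ]
    return sorted(merged)   # tuples sort by year first
```

The sorted result is exactly what gets projected to the class: one collated timeline, with each row attributable to its author (or an anonymous identifier) for correction.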

How AI and human intelligence can blend in the language lab to form personalized instruction

[Image: video still of the LAB300 listening exercise, linked to the WMV recording]

  1. An example from long before mobile computing, but still: while I personally like communicative uses of the language lab infrastructure best (pairing, group conferences with recording, screen sharing, collaborative writing),
  2. the above (click the image to download and play the WMV video, also on Mac – sorry, the file won't transcode) may be the 2nd best:
    1. The student is engaged
      1. primarily with a listening (comprehension) exercise using authentic target language media (German chanson),
      2. also with some light writing (recognition of vocabulary words)
      3. and receives automated feedback from the quiz template in response to her input.
    2. The communicative aspect is added
      1. through seamless, effortless, surgical and, last but not least, private teacher intervention or "remote assistance"
      2. when the teacher ("automonitoring" all LAB300 students one after the other) notices from afar (even though the view is only thumbnail-sized – hence the large fonts of the quiz template)
      3. how the current automated error feedback may not be enough of an explanation, but may have created “a teachable moment”:
      4. The student heard phonetically correctly, but not etymologically: German "Fahrstuhl", not "Varstuhl" – literally a "driving chair"; after this little intervention, likely a quite memorable compound.
    3. A good example of how language lab computers need not get "in between you and your student", but can connect you – just as has become an everyday reality, in the meantime, in the social web world.