Archive
Example 8: Auralog Tell-Me-More Speech Recognition Test
How usable is the Auralog Speech Recognition for language learning? This test, by a non-native speaker of English, gives some authentic data points.
The test shows that Auralog Speech Recognition:
- can be easily tripped up, but only by errors that a non-native language learner would not normally make;
- more concerning: the built-in AI, instead of escalating to additional feedback or help, such as the pronunciation waveforms (which themselves seem to encourage only repeated attempts to mimic a given intonation, while not being fine-grained enough to spot mispronunciations at the word, let alone letter, level), lowers the requirements when a speaker repeatedly fails, which in the extreme amounts to "waving through" any utterance;
- the preset dialogue includes only a few exercises with wrong-answer options; most exercises test only comprehensible pronunciation of a given reading text, which makes the exercise much easier for the built-in speech recognition, but also much less realistic and useful for a language learner (more of a reading exercise).
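The requirement-lowering behavior described above can be modeled as a simple relaxation loop. The sketch below is a hypothetical Python model, not Auralog's actual algorithm; scores, thresholds, and the relaxation step are invented for illustration:

```python
# Hypothetical model of a recognizer that relaxes its acceptance
# threshold after repeated failures -- NOT Auralog's actual algorithm.

def accept_utterance(scores, threshold=0.8, relax_step=0.2, floor=0.0):
    """Walk through a sequence of recognition scores; after each failed
    attempt, lower the acceptance threshold. Returns the accepted score
    and the threshold in force at that point (or None if never accepted)."""
    for score in scores:
        if score >= threshold:
            return score, threshold
        # Failure: lower the bar for the next attempt.
        threshold = round(max(floor, threshold - relax_step), 2)
    return None, threshold

# Four weak attempts: by the last one the threshold has dropped so far
# that the utterance is "waved through".
print(accept_utterance([0.3, 0.25, 0.3, 0.2]))  # (0.2, 0.2)
```

A first attempt scoring 0.9 is accepted at the full 0.8 threshold, while the string of 0.2–0.3 attempts is eventually accepted only because the bar sank to meet them, which is the pattern the test observed.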
Automating Auralog Tell-Me-More with AutoIt. Presentation at EUROCALL 2008
Auralog Tell-Me-More is a leading language learning software system that provides a vast amount of content in an advanced technical infrastructure, which we nevertheless found lacking in usability within a higher education language learning environment.
AutoIt is a programming language for GUI automation which I used to better integrate the Auralog software into the higher education language learning process, including
- programmatic creation of courses and accounts,
- programmatic creation of 10,000s of learning paths,
- programmatic extraction and digital repository management for over 30,000 learning units.
Results were presented (screencast) at EUROCALL 2008: "Automating Auralog (pdf)":
- account and course creation: creates 100s of courses, creates and enrols up to 2,000 student accounts every term,
- content extraction: produces files for adding search, and a spreadsheet for sort/filter functionality,
- learning path creation.
More detailed background information here: plagwitz_auralog_accounts_project_pub.pdf, plagwitz_auralog_project_pub.pdf
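The actual automation drove the Auralog GUI with AutoIt; purely as an illustration of the data-preparation side of such a pipeline, here is a Python sketch that turns a term roster into per-student account records. The CSV roster format and username scheme are invented, not Auralog's:

```python
import csv
import io

# Illustrative only: the real project used AutoIt against the Auralog
# GUI. This sketch shows the upstream step of turning a term roster
# (hypothetical CSV format) into account records for bulk creation.

def roster_to_accounts(roster_csv, course):
    accounts = []
    for row in csv.DictReader(io.StringIO(roster_csv)):
        # Hypothetical username scheme: first initial + last name.
        username = (row["first"][0] + row["last"]).lower()
        accounts.append({
            "username": username,
            "fullname": f'{row["first"]} {row["last"]}',
            "course": course,
        })
    return accounts

roster = "first,last\nAnna,Schmidt\nBen,Mueller\n"
for account in roster_to_accounts(roster, "German 202"):
    print(account["username"], account["course"])
```

In the real workflow, records like these would then be fed, one by one, into the GUI automation that fills in the Auralog account-creation dialogs.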
Auto-Glossing and Lookup-Tracking in a Personal Corpus for Vocabulary Acquisition
Sanako Lab300 Final exam: Movie listening comprehension with grammar, vocabulary cloze
Here is a raw (unedited) video of a final exam in a German 202 class.
It was delivered with Sanako Lab 300 in a synchronous face-to-face teaching environment.
Students (re)viewed a movie (Lola rennt) while doing listening comprehension: fill-in-the-gap exercises on grammar and vocabulary, based on target language subtitles in self-developed MS-Word templates using VBA.
Apart from the teacher managing the exam distribution on the Sanako Lab 300 Teacher computer, you can see the teacher watching the students taking the exams – each thumbnail with subtitle text in the Sanako Mosaic window represents one student computer.
The students get the benefit of AI: lookup of internet resources (enabled through VBA; a double-click on a word in a subtitle leads to the default dictionary, in this case set to http://dict.leo.org), as well as a dropdown menu with more advanced dictionaries and encyclopedias.
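The double-click lookup amounts to building a query URL from the clicked word and handing it to the browser. A minimal Python sketch of that step follows; the original was VBA in Word, and the dict.leo.org URL pattern shown is an assumption, not taken from the original template:

```python
from urllib.parse import quote

# Sketch of the double-click lookup: turn a clicked subtitle word into a
# dictionary query URL. The URL pattern below is an assumed example, not
# the one used in the original Word/VBA template.

def lookup_url(word, base="https://dict.leo.org/german-english/"):
    # Strip punctuation that may cling to a double-clicked word,
    # then URL-encode it (handles umlauts etc.).
    cleaned = word.strip('.,;:!?"\u201e\u201c')
    return base + quote(cleaned)

print(lookup_url("Fahrstuhl,"))
```

In the template, the resulting URL would simply be opened in the default browser, so the lookup stays one double-click away from the exercise.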
The students also get the benefit of immediate AI feedback on their input, a better basis for learning than receiving a corrected homework or exam in complete temporal disconnect from the learning activity (and the feedback is faster than if it were web-based, since it is local to the client computer).
The teacher gets the benefit of an easy overview of students' learning, and of routine corrections being performed by AI in the exercise template. Where s/he finds additional guidance is needed – if not in this outcome exam situation, then during similar preparatory face-to-face activities – the teacher can, with the help of the Sanako audio and student computer remote control system, immediately connect to a student for additional instruction at "teachable moments" (example here).
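The immediate gap-check feedback in the exercise template was implemented in VBA; as a language-neutral illustration, here is a Python sketch of such a check. The normalization rules (trim whitespace, treat a case-only mismatch as its own feedback category) are assumptions for illustration, not the original template's logic:

```python
# Sketch of a fill-in-the-gap check with immediate feedback. The
# original was VBA in an MS-Word template; the normalization rules here
# are assumed for illustration.

def check_gap(answer, expected):
    a, e = answer.strip(), expected.strip()
    if a == e:
        return "correct"
    if a.lower() == e.lower():
        # German nouns are capitalized; flag capitalization separately.
        return "check capitalization"
    return "try again"

print(check_gap("fahrstuhl", "Fahrstuhl"))  # check capitalization
```

Because the check runs locally in the template, the student sees the verdict the instant the gap is filled, which is the "immediate feedback" advantage described above.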
Collaborative timeline activity for face-to-face classes on history
- An easily produced and repeatable classroom activity, originally developed for listening comprehension and speaking practice in language classes, based on collaboratively filling out a timeline spreadsheet in the digital audio lab:
- Listen and process/write:
- An advanced German class listens to segments of an authentic German cultural history documentary, the TV series "100 deutsche Jahre" (which follows a single topic throughout 20th-century German history).
- Each student enters notable summaries of events, with their time of occurrence, into a spreadsheet
- that the teacher
- has distributed to each individual student at the beginning of the activity, using the digital audio lab's file management features,
- collects from the students after listening and merges, either with student author data or an anonymous student identifier (for corrections), into an Excel timeline spreadsheet,
- and visualizes as an easily collated collaborative timeline on the projector for the entire class.
- Speaking: Discuss!
- Identify the gravity points for the class's comprehension of the video: Why are these events deemed important?
- What are the outliers? Criticism? Justification?
- Also correct language errors in the student output.
- In early 2006, there was no Excel web app; collaboration has likely become simpler now:
- launch a link to a publicly editable spreadsheet to the class,
- visualize using Excel web app charts.
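The teacher's merge step above – collecting per-student entries and collating them into one chronological timeline – can be sketched in Python. The original workflow used Excel and the lab's file management; the (year, event) row format and the anonymization scheme here are assumptions:

```python
# Sketch of merging per-student timeline entries into one sorted
# timeline. The original workflow used Excel files collected via the
# audio lab; the (year, event) row format here is an assumption.

def merge_timelines(per_student, anonymize=False):
    """per_student maps a student name to a list of (year, event) rows.
    Returns one chronologically sorted list of (year, event, label)."""
    merged = []
    for i, (student, rows) in enumerate(sorted(per_student.items()), 1):
        # Replace author names with anonymous identifiers if requested.
        label = f"Student {i}" if anonymize else student
        merged.extend((year, event, label) for year, event in rows)
    return sorted(merged)  # chronological order by year

entries = {
    "Anna": [(1961, "Berlin Wall built"), (1989, "Berlin Wall falls")],
    "Ben": [(1949, "Two German states founded")],
}
for year, event, who in merge_timelines(entries, anonymize=True):
    print(year, event, who)
```

Projected for the class, the merged list makes the discussion step concrete: clusters of entries around the same year are the "gravity points", isolated rows are the outliers.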
How AI and human intelligence can blend in the language lab to form personalized instruction
- An example from long before mobile computing, but still: while I personally like communicative uses of the language lab infrastructure best (pairing, group conferences with recording, screen sharing, collaborative writing),
- the above (click image to download and play WMV video, also on Mac – sorry, the file won't transcode) may be the 2nd best:
- The student is engaged
- primarily with a listening (comprehension) exercise using authentic target language media (German chanson),
- also with some light writing (recognition of vocabulary words)
- and receives automated feedback from the quiz template in response to their input.
- The communicative aspect is added
- through seamless, effortless, surgical and, last but not least, private teacher intervention or "remote assistance"
- when the teacher ("automonitoring" all LAB300 students one after the other) notices from afar (even though the view is only thumbnail-sized – hence the large fonts of the quiz template)
- how the current automated error feedback may not be enough of an explanation, but may have created “a teachable moment”:
- The student heard phonetically correctly, but not etymologically: German "Fahrstuhl", not "Varstuhl" – literally a "driving chair". After this little intervention, likely a quite memorable compound.
- A good example of how language lab computers need not get "in between you and your student", but can connect you – just as has, in the meantime, become an everyday reality in the social web world.
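The "Varstuhl" case – a phonetically plausible but wrong spelling – is exactly the kind of near-miss a template could surface for the monitoring teacher. The following Python sketch flags such answers using edit distance; this is a hypothetical extension for illustration, not a feature of the original VBA quiz template:

```python
# Hypothetical near-miss detector: flag answers that are wrong but close
# to the expected word (e.g. phonetic spellings like "Varstuhl" for
# "Fahrstuhl") so the monitoring teacher can step in at the teachable
# moment. An illustration, not a feature of the original template.

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def classify(answer, expected, near=2):
    if answer == expected:
        return "correct"
    if edit_distance(answer.lower(), expected.lower()) <= near:
        return "near miss: teachable moment?"
    return "wrong"

print(classify("Varstuhl", "Fahrstuhl"))  # near miss: teachable moment?
```

A "near miss" verdict would be the machine's cue to the human: the automated feedback has hit its limit, and a brief remote-assistance intervention is likely worth more than another retry.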