
Archive for the ‘service-is-learning-materials-creation’ Category

Example 7: Exercise dictating in German to an LRC Windows 7 computer

How can we get language students more speaking practice with qualified, but affordable feedback? Native speaker contact remains difficult to organize even in the days of online conferencing. The LRC hosts language tutoring, but numbers are limited. Enter speech recognition, the holy grail of iCALL, much easier for learners to relate to than the voice graphs that digital audio can be broken down into, and thus for a long time a standout feature of costly second-language-acquisition packages like Auralog Tell-me-More (speech recognition in English tested here). But now the LRC has Windows 7 Enterprise (and its free add-on language packs), and another crucial prerequisite: headphones with excellent microphones.

We are setting up the new Windows 7 computers in the LRC to allow for speech recognition in Chinese, English, French, German, Japanese and Spanish. Here is an example of me using this facility to practice my German during a dictation exercise:

Granted, German is my native tongue; but the example text is from the online component for the final chapter of the “Treffpunkt Deutsch” 1st-year textbook in use here, which sends readers to the website of the Swiss(-German) employment agency.

Apart from infrequent words ("Archiven") and loanwords ("Bachelor", etc.), Windows 7 speech recognition accuracy seems quite impressive. The above example was actually my first dictation, except that immediately beforehand I invested a few minutes in the standard Windows 7 speech recognition training (aimed at training the user, although behind the scenes it may already teach the computer a few things about the speaker as well) and a few more minutes of voice training (this one is meant exclusively for the computer, but the user can also see where and why it fails). The trick to boosting speech recognition results is rather simple and certainly accessible to our students: speak not only clearly, but also slowly, with short pauses between most words.

Speech recognition in these languages is a feature of the Windows 7 (Enterprise/Ultimate) “language packs” that we installed and switched to; that is why the entire computer interface appears in German. Practicing the L2 with computer-operating “voice commands” (instead of a mouse) is also possible; it is simpler than (mostly) replacing the keyboard by voice, but not as easy to devise homework exercises for.

Tips for designing exercises using speech recognition: As the example shows ("Archiven"), doing all corrections by voice can quickly become tedious. But there is no pedagogical need to have your students bang their heads against this wall. Instead, just ask your students to correct the automatically recognized words manually at the end of their video, after their dictation. This way both you and your students get a clear summary of what they achieved; even clearer if they dictate in MS Word with the spelling and grammar check for the language (automatic with the switch to that language pack) and track changes turned on (key combination CTRL+SHIFT+E). We will show you later how we now enable students to easily record their screen (TBA) and upload their screencast into Moodle Kaltura (TBA).

Summer 2012 Learning materials creation clinic for preparing oral assessments/assignments

1. I am holding a “clinic”, open to anybody who needs help with preparing their classes using oral assessments/assignments in the LRC this fall term; RSVP if interested.

2. This clinic focuses on materials creation for delivery in specific upcoming courses. It is based on, but different from, my faculty workshops on these topics. If you have not attended, please view the links below for what was covered in the workshops:

a.     https://thomasplagwitz.com/2011/08/18/sanako-study-1200-workshop-spring-2011/

b.     https://thomasplagwitz.com/2011/12/08/screencasts-for-fall-2011-workshop-computer-classroom-management-in-the-lrc-using-sanako-study-1200/

c.     https://thomasplagwitz.com/2012/04/06/spring-2012-faculty-workshop-i-how-to-ease-your-end-of-term-oral-assessment-burden-with-the-help-of-the-lrc-moodle-kaltura-and-sanako-study-1200-oral-assessments/  

d.     https://thomasplagwitz.com/2012/04/30/spring-2012-faculty-workshop-i-oral-proficiency-testing-with-audacitysanako/

Specifically:

1. Materials creation
    1. with SANAKO
        1. Make a teacher audio recording for a model-imitation/question-response oral exam: https://thomasplagwitz.com/2012/01/25/how-a-teacher-best-adds-cues-and-pauses-to-an-mp3-recording-with-audacity-to-create-student-language-exercises/
        2. Make a teacher recording (https://thomasplagwitz.com/2012/01/11/recording-with-audacity/) for model imitation with voice insert (e.g. a reading-practice homework assignment, https://thomasplagwitz.com/2012/01/24/how-a-teacher-creates-audio-recordings-for-use-with-sanako-student-voice-insert-mode/)
    2. with Moodle
        1. Moodle Kaltura webcam recording assignment: https://thomasplagwitz.com/2011/11/02/how-to-grade-a-moodle-straming-video-assignment-and-moodle-streaming-video-recording-assignment-glitch-2/
        2. Prepare Moodle metacourse learning materials upload: https://thomasplagwitz.com/2011/06/17/moodle-metacourses-part-iv-the-support-workflow-uploading/ and https://thomasplagwitz.com/2011/01/26/moodle-batch-upload-learning-materials-give-students-access/
    3. with PowerPoint (visual speaking cues with timers): https://thomasplagwitz.com/2009/11/18/create-a-powerpoint-slide-with-a-timer-from-template-for-a-timed-audio-recording-exercise/
2. Materials delivery with SANAKO
    1. Remote-control student PCs and collaborate over headphones: https://thomasplagwitz.com/2012/05/04/how-you-can-view-the-computer-screens-of-your-class-using-sanako-study-1200/
    2. Pair students’ audio over headphones: https://thomasplagwitz.com/2011/05/11/study-1200-pairing/
3. You must bring some assessment ideas that fit into your skills course, which we will turn into audio recordings. You can also bring prerecorded audio files from textbooks as MP3s, which we can edit to turn into materials. If you would like some examples of what colleagues have done:
    1. with Moodle Kaltura: https://thomasplagwitz.com/feed/?category_name=learning-usage-samples&tag=kaltura
    2. with Sanako oral (formative) assessments/(outcome) exams: please email me; I will give you access to samples that we do not publish, to preserve exam integrity.

 

Making audio cues for model imitation/question-response oral exams with Sanako Study 1200

We can easily record and post-process audio files in the LRC for use with the Sanako Study 1200 oral exam activities.

This can work not only for outcome exams (course- or chapter-wise), but also for formative assessment:

Think of converting your textbook-based “drills” into Sanako, e.g. repetitively recapitulating a newly acquired vocabulary item like “donut” with different cues:

Example: “What can you do with [student can enter her favorite new vocabulary item for the current class] on [teacher can ask for one social web service after another that her students are likely familiar with]?” In response, the student has to practice the vocabulary item by forming sentences that fit it, as in the whiteboard example.

We can add to these recordings the features explained in the slide below.

[slide image]

I’d be happy to play you examples from this slide – and more – in the LRC (not to be published here so that the exam files can be reused).

Using NLP tools to automate production and correction of interactive learning materials for blended learning templates in the Language Resource Center. Presentation at CALICO 2012, University of Notre Dame

Useful debugging tools for setting up your first DkPro project in Eclipse

For easier debugging, you can:

  1. show the Maven console,
  2. include both info and errors in the console output,
  3. have a look at background processes,
  4. consult the “Markers” view,
  5. check the Eclipse .log in your workspace’s .metadata directory.

Part-of-speech tagging

A few notes on POS-tagsets, collected online. You can view a larger version here.
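
To check which tagset an analysis actually produced, it can help to simply dump the annotations. Below is a minimal sketch (untested here) of how that might look with the DkPro Core 1.4.x type system and the org.uimafit API assumed by the POM further down; class and method names may differ in other versions.

// Minimal sketch: print each token with the tag from the tagger's own tagset
// and DkPro Core's coarse POS category. Assumes DkPro Core 1.4.x / org.uimafit.
import org.apache.uima.jcas.JCas;
import org.uimafit.util.JCasUtil;

import de.tudarmstadt.ukp.dkpro.core.api.lexmorph.type.pos.POS;

public class TagsetInspector {
    public static void dumpTags(JCas jcas) {
        for (POS pos : JCasUtil.select(jcas, POS.class)) {
            System.out.printf("%-15s %-8s %s%n",
                    pos.getCoveredText(),           // word form
                    pos.getPosValue(),              // original tag, e.g. PTB "VBZ" or STTS "VVFIN"
                    pos.getType().getShortName());  // coarse DkPro category, e.g. "V"
        }
    }
}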

My DkPro settings.xml

<?xml version="1.0" encoding="utf-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0                        http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <profiles>
    <profile>
      <id>ukp-oss-releases</id>
      <repositories>
        <repository>
          <id>ukp-oss-releases</id>
          <url>http://zoidberg.ukp.informatik.tu-darmstadt.de/artifactory/public-releases</url>
          <releases>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
            <checksumPolicy>warn</checksumPolicy>
          </releases>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>ukp-oss-releases</id>
          <url>http://zoidberg.ukp.informatik.tu-darmstadt.de/artifactory/public-releases</url>
          <releases>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
            <checksumPolicy>warn</checksumPolicy>
          </releases>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
    <profile>
      <id>ukp-oss-snapshots</id>
      <repositories>
        <repository>
          <id>ukp-oss-snapshots</id>
          <url>http://zoidberg.ukp.informatik.tu-darmstadt.de/artifactory/public-snapshots</url>
          <releases>
            <enabled>false</enabled>
          </releases>
          <snapshots>
            <enabled>true</enabled>
          </snapshots>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>ukp-oss-releases</activeProfile>
    <!-- the preceding profile must not be commented out -->
    <!-- Uncomment the following entry if you need SNAPSHOT versions. -->
    <activeProfile>ukp-oss-snapshots</activeProfile>
  </activeProfiles>
</settings>
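
As usual with Maven, this file goes into the user settings location (typically ~/.m2/settings.xml), so that both command-line Maven and Eclipse/m2e pick up the UKP repositories.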

trp-learning-materials-starting-pom.xml

<?xml version="1.0" encoding="utf-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>de.tudarmstadt.ukp.experiments.trp</groupId>
  <artifactId>de.tudarmstadt.ukp.experiments.trp.learning-materials</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <parent>
    <artifactId>dkpro-parent-pom</artifactId>
    <groupId>de.tudarmstadt.ukp.dkpro.core</groupId>
    <version>2</version>
  </parent>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.10</version>
      <type>jar</type>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>de.tudarmstadt.ukp.dkpro.core</groupId>
      <!-- not de.tudarmstadt.ukp.dkpro.core.io -->
      <artifactId>de.tudarmstadt.ukp.dkpro.core.io.text-asl</artifactId>
      <!-- 			<version>1.3.0</version> -->
      <type>jar</type>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>de.tudarmstadt.ukp.dkpro.core</groupId>
      <artifactId>de.tudarmstadt.ukp.dkpro.core.tokit-asl</artifactId>
      <!--  <version>1.4.0-SNAPSHOT</version> -->
      <type>jar</type>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>de.tudarmstadt.ukp.dkpro.core</groupId>
      <artifactId>de.tudarmstadt.ukp.dkpro.core.opennlp-asl</artifactId>
      <type>jar</type>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <!-- 
Add de.tudarmstadt.ukp.dkpro.core.stanfordnlp-gpl to the dependencies -->
      <groupId>de.tudarmstadt.ukp.dkpro.core</groupId>
      <artifactId>de.tudarmstadt.ukp.dkpro.core.stanfordnlp-gpl</artifactId>
      <!-- tba: type pom and scope  -->
    </dependency>
    <dependency>
      <groupId>de.tudarmstadt.ukp.dkpro.core</groupId>
      <artifactId>de.tudarmstadt.ukp.dkpro.core.opennlp-model-tagger-en-maxent</artifactId>
      <version>1.5</version>
    </dependency>
  </dependencies>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>de.tudarmstadt.ukp.dkpro.core</groupId>
        <artifactId>de.tudarmstadt.ukp.dkpro.core-asl</artifactId>
        <version>1.4.0-SNAPSHOT</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
      <!-- 
Add de.tudarmstadt.ukp.dkpro.core-asl with type pom and scope import to dependency management-->
      <dependency>
        <!-- 
Add de.tudarmstadt.ukp.dkpro.core-gpl with type pom and scope import to dependency management-->
        <groupId>de.tudarmstadt.ukp.dkpro.core</groupId>
        <artifactId>de.tudarmstadt.ukp.dkpro.core-gpl</artifactId>
        <version>1.4.0-SNAPSHOT</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
</project>
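
For reference, here is a minimal sketch of the kind of pipeline this starting POM is meant to support: reading plain text (io.text-asl), segmenting it (tokit-asl), and POS-tagging it with OpenNLP (opennlp-asl plus the en-maxent model). It is written against DkPro Core 1.4.x with org.uimafit, is untested here, and the input path is a made-up example; parameter names may need adjusting for other versions.

// Minimal sketch of a read -> segment -> POS-tag pipeline using the artifacts
// declared in the POM above (DkPro Core 1.4.x / org.uimafit era; untested here).
import static org.uimafit.factory.AnalysisEngineFactory.createPrimitiveDescription;
import static org.uimafit.factory.CollectionReaderFactory.createCollectionReader;

import org.apache.uima.collection.CollectionReader;
import org.uimafit.pipeline.SimplePipeline;

import de.tudarmstadt.ukp.dkpro.core.io.text.TextReader;           // io.text-asl
import de.tudarmstadt.ukp.dkpro.core.tokit.BreakIteratorSegmenter; // tokit-asl
import de.tudarmstadt.ukp.dkpro.core.opennlp.OpenNlpPosTagger;     // opennlp-asl

public class LearningMaterialsPipeline {
    public static void main(String[] args) throws Exception {
        // Hypothetical input folder with plain-text learning materials
        CollectionReader reader = createCollectionReader(TextReader.class,
                TextReader.PARAM_PATH, "src/main/resources/texts",
                TextReader.PARAM_PATTERNS, new String[] { "[+]*.txt" },
                TextReader.PARAM_LANGUAGE, "en");

        SimplePipeline.runPipeline(reader,
                createPrimitiveDescription(BreakIteratorSegmenter.class), // sentences + tokens
                createPrimitiveDescription(OpenNlpPosTagger.class));      // POS tags (en-maxent model)
    }
}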