Archive for the ‘4-skills’ Category

Slowing source audio for interpreting classes in the digital audio lab

  1. Judging from listening to Simultaneous Lesson 1, text 2 on Acebo's Interpreter’s Edge (ISBN 1880594323), I wonder whether some of our students would need this audio simplified (= personalization) to gain the benefit of a well-adjusted i+1. I can pre-process the audio in several ways:
    1. Where the flat lines (= natural pauses) appear in the waveform graph above, insert an audio cue marking where students can press voice insert and record their interpretation (see the sketch after this list). Example: clip_image001
    2. We can also insert a pause, plus a cue at the beginning and end, to set students a limit on how long they can interpret. But if students operate the player manually, there is no teacher control and no exam conditions, and having to manage the technology tends to distract students from the language practice.
    3. Slow down the audio without changing the pitch (just make sure not to overdo it, or it will sound like drunken speech; see the time-stretching sketch after this list). My own time-stretching software would be able to avoid the “drunken speech” syndrome, but apart from a brief stint for IALLT in Summer 2011, I have not been able to work on it for three years now…
      1. clip_image002
      2. clip_image003
    4. We can use this adjusted audio with the Sanako grouping feature to personalize instruction (find the right i+1 for each of your students, useful if there are considerable variations in their proficiency): How to group students into sessions (in 3 different ways): goo.gl/JgXUP/.
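Below is a minimal Python sketch of the two pre-processing steps above: inserting a cue tone plus a recording gap at each natural pause, and slowing the audio down without changing the pitch. It assumes the Acebo audio has been exported to a file; the file names, silence threshold, cue tone and gap length are placeholders, and pydub, librosa and soundfile are third-party libraries rather than anything installed in the LRC.

```python
# Sketch of the pre-processing steps described above; all file names,
# thresholds, and durations are placeholders to adjust to the actual audio.
from pydub import AudioSegment
from pydub.silence import detect_silence
from pydub.generators import Sine
import librosa
import soundfile as sf

SOURCE = "lesson1_text2.wav"          # hypothetical export of the Acebo audio

# --- 1. Insert a cue tone and a recording gap at each natural pause ---------
audio = AudioSegment.from_file(SOURCE)
# "Flat lines" in the waveform = stretches quieter than -40 dBFS for >= 1 s
pauses = detect_silence(audio, min_silence_len=1000, silence_thresh=-40)

cue = Sine(880).to_audio_segment(duration=300).apply_gain(-6)  # short beep
gap = AudioSegment.silent(duration=8000)   # 8 s window for the student's voice insert

result = AudioSegment.empty()
previous_end = 0
for start, end in pauses:
    result += audio[previous_end:start]    # source segment up to the pause
    result += cue + gap + cue              # beep, time to interpret, beep
    previous_end = end
result += audio[previous_end:]
result.export("lesson1_text2_cued.wav", format="wav")

# --- 2. Slow the audio down without changing the pitch ----------------------
y, sr = librosa.load(SOURCE, sr=None)
y_slow = librosa.effects.time_stretch(y, rate=0.8)   # rate < 1.0 slows down
sf.write("lesson1_text2_slow.wav", y_slow, sr)
```

In practice a modest rate (around 0.8) tends to stay intelligible; more aggressive slowing is where the “drunken speech” effect creeps in.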

Protected: Elti0162 Syllabus with learning materials for listening and speaking

2014/04/07. This content is password-protected.

Watch how to start and activate speech recognition from the desktop

Watch how to configure the speech recognition wizard on Windows 7


Choose the same options in your language (you will have to do this every time you log in, until we find a way to set these options at the per-machine level).
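As a stopgap, the recognizer itself can at least be started automatically at login. The sketch below launches the Windows Speech Recognition UI from a script; the sapisvr.exe path and the -SpeechUX/-Startup switches reflect the usual Windows 7 layout (this appears to be how the built-in “run at startup” option launches it), so verify them on the LRC image before relying on this.

```python
# Minimal sketch: launch the Windows Speech Recognition UI from a login
# script so students do not have to start it from the desktop themselves.
# Path and switches are assumptions based on the standard Windows 7 setup.
import os
import subprocess

sapisvr = os.path.expandvars(r"%windir%\Speech\Common\sapisvr.exe")

if os.path.exists(sapisvr):
    # -SpeechUX starts the recognizer UI; -Startup is what the built-in
    # "run Speech Recognition at startup" shortcut passes.
    subprocess.Popen([sapisvr, "-SpeechUX", "-Startup"])
else:
    print("Windows Speech Recognition host not found:", sapisvr)
```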

Learn and teach writing in your second language on Lang-8.com

To me, improving language learning with technology seems to have two avenues: artificial intelligence and human intelligence. The automated feedback on writing provided by proofing tools, even though they have become smarter and more contextual (MS-Word 2007 and up can spot common errors like your/you’re or their/there), makes one wonder about the feasibility of the former. But the fact that automated essay-scoring tools have been developed and deployed (at least for ESL) and claim to score similarly to teachers makes one wonder about much more… Correcting writing remains expensive!

So maybe we should look into crowd-sourced writing correction, which needs no cutting-edge NLP, only well-understood web infrastructure to connect interested parties, but does require social engineering to attract and keep good contributors (and a viable business model to stay afloat: this site seems to be freemium).

Reading online comments and postings in your native language makes one wonder: can language teachers be replaced by crowdsourcing? I became aware of this language-learning website, which offers peer correction of learner writing by native speakers, through a language learner corpus. I have not thoroughly evaluated the site, but the fact that its data is being used by SLA researchers (http://cl.naist.jp/nldata/lang-8/) seems a strong indicator that the work done on the website is of value.

To judge by the numbers accompanying the corpus (it is a snapshot from 2010; a newer version is, however, available on request), these are the most-represented L2s on lang-8.com: [chart of the most-represented L2s]
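For anyone who wants to reproduce such counts, the sketch below tallies the learning languages in the corpus with a Counter. The file name and the record layout (one JSON list per line, with the language being learned in the third field) are assumptions about the NAIST snapshot, not something documented here, so adjust them to the actual release format.

```python
# Rough sketch of how the "most-represented L2" figures could be reproduced
# from the Lang-8 learner corpus. File name and field positions are assumed.
import json
from collections import Counter

learning_languages = Counter()

with open("lang-8-corpus.dat", encoding="utf-8") as corpus:
    for line in corpus:
        try:
            record = json.loads(line)
        except ValueError:
            continue                          # skip malformed lines
        learning_languages[record[2]] += 1    # assumed: field 2 = L2 being learned

for language, entries in learning_languages.most_common(10):
    print(f"{language}: {entries} journal entries")
```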

Input languages missing on PC21 to PC31

  1. I only had a chance to check the PCs in the title.
  2. This is the size of a good (complete) language bar: CAM05715
  3. These PCs are missing input languages when I am logged in:
  4. CAM05714, CAM05709, CAM05716, CAM05717, CAM05720
  5. My phone has a poor camera, so this needs double-checking (a script like the sketch below this list could verify it on the PC itself):
    1. Looks like PC29: CAM05719
    2. Unclear which, but it appears to be in the 2nd row: CAM05708
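Instead of relying on phone photos, the loaded input locales can be listed directly on each PC. Below is a rough sketch using the Win32 GetKeyboardLayoutList call via ctypes; it is Windows-only, and the LANGID table is limited to the six LRC languages (anything else installed is simply ignored).

```python
# Quick check to run while logged in on a lab PC: list the input locales
# Windows has loaded, so missing languages can be verified without photos.
import ctypes

LRC_LANGUAGES = {
    0x0409: "English (US)",
    0x0804: "Chinese (Simplified, PRC)",
    0x040C: "French (France)",
    0x0407: "German (Germany)",
    0x0411: "Japanese",
    0x0C0A: "Spanish (Spain)",
}

user32 = ctypes.WinDLL("user32", use_last_error=True)

count = user32.GetKeyboardLayoutList(0, None)          # ask how many layouts exist
handles = (ctypes.c_void_p * count)()
user32.GetKeyboardLayoutList(count, handles)

installed = {int(h or 0) & 0xFFFF for h in handles}    # low word of an HKL = LANGID

for langid, name in LRC_LANGUAGES.items():
    status = "OK" if langid in installed else "MISSING"
    print(f"{name:<28} {status}")
```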

Keyboards not loading on PC35


Did this PC get out of sync? It was out of sync with Java as well.

Faculty Workshop Spring 2014: "¡Mira, mamá! ¡Sin manos!" ("Look, Mom! No hands!"): Practice speaking your L2 with automatic intelligent feedback by operating LRC PCs through speech recognition instead of keyboard and mouse

  1. When: March 28, 2:15-3:15, and April 4, 2:00-3:00
  2. Where: LRC (COED 434)
  3. What: Language learning speaking practice assignments with automatic intelligent feedback using Windows Speech Recognition
    1. As part of the foreign language tools we installed with Windows 7 this past Fall, we got speech recognition on the LRC PCs for 6 languages (English, Chinese, French, German, Japanese, Spanish), representing over 85% of our enrolment.
    2. Unlike the speech recognition that comes with learning content packages like Auralog or Rosetta Stone, which
      1. had to be purchased for individual languages, but stopped functioning on the server a long time ago,
      2. was limited to built-in content,
      3. and was restricted to a separate account system,
    3. Windows Speech Recognition is
      1. free (it comes with the operating system), runs on the local lab PCs, and should be a bit more robust,
      2. content-agnostic and hence can integrate flexibly with your curriculum and contribute meaningfully to your students’ progression,
      3. and can be integrated with the existing user accounts.
    4. We combine Windows speech recognition with the new LRC screencast software, MS-Office and Moodle to offer a simple self-access assignment type that
      1. is available on all 45 LRC PCs (= scales even to large enrolment languages and 1st-year classes that cannot use the 24-seat Sanako for face-to-face speaking proficiency training)
      2. and blends the “artificial intelligence” of speech recognition with human intelligence to provide students with immediate automated feedback during pedagogically sound speaking practice, with minimal grading overhead for the teacher (= grade secure assignments by looking at the very end of a student-submitted screencast).
    5. This workshop will show actual speech recognition usage and assignment samples
      1. so far in English, French, German;
      2. If you want to bring your own samples to this workshop (there might still be time) or to an upcoming faculty showcase, I can help you prepare them during my biweekly LRC clinics (see the LRC main schedule, or schedule your own).
    6. We will step you through, hands-on and including tips & tricks, a sample voice training and assignment completion. Better than my made-up assignments would be one or more concrete tasks of your own, to be solved using speech recognition, that we could prepare for assigning to your students. Here are some parameters for that:
      1. Speech recognition can replace mouse and keyboard when operating the computer. Voice commands are simpler than sentences, so this could be a beginner task, as long as you have students study the (limited) command vocabulary (which I will make available during the workshop).
      2. Speech recognition can replace any writing task with dictation. Suggestions for proficiency levels:
        1. I have dictated a web page assigned for reading comprehension in a textbook used in 1200-level courses, even as a false beginner.
        2. However, a one-time training that helps the computer recognize an individual’s voice is required, and it comes with sentences that vary in complexity between languages:
          1. English: very easy, Beginner level;
          2. German, French: let’s have a look together, I’d say 1202 level;
          3. Japanese: 3000 level, I was told;
          4. Please test with me during the workshop: Spanish, Chinese.
  4. Download the SlideDeck (too big to embed).