Archive for the ‘Study-program-is-any’ Category

Protected: Elti0162 Syllabus with learning materials for listening and speaking

2014/04/07

No usable dual track audio from Sanako Study 1200 version 7 when saving as MP3?

  1. We have not been making much use of the dual-track recording capabilities of the Sanako here yet, and have instead relied on the diachronic separation of channels that the Sanako voice-insert mode provides. Now, however, we want to apply the Sanako to consecutive interpreting in our MA program, where the reviewing student or grading teacher has more of a need to switch between source and target language on the recorded dual-track audio.
  2. As far as I remember, dual-track recording, one of the core features of a digital audio lab, used to work out of the box in Sanako (up to version 5 on XP?), but to my surprise, no more: when I saved a student exercise, the left and right channels were identical (the source and interpreter voices were very hard to separate, and the entire interpretation impossible to follow).
    1. I had noticed before that with version 7 (at least; we skipped 6) all recording was dual-channel, but with the left and right audio channels simply duplicated (isn’t this a waste of bandwidth and storage resources?).
  3. I tested our 7.1 installation (on Windows 7 64-bit) by changing the advanced collection settings for an interpreting audio file, clapping from the teacher station:
  4. First I changed the tracks to be saved:
  5. Test 1: image. This mixes student and program down onto each channel: image
  6. Test 2: image. Program track only, as expected (no clapping): image
  7. Test 3: image. Student track only, as expected (only clapping, pretty much): image
  8. Workaround: after trying to save manually from the student station, it occurred to me to also change the file format:
    1. WMA:
      1. dual track
        1. works with “Save As” from the student.exe (where the MP3 option is conspicuously absent, or am I missing something?): image
        2. won’t work with “Collect” from the tutor.exe: both tracks get mixed down to one (and the student is far too soft), as you can witness here: image. (Saving both tracks is, fortunately, only an option for “Save As” WMA from the student.exe; you can also save only the student track as WMA.)
      2. WMA is a technically nice, efficient (small file size) and widely supported format, but does require an add-on installation on Mac OS X, not to mention mobile devices.
      3. WMA on Windows plays in Windows Media Player, but as of version 12, Windows Media Player has no easy way to adjust the balance anymore; you have to dig relatively deep into the OS itself (mmsys.cpl).
    2. MFF:
      1. dual track works also (saving a single track is actually not an option in this format), both using
        1. the student recorder “Save As” (which can also mix both tracks, see above)
        2. “Collect” from the tutor.exe: you can fade the left and right channels in and out with the balance tool that you find in the student recorder to the left of the timeline.
      2. Unfortunately,
        1. the file size quickly gets out of hand: image
        2. and for no obvious reason: the biggest file here is 12 times the size of the smallest, but no longer, and also only a 5-minute recording (I know that MFF also stores the user’s bookmark information, but this can hardly be the culprit): image
        3. compare this with how WMA compresses: image
      3. MFF is a proprietary format which only the Sanako recorder can play. This may be a nice way to drive adoption of the free Sanako Student Recorder, which is great for language learning. However, I had not originally planned on forcing my users, who are most comfortable with MP3, to use it.
    3. In addition, I now have the problem of how to switch the Sanako default collection format to MFF for interpreting teachers without confusing regular users.
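To verify the symptom described above (the collected “dual-track” file carrying two identical channels), you can compare the interleaved samples directly. A minimal stdlib sketch, assuming the recording has first been converted to a 16-bit stereo WAV (MP3/WMA/MFF would need external conversion first):

```python
# Diagnostic sketch: does a "dual-track" recording really contain two
# distinct tracks, or are the left and right channels mere duplicates?
# Assumes a 16-bit stereo WAV; convert MP3/WMA with an external tool first.
import wave

def channels_identical(wav_file):
    """True if the left and right channels of a 16-bit stereo WAV are equal."""
    with wave.open(wav_file, "rb") as w:
        assert w.getnchannels() == 2 and w.getsampwidth() == 2
        data = w.readframes(w.getnframes())
    # Samples are interleaved, 2 bytes each: L0 L0 R0 R0 L1 L1 R1 R1 ...
    left = bytes(b for i, b in enumerate(data) if i % 4 < 2)
    right = bytes(b for i, b in enumerate(data) if i % 4 >= 2)
    return left == right
```

If this returns True for a collected interpreting exercise, the dual-track information was already lost at collection time, not in playback.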

How to use the Sanako dual-track audio recorder

The Sanako Student Recorder (available for free here) allows you to listen to the source track while speaking/recording on the student track; useful, e.g., for an interpreter practicing shadowing or simultaneous interpretation. It is as simple as pressing the red record and green play buttons:

After recording and reviewing, click file/save, and choose your output format.
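If you end up with a genuinely dual-track stereo file, the two voices can also be separated outside the Sanako recorder for reviewing each on its own. A minimal stdlib sketch, assuming a 16-bit stereo WAV export with the source on the left channel and the student on the right (that channel assignment is my assumption, not documented Sanako behavior):

```python
# Sketch: split a 16-bit stereo WAV (assumed: source on left, student
# interpreter on right) into two mono WAV files for separate review.
import wave

def split_tracks(stereo_file, left_file, right_file):
    with wave.open(stereo_file, "rb") as src:
        assert src.getnchannels() == 2 and src.getsampwidth() == 2
        rate = src.getframerate()
        data = src.readframes(src.getnframes())
    # De-interleave: 2 bytes per sample, left then right in each frame
    left = bytes(b for i, b in enumerate(data) if i % 4 < 2)
    right = bytes(b for i, b in enumerate(data) if i % 4 >= 2)
    for out_file, mono in ((left_file, left), (right_file, right)):
        with wave.open(out_file, "wb") as out:
            out.setnchannels(1)
            out.setsampwidth(2)
            out.setframerate(rate)
            out.writeframes(mono)
```

The resulting mono files play in any player, so the reviewing student or grading teacher can switch between source and target language without fiddling with a balance control.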

Common commands in Speech Recognition for all languages supported

(I cut a corner and left out the language variants ZH-TW and EN-UK; sorry, we do not teach those here.)

Faculty Workshop Spring 2014: "¡Mira, mamá! ¡Sin manos!" ("Look, Mom! No hands!"). Practice speaking an L2 with automatic intelligent feedback by operating LRC PCs through speech recognition instead of keyboard/mouse

  1. When: March 28, 2:15-3:15, April 4, 2:00-3:00
  2. Where: LRC Room Coed 434
  3. What: Language learning speaking practice assignments with automatic intelligent feedback using Windows Speech Recognition
    1. As part of the foreign language tools we installed with Windows 7 this past fall, we got speech recognition on the LRC PCs for 6 languages (English, Chinese, French, German, Japanese, Spanish), representing over 85% of our enrolment.
    2. Unlike the speech recognition that comes with learning content packages like Auralog or Rosetta Stone, which
      1. had to be purchased for individual languages, but stopped functioning on the server a long time ago,
      2. was limited to built-in content,
      3. and was restricted by a separate account system,
    3. Windows Speech Recognition
      1. is free (with the operating system), runs on the local lab PCs, and should be a bit more robust,
      2. is content-agnostic and hence can integrate flexibly with your curriculum and contribute meaningfully to your students’ progression,
      3. and can be integrated with the existing user accounts.
    4. We combine Windows speech recognition with the new LRC screencast software, MS-Office and Moodle to offer a simple self-access assignment type that
      1. is available on all 45 LRC PCs (= scales even to large-enrolment languages and 1st-year classes that cannot use the 24-seat Sanako for face-to-face speaking proficiency training)
      2. and blends the “artificial intelligence” of speech recognition with human intelligence to provide students with immediate automated feedback during pedagogically sound speaking practice, with minimal grading overhead for the teacher (= grade secure assignments by looking at the very end of a student-submitted screencast).
    5. This workshop will show actual speech recognition usage and assignment samples
      1. so far in English, French, German;
      2. if you want to bring your own samples to this workshop (there might still be time) or to an upcoming faculty showcase, I can help you during my biweekly LRC clinics (see the LRC main schedule, or schedule your own).
    6. We will step you through a sample voice training and assignment completion, hands-on and including tips & tricks. Even better than my made-up assignments: bring one or more concrete tasks to be solved using speech recognition that we could prepare assigning to your students. Here are some parameters for that:
      1. Speech recognition can replace mouse and keyboard when operating the computer. Voice commands are simpler than sentences, so this could be a beginner task, as long as you have students study the (limited) command vocabulary (which I will make available during the workshop).
      2. Speech recognition can replace any writing task with dictation. Suggestions for proficiency levels:
        1. I have dictated a web page assigned for reading comprehension in a textbook used in 1200, even as a false beginner.
        2. However, a one-time training to help the computer recognize an individual’s voice is required, and comes with training sentences that vary in complexity between languages:
          1. English: very easy, Beginner level;
          2. German, French: let’s have a look together, I’d say 1202 level;
          3. Japanese: 3000 level, I was told;
          4. Please test with me during the workshop: Spanish, Chinese.
  4. Download the SlideDeck (too big to embed)

Example 5: Watch how you can dictate to Windows Speech Recognition (e.g. in English) and correct the results in MS Word

  1. Important: listen carefully: I am not a native speaker, but have a reasonably low error rate, because I enunciate, speak clearly and slowly, and separate the words.
  2. Consider it part of the exercise that you will have to re-read and re-type some of your output – use Track Changes in MS Word:
    1. Make it a game: How good can you get?
    2. If you get really good at it, make a screencast like this one and include it in your Mahara ePortfolio  as authentic evidence of your foreign language proficiency.
  3. Overall, it’s like what I say about cycling: beats walking. Anytime. :-)

A first look at the Google Dictionary extension for Chrome

  1. We
    1. have not pre-installed it in the LRC (for that, the extension would need to be more manageable by the teacher during face-to-face classes, which include exams),
    2. but can (with some reservations) recommend the Google Dictionary extension (even though it is only available for Chrome). Here is why:
  2. The Google Dictionary extension provides an interface to Google define and translate
    1. that is convenient (quickly accessed, like glosses) for reading activities in many languages (Q: is the privileged word sense displayed here intelligently chosen?)
    2. while (for some languages more than for others) providing access to additional word senses, usage examples and historical background information
  3. Interface 1: Tooltip,
    1. for English with audio image
    2. for other languages without audio (even though audio pronunciation may be available in Google translate for that language): image
    3. convenient access (I have been loving the tooltip interface since Google toolbar days)
    4. limited, but useful information:
      1. only a single word sense – now, the choice is not contextually intelligent (cannot blame them here!), and hence more than one word sense should be offered (here I must blame them: boo!): e.g., here “arch” should show more than the most common word sense: image image
      2. including pronunciation (not IPA, but audio)
    5. Interface 2 (“more”)
      1. For English, a click on “more” leads to the Google “define” search operator (the related etymology search operator has been reviewed here before): image
      2. Interface 3: unfold the search results by clicking on the down arrow at the bottom to access additional information: image
        1. additional word sense entries
        2. historical:
          1. etymology
          2. frequency data
        3. translation/dictionary entry:
          1. for our learners of languages other than English, the translation appears right in the tool tip, see above;
          2. for our ESL learners, this seems like a few too many steps for accessing this information, although a monolingual dictionary is also useful in many instances.
    6. For languages other than English, a click on “more” leads to Google Translate, which (this should get its own article, but for what it is worth) can be more limiting than “define”:
      1. while you are given multiple word senses for
        1. Spanish: image
        2. and to a lesser extent, for
          1. Arabic: image
          2. Hindi: image
      2. for many languages the results are much more limited:
        1. even if you look up German or French, you revert back to the (pedagogically terrible) single-word-sense original “translation” interface: image
        2. For East Asian languages, you get Roman alphabet transcriptions
          1. e.g. Chinese with Pinyin: image
          2. e.g. Japanese: image
  4. Still no per-user tracking? Here it would actually make sense for the user.

Google Calendar embedded aggregated calendar won’t show calendar names?

  1. When I add “Calendars to Display” in the “Google Embeddable Calendar Helper”,
  2. Google Calendar lets me select calendars I added from iCal sources,
  3. but it does not “remember” the names I have given these calendars, image
  4. displaying only the default name “Calendar”,
    1. both in the "Google Embeddable Calendar Helper" image
    2. and in the iframe embed:  image
  5. rendering the aggregation feature useless (Which is which?).
  6. Is there a workaround or hidden feature, like a “name” parameter in the embed query string? What is it? I cannot find one in the Google Calendar API reference.
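For what it is worth, here is how the aggregated embed URL is assembled; the parameter names (src, ctz, title, showTitle) are the ones the public embed helper emits, and as the question above suggests, I know of no per-src name parameter; display names appear to come from each calendar’s own settings. A minimal sketch (the calendar IDs and title are placeholders):

```python
# Sketch: assemble a Google Calendar embed URL that aggregates several
# calendars. "src" may repeat (once per calendar), but there is no
# documented per-src "name" parameter; names come from calendar settings.
from urllib.parse import urlencode

def embed_url(calendar_ids, title="LRC Calendars", tz="America/New_York"):
    params = [("title", title), ("showTitle", "1"), ("ctz", tz)]
    params += [("src", cid) for cid in calendar_ids]
    return "https://www.google.com/calendar/embed?" + urlencode(params)
```

So until Google exposes such a parameter, renaming the calendars themselves (in each calendar’s settings on the aggregating account) seems to be the only way to get distinguishable labels in the embed.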