Archive for the ‘audience-is-teachers’ Category

Configuring UNCC staff (MS-Exchange 10) email on Android 4.1.2

[Screenshots: Exchange account setup screens on Android 4.1.2]

This may come in useful not only during initial setup… I had trouble sorting out the web pages that address different versions and translating the picture-less instructions, and so have others when they come to the LRC for help. May this help some.
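For quick reference when the screenshots above are not at hand, here is a minimal sketch of the fields the Android 4.1.2 “Corporate”/Exchange account wizard asks for. The email address, domain and server below are placeholders for illustration only, not the actual UNCC values.

```python
# Hypothetical Exchange ActiveSync settings, in roughly the order the
# Android 4.1.2 "Corporate" account wizard asks for them. The address,
# domain, and server are PLACEHOLDERS, not the actual UNCC values.
EXCHANGE_ACCOUNT = {
    "email": "jdoe@example.edu",           # full staff email address
    "domain_username": "EXAMPLE\\jdoe",    # DOMAIN\username, or just the username
    "password": "********",                # your account password
    "server": "mail.example.edu",          # Exchange server host name
    "use_ssl": True,                       # "Use secure connection (SSL)"
    "accept_all_ssl_certificates": False,  # leave off unless instructed otherwise
}

def print_checklist(account: dict) -> None:
    """Print the settings as a checklist to walk through on the phone."""
    for field, value in account.items():
        print(f"{field:>27}: {value}")

if __name__ == "__main__":
    print_checklist(EXCHANGE_ACCOUNT)
```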

Faculty Workshop Spring 2014: "¡Mira, mamá! ¡Sin manos!" ("Look, Mom! No hands!"). Practice speaking L2 with automatic intelligent feedback by operating LRC PCs through speech recognition instead of keyboard/mouse

  1. When: March 28, 2:15-3:15, April 4, 2:00-3:00
  2. Where: LRC, COED 434
  3. What: Language learning speaking practice assignments with automatic intelligent feedback using Windows Speech Recognition
    1. As part of the foreign language tools we installed with Windows 7 this past Fall, we got speech recognition on the LRC PCs for 6 languages (English, Chinese, French, German, Japanese, Spanish), representing over 85% of our enrolment.
    2. Unlike the speech recognition that comes with learning content packages like Auralog or Rosetta Stone
      1. which had to be purchased for individual languages, and stopped functioning on the server a long time ago,
      2. which was limited to built-in content,
      3. and which was restricted by a separate account system,
    3. Windows Speech Recognition, by contrast,
      1. is free (with the operating system), runs on the local lab PCs, and should be a bit more robust,
      2. is content agnostic and hence can integrate flexibly with your curriculum and contribute meaningfully to your students’ progression,
      3. and can be integrated with the existing user accounts.
    4. We combine Windows speech recognition with the new LRC screencast software, MS-Office and Moodle to offer a simple self-access assignment type that
      1. is available on all 45 LRC PCs (= scales even to large enrolment languages and 1st-year classes that cannot use the 24-seat Sanako for face-to-face speaking proficiency training)
      2. and blends the “artificial intelligence” of speech recognition with human intelligence to provide students with immediate automated feedback during pedagogically sound speaking practice, with minimal grading overhead for the teacher (= grade secure assignments by looking at the very end of a student-submitted screencast).
    5. This workshop will show actual speech recognition usage and assignment samples
      1. so far in English, French, German;
      2. if you want to bring your own samples to this workshop (there might still be time) or to an upcoming faculty showcase, I can help you during my biweekly LRC clinics (see the LRC main schedule, or schedule your own).
    6. We will step you through – hands-on, including tips & tricks – a sample voice training and assignment completion. Better than my made-up assignments would be one or more concrete tasks of your own to be solved using speech recognition, which we could prepare for assigning to your students. Here are some parameters for that:
      1. Speech recognition can replace mouse and keyboard when operating the computer. Voice commands are simpler than sentences, so this could be a beginner task, as long as you have students study the (limited) command vocabulary (which I will make available during the workshop).
      2. Speech recognition can replace any writing task with dictation. Suggestions for proficiency levels:
        1. I have dictated a web page assigned for reading comprehension in a textbook used in 1200, even as a false beginner.
        2. However, a one-time training that helps the computer recognize an individual’s voice is required, and it comes with sentences to read aloud that vary in complexity between languages:
          1. English: very easy, Beginner level;
          2. German, French: let’s have a look together, I’d say 1202 level;
          3. Japanese: 3000 level, I was told;
          4. Please test with me during the workshop: Spanish, Chinese.
  4. Download the SlideDeck (too big to embed)
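For colleagues who would like a one-click launcher on the lab PCs, the Windows 7 Speech Recognition listener can also be started from a script. Below is a minimal sketch, assuming the standard Windows 7 location of sapisvr.exe; verify the path on your own image before relying on it.

```python
# Minimal sketch: start the Windows 7 Speech Recognition UI from a script,
# e.g. as part of an assignment launcher on a lab PC. Assumes the standard
# install location of sapisvr.exe; adjust if your image differs.
import os
import subprocess

SAPISVR = os.path.expandvars(r"%windir%\Speech\Common\sapisvr.exe")

def start_speech_recognition() -> None:
    """Launch the Speech Recognition listener (same as the Start menu entry)."""
    if not os.path.exists(SAPISVR):
        raise FileNotFoundError(f"Speech Recognition not found at {SAPISVR}")
    # -SpeechUX brings up the microphone bar / listening UI.
    subprocess.Popen([SAPISVR, "-SpeechUX"])

if __name__ == "__main__":
    start_speech_recognition()
```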

How to resolve error “Speech Recognition could not start because the language configuration is not supported”

  1. Problem: I have seen this error message in the LRC.
  2. Root cause: the display language and the speech recognition language do not match (the latter is set to default to the former in the LRC, but it seems they can get out of sync), e.g. an English display paired with Chinese speech recognition.
  3. Solution: Follow the instructions in the error message, i.e.
    1. Access the Speech Recognition control panel.
    2. Then change the speech recognition language to match the display language (a script for checking whether the two languages match follows after this list).
  4. Quick workaround: not sure how quick it is, but in the LRC you can also just restart the computers; they are “frozen” to a default configuration whose display and speech recognition languages match (English/English).
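When troubleshooting several machines at once, the two languages can also be compared from a script. The sketch below is assumption-laden: the registry locations for the selected recognizer (HKCU\Software\Microsoft\Speech\Recognizers, value DefaultTokenId, and the token’s Attributes\Language value under HKLM) reflect what I have seen on our Windows 7 images, so verify them on yours before relying on this.

```python
# Sketch: detect the display-vs-recognizer language mismatch behind this error.
# The registry paths below are ASSUMPTIONS based on our Windows 7 images.
import ctypes
import winreg

def display_langid() -> int:
    """Windows display language as a LANGID (e.g. 0x0409 = English US)."""
    return ctypes.windll.kernel32.GetUserDefaultUILanguage()

def recognizer_langid() -> int:
    """LANGID of the currently selected speech recognizer (assumed key layout)."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                        r"Software\Microsoft\Speech\Recognizers") as key:
        token_id, _ = winreg.QueryValueEx(key, "DefaultTokenId")
    # token_id points at HKLM\...\Recognizers\Tokens\<id>; its Attributes subkey
    # holds a "Language" value with the LANGID(s) in hex, e.g. "409".
    subkey = token_id.split("HKEY_LOCAL_MACHINE\\", 1)[1] + r"\Attributes"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
        language, _ = winreg.QueryValueEx(key, "Language")
    return int(str(language).split(";")[0], 16)

if __name__ == "__main__":
    disp, reco = display_langid(), recognizer_langid()
    print(f"display language: 0x{disp:04x}, recognizer language: 0x{reco:04x}")
    if disp != reco:
        print("Mismatch: set the Speech Recognition language to match the display language.")
```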

Example 5: Watch how you can dictate to Windows Speech Recognition (e.g. in English) and correct the results in MS-Word

  1. Important: Listen carefully: I am not a native speaker, but I get a reasonably low number of errors, because I enunciate, speak clearly and slowly, and separate the words.
  2. Consider it part of the exercise that you will have to re-read and re-type some of your output – use Track Changes in MS-Word (a short script for preparing a document with Track Changes already switched on follows below):
    1. Make it a game: How good can you get?
    2. If you get really good at it, make a screencast like this one and include it in your Mahara ePortfolio  as authentic evidence of your foreign language proficiency.
  3. Overall, it’s like what I say about cycling: beats walking. Anytime. :-)
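If you want to hand students a document that already has Track Changes switched on for the correction step, MS-Word can be prepared from a short script. The sketch below uses COM automation via pywin32 and assumes Word and pywin32 are installed on the machine; the file path is arbitrary.

```python
# Minimal sketch: create a Word document with Track Changes already enabled,
# ready for a dictate-then-correct exercise. Requires MS-Word and pywin32.
import win32com.client

def new_dictation_doc(path: str) -> None:
    """Open Word, switch on Track Changes, add an instruction line, and save."""
    word = win32com.client.Dispatch("Word.Application")
    word.Visible = True                # let the student see the document
    doc = word.Documents.Add()
    doc.TrackRevisions = True          # corrections will show up as tracked changes
    doc.Range(0, 0).Text = "Dictate your text below this line, then correct it.\n"
    doc.SaveAs(path)                   # adjust the path to your environment

if __name__ == "__main__":
    new_dictation_doc(r"C:\Temp\dictation-exercise.docx")
```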

Protected: LRC old software inventory

2014/03/07 (password-protected post)

Protected: LRC old media inventory for spring cleaning

2014/03/07 (password-protected post)

A first look at the Google Dictionary extension for Chrome

  1. We
    1. have not pre-installed the Google Dictionary extension in the LRC (for that, the extension would need to be more manageable by the teacher during face-to-face classes, which include exams),
    2. but can (with some reservations) recommend it, even though it is only available for Chrome. Here is why:
  2. The Google Dictionary extension provides an interface to Google “define” and Google Translate
    1. that is convenient for reading activities in many languages (accessed as quickly as glosses; Q: is the privileged word sense displayed here chosen intelligently?)
    2. while (for some languages more than for others) providing access to additional word senses, usage examples and historical background information
  3. Interface 1: Tooltip,
    1. for English, with audio;
    2. for other languages, without audio (even though audio pronunciation may be available in Google Translate for that language);
    3. convenient access (I have been loving the tooltip interface since Google toolbar days)
    4. limited, but useful information,
      1. a word sense – note that the choice is still not contextually intelligent (cannot blame them here!), and hence more than one word sense should be offered (here I must blame them: boo!): e.g. “arch” should show more than just the most common word sense;
      2. including pronunciation (not IPA, but audio)
    5. Interface 2 (“more”)
      1. For English, a click on “more” leads to the Google “define” search operator (the related etymology search operator has been reviewed here before); the same results can also be reached without the extension – see the sketch after this list.
      2. Interface 3: unfold the search results by clicking on the down arrow at the bottom to access additional information:
        1. additional word sense entries
        2. historical:
          1. etymology
          2. frequency data
        3. translation/dictionary entry:
          1. for our learners of languages other than English, the translation appears right in the tool tip, see above;
          2. for our ESL learners, this seems like a few too many steps to access this information, although a monolingual dictionary is also useful in many instances.
    6. For languages other than English, a click on “more” leads to Google Translate, which (it should get its own article, but for what it is worth) can be
      1. more limiting than “define”: While you are given multiple word senses for
        1. Spanish,
        2. and, to a lesser extent, for
          1. Arabic,
          2. Hindi,
      2. for many languages the results are much more limited:
        1. Even if you look up German or French, you revert to the (pedagogically terrible) single word-sense original “translation” interface.
        2. For East Asian languages, you get Roman alphabet transcriptions
          1. e.g. Chinese, with Pinyin;
          2. e.g. Japanese.
  4. Still no per-user tracking? Here it would make sense for the user.
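As an extension-free fallback (e.g. on a locked-down LRC PC), the same “define” results can be reached with a plain Google search URL. A small sketch using only the Python standard library:

```python
# Small sketch: open Google's "define" results for a word in the default
# browser, as an extension-free alternative to the tooltip lookup.
import urllib.parse
import webbrowser

def define(word: str, hl: str = "en") -> None:
    """Open a Google define: search for `word`; `hl` sets the interface language."""
    query = urllib.parse.quote_plus(f"define:{word}")
    webbrowser.open(f"https://www.google.com/search?q={query}&hl={hl}")

if __name__ == "__main__":
    define("arch")
```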

Sanako text messaging: How the teacher sends a URL to all students or to an individual, and closes an individual response

[Screencast: Sanako text messaging – sending a URL to all students, to an individual, and closing an individual response]

Save yourself some valuable face-to-face class time with instant messaging of clickable URLs.