Archive for the ‘software’ Category

A PowerPoint Template to base your clicker-like face-to-face class exercises on

2011/12/08
  1. Enables easy exercise creation: slide0567_image532
  2. Resides at S:\coas\lcs\labs\lrctest\templates\Teacher.pot.
  3. Requires MS-PowerPoint 2010, as installed on the teacher computer in LRCRoomCoed434.
  4. Training videos are available for download here (they require Windows Media Player on Windows, as installed in LRCRoomCoed434).
    1. powerpoint_template_overview_default_slide.wmv
    2. powerpoint_template_sequential_slides.wmv
    3. powerpoint_template_interactive_slides.wmv
  5. Usage samples are available on request for:
    1. Beginning German classes: teacher_pot_dual_screen_bundeslaender_with_response_analyzer
    2. Intermediate classes: cc-teacher-pot-interactive-drink-listening-comprehension
    3. Advanced classes: cc-teacher.pot-100-deutsche-jahre-example

Symantec Ghost Software inventory of the LRC PCs

  1. This list documents the configuration of the LRCRoomCoed434 and LRCRoomCoed433 PCs (which are imaged using the Symantec Ghost client).
    1. View here.
    2. LRC staff: click here for edit-in-browser access to the non-embedded list.
    3. Click here for background information about this inventorying method.

Language Resource Centers computer configuration

  1. This list documents customizations on LRC computers to facilitate language learning.
    1. Click here to view the list.
    2. LRC staff: click here for edit-in-browser access.

How to do model imitation recording exercises to improve language learner pronunciation in the LRC and beyond

  1. Sometimes teachers ask about support for voice recognition in the LRC. The terms voice recognition and speech recognition (the former appears to be used by analogy with face recognition in authentication and other security contexts) are usually reserved for software that can transcribe your voice into text – there is still no free option for this, as far as I know. Dragon NaturallySpeaking is the oft-recommended market leader outside of education (and, within it, Auralog Tell me more; see below). Update summer 2012: We are working on enabling the speech recognition built into Windows 7 Enterprise for English, Chinese (Simplified and Traditional), French, Spanish, German, and Japanese.
  2. Often, what is actually desired is a digital audio recorder with a voice graph, ideally a dual-track recorder.
    1. On the LRC student computers, we have a digital audio recorder for exactly this purpose, as part of the SANAKO Study 1200 language learning system.
      1. It features a dual-track recorder (the student can listen to the teacher track – which can be a prerecorded model to imitate – on the left channel of a stereo track while recording the student track on the right channel) with a voice graph: sanako_student_exe_pane_player_audio_voicegraph_highlighted. See this dual-track voice-graph screencast demo from the vendor, and also our student cheat sheet based on the vendor documentation.
      2. The SANAKO is available in the LRC, as well as in many other educational institutions around the world, but it is neither free nor web-based (although a web-based version seems to be in the works). It currently requires MS-Windows to run.
    2. A popular and free audio editor is Audacity (though it is not an SLA-specific application, let alone one geared towards model imitation; also, for all practical purposes, it requires an extra download and installation of an MP3 encoder to be able to save recordings as compressed MP3). To use it for model imitation exercises,
      1. the student can open a model track (MP3 recommended),
      2. manage the imitation portion within the program, using the voice graph: elti-lynn-question-response-result-audacity-names1
      3. then export back out as MP3,
        1. either her responses individually (see my demo screencast, which requires Windows Media Player on Windows; it actually shows a question/response exercise rather than a model imitation, but the principle is the same),
        2. or, after deleting the model track, the response parts mixed down to one track,
        3. or, if (as in my demo screencast) the timeline sequence of model (with pauses) and responses is carefully managed so that model and imitation do not overlap, everything mixed down to one track.
    3. In one language program, I worked extensively with Auralog Tell me more,
      1. which was (not exclusively, but arguably too much) based on the pedagogic concept of having students compare the voice graph of their imitation with the model voice graph (and which was certainly not free): auralog-tellmemore-voicegraph
      2. To my knowledge, Auralog Tell me more does not allow for adding teacher-produced content as models.
      3. I did like the self-reflective and repetitive practice element. However, I found that students – apart from intonation and pitch (the latter not useful for non-pitch-based languages) – did not benefit as much as one might have expected from viewing the voice graph; indeed, they tended to get overwhelmed, even confused, by the raw voice information in such a graph.
      4. And the automated scoring of pronunciation (or “speech recognition” – not free-form, but on a level that has been commoditized in operating systems like Windows 7: voice-directed selection among a limited set of options, such as menu options or, in the case of Auralog, different response options) seemed iffy and less than transparent in Auralog Tell me more, even though this is their primary selling point. E.g., when I made deliberately gross mistakes, the program seemed to change its standards and wave me through (English pronunciation example; I also observed this when testing Auralog with East Asian speakers of English).
  3. A voice graph is not the same as a more abstract phonetic transcription (although I do not know whether language learners can be trained in phonetic symbol sets like the IPA). There are now experimental programs that can automate the transcription of text into phonetic symbol sets, e.g. for Portuguese or Spanish. Maybe you will find that practice with recording and a phonetic transcription of the recorded text is more useful for your students’ pronunciation practice than a fancy voice graph.
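Incidentally, the dual-track idea above (model on the left channel, student response on the right channel of one stereo recording) is easy to illustrate outside of SANAKO or Audacity. Here is a minimal Python sketch, using only the standard library, that splits a 16-bit stereo WAV into separate model and student mono files; the function name and file paths are my own, not part of either product:

```python
import struct
import wave

def split_stereo(in_path, left_path, right_path):
    """Split a 16-bit stereo WAV into two mono files:
    left channel (model/teacher track) and right channel (student track)."""
    with wave.open(in_path, "rb") as src:
        assert src.getnchannels() == 2 and src.getsampwidth() == 2
        framerate = src.getframerate()
        frames = src.readframes(src.getnframes())
    # Unpack interleaved 16-bit samples: L, R, L, R, ...
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    for path, channel in ((left_path, samples[0::2]), (right_path, samples[1::2])):
        with wave.open(path, "wb") as out:
            out.setnchannels(1)
            out.setsampwidth(2)
            out.setframerate(framerate)
            out.writeframes(struct.pack("<%dh" % len(channel), *channel))
```

Either mono file can then be re-imported into an audio editor, compared against the model, or compressed to MP3 for submission.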

MS Universal Language Input Tool offers correction and transliteration on any web page

Using the UIME, you can “type any language with any keyboard on any web page, using only the Roman characters present on every keyboard.”

ms-universal-Language-Input-Tool

And you can install your favorite input language in your web browser, like so:

ms-uime-add-japanese-ie8 ms-uime-add-japanese-ie8-test
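The basic idea behind typing any language with Roman characters can be approximated with a simple longest-match table lookup. The Python sketch below is a toy romaji-to-hiragana transliterator with a tiny, hypothetical syllable table; it is only an illustration of the principle, not the actual algorithm or data the Microsoft tool uses:

```python
# Toy romaji -> hiragana transliteration via greedy longest-match lookup.
# The syllable table is a tiny illustrative subset, not Microsoft's data.
KANA = {
    "a": "あ", "ri": "り", "ga": "が", "to": "と", "u": "う",
    "sa": "さ", "ku": "く", "ra": "ら",
}

def transliterate(romaji: str) -> str:
    out, i = [], 0
    while i < len(romaji):
        # Try the longest possible syllable first (up to 3 Roman letters).
        for length in (3, 2, 1):
            chunk = romaji[i:i + length]
            if chunk in KANA:
                out.append(KANA[chunk])
                i += length
                break
        else:
            # No match: pass the character through unchanged.
            out.append(romaji[i])
            i += 1
    return "".join(out)

print(transliterate("sakura"))    # さくら
print(transliterate("arigatou"))  # ありがとう
```

A real input method editor adds candidate windows, context-dependent conversion to kanji, and dictionaries far beyond such a table, but the Roman-letters-in, native-script-out pipeline is the same.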

ECAR National Study of Undergraduate Students and Information Technology, 2011 Report released

  1. The Educause ECAR report for 2011 lists among its top actionable survey results: “Nail the basics. Help faculty and administrators support students’ use of core productivity software for academic work.”
  2. Not a language-learning-specific result, but also a reminder for the LRC to prioritize:
    1. LRC posts on “productivity software”,
    2. and most of our students’ “academic work” lives online in Moodle.

Easy adding and viewing GeoTags with your phone camera

  1. Open your camera app, go to “Settings” (the sprocket icon), then the menu item “Geotagging”, and set it to “On”: android-camera-SETTINGS-GEOTAGGING
  2. Turning the phone’s GPS on is not absolutely necessary, as this indoor basement shot proves:
  3. pixi-phone-wlgallery-geotag2-wireless-tower
  4. but it will likely increase the accuracy of the geotag in your photo greatly: pixi-phone-wlgallery-geotag-gps
  5. How to view geotags depends on your photo viewer application; the above examples are taken from the free Windows Live Photo Gallery.
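For the curious: a geotag in a photo’s EXIF data stores latitude and longitude as degree/minute/second values plus a hemisphere reference, and the conversion to the signed decimal degrees that mapping tools expect is simple arithmetic. A minimal Python sketch (the function name is my own):

```python
from fractions import Fraction

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ('N', 'S', 'E', 'W') to signed decimal degrees."""
    dec = Fraction(degrees) + Fraction(minutes) / 60 + Fraction(seconds) / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return float(-dec if ref in ("S", "W") else dec)

print(dms_to_decimal(35, 30, 0, "N"))    # 35.5
print(dms_to_decimal(120, 15, 36, "W"))  # -120.26
```

Photo viewers like Windows Live Photo Gallery do this conversion for you before plotting the shot on a map.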

Using geolocation information in photos for documentation?

  1. During documentation writing today, I noticed that my WlGallery suddenly (?) started displaying the geotags in pictures taken with my Palm Pixi, which I set a long time ago to store (and share) location information. It looks like this started working earlier, but went unnoticed: pixi-wlgallery-geotag It also looks like it uses cell tower triangulation, not GPS data (when the latter is not available?): pixi-wlgallery-geotag2 At such a granularity, this feature cannot be useful for generating documentation. But maybe for excursions?
  2. Still, it seems a bit iffy on the Pixi side of things: I am pretty sure that previously I could not see geotags in my photos, even though I tried several image metadata viewers. Also, the Pixi does not always add geotags: is this limited to pictures taken when there are network connectivity issues?
  3. It also seems that editing operations in WlGallery, including “create panorama”, “eat” the geotags in the output image…