Archive for the ‘learning-materials’ Category

Arachne, online database for archaeology

“ARACHNE is the free [account creation required] object database of the DAI and the Institute of Classical Archaeology in Cologne. It provides more than 1 million images of finds, architecture and excavations with meta information, as well as digitised historical literature” (http://www.ariadne-infrastructure.eu/Services/Online-Services: find more information and help on this page). Example of the Advanced Search start choice page:

Continue with “Einzelmotive” (individual motifs), which also gets you back into an English interface – the field-specific explanation on the right certainly helps:

There is auto-completion/suggestion in the search; however, it seems to work only for German, and rather eclectically:

2011-07-08_BILD0047b Stitch (5000x3941)

Beats having to plaster your surroundings with photos for making your own panoramas.

Obstacles when adopting OER

  1. OER is a shibboleth inspired by the vast crowdsourcing successes the internet has enabled by connecting like minds and allowing them to share at a fraction of what publishing used to cost when publishing houses came into existence. At the same time, I have seen many an OER repository that, compared with Facebook – or, better still in our context (but already with considerably fewer contributors and more rigorous editors), Wikipedia – looked rather deserted. What happened?
    1. Let’s start with: what are the alternatives to OER? That points to one obstacle being simple awareness: “professors choose to adopt textbooks and rarely think about textbook prices in the process. The student is given a take it or leave it option”. Recent initiatives in the university arena to publish research as open research have their root in the concern that publishers (some with profit margins nearing 40%) overcharge, and even manage to charge universities, through their libraries, twice for work that the universities themselves have produced and provided to them – work partially funded by the taxpayer already. This concern has led to initiatives to bypass publishers.
    2. Now how about students’ monetary needs? Similarly to research publishers, the publishing houses that serve the US higher education market as not-OER providers have also been under scrutiny, for overcharging and double charging (textbook materials are partly produced by university teams). For a long time now, the increase in textbook prices has outpaced inflation (and even tuition inflation). Textbook prices are one factor in what has been lamented as an education system that is too expensive and too exclusive, during times when the American middle class has not seen an increase in wealth since 1988, the median wealth in the US has for the first time fallen behind that of other developed nations,[1] student debt has mounted, students’ returns from gainful employment are down, and the value of a college education is becoming disputed – the fact that a college degree is surely valuable compared with not going to college at all is a sign of the monopoly of universities and colleges on higher education, a monopoly that may not last. It is already under attack by potentially disruptive innovations, which include paying students who forego college a stipend (instead of tuition) to invest in an enterprise they found, as well as MOOCs and their credentialing.
    3. There are also reactions to 4-year higher education being overpriced which could be favorable to community colleges: students have shown a preference for community colleges, which offer a better subsidized and price-controlled education than other forms of higher ed – so much so that access controls have been put in place in California. Other regulatory changes may also threaten the cozy advantage that community colleges have enjoyed – so no reason to rest on your laurels.
    4. And the cost of textbooks – the same as at 4-year colleges, where the overall impact of their cost is much smaller – can deter especially price-conscious students. Introducing OER can save students as much as one-third of the cost of their college degree (says Lumen Learning, who helped Tidewater Community College base its Z[ero textbook cost] degree on OER). In return, student numbers will go up, courses will materialize that would not have before, and teachers will teach. Better still: will students not be much better off with teacher-approved and teacher-integrated OER than with the much more radical alternative of relying on MOOCs? MOOCs – being little more than a textbook, a bit of artificial intelligence and peer review – favor students who can already learn on their own, and thus, rather than ringing in the democratization of education, might easily prove to be yet another way to leave behind the lower middle class.
    5. But the promise of extra hours and of realizing educational ideals might not be enough; a few extra incentives for teachers might be required. One cited obstacle is a “lack of incentives for teachers to share their work”, and to go the extra mile to adapt shared work. Why? On to part 2:
  2. On the other hand: why have non-OER providers stayed in business so long (for textbook publishers, isn’t it rather: thrived, immunized even against Amazon), despite the internet having made it easy to exchange text and images for over two decades now? How can we hope to replace what has allowed them to charge more and more? [2]
    1. JISC cites the extra effort for teachers to find, evaluate, integrate (into an institution, and into a cohesive course) and fully master learning content – while teachers do not share in the students’ benefit of price savings. Clearly there is an opportunity here for the institution to broker between students and teachers.
    2. Publishing houses have a vital interest in making it easy for teachers to adopt their product, and they are rather successful at it – in many ways.
    3. The teacher’s workload easily becomes the bottleneck,
      1. not only when teachers produce their own materials in house – where many hands do not necessarily make light work (can a course really be divvied up any better than software? Remember “The Mythical Man-Month”) – but also when automation for producing learning materials comes into play and things get a bit complex.
      2. What if you refuse to reinvent the wheel and opt for reuse of materials already produced elsewhere? You still need to gauge the wheel and install it – and actually, it is more like you are the wheel that carries the load, so make sure you are well-rounded in the textbook you adapted. Does the presumably enthusiastic teacher who designed the content have the nerve, time, money and experience to provide all the help that textbook publishers do, and to explain content that is transparent to her only because she conceived it?
      3. What if there is not even a well-rounded textbook to adopt?
        1. Pick and choose may sound like a pleasant activity for a course planner, but only up to a point. In past projects I participated in, when OER were in their infancy, the granularity was too fine, very atomic, since we did not have higher units like class plans, chapters or even syllabi that we could fit into the program.
        2. E.g. a learning materials project I participated in in the UK in 2008 was meant to create a metadata schema for language learning “objects”, to make them reusable and discoverable. A UK-wide work group sponsored by LLAS failed to gain JISC funding for creating a metadata schema for beginning-to-intermediate language learning. Incidentally, as of last year, for JISC “assigning appropriate metadata is still a challenging issue although utilising social software/web 2.0 services can help with retrieval”. My ensuing localized attempts to reuse existing metadata schemas from libraries, government educational agencies and other such institutions only increased the complexity, beyond usability. Disparate resources (either on the internet or in house: digital media; commercial materials that could not serve as, but were supposed to be mined for, syllabi: Auralog) could barely be catalogued. Then there were multiple catalogues and storage silos with differing capabilities (LMS (Blackboard), e-repository (Equella/LearningEdge db)) – my users deemed that too difficult to wade through.
        3. The term en vogue then was still the “learning object”, adapted from OO programming – even though it seems a broken promise even there: building software out of “reusable” objects involves a lot more work and overhead than a child imagines when playing with LEGO, and new coding paradigms and SDLC techniques are thrown at the wall all the time to try and cope with this. What are the chances that the terminology (assuming this is the intent: encapsulation? Inheritance, anybody?) can be applied to learning units (content?), which are much more complex than software objects, just like natural language is much more complex than a computer language?
        4. It is very difficult to form “learning objects” into a coherent syllabus: in my field of teaching, the syllabus is a complex, interdependent progression of exercises and skills involving lexis, grammar and practice in 4 skills (reading, listening, speaking, writing), the design of which is not easily automated or made modular. This may be more pronounced in languages, but it is hardly trivial in other fields either.
      4. Worse still: assuming we have a textbook, or rather a complete syllabus, as OER – which contains a lot of work and careful planning, and I know that many teachers mind sharing their syllabi for exactly that reason – does using shared resources save time compared with starting from scratch? Truly organizing a study program that uses a common syllabus across sections requires a lot of training of the teachers by the author of the syllabus.
      5. Discoverability: OER may be discoverable, but teachers have to do more work evaluating them. Publishers have the power of the brand and other review mechanisms in place that save teachers time. What would help is an evaluation process that is crowd-sourced, or at least coordinated internally within an institution.
      6. JISC further cites “Technical challenges – particularly choices around content packaging, branding, version control”, but hopes that “clear guidelines” will alleviate this – guidelines from educational institutions, and from JISC, itself another public institution. How well will the 19th-century empire spirit, trying to rein in the internet anarchy, compete with the 21st-century global corporation? Publishing houses seem to manage a lot of these technical issues rather well already – at least the ones that compete with Amazon in knowing their customers, e.g. by walking the fine line between stability, which teachers invested in their textbook want, and innovation (refresh).
        1. For education, I hopped onto the “sharing economy” bandwagon in the late nineties, when I ran a collaborative language learning links repository, first for a consortium of Canadian universities and then, a little over 10 years ago, for my US university: teachers uploaded target culture links with review assignments, and students posted reviews. The scope was limited, and the integration into the syllabus remained a creative challenge. Even under this limited premise, sustainability quickly became an issue in the long run: I remember my first automated broken link checkers in that repository. While the project was fun as long as it lasted, already then I noticed some typical problems such projects are fraught with: I had made a lot of inspiring materials available for public use (not only my or other teachers’ input, but very much how the students responded to the challenge of reviewing authentic target language websites with their limited language skills), but the shared database infrastructure technology did not move with me, and with it the learning materials went down. Modern learning materials would need much more ongoing investment: will they be upgraded, progress and improve, while managing change for the end user? Textbook publishers have learned to walk a fine line between upgrading their textbooks to keep them current and attractive, and not forcing teachers to relearn everything and lose their considerable past investment into learning to teach with a textbook.
      7. JISC finally notes that the production quality of publishing houses is better, since their QA is better organized. However, the quality problem may truly be a more fundamental one, more closely aligned with what else has been happening on the internet in the last decade: technical difficulty and oversimplification in a rapidly evolving technical landscape:
        1. Back to Wikipedia: critical mass, and content still simpler than higher education. I remember how a department head mocked his dean’s publication list for consisting mostly of encyclopedia articles. Encyclopedia articles are simpler than research inquiry and educational guidance. Pedagogical OER require more instructional design, and also more differentiating interaction, than an infusion of facts through the “Nuremberg funnel”.

        2. But assuming we have managed to avail ourselves of a coherent syllabus, what does educational work with it have to look like nowadays, when it can (and therefore must) blend human and artificial intelligence? Publishing houses may be in a better position to integrate systems that facilitate this teacher–student interaction, based on a data pipeline and analytics, than anyone plugging relatively simple formats over open interfaces into a relatively generic LMS.
        3. Is “openness” at this point more than a sentimental banner preserved from our salad days? You can download your content from Facebook, but what good does it do you? It does not delete it from Facebook’s database, so it does not prevent Facebook from calculating with that stored knowledge. Nor does it give you the same calculation results. Innovation in the technology field still happens, and in spite of the rhetoric of “openness” – directed against “dinosaurs” like Microsoft – it seems to be still happening in proprietary formats, which constitute what open products have to compete with. Can they? What is more impressive in artificial intelligence than Google’s and Facebook’s data silos and algorithms? And what is more closed?
        4. At how much of a disadvantage is OER if it is required to be dumbed down – maybe not so much because of its requirement to remain open, but for lack of a common intelligent platform, similar to Google’s or Facebook’s, shared by all the participants?
        5. Will it be enough of an advanced platform for OER to compete if any browser can load it (and are we sure we know what is implied if Google Chrome is the player? Worse, will we want to be? Of course I love Google add-ons for learning), without the support of server software? If not, who will maintain and innovate on the server side?
      6. What non-trivial content formats are possible, what are the exchange formats, and can we actually get them to work (SCORM, LTI)? In 2000, I had high hopes for the exchange of quiz learning objects of varying provenance (enthusiast teachers, publishers) between a few standard institutional LMS (Blackboard and WebCT allowed loading such content) – this has not taken off, in favor of the textbook publishers providing their own, superior online content distribution platforms: “This is because open content lacks the platforms that provide the type of differentiated learning experiences offered by MyLabs and MindTap. These platforms are big advantages for the major publishers and add perceived value to their content that open content groups cannot currently match”. And it is not just about delivering content in flashy forms, but also about learning analytics, and how these – as discussed during a Pearson retreat last spring – could provide added value by gathering, analyzing and acting on the data that individuals have entered, and by linking individuals.
      7. Even if you manage to get artificial intelligence into your OER, do you have hooks for teachers to contribute and to interact with students, based on easy accessibility and visualization of students’ data? “Blending” artificial (software) and human (teacher, peer) intelligence makes the OER stronger. MOOCs have been mocked as mere non-paper textbooks. That is not quite correct: MOOCs are more than (paper) textbooks in the traditional sense – only paper textbooks can be freely reused and handed down by the user, but they lack interactivity and intelligent feedback – since MOOCs provide some form of intelligence. Artificial intelligence, however, is still primitive: black/white, false/correct. Integrating some automated textual evaluation is hard. Breaking down learning goals into assessments that a computer can handle (deliver and automatically grade) is very hard. Assessing not only a percentage, but the understanding of the student – where it is lacking and what would best be done next (personalized path, differentiated instruction) – is as yet impossible. MOOCs also contain an element of human intelligence: usually the student can exchange information – whatever it is worth – with other students if they move through the MOOC as a cohort, plus some “teacher”/“teaching assistant” feedback. That, however, is expensive – but it is also the selling point for higher education.
      8. To overcome all these issues, will an institutionalization be required that looks similar to forming new publishing houses? Will OER actually have to come attached to a software-developing powerhouse, similar to Google, to be able to compete with proprietary formats – and won’t that be a bridge too far for the OER movement? Stay tuned…
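The “black/white, false/correct” kind of automated grading described above can be made concrete with a toy sketch. This is purely illustrative – the function name, the exercise keys and the exact-match policy are all my own assumptions, not any real MOOC platform’s API – but it shows why such grading yields a percentage without any diagnosis of where the student’s understanding is lacking:

```python
# Illustrative only: an exact-match gap-fill checker. It returns a
# score but no diagnosis of WHY an answer is wrong, and gives no
# partial credit - the black/white grading the post criticizes.
def grade_gap_fill(answers: dict, key: dict) -> float:
    """Percentage of answers that exactly match the key (case-insensitive)."""
    correct = sum(
        1 for item, expected in key.items()
        if answers.get(item, "").strip().lower() == expected.lower()
    )
    return 100.0 * correct / len(key)

# A near-miss ("gegange" for "gegangen") scores the same as a blank:
key = {"q1": "ging", "q2": "gegangen"}
print(grade_gap_fill({"q1": "ging", "q2": "gegange"}, key))  # prints 50.0
```

Anything beyond this – partial credit, error classification, a personalized next step – is exactly the hard part the post points to.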


          [1] Canada already in 2010, and, as some still pending economic data will likely show, some Northwestern European nations also.

          [2] Some of it may be caused by an oligopoly: more glitz = more revenue, more cut – in spite of what students and educators really need?

Learning-materials-related posts

Here is an overview of learning-materials (creation service) related articles on this blog/CMS, per language, including shortcut links that save you building the advanced-search URLs as described in the upper right corner here.

Blogposts with learning materials (lm) or on the Creation of learning materials (lmC), to date 7/22/2014:

Language    |  lm | lmC | Grand Total
----------- | --- | --- | -----------
Arabic      |   7 |   4 |          11
English     |   8 |  10 |          18
Farsi       |   7 |   4 |          11
French      |  11 |   8 |          19
German      |  12 |   8 |          20
Hebrew      |   2 |   0 |           2
Hindi       |   7 |   5 |          12
Italian     |   8 |   6 |          14
Japanese    |   9 |   5 |          14
Korean      |   7 |   4 |          11
Latin       |   3 |   0 |           3
Mandarin    |   8 |   6 |          14
Polish      |   5 |   4 |           9
Portuguese  |   9 |   4 |          13
Russian     |   8 |   5 |          13
Spanish     |   9 |   9 |          18
Swahili     |   5 |   4 |           9
Yoruba      |   4 |   4 |           8
Grand Total | 129 |  90 |         219

File renaming utilities

When working with foreign language (learning) digital media files in electronic repositories, typical tasks include:

  1. adding metadata information to the filename
  2. handling foreign language characters in filenames across operating and file systems, including code pages

A good file renaming utility can work wonders in such situations.

I have been using the excellent BRU (Bulk Rename Utility) for a while, and have always liked its flexibility – which is apparent from its (initially somewhat intimidating) UI:

Now I found to my surprise, and confirmed with the help of the support forum, that foreign language character support is lacking from BRU’s Regular Expression implementation.

Enter Renamer, another file renaming utility, which features stable and beta versions, PDF documentation and a wiki.

As you can see in the following screenshot, Renamer makes it possible, using Unicode character codes in Regular Expressions, to replace e.g. all Mandarin characters in a filename.

Renamer also has built-in support for common tasks like cleaning up filenames by stripping common tags, transliterating foreign alphabets or adding file numbering (serializing).
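The same Unicode-range regular expression trick works outside any particular renaming utility. As a minimal sketch (the function name and the choice of range are mine, not Renamer’s): the CJK Unified Ideographs block U+4E00–U+9FFF covers most Mandarin characters, so a single character-class substitution strips them from a filename.

```python
import re

# Strip CJK Unified Ideographs (U+4E00-U+9FFF, the block covering
# most Mandarin characters) from a filename. Extend the class with
# e.g. Extension A (U+3400-U+4DBF) if your files need it.
CJK = re.compile(r"[\u4e00-\u9fff]")

def strip_cjk(filename: str) -> str:
    return CJK.sub("", filename)

print(strip_cjk("lesson03_你好.mp3"))  # -> lesson03_.mp3
```

The inverse (keeping only the CJK characters, e.g. to move them into a metadata field) is the same pattern with the substitution reversed.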

Both utilities are free and highly recommended, but also see TBA:part II for limitations.


Google new maps’ photo tour slideshow viewer


<Grumble>If these internet companies invent anything else, I won’t get out much anymore.</Grumble> (I have always loved “travelling on a map” too much already.)

This is not your grandparents’ photos feature in Google Maps anymore. The new photo tour automatically (I must assume: based on GIS data included in the photos? Image recognition to cluster motifs?) tags the map intelligently, includes highlights of well-known sights, and seems to group similar shots/motifs automatically, providing the feel of an in-depth exploration, even presence. And the occasional grandparents in the foreground remind us that this remains a crowd-sourced project…

Now how to plan a better intercultural map exploration class with this…

Free interactive online learning materials for Heinle Interaction

  1. Available here – in spite of the prominent user login button, you do not need to sign up.
  2. Rather, simply click “Select Chapter” to get started.
  3. You then have access to some of these types of exercises, per chapter:
  4. Free (no account needed):
    1. Tutorial Quiz
    2. Audio
    3. Web Search Activities
    4. Concentration
    5. Heinle Playlist
    6. Google Earth Coordinates
    7. Web Links
  5. Not free
    1. Flashcards
    2. Video (except for chapter 1, as a taster)
    3. Podcasts
    4. Crossword
    5. Chapter Glossary
  6. What this content is good for:
    1. Practicing, including with a tutor: since this content is not assessed, there is no ethical issue if the tutor helps with these materials.
    2. It is from edition 8, which is not the current edition – but I expect it to still map reasonably well to the current edition’s chapters:
      1. Le commerce et la consommation
      2. Modes de vie
      3. La vie des jeunes
      4. Les télécommunications
      5. La presse et le message
      6. Le mot et l’image
      7. Les transports et la technologie
      8. A la fac
      9. La francophonie
      10. Découvrir et se découvrir

How to type phonetic symbols on a computer

  1. Web-based on-screen keyboards (point-and-click; low learning curve, but not fast typing; you type into a textbox from where you can copy/paste the result into other programs):
    1. http://westonruter.github.com/ipa-chart/keyboard/: Sounds are systematically organized. Suitable for learners, but also good for teacher demonstrations.
    2. Partially based on keyboard shortcuts: http://www.ipatrainer.com/user/site/index.php?pageID=ipawriter:
      1. http://ipa.typeit.org/full/: Unlike the English version, the full version includes non-English sounds. The interface is optimized for fast typing (sorted by keyboard key). Presumably better for teachers using a screen projector as a whiteboard.
      2. i2speak.com (reviewed here earlier):
      3. Update: Richard Ishida’s picker also seems impressive, and you can use phonetics terminology to get characters selected.
  2. Windows-based:
    1. http://www.phon.ucl.ac.uk/resource/phonetics/: MS-Windows keyboard layout. May be good for even faster typing, if you can memorize the keyboard layout or add keyboard stickers (we unfortunately have too many languages vying for our hardware keyboard space already). Requires download & installation (may be added to the LRC keyboards during the next imaging if we receive enough requests).
    2. http://staff.washington.edu/dmontero/IPACharmap/.
    3. http://sourceforge.net/projects/allchars/: If you are used to the ALT+### method of entering characters and are still on XP, this may be for you: you can generate your own keyboard shortcuts for phonetic characters.
    4. MS-Word:
      1. http://email.eva.mpg.de/~bibiko/downloads/uniqoder/uniqoder.html: Lets you select IPA symbols from a toolbar. Untested.
  3. There are also always X-SAMPA, CXS and ASCII-IPA: ways of writing IPA in plain ASCII messages – but yet another thing to teach novices in phonetics may be a bridge too far.
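To give a flavor of the ASCII-based schemes just mentioned: the core idea of X-SAMPA is a one-to-one mapping from ASCII characters to IPA symbols. The toy converter below handles only a handful of single-character mappings (the symbol choices shown are standard X-SAMPA, but the function and table names are mine, and real X-SAMPA also has multi-character symbols and diacritics that this sketch ignores):

```python
# A toy sketch of X-SAMPA -> IPA conversion: single-character
# mappings only; multi-character symbols and diacritics are ignored.
XSAMPA_TO_IPA = {
    "S": "\u0283",  # esh - voiceless postalveolar fricative (sh)
    "Z": "\u0292",  # ezh - voiced postalveolar fricative
    "T": "\u03b8",  # theta - voiceless dental fricative (th in "thin")
    "D": "\u00f0",  # eth - voiced dental fricative (th in "this")
    "N": "\u014b",  # eng - velar nasal (ng)
    "@": "\u0259",  # schwa
    "I": "\u026a",  # near-close near-front unrounded vowel
}

def xsampa_to_ipa(text: str) -> str:
    """Replace known X-SAMPA characters; pass everything else through."""
    return "".join(XSAMPA_TO_IPA.get(ch, ch) for ch in text)

print(xsampa_to_ipa("SIp"))  # "ship" in X-SAMPA -> ʃɪp
```

Handy when you need IPA in a plain-text channel and can convert on arrival – though, as noted above, asking phonetics novices to learn a second notation on top of IPA may be a bridge too far.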