Why do (even advanced) EFL learners fail in these situations and what can EFL teachers do about it?
Richard Cauldwell, author of the Streaming Speech courseware and maintainer of a blog of the same title, recently announced the publication of an application (a so-called 'App') for the iPad. The App will be called Cool Speech - a 'cool name'! What's it going to be about? Here's what the creator writes about it in one of his blog entries:
A Hotspot is a moment in a recording that contains familiar words which are difficult to hear because they are spoken so fast. You learn to understand the words in these Hotspots by touching them on-screen. There are three kinds of touch:
You can hear the whole speech unit,
You can tap on the Hotspots, and hear them as they were originally spoken,
You can tap twice on the Hotspots and hear them spoken slowly and carefully.
The purpose is to teach you the relationship between fast unclear speech and slow clear speech, so that you will understand fast speech in everyday life.
In another blog entry Richard Cauldwell explains the term 'hotspots' thus:

[Hotspots] are moments in spontaneous speech where familiar, frequent words (including weak forms) are mushed out of shape and combined in such a way that they are difficult to perceive. This happens in fast stretches of speech: typically those which precede, and lie in between, prominent syllables. Using the multi-touch capabilities of tablets, users will be able to do intensive listening, and improve their ability to perceive such words.

So, let's assume that, when, for example, I hear [əŋənəbəˈleɪt] (this sample sentence is taken from Ashby (2011), Understanding Phonetics, p. 7), I understand <late>, but not the preceding mélange of sounds. If I tap on the hotspot, I hear [əŋənəbə]. If I tap on it twice, I probably hear [aɪm ɡəʊɪŋ tə bi] or even [aɪ æm ɡəʊɪŋ tuː biː] - we don't know yet. From this moment on I know that what the person said should have been understood as: "I'm going to be late".
From what I've read about this application so far, I don't quite see how learning will take place in any systematic way. Having been told that [əŋənəbə], pronounced as part of a particular longer stretch of speech by a particular speaker, means <I'm going to be> does not ensure that I will be able to decode another instance of [əŋənəbə], said by a different speaker at a different point in time, as the same sequence of words, nor does it increase the probability that I will be able to decipher other 'hotspots'. Thus the question remains open how learning, in the sense of a stable change in the skill of decoding relaxed speech, is going to be achieved. To me it seems more like a light-bulb moment. But maybe I'm wrong and there's more behind it than we've been told so far.
One should not forget that the number and types of reductions native speakers have at their disposal are almost infinite and that in many a case it is only the co-text and/or the context which allows one to understand a sequence of sounds "mushed out of shape".
I have had similar doubts about the value of recordings of 'authentic' speech as learning materials. In this internet age, access to native speech is cheap and easy, so I don't see much of a market for listening materials.
Moreover, many films nowadays have subtitles for the hard-of-hearing, subtitles which very often 'resolve' such "hotspots".