Thursday 3 November 2011

listening to relaxed English

If you are an EFL user/learner, you may have had the experience that, when you overheard two native speakers (= NSs) talking to each other (in an English accent you are basically accustomed to), you understood not a single word (or only fragments) because the NSs seemed to be speaking much too fast. What you perceived as hasty or hurried speech was, in fact, the result of words being pronounced with very little articulatory effort and with less precision than would be needed to enunciate them in their citation forms. Owing to this low effort the words came out more rapidly. The native speakers could afford this because they could be fairly certain that their communication partners would not fail to understand them. There are, however, people who speak very fast in almost any situation (e.g. Robert Peston).

Why do (even advanced) EFL learners fail in these situations, and what can EFL teachers do about it?

Richard Cauldwell, author of the Streaming Speech courseware and maintainer of a blog with the same title, recently announced the publication of an application (a so-called 'App') for the iPad. The App will be called Cool Speech - a 'cool name'! What's it going to be about? Here's what its creator writes about it in one of his blog entries:

A Hotspot is a moment in a recording that contains familiar words which are difficult to hear because they are spoken so fast. You learn to understand the words in these Hotspots by touching them on-screen. There are three kinds of touch:
You can hear the whole speech unit,
You can tap on the Hotspots, and hear them as they were originally spoken,
You can tap twice on the Hotspots and hear them spoken slowly and carefully.
The purpose is to teach you the relationship between fast unclear speech and slow clear speech, so that you will understand fast speech in everyday life.
In another blog entry Richard Cauldwell explains the term 'hotspots' thus:

[Hotspots] are moments in spontaneous speech where familiar, frequent words (including weak forms) are mushed out of shape and combined in such a way that [they] are difficult to perceive. This happens in fast stretches of speech: typically those which precede, and lie in between, prominent syllables. Using the multi-touch capabilities of tablets, users will be able to do intensive listening, and improve their ability to perceive such words.
So, let's assume that, when, for example, I hear [əŋənəbəˈleɪt] (this sample sentence is taken from Ashby (2011), Understanding Phonetics, p. 7), I understand <late>, but not the preceding mélange of sounds. If I tap on the hotspot, I hear [əŋənəbə]. If I tap on it twice, I probably hear [aɪm ɡəʊɪŋ tə bi] or even [aɪ æm ɡəʊɪŋ tuː biː] - we don't know yet. From that moment on I know that what the person said should have been understood as "I'm going to be late".
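Purely to make this interaction concrete for myself, here is a sketch of how the Hotspot mechanics, as I understand them from the blog entries, might be modelled in code. This is speculation on my part: all the names (SpeechUnit, Hotspot, AudioClip, play) are mine, not Cauldwell's, and the actual App will certainly be built differently.

```typescript
// A speculative sketch of the Hotspot interaction described above.
// All names and types are invented for illustration; none of this is
// taken from Cool Speech itself.

interface AudioClip {
  transcription: string; // what is actually heard, in IPA
  words: string;         // the intended words in ordinary spelling
}

// Stand-in for whatever playback mechanism the App really uses.
function play(clip: AudioClip): void {
  console.log(`playing [${clip.transcription}] = "${clip.words}"`);
}

class Hotspot {
  constructor(
    private original: AudioClip, // the stretch as originally spoken
    private slow: AudioClip      // the same stretch, slow and careful
  ) {}

  tapOnce(): void { play(this.original); } // one tap: original speed
  tapTwice(): void { play(this.slow); }    // two taps: slow and careful
}

class SpeechUnit {
  constructor(
    private whole: AudioClip,    // the whole speech unit
    readonly hotspots: Hotspot[] // the hard-to-hear stretches in it
  ) {}

  playWhole(): void { play(this.whole); }
}

// The example from the text: [əŋənəbəˈleɪt] = "I'm going to be late"
const hotspot = new Hotspot(
  { transcription: "əŋənəbə", words: "I'm going to be" },
  { transcription: "aɪm ɡəʊɪŋ tə bi", words: "I'm going to be" }
);
const unit = new SpeechUnit(
  { transcription: "əŋənəbəˈleɪt", words: "I'm going to be late" },
  [hotspot]
);

unit.playWhole();   // hear the whole speech unit
hotspot.tapOnce();  // hear [əŋənəbə] as originally spoken
hotspot.tapTwice(); // hear it spoken slowly and carefully
```

Note that in such a model the mapping from [əŋənəbə] to <I'm going to be> lives entirely inside one hand-prepared recording - which is precisely the point I take issue with below.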


From what I've read about this application so far, I don't quite see how learning will take place in any systematic way. Having been told that [əŋənəbə], pronounced as part of a particular longer stretch of speech by a particular speaker, means <I'm going to be> does not ensure that I will be able to decode another instance of [əŋənəbə] said by a different speaker at a different point in time as the same sequence of words, nor does it increase the probability that I will be able to decipher other 'hotspots'. Thus the question remains open as to how learning, in the sense of a stable change in the skill of decoding relaxed speech, is going to be achieved. To me it seems more like a light bulb moment. But maybe I'm wrong and there's more behind it than we've been told so far.

One should not forget that the number and types of reductions NSs have at their disposal are almost infinite and that in many a case it is only the co-text and/or the context which allows one to understand a sequence of sounds "mushed out of shape".

2 comments:

  1. I have had similar doubts about the value of recordings of 'authentic' speech as learning materials. In this internet age access to native speech is cheap and easy, so I don't see much of a market for listening materials.

  2. Moreover, many films nowadays have subtitles for the hard-of-hearing, subtitles which very often 'resolve' such "hotspots".
