Orient Expression

21 June, 2009

Say what you see…?

Filed under: Japanese, language — pyrotyger @ 4:22 pm
[Image: Excerpt from a 1436 primer on Chinese characters – via Wikipedia]

I’ve often heard the Japanese writing system described as being principally logographic, in that the written characters – at least the kanji, which are principally derived from written Chinese – represent words rather than sounds. The converse would be a phonographic script, in which characters represent sounds (phonemes) rather than words. In fact written Japanese combines these two approaches by using both iconic graphemes and a couple of syllabaries, allowing logographic words to be connected and embellished with a grammatical subtlety that Chinese dialects sadly lack.

Sorry if that was a bit wordy. Wikipedia is a great help for linguistic terms.

I wasn’t convinced that logogram is an appropriate term, even for the kanji used in Japanese. Since any kanji can be read in a number of very different ways phonemically, depending on context, while the idea it represents is more consistent, the term ideogram might be more accurate.

Yet this notion is strongly opposed in the article to which I’ve linked, which states that ideograms “represent ideas directly rather than words and morphemes, and none of the logographic systems described here are truly ideographic.” As it turns out, I’ve got this whole concept arse-about-tit. Although I thought logos was ancient Greek for “word”, it doesn’t actually mean that in the grammatical sense. Rather, it was used to define the concept or idea underlying a word or argument – the word’s soul, if you will – while lexis is the term used to describe the grammatical entity. This explains why logos is used in all sorts of religious and philosophical contexts where lexis wouldn’t be appropriate, and also explains why we call company brands “logos” even when they don’t feature words at all.

There you go. Another etymological mystery solved.

Whatever the linguistic definition, I find the eastern practice of combining discrete morphemes in iconic form to express complex notions and ideas to be both beautiful and inventive.

[Image: Furigana (振り仮名) text with furigana (ふりがな) – via Wikipedia]

It does make the written language very challenging to learn, though. If you can’t read a kanji, you can’t read it; you can’t even read it out to guess at the context, since it’s just an inscrutable symbol. The use of furigana – ruby hiragana (syllables) written over a kanji to guide pronunciation, often for teaching purposes or texts rich with specialist kanji – is of great help to a learner, but is nothing more than a workaround to an intractable challenge of learning Japanese.

And yet…

On more than one occasion – and increasingly frequently – I have the bizarre and unsettling experience of reading a kanji without actually understanding it. I mean that sometimes I will literally be able to read aloud the pronunciation of a symbol that I’ve only come across once or twice (or sometimes a hundred times – curse my memory), and have no firm idea what it means. It’s a little bit like bumping into someone you don’t recognise, but knowing their name – it’s the complete opposite of the usual mental block that occurs, and feels like knowing the answer but struggling to find the question.

Douglas Adams would probably be able to explain the frustration and disorientation better than I could.

Clearly something bizarre is happening in my brain. There is some direct association going on in there between the visual representation and the phonetic word, totally bypassing the usual intermediary of meaning. Most of the time I’ll recognise what a kanji means (or not…) and shortly afterwards I’ll remember its pronunciation, with that gap shrinking until the two come to mind almost simultaneously, but jumping from A to C without the all-important B getting a look-in is really frustrating and a little bit spooky.

What’s going on in there? Is this unsettling confusion between lexeme, logos and phoneme a sign that my brain is slowly adapting to the task of understanding Japanese inherently, or a sign that I probably never will? It brings back to mind a post I made back in October about translation and machine intelligence:

do [translators] listen in one language, then switch their thinking to the other – donning a different thinking-cap, as it were – before trying to express the nebulous ideas and idiosyncrasies in a natural fashion? I’m quite certain that it’s possible to “think” natively in more than one language…

Well, I know from certain people who are competently trilingual that, yes, it is possible and indeed inevitable when you become fluent.

I couldn’t tell you what language such people dream in, though. To dream in Japanese would be an achievement indeed. I just hope it doesn’t end up being anything like Natsume Sôseki’s Ten Nights of Dreams.

Dammit, this has got me thinking about the role of tonality and aesthetics in language, especially Chinese/Mandarin, and the importance of the right-brain in such languages. That’s an interesting topic for another time, I think, but feel free to have a look in Fundamental Neuroscience, p654 (pdf warning) if you’re curious.


16 October, 2008

I fear I to be unable such a thing do, Dave.

Filed under: language — pyrotyger @ 1:19 pm

Learning a language is like digging a moat for your sandcastle, as the tide inexorably rises. Or maybe like gardening. It isn’t enough to say “There, I’ve done that bit – now I can move on” – you must constantly revisit and renew your earlier endeavours, or they will be washed away, overgrown, lost like tears in the rain…

I have a pretty good facility for languages, I think. I don’t know why – a memory for detail and vocabulary, decent ability to pick up accents, or simply enough interest to make it stick – but whatever the reason, it’s something I struggle with less than most. Some years ago, during a very brief and somewhat abortive relationship with a lovely South African girl, I couldn’t help trying to pick up a bit of Afrikaans as a courtesy.
The accent wasn’t difficult – light on the tip of the tongue, heavy on the pharynx – and the grammar was the simplest I’d ever encountered (except perhaps Chinese), so it was good fun to throw new phrases I’d learned into conversation, and have the occasional slow, stuttering conversation in her native tongue.

As you can imagine, the opportunities to reprise my conversational Afrikaans have been somewhat scarce since then. I didn’t realise just how much of it I’d lost until someone offered to make me a cuppa tea. “Please”, I wanted to respond, and perversely chose to do it in Afrikaans. Only… I couldn’t remember the word!!
I mean, please, for goodness’ sake! It’s got to be one of the first ten words or phrases you learn in any language, and I was stumped. From having been able to understand and construct simple sentences, I suddenly had next-to-no vocabulary, just six years later.

The phrase I wanted (I remembered after a few moments) was Asseblief – roughly “if you please”. And yet I had no problem recalling the phrase for “I only speak a little – it’s a pretty language, but I never use it”. Obviously this phrase was one for which I’d had more use…

Human memory, of course, works nothing like a database. There are no convenient boxes in which to store information. There is no empty Tweetaalige Woordeboek (bilingual dictionary) waiting for you to indelibly inscribe it with every acquired transliteration.
Memory serves its purpose by retaining and reinforcing that which is used frequently, and slowly losing grip on that which is fleeting or trivial. The passage of memory from short-term, through its various stages, to long-term memory and (in the case of a skill like languages) into active process has been thoroughly researched by neuroscientists, linguists and tinkering hobbyist educational reformers for decades, and it all comes down to the three ‘R’s of learning:

  • Repetition
  • Redundancy
  • Repetition

(The above stolen from a Jhonen Vasquez comic about the spirit-crushing drudgery of state schooling, but I like it anyway.)

So it’s about what you use, and how often you use it. You can even unlearn your native tongue through atrophy. I know of a man who moved from England to Germany in his early thirties. Now at 65, he is still in touch with his friends in England – but he finds he can only communicate, haltingly, over the phone. If he tries to write or email, he struggles with the English language. In a Firefox-esque feat, he now thinks in German, quite naturally, and struggles to do so in English.

I wonder: will I ever be that good at Japanese? If I work hard, and move over there someday then… well, why not?

The process of professional translation intrigues me; I find myself wondering, how does it work in their heads? Do they listen in one language, and then express it quite naturally in the other without any intervening explicit process? Or do they listen in one language, then switch their thinking to the other – donning a different thinking-cap, as it were – before trying to express the nebulous ideas and idiosyncrasies in a natural fashion? I’m quite certain that it’s possible to “think” natively in more than one language…

Even then, translation is not a simple process. Grammar notwithstanding, even a literal rendering can become confusing when expression is wrapped up in culturally-significant shades of meaning.

I recall hearing of an assembly in the European parliament being brought to a standstill as, during a speech by the French representative, several of the English-speaking delegates burst into laughter. Having made an appeal for calm and rational consideration of the issues, he exclaimed that what the problem needed was “la sagesse Normande”.
The English translators, quite faithfully, relayed the speech thus:
“What we need is Norman Wisdom!”

That’s not the half of it though. Humans, with their inherent understanding of the ideas behind the words, can translate faithfully rather than merely literally. Computer software has no such cognitive gifts at its disposal, and the results of even the most sophisticated attempts at translation are derided throughout the blogosphere.

It’s the same problem: a database can give you a word-for-word equivalent, but nothing cogent or intuitive – and even with simple words, cultural ignorance can lead to confusion. A generation or two ago, Japanese had no distinct word for “green” in common use! あお (ao) is taken to mean blue, but it was also used for green not so long ago, and some Japanese still use it as such. This sort of cultural knowledge is invaluable when trying to make sense of, for example, Natsume Sôseki’s Ten Nights of Dreams. It’s easy to get stuck trying to understand the significance of the lily’s blue stalk…
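To make the word-for-word problem concrete, here is a toy sketch in Haskell – not any real translation engine, just a two-entry dictionary invented for illustration – showing how a bare lookup renders あお as “blue” every time, with no room for the older “green” sense:

```haskell
import qualified Data.Map as Map
import Data.Maybe (fromMaybe)

-- A made-up, two-entry "dictionary", purely for illustration;
-- real machine translation is far more sophisticated, but the
-- word-for-word failure mode is similar in spirit.
dictionary :: Map.Map String String
dictionary = Map.fromList
  [ ("あお", "blue")   -- no entry for the older "green" sense
  , ("くき", "stalk")
  ]

-- Bare lookup: no context, no culture, no idea what a lily is.
translateWord :: String -> String
translateWord w = fromMaybe ("[" ++ w ++ "?]") (Map.lookup w dictionary)

main :: IO ()
main = print (map translateWord ["あお", "くき"])
-- prints ["blue","stalk"] -- always "blue", even where a reader
-- steeped in the culture would see green
```

Everything the lookup doesn’t know about – context, era, the fact that it’s a lily being described – simply vanishes.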

Does this mean that elderly Japanese people can’t tell the difference between blue and green? No, of course not…
And yet, there is some truth in that statement, bizarre as it may sound. Not in an extreme sense, but studies have shown the importance of language to perception. According to research undertaken at Goldsmiths College (and almost certainly many other studies since), the range of words you have for different hues affects your ability to distinguish between them. If we had 20 words for subtly different shades of orange in the English language, we would perceive them as distinct colours, and would recall them as such without difficulty.

It all smacks of Derrida and Phenomenology, doesn’t it…?

This ties in nicely with another study (thank god for New Scientist) investigating the way in which our infant brains adapt to perceive distinct sounds characteristic of our mother tongue. Through repeatedly hearing – and presumably expressing – certain ranges of sound and learning to interpret them as the same sound, we lose the ability to distinguish between the subtle variations. This is quite necessary, for the sake of efficiency in communication, but can be a hindrance when learning a new language.
The classic example is the Japanese l/r sound, which is neither one nor the other. Through careful and diligent study, one can relearn the distinctions lost in infancy, but it is difficult – the mind learns to perceive certain patterns in the chaotic landscape of reality, and convincing the brain to jump out of its well-worn neural grooves is hard work.

So how can there be any hope for computers? Is it possible, somewhere in the hypothetical space-opera future, for software to “understand” language in the same way that humans do? Derrida or Heidegger might argue that all of perceived reality is exactly that – perception only. Given that language is the exclusive realm of signifiers and symbols, one might suppose that computers – which deal only with symbols and signifiers – would be ideally suited to the task. Can one be “trained”, in the manner of a human mind, to have intrinsic understanding of a concept? Can an artificial mind be kicked out of its paths of databases and into a more functional, fluid form of expression and translation?

Perhaps the answer lies in that last question. Functional programming languages (Haskell, Lisp) operate on a basis somewhere beyond the mechanical strictures of structured languages (Pascal, Ada) or the deliberate and measured methods of Object Oriented Programming (Java, C++). My brother (the Dysfunctor – get off your arse and fix your Blog, mate) could tell you a million times more than I could about this topic, but I have some very basic understanding. All things are functions – processes, if you will – and everything is signified rather than explicit. Sound familiar?
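For what it’s worth, here is a minimal Haskell sketch of that flavour – small functions composed into new ones, with nothing held as explicit state. The names (gloss, politely, announce) are invented purely for illustration, not taken from any real system:

```haskell
-- "Everything is a function": a phrase's meaning is just whatever
-- the functions applied to it say it is, never an explicit record.
type Phrase = String

-- A toy glossing function (invented for illustration).
gloss :: Phrase -> Phrase
gloss "dekimashita" = "finished"
gloss p             = p

-- Another tiny process, bolting on a polite flourish.
politely :: Phrase -> Phrase
politely p = p ++ ", if you please"

-- Composition (.) builds a new process out of the old ones --
-- no intermediate variables, no step-by-step bookkeeping.
announce :: Phrase -> Phrase
announce = politely . gloss

main :: IO ()
main = putStrLn (announce "dekimashita")
-- prints: finished, if you please
```

Whether that truly amounts to “signified rather than explicit” is arguable, but it gives some sense of why functional languages keep being invoked in these discussions.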

Artificial Intelligence (the emergent kind) and a computer really learning a language are in the same chapter of philosophy – the same page, even – because language, perception and intelligence are so closely linked. They’re pretty much a blurry smear of concepts, as any drunk philosophy undergrad will rant. There’s no point trying to tackle one without approaching the others, but if we come at it side-long, with a very long game-plan in mind, and functional programming as the tool (or the precursor to a better one), then who knows…?

Still more curious: if we created machines with the ability to learn and communicate, but didn’t teach them anything, what language would emerge from their society? What could we learn from their linguistic development?

Before they wiped us all out, I mean.

25 June, 2008

Mada wakarimasen (I don’t yet understand)

Filed under: Japanese, language — pyrotyger @ 6:35 pm
My Japanese teacher set me the task of writing a four-panel comic last week. I just had time to finish it before my lesson, so I didn’t get the chance to scan it in. You’ll have to make do with hastily-shot camera-phone photos (my translations follow each panel):

LONG DAY

Not again! You lazy #%!!*!

I work so hard, and you just do nothing…

I’m always looking out for you, but you never show any gratitude!

Myow? (Not again…)

The translation in the last panel was going to read “Feed me” – which would be funnier and, let’s face it, much more true to life – but I don’t know how to conjugate the Imperative Form yet, and I like having the cat throw the starting exclamation back at the woman.

The more astute students of Japanese will spot my obvious error (apart from the laughable simplicity of most of the grammar I’ve employed) – the second panel should end in せん (negative form) rather than す. It probably reads to a native Japanese-speaker the same way as a double-negative does to me, meaning it probably makes your eyes bleed.
Well nuts to it. It’s my first comic ever to see the light of day, so just one glaring error in an unfamiliar language is good enough for me. Mind you, this post is now available for comment, so I’m sure I’ll be told soon enough just how many other linguistic gaffes I’ve managed to cram into four panels…

The more astute students of art, humour and other matters of taste will spot the fact that my comic is neither pretty nor funny, since I can’t draw or come up with jokes. Strictly speaking, I imagine the only thing that qualifies this as a “comic” is the fact that it has four panels. This gives it the same artistic merit as a Ford Transit, only without the choice of colour or an optional SatNav.
Still, I did make the effort to go slightly manga-ey in the first panel, and that’s got to count for something. Please.

Aside from this travesty of the modern medium, the lesson was even more interesting than usual. We over-shot my allotted hour – by about an hour! – partly thanks to the distraction of a burgeoning friendship between Hiromi’s son & me (based principally on DS games and the ability to pull faces), but mainly due to an extensive discussion about language and learning.

We chatted about our experiences of learning different languages and, as is often the case, the act of discussing the topic caused my thus-far nebulous ideas of the subject to coalesce into a clearer opinion. In essence, I think we progress through successive stages of fluency, something to the tune of:

  • parroting – repeating words and phrases exactly as you hear them.
  • knowledge – getting to know what those words & phrases mean, and recombining them in context.
  • understanding – coming to grips with the interplay of context and content: conjugation, form and style (this is where you start to appreciate the fundamental differences between languages with different roots)
  • application – using your understanding to apply the language in different everyday contexts: on-the-fly construction of appropriate sentences, the beginnings of real expression.
  • habit – over time, on-the-fly processing becomes embedded: at this point, you’re able to actually converse at a practical depth and speed.
  • intuitive use – the habits embed deeper: you can pretty much “think” in the language.

So by this token, my learning of English as a native should have followed a similar pattern, right? Well, there are probably hundreds of books and papers on the subject, but nothing makes for a blog-entry like an embarrassing anecdote…

Cast your imagination back to my childhood – we’re talking 20 years here…
*pause for a little cry*
…sat cross-legged on the floor of the village primary-school’s assembly hall, staring up stiff-necked at the projected lyrics on the wall, hoping some poor kid doesn’t wee themselves again (there really is nothing as pitiful as a little boy sat silently with his red, tear-streaked face in his hands as, one by one, his former friends leap away excitedly from the slowly-expanding pool of wee in which he stews…).
The song may be a well-known one, or it may be something our musically-inclined head teacher composed himself. He wrote assembly songs with the same casual frequency that other people make a cuppa tea. What matters is, the last line of the chorus was a sustained:

“And praise your hoooooo-ly naaaaaame!”

(Thinking about it, Mr Johns was a bit too secular to have written that.)
The line was not – I can’t stress this enough – the more confusing “And prisha-horrrrrr-lee-naaaaaaay!”

Not that I was aware of this, because that line had mostly smudged off the acetate sheet, but then it didn’t matter. I was parroting the phrase in order to sing the song, but without knowing much about the lyric’s significance and only having heard it sung the same way, I didn’t have any reason to think differently. I was hardly likely to have the opportunity to apply the phrase in my playground banter and have it corrected, and frankly it served its purpose without requiring any knowledge of its real meaning.
At the time all I cared about was getting through the song so I could relax my neck, and not sound like Andrew Bunting in the process (a boy whose curiously rich, tone-deaf bass was probably the cause of all that embarrassing incontinence – I suspect he would have caused whales to beach themselves and go into premature labour given a sufficiently low melody).

Those who were fortunate enough to watch TMWRNJ, back in the day when there was anything on telly on a Sunday except bloody Hollyoaks, may remember Richard Herring banging on about a similar misunderstanding, insisting that Jesus was “the Lord of the Dance Settee” (said he). Come on, there are whole books dedicated to children mishearing speech and believing that “the ants are my friends“, or whatever.

The point is, we learn the same way as children do; it’s just that when learning something new we don’t have any preconceptions against which to judge our current understanding – until we come to learn a second language.

So last night, in learning how to conjugate verbs to give the Past Tense, I discovered that shouting できました (“dekimashita”) at the end of a round of Hiragana-bingo in our early lessons was not equivalent to shouting “House!”
It meant “finished”.
I have developed some minimal level of proficiency in Japanese, which has helped me to evolve this little island of (incorrect) Knowledge, giving it a land-bridge to the ever-expanding continent of Understanding, wherein the highway-planning agency of Directed Learning is extending its road network of Application, so the articulated lorries of Habit can start to wear their useful grooves into my neural pathways…

So, for all that I find Kanji attractive and interesting, learning stroke-order is as nothing when compared to trying to compose an admittedly un-funny joke, for in such ways are we forced to re-evaluate the misheard lyrics of our early lessons, and come ever closer to understanding the heart of what is, at this stage, still a foreign language.

Which is why you’ve been subjected to my Ford Transit of a comic.

Dekimashita.
