{Arrest This Man, He Talks In Maths}


With your feet in the air, and your head on the ground . . .


{Friday, August 01, 2003}

Jess and I were talking about the development of moral and social reasoning this morning, over coffee & pastries at Carberry's. She asked when kids learn to cooperate to succeed at a task; I didn't know, but I thought it was a great, very relevant question.

So, this is from an email I just sent my advisor (read the first link before the rest; none of what I say may make sense without it, and it's interesting stuff):

We seem to focus on Theory of Mind as the root of social cognitive development; this makes lots of sense, to me, as you're pretty limited in your ability to reason about social situations if you lack understanding of the internal beliefs, desires, etc of agents around you.

We had a conference on Moral Development, and this made a little less sense to me; there was discussion of the development of empathy, and some Theory of Mind, and some nativist proposals concerning innate moral rules. This was all interesting, but I've always had trouble looking at the world through a moral lens; morals aren't very real, to me: they don't map onto the physical world, and they don't seem to offer very deep explanations of human action.*

Basically, I see the world this way: moral rules are a shortcut to socially functional behavior. The "Golden Rule", or the "ethic of reciprocity", for instance, is simply stated, simply understood, and pretty effective (if followed) for a culture; however, from a social psychology (or game theoretical) perspective I think you would say there's a _reason_ this is a "good" ethic: if you consistently follow this rule, you will generally not piss people off, and if others notice this, your reputation (honor) will improve and other individuals will trust you. All of this will result in: (A) a maximization of cooperative opportunities with greater rewards than are obtainable alone, and (B) a minimization of retributive actions against you. There are always other good (but high risk) strategies like "leverage your power and exploit everybody else because they can't do anything about it", but this only really works for a few individuals; most would get hurt by retribution.
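To make that game-theoretic point concrete, here's a quick Python toy (my own illustration, with made-up payoffs, not anything from the literature): an iterated prisoner's dilemma in which a reciprocal strategy ("tit-for-tat": cooperate first, then mirror the partner's last move) plays against unconditional cooperation and unconditional defection.

```python
# Standard prisoner's dilemma payoffs (values are the usual textbook ones).
PAYOFF = {  # (my move, partner's move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I'm exploited
    ("D", "C"): 5,  # I exploit
    ("D", "D"): 1,  # mutual defection (retribution)
}

def tit_for_tat(history):
    # Cooperate on the first round, then copy the partner's last move.
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def always_cooperate(history):
    return "C"

def play(a, b, rounds=10):
    # Each player sees history as a list of (own move, partner's move).
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        ma, mb = a(hist_a), b(hist_b)
        score_a += PAYOFF[(ma, mb)]
        score_b += PAYOFF[(mb, ma)]
        hist_a.append((ma, mb))
        hist_b.append((mb, ma))
    return score_a, score_b

# Reciprocity earns the high mutual-cooperation payoff with cooperators,
# and after one bad round it stops being exploitable by defectors.
print(play(tit_for_tat, always_cooperate))  # (30, 30)
print(play(tit_for_tat, always_defect))     # (9, 14)
```

The "exploit everybody" strategy wins any single pairing, but reciprocity racks up the cooperative payoffs across repeated interactions, which is the sense in which the ethic of reciprocity is a "good" rule.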

In any case, if you want to follow this effective "ethic of reciprocity" you can either (1) learn the rule, or (2) understand all the social psychology.

It seems marginally obvious that (1) is easier.

But. It doesn't generalize that well; while the ethic of reciprocity is a really good one, there are lots of other moral/ethical rules to learn, and they pretty much all conflict, from time to time, with immediate interests or desires that you have. When faced with such a conflict, you have a decision to make: you have to consider your desire D, your moral code M, and (if you want to think a little harder) the social consequences of your possible actions.

Adults consider social consequences all the time, regardless of whether ethics are involved. My position (not supported by any data, that I know of) is that 95% of the time when there is a conflict between D & M (such that ethics might be involved) the choice of action based on consideration of social consequences will match the choice of action based on your moral code, as, at least in the long term, violations of cultural-norm morals lead to deep, negative social consequences.

This means that if you're willing to invest in a careful consideration of social consequences, you don't need moral or ethical codes.

The point of all of this is really simple:

I suspect kids are limited in their understanding of social dynamics, as compared to adults. (duh)

I suspect there's a gradual transition from rule-based strategies to consideration of social consequences . . . late in development (adulthood?) for most people; possibly never, for some.

And finally . . .

I think there's an important difference between a basic-level belief/desire theory of mind (what I've seen in the literature), and a "dynamic" theory of mind that integrates the effects of our actions on the internal states of others. This latter dynamic ToM seems like the real core system of adult social cognition. I suspect this system begins to develop fairly early on; once you have a full "static" ToM and any kind of general learning mechanism, you've got all the tools you need to build it.

I'm trying to think of an empirical way to get at this; it seems like there should be something simple from game theory - some variation on the prisoner's dilemma adaptable to a toddler game/task - that would let me test whether kids understand the effect of their actions towards a co-participant, in an initial stage, on their success/reward in a second stage that has a large "cooperative" payout. However, I realize that attempting to simulate real, rich social interactions and social learning in a brief "game" or "task" might be a poor approach. Might there be a way to do this with "verbal reasoning" style questions concerning fictional social interactions detailed in (very) short stories?
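Here's a sketch of the payoff structure I have in mind for that two-stage task (the choices, names, and numbers are hypothetical placeholders, not an actual experimental design): in stage one the child either shares a resource with a partner or keeps an extra bit for themselves; in stage two, a big cooperative payout is available only if the partner is still willing to help, and that willingness depends on how they were treated in stage one.

```python
# Hypothetical two-stage cooperation task (all payoffs are placeholders).

def partner_cooperates(stage1_choice):
    # The partner's internal state is changed by the child's stage-1 action.
    # Predicting this link is exactly what a "dynamic" ToM buys you;
    # a purely static belief/desire ToM doesn't require it.
    return stage1_choice == "share"

def total_reward(stage1_choice, solo_reward=1, keep_bonus=2, coop_reward=10):
    # Stage 1: keeping the resource gives an immediate bonus; sharing gives none.
    stage1 = keep_bonus if stage1_choice == "keep" else 0
    # Stage 2: the large payout requires the partner's cooperation.
    stage2 = coop_reward if partner_cooperates(stage1_choice) else solo_reward
    return stage1 + stage2

print(total_reward("keep"))   # 3  -> short-term gain, partner won't cooperate
print(total_reward("share"))  # 10 -> forgoing the bonus unlocks the big payout
```

A kid who maximizes only the stage-one payoff is behaving like they lack the dynamic link; a kid who gives up the immediate bonus to secure the cooperative payout is, at least implicitly, modeling the effect of their action on the partner's internal state.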

posted by Miles 3:29 PM
