Feelings of Science

David Hume's philosophy of morality is distinct from most approaches in that it does not postulate some set of principles or criteria for moral behaviour.

Rather, he argues that we are governed by a 'moral sense' that tells us when an act is right or wrong. "Extinguish all the warm feelings and prepossessions in favour of virtue, and all disgust or aversion to vice: render men totally indifferent towards these distinctions; and morality is no longer a practical study, nor has any tendency to regulate our lives and actions." (Hume, An Enquiry Concerning the Principles of Morals)

The idea here isn't that Hume is arguing for some sort of moral relativism where 'anything goes', though he has often been mischaracterized that way. Rather, it is that other putatively 'objective' measures of morality are crude instruments, and that our own sensations are fine-tuned detectors of moral nuance that can be developed, through practice and experience, into reliable measures of morality.

It's a bit like the difference between monitoring the gauges on a dashboard and a driver feeling how the car responds to the road around corners and while braking. An experienced and sensitive driver can tell if there's something wrong with the car well before any objective instruments can because the car doesn't 'feel' right.

Note again that this isn't an 'anything goes' theory of auto mechanics. There is an objective fact of the matter as to whether the car is performing badly or not. But this fact is not equated with the readings on the dashboard gauges, or indeed with any particular measure that can be determined a priori. Prior to an actual breakdown or measurable malfunction, it is no more than wear and tear in the car's engine, something the human mind can sense well before any more coarsely-tuned measuring device can.

This sort of judging appropriateness by feeling isn't limited to ethics and auto mechanics. Rob Cottingham posted a cartoon today and then discussed how he came to produce it.


He writes, "For me, the trick is to not overthink it, because that’s a sure route to paralysis... ultimately, the goal is to create the cartoon I want to make, and then reach more of the people who will enjoy that cartoon." In a cartoon there can be a million variables that go into the definition of 'good', and the cartoonist does not create a cartoon to specifications but rather works by feel.

These considerations emerged today as I discussed the concept of a connectivist research methodology with Sheri Oberman. Now I hadn't really thought in such terms about connectivism - I remarked that I see myself as more akin to an explorer than an experimenter, and that my methodology is based more in Paul Feyerabend than in anything else: "The idea that science can, and should, be run according to fixed and universal rules, is both unrealistic and pernicious. It is unrealistic, for it takes too simple a view of the talents of man and of the circumstances which encourage, or cause, their development. And it is pernicious, for the attempt to enforce the rules is bound to increase our professional qualifications at the expense of our humanity."

But again, this isn't an 'anything goes' methodology, and Oberman is right, I think, to suggest that a connectivist methodology would be based in some significant way on connections. And when I reflect on my own practice it does seem to me that my own work is based in forming connections - though, more specifically, it is based in acting as a node in a network, and not in network-forming per se (I think the concept of 'building networks' is a bit misleading; if we want to be a part of a network we must be in the network, as a node, and not outside it, as a god).

But what would that methodology look like? Again, I could probably draw out some criteria - I've talked about the importance of autonomy and diversity, etc., in the past, and these qualities certainly characterize my own practice. And perhaps, after the fact, you could measure my own research performance against, say, an 'autonomy index', and determine to what degree I practised and promoted autonomy in my own work.

But that's not how I actually evaluate my own work. It's not that the criteria are wrong. It's that, first of all, the criteria that determine whether my work was a success or not do not emerge until later, and second, even then, I evaluate my work according to how it feels rather than against any such criteria (indeed, it would drive me crazy to try to evaluate against such criteria).

For example, consider the Skype conversation I had this morning, and practices like it (I have another in less than half an hour, and I routinely have short conversations with people interested in this and that). I'll pose one 'research' question: should I record them? (Another: should I blog about them afterwards? Etc.) I don't record them, because I want the conversations to feel more like practice than performance. Is this a correct methodology? What would tell me whether it was? I won't know until some time in the future on what basis these conversations were a success or otherwise. But I do know I have a pretty good feel for such things, so that's what I use.

Would it be better if there were some criteria against which I made my decision whether or not to record? No, because the success of the conversation is based in much more than whether or not it is recorded, and so any such standard would be artificial and arbitrary. And, in some important respects, wrong.

Oberman mentioned knowing whether a dance is successful. It's the same sort of thing, again. I know whether I am dancing well by how it feels when I'm dancing. If I'm feeling awkward, not knowing where to put my feet, unsure if I'm holding my partner properly, and all the rest (and I speak from experience here), then I know I am dancing badly. By contrast, if these concerns fall by the wayside and I feel only a smoothness of motion and attachment to my partner, then the dance is progressing well.

Now, a couple of things. I could assess my dance against a dancing design pattern, consisting (for example) of a series of step marks imprinted on the floor (kind of like the old 'figures' in figure skating). This would certainly be objective, and measurable. But it would be incorrect - I could dance poorly even while hitting every step, and dance well even though missing the mark. Indeed, the point of the dance is to do more than merely replicate a best practice; it is to take it and make it something more.

Again, though, note that this is not an 'anything goes' theory of dancing. Nor is it even a theory that supposes that my own standard of 'good' dancing is static (and hence, forever primitive). As I dance, as I watch other dancers, as I discuss dancing with my dance partner (or with total strangers I've bumped into on the dance floor), my sense of dance becomes more refined. What I feel changes. What used to feel pretty good now seems to me to be slow and simplistic. As I evolve, I strive to be a better dancer, and my sensation becomes one of detecting this improvement in my dance.

This is an important point. One of the fundamental difficulties with the empirical sciences is that the science of measurement - which is what we need in order to obtain experimental results - is itself an empirical discipline, and itself subject to amendment and improvement over time. Nowhere is this more evident than in personal perception - our tastes in music when we are young are (typically, and with some caveats) laughable when we are older. Did I really buy that Bay City Rollers album? Yeah - I did.

The literature of aesthetics is full of references to things like the refined palate in wine tasting, the expertise of the chef in cooking, the appreciation of a master carpenter for a fine mortise and tenon. People who study colour closely are able to distinguish differences in tint and tone that will escape a novice. Our quality of experience improves over time, and it does so because our capacity to perceive nuance, distinction and difference is improved, and this reflects the impact of hundreds of thousands of individual experiences over time on our mind. Our brains, quite literally, become shaped into better perceivers (given the appropriate practice and experience).

As this is true of individuals, so it is true of the assessment of science and research in society generally. For while we have today a trend toward objective criteria-based assessments of research and science, a connectivist approach (if there were one) would suggest that the acceptance of a research methodology or a specific research program is an emergent phenomenon, describable only in terms of that program's placement in the wider network.

What does that mean? One way of stating it is that society as a whole feels good about a given program, and senses discord about another. Think of the sort of social approbation or revulsion we feel for certain moral or immoral acts. In Canada, for example, a crime of murder and dismemberment has just been committed, and there is a widespread sense of revulsion regarding the fact that it was recorded and posted on the internet. This isn't a matter of voting or counting individual preferences, or of violating some guideline, law or precept; it's more like a whole-body response to the phenomenon.

What constitutes it? Precisely this: the set of interactions each of us has with the others - the call-in radio shows, the blog posts, and the rest - combined with our inner sensations about the act as they are expressed in a myriad of ways, some not even connected to the act. (That's why simply counting votes would be inappropriate: it completely misses the changed way we regard each other in the grocery store, a change imperceptible and almost unidentifiable, but one you might say you saw there, if you were sensitive to it and looking for it.)
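
To make that contrast a little more concrete, here is a small toy sketch in Python. It is purely an illustration of my own: the network, the numbers and the update rule are all invented, and it is not a model of any actual social process. The point is only that the same set of private reactions can yield different results depending on whether you tally them up or let them settle through the pattern of interactions.

    # A toy illustration only: compare "counting votes" (a simple average of
    # private reactions) with a crude network-level "sense" that emerges from
    # repeated interaction, so that a reaction's weight depends on where it
    # sits in the network rather than on the raw tally.
    import random

    random.seed(1)

    N = 30
    # Private reactions to some event, from -1 (revulsion) to +1 (approval).
    reaction = [random.uniform(-1, 1) for _ in range(N)]

    # A random interaction network: whom each person listens to
    # (call-in shows, blog posts, chats in the grocery store).
    listens_to = {i: random.sample([j for j in range(N) if j != i], 4)
                  for i in range(N)}

    # "Counting votes": the average of the private reactions.
    vote_count = sum(reaction) / N

    # Emergent sense: each person repeatedly adjusts toward those they listen
    # to, so the settled state reflects placement in the network.
    state = reaction[:]
    for _ in range(50):
        state = [0.5 * state[i] +
                 0.5 * sum(state[j] for j in listens_to[i]) / len(listens_to[i])
                 for i in range(N)]

    network_sense = sum(state) / N

    print("vote count      : %+.3f" % vote_count)
    print("network 'sense' : %+.3f" % network_sense)

On a run like this the two numbers typically differ, because the settled value ends up weighting each reaction by where its holder sits among the others, not by one person, one vote. That is all the sketch is meant to show.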

This form of social perception is the ultimate judge of the adequacy of any research program (or musical taste, or dance moves, or cartoon, or any of the rest of it). Again, it is not an electoral process, nor a market behaviour (quite the contrary; these are mass phenomena intended and designed to magnify the needs and interests of individual members of society, rather than to reflect the sense of society as a whole).

And - importantly - like personal perception, social perception is itself subject to refinement and improvement. It's as Richard Duschl writes, "the proper game for understanding the nature and development of scientific knowledge is engagement with the ongoing pursuit and refinement of methods, evidence, and explanations and the subsequent handling of anomalies that are a critical component of proposing and evaluating scientific models and theories."

The mechanisms we use to validate are - or ought to be - similar to those used to validate great cuisine, or dancing, or auto mechanics. Both society as a whole and experts in particular play a role. Society probably defines relevance - and again, relevance may not be immediately apparent. Most of society understands this, and we have always kept a place for abstruse researches, not because we understand them, but because we don't. Experts, in the meantime, are needed to distinguish the gold from the dross, the genuine from the imitation; their own inner sense of the discipline has been finely honed.

We need both, and we need these to be undefined, rather than specified in terms of some sort of code of guidelines or best practices or whatever, not only because such codes are hopelessly inaccurate abstractions of the judgements that are actually made, but also because by their very nature they are resistant to the sort of growth and personal development every society needs. When societies learn to feel and not just to measure, arts and sciences flourish; when they return to standards and specifications, they have lost that capacity, and a decline has begun.

Comments

  1. Kia ora e Stephen

    I have to say that your post reeks of postmodern culture. This is not a criticism. It’s an observation. I take a few examples from your post to explain this.

    Rob’s line, “. . . the trick is to not overthink it, because that’s a sure route to paralysis”, which you quoted, is postmodern, in any context. Rob used it in an artistically creative context, however, not in a scientific one, so I wonder about the contextual relevance of your use of this in relation to Science.

    You align your (scientific?) methodology with Paul’s, whose premise is that “the consistency condition which demands that new hypotheses agree with accepted theories is unreasonable because it preserves the older theory, and not the better theory”. This opinion uses a juxtaposition of the words ‘old’ and ‘better’ in a way that may suggest to some that new is good and old is bad. How postmodern is that!

    I agree with the dancing bit that you extemporise on. The line, most often attributed to G K Chesterton, that summarises this is, “if a thing’s worth doing, it’s worth doing badly”. This, of course, is entirely subjective. There are many instances in the history of Science that have shown that, with the passage of time, what was once revered as great science was ultimately found wanting. Nevertheless, there’s probably a similar array of instances when experimental results that were guffawed at were later found to be plainly indicative of things that were then understood to be significant. Just reflect on the very earliest (pre-electronic technology) attempts to determine the speed of light in the context of the significance of errors in making measurements. Again, of course, this is all entirely subjective depending on the point of view.

    I teach senior secondary school Chemistry, and will often coach high-flying students in techniques to do with answering questions in an exam. Part of that summarises the three M’s, the essence of how Science is most often understood by people who think they know (subjective) what science thinking is about: the macro (overall effect, observation or expectation) – the micro (the minutiae, the detail of the postulated activity of fundamental particles, however small) – and the model, which can be used to appreciate a visual idea or concept of the macro or micro postulated phenomena. Science is like that. Someone who thoroughly understands all this knows that there is no set or fixed rule or suite of rules for ‘doing’ Science. Rather, there are different ways of looking at things and certainly different methods used when attempting to find out more, whether done by a secondary student or a research fellow or even a would-be-Science entrepreneur.

    Science, its methods and all that it holds for humanity can never be thrown into one common box. At present, I believe that there is a tendency to do just that with all things pertaining to Science.

    Nga mihi nui

  2. This is not the first time I have been called post-modernist and it certainly will not be the last.

    I have long questioned modernist assumptions about progress and the unity of the sciences. That said, I wear the post-modernist mantle lightly - it does not matter whether I am a post-modernist; what matters is that I can provide the most reasonable and (dare I say?) accurate account of learning and discovery.

    Probably the most accurate thing that can be said here is that scientific method is a moving target, or perhaps better, a random walk. Success in science depends less on methodology and more on recognition and insight. New science creates new methodology; the two walk in tandem.

    Replies
    1. Kia ora e Stephen

      I have thought long and perhaps even hard about your words, "scientific method is a moving target". Something about these wrangled with me. I cannot see method as a target. Rather, I feel that method may be used to aim and perhaps hit the target - yes. I then wondered, what is the moving target if it is not method? Only then could I find some logic in your statement. Surely it is what method aims at that is the target – and I agree that, clearly, it seems to be moving and perhaps previously tried methods may not be so useful for aiming at that.

      Nga mihi

  3. You say, " ... success is something that we perceive rather than something we measure."

    This captures my unease with learning analytics, which might be a good idea if used to perceive but could easily be a miserable idea if we only measure. You do seem to suggest, however, that even perception is amenable in part to analysis. For example, in your dance example you say, "the point of the dance is ... more than to merely replicate a best practice, it is to take it and make it something more." And you mention a couple indicators, "a smoothness of motion and attachment to my partner."

    Still, in an institutional education setting, the pressures to measure far outweigh any tendencies to perceive. So I remain doubtful that learning analytics will be as helpful as it might be.

    Nice post. Thanks. ... Gary

