Mar 14, 2015

Family of Mind (Internal Family Systems)

English: System Dynamics Modeling as One Approach to Systems Thinking (Photo credit: Wikipedia)
Followers of this blog and others that I write -- yes, I mean all two or three of you -- may have noticed that after years of inconsistent posting I am posting like a mad demon. I think that there's a reason for this; I think that I know the reason; and I hope that writing about what I think is the reason will not suddenly reactivate the dreaded Wannabe Blogger Syndrome, which I have had to endure for years.

This part of my journey starts with the mindfulness course from The Great Courses, which I blogged about (or will have blogged about) here.

In that post, I write about "Mindfulness-Based Stress Reduction" (MBSR) (Wikipedia ref, here). Subsequently the course mentioned something called "Internal Family Systems," or IFS, which I researched, and which I credit with my blogging renaissance, among other good things.

IFS traces its roots to many different disciplines, but for purposes of discussion let's tie it back to the "Society of Mind," which I wrote about here.

Society of Mind proposes that our minds are not unitary; instead, they are composed of many "agents," each of which has its own orientation, skills, and goals and which can cooperate, compete, and even subvert one another. Kind of like fractal people inside people.

IFS was created by therapist Richard Schwartz, who observed his patients saying things like "a part of me wants to do this, while another part of me wants to do that" as they discussed their internal conflicts. Schwartz started trying to understand what the parts of his patients wanted, and how they related to each other. And because he was a family therapist, he framed the "parts of me" idea in terms of the structure and dynamics of a family. I think of Schwartz's "parts" concept as similar to Minsky's "agents" concept, but more familiar, because we know families better than we know societies in the large.

Like family members in an ordinary functional family, the members of a functional internal "family of mind" can work for the good of the family. Even when they find their roles in conflict, they find ways to work things out. Like family members in an ordinary dysfunctional family, the members of a dysfunctional "family of mind" can be at war with one another and act to harm other family members even to their own detriment. 

The families we grew up in, and the families some of us have since formed, always combine the functional and dysfunctional modes. As a result, the family metaphor is familiar, and can be useful.

The idea of having "parts of myself" that did not always cooperate was familiar to me. Sometime in the morning I'd feel that a distinct "part of me" wanted to get up and do stuff, while another "part of me" wanted to stay asleep. And me? I seemed to be the part that was watching the other two parts and wondering: WTF?

In the IFS system, "parts" fall into three groups: protectors (sometimes called managers) are parts that try to keep the family stable and functioning; exiles are wounded, vulnerable parts carrying pain that the system tries to keep out of awareness; firefighters are parts that take extreme action when the protectors fail to contain the exiles. And then there's a unique part that Schwartz refers to as "the Self." The Self is always calm and compassionate, open and non-judgmental: an internal Buddha figure. But the Self often gets "blended" with parts and loses its unique character.

Parts are "activated" by circumstances, including the behavior of other parts, and when they are activated, each parts acts in accord with its nature, history, experience, and assumptions. The set of parts that are activated and their interactions determine how the perceived human person behaves.

IFS suggests that you have conversations with these parts. I know it sounds a bit loony, but I was up for it.

So in my half-waking state, a part of me wanted to get up and greet the day; a part of me wanted to sleep, and my Self was watching, not knowing what to do. IFS gave my Self a strategy: I decided to talk to the part that wanted me to stay in bed. 

As I talked with that part, I realized that this part also activates when I am writing: I become incredibly exhausted. Sometimes just the idea of writing is enough to make me tired, so I read stuff instead. Thus a million open tabs on my browser, and nothing written.

IFS suggests you give names to the parts: after all, you can't tell the players without a program. So I chose (or it chose) "Morpheus" as a name.

What did Morpheus want?

As the conversation in my mind evolved, it seemed that Morpheus was a protector: it wanted to avoid conflict and the discomfort that exiled parts might experience, and its solution was simple: to put everyone to sleep. When I pointed out that not every part of me wanted to sleep, that some parts of me (and me, my Self) wanted to write, not sleep, Morpheus had an answer. "Sleeeep! Sleeep!" And I got sleepy.

No, really.

This happens often when I write. I'll be full of energy, ready to rock, and sometimes as soon as I start to write and sometimes after a little writing, I get tired. My usual response was to succumb and nap, or to do some physical exercise--or browse the web. I kept trying to talk to this Morpheus thing, genuinely interested. And Morpheus would talk a bit, and then interrupt: "Sleeeep! Sleeep!" 

Now that I knew what was going on, or what I thought was going on, I was able to explain firmly but politely that I understood that Morpheus was trying to be helpful, but this was not helping. Eventually (and I may have had to take a few naps in the process) I learned that there was another part of me, one that Morpheus was trying to protect by putting "all of me" to sleep. I perceived this part as an exile: the fragile, vulnerable, fearful, sad little boy that I used to be.

I suppose everyone is different now than they were as youngsters, but to me the contrast between the person I am now and the part that slowly revealed itself was stark. I look at failure as the necessary price for learning. Failure sometimes hurts, but the hurt doesn't last. There is nothing that I have ever done that I now feel shame for having done. Mistakes, failures, doing things that were stupid and even shameful are what got me to who I am today, and I feel pretty good about who I am today.

I was a timid, fearful person for a great part of my life. As I grew into the responsibilities of having a family I often succeeded because my fear of failure, and the shame I knew I would feel if I failed, were much, much, much greater than any other fears and discomforts I'd experience if I did what I needed to do for my family to prosper.

After retirement the fear subsided, and I found that I was able to confront any new thing without fear of discomfort, or embarrassment, or incompetence. I would back away from things that were truly physically risky, but that became the limit of my concern.

Writing? I'm entirely prepared to write things that are shit, and not care about it. Because I believe that the way to get good at writing is by doing a lot of shitty writing. And because I fucking like writing.

That's how I feel. But not all of me feels that way.

That part that I identify as an earlier version of me worries about these things. I remember my failures as facts and mainly remember what I have learned; that part of me remembers mainly the feelings of hurt, shame, pain. It remembers feeling worthless, wanting to be dead rather than to endure its continued existence, but not being able to die because how would the family survive?

It was surprising to discover that the old version was "alive and unwell." It was surprising because I had never experienced it standing in my way. It wasn't present. Until I started these conversations with parts of me, I wasn't aware of it. And how could I have been aware of it? Whenever that part of me began to activate, Morpheus would put it, and whatever other part of me was awake, to sleep.

I named that part of me Little Michael. I could not write, so the developing narrative went, because Morpheus was doing what Morpheus could do to keep Little Michael from suffering the feelings he was stuck in. Morpheus protected Little Michael by keeping us from completing writing projects, and by putting us to sleep soon after we started.

Frustrating and puzzling as it was, I (the Self) could tolerate failure to write. But Little Michael could not endure what he had to endure: the pain of choosing a word while feeling there was a better word he could not think of; the pain of choosing the wrong label for a post; the embarrassment of spelling something incorrectly and having it discovered by someone else; the agony of knowing that something could have been written better than it was, and yet not being able to produce that better thing.

I could say: "Fuck it! I love to write, and I'm going to write."

Little Michael could only curl up in a little ball and cry.

Or Morpheus would keep the peace in the family by putting us both to sleep.

Since having my first conversation with Morpheus, and with Little Michael, and others in my internal family, things have been changing, and my blogging is just one piece of evidence. When I do my morning pages I'll sometimes have a conversation with a struggling family member and so far the outcomes have all been good ones.  Each conversation helps me clarify the dynamics of my internal family system; each helps me be clearer about what is my Self and what is a "part;" and parts of me whose development has been stunted are starting to grow up.

Little Michael isn't the pussy he used to be. And he can even chuckle a bit at my writing that.

Is it real, or just a story I've made up? In the end, it does not matter. Minsky points out that there are good reasons why we can't happen on some new idea and just change our minds. If we could, then our whole self could be hijacked at any time by the agent bearing the idea; and no one could trust us if we were that changeable.

So there's a social contract in place. We can change slowly. We can change with great effort. And we can change after a truly significant event: a near-death experience; falling in love; finding Jesus. Reading a book full of good ideas does not qualify.

Or we can discover an internal family system that "explains" our dysfunction, and engage with it, and experience something with enough explanatory power to let us do what we could do all the time: change.

Whatever it is, I'll take it.

Schwartz's theory is interesting because it's self-similar across scale: that is, the dynamics of the internal family system, of external family systems, and of other interpersonal relationships are much the same. And the theories cross boundaries: the "parts" of one external family member sometimes interact with the "parts" of another external family member and produce conflict or other forms of dysfunction.

References:  IFS website (The video is pretty lame, so don't bother with it)
A more thorough description of the model, here.

Mar 13, 2015

Sam Harris vs. William Lane Craig debate

Here we are at Notre Dame, in 2011, for the second annual "God Debate." In this corner, William Lane Craig, Christian apologist. In this corner, Sam Harris, unapologetic atheist. Up above, perhaps, God is watching the debate with a couple of buddies, and laughing.

"Those fucking mortals!" God says.

Anyway.

I don't remember how I found the debate. Perhaps YouTube suggested it. But no matter. Find it I did, and this morning, intending to write on a different topic entirely, it came to mind. And then I opened the eight new tabs that I blogged about here.

For those who don't know, and even for those who do, a Christian apologist isn't a person who apologizes for Christianity. An apologist is "a person who offers an argument in defense of something controversial."  Christian Apologetics is enough of a thing to have its own Wikipedia page, here. And Sam Harris, who I described as an "unapologetic atheist," is also an apologist. See what I did there?

Anyway.

This morning I was going to write about something that happened in the "Spiritual Journeys" course that Bobbi and I just finished, and the debate seemed relevant, and before you could say "Don't open any more tabs," eight new ones were opened! So of course I had to metablog on that topic, as I mentioned, here, before I get to the original first topic, which I hope will have a reference real soon, here.

Anyway.

I listened to the debate and concluded that William Lane Craig was by far the superior debater and that Sam Harris had done a terrible job of breaking down Craig's arguments. From the transcript at Craig's website, here:
In tonight’s debate I’m going to defend two basic contentions:
1. If God exists, then we have a sound foundation for objective moral values and duties.
2. If God does not exist, then we do not have a sound foundation for objective moral values and duties.
Very specifically Craig says he's going to avoid the question of whether God exists. Or how we might know of his existence. He says:

I shall not be arguing tonight that God exists. Maybe Dr. Harris is right that atheism is true. That wouldn’t affect the truth of my two contentions. All that would follow is that objective moral values and duties would, then, contrary to Dr. Harris, not exist. 
His debate point is conditional: if God exists, we have a foundation; if not, then not. And his subtext, elaborated later, is this: if we don't have a sound foundation for objective moral values and duties then WTF, anything someone might choose to do on moral grounds is on subjective moral grounds, which means (Craig avers) it's just a matter of opinion. And since people like ISIS think it's moral to chop heads off, well, that's as well founded, on moral grounds, as Mother Teresa ministering to the poor.

And sadly, for Team Atheist, Harris does not clearly address this point. (Disclosure: I am not a member of Team Atheist, but some of my best friends are atheists, so I can root for Team Atheist; and some of my best friends are Christians, so I root for Team Christianity, too.) He does, in fact, address it, as I learned from reading the transcript, and as was clear to me after reading reviews of the debate. I came to the same conclusion that another favorite blogger, Luke Muehlhauser, came to here:
As usual, Craig’s superior framing, scholarship and debate skills ‘won’ the debate for him. 
Too bad. Because his argument, which sounds so good, is really a bad one, and it's unfortunate, for the sake of good debating form, that Sam Harris did not say what I would have said.

Craig's argument, "If God exists, then we have a sound foundation for objective moral values and duties" translates to: "If an entity that I have not yet described, defined, and about whose existence I will not debate at this time, exists, then we have a sound foundation for objective moral values and duties."

He then describes some of the characteristics of this Thing-whose-existence-he-does-not-want-to-debate, and his description, in my view, brings his thesis to this:
If an entity whose nature provides a solid foundation for objective moral values and duties exists, then we have a sound foundation for objective moral values and duties.
Harris, for his part, does not make his better and more coherent argument relevant. My favorite blogger, Scott Alexander, makes the argument clearly here:
If God made His rules arbitrarily, then there is no reason to follow them except for self-interest (which is hardly a moral motive), and if He made them for some good reason, then that good reason, and not God, is the source of morality.
Not having made this important point, Harris proceeds to provide testable criteria against which an act can be tested for greater or lesser morality. The criteria are themselves neither objective nor subjective, but definitional, and consistent with our native sense of what is moral and what is not. The point that Harris fails to make is that to have a reasoned discussion of real things (like behavior) at an abstract level (like morality) you have to define the abstraction, and then be able to test whether some real thing does or does not match the definition. So a mammal is an abstract category with a definition that lets us test whether or not a particular entity belongs to the category mammal. And God is an abstract category (a Singleton, according to monotheistic religions) that must be defined in order for one to tell whether or not a particular entity belongs to the category God. And in the case of monotheistic religions, assuming one found anything that met the criteria, one would then have to demonstrate that no other entity fit the category.
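To put that in programmer's terms (my framing, not Harris's or Craig's): an abstract category is just a definition plus a membership test, a predicate you can apply to a candidate. A minimal sketch, with made-up criteria for illustration:

```python
# A category, in the sense above, is a definition plus a membership test.
# The criteria here are illustrative, not a serious taxonomy.

def is_mammal(animal):
    """Test membership in the abstract category 'mammal'."""
    return animal.get("has_hair", False) and animal.get("produces_milk", False)

platypus = {"name": "platypus", "has_hair": True, "produces_milk": True}
trout = {"name": "trout", "has_hair": False, "produces_milk": False}

print(is_mammal(platypus))  # True
print(is_mammal(trout))     # False
```

Without a definition to test against, "is X a mammal?" (or "is X God?") isn't a question you can reason about; that's the move the debate needed and didn't get.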

Scott Alexander's argument, quoted above, is part of a much longer, well-reasoned 13,000-word essay on consequentialism, which I will unnecessarily, but for your convenience, re-link to here.

The section most relevant to this post is quoted below in its entirety:
What would it mean to say that God created morality?
If it means that God has declared certain rules and will reward those who follow them and punish those who break them - well, fair enough, if God exists He could certainly do that. But that would not be morality. After all, Stalin also declared certain rules and rewarded those who followed them and punished those who broke them, but that did not make his rules moral.
If God made His rules arbitrarily, then there is no reason to follow them except for self-interest (which is hardly a moral motive), and if He made them for some good reason, then that good reason, and not God, is the source of morality.
If it means that God has declared certain rules and we ought to follow them out of love and respect because He's God, then where are that love and respect supposed to come from? Realizing that we should love and respect our Creators and those who care for us itself requires morality. Calling God "good" and identifying Him as worth respecting requires a standard of goodness outside of God's own arbitrary decree. And if God's decree is not arbitrary but for some good reason, then that good reason, and not God, is the source of morality.
Newspaper advice columnists frequently illuminate moral rules that their readers have not thought of, and those rules are certainly good ones and worth following, but that does not make newspaper advice columnists the source of morality.
References: transcript.
The first review of the debate I found thought Craig was weak, Harris awesome. WTF? I disagree. But it's here. And it led me to a far better review.  The author of that review says:
[Addendum: looks like Luke is going to be more thoroughly picking apart the arguments. He also has a nice round-up of reviews.]
"Luke," I guessed was Luke Muehlhauser, a frequent contributor at the LessWrong rationalist community. And it was.

And here are part 1, part 2, and part 3 of his thorough analysis of the debate. And here's Luke's bio.

Mar 13, 2015

My desperate attempts to close browser tabs, and catch up on my posting

Here I am again, behind on my planned posting, and with a broken posting process. Two broken processes, actually. Let me explain.

I suffer from a variety of Posting Paralysis Diseases (PPDs), some of which I have written about, and which I will not, repeat not, interrupt my flow to properly reference. If you care, use my custom search to find them. If you don't care, then why am I bothering to link to them?

But I digress.

As usual.

One of my PPDs is Topic Paralysis Disease, TPD. TPD is indicated when the blogger becomes functionally paralyzed when trying to choose a topic. "Try this one," a part of me would say. "How about this other one," another part would offer. Then the two would debate, with other parts chiming in with their favorites, while I watched, interested. That's part of my problem: it's all so fucking interesting.

Then a day would pass, no post would be written, and retrospectively I'd be frustrated--though in an interested way.

Lather, rinse, repeat.

So near the end of February, to alleviate my TPD, I attempted Temporal Blog Assignment Therapy, TBAT. I printed out a calendar for the month. I went through various lists of topics about which I had considered blogging, and through my Google Search History to see what I'd read that I thought was blogworthy. And I wrote a topic on each day. And I resolved to stay current, and to write posts for the assigned topics on the assigned days. Or tried to. This worked, in part because it reduced the number of debate options, and stuff started coming out of the end of my blogging pipeline--though for different reasons than having applied TBAT. Which I will write about at another time.

Soon.

Anyway, TBAT helped.

Kinda.

I was blogging more, and enjoying it more, but I still found myself opening more and more interesting and blog-worthy browser tabs. So many that I did not have calendar days in which to place them. So I added a new form of therapy: Browser Tab Closing Therapy, or BTCT. It works this way: if an open browser tab held a post-worthy topic, I would not close the tab until I'd blogged about it. And this worked.

Kinda.

But today I realized both processes were broken. The write-stuff-on-the-calendar process was broken because I left my calendar at the place in Southwest Harbor where we'd met for our "Spiritual Journeys" course. The write-about-your-tabs-before-you-close-them process was broken because I was still opening tabs faster than I was closing them. Without tab triage, I have more than fifty open tabs on my desktop machine (I quit counting when I realized my algorithm was flawed and the incremental return on incremental investment was--surprise--less than zero). My smartphone has more than a hundred! WTF?

And this very morning, while trying to write my first post for the day (not this one) I opened eight new tabs! WAT!! (A subject for another post, perhaps).

And now it's time for the Blogger's Prayer: Oh, God! Please don't let me open another tab so that I can find a link to add to WAT! Let me do it much later, when I've closed all my tabs and I can get around to editing my posts. Please!

Anyway, I'm in a groove, on a roll, and engaged in a third metaphorical activity that does not come to mind immediately but which I could easily find if I would just open another fucking tab. But I won't.

I'm going to declare victory and post, then post on this morning's newly opened tabs, on a debate between Sam Harris and William Lane Craig, here.




Mar 11, 2015

When will getting old stop being interesting, and start being upsetting?

I'm going to die. I get that. And it's either going to happen after a long decline, or due to a catastrophe. Or decline followed by, but not caused by, a catastrophe. Or catastrophe directly caused by a decline. But it's going to happen.

Sucks. But interesting. At least to the present version of me.

On the way to death things usually go downhill. I know people in their nineties who are still incredibly sharp. Maybe they aren't as good as they were in their fifties, but I didn't know them in their fifties. I just know them now, and I know that if I enter my nineties as sharp as they are, I'll count it as a win.

But will I get there? And will I be (relatively) unimpaired? There's not a lot to go on. Here's some data, and a speculation:

Datum: My dad died at 86, after a catastrophe. Years before, he'd had a stroke that crippled one side of his body and slurred his speech (probably due to a partly crippled tongue). His mind was not notably impaired, except for depression, reported by my Mom.

Datum: My mom died at 94, after a long decline. It took away her short-term memory but left all the positive parts of her personality intact. By the time she died, what she'd said in a conversation five minutes earlier was lost to her. The distant past was still clear enough for her to retell a semi-relevant story of something that happened thirty years earlier--sometimes three times in a single conversation.

Speculation:
Doing the math, I'll die at age 90 (the average of 86 and 94) with a quarter of my mind gone (the average of Mom and Dad). The chances of my dying in a car crash before then have been reduced since the recent night a cop pulled me over after I passed him doing 75. He'd probably been doing 60 on a road marked for 50. "Don't you think I was going fast enough?" he asked me. "I was stupid," I confessed. Several times. He told me to use better sense, and let me off. And since then I've slowed down.

In the meantime I keep close watch for signs of cognitive decline. Bobbi and I alternate reading selections from "A Coastal Companion" at breakfast. Right now I do the odd days, she the even. When a month has an odd number of days, we switch. As I read, I pay attention to my stutters, hesitations, and mispronunciations. They seem to be growing in number, but I am an imperfect observer: there's no way to determine to what degree my self-observation is causing the lapses I observe. It seems to me that things have gotten worse, but there's no way to know whether that's true.
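The reading schedule is a tidy little algorithm, so here it is as code. A sketch of the rule as I've stated it (the function name, the starting month, and who starts on odd days are my assumptions):

```python
import calendar

def reader_for(year, month, day, start=(2015, 3), i_start_odd=True):
    """Who reads on a given date? One of us takes odd-numbered days,
    the other even; after any month with an odd number of days, we swap.
    The start month and initial parity are assumptions for illustration."""
    # Count the odd-length months between the start month and this one;
    # each one flips who takes the odd days.
    swaps = 0
    y, m = start
    while (y, m) < (year, month):
        if calendar.monthrange(y, m)[1] % 2 == 1:  # [1] is days in month
            swaps += 1
        m += 1
        if m == 13:
            y, m = y + 1, 1
    i_take_odd = i_start_odd ^ (swaps % 2 == 1)
    day_is_odd = day % 2 == 1
    return "me" if day_is_odd == i_take_odd else "Bobbi"

print(reader_for(2015, 3, 9))   # me: odd day, no swaps yet
print(reader_for(2015, 4, 1))   # Bobbi: March had 31 days, so we swapped
```

The swap rule is what keeps it fair: without it, whoever has the odd days would read both the 31st of one month and the 1st of the next.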

But if I have gotten worse, it's interesting rather than alarming. At least, right now. But will it stay that way? I imagine a future me who is unable to say three words without getting one wrong. Or one who has to speak slower, as I sometimes find myself doing, and who can't pick up the pace when he finds himself doing that, as I usually do when I notice.

Present me hopes that future me will continue to find the entire process interesting. Present me hopes that future me will not find it depressing. But ya never know, do you?

And that makes it interesting. At least to present me.




Mar 9, 2015

Humans need not apply

CGP Grey is a dude who publishes awesome videos on YouTube. At the bottom of this post I've embedded the video that prompted this screed: about the coming of automation, and the problems it's going to present.

Short form: people are going to be displaced from their jobs by machines. Count on it. It's going to happen, and we need to think about how to deal with it before it happens.

There are two theories about the coming increased automation. One is the "this has been predicted before, it's never happened, and it's never gonna happen" theory. The other is the "this time it's different" theory. I think that this time it's going to be different.

Proponents of the "this has been predicted before" theory point to the Luddites, the poster children for the wrongness of that belief. The Luddite movement arose in the early 1800s, when the automation of textile production--for example, the invention of lace-making machines--began to put artisans--for example, lace-makers--out of work. The Luddites were right--lace-making machines did obsolete artisanal lace-making--but they did not account for the overall benefit of less expensive end products--like lace--and the creation of new and better jobs for those displaced.

The Luddites were so wrong that there's a fallacy named after them, and this description says what I just said, only more economistically:
Economists apply the term "Luddite fallacy" to the notion that technological unemployment leads to structural unemployment (and is consequently macroeconomically injurious). If a technological innovation results in a reduction of necessary labour inputs in a given sector, then the industry-wide cost of production falls, which lowers the competitive price and increases the equilibrium supply point which, theoretically, will require an increase in aggregate labour inputs.
If you've got a fallacy named after you then you must be wrong. Right?

Maybe not. I go with the "this time it's different," crowd, because--well, because this time it's different.

Today's human beings, the ones who can read, write, drive cars and send texts, and even the vastly smarter ones who can do nuclear physics are not all that different from the humans whose peak economic activity was gathering nuts and berries. Those folks developed brains. Nut and berry gathering is not that challenging, but living in a human society--even a primitive human society--is demanding. So those brains had to be good ones--or no babies.

Over time the economic environment changed to match the social environment and people had to develop and exercise a wider range of skills. The basic talents that people apply to accounting, woodworking, and computer programming were always there. The opportunity to use the talents was not.

Historically, human brains were under-utilized. Working in a factory takes physical skills, but it also takes mental ability, which is why humans work in factories and why monkeys, who might work for less money, do not. Over time we've created jobs that use more of our natural human talent. But we're limited: we can create new jobs through cultural evolution, but the hardware on which the software of culture operates changes very, very, very slowly.

Not every human who can forage can learn to hunt or to farm--though most can. Not every human who can learn to use a hoe can learn to run a tractor--though most can. The fall-off rate may not be large but it is steady, and for every set of new skills that must be acquired to do a job there are some who cannot. Why? Because their hardware is not sufficient to let them do it. Thus, not every "manual laborer" can become an effective "knowledge worker." And not every knowledge worker can become a computer programmer. It's not a matter of education, though education can help. But all evidence says that you can send someone to school for years to learn a skill for which they don't have the basic smarts and they'll never be very good. Not for want of trying, but for want of wiring.

The relentless march of technological improvement is a march toward better and better jobs that fewer and fewer people can do well. Working in a salt mine is a job that almost any healthy person can do well enough to not be beaten to death by the salt-mine-overseer for non-production. Working in an office is a job that some healthy people cannot do well enough to get fired. The jobs are better. The consequences of non-performance are less dramatic. But there are fewer people who can do the newer jobs.

In the limiting case, which we are not close to approaching, but which we will approach soon enough, there are relatively few people who can do jobs that computers cannot do better.

Today there are still enough jobs that most people can do so that most people can find work. But to me the trend is clear: the jobs that will remain will not be jobs for everyone--even with the best possible education, even with the best computer assistance to help them.

And the trend is accelerating, and that's the second problem.

Back in the old days, when change was slow, people lost their jobs slowly and new opportunities appeared slowly. But the rates were well matched. The old generation might end up out of work, but there were new, and better, jobs for the new generation--except for the small, incremental number who could not do the new jobs. Never mind! There are not that many of them. We can afford to support them with some sort of "social safety net," or with jobs that are not strictly necessary but not clearly worthless, or by spreading the work among more people. In 1860, according to this report, the average work week in manufacturing in the United States was nearly 70 hours. And working in manufacturing was probably one of the better jobs you could get. Now it's something north of 40 hours a week.

Of course we do create new opportunities faster than before, but new opportunities are not the same as new jobs. In 2014 Facebook, a company with about $12.5B in revenue, had fewer than 10,000 employees. Yes, I know a billion dollars isn't what it used to be, and there are lots of people who do work that Facebook pays for who are not employees, but was there a company in 1914 with comparable revenues and that few employees? I haven't done the research, but I don't believe so.

I believe that as change accelerates the number being displaced will grow faster than our willingness to spread the work around or to share the benefit. It's a matter of willingness, not ability.

Some work will always have to be done by humans. If most of the world's population has to work forty hours a week to provide what the world's population needs in order to have a satisfactory life, and if productivity increases so that only one-fourth as much human effort is required, then what happens?

We can do some job-sharing, but we can't let everyone work 10 hours a week because some of the work will be of a kind that most people cannot do, no matter how many hours they work.

Do some people work 40 hours while others benefit from their productivity without working? Or do we let them starve? Starvation is not an option. If a substantial number of people find themselves at the point of starvation they will exercise their right to do whatever the fuck they need to do to survive, which includes taking up arms and overturning the established order.

The worst realistic case is that a balance will be struck between dissatisfaction, "welfare" programs, and suppression.


Mar 8, 2015

Happy cohabiversary to us.

On March 8, 1968, Bobbi and I started living together. For many years, even years after we were married - about two and a half years later - we celebrated our cohabiversary.  Our marriage was not such a big deal. It was just a formality. Something we did for other people. The big deal was when we started living together. That was for us.

Then, over the years, cohabiversary faded into the background, and anniversary took center stage. Nowadays we go to the Arborvine restaurant to celebrate and from time to time we've let our cohabiversary slip by unnoticed. This year we notice. We're going out to see a show at the Collins Center and have dinner at the Fiddlehead restaurant on this, our 47th cohabiversary.

Fifty is coming up. Stay tuned.

By the way, the Urban Dictionary now says cohabiversary is a thing.

Mar 7, 2015

The crisis of democracy

Is democracy in crisis? Really, I don't know. I can argue that it is. And I can argue that it's increasingly successful as a method of governing.

In 1975 a bunch of policy wonks wrote this book, called "The Crisis of Democracy." Wikipedia summarizes its content here.

I have not read it, but I may at some later time, and I wanted not to lose the link.

Mar 6, 2015

Man vs Squirrel: Part N

We like to feed the birds that visit our back yard. Yes, I know. The hard-working birds that refuse to accept our handouts are out in the cold working hard to feed their families while lazy birds, willing to go on welfare, flock to our feeder. They probably consider it an entitlement. Nature's balance is being overturned by our support of non-producing birds. We don't care. We're lucky enough to be born high on the food chain; we've got enough food for ourselves; and we're willing to share it with other, less fortunate creatures.

But not with the fucking squirrels.

Squirrels, as far as we are concerned, should work for a living. They should not prey on the largesse that we provide for out-of-work birds.

The problem is that squirrels have no respect for human intentions. We're feeding birds with a bird feeder. It says that on the box. In fact, it says that it's a "squirrel-proof bird feeder." Can't the damned squirrels read?

The "squirrel-proof bird feeder" was never squirrel proof. As soon as we put it up, strung on a wire between two trees in our back yard, squirrels discovered they could walk across the wire, drop down on the top of the bird feeder, hang down, holding on with their hind legs, then drop down and catch themselves on the little spring mounted perches that hold the weight of a bird, but tip when something heavier lands on them.

Unless the thing that lands on them has hands. Or paws, or whatever it is that squirrels have. Didn't think about that, did you, squirrel-proof-bird-feeder-designers? As it drops down, a squirrel grabs the perch with its little fore-appendages, arches its little furry back, grabs the opposite perch with its hind appendages, sticks its furry nose in the bird-food-dispensing-aperture in front of the perch, and mows away.

Our first solution was to invert a clear plastic bowl over the top. That made vertical envelopment, "a maneuver in which troops, either air-dropped or air-landed, attack the rear and flanks of a force, in effect cutting off or encircling the force" according to Wikipedia, impossible. But squirrels have other tactics: they jumped from the tree to the left of the feeder and caught themselves on the perch on the way down. Munch munch! I moved the feeder until it was out of range of leaping squirrels from the left, and not yet in range of squirrels that attempted to leap from the top of one of The Three Sisters to the right.

And for a while, there was a standoff. The squirrels contented themselves with the seed that birds dropped on the ground. Occasionally they would longingly eye the feeder. Sometimes they'd climb the tree and stare at it, wondering if they could make the leap. But they'd learned better.

Until the recent snows raised ground level to within squirrel-leaping-distance of the bottom of the feeder. Once they found that they could do that, it was all over. They remembered leaping from the trees, and with less far to fall, quickly perfected their abilities.

The war was on again!

But man is not to be overcome by squirrel. At least this particular man is not to be overcome by those particular squirrels. I repurposed the roof-rake that I bought to clear our porch skylight and other roof surfaces as a temporary feeder holder. It sits in the yard, stuck in the deep snow. The plastic shield that eventually replaced the upside-down bowl and which used to be on top of the feeder has been moved so that it prevents assault from below.

But soon the snow will melt and the rake will no longer stand. What then?

Mar 4, 2015

Easily discredited arguments do not explain anything

A friend of ours forwarded Bobbi an email titled: “This explains everything” and asked for comments. Wisely, she forwarded it to me, and stupidly I spent a ridiculous amount of time analyzing it. Having made that foolish time investment, I’ll make the small incremental investment needed to turn it into a post.

The email claims to explain everything, but it didn’t explain anything to me other than why liberals lose arguments when they don’t check their facts and when they exaggerate their statements beyond reason. Of course, conservatives do this too. I’m just picking on liberals because it’s a liberal email that hit my inbox.

For the record, my political sentiments, to the degree they’ve been articulated, match what my favorite blogger “Scott Alexander” says in this post.

What if we abandon our tribe’s custom of conflating free market values and unconcern about social welfare?

Right now, some people label themselves “capitalists”. They support free markets and oppose the social safety net. Other people call themselves “socialists”. They oppose free markets and support the social safety net. But there are two more possibilities to fill in there.

Some people might oppose both free markets and a social safety net. I don’t know if there’s a name for this philosophy, but it sounds kind of like fascism – government-controlled corporations running the economy for the good of the strong.

Others might support both free markets and a social safety net. You could call them “welfare capitalists”. I ran a Google search and some of them seem to call themselves “bleeding heart libertarians.” I would call them “correct”.

That’s me. Correct.

Anyhow, the email that provoked this outpouring (aka rant). It copy-pastes an article that I found here, on a site called “The Tyee.” The title: “Downsize Democracy for 40 Years, Here’s What You Get” with the subhead “New signs civilization is veering toward collapse.” It argues:

The exceptionally successful four decades campaign to change the “ideological fabric” of society has put western civilization on a track to irreversible collapse, according to a major study sponsored by NASA’s Goddard Space Flight Center. The study focused on population, climate, water, agriculture, and energy as the interrelated factors determining the collapse or survival of civilizations going back 5000 years.

Well, there are problems—a lot of them. Anyone with rudimentary googling skills, or who reads the article in the Guardian that the Tyee article cites as supporting their claim, here, or who actually looks for the underlying study here, will discover that it wasn’t a major study sponsored by NASA.

The Guardian article (as amended) says: “The HANDY model was created using a minor Nasa grant, but the study based on it was conducted independently.”

If they read the study, they’ll find a link at the bottom to a press release here that says this:

A soon-to-be published research paper ‘Human and Nature Dynamics (HANDY): Modeling Inequality and Use of Resources in the Collapse or Sustainability of Societies’ by University of Maryland researchers Safa Motesharrei and Eugenia Kalnay, and University of Minnesota’s Jorge Rivas was not solicited, directed or reviewed by NASA. It is an independent study by the university researchers utilizing research tools developed for a separate NASA activity.

As is the case with all independent research, the views and conclusions in the paper are those of the authors alone. NASA does not endorse the paper or its conclusions.

One might chalk this up to bad timing on The Tyee’s part. NASA might have posted this announcement after the Tyee article was written. Except the facts say that they didn’t. The article is dated 26 Jan 2015 (again, ref here) and the NASA statement is dated 20 March 2014, here. Ten months earlier. And the referenced Guardian article had already made the correction long before the Tyee article was written. They footnote:

  • This article was amended on 26 March 2014 to reflect the nature of the study and Nasa’s relationship to it more clearly.

Here we have an article written by people who either did not know the article on which they base their argument is NOT a “major study sponsored by NASA” or who did know, and said it anyway, because, hey, saying it’s NASA makes it more credible.

The NASA study is not optimistic…
The NASA reports says [sic]…

The NASA study highlights …

Right, I get it. NASA. But if not NASA, then who?

Well, it was written by a Research Assistant, a Ph.D. candidate, and their Prof.

Their bios are here.

And they did not say that the study shows that western civilization is on a track to irreversible collapse. Quite the opposite: the authors had this to say in a follow-up Q&A at that same site, here:

“Our article does not make a ‘doomsday prediction of the collapse of society.’ In fact, we state in multiple locations in the article that our model shows that a sustainable outcome is possible, including right in the abstract, where we state that the model shows that ‘collapse can be avoided, and population can reach a steady state.’”

For someone who is open-minded and wants to investigate this problem, a little bit of work discredits it. And discredits the source as well. And is likely to discredit anything else, valid or not, that they have to say.

This is just stupid.

Do I believe that income and wealth inequality are problems? Yes.

Do I think that resource utilization issues are real issues? Yes.

But I don’t think that bad arguments for good issues help. I think they hurt. If you are trying to convince people to support your agenda, the WORST way to do it is by citing something that is so easily discredited.

And I haven’t even gotten into the paper itself, which I have not studied, but read carefully enough to see that while their arguments are interesting, they explain very little.

The argument (again, the paper is here) is based on a mathematical model. Climate models are mathematical models based on well-verified quantitative statements of physical law as we understand it; they use real data as input; they attempt to calibrate themselves by using past data to “predict” more recent data. And they still have problems and are vulnerable to legitimate criticism. Not to mention wacko wingnut criticism.

This model is different.

It is not based on physical law; it does not use any data as input; it is calibrated against exactly nothing. The math is based on the metaphor that nature is “prey,” and humanity is “predator” and on equations, similar in form, that provide a measure of explanation of the cyclic behavior of certain predator and prey populations.
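For the curious, the textbook version of that cyclic predator-prey mathematics is the Lotka-Volterra model: two coupled equations in which prey feed predators and predators thin prey, so the populations chase each other in endless loops. Here's a minimal sketch in Python; the parameters and starting populations are made up for illustration and are not taken from the HANDY paper, which uses a more elaborate variant:

```python
# Toy Lotka-Volterra predator-prey model, integrated with a plain
# Euler step. All parameter values here are illustrative only --
# they are NOT the ones used in the HANDY paper.

def simulate(prey=10.0, predators=5.0, alpha=1.1, beta=0.4,
             delta=0.1, gamma=0.4, dt=0.001, steps=20000):
    """Return the prey and predator population series over time."""
    x, y = prey, predators
    xs, ys = [x], [y]
    for _ in range(steps):
        dx = alpha * x - beta * x * y    # prey reproduce, get eaten
        dy = delta * x * y - gamma * y   # predators eat, then die off
        x += dx * dt
        y += dy * dt
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = simulate()
# Both populations oscillate: prey crash, predators follow, prey recover.
```

Plot xs against ys and you get closed-ish loops: the populations cycle rather than settle. The HANDY paper swaps "nature" in for prey and "humanity" for predator and bolts on wealth and inequality terms, but this is the mathematical skeleton.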

It’s a metaphor, folks. Yes, it is an explanation, but “it’s the will of God” is also an explanation. And “it’s the will of God” is about as good an explanation as this one’s equations are.

Meaning, not.

At all.
