WHEN IS IT A REAL JOB?

The Rise of Temporary Work

It is a difficult fact to face, but businesses do not want to be stuck with most of their employees. There are the very creative and productive workers, of course, the ones essential to profitability and success. They do want those. But then there are all those whose jobs require health insurance, workers’ compensation, Social Security contributions, and retirement funds, who are covered by anti-discrimination laws, who file complaints, and so forth.

Some day, robots and computers may be devised to take their place, but right now technology is not fully up to replacing many of the functions that still require real people.

Recently, a professor of sociology, blogging for The New York Times, noted that the temp industry added more jobs in the United States than any other. This trend is so strong that low-wage, temporary jobs “threaten to become the norm.” He noted that employers that could have invested in their work forces have instead “generally taken the low road: lowering wages and cutting benefits, converting permanent employees into part-time and contingent workers, busting unions and subcontracting and outsourcing jobs.”

The trend towards temporary work started after WWII, when white middle-class women started working for “pin money.” As an executive for Kelly Girls, one of the first organizations to seize on this trend, put it at the time: “The typical Kelly Girl… doesn’t want full-time work, but she’s bored with strictly keeping house. Or maybe she just wants to take a job until she pays for a davenport or a new fur coat.” (See “The Rise of the Permanent Temp Economy.”)

That was before the earning power of spouses became essential and two-income families became the norm. But the precedent has been established. Now many who desperately need work are forced to take these low-paying jobs. “Temp” used to define the desire of the worker for short-term employment. Now it defines the desire of business to avoid commitment.

Temp workers are a significant portion of the one-third of working adults who do not earn enough to support themselves and their families, according to the Census Bureau. But unless you have the Bureau’s statistical tools, you may never see them. They are hiding in plain sight, not “unemployed,” not receiving benefits, indeed increasingly viewed as a part of our “recovery.”

This highlights the role that our classifications play in defining social reality. If the unemployed are defined as those who are looking for work, to cite another example, then the millions who have given up looking are no longer “unemployed.” If poverty is defined as a certain level of income, then those who don’t qualify are not impoverished. If banks are fined for questionable practices, then the bankers who defined and authorized those practices have done nothing wrong.
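The unemployment arithmetic makes the point concrete. The official rate counts only those actively looking for work, so, to use hypothetical numbers, if 100 of a labor force of 1,000 are jobless and searching, the rate is 100 / 1,000, or 10 percent. If 20 of them give up the search, they drop out of both the count of the unemployed and the labor force, and the rate becomes 80 / 980, or about 8.2 percent. The number “improves” though not a single person has found a job.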

Increasingly we are aware how the news is “spun.” But we don’t often stop to notice how much our fundamental social realities are being spun as well – by definition.

Guns, Paranoia, and Twinkies

What Drives A Trigger Happy Country?

Since the school shooting in Newtown, Connecticut, the National Rifle Association reports gaining 100,000 new members. Gun sales have gone through the roof, and many items are in short supply. Prices have doubled or tripled. What’s going on?

It’s the law of the market that objects in short supply increase in value. According to The New York Times, one gun dealer said many people were stocking up on high-capacity magazines in anticipation that they might be banned. “Two weeks ago,” he said, “a 30-round rifle magazine was $12, but it now fetches $60.” The Times went on to quote a pastor in Tennessee who likened the current run on ammunition to the rush to buy Twinkies last year after their maker, Hostess Brands, announced it was closing. “It’s the same thing,” he said. “When you are threatened with the possibility that you are going to lose something, you get a bunch of it.” (See, “Sales of Guns Soar in U.S. as Nation Weighs Tougher Limits.”)

But what exactly are we in danger of losing? The market may be like that, but guns are not Twinkies.

Obviously something else is going on. To be sure, the outrage following the Sandy Hook killings did increase the likelihood that some form of gun control would be enacted. So the logic of the market was not wrong. Moreover, the danger it illustrated no doubt led many to believe they needed guns to protect themselves from others with guns. Anyone who had thought of purchasing a gun for self-defense probably felt pushed to act before it was too late.

But it’s not just fear. Almost 50 years ago, the historian Richard Hofstadter published his classic article on “The Paranoid Style in American Politics,” in which he noted the prevalence of conspiracy theories throughout our history. Over the years, various “threats” to our way of life have been targeted: Masons, Catholics, Communists, gold traders, international bankers, and so forth. Rage is the fuel behind these movements. True, people search for someone to blame for the trends they fear, but they are enraged that their values, standards, and sense of security feel threatened. Hofstadter’s use of the psychiatric term “paranoid” points out how deeply irrational such beliefs are, how immune they are to argument or evidence, and how destructive they become.

The current debate about gun control is yet another example, and it is hard to find calm, rational voices in the discussion. Many sources of rage combine to account for this upsurge in the market for guns. Racism is clearly one source: we experienced a similar surge after the election, as we did four years ago when Obama was first elected. The debate over immigration is almost certainly another, as people hate and fear the new immigrants from Asia and the Near East.

Finally, I suspect that the continuing financial crisis also exerts a pull. It does not seem as if the guns are being pointed in the direction of Wall Street and the big banks. But as people feel the danger and the pinch, they fear their neighbors more. The 99.9 percent may well be more willing now to target anyone who seems to pose a threat to their security. And that’s a very big group.

What “Big Data” Can’t Do

The Human Factor

Sometimes linked with “the internet of things,” Big Data has arrived. It will “replace ideas, paradigms, organizations and ways of thinking about the world,” said Professor Erik Brynjolfsson, Director of M.I.T.’s Center for Digital Business, at a recent conference. Well, maybe. But it’s worth thinking about what it might not be able to do.

As Steve Lohr put it in his year-end review of the field in The New York Times, such claims rely “on the premise that data like Web-browsing trails, sensor signals, GPS tracking, and social network messages will open the door to measuring and monitoring people and machines as never before.” Computer algorithms, using that data, will enable us to “predict behavior of all kinds: shopping, dating and voting, for example.”

All that is true, and we see it at work already, as the internet tracks every search we make on our computers. We can’t escape the innumerable hints and suggestions of what else we might want to buy. Nothing is forgotten or ignored. And those are just the more detectable signs of how we are being tracked.

But as Lohr points out, such predictions are based on mathematical models, and models are made by human intelligence. Once set up, the models crunch data quickly and efficiently, but, being devised by humans, they are not only fallible but also vulnerable to misuse.
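To see how much human judgment such a model embodies, consider a deliberately crude sketch, in Python, of the kind of purchase prediction Lohr describes. Every signal, weight, and threshold here is invented for illustration; real systems are vastly more elaborate, but the human fingerprints are the same.

    # A toy model predicting whether a visitor will buy, from browsing signals.
    # Which signals to use, how to weight them, and where to set the cutoff
    # are all human judgment calls; every name and number here is made up.
    def predict_will_buy(visits, minutes_on_page, searched_for_price):
        score = 0.4 * visits + 0.1 * minutes_on_page + 2.0 * searched_for_price
        return score > 3.0  # the threshold, too, is a modeler's decision

    # Two hypothetical browsing trails:
    print(predict_will_buy(5, 2, 1))  # True: a repeat visitor who checked the price
    print(predict_will_buy(1, 3, 0))  # False: a casual browser

If the modeler’s assumptions are wrong – or self-serving – the predictions will be too, no matter how much data flows through them.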

Much attention has been paid to the invasion of privacy inherent in such models. What are we inadvertently revealing about ourselves? And who will use that information to manipulate and control us? It is happening now, of course, but it will only get worse. And how will we know?

A danger of another kind is the lack of sophistication and accuracy in the models used. Good programs require math and computer skills, but also an ability to be innovative and thoughtful. Lohr notes that the McKinsey Global Institute projected that the U.S. would need 140,000 to 190,000 more workers with “deep analytic expertise.” He quotes Claudia Perlich, chief scientist at an online ad-targeting startup in New York: “We can’t grow the skills fast enough.”

It is not just computer and math skills that are needed. Lohr notes: “Listening to the data is important, but so is experience and intuition. After all, what is intuition at its best but large amounts of data of all kinds filtered through a human brain rather than a math model?” (See, “Sure, Big Data Is Great. But So Is Intuition.”)

To be clear, that includes the unconscious information to which we are inattentive because it sometimes seems irrelevant, sometimes unfashionable, and sometimes unwanted. The point is that it is often precisely that information – unsought, unexpected, perhaps even difficult to accept or grasp – that reveals what we most need to know.

At M.I.T.’s recent conference, Lohr reported, a panel asked about big failures in Big Data could come up with no examples. Later, however, someone in the audience commented that Big Data had failed to foretell the credit crisis and financial crash of 2008. Oh!

Could it be that the specter of its potential leads its adherents to neglect or downplay the human factor? Does Big Data make people over-confident or smug? If so, that’s just the kind of problem that Big Data can’t solve.

Neuroeconomics?

A New Science? Or An Old Dream?

Economists like facts, facts with numbers. And they like to link those facts with human behavior, specifically choices we make about spending, saving and investing.

For many years, they had a favorite theory of motivation. As Robert J. Shiller put it recently: “people are rational, and thus . . . systematically maximize their own happiness, or . . . their ‘utility.’” Increasingly, that theory has been called into question by the “anomalies” of human economic behavior to which a new breed of “behavioral economists” has called attention, specific ways in which we do not act in our own self-interest. They have opened up the field to a more complex and realistic view of motivation, one that psychologists and psychoanalysts have shared for years.
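In its textbook form (the notation below is the standard one, not Shiller’s or any particular economist’s), the theory says that a consumer facing prices p with income w chooses the bundle of goods x that solves

    \max_{x} \; u(x) \quad \text{subject to} \quad p \cdot x \le w

where u is the utility function that ranks the consumer’s satisfactions. The “anomalies” are observed choices that no such function u could produce – preferences that reverse depending on how a question is framed, for instance.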

More recently, some economists have been pursuing the hope of finding hard evidence in the brain to account for our economic behavior. Now that the brain can be studied directly through neuro-imaging, there might be real facts to uncover, not just collections of behavioral mechanisms or theories or statistical correlations.

Shiller, a Nobel laureate in Economics, recently sought to assess those efforts in a review of a new book by Paul Glimcher, Foundations of Neuroeconomic Analysis. The review is respectful but cautious. He refers to the “neuroeconomic revolution,” suggesting a major shift in the making, one that will be “disturbing” to most of his colleagues. He notes that Glimcher, who has an appointment in the Economics department at N.Y.U., has uncovered “tantalizing evidence.” Glimcher’s agenda is “to transform ‘soft’ utility theory into ‘hard’ utility theory by discovering the brain mechanisms that underlie it.”

That suggests he is less committed to discovering new ideas than to confirming old orthodoxies. “Glimcher . . . is seeking a physical basis for [prevailing economic theory] in the brain,” Shiller notes. But what concerns him most is that Glimcher and his colleagues “have yet to find most of the fundamental brain structures.”

At the very end of his review, the gloves come off: “Maybe that is because such structures simply do not exist, and the whole utility-maximization theory is wrong, or at least in need of fundamental revision. If so, that finding alone would shake economics to its foundations.” That would not bother Shiller too much, I suspect, as he is already convinced that the “utility maximization theory” is wrong. (See, “The Neuro-Economics Revolution.”)

But as a psychologist, I am more worried by two other things about the neuro-economics project. One is that Glimcher and his colleagues do not appear to be looking for newer and better ideas, just confirmation of what they already “know.” As a result, they may persuade themselves that they have found it, and, in the long run, that would set back the whole project.

The other concern is that they are barking up the wrong tree. The brain is adaptive, flexible, plastic. It is not governed by logic or even by fixed ideas. It is built up by experience over time, and if there are invariant structures, they are as likely to be impediments to our evolutionary survival as aids to discovering reality.

THE AGE OF APOCALYPSE

Trying to Understand

We are surrounded by visions of the end of time. The recent obsession with the “end” of the world when the Mayan calendar runs out is just one example of how our culture is fixated on apocalypse. Hollywood epics, enhanced by special effects, provide multiple visions of how our world will end, or of what it will be like to live on after the destruction of civilization as we know it.

Earlier decades had similar preoccupations. Mutant monster movies thrived after World War II. Before that, mad scientists unleashed their Frankensteins on an unsuspecting world. In retrospect we can grasp the specific social realities such visions were responding to. Hiroshima explains the radioactive monsters, while the runaway effects of new technologies – planes, rockets, radios, electricity, drugs – go far to explain the fear of science. Those discoveries simultaneously made us more powerful and more vulnerable. Our power over nature has not been an unmixed blessing.

But what lies behind our current fascination with the end of the world? Increasingly, we grasp that we have lost control over events. Global warming is making it clear that we lack the collective will to avert environmental disaster. Economic crises demonstrate that our politics are helpless in the face of the financial industry’s ability to pursue profit at any cost, while the gap between the rich and the poor keeps widening. Our international organizations are unable to contain rogue states that harbor terrorists and build nuclear weapons, or merely decimate their own citizens. At home there are senseless outbreaks of violence in schools, shopping malls, and subway platforms. Everywhere we look we see danger and looming catastrophes.

But it’s not just the threats we perceive. It’s the paralysis we feel. Government seems broken. Citizens have lost faith in the integrity of their institutions, while corruption increasingly seems less the exception, more the rule. How to protect ourselves? Where to go?

And what can we believe in? God is increasingly irrelevant for most people, or else, in the strict eyes of fundamentalists and fanatics, is seen as unforgiving and punitive. Free markets for a time seemed the answer to most problems, but that faith has evaporated. Socialism is discredited. Liberalism has come to seem obsolete.

I am exaggerating, of course, but to make the point that these visions of apocalypse come out of our own experience. For some, the end of the world seems a fitting finale to our moral and civic collapse. Some see it as simply inevitable. For others it may feel like a deliverance, a way out of the mess we have made. A few seem to think that they will escape by building a boat or a pod or by visiting a holy mountain, but the rest of us are simply haunted by an inevitable doom.

The poet T.S. Eliot famously wrote that the world would end “not with a bang but a whimper.” Personally, I think that if the world ever does come to an end, it’s likely to be in that fashion. But there is no doubt that most people are waiting for a big bang, and some may even be looking forward to it.