Pages

Saturday, April 30, 2011

Public TV: An Alternative History

In science fiction we often speak of “alternative histories”—how things might have been had we decided differently in the past. What if the U.S. had remained neutral in World War II, for instance? The other day I got to thinking about another alternative history. What if the U.S. had rejected, rather than embraced, a market-based approach to public communications, particularly television?

This came to mind as Congress once more prepared to starve PBS of public funds, or to rob it of them entirely. Then Market Size Blog yesterday published (here) data on the BBC’s licensing fee income in 2010. That triggered this post. It brought to mind Brigitte and me, a young couple, watching television in Germany. No ads in sight. Television is a genuine public enterprise over virtually the entire world today, and has been since its dawn, supported by a license fee imposed under state law. In some places TV is supported by government grants alone—no fees—but in most places a fee is charged, usually per household; other users (hotels, corporations, etc.) pay a fee per set. It is only in the United States that television is entirely commercial, only here that public TV is a minor player, an also-ran, underfunded, the Begging Channel.

Now it is a fact that TV (with radio) has become a complex industry. Generalizations understate the complexity, but, well, here goes. In most advanced countries today Channels 1 and 2 (as they tend to be called) are public; other channels, sometimes numbered, sometimes named, feature advertising. Generally these other channels have only a relatively small market share. And, significantly, in most countries households have to pay to use the airwaves—even if people only watch commercial television or listen to commercial radio. They cannot opt out. The airwaves are treated as a national possession. So are they here—but here we’re giving them away to commercial interests.

This said—I looked up what fees are levied in Germany, the United Kingdom, in France, and in Japan. Wikipedia provides a very handy site for this purpose here, showing fee or grant structures across the world. In the case of Germany and the United Kingdom I also checked the numbers independently by consulting websites in each country. Here is what I found, going from highest to lowest; I’ve converted monthly fees-per-household to dollars at the most recent exchange rate: Germany - $26.68, United Kingdom - $20.27, Japan - $17.84, and France - $14.60. These are, as stated, charges per month; they include TV and radio. If no TV is registered a lower radio fee is charged; in Germany, for instance, the radio-only fee runs $8.55 per month.

Now I got to wondering what might have been had the United States adopted this method of funding television back when that medium first appeared in the 1950s. We have 117.5 million households in the United States, and television penetration is at the 99 percent level. We also have 1.3 million doctors’ offices, approximately 2.5 million hotel rooms, 6 million corporations of some size, and 18,000 nursing homes. We could add to that college dormitories, military establishments, not-for-profit corporations, etc. Let us, in round numbers, assume another 12 million licenses to add to our households, thus 129.5 million all told. And let us assume that we pay Germany’s rate, a round $27 per month or $324 a year for each license. That would produce $41.958 billion a year in fee income for a national public television system.

Now you might wonder. Isn’t that just a drop in the ocean of U.S. television advertising revenues? Well, let’s take a look. Total U.S. advertising revenues, for all media, were $131.1 billion in 2010 according to Kantar Media (the gatekeeper on this subject here). Television represents approximately 38 percent of that, thus $49.8 billion. Suddenly our fee-based projection isn’t just a drop any more. It represents 84 percent of all the money our current television depends on for its programming. And indeed, in most other countries where some commercial television is present, the commercial slice of funding is relatively modest, ranging upward from about 10 percent.
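
For anyone who wants to check the back-of-envelope arithmetic, here it is as a few lines of Python; the inputs are simply the rounded assumptions stated above, not official statistics.

# Hypothetical U.S. license-fee income vs. actual TV ad revenue (2010).
# All inputs are the rounded assumptions from the text, not official figures.
households = 117.5e6        # U.S. households
other_licenses = 12e6       # hotels, offices, dorms, etc. (round guess)
monthly_fee = 27.0          # assumed fee, roughly Germany's rate, in dollars

licenses = households + other_licenses
fee_income = licenses * monthly_fee * 12      # dollars per year
tv_ad_revenue = 131.1e9 * 0.38                # Kantar 2010 total x TV share

print(f"Fee income:    ${fee_income / 1e9:.1f} billion per year")
print(f"TV ad revenue: ${tv_ad_revenue / 1e9:.1f} billion per year")
print(f"Fee income as a share of ad revenue: {fee_income / tv_ad_revenue:.0%}")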

Just imagine the alternative history we missed: sixty years of television—news, documentaries, and entertainment—uninterrupted by advertising. Sixty years of entertainment uninfluenced by advertising—the force that drives programming toward ever lower, ever more sex- and violence-drenched content in the effort to hold an ever-more-jaded audience.

But what would the programming have been like? Well, we know what it would have been like. We can and we do watch it, usually on PBS. We watch and love British, Australian, and less frequently French and German shows. The programming would have blended the extremes of American ideology, tilting a little to the left of center, perhaps (as in Britain), but not very far. Brigitte and I recall that, in Germany, weekend programming tended to be kitschy, but most of the time it was elevated in content and responsible—rather than exploitive—in tone. The mysteries and spy-thrillers were tense. The love stories romantic. The channels showed plenty of sports, complete with the same old sports prattle. We’re not talking Soviet TV here.

I wonder. Would the U.S. public have benefitted from avoiding the storms, winds, tornadoes, indeed tsunamis of unremitting, concentration-destroying commercialism that have washed, thundered, and swept over two generations of Americans? Might the level of public discourse have been a little more elevated? And would we, perhaps, have wasted less of our wealth on junk?

We’ll never know. That’s how it is—with alternative histories.

Thursday, April 28, 2011

In Case You Wondered…

Very recently I discussed fractional exponents. In that routine I used only multiplications, divisions, and pulling square roots. But what about pulling square roots by hand? There is a simple formula for that. Alas, it is iterative. Here it is:

square root (eventually) = ½ × ((number / guess) + guess)

A straightforward “guess” is simply to divide the number by two. Let me demonstrate this formula by iteration. Let’s say that we want the square root of 3.13. That will be our number. Half of that is 1.565. That will be our guess.

Iteration    Result of Equation    Result x Result
1            1.7825                3.1773062500
2            1.7692303647          3.1301760832
3            1.7691806020          3.1300000025
4            1.7691806013          3.13

In the first iteration, the guess is 1.565. In each successive iteration, the guess used is the result of the last operation. Thus in Iteration 2 the guess is 1.7825. And so on. We stop as soon as result times result is our number.

The bigger the number, the more iterations. The square root of my year of birth (1936) is 44. That took 9 iterations. The square root of 9,876,543.21 is 3142.696804—and that took 15 rounds of what are simply three of the four basic arithmetic operations of multiplication, division, addition, and subtraction.

If you don’t absolutely insist on doing things by hand, it is easy to automate square root pulling in Visual Basic or whatever language you prefer. And such a subroutine could be built into the code I’ve devised (and point to in the earlier posting) to make the operation as “pure” as possible.
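
For those who like to see the iteration spelled out, here is a minimal sketch in Python rather than Visual Basic; the function name and the stopping tolerance are my own choices, not something taken from the spreadsheet.

def square_root(number, tolerance=1e-10):
    """Babylonian iteration: new guess = 1/2 * (number / guess + guess)."""
    guess = number / 2.0                        # the straightforward first guess
    while abs(guess * guess - number) > tolerance:
        guess = 0.5 * (number / guess + guess)
    return guess

print(square_root(3.13))    # roughly 1.7691806013
print(square_root(1936))    # 44.0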

Wednesday, April 27, 2011

What Are These Images?



This one looks like the face of a young dog, doesn’t it?



And this one is also canine, I would say. Might it not be a fox?

Well, it turns out that both of these are sunspots appearing on the face of Mother Sol today! The top one is part of Sunspot 1199 and the bottom one is part of Sunspot 1195. The sun is into painting dogs today. Each of these parts of sunspots is much, much larger than our mother, Earth.

Tuesday, April 26, 2011

Hand Calculation of Fractional Decimal Exponents

You have probably looked at numbers like this one: 3^0.7, and wondered how one actually calculates a number like that. We understand exponents as meaning multiplication of the number by itself exponent times. Thus 3^2 means 3 x 3. But how do you render that when the exponent is a fraction? Suppose that it was 3^0.2?

This is not a practical question. Any decent spreadsheet or scientific calculator (Excel or TI-36X in my case) can produce the number in the flash of an eye. But sometimes we wonder how it is really done. This question started to irritate me a while back. In looking into it I discovered that I shared this irritation with lots of other people. And if they go on the Internet, they will not, repeat, will not, get a satisfactory answer. The answers on offer are trivial. They never address questions like the one above—three to the power of 0.7. At best you will be told to use log functions or log tables. And that works. But I got to wondering how log tables were calculated. That turned out to be a bit of a nightmare, presented here a while back.

Since then I’ve discovered that there are no straightforward ways to solve that puzzle except by methods just as complicated as finding the log of a number by hand. When the exponent is an integer value, the process is serial multiplication. To be sure, if the exponent is big, you’ll be at it for a while, e.g. 3^139. It turns out that if the exponent is a fraction, the process is serial root pulling. And if a whole number and a fractional power are both present, e.g. 3^2.7, you have to engage in both operations. First 3^2 = 9 (by multiplication), then 3^0.7 (by root pulling), equal to 2.15766928; the last action required is to multiply those two numbers (9 x 2.15766928) in order to get the answer, 19.419.

I went through a dog-in-a-junk-yard process of finding an algorithm that did not itself involve doing anything beyond multiplication, division, and pulling square roots. No powers or exponential functions, no logarithms, common or natural. Finally I hit on the idea of using the very same method the eighteenth-century Euler used to find logarithms—to solve for the value of a number raised to a fractional decimal exponent.

I will explain the method here. I first did it using Excel, as before. Next (the process is mind-numbingly slow), I automated it using a Visual Basic program. Both are in an Excel spreadsheet that you can download here.  If you have problems, e-mail me. You can find my e-mail in the About tab. The spreadsheet also has the Visual Basic program embedded within it, accessible by Alt-F11; it also handles negative fractional exponents.

Here is how the process works. And for my example I am using 3^0.7.

It occurred to me that every number to the power of 1 is that number itself, thus 3^1 = 3. At the same time any number to the zeroth power (3^0, for instance) is always 1. This gave me the anchorage for applying Euler’s method—two values that, as it were, bracket any fractional decimal exponent: 0.7 is greater than 0, the exponent that yields 1, and less than 1, the exponent that yields the number itself.

Therefore I started with three numbers. The first is 1. The second is the number to be raised. Third is the decimal fraction to which I want to raise the second.

I arranged the first two as follows:

Col A (Numbers)    Col B (Powers)
1                  0
3                  1

I wished to raise 3 to the power of 0.7 (3^0.7). 0.7 then became my target in Column B—and the value corresponding to it in Column A would then become my answer.

The first two, and their powers, are knowns. Their powers are obtained without any calculation. The number and the target are supplied by the user, in this case me.

The procedure is as follows:

In Col A we multiply 1 * 3 and take the square root of the result (=1.7320508). The formula is SQRT(1*3).

In Col B, we add 0 and 1 and then divide by 2 (= 0.5). The formula is (0 + 1)/2.

The results are entered as the next item in each column:

1            0
3            1
1.7320508    0.5

Our target is 0.7. In the next step, we look in Column B for one value that is higher than the target and one that is lower— and then repeat the process. In this case the next step is to obtain the square root of (3 * 1.7320508) and to divide (1 + 0.5) by 2. The results are shown below. Needless to say, if our last value is the target, we don’t have to go on.

1            0
3            1
1.7320508    0.5
2.2795051    0.75

Once more we find the two values that bracket our target most closely (0.5 is lower, 0.75 is higher) and once more undergo the process.

What we are looking for is 0.7 in column B. When it appears, we’re done—and our answer is in Column A.

This process is automated as a computer algorithm in the spreadsheet I offer for downloading. The program is set to run a maximum of 35 iterations. It may cycle fewer times if the exponent in Column B matches our target to 10 decimal places. In the case shown above, the answer after 35 cycles is:

1              0
3              1
1.7320508      0.5
2.2795051      0.75
. . .          . . .
2.157669279    0.6999999997

My TI-36X Scientific Calculator produces 2.15766928; so does Excel. Both round one decimal earlier.
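
Here, for the curious, is a minimal Python sketch of the same procedure (the spreadsheet’s own code is in Visual Basic); the function name, the bracketing variables, and the 35-iteration cap are my own rendering of the description above, not the original program.

from math import sqrt

def fractional_power(base, exponent, iterations=35):
    """Raise `base` to an exponent between 0 and 1 using only multiplication
    and square roots, by repeatedly halving the bracket around the target."""
    low_num, low_exp = 1.0, 0.0            # base^0 = 1
    high_num, high_exp = float(base), 1.0  # base^1 = base
    mid_num = low_num
    for _ in range(iterations):
        mid_num = sqrt(low_num * high_num)    # Column A: geometric mean
        mid_exp = (low_exp + high_exp) / 2.0  # Column B: average of the powers
        if mid_exp < exponent:                # keep the pair that brackets the target
            low_num, low_exp = mid_num, mid_exp
        else:
            high_num, high_exp = mid_num, mid_exp
    return mid_num

print(fractional_power(3, 0.7))   # roughly 2.1576692...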

Now it turns out, of course, that in “real life,” meaning life before electronic calculators, finding square roots was an equally painful and slow process of division and multiplication until the right number of digits was found. Which tells me why, in the post-Renaissance period, western humanity’s mad nerds (whose characteristics I seem to share) developed all kinds of helpful tables, not least root tables and logarithms.

Have as much fun as I did!
---------------
Added later: In this later post I describe the process of getting your own square roots by hand as well...

Saturday, April 23, 2011

The Moral Hazards of Insurance

Wikipedia’s article on the history of insurance (here) contains a paragraph under the heading Moral Hazard in which only the risks to the insurer are mentioned. People take out fire insurance and then, later, set fire to the house to collect the insurance. Suicide and murder are committed to collect life insurance. People lie on their applications. An article on the business page of today’s New York Times, “Not All Homeowners’ Policies Are Alike,” reminded me of moral hazard once again. It can cut both ways. I noticed years ago that moral hazard is a kind of center around which this industry necessarily rotates. Companies are very eager to sell insurance but display a marked reluctance to pay out.

The article tells of the labors of Daniel Schwarcz, a professor at the University of Minnesota Law School, to get uniform language adopted in homeowners’ policies. The traditional language evidently insured the buyer against “direct physical loss to property,” but in many policies modified language has been substituted calling for “sudden and accidental direct loss to property.” Under this new clause, according to the article, a homeowner might be denied coverage of damage due to vandalism (not accidental) or the fall of an old tree (not sudden but long in coming). The companies, according to Schwarcz, are rigging the language to aid denial of coverage by way of the ambiguities thus introduced.

My 1956 Encyclopedia Britannica tells me that fire insurance, the core of homeowners’ insurance, expanded in the twentieth century to cover damage from wind, water, and explosions. “The expansion of this branch of fire insurance company operation was particularly marked in the United States,” the EB says. Back then already. Now things are changing back—but in a fuzzy sort of way, by means of linguistic changes.

Whereas, I might here underline, the whole foundation of insurance has always been clarity. You’re either alive or dead, and life insurance doesn’t pay out when you’re simply ill. The object insured must be clearly definable: this house, this ship. Ambiguity is best left out of the policies—rather than increased by adding words the meaning of which gets rapidly foggy as soon as real dollars must be handed over.

Moral hazard is implicit in this business when those insured and those insuring are distinct and different entities. Like it or not, an insurance company’s best-case scenario is one in which no losses are incurred at all. Therefore the temptation to limit payout is always present within the company—and indeed machinery is also present to keep that payout to a minimum and to delay payment as long as possible. Whereas the best case for the buyer of insurance is to get paid fully for the loss, promptly, and in a sum equal to the actual replacement value of an equivalent property, in an equivalent neighborhood, today.

Insurance has always had two forms—the commercial and the mutual kind. What we have today is mostly of the former. The latter, mutual societies, once called “friendly societies” and “benevolent societies,” were owned by the members themselves. They made all of the contributions and also decided on controversial payouts. In such cases thinking up schemes under which the company can refuse full or even partial payment will not automatically benefit some bright new fellow hoping soon to become vice president.

Friday, April 22, 2011

Legislated Pi

I mentioned the other day (here) meeting a lawyer once who could not believe in the indestructibility of matter. Today I chanced across an amusing but true story in Jan Gullberg’s Mathematics from the Birth of Numbers. The story concerns the value of π, which is 3.141593…. That number, multiplied by the square of a circle’s radius, produces the area of the circle. The dots behind the digits indicate an irrational number. It never terminates and does not have a repeating pattern of decimals. It’s an awkward number—was so especially before electronic calculators came into use—yet used in all manner of ways in math. It was also of great interest for those who thought that they could gain fame and glory by squaring the circle.

Well, the story has it that a physician in Indiana, around 1896, concluded that simplifying π to the value of 3.2 could greatly benefit humanity in all manner of ways. Indeed he thought that he might make money from this idea. With that in mind he found an influential person in Indiana’s state legislature who introduced a bill defining π as having the value of 3.2 hereafter. The state house passed the bill 67 to 0. As luck would have it, a professor of mathematics just happened to be present at the legislature when the Senate was about to begin its debate on this bill. In a pause in the debate, he succeeded in persuading several senators that the bill was nonsense. The upshot was that the debate was postponed “until a later date.” The legislature of Indiana has not since returned to the subject.

My author generously omits all names in this account, except the name of the state; however, humility is indicated for each and every one of us, no matter where we live. There is truth in saying that we get the legislatures we deserve.

Thursday, April 21, 2011

Danse Macabre

The New York Times this morning breathlessly tells me that another Mideast Peace Plan is in the offing. More. Evidently President Obama and Prime Minister Netanyahu are racing to be the first to unveil one—evidently because he who’s first wins, the other loses. Obama (of course) will make a well-framed speech. Netanyahu will address the U.S. Congress. Horse race, conflict, Israel, Peace Plan, edge of chair, stay tuned. Danse Macabre.

Now if the madness were merely in the media, it wouldn’t matter much. If a genuine peace plan were actually in the offing and the parties were genuinely serious concerning the actual plan, and its outcome, rather than serious about other things, I’d fault the media for failing to dig deeper and to tell the public something of substance. But here the situation is so clear and evident, has repeated itself so often, and is rooted so firmly in relative power relationships that yet another repetition of the Danse Macabre is not worth noting at all.

Netanyahu must do something to keep the money flowing. In 2008 Israel received $5.38 billion in economic and military aid and in grants and credits—behind only Iraq and Afghanistan, countries we are actually running or trying to run. In 2009 Israel’s grants and credits were $1.992 billion, down from $2.955 billion in 2008. Its economic/military grants are also down, from $2.508 billion in 2007 to $2.425 billion in 2008; no economic/military aid data were available for 2009 from my source here. This means that our 2008 largesse was worth 2.6 percent of Israel’s GDP in 2010. So much for Netanyahu’s motives. As for his intentions actually to cede land to the Palestinians, to “tear down that wall,” to echo a former President, or to stop building on Palestinian lands—those intentions are non-existent. Therefore it must be about the money—and, as we know, paper and talk are cheap. You can tear up the paper later and un-say the words in Hebrew.

Now as for our President, he appears to suffer from the delusion that speeches are action. And now that—a mere two years and four months into his first term—the news is all about an election still 19 months in the future, it is high time for Obama to do something about his legacy. Every president must decorate his legacy with a bit of glass diamond called the Mideast Peace Plan, and Obama will deludedly believe that to make a good speech is to act. So let not a Netanyahu steal a march. Out with the brilliant ideas, stunningly delivered.

Danse Macabre. I wish I could sit this one out.

Tuesday, April 19, 2011

House-Garage Distance Indicator

Houses are quite close to garages, but two hundred years ago the distance between functionally equivalent structures (house and stable) was much greater. Two hundred years from now, I predict, the HGDI will once more return to 1811 levels. Why do I say this? Well, the smell of gasoline in a sealed tank is almost impossible to discern, but the smell of horse manure in a stable calls much more attention to itself. And in two hundred years we won’t have any gasoline left but we’ll still have horse manure.

Sunday, April 17, 2011

Curious Alliances: Pakistan

Yet another uproar concerning events in Pakistan made me look up numbers. A drone attack killed 40 Pakistani civilians. Coinciding with this event, a Pakistani court released a CIA official charged with shooting two men dead at an intersection. He was released by the court after blood money had been paid to relatives of the two victims. The payment of blood money is legal in Pakistan—legal, yes, but the Pakistani street has a more, shall we say, nuanced understanding of such things and hence erupted.

These are very curious alliances. Consider, for instance, that in the period 2001 through 2008, $5.9 billion in U.S. tax dollars went to Pakistan as foreign and military aid. In the 2003-2008 period an additional $2.9 billion found its way to Pakistan in the form of grants and credits. Looking only at the years for which I have data in both categories—2003 through 2008—our support of Pakistan totaled $7.6 billion.

Here is a graphic of this aid/grant support from the Statistical Abstract (here) year by year. Note here that my source did not show grants and credits for the years 2001 and 2002.


The $1.655 billion in aid and grants for 2008 is equal to almost one percent of Pakistan’s GDP. If they had given us aid and grants proportional to our GDP, they would have had to send us $136 billion.† That would be worth approximately 25 percent of the Pentagon’s base budget in 2010. The upper levels of Pakistan’s ruling elite need our aid to keep themselves in power—or it certainly helps.

But let’s try to see this from the perspective of the ordinary Pakistani. Suppose that China were giving us $136 billion in aid and grants. And their military had killed 40 people on the outskirts of Springfield, Illinois, and one of their agents had gunned down two people at an intersection in Lawrence, Kansas—and was then released scot-free except for paying a fine. How would we view all this? Wouldn’t we be ready for an American Spring?
----------
†Here is the arithmetic. Pakistan’s GDP: $177.902 billion; U.S. aid and grants: $1.655 billion. As a percent of Pakistan’s GDP: 0.93 percent. U.S. GDP of $14,624 billion times 0.0093 equals $136 billion. U.S. 2010 Pentagon base budget: $533.8 billion. $136 divided by $533.8 equals 0.255; times 100, that is 25.5 percent.

Saturday, April 16, 2011

Remembering Donora, Love Canal

In October of 1948, in Donora and in Webster, Pennsylvania, a temperature inversion produced an air pollution disaster. Layers of warm air high above the Monongahela river valley trapped masses of cool air full of smoke from burning coal in a steel mill and a zinc smelter. Some twenty people actually died, and around seven thousand residents of the area became ill. Those who’ve worked professionally in pollution control will know that it was Donora that triggered the earliest serious air pollution control efforts in this country.

It was also in October 1948 that Paul Richard Le Page, currently governor of Maine, saw the light of day. Today my paper tells me that Le Page “announced a 63-point plan to cut environmental regulations.” One of these was suspending a law to monitor toxic chemicals in children’s products (NYT, this date, p. 1).

Governor Le Page, to be sure, was not yet five when, in April of 1953, Hooker Chemical sold a piece of land to the Niagara Falls School District to build a new school. There was a little wrinkle here. Hooker had used that land to bury toxic wastes. Indeed, Hooker even drilled some holes and demonstrated to the school board that toxic wastes were, really, under ground. The school board, however, insisted on buying the land. Hooker agreed but inserted a paragraph in the purchase agreement in an attempt to free itself of future liabilities. That land later became very famous as Love Canal—but Le Page was still too young to take in all of the hoopla that arose from it. And as for governors and such, we don’t insist that they must pass examinations to prove their competence. Mere popularity is sufficient to inaugurate them into office with all due ceremony.

My point here, simply, is that environmental regulations came about for a reason. And that reason is not that people in government, with too little to do, dream up ways of obstructing the holy march of the Free Market to the divine radiance of a future millennium. No. The reason for environmental regulations was tawdry ordinary things like people suffocating, children dying, and mothers giving birth to babies with birth defects.

But what can you do? Our educational system is evidently failing—and has been for a while. I well remember, I can never forget, a speech I once gave in a small town in Kansas or Arkansas or someplace like that, in the 1970s. On environmental subjects. And a distinguished-looking grey-haired elderly lawyer in attendance challenged me when I said that burning things does not destroy them—that the same matter, stuff, material still existed after burning as had existed before. He wouldn’t buy that—despite my saying that if we surrounded a fire with a great big balloon and then, later, chemically measured the amount of carbon present there in gaseous form we would still have the same amount of carbon inside the bubble as we had had before the fire had been started. This he wouldn’t buy.

Do we have to kill people every other generation or so before we’re permitted to regulate emissions, dumping, and even food purity? Evidently so.

Another point of mine is that government is there for a reason too. It’s not some sort of toxic substance we must get rid of or make as small as possible. But it’s a free country, isn’t it? And qualifications for voting are a non-starter.

Thursday, April 14, 2011

To Serve and to Newspeak

The episode passed a little too quickly for much notice by our all-too-busy press, but here is what went down on March 13, thus a month ago. Philip J. Crowley, the State Department spokesman, resigned after having described the treatment of PFC Bradley Manning, accused of leaking documents to Wikileaks, as “ridiculous and counterproductive and stupid.” Manning was kept (maybe still is) naked and in solitary confinement; the stated grounds for this treatment are to prevent him from injuring himself. He has yet to be tried.

Crowley resigned because we have a very curious but unwritten law that all those in public service must obey. They must learn and flawlessly speak Newspeak, more specifically to say in public only what, back in the bad-old communist days, used to be called the party line. They must never even indirectly criticize policy because, the fiction is, all policy in the executive branch directly links back to the President. And you don’t serve the public as a public servant, you serve the President. Right?

A brief wiggle in the news is likely now because the German Parliament’s Committee for Human Rights and Humanitarian Assistance wrote a letter to President Obama on April 12 and made it public, albeit in German, today. The letter is signed by Tom Koenigs, the committee chair. If you read German, you can see it here. But I’ll save you the bother. Here it is in English:

Dear Mr. President,

I turn to you concerning the conditions under which Bradley Manning is being held in investigatory confinement at the Quantico Marine Corps Base in Virginia. I do this in the name of the members of the Committee for Human Rights and Humanitarian Assistance of the German Parliament. Based on information we have, Mr. Manning’s conditions of confinement are unnecessarily hard and have a punitive character. According to our information he is confined in an isolation cell without cushion or blankets and is undergoing sleep deprivation. In addition, on the grounds of alleged danger of suicide, his clothing has been removed. The circumstances of his confinement thus violate Article 10 of the International Pact for Civil and Political Rights (IPbpR), which the United States has ratified. According to Art. 10 IPbpR it is required that “all persons who have been deprived of freedom be handled with humanity and respect for human dignity.” With this as background, I would like to ask you, in the name of my colleagues, to look into the conditions of Mr. Manning’s confinement and to ensure a humane implementation of the same.

With friendly greetings, Tom Koenigs
This would seem to indicate that Mr. Crowley not only violated Newspeak and thus indirectly insulted Big Brother, for which resignation would seem barely sufficient punishment, but also committed whistle-blowing, though with a great lack of precision—in that he failed to cite chapter and verse of the applicable IPbpR.

I hope all those Arabs, enjoying what media label The Arab Spring, will carefully study this matter in efforts to learn how to imitate the behavior of genuine democrats and to turn themselves into our worthy successors.

Wednesday, April 13, 2011

Statistical Abstract: How Many Copies Are Sold?

The one statistic you won’t find in the Statistical Abstract of the United States is the number of copies of that publication actually sold. The Statistical Abstract is much loved by data mavens; what I discovered is that their number is tiny.

The book is physically published and distributed by the Government Printing Office. The GPO also proudly lists the Stat Ab as one of its best sellers. Indeed it ranks third of the top twenty-five. The top three are Your Federal Income Tax for Individuals 2010, The Financial Crisis Inquiry Report, and the Statistical Abstract, 2011. But GPO’s own bragging list doesn’t offer any numbers.

I bent myself into a pretzel and, finally, did manage to find an answer of sorts in a draft document put on the web by the GPO itself. The document is part of some sort of report to Congress, I think. Here I found some numbers on the FY 2004/2005 sales of that book by the GPO. Mind you, the Stat Ab is also sold as a CD, but I couldn’t find anything on CD sales of the product. And since the rise of the Internet, most people, I would say, use the electronic versions of the book readily available there. But those who would axe the Stat Ab will point to the figures I am about to unveil. Herewith the data for FY 2004/2005:

Paperback version: 2,189
Cloth version: 1,135
Standing orders for paperback: 473
Standing orders for cloth: 670

The unimpressive total for the paperback and cloth versions—at least from a commercial point of view—is therefore 3,324.

Now the American Library Association tells me (here) that there are 122,101 libraries in the United States. Of these 9,221 are public libraries. If all of GPO’s copies are bought by public libraries, just over a third bother to buy it. But it’s not only libraries that buy it. At my old alma mater, Editorial Code and Data, Inc., we bought and still buy a paper copy, although in practice we use the CD. But the only visible political support for the publication of this book appears to come from librarians. The elites among them, I would guess.

What I’ve gotten out of this painful experience—the threat to the good old Stat Ab—is a realization of the size of the community committed to preserve and maintain the statistical lens by which we see the otherwise invisible (because they are too huge) structures of modernity.

Changing News Habits

Talking of the Statistical Abstract, I chanced across its Table 1134 the other day in the 2011 edition, accessible here. It brings us information on newspapers from the yearbook of Editor & Publisher going back to 1970. I’ve indexed circulation and U.S. population for the period 1970-2000 using the decade intervals provided (plus one 5-year interval), and then annually from 2002 through 2008. Both U.S. population and daily newspaper circulation are indexed at 100 in 1970, with the changes indicated from that year forward. In 1970 circulation stood at 62.1 million daily; our population was 205 million. In 2008 circulation had dropped to 48.6 million while population had increased to 304 million.


Our habits are changing. To tell the truth, exactly what it means is anybody’s guess. Some people who no longer get the paper still read it on electronic devices. Others who still get the paper—as we get the Detroit News—don’t read the damned thing. It usually sits on a chair still in its red plastic wrapper, sometimes for days on end—unless some necessary shopping causes Brigitte to find relevant ads. We don’t read it because its ink-to-information ratio has radically changed. We still read the New York Times, and daily. But many millions of others get their awareness of the news from radio, television, the Internet, and their cell phones.

I don’t read gloom and doom into the drop, drop, drop, the Chinese water torture our old print media now suffer. I never actually believed in the presence of a vast, well-informed, and highly responsible citizenry trembling with eagerness to do good. Responsible elites are always very small. To get a kind of indication of that, take a look at the next post. There you will learn something about the circulation of the Statistical Abstract itself.

Monday, April 11, 2011

Save the Statistical Abstract

Very disturbing news came to me by way of an observant former colleague. She pointed me to this site. It reports that the Obama Administration has evidently decided to nix the Statistical Abstract. The request to kill one of the oldest products of the Bureau, and one most used by the public, is in black and white in the U.S. Census Bureau’s Budget Estimates, presented to Congress, for Fiscal Year 2012. Do you want to read the words? Okay. Here is the text itself (pages 79-80), and here is what it says (FTE stands for full-time equivalent employees):

1. Terminate Statistical Abstract (Base Funding 24 FTE and $2.9 million; Program Change minus 24 FTE and minus $2.9 million): The Census Bureau requests a decrease to terminate the Statistical Abstract program. The FY 2012 budget request is the result of a review of both ongoing and cyclical programs necessary to achieve Department of Commerce and Census Bureau goals and difficult choices had to be made in balancing program needs and fiscal constraints. The availability elsewhere of much of the information in the statistical abstract has led the Department and Census Bureau to the difficult decision to terminate the program.
I have often asserted on LaMarotte, emphatically, indeed with passion, that the statistical functions of the Federal Government—the very best in the world, believe me—are the eyes through which we see. The Statistical Abstract of the United States, to be sure, is an aggregation of data made available to the public, thus the citizenry; doing away with it does not automatically also nix the functions that produce the original data, professionally and thoughtfully assembled by those 24 FTEs. Wow! We can afford to expend $300 million a day on the war in Afghanistan but we can’t afford $2.9 million a year to produce a publication that we have been producing since 1878? The Stat Ab in 2011 is in its 130th edition!

We’ve managed to publish the Statistical Abstract right through World War I, the Depression, World War II, the Korean Conflict, and Vietnam—and now, suddenly, because a bunch of financial speculators majorly screwed up our financial markets—now, suddenly, we have to furlough 24 people and shed $2.9 million? Give. Me. A. Break!

Even when people lose their jobs and then break their eyeglasses, they do order new ones. We can’t mess with the lens by which we see the world. Save the Stat Ab, you blundering idiots—or surely I’ll—well, impotence is great when you live down here in the so-called real world. But I’ll think of something.

Added Later: Commenter Joyce Simkin left a link to Change.org where you, too, can sign a petition in support of saving the Statistical Abstract. Unfortunately the link she left does not work. It is HERE in proper working order.

Sunday, April 10, 2011

The BEST Index

In an earlier post (here) I sketched the history of the U.S. poverty threshold. I noted there that the threshold was based on the cost of a sustaining diet. The cost of that “food plan,” multiplied by three, gives us a person’s or family’s total expenditures. The measure has not changed substantially since. If you earn enough for “food budget times three,” you are okay. If not, you are in poverty. Attempts to make that measure a lot more sophisticated have not advanced much since 1963, when the measure was first introduced.

Today, thanks to a hat tip from Monique Magee (of Market Size fame, see Blogroll), I can point to a thoughtful and analytically rich approach for re-basing poverty. It is the BEST Index. It is produced by Wider Opportunities for Women (WOW) and supported financially by the Ford and the W.K. Kellogg foundations. WOW’s site is here and its site for BEST here; that last link permits you to download WOW’s extensive tables in Excel format.

The index is based on a much more careful estimation of costs that meet the basic needs of a person, alone or with children—and also, by extension, a two-adult household with or without children. BEST provides much, much more detail than the Census-based poverty tables I’ve also reproduced in an earlier post.

BEST stands for Basic Economic Security Tables. These tables are built up from survey data on housing, utilities, food, transportation, child care, personal and household goods, health care, emergency savings, retirement savings, taxes, and tax credits. The tax credits are deducted. More detail on how these values are determined and calculated—thus, for instance, where WOW obtained its data for auto insurance, and the fact that liability coverage is included but collision coverage is not—can be obtained from this full report. I particularly like WOW's inclusion of savings. Yes! That's the way to think—holistically and comprehensively!

What the BEST Index actually shows is what most of us think when we look at the official poverty tables: they’re far too low. Let me illustrate this by a bar chart:


What we see here is median income levels for 2009 on the left, the poverty thresholds in the middle, and the BEST Index for 2010 on the right. I don’t much like the “median” measure. It means that half earn more and half earn less. I got the median family income figure by averaging data published by the Census Bureau for states; that figure understates the total because low-population states (usually with lower median values) count as much in the average as populous states (usually with higher median values). All other numbers are national.

Note especially that the BEST index for 1 worker is lower than the median earnings of either men or women but substantially higher than the poverty threshold for a single person. Further, BEST’s value for a family of 4 is more than three times the poverty threshold for a family of the same size.

Now, of course, keeping the official poverty rate low keeps support expenditures down—thus eligibility for food stamps, Medicaid, welfare payments. Our poverty rates are intended to minimize budgets rather than to optimize the welfare of the American people.

Saturday, April 9, 2011

Calculating the Log of a Number by Hand

I’ve become interested in logarithms—for a reason. Sometimes, like others, I use log scales to show graphics. The reason is that they make it easier to highlight some features of the data. In the process of writing a still unpublished post on that subject, I got to looking deeper at the origins of logarithms. My own penchant is to understand things from the bottom up. Thus I became aware of the fact that one finds very few resources on the Internet that go deeply into ancient things—whereas the easy and superficial is almost over-documented. My initial wonder arose over fractional exponents. Let me get into that first.

We know that common (base 10) logarithms are “powers of ten.” The log of 10 is 1; it is not multiplied by anything. It is the base. But 100, which is 10 x 10, is written as 10^2, and the log of 100 is 2. Log 1,000 is 3, log 10,000 is 4, and so on. On powers of 10, count the zeroes and you know the logarithm of the number. But what about other numbers? If I ask my spreadsheet or calculator for the log of 2, the answer will be 0.30103. Put another way, it means that 2, expressed as a power of 10, is that fraction, and that 10^0.30103 equals 2. But while the logs of 10 and its powers make sense, and in every case tell me how often to multiply 10 by itself, how does one handle a fractional log? How do I get from 10^0.30103 to the answer, 2, without using a logarithmic table? I can’t picture how a number might be multiplied by itself a fractional number of times.

My quest to understand logs thus began by looking at their originator, John Napier (1550-1617), to see how he came up with those fractions. But it turns out that Napier’s initial logarithm was base 10,000,000—and while tracking his calculations is possible (here is his own account of it), I wanted an example of our own now dominant log to the base of 10. In what follows, therefore, I am echoing Leonhard Euler (1707-1783), and more specifically a fine article by Ed Sandifer available here, the content of which I am rendering for the amateur. In that process, as I will show, one doesn’t really calculate these logs; one finds them, painstakingly, by pulling roots. First I’ll show a spreadsheet that calculates the log of 2. Next I will explain it.

Euler begins with two numbers the logs of which are known from the outset, 1 and 10. The number 2 lies between these two. Their logs are 0 and 1. Euler proceeds by multiplying 1 and 10 and pulling the square root. One times 10 is 10, and the square root of that is 3.16228. He performs the equivalent operation on the logs: adding logs corresponds to multiplying the numbers, therefore 0 + 1 = 1. This he divides by two, the equivalent of pulling the square root. The result is 0.5. One can check this result by using a calculator. Sure enough, log(3.16228) is 0.5. The operations are placed under labels: A, B, and this last one under C.

Now, let us understand. He is looking for 2. His first result, 3.16…, is greater than 2. He next takes the two values closest to 2 on either side, thus the 1 in A and the 3.16… in C. He multiplies them and pulls the square root again, placing this result (1.77828) into the row labeled D. Next he performs the same operation in the log column: 0 + .5 = .5, divided by 2 = .25. And, sure enough, checking that we find that log(1.77828) is .25. So now we’re on our way. Using this method, we proceed to find the two values that lie on either side of 2 and as close to it as possible—those two are in rows C and D—and use them for the next step. I think I’ve gone far enough to make the procedure clear.

When continued—and the illustration documents the steps—we finally get, in row T, the answer we are looking for. We get the number 2 and, in the log column, the value .301029. The actual number Excel gives me, if I take it out to 20 decimal places (the command typed in the cell is =log(2)), is .30102999566398100000; Excel shortens that to .30103.

Let me here indicate the Excel commands used. In the column labeled Actual, the formula in cell B8 and subsequent cells, references changing as indicated by the comment, is:

=SQRT(B7*B6)

In the column labeled Log, the formula in cell C8 and subsequent cells, references changing, is:

=(C6+C7)/2

The reader might fault me for not pulling my square roots manually. Sorry. I cut myself some slack there. That too is a lengthy process with large numbers, but I do know how to do that.
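
Here is the same root-pulling search written out as a small Python sketch; Excel does the identical work with the two formulas shown above. The function and variable names are mine, not Euler’s or Sandifer’s.

from math import sqrt

def log10_by_roots(target, iterations=50):
    """Find log10(target) for 1 < target < 10 using only square roots
    and the averaging of already-known logs (Euler's procedure)."""
    low_num, low_log = 1.0, 0.0      # log(1)  = 0
    high_num, high_log = 10.0, 1.0   # log(10) = 1
    for _ in range(iterations):
        mid_num = sqrt(low_num * high_num)    # column "Actual": =SQRT(B7*B6)
        mid_log = (low_log + high_log) / 2.0  # column "Log":    =(C6+C7)/2
        if mid_num < target:                  # keep the pair that brackets the target
            low_num, low_log = mid_num, mid_log
        else:
            high_num, high_log = mid_num, mid_log
    return (low_log + high_log) / 2.0

print(log10_by_roots(2))   # roughly 0.30103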

Finding the first few logs is difficult and time consuming. But later the very nature of the exponents makes the job go faster. Adding logs corresponds to multiplication, subtracting logs to division. Having the logs of 1, 10, and 2, we can rapidly proceed thus:

• The log of 5 is obtained by dividing 10 by 2, thus 1 - .30103 = .69897.
• The log of 4 is easy too, 2 x 2, thus .30103 + .30103 = .60206.
• The log of 8 is 4 x 2, therefore .30103 + .60206 = .90309.
• The log of 20? No problem. 2 x 10 = .30103 + 1 = 1.30103.
• The log of 40? It is 1.60206.
• And 400? It is 2.60206. Simple, 100 x 4, 2 + .60206.

And so on. Prime numbers, divisible only by 1 and themselves, require the long process; 3 and 7 require the slow calculation. But after that it is easy to calculate the log of 6, 9, 12, 14, 60, 70, 90, and so on. Not that we have to. A thirty-dollar calculator or an Excel spreadsheet has them all. But it is valuable to understand things from the ground up.
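
A quick check of the shortcuts in the list above, again in Python; log2 is taken as the value found the slow way, and math.log10 is used only to verify.

from math import log10

log2 = 0.30103                      # found the slow way above
derived = {
    5:   1 - log2,                  # 10 / 2
    4:   log2 + log2,               # 2 x 2
    8:   log2 + (log2 + log2),      # 4 x 2
    20:  log2 + 1,                  # 2 x 10
    40:  log2 + log2 + 1,           # 4 x 10
    400: 2 + log2 + log2,           # 100 x 4
}
for n, value in derived.items():
    print(n, round(value, 5), round(log10(n), 5))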

One learns in this process that the integer portion of a log always refers to powers of ten (thus of the base), and the fractional part to “fractions” of those powers. The fractions are obtained by division and root pulling, the integers by multiplication.

In this process I also came to understand that the numbers behind the logs form a geometrical series; the terms change by the same ratio. In a log base 10, the geometrical progression runs by factors of 10: 10, 100, 1000, etc. In a log base 2, the progression is by 2: 2, 4, 8, 16. In the first case, integer logs belong to exact powers of 10—all other numbers have a fractional component in their logs. In a log of base 2, integer logs belong to powers of two. The unchanging ratio is the base itself: multiplication by the base going up, division by the base—root pulling, for the fractions—going down.

In an arithmetical series, the terms always change by the same amount. In the decimal system that amount is 1 or a fraction of 1. When an arithmetical series (the logs) is used to index a geometrical series (the numbers), the advantage is that multiplication and division of the numbers may be replaced by adding or subtracting their logs. This very much increases the efficiency of manual calculation—which was Napier’s motivation for developing logarithms in a time when mechanical calculators, never mind lightning-fast electronic devices, were still far in the future.

I’ll indulge my fascination with this subject in future posts as well—and perhaps they will help others too.

Friday, April 8, 2011

Housing Crisis: Up Close and Personal

Acquaintances of ours decided to refinance their home. Now, mind you, this is a prosperous couple, both employed, in good jobs, never any financial problems. The house is wonderfully located. When you hear that famous real estate slogan—Location, Location, Location—in their case the answer for their residence is Yes, Yes, Yes!! So what happened? Interest rates are way down—and therefore they applied for a new loan to take advantage of that and to refinance their property. They are experienced. The last time they did this, the bank wanted to lend them much more than they had asked for. But at that time they had said, “No, thanks.” This time the bank refused to lend. Why? Here is a graphic that tells us why:


In a single sentence: the appraised value of their home had declined substantially over the recent four-year period, as shown. This is not a made-up example. Everything you see is based on actual records—although the dark blue bars are calculated to show everything in constant dollars.

How could this have happened? Well, appraised valuation is based on the actual selling prices of other residences in the same area. Appraisers must base their judgment on hard numbers, and those numbers come from recent actual sales. In that area, as in most others, the houses that move are foreclosed residences being sold at distressed, below-rock-bottom prices. Hence their house is now worth not only far less than the price they paid for it—its current value is even lower than the mortgage still left on it. They’ve lost not only the totality of their equity but, as you can see from the 2011 bars, they owe payments on a substantial bit of nothing.

Fancy-dancy talk about the Market and Bubbles and such does not void the judgment that not only our friends but literally millions of homeowners have been defrauded by their government—which, in the name of free markets and the wisdom of our financiers, has allowed fraud and speculation to spread unchecked. Yes. This house might not have appreciated as much absent this fraudulent background. But the value of this residence wouldn’t have dropped out of sight as it has.

Fortunately for them, they don’t have to move. They can wait. And things will gradually turn around. But listening to them talk, this experience has certainly put a chill on their general purchasing behavior. I see that as the inverse of the so-called “wealth effect”—which makes people confident because their property is appreciating. To be sure, the bars will start creeping up again. Slowly, slowly. But there are lots of people out there who must move. As for you, reader, don’t immediately go for the blue, brown, red, beige, or green box where your mortgage stuff is filed—or folders, or piles. It might not be good for your health. Ask your doctor first—as the unending ads never stop telling us…

Tuesday, April 5, 2011

Poverty: A Wider View

Poverty measurement as we know it began in the 1960s. Mollie Orshansky, an economist working for the Social Security Administration—but, nota bene, she had also spent thirteen years working for the U.S. Department of Agriculture—developed the first definition of a poverty threshold in 1963. In 1965 the Office of Economic Opportunity adopted the thresholds. In 1967 the Bureau of the Census began to publish poverty statistics. In 1969 the old Bureau of the Budget (BOB), forerunner of the White House Office of Management and Budget (OMB), gave this measurement official status by directive.

So how did Orshansky calculate what poverty meant? She used as her basis the lowest of four so-called “food plans” the Department of Agriculture had developed, the Economy Food Plan—a minimum adequate diet for families of different sizes. She used the cost of this food plan as her basis. Next she multiplied that cost by three to cover all other expenses. That multiplier also came from Agriculture, specifically a 1955 survey conducted by the Department. It showed that the ratio of food expenses to total expenses for families was about 1:3. The concept is simple enough. If a family’s income is equal to or greater than the Economy Food Plan times three, the family is all right—not well off, but coping. If not, the family is in poverty. This approach has remained essentially unchanged, although Congress has offered, but not passed, legislation to modernize the measure in light of the much changed economy.

With this definition on hand, and data from the Consumer Price Index available for adjusting the purchasing power of food for any year, poverty data have been projected back to 1959. The graphic that follows shows the history of this threshold from 1959 through 2009. The shaded bars indicate recessions.


Three striking features of this graphic:

• A portion of the population keeps slipping in and out of poverty all depending on the economy’s performance.

• The poverty rate (the percent of the population below the threshold) has hovered between 11 and 15 percent since 1966 and has never dropped below 11 percent. That 11 percent thus appears to be people experiencing hard-core poverty—around 33 million people.

• The recent trends are contrary to earlier patterns. Poverty did not ease after the 2001 recession, as it eased in earlier ones. Nor did the rate improve.

The next graphic shows participants in the Food Stamps program, a Department of Agriculture venture. I’m showing data from October 2008 through January 2011. I’ve indicated total persons and households at the beginning and end of this period—and they are essentially the same as the counts of persons and households in poverty. A little food helps when you’re in poverty. I am also showing the average monthly value of the program for participants. The most recent rates were $132.81 per month for persons and $282.84 for households.


The source for the historical graphic is here, the source for the Food Stamps Program graphic (officially the Supplemental Nutrition Assistance Program) is here. The historical data from Census may be obtained here in numerical format. Select Table 2 from the list for an Excel spreadsheet.

Monday, April 4, 2011

Poverty Table 2009

Last year around this time, I published (on the earlier version of this blog) the official Poverty Table for 2008. Now a new table, for 2009, became available in September 2010—and I expect we shall have to wait until the fall to see data for 2010. Therefore these are the latest numbers available. Here is what they look like.

The table is not quite legible, but if you click on it, you can see a much-enlarged version. For convenience, I’m also reporting, below, the weighted average poverty thresholds for the major categories in larger type. The weighted averages are based on the relative number of families in each subcategory (shown above, not here), which depends on the number of children. They provide a ball-park view of poverty thresholds for eleven categories of family size.

The meaning of these numbers is simply that if your income is the one shown on this table—or lower—then you are poor. If higher, then not. Thus if an old man or woman is living alone on $10,290 a year, he or she just misses the mark by a buck! But we have to draw a line somewhere, don't we?

Contemplating these numbers every now and then is a useful exercise in mixed feelings: relief that we are above this line; anguish that some are at or even well below it. And, neutrally, the understanding grows. Now we know what the word poverty means in America these days. The imagination gets a workout when you contemplate—say you are a family of four—getting along on $21,954 a year!

Careful reading of the note under the full table will inform us that poverty thresholds have been lowered in 2009 as against 2008. The reason for this is that changes in the CPI, the Consumer Price Index, are used in the calculation of these thresholds. Now the interesting aspect of using the CPI is that food and fuel have been rising while the index as a whole has dropped since the last round of calculations. But when you have to get along on incomes such as those shown here, food and fuel will tend to be a whole lot bigger a percentage of total income than for those who are floating above the lines drawn here toward the bottom of the economy.

The tabulation I am showing is available from the Bureau of the Census here.

Sunday, April 3, 2011

A Peek Down the Barrel

Brigitte chanced across an article by Verlyn Klinkenborg that appeared in the New York Times back in September of 2006 (here). Klinkenborg is the lyrical observer of life out in the country, but in this article he wrote about gun laws in Minnesota. We were both astonished. To be sure, we’re both aware of Tea-party powerhouse Michelle Bachman, who hails from there, but the extent to which our once-home state had become transmogrified—home of Mondale, the DFL, political right-thought-squared, and sole hold-out of the Reagan sweep—came as a shock. Minnesota has had a “concealed carry” law since 2005, with a “shall issue” provision, too. There are “shall issue” and “may issue” states. In the latter case the person must demonstrate a compelling need to own and carry a gun; in the former a permit will be issued unless the applicant is a felon, etc.

I thought I’d look to see if there are any trends over time that tell us about gun ownership. Is it on the rise? And the consequences of such—are they visible? Our first look will be at permits issued, something tracked by the Justice Department. The source of the data I am using can be found here.


What we see here is a brisk trade in guns. People applied for 8.6 million permits in 1999 and for 10.8 million in 2009. Since the passage of the Brady Act, background checks have been mandatory—and thus we have a way of counting the transactions. In this eleven-year period, people filed 94.2 million applications; 1.6 million received denials, but 92.6 million got the go-ahead. Whether or not they actually purchased a gun is not reported by my source. That’s a lot of guns—when you think about it.

Let’s next look at deaths due to firearms in homicides and legal intervention. That phrase means people who were murdered by firearms, including those who died in firefights with the police—and policemen who died at the hands of criminals.


In this graph—the data come from the Centers for Disease Control and Prevention (here)—we have data up to and including 2007. In this period, deaths assignable to this composite category increased more rapidly than population, from 11,127 in 1999 to 12,983 in 2007. That produces an annual growth rate of 1.9 percent—over against a population growth in the same period of 1.39 percent. The homicide rate, measured in deaths per 100,000 population, therefore also increased, from 3.99 to 4.3.

Let’s last look at a slightly narrower category, murder by kind of weapon used. I have these data from the Bureau of Justice Statistics here.


The numbers here go back to 1976 and end in 2005. (The Justice System is usually somewhat late with its numbers.) The numbers from 1999 forward match the earlier table nicely. Homicide numbers are lower than the numbers for homicides and deaths by legal intervention combined. In 2005, for instance, the last graphic yields 11,346 murders (the sum of handguns and other guns); the earlier CDC data show 12,682 for the broader category. Two points about the last graph. One is that handguns certainly own the murder market. In 14 of the 30 years of this period, handguns alone were involved in more murders than all the other weapons combined—and if we sum up both handguns and other firearms, they represented 64 percent of all murders in the 1976-2005 period; handguns alone accounted for 49 percent.

The second point is that huge bulge in the middle of this graph. Alas, the bulge was not produced by an increase in the number of guns but by the tell-tale tracks left by the Baby Boom as it aged. The Boom passed through its critical 15-34 age group between 1975 and 1990—the range in which most people commit their violent crimes, if any. The big growth in the murder curve is in that period, spilling beyond it just a little.

Having looked at these few patterns, I am persuaded that the gun issue has much more to do with politics—thus with collective feelings of anxiety and/or aggression—than with crime. The use of guns to inflict lethal harm, by contrast, is governed more by demographics than by anything else. Unless, of course, our social fabric gets genuinely unraveled, in which case the presence of millions of guns in private hands will make that unraveling a lot more “interesting.”

Concerning difficult environments, it may interest you to see this site, maintained by Wikipedia. It shows 60 countries around the world ranked by firearms-related death rates. The United States ranks eighth, after the following countries. The death rates vary by year but represent deaths per 100,000: South Africa (74.57), Colombia (51.77), Thailand (33), Guatemala (18.05), Brazil (14.15), Estonia (12.74), and Mexico (12.07). The U.S. rate shown is 10.2 for the year 2004. This rate, also obtained from the CDC, is higher than the rate for 2004 (4.07) that I show above. The difference is that accidental deaths are included here; they are excluded from the homicide-and-legal-intervention category. Guns can be dangerous—even when no violence to other humans is actually intended.

Saturday, April 2, 2011

Employment: Update March 2011

It pleases me to present the first employment update on this new version of LaMarotte. It is a monthly posting. My purpose is to track the recovery. In 2008 and 2009 (the “Great Recession”) we lost a total of 8.66 million jobs. How are we doing in gaining these jobs back—and in once more reaching the employment level of December 2007? To show that is the purpose here. Quite early this month, on April 1st at that, the Bureau of Labor Statistics brought good news (here). We posted a net gain of 216,000 jobs in March. The situation actually improved more than that overall: BLS also revised data for January and February upward, another 12,000 jobs gained. Happily, in March, even manufacturing added jobs.


In this post I also show a pie chart to indicate visually the total jobs recovered. The pie itself represents the 2008-2009 jobs lost (8.66 million), the slice the jobs recovered (1.42 million). That slice has increased from 13.4 to 16.4 percent in yesterday’s report. A thickening slice is good. But we’ve still got a long way to go to reach the situation we enjoyed 3 years and 3 months ago.

Now it is undeniable that the great pit that appeared in 2008 and 2009 was not caused by a natural disaster but by financial malfeasance. It was a human disaster. The still weak upturn in employment, however, is also merely the sick economy finally showing signs of life. It was not the result of enlightened political policies and programs, although these probably helped some. The contrary is being claimed—so that the New York Times headline includes the phrase a “Lift for Obama.” It amazes me that Administrations always take credit for good news but never claim to have caused the bad… It is also a sign of our times that everything that happens is always explained by pointing at political figures. We are a community. All those jobs were created by hundreds of thousands of decisions inside many, many thousands of companies...