The Capitalist’s Dilemma – too much money and no idea of how to use it

 Engagement Ring Luxury Tax Monopoly

 

Who Solved the Capitalist’s Dilemma?
[Via asymco]

In The Capitalist’s Dilemma, Clayton Christensen and Derek van Bever introduce a powerful new theory which explains the relative paucity of growth in developed economies. They draw a causal relationship between the mis-application of capital in pursuit of innovation and the failure to grow.[1]

In particular, they observe that capital is allocated toward the types of innovations which increase efficiency or performance and not toward those which create markets (and hence long-term growth and jobs). This itself is caused by a prioritization and rewarding of performance ratios rather than cash flows, and that in turn is due to a perversion of the purpose of the firm.[2]

For this statement of causality to be confirmed we need to observe whether it predicts measurable phenomena. For instance, we need to see whether companies which create markets apply capital toward market-creating innovations and whether companies which create value through efficiencies or performance improvements hoard abundant capital.

Over the entire global economy, the pattern of capital over-abundance is easy to see. The amount of cash or securities on balance sheets is extraordinary and unprecedented (estimated at $7 Trillion, doubling over a decade). However, growing cash is not a perfect indicator of inactivity. Cash is the by-product of earnings after investment. So if operating profits are growing and investment is growing, but not as fast, then it’s possible to grow cash while still growing investment.

[More]
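To make that last point concrete, here is a small back-of-the-envelope Python illustration of my own (the numbers are invented, not from the asymco piece): even when investment grows every year, cash piles up as long as operating profits grow faster.

    # Hypothetical numbers, purely to illustrate the point quoted above:
    # investment grows 5% a year, operating profit grows 10% a year,
    # and yet the cash balance climbs steadily.
    profit, investment, cash = 100.0, 80.0, 0.0
    for year in range(1, 6):
        cash += profit - investment   # cash retained this year
        print(f"Year {year}: profit {profit:.0f}, investment {investment:.0f}, cash {cash:.0f}")
        profit *= 1.10
        investment *= 1.05

Investment rises every year here, yet cash still accumulates, which is the pattern asymco describes playing out across the whole economy.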

Sure to be a classic. The data certainly seem to support their theory – that capital is no longer a scarce resource and capitalists had better realize this. But capitalists continue to use false narratives and metrics to underutilize capital while lining their pockets. As Adam Smith first discussed, capital is supposed to be used to enhance the wealth of nations, of society as a whole, not just to enhance the wealth of capitalists.

From the article by Christensen:

In our view the crux of the problem is that investments in different types of innovation affect economies (and companies) in very different ways—but are evaluated using the same (flawed) metrics. Specifically, financial markets—and companies themselves—use assessment metrics that make innovations that eliminate jobs more attractive than those that create jobs. We’ll argue that the reliance on those metrics is based on the outdated assumption that capital is, in George Gilder’s language, a “scarce resource” that should be conserved at all costs. But, as we will explain further, capital is no longer in short supply—witness the $1.6 trillion in cash on corporate balance sheets—and, if companies want to maximize returns on it, they must stop behaving as if it were. 

This partly explains the income disparities we see. It also means that the economy is not as strong as it should be, that people are not being hired at the rates they should be, and that the world is in much worse shape than it needs to be.

Rent-seeking – money making money – was abhorred by Adam Smith. Market-creation – money producing new markets – is pretty much what capitalism is supposed to bring into existence.

But too many companies think they can make more money by rent-seeking approaches.

The asymco article tries to produce a better metric to see which companies are creating markets and which are just holding steady. It is a worthwhile effort, but we still need better approaches.

Fixing this will not be easy and will be a large part of the battle we fight for the next 15 years or so.

Another in a series of great Microsoft predictions

microsoft by Robert Scoble

‘Another Nail in Apple’s Coffin’
[Via Daring Fireball]

Harry McCracken, looking back at Microsoft Bob, 15 years after its release:

Analyst Charles Finnie of Volpe, Welty & Co. called Microsoft’s product a threat to the very existence of Microsoft’s competitor in Cupertino. “Bob is going to be another nail in Apple’s coffin unless Apple can somehow raise the standard yet again on the ease-of-use front,” he told the AP.

[More]

I had forgotten about Bob. Has it only been 15 years? Seems much longer. I wonder what Charles Finnie is up to today and whether he is a little embarrassed about the comment.

[Listening to: Melancholy Man from the album "Time Traveller (Disc 2)" by Moody Blues]

Now we have Article 2.0

ruby on rails by luisvilla*
[Crossposted at SpreadingScience]
I will participate in the Elsevier Article 2.0 Contest:
[Via Gobbledygook]

We have been talking a lot about Web 2.0 approaches for scientific papers. Now Elsevier has announced an Article 2.0 Contest:

Demonstrate your best ideas for how scientific research articles should be presented on the web and compete to win great prizes!

The contest runs from September 1st until December 31st. Elsevier will provide 7,500 full-text articles in XML format (through a REST API). The contestants that create the best article presentations (creativity, value-add, ease of use and quality) will win prizes.

This is a very interesting contest, and I plan to participate. I do know enough about programming web pages that I can create something useful in four months. My development platform of choice is Ruby on Rails and Rails has great REST support. I will use the next two months before the contest starts to think about the features I want to implement.

I’m sure that other people are also considering participating in this contest or would like to make suggestions for features. Please contact me by commenting or via email or FriendFeed. This is a great opportunity not only to talk about Science 2.0, but to actually do something about it.

While there are no real rules posted yet, this is intriguing: reformatting a science paper for the Internet. All the information should be there to demonstrate how this new medium can change the way we read articles and disseminate information.
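To get a feel for what an entry might involve, here is a minimal Python sketch of my own (the endpoint, XML element names and article ID are hypothetical placeholders, since the actual API details have not been published): pull an article's XML over REST and re-render it as simple HTML.

    # Minimal sketch: fetch an article's XML and re-render it as HTML.
    # The endpoint and element names below are placeholders, not Elsevier's real API.
    import urllib.request
    import xml.etree.ElementTree as ET

    API_BASE = "https://example.com/article2/articles"   # hypothetical REST endpoint

    def fetch_article(article_id):
        """Retrieve one article's XML document over the (hypothetical) API."""
        with urllib.request.urlopen(f"{API_BASE}/{article_id}.xml") as resp:
            return ET.fromstring(resp.read())

    def render_html(article):
        """Build a bare-bones web presentation: title, abstract, references."""
        title = article.findtext("title", default="(untitled)")
        abstract = article.findtext("abstract", default="")
        refs = [r.text for r in article.findall(".//reference") if r.text]
        html = [f"<h1>{title}</h1>", f"<p>{abstract}</p>", "<h2>References</h2>", "<ol>"]
        html += [f"<li>{r}</li>" for r in refs]
        html.append("</ol>")
        return "\n".join(html)

    print(render_html(fetch_article("example-article-id")))

The interesting work, of course, is everything beyond this: inline figures, forward citations like the Highwire example below, live data, and so on.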

We have already seen a little of this in the way journals published by Highwire Press also include links to more recently published papers that cite the article in question. Take, for example, this paper by a friend of mine: “ULBPs, human ligands of the NKG2D receptor, stimulate tumor immunity with enhancement by IL-15.”

Scroll to the bottom and there are not only links in the references, which look backwards from the paper, but also citations that look forward, to relevant papers published after this one.

So Elsevier has an interesting idea. Just a couple of hang-ups, as brought out in the comments to Martin’s post. Who owns the application afterwards? What sorts of rights do the creators have? This could be a case where Elsevier only has to pay $2500 but gets the equivalent of hundreds if not thousands of hours of development work done by a large group of people.

This works well for Open Source approaches, since the community ‘owns’ the final result. But in this case, it may very well be Elsevier that owns everything, making the $2500 a very small price to pay indeed.

This could, in fact, spur an Open Source approach to redefining how papers are presented on the Internet. PLoS presents its papers in downloadable XML format, so the same sort of process Elsevier is attempting could be done by a community, for the entire community’s enrichment.

And since all of the PLoS papers are Open Access, instead of the limited number that Elsevier decides to choose, we could get a real view of how this medium could boost the transfer of information in scientific papers.

I wonder what an Open Source approach would look like and how it might differ from a commercial one.

*I also wonder what the title of the book actually translates to in Japanese.


Two a day

hard drive platters by oskay
[Crossposted at SpreadingScience]
15 human genomes each week:
[Via Eureka! Science News - Popular science news]

The Wellcome Trust Sanger Institute has sequenced the equivalent of 300 human genomes in just over six months. The Institute has just reached the staggering total of 1,000,000,000,000 letters of genetic code that will be read by researchers worldwide, helping them to understand the role of genes in health and disease. Scientists will be able to answer questions unthinkable even a few years ago and human medical genetics will be transformed.
[More]

Some of this is part of the 1000 Genomes Project, an effort to sequence that many human genomes. This will allow us to gain a tremendous amount of insight into just what it is that makes each of us different or the same.

All this PR really states is that they are now capable of sequencing about 45 billion base pairs of DNA a week (15 genomes at roughly 3 billion base pairs each). They are not applying all of that capability directly to the human genome. While they, or someone, possibly could, the groups involved with the 1000 Genomes Project will take a more statistical approach to speed things up and lower costs.

It starts with in-depth sequencing of a couple of nuclear families (about 6 people). This will be high-resolution sequencing, equivalent to 20 passes over the entire genome of each person. This level of redundancy will help edit out sequencing errors introduced by the techniques themselves. All of this will help the researchers get a better handle on the optimal processes to use.

The second step will look at 180 genomes, but with only 2 sequencing passes each. The high-resolution sequence from the first step will serve as a template for these 180. The goal here is to rapidly identify sequence variation, not necessarily to make sure every nucleotide is sequenced. The hope is that the detail learned in step 1 will let them infer similar detail here without having to re-sequence the same DNA another 18 times.

Once they have these approaches worked out, and have an idea of the level of genetic variation to expect, they will examine just the gene-coding regions of about 1000 people. This will inform them how best to proceed to get a more detailed map of an individual’s genome.

This is because the actual differences between any two humans’ DNA sequences are expected to be quite small. So they want to identify processes that will highlight those differences as rapidly and effectively as possible.

They were hoping to be sequencing the equivalent of 2 human genomes a day, and they are not too far off that mark. At the end of this study, they will have sequenced and deposited into databases 6 trillion bases (a 6 followed by 12 zeroes). In December 2007, GenBank, the largest American database, held a total of 84 billion bases (an 84 followed by 9 zeroes), which took 25 years to produce.
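For what it’s worth, here is the rough arithmetic behind those figures (my own calculation, assuming about 3 billion base pairs per human genome):

    # Back-of-the-envelope check of the numbers quoted above.
    GENOME = 3_000_000_000                # ~3 billion base pairs per human genome (approximate)

    per_week = 15 * GENOME                # Sanger's stated rate of 15 genomes per week
    per_day = per_week / 7

    project_total = 6_000_000_000_000     # 6 trillion bases planned for the project
    genbank_2007 = 84_000_000_000         # GenBank holdings, December 2007

    print(f"{per_week / 1e9:.0f} billion bases per week")        # -> 45
    print(f"{per_day / GENOME:.1f} genomes per day")             # -> ~2.1
    print(f"{project_total / genbank_2007:.0f}x GenBank's size") # -> ~71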

So this effort will add over 60 times as much DNA sequence to the databases as has already been deposited! It plans to do this in only 2 years. The databases, and the tools to examine them, will have to adapt to this huge influx of data.

And, more importantly, the scientists doing the examining will have to appreciate the sheer size of this. It took 13 years to complete the Human Genome Project. Now, 5 years after that project was completed, we can potentially sequence a single human genome in half a day.

The NIH has projected that in 4 years or so the technology will support sequencing a single human genome in 1 day for under $1,000. The members of the 1000 Genomes Project hope to accomplish their work for $30,000 to $50,000 per genome. So the NIH projection may not be too far off.

But what will the databases that store and manipulate this huge amount of data look like? The Sanger Institute is generating 50 terabytes of data a week, according to the PR.

Maybe I should invest in data storage companies.


Ask a question. Fix a problem.

drop by *L*u*z*a*
[Crossposted at SpreadingScience]

How Do I Add FriendFeed Comments to My Blog:
[Via chrisbrogan.com]

Hey, smarter people: how do I add a FriendFeed comments module under my blog comments? I want to see all these great comments. Just found these several days later:

FriendFeed

Man, so many great people saying great things, and I didn’t engage at all. : (

Not only is this blog entry a great example of how to start a conversation (i.e., ask your community), but the comments are a great example of how the conversation progresses. They provide a solution, naturally, but there is also extensive debugging help to get it working. Eventually, the creator of the needed plug-in arrives to help and ends up making his own software better.

So by asking for help, Chris got not only an answer from the community but also troubleshooting that made the product even better. All in less than 24 hours. How is that for a development cycle!


This is important

RNA Tie Club from Alexander Rich

[Crossposted at SpreadingScience]

Kevin Kelly — The Technium:
[Via The Technium]

Scenius is like genius, only embedded in a scene rather than in genes. Brian Eno suggested the word to convey the extreme creativity that groups, places or “scenes” can occasionally generate. His actual definition is: “Scenius stands for the intelligence and the intuition of a whole cultural scene. It is the communal form of the concept of the genius.”

Individuals immersed in a productive scenius will blossom and produce their best work. When buoyed by scenius, you act like genius. Your like-minded peers, and the entire environment inspire you.

The geography of scenius is nurtured by several factors:

Mutual appreciation — Risky moves are applauded by the group, subtlety is appreciated, and friendly competition goads the shy. Scenius can be thought of as the best of peer pressure.
Rapid exchange of tools and techniques — As soon as something is invented, it is flaunted and then shared. Ideas flow quickly because they are flowing inside a common language and sensibility.
Network effects of success — When a record is broken, a hit happens, or breakthrough erupts, the success is claimed by the entire scene. This empowers the scene to further success.
Local tolerance for the novelties — The local “outside” does not push back too hard against the transgressions of the scene. The renegades and mavericks are protected by this buffer zone.

Scenius can erupt almost anywhere, and at different scales: in a corner of a company, in a neighborhood, or in an entire region.
[More]

Kevin discusses a specific instance of scenius, but the idea deserves greater examination, because innovation, creativity and new insights rarely, if ever, come from a single person working in isolation. They happen in a social network made up of the right mix of people to allow innovation to blossom. An important aspect, especially today, is that the scene for this genius does not need to occupy a single physical space; the network can be made up of people who are physically separated.

An example from my neck of the woods involves a single man who was able to create a scenius that transcended location. It starts at Cambridge University in England in the early 1950s. Using their superb intellects and their well-connected social network, Watson and Crick were able to discern the structure of the DNA molecule. They published this in 1953.

Now this great discovery was noticed by a pre-eminent physicist, George Gamow, who, to my mind, is one of the great scientists of the 20th century, not only for his own work but for his impact on other scientists. Here is how Wikipedia starts his entry:

George Gamow (pronounced IPA: [ˈgamof]) (March 4, 1904 – August 19, 1968), born Georgiy Antonovich Gamov (Георгий Антонович Гамов), was a Russian Empire-born theoretical physicist and cosmologist. He discovered alpha decay via quantum tunneling and worked on radioactive decay of the atomic nucleus, star formation, stellar nucleosynthesis, big bang nucleosynthesis, nucleocosmogenesis and genetics.

Nice, wide-ranging scientific career. Look at his accomplishments (again from Wikipedia):

Gamow produced an important cosmogony paper with his student Ralph Alpher, which was published as “The Origin of Chemical Elements” (Physical Review, April 1, 1948). This paper became known as the Alpher-Bethe-Gamow theory. (Gamow had added the name of Hans Bethe, listed on the article as “H. Bethe, Cornell University, Ithaca, New York” (who had not had any role in the paper) to make a pun on the first three letters of the Greek alphabet, alpha beta gamma.)

The paper outlined how the present levels of hydrogen and helium in the universe (which are thought to make up over 99% of all matter) could be largely explained by reactions that occurred during the “big bang“. This lent theoretical support to the big bang theory, although it did not explain the presence of elements heavier than helium (this was done later by Fred Hoyle).

In the paper, Gamow made an estimate of the strength of residual cosmic microwave background radiation (CMB). He predicted that the afterglow of big bang would have cooled down after billions of years, filling the universe with a radiation five degrees above absolute zero.

Gamow published another paper in the British journal Nature later in 1948, in which he developed equations for the mass and radius of a primordial galaxy (which typically contains about one hundred billion stars, each with a mass comparable with that of the sun).

Astronomers and scientists did not make any effort to detect this background radiation at that time, due to both a lack of interest and the immaturity of microwave observation. Consequently, Gamow’s prediction in support of the big bang was not substantiated until 1964, when Arno Penzias and Robert Wilson made the accidental discovery for which they were awarded the Nobel Prize in physics in 1978. Their work determined that the universe’s background radiation was 2.7 degrees above absolute zero, just 2.3 degrees lower than Gamow’s 1948 prediction.

I have to love any genius who authors a paper that makes such a great pun. Some of the best geniuses are great tricksters (Feynman loved to pick locks and crack combination safes).

But my story is not about Gamow and the Big Bang theory. I’ll let this passage from Nobelprize.org, discussing the cracking of the genetic code, provide some context for Gamow’s genius and how he created a scenius that spanned continents:

When the structure of DNA was made known, many scientists were eager to read the message hidden in it. One was the Russian physicist George Gamow. Many researchers are ”lone rangers” but Gamow believed that the best way to move forward was through a joint effort, where scientists from different fields shared their ideas and results. In 1954, he founded the “RNA Tie Club.” Its aim was “to solve the riddle of the RNA structure and to understand how it built proteins.”

The brotherhood consisted of 20 regular members (one for each amino acid), and four honorary members (one for each nucleotide in nucleic acid). The members all got woolen neckties, with an embroidered green-and-yellow helix (idea and design by Gamow).

Among the members were many prominent scientists, eight of whom were or became Nobel Laureates. Such examples are James Watson, who in the club received the code PRO for the amino acid proline, Francis Crick (TYR for tyrosine) and Sydney Brenner (VAL for valine). Brenner was awarded the Nobel Prize in Physiology or Medicine as recently as 2002, for his discoveries concerning genetic regulation of organ development and programmed cell death.

Early Ideas Sprung from the “RNA Tie Club”

The members of the club met twice a year, and in the meantime they wrote each other letters where they put forward speculative new ideas, which were not yet ripe enough to be published in scientific journals.

In 1955 Francis Crick proposed his “Adapter Hypothesis,” which suggested that some (so far unknown) structure carried the amino acids and put them in the order corresponding to the sequence in the nucleic acid strand.

Gamow, on the other hand, used mathematics to establish the number of nucleotides that should be necessary to make up the code for one amino acid. He postulated that a three-letter nucleotide code would be enough to define all 20 amino acids.
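It is worth spelling out the arithmetic behind that postulate (my own illustration, not part of the Nobelprize.org text): with four nucleotides, a two-letter code gives only 16 combinations, too few for 20 amino acids, while a three-letter code gives 64, more than enough, which is also why the code turns out to be degenerate.

    # How many amino acids can a k-letter code over 4 nucleotides specify?
    for k in (1, 2, 3):
        print(f"{k}-letter code: 4**{k} = {4**k} possible codons")
    # 1 letter -> 4 (too few), 2 letters -> 16 (still too few), 3 letters -> 64 (room for all 20)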

Eight out of 20 won Nobel prizes (although there are some humorous ways to look at this that give better clues to how this was accomplished). Not bad odds. Much like Kelly’s mountain climbers, the scenius attracts, nourishes and sprouts geniuses. But it is the first scientific scenius I am aware of that was not tethered to a single location, and some very critical things came out of these interactions. For instance, Crick delineated the 20 amino acids used to make up proteins as an intellectual exercise, written out on a pub napkin. He was right.

This group worked hard to figure out how RNA made protein, thus the name RNA Tie Club (Gamow made sure each member had an appropriate tie for their amino acid). They wrote many informal and speculative papers to each other (remember that this was a time when biology and genetics were mainly descriptive; speculation and deductive approaches were not commonly used). Many of these approaches were flat-out wrong. But the errors allowed them to eventually gain some wisdom.

Some of the papers have become part of biology lore, because the speculations turned out to be correct and led to really important breakthroughs in the field. Here is the most important one: Francis Crick’s Adaptor Hypothesis, the paper for the RNA Tie Club that proposed tRNA and a degenerate genetic code as a model. On Degenerate Templates and the Adaptor Hypothesis is one of the most famous unpublished papers I know of.

To get some idea of how this all worked, check out Watson’s response to Crick’s Adaptor paper for the RNA Tie Club. Watson was at CalTech at the time.

Gamow was here for 4 days – rather exhausting as I do not live on Whiskey. Your TIECLUB note arrived during visit. Am not so pessimistic. Dislike adaptors. We must find RNA structure before we give up and return to viscosity and bird watching.

So Gamow, who was at George Washington University at the time, was in California visiting one RNA Tie Club member when the paper from another member arrived. Pretty interesting network.

So many of the early innovations in molecular biology were driven by the interactions of the RNA Tie Club. All because a tricky physicist created a scenius without a specific location. Think what could be accomplished today with such a network using Science 2.0 approaches.

Being able to create and foster such a scenius will be an important capability for many organizations.


Email and time

watch by Darren Hester

[Crossposted at SpreadingScience]

NYT: Businesses Fight the Email Monster They Helped Create:
[Via 43 Folders -]

Lost in E-Mail, Tech Firms Face Self-Made Beast – NYTimes.com
Is Information Overload a Billion Drag on the Economy? – Bits – Technology – New York Times Blog
If you’ve seen the video of my Inbox Zero talk at Google, you may recall the moment when a few attendees start mentioning the hundreds of internal email messages they receive (and send) in a given day. I still remember, because I almost fainted.

Whenever I hear these and similar stories, the same question always comes to mind: “What does a company get out of its employees spending half their day using an email program?” Well, apparently, it’s a question a lot of people are starting to ask. Including Google.

A story in today’s New York Times covers Sili Valley’s new interest in curbing unnecessary interruptions and helping stem the flow of endless data.

Intel and other companies are already experimenting with solutions. Small units at some companies are encouraging workers to check e-mail messages less frequently, to send group messages more judiciously and to avoid letting the drumbeat of digital missives constantly shake up and reorder to-do lists.

A Google software engineer last week introduced E-Mail Addict, an experimental feature for the company’s e-mail service that lets people cut themselves off from their in-boxes for 15 minutes.

A few more stats for you:

A typical information worker who sits at a computer all day turns to his e-mail program more than 50 times and uses instant messaging 77 times…

I’d also draw your attention to this infographic illustrating data points from recent studies on “workers’ efficiency at information-intensive businesses.” 28% of a typical worker’s day is spent on:

Interruptions by things that aren’t urgent or important, like unnecessary e-mail messages — and the time it takes to get back on track.
[More]

As with almost all new technologies, people will have to work things out. Too many people treat email as an immediate task. They will break off the phone call they are on to answer an email.

I’m going to talk about the supposed need to respond to relevant emails sent by colleagues. The near-spamming that can occur with email, where a tremendous amount of time is spent wading through a plethora of irrelevant messages (say 300 or more), is a discussion for another time.

My view has always been that if someone at my organization wants an answer immediately, they can track me down personally, whether I am in my office or not. The next level, a quick answer, can be gotten with a phone call. If I am out, they can leave a message. An email message is the lowest level.

This is because email is supposed to remove time and place from a response. Face-to-face is restricted in both time and space: now, and both of us in my office. Phones remove place but still fix the time: now, but where we are does not matter. Email should only be for messages where both the time and the place are unimportant: I respond at a time and location of my choosing.

If someone sends me a time-sensitive email, they can call me up and tell me to respond to the email <grin>

If I am involved in something, such as my own project, here is the order of priorities that will require me to break off:

  1. Immediately deal with anyone entering my office that needs something done NOW
  2. Let any phone call go to voicemail. I can check the voicemail when it is convenient for ME. If it is important they will leave me one or track me down personally (see 1)
  3. Nothing else

When I send emails, they are either a response to a previous email, an answer to or question for a colleague, an acknowledgement of some event, or some general information to spread (Hey, have you read this article in Nature?). If I need an answer now, I call.

If they are out, I leave a voicemail and may send an email just to make sure there is another route. If it needs an immediate response and I cannot find them, sending an email does not absolve me of my responsibility to find an answer. “Well, I sent them an email” does not solve the problem if it needs an answer now!

I usually do check my email several times a day, but only when it is convenient for me. I control when I respond. One of the benefits of Web 2.0 tools is that they remove the need for people to occupy the same place at the same time for information to be exchanged. Place and/or time are independent. A blog or a wiki disperses information in this way. Email should too, but too many people use it for other purposes.

I need to control when my distractions distract me. Too many people let email interrupt what they are doing. They just cannot seem to leave it alone if they know an unopened email is present. Heck, I’ll even sometimes let a voicemail sit there for a time before checking it. I control when I answer it and will not let that blinking red light determine my response.

I do recognize I am strange in many ways and not typical. However, at least I ‘feel’ like I have some control over these distractions.

Try this exercise once or twice a year: go for a week without wearing or having access to a watch. Many people freak out when they cannot determine NOW just what time it is. But I find it very relaxing, in a Zen kind of way.

Because it turns out that you can easily stay on top of the time without a watch. Timekeepers are found throughout our culture, whether wall clocks, TVs, computers or even cell phones (my son no longer wears a watch; he uses his cell phone to tell time). In fact, cell phones and computers are much better timekeepers because they are updated accurately, usually against atomic clocks.

But this exercise really does demonstrate how few events are dependent on the exact time. Sure there are events where knowing the time is important but it is educational to find out how few these really are.

Email is like a wristwatch. Only check it when absolutely needed. Life is much easier when either time or email can be ignored. I have more important things to do with my 28%!


Stretching data too far

gyre by the SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE
6:1 or None?:
[Via Deep Sea News]

I like Miriam, she is a lady that gets it. Go there now and read her excellent post on the story behind the 6:1 ratio of plastic to plankton that is often touted in the media and why it is flawed.

“Though I admire Algalita’s work, the 6:1 plastic:plankton ratio is deeply flawed. Worse, it is flawed in a direction that undermines Algalita’s credibility: It may vastly underestimate plankton and overestimate plastic. Here’s why, based off the methodology published in Moore et al’s 2001 paper in Marine Pollution Bulletin.”


The post on the 6:1 ratio is an excellent demonstration of how to properly examine a research protocol. There are some very obvious defects in the methodology used, some so great that they put the entire point in doubt. Things like comparing dry weights, when the living organisms are mostly water. In fact, using weight to determine the ratio at all is somewhat misleading.

There are so many other things about the Gyre that are more important than the ratio. The lack of zooplankton, for instance. The lack of species diversity would be another. Using the 6:1 ratio just hurts credibility.

Open and Transparent

hands by Shutr

[Crossposted at SpreadingScience]

Doctors Say ‘I’m Sorry’ Before ‘See You in Court’:
[Via New York Times]

In 40 years as a highly regarded cancer surgeon, Dr. Tapas K. Das Gupta had never made a mistake like this.

As with any doctor, there had been occasional errors in diagnosis or judgment. But never, he said, had he opened up a patient and removed the wrong sliver of tissue, in this case a segment of the eighth rib instead of the ninth.

Once an X-ray provided proof in black and white, Dr. Das Gupta, the 74-year-old chairman of surgical oncology at the University of Illinois Medical Center at Chicago, did something that normally would make hospital lawyers cringe: he acknowledged his mistake to his patient’s face, and told her he was deeply sorry.

Think about what might happen if the lawyers took a lower profile and the doctors admitted their mistakes and were open with their patients. It turns out something significant happens: most people accept the apology and forgive the doctor.

This approach directly contradicts what most lawyers advise.

For decades, malpractice lawyers and insurers have counseled doctors and hospitals to “deny and defend.” Many still warn clients that any admission of fault, or even expression of regret, is likely to invite litigation and imperil careers.

But with providers choking on malpractice costs and consumers demanding action against medical errors, a handful of prominent academic medical centers, like Johns Hopkins and Stanford, are trying a disarming approach.

People get really angry when they find out the error was concealed and that it might happen again. As with political scandals, it is the coverup that causes the problems.

So what happens if the doctors and hospitals are open with their patients?

At the University of Michigan Health System, one of the first to experiment with full disclosure, existing claims and lawsuits dropped to 83 in August 2007 from 262 in August 2001, said Richard C. Boothman, the medical center’s chief risk officer.

“Improving patient safety and patient communication is more likely to cure the malpractice crisis than defensiveness and denial,” Mr. Boothman said.

Mr. Boothman emphasized that he could not know whether the decline was due to disclosure or safer medicine, or both. But the hospital’s legal defense costs and the money it must set aside to pay claims have each been cut by two-thirds, he said. The time taken to dispose of cases has been halved.

The number of malpractice filings against the University of Illinois has dropped by half since it started its program just over two years ago, said Dr. Timothy B. McDonald, the hospital’s chief safety and risk officer. In the 37 cases where the hospital acknowledged a preventable error and apologized, only one patient has filed suit. Only six settlements have exceeded the hospital’s medical and related expenses.

From 262 to 83 in 6 years. Defense costs down by two-thirds. Malpractice filings cut in half. These are game-changing numbers, in the completely opposite direction from what the lawyers said would happen.

The hospitals have also taken to following up the apology with fair compensation. This has had the effect of changing the behavior of malpractice attorneys.

There also has been an attitudinal shift among plaintiff’s lawyers who recognize that injured clients benefit when they are compensated quickly, even if for less. That is particularly true now that most states have placed limits on non-economic damages.

In Michigan, trial lawyers have come to understand that Mr. Boothman will offer prompt and fair compensation for real negligence but will give no quarter in defending doctors when the hospital believes that the care was appropriate.

“The filing of a lawsuit at the University of Michigan is now the last option, whereas with other hospitals it tends to be the first and only option,” said Norman D. Tucker, a trial lawyer in Southfield, Mich. “We might give cases a second look before filing because if it’s not going to settle quickly, tighten up your cinch. It’s probably going to be a long ride.”

In all likelihood, more money ends up in the patient’s pocket and less in lawyer fees. As long as the awards are also open, so that the hospitals cannot manipulate the settlements too much, and people can really see that they are not committing the same errors again and again, this beneficial cycle should not only drive malpractice suits lower but also improve care in the hospitals.

Quality improvement committees openly examine cases that once would have vanished into sealed courthouse files. Errors become teaching opportunities rather than badges of shame.

“I think this is the key to patient safety in the country,” Dr. McDonald said. “If you do this with a transparent point of view, you’re more likely to figure out what’s wrong and put processes in place to improve it.”

For instance, he said, a sponge left inside a patient led the hospital to start X-raying patients during and after surgery. Eight objects have been found, one of them an electrode that dislodged from a baby’s scalp during a Caesarian section in 2006.

This looks like a program that could have huge effects across the country. By admitting their errors and treating the patients like rational human beings, the doctors remove themselves from antagonistic relationships, the hospitals spend less money on lawsuits and the standard of care goes up.

All by showing a little openness and transparency.


East-bound Train

Train by kevindooley

Train a Comin’ to Snohomish?:
[Via All Today's News - Sightline Daily]

The idea to start commuter train service between Snohomish and Bellevue has piqued interest among residents. Tonight, the City Council is set to hold a workshop on the plan.
[More]

Well, I am sure I will be retired by the time this comes to fruition, but I can hope, can’t I? The Eastside often gets the short end of the Puget Sound stick when it comes to commuter options, but there must be more that can be done besides building more roads.

