Winning the Hardware-Software Game - 2nd Edition

Using Game Theory to Optimize the Pace of New Technology Adoption
  • How do you encourage speedier adoption of your product or service?
  • How do you increase the value your product or service creates for your customers?
  • How do you extract more of the value created by your product or service for yourself?



Information Sets

Faulty Sampling


A recent article in the NYT, “Weighing Medical Costs of End-of-Life Care” by Reed Abelson, uses the cases of two hospitals, UCLA and the Mayo Clinic, to discuss the issue of how to provide cost-effective medical care:

[C]ritics in the Obama administration and elsewhere who talk about how much money the nation wastes on needless tests and futile procedures. They like to note that U.C.L.A. is perennially near the top of widely cited data, compiled by researchers at Dartmouth, ranking medical centers that spend the most on end-of-life care but seem to have no better results than hospitals spending much less…

According to Dartmouth, Medicare pays about $50,000 during a patient’s last six months of care by U.C.L.A., where patients may be seen by dozens of different specialists and spend weeks in the hospital before they die. By contrast, the figure is about $25,000 at the Mayo Clinic in Rochester, Minn., where doctors closely coordinate care, are slow to bring in specialists and aim to avoid expensive treatments that offer little or no benefit to a patient…

By some estimates, the country could save $700 billion a year if hospitals like U.C.L.A. behaved more like Mayo. High medical bills for Medicare patients’ final year of life account for about a quarter of the program’s total spending. Under the House health care legislation pending in Congress, hospitals providing more cost-effective care would be rewarded, while hospitals identified as high-cost centers might even be penalized, perhaps receiving lower payments from the government...

[T]he Dartmouth end-of-life analysis … considers only the costs of treating patients who have died. Remarkably, it pays no attention to the ones who survive … The Dartmouth analysis prompted [another] study [by UCLA affiliates] of why some hospitals spent so much more on dying patients than others and what they got from their efforts … What they found seemed to contradict the Dartmouth thesis. The hospital that spent the most on heart failure patients had one-third fewer deaths after six months of an initial hospital stay … When looking at all patients hospitalized for heart failure, for example, the variation in use of resources was 27 to 44 percent lower than when they looked at only the patients who died...

[UCLA affiliates also noted] that there are also fundamental socioeconomic differences between patients in the poorer sections of Los Angeles and those in the Mayo Clinic’s small and solidly middle-class hometown of Rochester, Minn. … [and] health care costs are significantly higher in areas of poverty…

“Sometimes more medical care is better … but the question is when.”

The discussion in the article highlights two critical errors that frequently pop up in data analyses: confusing information sets and comparing disparate samples. Both errors serve to muddle, if not completely invalidate, the analyses’ reported results.


Information Sets

An information set contains all the (important) things that are known at a specific point in time. Naturally, information sets change over time as events and actions occur.

Looking forward, no one can know what is going to happen in the future. One can only use the information available at the time, that is, ex ante or before-the-fact information, to make a best guess as to what the world will look like in a week or a month or a year. It is only after events and actions occur, that is, after we know what the ex post or after-the-fact information is, that we know for sure what the world actually looks like at that time.

Yet, people are frequently forced to make decisions in the present to prepare for the future, while not knowing exactly what the future will look like. While people generally use the ex ante information available to them to make the best decisions they can under conditions of uncertainty, they often end up being negatively judged, on the basis of ex post information used by others, for what turn out to be the wrong decisions. This is a well-known faux pas referred to as Monday-morning quarterbacking.

1st Example of How Information Sets Affect Decisions

My favorite illustration of the importance of understanding and distinguishing information sets is the lottery example.

Suppose there is a lottery that will be held on Sunday. The winner of the lottery will receive $1 million cash, the odds of winning the lottery are 1 in a million, and tickets for the lottery cost $1 each (that is, they are sold for an actuarially fair price). Tickets for the lottery may be purchased anytime during the week (Monday through Saturday) before the Sunday of the drawing.

Suppose you pay a dollar and buy a ticket on Monday, on Wednesday someone steals your ticket, and then on Sunday the drawing takes place, and your (stolen) ticket wins.


To illustrate the importance of distinguishing how information sets differ at different points in time, consider the following question:

Suppose the guy who stole your ticket is captured, but he no longer has the ticket, and he is forced to pay you restitution. How much should he be forced to pay you?

Does your answer depend on the timing of the order for restitution? It should. The value of the stolen ticket is $1 up until the time the drawing is held, and the value is $1 million after that. So if restitution is ordered based on information available before the drawing is held, the thief is only liable for $1, but if restitution must be paid based on information available after the drawing is held, the thief must pay you $1 million.
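To make the two information sets concrete, here is a minimal Python sketch using the numbers from the example (the winning outcome is the hypothetical one from the story, not something knowable in advance):

```python
# The ticket's value depends on which information set you use.
prize = 1_000_000          # $1 million prize
odds = 1 / 1_000_000       # 1-in-a-million chance of winning

# Ex ante (before the drawing): only the expected value is knowable.
ex_ante_value = odds * prize          # $1, the actuarially fair price

# Ex post (after the drawing): the outcome is known with certainty.
ticket_won = True                     # in the story, the stolen ticket wins
ex_post_value = prize if ticket_won else 0
```

Restitution ordered before Sunday uses `ex_ante_value`; restitution ordered after Sunday uses `ex_post_value`, even though both describe the same ticket.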

Implications for 1st Example

Generally what happens in damages litigation is similar to this scenario. The Defendant takes an action at some point in time (Wednesday in the lottery example) that harms the Plaintiff. The Plaintiff sues the Defendant for damages, but it takes several years for the case to go to trial (the trial takes place after Sunday in the lottery example), during which time the Plaintiff ends up being relatively unsuccessful in his ventures, while the Defendant ends up being relatively successful in his ventures. At the time of trial, the Plaintiff generally uses ex post information to argue for higher damages, while the Defendant uses ex ante information to argue for lower damages.

Theoretically, the Court is trying to enforce a system of laws that will establish proper incentives for the population. In a nutshell, this leads the Court to consider the time at which the Defendant’s wrongful action took place to be the relevant point in time for which information should be used to establish damages.

2nd Example of How Information Sets Affect Decisions

Let’s consider another example that’s closer to the situation discussed in the article. Suppose you’re going to take a vacation, and you’re trying to decide between either going to the beach or going skiing. You have to book your vacation two weeks in advance, and it’s currently three weeks before your vacation will start, which means you have a week to decide where to go. The weather at the beach could be rainy, partly cloudy or sunny. Conditions on the slopes could be no snow, fresh snow, or packed snow. Here’s what happens during the week before you book your tickets:


Based on the information you have during the earlier part of the week, it seems pretty clear that you should probably head for the slopes, rather than the beach. However, towards the end of the week, conditions start to change, and with this later information, it’s not so clear anymore that your best bet would be to hit the slopes.

Suppose on Sunday you decide to book your tickets for the beach, because you figure the rain and snow have both stopped for a while, and you predict you’ll have sun at the beach and packed snow at best on the slopes. So you go to the beach, but it turns out the rain and snow both return during your vacation.

Then, of course, when you get back to work after your vacation ends, your colleagues call you an idiot for choosing the wrong locale. Had you known it would rain at the beach and snow in the mountains, you obviously would have chosen to go skiing rather than going to the beach. But the whole point is that at the time you had to make your choice, you did not know what the weather would be like. Rather, you used the information you had to make the best decision you could, but it turned out after-the-fact to be the wrong decision.
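The logic of the choice can be sketched as a simple expected-value calculation. The payoffs and probabilities below are purely illustrative assumptions, since the example gives no numbers:

```python
# Enjoyment scores for each destination under each weather outcome (assumed).
beach_payoffs = {"rainy": 2, "partly cloudy": 6, "sunny": 10}
ski_payoffs = {"no snow": 1, "fresh snow": 10, "packed snow": 7}

def expected_payoff(payoffs, probs):
    """Expected enjoyment given a probability for each weather outcome."""
    return sum(probs[state] * payoffs[state] for state in payoffs)

# Ex ante beliefs on the Sunday the tickets must be booked (assumed).
beach_probs = {"rainy": 0.2, "partly cloudy": 0.3, "sunny": 0.5}
ski_probs = {"no snow": 0.5, "fresh snow": 0.1, "packed snow": 0.4}

ev_beach = expected_payoff(beach_payoffs, beach_probs)  # 7.2
ev_ski = expected_payoff(ski_payoffs, ski_probs)        # 4.3
best_choice = "beach" if ev_beach > ev_ski else "ski"

# Ex post, the weather may turn out badly for the beach -- but that does not
# make the ex ante choice wrong; it was the best call given Sunday's beliefs.
```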

Implications for 2nd Example

When critics accuse doctors of ordering needless tests or performing futile procedures, they are generally guilty of Monday morning quarterbacking. Would a doctor perform a test or procedure knowing it won’t help the patient? Almost certainly not. (While there are slimy doctors who will perform tests regardless of their need simply to be reimbursed, these doctors are the exception, not the rule.) Rather, the tests and procedures can only be deemed unnecessary if it is known that they will have no effect on the patients’ health. Yet, this can only become known after the patient has been diagnosed, where diagnosis generally requires performing the tests and procedures. More succinctly, doctors’ decisions must be made using ex ante information; yet they are often judged as having made the wrong decision by critics with ex post information.

Logically, doctors can only be accused of providing unnecessary tests and procedures if, using the Court’s reasoning, it can be shown that the doctor knew, before performing the tests, that the results would be irrelevant, that is, that the doctor already knew the patient’s diagnosis but ordered the tests anyway. Under these circumstances, it would only be the slimy doctors who are charged with wasting money. Otherwise, ordering that fewer tests and procedures be performed is simply using the situation as an excuse for rationing medical care.


Faulty Sampling

The government believes that too much money is being spent on healthcare in the US, and it is trying to figure out how to reduce the amount of money spent. If this is to be accomplished without reducing the quality of care, then wasteful spending must be reduced. The only way to know where the waste is occurring is to measure all the waste in the system. But obviously, this is not feasible.

Standard practice for determining some quantity in the population without measuring every occurrence entails:

(1) Selecting a sample, that is, taking a small, but sufficient, sample from the population that’s representative of the population. In the case of medical system waste, the Dartmouth study cited in the article chose UCLA hospital as the sample used to represent all medical care in the US.

(2) Measuring the quantity at issue in the sample, that is, measuring every occurrence of the quantity itself or of some proxy for the quantity in the sample. The Dartmouth study cited in the article used a proxy for medical waste. The proxy was medical care provided to people who died despite receiving care.

(3) Generalizing from the sample to the population. The Dartmouth study generalized from the sample, UCLA hospital, to the population, the US healthcare system, by concluding that too much care was being provided for people who ended up dying shortly after the care was given.

There are two common mistakes made when attempting to gather information about a population by using a sample: choosing a sample that is not representative of the population, and using a proxy that is not actually representative of the true quantity at issue. In both cases, the conclusions drawn from the study about the population will generally be inaccurate at best, and completely invalid at worst.

Sample Selection

The validity of using a sample in lieu of the entire population to draw conclusions about the population rests on the assumption that the sample is just like the population regarding the quantity of interest, only smaller in size. If this turns out not to be true, then you cannot be sure that what you find to be true of the sample is also true of the population.

There are two types of samples that are commonly used that often turn out not to be representative of the population: convenience samples and self-selected samples. Generalizations made about populations based on these two types of samples are often faulty.

Convenience samples are groups to which analysts have easy access. They might be entities with which analysts have prior relationships, entities that happen to be in close proximity to analysts at the right time, or entities that are particularly visible. Convenience samples are used because they are easy to access, not necessarily because they constitute a cross-section of the population of interest. Any time convenience samples are used to draw conclusions about some population, you should be wary of the validity of the conclusions.

Self-selected samples are composed of entities that volunteer to be part of a study. Volunteers often come forward because they have their own agenda to promote, rather than simply to provide information for the study at issue. People will volunteer, for example, if they have particularly positive or particularly negative feelings about or experiences with the issue at hand and they want others to know about them. Again, you should be wary of conclusions drawn about a population based on samples composed of self-selected entities.

In the Dartmouth study, UCLA was chosen as a sample hospital to be used as the basis for measuring waste precisely because the hospital has a reputation for being aggressive in providing care for patients. At the same time, the Mayo Clinic was chosen as a point of comparison for UCLA precisely because it is known for being conscientious about providing cost-effective care. As such, any comparison should have been expected to yield large differences. Neither hospital is representative of the average hospital in the US, so any conclusions drawn from a comparison of what goes on at the two cannot be used as a valid estimate of measures at the national level.
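A toy simulation makes the point concrete. All of the numbers below are assumptions, not real hospital data; the sketch only shows how an estimate built from deliberately chosen extremes diverges from one built from a random draw:

```python
import random

random.seed(0)

# Hypothetical per-patient end-of-life spending at 1,000 hospitals,
# clustered around $35,000 (assumed distribution).
population = [random.gauss(35_000, 8_000) for _ in range(1_000)]
true_mean = sum(population) / len(population)

# Convenience sample: the 50 highest-spending, most visible centers,
# chosen precisely for their extremity.
top_spenders = sorted(population, reverse=True)[:50]
biased_estimate = sum(top_spenders) / len(top_spenders)

# Representative sample: a simple random draw of 50 hospitals.
random_sample = random.sample(population, 50)
random_estimate = sum(random_sample) / len(random_sample)

# The random draw lands near the true mean; the convenience sample does not.
```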

Proxy Measurements

How do you measure a nation’s well-being? Gross national product (GNP), a measure of the total value of goods and services produced by a nation’s residents, has long been used as a proxy for the more abstract measure of national well-being. Recently, however, it has been noted that GNP is an inadequate measure of a nation’s well-being, since it does not take into account the amount of environmental resources used to produce those goods and services (see my earlier post Problems with Aggregate Measures). As such, other measures that account for both economic and environmental conditions will probably appear in the near future in lieu of GNP as proxies for national well-being.

It is often difficult for analysts to measure the quantity they are interested in measuring (e.g., national well-being), either because the quantity is too abstract or because it is too difficult to measure directly. In such cases, analysts will choose some proxy (e.g., GNP) to measure in lieu of the quantity of interest. If analysts do use proxies, then they must be sure that the proxies are valid substitutes for the quantities of interest; that is, they must be sure that by measuring the proxies, they will get good estimates of the quantities of interest. Otherwise, any conclusions drawn about the quantities of interest that are based on the proxies will be problematic, as seen in the case of national well-being.

In lieu of measuring all the waste in the hospitals, the analysts for the Dartmouth study measured the care provided to patients who ended up dying soon after they received the care. In other words, in the Dartmouth study, medical care provided to patients who died soon after was used as a proxy for healthcare system waste. Unfortunately, this proxy suffers from the information set problems described above in the beach vs. skiing example: for many (most?) of the patients who ended up dying soon after expensive medical tests or procedures were performed, the doctors did not know the patients would die until after-the-fact. In other words, the only way doctors could refuse to provide care to patients who will die anyway is if, at the time they must decide whether or not to provide the care (before-the-fact, or ex ante), they have after-the-fact (ex post) information as to whether or not the patient will die anyway, which, obviously, they don’t have.
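A small sketch of this selection problem, with purely assumed numbers: if higher spending buys higher survival, then judging hospitals only by what they spent on patients who died hides precisely the benefit the spending produced.

```python
import random

random.seed(1)

def count_deaths(survival_prob, n=100_000):
    """Count deaths among n hypothetical patients, each of whom
    survives with probability survival_prob."""
    return sum(1 for _ in range(n) if random.random() >= survival_prob)

# Aggressive hospital: higher spending per decedent, higher survival (assumed).
spend_a, deaths_a = 50_000, count_deaths(0.80)
# Conservative hospital: lower spending per decedent, lower survival (assumed).
spend_b, deaths_b = 25_000, count_deaths(0.60)

# Judged only by spending on the dead, hospital A looks twice as wasteful;
# judged by outcomes, it loses roughly half as many patients.
```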

When analyses are performed in an attempt to measure some quantity, you can get a better idea of whether or not the analyst is actually measuring what he purports to measure by asking yourself the following three questions:

(1) What is it that the analysis is trying to measure (in theory)?

(2) What is the analyst actually measuring?

(3) Will estimates of what the analyst is actually measuring provide good estimates of the quantity that the analyst is trying to measure?
