I recently attended a talk by Gernot Wagner (with some commentary by Richard Zeckhauser) on the implications of the IPCC widening its "likely" range for 2100 global temperature rise from 2.0–4.5 degrees to 1.5–4.5 degrees. The talk is entitled "Expecting a Black Swan but Getting a Dragon: Deep Uncertainty and Climate Change".
The major point of their argument was that a shift in the kurtosis of the distribution (produced by dropping the "most likely" estimate of climate sensitivity) increased the net cost, in a metric known as willingness to pay (WTP), by increasing the uncertainty of future predictions. There is nothing wrong with this as an exercise: two different pdfs passed through the same set of filters and evaluated will produce different results. In this case, pdfs of increasing kurtosis produce a larger net WTP. Nothing to see here. The problem is that this is taken incredibly seriously by the public, as the results from this and similar studies have been used to price carbon. This increase in IPCC-related uncertainty has the potential effect of doubling the cost of carbon from $40 to $80 per ton, though this depends on your pricing metric. These exercises are interpreted (and the authors are entirely complicit, spending a majority of the presentation talking about climate science) as real, authoritative pricing schemes, which they cannot be. Suppose I have a pdf of future warming which peaks at 3 degrees, with a variance of 1 degree and a kurtosis of 3: a Gaussian. The IPCC's release would indicate that the mean of the distribution should shift by half a degree, and that the kurtosis and the variance should both increase. So why is this not considered? How can we price carbon using the kurtosis shift but not include the mean shift too? It's a damning question, but not one that is considered. The authors, when confronted, go to great lengths to discuss the limitations of the model. Yet when left to speak freely, and in publications, they discuss the results as saying something real about the world. The rather bald-faced contradiction is difficult to swallow.
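The mechanics of the kurtosis argument are easy to reproduce. Here is a minimal Monte Carlo sketch (all numbers are hypothetical, and the quartic damage function is my stand-in for the authors' actual WTP machinery, not their model): two warming pdfs with identical mean and variance but different kurtosis are pushed through a convex damage function, and the fat-tailed one yields the larger expected damage.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
mu = 3.0  # hypothetical central warming estimate, degrees C

# Thin-tailed pdf: a Gaussian with variance 1 (kurtosis 3).
thin = mu + rng.standard_normal(n)

# Fat-tailed pdf with the SAME mean and variance: a scale mixture of
# Gaussians (90% with sigma^2 = 0.5, 10% with sigma^2 = 5.5), kurtosis ~9.75.
sigma = np.where(rng.random(n) < 0.9, np.sqrt(0.5), np.sqrt(5.5))
fat = mu + sigma * rng.standard_normal(n)

# A convex damage function (quartic, so it is sensitive to tail weight).
damage = lambda t: np.clip(t, 0, None) ** 4

print(f"expected damage, thin tails: {damage(thin).mean():.1f}")
print(f"expected damage, fat tails:  {damage(fat).mean():.1f}")
# The fat-tailed pdf yields a larger expected damage, hence a larger WTP,
# even though the mean and variance of warming are unchanged.
```

Swapping the quartic for anything convex enough in the tail gives the same qualitative result; that is the whole exercise.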
A link to the discussed paper is here.

Here is a link to a recent paper discussing the negative correlation between African literacy rates in colonial and post-colonial times and "slave export intensity" during the pre-colonial era. Cherokee Gothic rightly points out that this is an example of economic path dependency, a concept familiar to mathematicians: for certain quantities, it isn't where you end up, but how you got there. Example: if one pegs one's net worth to the value of the stock AAPL, and through some sorcery predicts every upturn and downturn in the stock price, liquidating at peaks and converting all cash to stock at local minima, in a year that person would have considerably more money than the person who held the stock throughout, and much more than the person who made the opposite choices. The path taken to the end is what causes the discrepancy in wealth after a year. Path dependency is a familiar trope in political theater and is, depending on the political and philosophical bent of the person, the reason for gender and race gaps in education, poverty, and incarceration. It can be summarized (thanks to Scott E. Page) in the "old Bostonian jump-roping rhyme": I eat my peas with honey. I've done it all my life. It makes 'em taste quite funny, but it keeps them on the knife. And so this recent paper attempts to gauge the degree to which slave export has biased literacy rates in African countries over the succeeding centuries. The answer? According to the abstract: "a negative and significant relationship between slave export intensity before the colonial era and literacy rates during the colonial era." Here's the data used to support that claim, buried in a plot in the supplementary material. This image plots literacy rate against a normalized quantity representing pre-colonial slave exports as a percentage of the extant population. Where's the trend? To me (and this is just me), this appears to be a classic case of oversimplification.
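As an aside, the AAPL timing example above can be sketched with a few made-up closing prices (none of these numbers are real AAPL data): the stock ends the year exactly where it started, yet the perfectly timed trader more than doubles the holder's money. Same endpoint, different path, different wealth.

```python
# Hypothetical closing prices: the stock ends where it started.
prices = [100, 120, 90, 130, 80, 100]

# Buy-and-hold: wealth tracks the price, so it ends flat at $100.
holder = 100.0 / prices[0] * prices[-1]

# Perfect timer: all-in before every rise, all-cash before every fall.
cash = 100.0
for today, tomorrow in zip(prices, prices[1:]):
    if tomorrow > today:  # ride only the upswings
        cash *= tomorrow / today

print(holder, cash)  # the timer ends with more than twice the holder's wealth
```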
If the author wrote this paper with the exact opposite conclusion, I would be equally swayed. What causes the four outlier groups in slave export to be there? Why is there seemingly no trend? Why did the author connect two clusters with a line and call it a trend? Bad science, even in path dependency.

A new Nature highlight of a GRL article discusses attempts to reconstruct temperature records from the battery temperatures of smartphones. Since smartphones natively keep track of their battery temperature for safety purposes, they can also help keep track of the external temperature. This is used to obtain the temperature in the "urban canopy", a.k.a. the low atmospheric boundary layer in which humans live, which is often much warmer than the atmosphere just a few tens of meters up. Whether this will actually work is a secondary question, because at best battery temperatures are a poor proxy for atmospheric temperature. They are altered not only by the ambient temperature but by the physical location of the phone (inside, outside, in a pocket, in direct sunlight), as well as by its memory/CPU usage. Overeem (not this guy) et al. used a "straightforward heat transfer model" to gauge air temperature, of the form T_bat = m*T_air + (1 − m)*T_0 + ε, with T_0 a constant equilibrium temperature, T_bat the observed battery temperature, m a transfer coefficient, and ε white noise. In other words, the temperature in the urban canopy relates linearly to the temperature of the battery. Presumably there is some "resting" battery temperature, and so, integrated over the entire domain, batteries are anomalously warmed above it by heat conducted from the boundary layer. Seems fair to me. While I sincerely doubt that the error is white noise (there is absolutely a bias associated with the fact that cell phones are typically inside of something, be it a climate-controlled office or a pocket), it's an interesting use of modern proxies nonetheless.
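The linear model above is easy to sanity-check with synthetic data. Below is a minimal sketch (the parameter values T0 = 28 and m = 0.6, and the temperature series itself, are made up for illustration and are not from Overeem et al.): generate battery temperatures from known air temperatures under the linear relation, then invert it to recover the air temperatures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth": daily air temperatures in the urban canopy (deg C).
t_air = 15 + 8 * np.sin(np.linspace(0, 2 * np.pi, 100))

# Hypothetical parameters: equilibrium phone temperature T0 and
# transfer coefficient m (both invented for this sketch).
T0, m = 28.0, 0.6

# Battery temperature: a weighted mean of air and equilibrium
# temperatures, plus white noise (the epsilon term).
t_bat = m * t_air + (1 - m) * T0 + rng.normal(0, 0.5, t_air.size)

# Invert the linear model to estimate air temperature from batteries.
t_est = (t_bat - (1 - m) * T0) / m

print(np.abs(t_est - t_air).mean())  # mean error is noise-sized
```

Note that the inversion amplifies the noise by 1/m, and any systematic bias in ε (phones indoors, phones in pockets) passes straight through to the estimate, which is exactly the worry raised above.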
Despite how sincerely marketers would like you to believe otherwise, there hasn't been much innovation in transportation over the last 50 years. Excepting Ralph Nader's safety reforms, automobiles are more or less the same as they were in the 70's. The average age of the airplanes in the sky is 14 years. Amtrak runs on the same rails it has since the 80's. It is enough that Tyler Cowen claims: "When it comes to transportation at least, There is a Great Stagnation." Yet when Iron Man himself comes along and suggests there's a new way of doing things (even though it is a very old way indeed), people get excited. There's a cycle of popular opinion which follows the advent of new, fanciful, catch-all ideas. We see it with wonder-drugs, wonder-foods, and wonder-engineering, each cultivated by its own niche of bloggers, supporters, and idealists, and it has a name: the "Gartner hype cycle". Gartner Research posited the existence of the hype cycle all the way back in 1995, to explain the wild ways in which expectations grow, diminish, and plateau for new technology, and to suggest how far emerging technology is from maturity. It applies just as well to new engineering developments.
So while dozens of people have written about the impossibility of the hyperloop project, its "Astronomical Pricing", and exhibited general skepticism, it's worth noting that Elon Musk is to date the only person willing to put enough money into R&D to attempt such a project. That's interesting enough, even if he fails. Over the next few years, however, remember the hype cycle. By tempering expectations, we'll temper disillusionment, and appreciate how fun (if arrogant) thinking this big actually is.

A pretty interesting article from Massey University a few years back attempts to understand how much of the placebo effect is related to the conditioning effect (when I take a drug, I am conditioned to believe I will feel better because of my surroundings, so I feel better) versus the expectancy effect (when I take a drug, I expect that it will work, so I feel better).
A number of interesting conclusions come out of this, notably that our tolerance to drugs is associated with our surroundings (as an example, consider "learned tolerance", whereby drugs like alcohol affect people differently depending on their surroundings, leading to overdoses among alcoholics who find themselves drinking in unfamiliar places). But my favorite concerns the fact that, like humans, rats exhibit the placebo effect. This can serve to root out the misconception that the placebo effect is some mystical, made-up response at a high intellectual level. Rather, it is an innate physical mechanism which exists across the animal kingdom. This can help to explain its prevalence (and its relationship with confirmation bias) among those susceptible to believing pseudoscience, and the reason we have such a hard time getting rid of it.

Over the last decade or so the phenomenon of the penny-stock "pump and dump" scam has risen in prevalence, mostly thanks to a variety of email spammers (I get something like 10-20 emails a day on the subject). The general idea behind the process is simple: buy up shares of a cheap, thinly traded stock; hype it until demand inflates the price; then sell at the peak and let the price collapse.
This tactic is usually deployed in the lightly regulated "pink sheet" stock exchanges, and often triggered on small fluctuations in the spammed-about stocks. It is not to be confused with HFT (high-frequency trading), in which much smaller gains are made by exploiting latencies in network connections. As is so often the case, these scams provoke Golden Age theories (see comments) decrying technology and its corruptive, harmful role in society. Of course this is baloney, because the "pump and dump" scheme is as old as time. Here's an example, from an article in the Vietnam Vet about the "leech fever" taking over some villages in Vietnam. A group of wealthy investors comes in and buys up leeches, for reasons unknown. This causes a tremendous flux of people out of their jobs and into the rivers, trying to catch the abundant leeches to sell. Local "businessmen", called "collectors", aggregate leeches by paying larger and larger prices for them, banking on the demand caused by the foreign investors. As this false demand grows, so too do the prices of leeches. The investors then sell their initial supply, make a tidy profit, and disappear, leaving a town covered in dried leeches. So this is not a high-tech development, just a new spin on a creative old scam. Moral of the story: if you are buying something which you know to have no value beforehand, don't be mad when you get left holding a bag of leeches.

Paraphrased from George Philander: the hierarchy of science (Mathematics atop Physics atop Chemistry, all the way down to the social sciences) is also an inverse hierarchy of simplicity. The problems which can be simply described, and about which there can be no disagreement, lie at the top; the most difficult ones, the ones which permit a kind of field-wide dissonance, lie at the bottom. The most complicated and most relevant problems are the least "pure". This hierarchy of purity was once cartooned by Randall Munroe of XKCD:
Here are two interesting images, thanks to Eugenia Kalnay: the first is a plot of output (metric tons) of different crops in North Korea over time, the second of fertilizer use, also in metric tons. Notice that there is a sharp drop-off of over 70%. This coincides with the 1991 end of perestroika and the final collapse of the Soviet Union. Around this time, the rapid denationalization of fuel industries in Russia and general economic turbulence eliminated oil aid from Russia to North Korea. The resulting lack of fossil fuels limited the ability to produce and apply nitrogen fertilizer in North Korea, especially compared with other Eastern countries. Say what you want about North Korea, but its incredible isolationism has made it well suited to gauging the impact of macroeconomic policy, and here it serves to prove the point that the availability of fossil fuels, not cropland or technology, is the leading-order determinant of agricultural capability.

If you've been under a rock for the last week, former New England Patriots tight end Aaron Hernandez has been indicted for first-degree murder. Anyone owning an Aaron Hernandez jersey is allowed to exchange it for any other actual Patriots player's jersey for free, the reasoning being that having NFL fans wearing the jersey of a suspected murderer is not good publicity for the league or the team. Lemon laws were initiated in the United States to counteract information asymmetry between the buyer of a product and the seller. Buyers are at an immediate information disadvantage, since the seller has a much better understanding of the flaws of the product he is selling. When the product turns out to be worth less than the price the buyer paid, based on prior knowledge available to the seller, the buyer is defrauded and, depending on the product, has some legal ability to recoup losses.
Now the non-ironic value of a Hernandez jersey is zero, and the average Patriots fan had absolutely no idea how bad a guy he was, though it appears to have been common knowledge to insiders. His jerseys are lemons.
The Patriots are enacting an even stronger warranty (maybe a double-lemon law?), since even they didn't know Hernandez was as bad a guy as he turned out to be. So Hernandez is partially to blame for the lemoning of his own jerseys. This is sort of like a car company knowing its cars' third-party-manufactured brakes wouldn't last the life of the car, only to find out that, unbeknownst to them, the brakes didn't work at all! Who should really be responsible for covering the loss of value of Hernandez jerseys (assuming, of course, that the people who bought them actually care)? Perhaps both the Patriots and Hernandez are equally at fault. I'm not sure how exactly to answer that question. At least here it is refreshing to see information asymmetry work in both directions.

Tyler Cowen has a blog series called "Markets in Everything" where he links to examples of bizarre areas of specialization for which there is a market... for example the market for coats made out of chest hair, topless paintings of Bea Arthur, etc., etc. I'd like to start my own: Bad Science in Everything. Just like market economies exist for even the strangest of goods, bad scientific research permeates all corners of human existence. Today's example: a study marked by Outside Online as STUDY: LONGER RUNS ARE EASIER. 25 runners attempted a 200-mile race. They were tested three times along the race's path for neuromuscular fatigue and other bodily indicators. These results were compared to those of runners tested on 100-mile races and shorter, and lo and behold, the 200-mile runners performed better. The culprit: sleep deprivation, perhaps. This provides for great copy and has done so all over the web, from CBS News to Deadspin. It is also some bad, bad science.
Of the 25 runners, just 9 completed the race. They were only tested 3 times. Let me repeat: 25 runners, 9 finishers. Over 60% of the runners who thought they could finish a 200-mile race could not. This subset was then compared against runners running 100-mile races, 50-mile races, and marathons. All 9 of these runners' bodies were capable of running 200 miles, which makes them utterly unique, both among human beings and among ultra-marathoners. To compare them to runners covering a considerably smaller distance implies that anyone who is capable of running a marathon, or even a 100-mile race, is capable of running a 200-mile race, which clearly is not the case, even for those who train for it. So here we have 9 highly specialized data points which are probably not even useful for comparison to other long-distance runners. Yet they are also being used to make an inference about the other 7 billion human beings. I think I could probably run a marathon after a few months of training, but I would be utterly wrecked. To say that if I ran 200 miles (and I couldn't) I'd be feeling better? That's some bad science. You might argue that this isn't the implication of the study, that the point is that ultra-hyper-marathoners have special fatigue properties of their neuromuscular systems. This is probably true. Yet it is certainly what is being inferred in the press. Bad science in everything.
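The selection problem here is ordinary survivorship bias, and it is easy to simulate. A minimal sketch with made-up "fatigue resistance" scores (none of these numbers come from the study): draw two statistically identical pools of runners, let only the hardiest 9 of 25 starters "finish" the 200-mile race, and the finishers will of course outperform the unselected comparison group.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "fatigue resistance" scores, identically distributed
# in both pools (mean 50, standard deviation 10).
pool_200 = rng.normal(50, 10, 25)   # the 25 starters of a 200-mile race
pool_100 = rng.normal(50, 10, 500)  # runners of shorter races

# Only the hardiest starters finish 200 miles: keep the top 9 of 25.
finishers = np.sort(pool_200)[-9:]

print(finishers.mean())  # the selected finishers score well above...
print(pool_100.mean())   # ...the unselected comparison group
```

The apparent "longer runs are easier" effect appears even though both pools were drawn from the exact same distribution; the comparison measures the selection, not the running.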
Author: Oceanographer, Mathemagician, and Interested Party