Thursday, February 16, 2017

Practicing information presentation design

All skills need practice, and designing ways to present information is a skill.

Since I can't show work-related materials for legal reasons, and I don't make as many presentations as I used to back when it was, well, basically my job, I keep my information-design skills in top shape by applying them to more entertaining matters.

(Click the images to magnify.)


Augmenting quotes: South Australia power woes.

In this case the quote is a tweet, but it works with longer quotes too.


I admit that there are elements of chartjunk in my design: the background of wind turbines and the Australian flag in an Australia outline. But those serve as additional cues to what really happened (and where): reliance on non-dispatchable capacity has made the South Australia grid a joke among electrical engineers.

South Australia has been featured on this blog before, for a worse case of the same problem.


A larger version of augmenting a quote: Popular Science shills for WaterSeer


In this case, it's an augmentation to critique, not to support. Thunderf00t has a science-accurate, still very snarky video about the WaterSeer:


(Added on Feb 19, 2017.)


Annotated photos: Oroville Dam repairs.

Engineering, that never-ending fight between Nature and Man! In the case of the Oroville Dam, Nature's side got a lot of help from Man, or maybe one should say from politics, incompetence, bad design, and bad luck.


There were two points I wanted to make: the scale of the problem (which is nicely contextualized by the size of those dump trucks) and the misclassification of soft soil as a spillway. This one photo from the California Department of Water Resources, with minimal annotation, makes the case quite clearly. All that was needed was to make the points more salient for less attentive audiences.


Property maps: Science-adjacent television shows.

Recently I found myself binge-watching Numb3rs (from the DVDs, since Netflix has dropped them); it's one of the few fiction shows that actually included teaching vignettes. Charles Eppes would explain real mathematical concepts with simple illustrations and computer graphics.

Pondering that, I realized that The Big Bang Theory also does a bit of that, though much less; and there's one major difference: Charles Eppes is cool and well accepted by the non-mathematicians on the show (and dating Navi Rawat, a/k/a Amita), while the scientists in TBBT are portrayed as total nerds.

The other TV show that portrayed a science-y person as cool was MacGyver (the original; the new one might as well be called McBourne), though in that show the science was terrible. Still, MacGyver was cool and, more importantly, his approach to solving problems was "use your brain, not your fists."

Having been exposed to MacGyver early on, I started carrying around a Swiss Army knife, duct tape, and a lighter (I don't smoke, but MacGyver carried around strike-anywhere matches which were difficult to find in Portugal). I currently own eleven SAKs, from a small keychain model to one of the largest ones that's still practical to use. I don't own the ludicrously fat one.

So, there are two dimensions, goodness of science and coolness of scientists, which my MBA training says necessitates a two-by-two matrix:


But I'm a quant too, so I can do numbers and graphs. Using multi-dimensional scaling on similarity ratings (my own, so there's a clear researcher effect) on a number of television shows, we find more granularity:
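The MDS step itself fits in a few lines. One standard way to turn ratings into a map is classical (Torgerson) MDS; here's a minimal numpy sketch, with placeholder dissimilarities rather than the ratings behind my map:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions
    from an n-by-n matrix of pairwise dissimilarities D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                   # eigendecomposition
    idx = np.argsort(w)[::-1][:k]              # top-k eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Illustrative dissimilarities among four shows (made-up numbers):
shows = ["Numb3rs", "TBBT", "MacGyver", "House MD"]
D = np.array([[0.0, 0.4, 0.7, 0.5],
              [0.4, 0.0, 0.8, 0.6],
              [0.7, 0.8, 0.0, 0.9],
              [0.5, 0.6, 0.9, 0.0]])
coords = classical_mds(D)
for name, (x, y) in zip(shows, coords):
    print(f"{name:10s} {x:+.3f} {y:+.3f}")
```

The map's axes are then interpreted after the fact (here, goodness of science and coolness of scientists); MDS only promises that map distances approximate the rated dissimilarities.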


House MD and Bones have better science than MacGyver and the vast majority of TV science fiction, but they don't discuss the science much. There are times when Brennan introduces some real science into the discussion or House points out something accurate, so they aren't "teching the tech"; but unlike TBBT or Numb3rs, there's never any elaboration.

The scientists are portrayed as less nerdy than those in TBBT (and the general portrayal of people with technical skills in other shows); both Brennan and House have social foibles, but they are high-functioning and comfortable with themselves. They don't make science "cool" per se, but they make scientists central to society (curing people, solving crimes), rather than ivory tower researchers with no connection to the real world.

Numb3rs had a lot of support in the math community; a few links:

Side-by-side comparisons: EEVblog versus Thunderf00t.

Most data only becomes information when adequate context and knowledge are applied. In many cases, a contrast table (a side-by-side comparison) along appropriate variables can make the relevant points more salient. Behold:


This table was inspired by the coincidence that both EEVblog and Thunderf00t made debunking videos recently, one a good video with technical demonstrations and a clear analysis of what was shown, the other a snark-filled collection of fallacies, namely guilt-by-association (with Solar Roadways) and distraction (the video keeps talking about PET as if that were the plastic to be used).



(Yes, I know Thunderf00t's real name is known, but since he was doxxed, I don't use his real name.)


Note: someone asked what's suspicious about Thunderf00t's recent increase in the rate of video releases and the change in topic mix. When a male of the species increases money-making activities and starts avoiding topics like feminism, that's a strong indication that his mind has gone under the control of a female woman of the opposite sex, or what Millennials call "hooking up." Should the hypothesis be correct, we should see indications of more direct female oppression soon, like button-down shirts and a haircut.

(The obvious suspiciousness of an alleged Australian who's that pale is unquestioned.)

Sunday, February 12, 2017

Word Thinkers and the Igon Value Problem

Nassim Nicholas Taleb did it again: "word thinkers," now a synonym for his previous coinage IYI (Intellectuals Yet Idiots).
I often say that a mathematician thinks in numbers, a lawyer in laws, and an idiot thinks in words. These words don’t amount to anything. 
A little unfair, though I've often cringed at the use of technical words by people who don't seem to know the meaning of those words. This sometimes leads to never-ending words-only arguments about things that can be determined in minutes with basic arithmetic or with a spreadsheet.


Not to rehash the Heisenberg traffic stop example, here's one from a recent discussion of the putative California secession from the US (and already mentioned in this blog): people discussed California's need for electricity, with the pro-Calexit people assuming that appropriate capacity could be added in a jiffy, while the con-Calexit people assumed the state would instantly be blacked out.

No one thought of actually looking up the numbers and checking out the needs. Using 2015 numbers, California would need to add about 15GW of new dispatchable generation for energy independence, assuming no demand growth. (Computations in this post.) So, that's a lot, but not insurmountable in, say, a decade with no regulatory interference. Maybe even less time, with newer technologies (yes, all nuclear; call it a French connection).
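The arithmetic really is just add and divide. Here's a sketch with round numbers assumed for illustration (the linked post has the actual 2015 figures):

```python
# Back-of-the-envelope: dispatchable capacity needed to replace imports.
# The import figure and capacity factor are round numbers assumed here;
# see the linked post for the actual 2015 data.
imports_twh = 98           # assumed net electricity imports, TWh/year
hours_per_year = 8760
avg_gw = imports_twh * 1000 / hours_per_year    # average power draw, GW
capacity_factor = 0.75     # assumed for new dispatchable plants
needed_gw = avg_gw / capacity_factor            # nameplate capacity, GW
print(f"average draw: {avg_gw:.1f} GW, nameplate needed: {needed_gw:.1f} GW")
```

With those assumptions the answer lands around 15GW of nameplate capacity, which is the point: anyone with a spreadsheet could have settled the argument in minutes.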

There was no advanced math in that calculation: literally add and divide. And the data was available online. But the "word thinkers" didn't think about their words as having meaning.

And that's it: the problem is not so much that they think in words, but rather that they don't associate any meaning to the words. They are just words, and all that matters is their aesthetic and signaling value.

Few things exemplify the problem of these words-without-meaning as well as The Igon Value Problem.

In a review of Malcolm Gladwell's essay collection "What the Dog Saw and Other Adventures" for The New York Times, Steven Pinker coined that phrase, picking on a problem of Gladwell's that is common to the words-without-meaning thinkers:
An eclectic essayist is necessarily a dilettante, which is not in itself a bad thing. But Gladwell frequently holds forth about statistics and psychology, and his lack of technical grounding in these subjects can be jarring. He provides misleading definitions of “homology,” “sagittal plane” and “power law” and quotes an expert speaking about an “igon value” (that’s eigenvalue, a basic concept in linear algebra). In the spirit of Gladwell, who likes to give portentous names to his aperçus, I will call this the Igon Value Problem: when a writer’s education on a topic consists in interviewing an expert, he is apt to offer generalizations that are banal, obtuse or flat wrong. [Emphasis added]
Educational interlude:
Eigenvalues of a square $[n\times n]$ matrix $M$ are the constants $\lambda_i$ associated with vectors $x_i$ such that $M \, x_i = \lambda_i \, x_i$. In other words, these vectors, called eigenvectors, are along the directions in $n$-dimensional space that are unchanged when operated upon by $M$; the $\lambda_i$ are proportionality constants that show how the vectors stretch in that direction. Because of this $n$-dimensional geometric interpretation, the $x_i$ are the matrix's "own vectors" (in German, eigenvectors) and by association the $\lambda_i$ are the "own values" (in German, you guessed it, eigenvalues). 
Eigenvectors and eigenvalues reveal the deep structure of the information content of whatever the matrix represents. For example: if $M$ is a matrix of covariances among statistical variables, the eigenvectors represent the underlying principal components of the variables; if $M$ is an incidence matrix representing network connections, the eigenvector with the highest eigenvalue ranks the centrality of the nodes in the network.
This educational interlude is a demonstration of the use of words (note that there's no actual derivation or computation in it) with deep meaning, in this case mathematical.
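For readers who want to attach computation to those words, here's a two-minute check of the defining property with numpy (the matrix is made up for illustration):

```python
import numpy as np

# A small covariance-style matrix, made up for illustration.
M = np.array([[4.0, 2.0, 0.6],
              [2.0, 3.0, 0.4],
              [0.6, 0.4, 1.0]])

lam, X = np.linalg.eigh(M)   # eigenvalues (ascending) and eigenvectors

# The defining property: M x_i = lambda_i x_i for each pair.
for l, x in zip(lam, X.T):
    assert np.allclose(M @ x, l * x)

# With a covariance matrix, the eigenvector of the largest eigenvalue
# points along the first principal component of the variables.
print("eigenvalues:", np.round(lam, 3))
```

Note that `eigh` is the right call for symmetric matrices (covariance and incidence matrices both qualify); for general matrices one would use `numpy.linalg.eig`.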

Being a purveyor of "generalizations that are banal, obtuse or flat wrong" hasn't harmed Gladwell; in fact, his success has spawned a cottage industry of what Taleb is calling word-thinkers, which apparently are now facing an impending rebellion.

Taleb talks about 'skin in the game,' which is a way of saying: have an outside validator. Not popularity, not social signaling; money, physical results, a verifiable mathematical proof. All of these come with the one thing word-thinkers avoid:

A clear succeed/fail criterion.

- - - - - - - - - -

Added 2/16/2017: An example of word-thinking over quantitative matters.

From a discussion about Twitter, motivated by their filtering policies:
Person A: "I wonder how long Twitter can burn money, billions/yr.  Who is funding this nonsense?"
My response: "Actually, from latest available financials, TWTR had a $\$ 77$ million positive cash flow last year. Even if its revenue were to dry up, the operational cash outflow is only $\$ 220$ million/year; with a $\$ 3.8$ billion cash-in-hand reserve, it can last around 17 years at zero inflow."
Numbers are easy to obtain and the only necessary computation is a division. But Person A didn't bother to (a) look up the TWTR financials, (b) search for the appropriate entries, and (c) do a simple computation.
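For the record, the computation in question, using the figures as quoted above:

```python
# Figures as quoted above, in millions of dollars.
cash_on_hand = 3800          # $3.8 billion cash-in-hand reserve
operating_outflow = 220      # $220 million/year operational cash outflow
runway_years = cash_on_hand / operating_outflow
print(f"runway at zero inflow: {runway_years:.1f} years")  # ~17 years
```

One division. That's the entire quant step Person A skipped.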

That's the problem with word thinking about quantitative matters: those who take the extra quant step will always have the advantage. As far as truth and logic are concerned, of course.

Tuesday, February 7, 2017

Schrödinger's Cat Litter


"Quantum mechanics means that affirmations change the reality of the universe."
Really, there are people who believe in that nonsense. I don't know whether affirmations work as a psychological tool (ex: to deal with depression or addiction), though I've been told that they might have a placebo effect. But I do know that quantum mechanics has nothing to do with this New Age nonsense.


The most misunderstood example: Schrödinger's cat

A common thread of the nonsense uses Schrödinger's cat example and goes something like this:
"There's a cat in a box and it might be alive or dead due to a machine that depends on a radioactive decay. Because of quantum mechanics, the cat is really alive and dead at the same time; it's the observer looking at the cat that makes the cat become dead or alive. The observer creates the reality."
No, really, this is a pretty good summary of how the argument goes in most discussions. It's also complete nonsense. The real Schrödinger's cat example is quite the opposite (note the highlighted parts):


(Source: translation of Schrödinger's "Die gegenwärtige Situation in der Quantenmechanik," or "The current situation in quantum mechanics.")

As the excerpt shows, Schrödinger himself described applying quantum uncertainty to macroscopic objects as "ridiculous." In fact, in the original paper, Schrödinger calls it burlesque:


In other words, the New Age crowd takes Schrödinger's example of the misuse of a quantum concept and uses it as the foundation for their nonsense, doing precisely the opposite of the point of that example.

Sometimes "nonsense" isn't strong enough a descriptor, and references to bovine effluvium would be more appropriate. In honor of the hypothetical cat, I'll refer to this as Schrödinger's cat litter.


Say his name: Heisenberg (physics, not crystal meth)

Schrödinger isn't the only victim of these cat litter purveyors: the Heisenberg Uncertainty Principle also gets distorted into nonsense like:
"You can't observe the position and the momentum of an object at the same time. If you're observing momentum, you're in the flow. If you're observing position, you're no longer in the flow."
As I've mentioned before, when over-analyzing a Heisenberg joke, the uncertainty created by Heisenberg's inequality ($\Delta p \, \Delta x \ge \hbar/2$, with $\hbar = h/2\pi$) for macroscopic objects is many orders of magnitude smaller than the instruments available to measure it. TL;DR:
Police officer: "Sir, do you realize you were going 67.58 MPH?"
Werner Heisenberg: "Oh great. Now I'm lost." 
Heisenberg's uncertainty re: his position is of the order of $10^{-38}$ meters, or about 1,000,000,000,000,000,000,000,000,000,000,000,000 times smaller than an inch.
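Where does an estimate like that come from? The Kennard form of the inequality, $\Delta x \ge \hbar/(2\,\Delta p)$, with assumed numbers for the car (the exact exponent shifts a bit with the mass and velocity uncertainty you plug in, the conclusion doesn't):

```python
# How small is the quantum position uncertainty for a car?
# Mass and velocity uncertainty are assumed numbers for illustration.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 1500.0               # kg, assumed mass of the car
dv = 0.01                # m/s, assumed velocity uncertainty
dp = m * dv              # momentum uncertainty, kg*m/s
dx = hbar / (2 * dp)     # Kennard bound: dx >= hbar / (2 * dp)
print(f"minimum position uncertainty: {dx:.1e} m")
```

Whatever reasonable numbers you assume, the result is dozens of orders of magnitude below anything measurable, let alone relevant to human decisions.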
And yet, these New Age cat litter purveyors use the Heisenberg uncertainty principle to talk about human actions and decisions, as if it were applicable to that domain.


What are the "defenders of science" doing while this goes on?

Ignorance, masquerading as erudition, sold to rubes who believe they're enlightened. Hey, I'm sure many of the rubes "love science" (as long as they don't have to learn any).

Meanwhile, "science popularizers" spend their time arguing politics. Because that's what science is now, apparently...


Thursday, February 2, 2017

Primal entertainment

Really, totally primal. 😉

Ron Rivest talking about RSA-129 (a product of two prime numbers that was set as a factoring challenge in 1977) and its factorization in 1994 using the internet:



RSA-129 = 114381625 7578888676 6923577997 6146612010 2182967212 4236256256 1842935706 9352457338 9783059712 3563958705 0589890751 4759929002 6879543541
=
3490 5295108476 5094914784 9619903898 1334177646 3849338784 3990820577
$\times$ 
32769 1329932667 0954996198 8190834461 4131776429 6799294253 9798288533.
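Anyone can check the factorization; multiplying the two primes back together takes one line of Python, thanks to arbitrary-precision integers:

```python
# Verify the 1994 factorization: the two primes multiply back to RSA-129.
rsa129 = int(
    "1143816257578888676692357799761466120102182967212423625625618429"
    "35706935245733897830597123563958705058989075147599290026879543541")
p = int("3490529510847650949147849619903898133417764638493387843990820577")
q = int("32769132993266709549961988190834461413177642967992942539798288533")
assert p * q == rsa129
print("p * q == RSA-129: verified")
```

Checking a factorization is trivial; finding it took a distributed effort over the 1994 internet, which is exactly what makes RSA work.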

Inspired by that video, here are a couple of fun numbers, for numbers geeks:

😎 70,000,000,000,000,000,000,003 is a prime number. It's an interesting prime number, because the number of zeros in the middle (21) is the product of the 7 and the 3, both of which are, of course, prime numbers themselves. This makes the number very easy to memorize and surprise your friends with. If you want to confuse them, just say it like this: "seventy sextillion and three."

😎 99,999,999,999,999,999,999,977 is also a prime number, the largest prime number under a googol ($10^{100}$) that has the form  $p = 10^{n} - n$, with $n = 23$, meaning that if you add 23 to this number you get $10^{23}$ or a 1 followed by 23 zeros. Here's how you say this number: "ninety-nine sextillion, nine hundred ninety-nine quintillion, nine hundred ninety-nine quadrillion, nine hundred ninety-nine trillion, nine hundred ninety-nine billion, nine hundred ninety-nine million, nine hundred ninety-nine thousand, and nine hundred seventy-seven." Hilarious at parties.
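Both primality claims are checkable in milliseconds with a Miller-Rabin test, which is deterministic for numbers this size given a fixed set of witness bases (the 12 bases below suffice for anything under roughly $3.3 \times 10^{24}$):

```python
def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin for n < ~3.3e24 (these bases suffice)."""
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for b in bases:
        if n % b == 0:
            return n == b
    d, s = n - 1, 0
    while d % 2 == 0:        # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)     # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False     # a is a witness: n is composite
    return True

print(is_prime(70_000_000_000_000_000_000_003))
print(is_prime(10**23 - 23))   # 99,999,999,999,999,999,999,977
```

The three-argument `pow` does modular exponentiation natively, so even 23-digit numbers are checked essentially instantly.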

Saturday, January 28, 2017

Learning, MOOCs, and production values

Some observations from binge-watching a Nuclear Engineering 101 course online.

Yes, the first observation is that I am a science geek. Some people binge-watch Kim Cardassian, some people binge-watch Netflix, some people binge-watch sports; I binge-watch college lectures on subjects that excite me.

(This material has no applicability to my work. Learning this material is just a hobby, like hiking, but with expensive books instead of physical activity.)

To be fair, this course isn't a MOOC; these are lectures for a live audience, recorded for students who missed class or want to go over the material again.

The following is the first lecture of the course, and to complicate things, there are several different courses from UC-Stalingrad with the same exact name, which are different years of this course, taught by different people. So kudos for the laziness of not even using a playlist for each course. At least IHTFP does that.


(It starts with a bunch of class administrivia; skip to 7:20.)


Production values in 2013, University of California, Berkeley

To be fair: for this course. There are plenty of other UC-Leningrad courses online with pretty good production values. But they're usually on subjects I already know or have no interest in.

Powerpoint projections of scans of handwritten notes; maybe even acetate transparencies. In 2013, in a STEM department of a major research university. Because teaching is, er…, an annoyance?


The professor points out that there's an error in the slide, that the half-life of $^{232}\mathrm{Th}$ is actually $1.141 \times 10^{10}$ years, something he could have corrected before class by editing the slide but chose to announce in class instead, for reasons unknown.

The real problem with these slides isn't that the handwriting is hard to read, or that color could have clarified things; it's the clear message to the students that preparing the class is a very low-priority activity for the instructor.

A second irritating problem is that the video stream is a recording of the projection system, so when something is happening in the classroom there's no visual record.

For example, there was a class experiment measuring the half-life of excited $^{137}\mathrm{Ba}$, with students measuring radioactivity of a sample of $^{137}\mathrm{Cs}$ and doing the calculations needed to get the half-life (very close to the actual number).

For the duration of the experiment (several minutes), this is all the online audience sees:



Learning = 1% lecture, 9% individual study, 90% practice.

As a former and sometimes educator, I don't believe in the power of lectures without practice, so when the instructor says something like "check at home to make sure that X," I stop the video and check the X.


For example, production of a radioactive species at a production rate $R$ and with radioactive decay with constant $\lambda$ is described by the equation at the top of the highlighted area in the slide above and the instructor presents the solution on the bottom "to be checked at home." So, I did:


Simple calculus, but makes for a better learning experience. (On a side note, using that envelope for calculations is the best value I've received from the United frequent flyer program in years.)
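For those playing along at home: the balance equation is $dN/dt = R - \lambda N$, and the solution with $N(0) = 0$ is $N(t) = (R/\lambda)(1 - e^{-\lambda t})$. Besides the calculus on the envelope, a numerical check with arbitrary constants confirms it:

```python
import math

R, lam = 5.0, 0.3   # arbitrary production rate and decay constant

def N(t):
    # Claimed solution of dN/dt = R - lam*N with N(0) = 0
    return (R / lam) * (1 - math.exp(-lam * t))

# Check dN/dt == R - lam*N at several times, via a central difference.
h = 1e-6
for t in (0.1, 1.0, 5.0, 20.0):
    dNdt = (N(t + h) - N(t - h)) / (2 * h)
    assert abs(dNdt - (R - lam * N(t))) < 1e-6

# As t grows, N(t) approaches the secular equilibrium R/lam.
print(f"N(50) = {N(50):.4f}, R/lam = {R/lam:.4f}")
```

The limit is the useful physical fact: production and decay balance out at $N = R/\lambda$, the secular equilibrium.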

This, doing the work, is the defining difference between being a passive recipient of entertainment and an active participant in an educational experience.


Two tidbits from the early lectures (using materials from the web):

🤓 Binding energy per nucleon explains why heavy atoms can be fissioned and light atoms can be fused but not the opposite (because the move is towards higher binding energy per nucleon):


🤓  The decay chains of Uranium $^{235}\mathrm{U}$ and Thorium $^{232}\mathrm{Th}$:

(Vertical arrows are $\alpha$ decay, diagonals are $\beta$ decay.)


Unfair comparison: The Brachistochrone video


It's an unfair comparison because the level of detail is much smaller and the audience is much larger; but the production values are very high.

Or maybe not so unfair: before his shameful (for MIT) retconning out of the MIT MOOC universe, Walter Lewin had entire courses on the basics of Physics with high production values:


(I had the foresight to download all Lewin's courses well before the shameful retconning. Others have posted them to YouTube.)

Speaking of production values in education (particularly in Participant-Centered Learning), the use of physical props and audience movement brings a physicality that most instruction lacks, creating both a more immersive experience and longer-term retention of the material. From Lewin's lecture above:


Wednesday, January 25, 2017

Not all people who "love science" are like that

Yes, yet another rant against the "I Effing Love Science" crowd.

Midway through a MOOC lecture on nuclear decay I decided to write a post about production values in MOOCs (in my case not really a MOOC, just University lectures made available online). Then, midway through that post, I started to refine my usual "people who love science" vs "people who learn science" taxonomy; this post, preempting the MOOC post, is the result. Apparently my blogging brain is a LIFO queue (a stack).

Nerd, who, me?

I've posted several criticisms of people who "love science" but never learn any (for example here, here, here, and here; there are many more); but there are several people who do love science and therefore learn it. So here's a diagram of several possibilities, including a few descriptors for the "love science but doesn't learn science" crowd:



The interesting parts are the areas designated by the letters A, B, and C. There's a sliver of area where people who really love science don't learn science to capture the fact that some people don't have the time, resources, or access necessary to learn science, even these days. (In the US and EU, I mean; for the rest of the world that sliver would be the majority of the diagram, as many people who would love science have no access to water, electricity, food, let alone libraries and the internet.)

Area A is that of people who love science and learn it but don't make that a big part of their identity. That would have been the vast majority of people with an interest in science in the past; with the rise of social media, some of us decided to share our excitement with science and technology with the rest of the world, leading to area B.

People in area B aren't the usual "I effing love science" crowd. First, they actually learn science; second, their sharing of the excitement of science is geared towards getting other people to learn science, while the IFLS crowd is virtue signaling.

People in area C are those who learn science for goal-oriented reasons. They want to have a productive education and career, so they choose science (and engineering) in order to have marketable skills. They might have preferred to study art or practice sports, but they pragmatically de-prioritize these true loves in favor of market-valued skills.

As for the rest, the big blob of IFLS people, I've given them enough posts (for now).

- - - - -

Note 1: the reason to follow real scientists and research labs on Twitter and Facebook is that they post about ongoing research (theirs and others'), unlike professional popularizers who post "memes" and self-promotion. Or complete nonsense, only to be corrected by the much smarter and incredibly nice Destin "Smarter Every Day" Sandlin:



Note 2: For people who still think that if one of two children is a boy, then the probability of two boys is 1/3 (it's not, it's 1/2):


and the frequentist answer is in this post. Remember: if you think a math result is incorrect, you need to point out the error in the derivation. (There are no errors.)

This particular math problem is one favorite of the IFLS crowd, as it makes them feel superior to the "rubes" who say 1/2, whereas in fact that is the right answer. The IFLS crowd, in general, cannot follow the rationales above, though some may slog through the frequentist computation.

Friday, January 13, 2017

Medical tests and probabilities

You may have heard this one, but bear with me.

Let's say you get tested for a condition that affects ten percent of the population and the test is positive. The doctor says that the test is ninety percent accurate (presumably in both directions). How likely is it that you really have the condition?

[Think, think, think.]

Most people, including most doctors themselves, say something close to $90\%$; they might shade that number down a little, say to $80\%$, because they understand that "the base rate is important."

Yes, it is. That's why one must do computation rather than fall prey to anchor-and-adjustment biases.

Here's the computation for the example above (click for bigger):


One-half. That's the probability that you have the condition given the positive test result.

We can get a little more general: if the base rate is $\Pr(\text{sick}) = p$ and the accuracy (assumed symmetric) of the test is $\Pr(\text{positive}|\text{sick}) = \Pr(\text{negative}|\text{not sick})  = r $, then the probability of being sick given a positive test result is

\[ \Pr(\text{sick}|\text{positive}) = \frac{p \times r}{p \times r + (1- p) \times (1-r)}. \]
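In code, for those who prefer it to algebra (same formula, nothing more):

```python
def posterior(p, r):
    """Pr(sick | positive) for base rate p and symmetric test accuracy r."""
    return p * r / (p * r + (1 - p) * (1 - r))

# The worked example above: 10% base rate, 90% accurate test.
print(f"{posterior(0.10, 0.90):.3f}")  # prints 0.500
```

Three lines in any spreadsheet or language beat anchoring-and-adjusting every time.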

The following table shows that probability for a variety of base rates and test accuracies (again, assuming that the test is symmetric, that is the probability of a false positive and a false negative are the same; more about that below).


A quick perusal of this table shows some interesting things, such as the really low probabilities, even with very accurate tests, for the very small base rates (so, if you get a positive result for a very rare disease, don't fret too much, do the follow-up).


There are many philosophical objections to all the above, but as a good engineer I'll ignore them all and go straight to the interesting questions that people ask about that table, for example, how the accuracy or precision of the test works.

Let's say you have a test of some sort, cholesterol, blood pressure, etc; it produces some output variable that we'll assume is continuous. Then, there will be a distribution of these values for people who are healthy and, if the test is of any use, a different distribution for people who are sick. The scale is the same, but, for example, healthy people have, let's say, blood pressure values centered around 110 over 80, while sick people have blood pressure values centered around 140 over 100.

So, depending on the variables measured, the type of technology available, the combination of variables, one can have more or less overlap between the distributions of the test variable for healthy and sick people.

Assuming for illustration normal distributions with equal variance, here are two different tests, the second one being more precise than the first one:



Note that these distributions are fixed by the technology, the medical variables, the biochemistry, etc; the two examples above would, for example, be the difference between comparing blood pressures (test 1) and measuring some blood chemical that is more closely associated with the medical condition (test 2), not some statistical magic made on the same variable.

Note that there are other ways that a test A can be more precise than test B, for example if the variances for A are smaller than for B, even if the means are the same; or if the distributions themselves are asymmetric, with longer tails on the appropriate side (so that the overlap becomes much smaller).

(Note that the use of normal distributions with similar variances above was only for example purposes; most actual tests have significant asymmetries and different variances for the healthy versus sick populations. It's something that people who discover and refine testing technologies rely on to come up with their tests. I'll continue to use the same-variance normals in my examples,  for simplicity.) 


A second question that interested (and interesting) people ask about these numbers is why the tests are symmetric (the probability of a false positive equal to that of a false negative). 

They are symmetric in the examples we use to explain them, since it makes the computation simpler. In reality almost all important preliminary tests have a built-in bias towards the most robust outcome.

For example, many tests for dangerous conditions have a built-in positive bias, since the outcome of a positive preliminary test is more testing (usually followed by relief since the positive was a false positive), while the outcome of a negative can be lack of treatment for an existing condition (if it's a false negative).

To change the test from a symmetric error to a positive bias, all that is necessary is to change the threshold between positive and negative towards the side of the negative:



In fact, if you, the patient, have access to the raw data (you should be able to, at least in the US where doctors treat patients like humans, not NHS cost units), you can see how far off the threshold you are and look up actual distribution tables on the internet. (Don't argue these with your HMO doctor, though, most of them don't understand statistical arguments.)

For illustration, here are the posterior probabilities for a test that has bias $k$ in favor of false positives, understood as $\Pr(\text{positive}|\text{not sick}) = k \times \Pr(\text{negative}|\text{sick})$, for some different base rates $p$ and probability of accurate positive test $r$ (as above):
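The biased version in code (a sketch of the same formula, with the bias $k$ defined as above):

```python
def posterior_biased(p, r, k):
    """Pr(sick | positive) when the test is biased toward false positives:
    Pr(positive | not sick) = k * (1 - r), with r = Pr(positive | sick)."""
    return p * r / (p * r + (1 - p) * k * (1 - r))

# k = 1 recovers the symmetric case; larger k (more false positives)
# lowers the posterior attached to any single positive result.
for k in (1, 2, 5):
    print(f"k={k}: {posterior_biased(0.10, 0.90, k):.3f}")
```

Note how the posterior drops as $k$ grows: the more a test is tuned to avoid missing sick patients, the less alarming any individual positive result is.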


So, this is good news: if you get a scary positive test for a dangerous medical condition, that test is probably biased towards false positives (because of the scary part) and therefore the probability that you actually have that scary condition is much lower than you'd think, even if you'd been trained in statistical thinking (because that training, for simplicity, almost always uses symmetric tests). Therefore, be a little more relaxed when getting the follow-up test.


There's a third interesting question that people ask when shown the computation above: the probability of someone getting tested to begin with. It's an interesting question because in all these computational examples we assume that the population that gets tested has the same distribution of sick and healthy people as the general population. But the decision to be tested is usually a function of some reason (mild symptoms, hypochondria, job requirement), so the population of those tested may have a higher incidence of the condition than the general population.

This can be modeled by adding elements to the computation, which makes the computation more cumbersome and detracts from its value to make the point that base rates are very important. But it's a good elaboration and many models used by doctors over-estimate base rates precisely because they miss this probability of being tested. More good news there!


Probabilities: so important to understand, so thoroughly misunderstood.


- - - - -
Production notes

1. There's nothing new above, but I've had to make this argument dozens of times to people and forum dwellers (particularly difficult when they've just received a positive result for some scary condition), so I decided to write a post that I can point people to.

2. [warning: rant]  As someone who has railed against the use of spline drawing and quarter-ellipses in other people's slides, I did the right thing and plotted those normal distributions from the actual normal distribution formula. That's why they don't look like the overly-rounded "normal" distributions in some other people's slides: because these people make their "normals" with free-hand spline drawing and their exponentials with quarter ellipses, That's extremely lazy in an age when any spreadsheet, RStats, Matlab, or Mathematica can easily plot the actual curve. The people I mean know who they are. [end rant]