## Tuesday, October 29, 2013

In the wake of the failure of the constitutional referendum to abolish the upper house, or Seanad, in Ireland, many people are clamouring for Seanad reform to make it “more democratic” and “less élitist” — in other words, they seem to want the Seanad elected by the exact same procedure that has given us an incompetent shower of party-political asshats in the Dáil for as long as anyone can remember.

What would be the point of two parallel houses directly elected by general franchise? We need only look to Washington to see how well that works. And what's wrong with élitism anyway? It's not like we need more publicans in Leinster House and fewer professors, nor does the Seanad have any real power to subvert the intentions of the democratically and directly elected Dáil in any case.

There's little disagreement that reform of both houses of the Oireachtas is highly desirable, so how about this...

The general idea here is that ministerial portfolios should be fixed prior to a general election, and three seanadóirí, suitably qualified in subject matter related to each portfolio, would be directly elected by general franchise to a corresponding 3-member “bench”. This would mean that every minister would be “shadowed” by 3, at least somewhat knowledgeable, senators, whose job would be to directly scrutinise his/her legislative and executive actions. Any scope or mechanism for the government to make the seanadóirí rubber-stamping cronies, or to redefine their roles or the roles of ministers, is eliminated. This is the germ of an idea; we can argue the details, but what follows is an outline of how I think this could work.

First, the status quo in Ireland in relation to ministerial portfolios would be abolished with extreme prejudice. The brain-dead state of affairs where the Minister for Agriculture and Fisheries today can be replaced with a Minister for Ice Cream tomorrow is both staggeringly wasteful and subject to populism. I originally got my ham radio license in 1988 from the “Department of Tourism, Transport, and Communications”; I now get it from the “Department of Communications, Energy & Natural Resources”, with I don't know how many “different” bizarrely-named departments, redesigned logos and letterheads, and civil service reorganizations in between.

For my idea to work, we need some kind of consistent and static — or at least not easily changed — set of ministerial portfolios, fixing ministers' roles and corresponding government departments prior to a general election. I can think of no argument against this other than that it's different from an existing “system” that has little or nothing to recommend it.

To support this up-front fixing of ministerio-departmental portfolios, amended constitutional provisions are required: either the portfolios must be explicitly enumerated, or any change must be encumbered by requiring a supermajority in both houses, or, even better, some combination. A workable solution might be to fix the size of the cabinet at 15, and stipulate that “there shall be ministries for finance, health, education, and basket-weaving, the names and areas of responsibility of the remaining 9 ministries to be decided by two-thirds majority of both Houses, no changes to take effect until after the following general election”. That immediately does away with the stupid and wasteful gerrymandering of portfolios every year or two.

Having decided the 13 portfolios before the election, the Seanad could be constituted, with its current 60 members, as follows:
• 11 to be appointed by the Taoiseach, as is currently the case;
• 10 to be elected by extending the university franchise to all university graduates, an increase of four; and
• 39 to be elected by general franchise to 13 three-member “benches” (three seats is a bench, right?), each bench directly corresponding to a ministerial portfolio.
The following “qualification rule” shall apply:
• Candidates for each bench must be formally qualified for that bench; and
• No candidate shall be a current member of, funded by or on behalf of, nor run under the imprimatur or official endorsement of, any political party.
In addition, three “5 year rules” shall apply:
• No person, having been a candidate in a non-Seanad political election in the preceding 5 years, may be a senator or a candidate for election or appointment to the Seanad — the “no has-beens” rule;
• No person, having been a senator in the last 5 years, may be a candidate in a non-Seanad political election — the “no wannabes” rule; and
• No person, having lost in a Seanad election, may be appointed to the Seanad by the Taoiseach for 5 years — the “no second bite at the cherry” rule.
Now, admittedly, the “formally qualified” rule is a bit tough to nail down — it's easy enough to expect candidates in the “Finance” bench to be accountants or economists, but it's harder to define for, say, “Education”; how would qualification for that be assessed? Is being a schoolteacher enough or would you have to have an Ed.D.? It's a problem, for sure, but not an insurmountable one.

Also, in this context, a “non-Seanad political election” means a general, local, or European election anywhere in Europe. Together, these two rules remove the “wannabes” — cronies appointed to raise their political profile in preparation for running in a general election for the Dáil — and “has-beens” — cronies who just lost their seat in the Dáil (or other political assembly). The extension to Europe as a whole prevents cross-border wannabes and has-beens from the UK Parliament or Northern Ireland Assembly. Although we lack the jurisdiction to prevent a former senator running for the Northern Ireland Assembly, we can disenfranchise and disbar him/her from ever voting or being a candidate in any future election in Ireland if he/she does, or subject him/her to fines or imprisonment.

How about that for accountability? Every minister has 3 senators on his/her ass, permanently. With no way to stack the deck with has-beens and wannabes, even with no increase in power, the new Seanad would raise the standard of political discourse, while being more democratic.

The chief objection I anticipate is that voting in 13 bench elections is too complicated for the average dimwit, or that counting the votes would be too time-consuming. Fine, then: let everyone vote for just one bench, or a few benches, of their choice. That way, everyone decides what's important to them, and fewer people end up voting purely for the sake of it on benches they have no knowledge of or don't care about. This would at least be different from the party-political voting pattern that characterizes Dáil elections. That doesn't seem like a bad thing to me at all, and having a different second house is the entire point of a bicameral system.

## Footnote on the Irish Parliamentary System

In Ireland, the Oireachtas consists of Uachtarán na hÉireann (President of Ireland), who is directly elected to this largely ceremonial and powerless position; the 60-seat Seanad (Senate), an indirectly elected “upper house” with no real power; and the 166-seat Dáil (House of Representatives or National Assembly), which effectively holds all legislative and executive power. This power is wielded by a 15-member cabinet (comh-aireacht, a seldom-used word, since it is almost never necessary to distinguish between the cabinet and the Government, or Rialtas), elected by the Dáil from amongst its membership, with a seldom-exercised constitutional provision allowing up to two members to be seanadóirí (senators).

Constitutionally and practically, the Seanad is almost entirely powerless, apart from a smattering of limited and never-used constitutional functions, such as the impeachment of a judge or the president. The Seanad can amend legislation, but the amendments are more like suggestions: they go back to the Dáil, and if the government of the day doesn't like the amendments, the bill can pass into law without Seanad support after 180 days (or less, in the case of financial bills). At worst, the Seanad can delay non-financial legislation by about 9 months.

In practice, once a general election — in which members are elected to the Dáil by general franchise — is over, one of the two large parties (Fine Gael and Fianna Fáil), in conjunction with one of the smaller parties or a group of independents, will have a majority in the Dáil, which will “elect” the leaders of those parties to the cabinet, which is the new government. The leader of the largest party will be the new Taoiseach (Prime Minister), the leader of the second largest party (of the coalition making up the cabinet, not the Dáil overall) will be the Tánaiste (Deputy Prime Minister), and the cabinet positions — Ministers for Finance, Health, etc. — will be assigned to senior figures in the governing parties according to their relative strengths and the importance of the position. It is usual for the largest party to keep the Ministry for Finance for one of their own, for example.

The Taoiseach, once elected, then appoints 11 people to the Seanad. The original idea was that these would be trusted advisers and experts, but, in practice, they have always been political cronies: has-beens, who just lost their seat in the Dáil, and wannabes, who are hoping for a seat in the Dáil in the future and have been appointed to the Seanad to raise their public profile in preparation for the next general election.

The remaining 49 seanadóirí (senators) consist of 6 elected by graduates of certain Irish universities, and 43 elected from 5 so-called “vocational panels”, which consist of union, local government, and other representatives.

## Tuesday, August 27, 2013

### Installer Tips for Open Source Developers

So, I've recently had to install a lot of software from source. I used to do this a lot — back in the days when I was a Linux hobbyist and had nothing better to do — but in recent years, I have tended to just live with whatever versions of software libraries come with whatever Linux distro I happen to be using on a particular machine, usually the latest Ubuntu LTS (although I've used Slackware, Red Hat, SuSE, Mandriva, Yellow Dog, and CentOS in addition to Irix, Solaris, and FreeBSD).

One thing that hasn't changed one iota in (almost) twenty years is the utter cluelessness of niche developers when it comes to packaging their software, and it's driving me nuts. Just because your software isn't totally mainstream doesn't mean you can subject your (potential) users to an installation nightmare of manually editing Makefiles and making decisions about minutiae of configuration and installation.

The rules are very, very simple indeed. Everyone who's ever installed anything from source knows the rules, so why the hell don't niche developers?

Here it is: the standard way of installing from source. After I download a (well-behaved) tarball, I expect to be able to do this:

    $ tar xzf foo-1.2.3.tar.gz
    $ cd foo-1.2.3
    $ ./configure
    $ make
    $ make test        # or "make check"
    $ sudo make install

There are a small number of variations, of course: you can use 7zip or bzip2 instead of gzip, and so on. At a push, you can inflict CMake or SCons on me, but, frankly, I don't care how much of a pain in the hole the Autotools are (and, as a developer, I hate them): ideally you should use them, because that's what your users expect, and they don't really care if it takes you a day of frustration to figure out. If that day of frustration is significant in the context of your overall software development, your software is probably not worth installing in the first place.

If you insist on inflicting your egotistical notions of how installation from source should work on the rest of us, the one thing you absolutely cannot do is ignore the now well-established conventions about where stuff goes: absent any explicit instruction from me, you install your shit in the appropriate directories in /usr/local and nowhere else. I don't care that it's easier for you to just copy great gobs of shit into /opt/IAmSoImportant and leave all the painful configuration to me, and for the love of all that is good and right, if you install anything directly in /usr, interfering with the packaging system of my distro, I will hunt you down like a rabid dog and hammer a mechanical pencil into your eye with your detached and still-bleeding leg. I know it's hard to decide whether you should put your headers directly in /usr/local/include or in a subdirectory thereof, or your libraries in /usr/local/lib or in a subdirectory thereof, and it's often subtly debatable how “platform independent” a file really is, so whether it should go in /usr/local/share/foo or somewhere else is, in the end, a judgement call. I know. I sympathise. I really do. But if you can't decide, and you wrote the damn thing, how the hell do you expect me to?

In short, if you want your software to ever have the slightest chance of “catching on” and becoming popular, you can't just do the fun stuff, you have to suck it up, make a decision or two, and do the mundane and tedious stuff that makes installation easy for your users and packaging easy for distributions.

End of rant :o)

## Thursday, August 8, 2013

### Rapid Charging Electric Cars

One of the things that comes up repeatedly with electric cars is how fast you can charge them. The current answer is “overnight” or, at least, on the order of hours. Some more bullish electric car proponents argue that charging stations could be built that would reasonably match gasoline filling speeds. I don't think that's plausible.

The EPA limits gasoline filling speeds to 10 gpm (gallons per minute), or 0.63 l/s (liters per second). The volumetric energy density of gasoline is about 36 MJ/l (megajoules per liter), which means that, at the gas station, energy is flowing into your tank at a rate of 36 MJ/l * 0.63 l/s = 22.7 MJ/s = 22.7 MW (megawatts).

Now, suppose that the battery-to-wheels efficiency of an electric car is five times the tank-to-wheels efficiency of a gasoline car, which is a fairly reasonable assumption. Then we only need a charging power of 22.7/5 =  4.5 MW. Generously supposing 90% efficiency from grid to battery, we need “only” draw five million watts off the power grid.
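The chain of arithmetic above is easy to check; here's a quick sketch in Python, using the same figures as the text:

```python
# The pump-to-plug arithmetic from the text, in code.
gasoline_mj_per_l = 36.0   # volumetric energy density of gasoline, MJ/l
fill_rate_l_per_s = 0.63   # EPA's 10 gpm limit, in l/s
ev_advantage = 5.0         # battery-to-wheels vs tank-to-wheels efficiency
grid_efficiency = 0.90     # grid-to-battery efficiency

pump_mw = gasoline_mj_per_l * fill_rate_l_per_s   # MJ/s is just MW
ev_mw = pump_mw / ev_advantage
grid_mw = ev_mw / grid_efficiency

print(f"pump: {pump_mw:.1f} MW, EV-equivalent: {ev_mw:.1f} MW, "
      f"grid draw: {grid_mw:.1f} MW")
```

which reproduces the 22.7 MW, 4.5 MW, and 5 MW figures.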

That is an absolutely vast amount of electric power: not much less than is needed to supply 4,000 American homes. This is not a plug-in device.

A reasonable rule of thumb for distribution-level electricity supply is “100 kVA per kV”: in other words, if you were a 10 kV primary customer, you could expect to be able to draw at most 1 MVA (the difference between VA and watts is not important for the current discussion). On that basis, the 5 MW charging station will need to be a 69 kV subtransmission customer of the local power utility. One charging station, not a gas station forecourt with 8 of them.

Now, let's talk about slew rate. You can't just turn on a 5 MW load (for want of a round number) like a 60 W lightbulb. Call your local power company and ask them how quickly they would allow a subtransmission customer to turn on a 5 MW load. The answer cannot be faster than they can spin up a gas turbine. The slew rate of a “hot” GE gas turbine generator is, at most, 5% per minute. In other words, ramping a 5 MW charger to full power in one minute would take 100 MW of spinning reserve, and only then have you matched gas-pump power.
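The spinning-reserve figure follows directly from the slew rate, assuming (as above) the ramp is 5% of nameplate capacity per minute:

```python
# Spinning reserve implied by ramping a 5 MW charger to full power
# in one minute, at a "hot" turbine slew rate of 5% of capacity/min.
target_load_mw = 5.0
slew_fraction_per_min = 0.05
ramp_minutes = 1.0

required_capacity_mw = target_load_mw / (slew_fraction_per_min * ramp_minutes)
print(f"{required_capacity_mw:.0f} MW of spinning reserve")
```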

Let's look at that another way. Take the 85kWh (kilowatt hour) battery in the top-end Tesla Model S. 85kWh is 306MJ. Suppose you want to charge that sucker in a minute flat, which is not unreasonable given its range of 265 miles: one minute would give you ten gallons of gas, at least enough to run a modern luxury car for that distance. To supply 306 MJ in 60s is 306/60 = 5.1 MW. Pretty much the same number.

In summary, refuelling an electric car at the same rate as a gas pump, any way you look at it, requires something of the order of 5 MW of electric power. This is, practically speaking, impossible. Notice that I haven't mentioned cost; economically speaking, it is utterly beyond any reason: the charger alone would cost millions. Recharging an electric car at even ten percent of gas-pump-equivalent speeds (requiring “only” a half-megawatt charger) presents enormous technical challenges in electricity supply.

In short, the technical challenges of recharging an electric car on consumer-acceptable timescales have almost nothing to do with the car or its battery.

## Wednesday, July 17, 2013

### NatGeo on Biofuels

“What breakthroughs do biofuels need?” asks National Geographic.

The real breakthrough would be recognition that biofuels simply cannot reasonably be expected to address a significant fraction of our energy needs. The only breakthrough worth having would be a tenfold increase in the solar efficiency of photosynthesis, which would only be possible through advanced genetic engineering, is very far beyond our current capabilities, and would be vehemently opposed by environmentalists.

The basic problem is that photosynthesis is horribly inefficient in converting sunlight to usable fuel. The maximum theoretical energy conversion efficiency of sunlight to biomass, not useful fuel, is just 6% [Zhu et al., 2008]; real efficiencies are considerably lower.

Brazil's sugarcane ethanol production is the absolute gold standard for large-scale biofuel production. The Brazilians started this bandwagon in the 70's and have aggressively optimized their ethanol production for 40 years. The most bullish prediction is that — if they keep improving at the same rate as they have in the past — they'll be able to average 9,000 liters of ethanol per hectare per year by 2018 [Goldemberg, 2008].
*[Chart: Brazilian Ethanol Yield Over Time]*
What's the energy content of 9,000 liters of ethanol? The highest value I could find is 23.4 MJ/l (megajoules per liter), corresponding to the HHV (higher heating value) of anhydrous ethanol. This means that 9 kl (kiloliters) of ethanol yields at most 211 GJ (gigajoules) of thermal energy on combustion.

Now, how much sunlight falls on a hectare? In energy slang a “sun” is 1 kW/m^2 (kilowatt per square meter). That's about the peak insolation (energy density on the ground from the sun) at noon at the equator, but of course, the sun doesn't shine at night, insolation falls with latitude, and there's seasonal variation. An insolation map of Brazil suggests that a reasonable value for the sum of the insolation over a year may be up to 2,000 kWh/m^2 (kilowatt hours per square meter), so let's be super-generous to biofuels and use a stingy figure of 1,000 kWh/m^2, which is more like the correct value for Ireland than Brazil, and equates to 36,000 GJ/ha.

So, every hectare gets at least 36,000 GJ of sunlight and produces at most 211 GJ of ethanol. That's an energy conversion efficiency of, at best, 211/36000 or less than 0.6%. A fair figure (using the LHV of ethanol, production of 7kl/ha, and 1,750 kWh/m^2) would be less than half of that. I think it's fair to say that the solar-to-liquid-fuel energy conversion efficiency of sugarcane ethanol production is currently no more than one quarter of one percent, less by the time the input energy necessary to grow, harvest, ferment, and distill the ethanol (at least one tenth of the energy produced) is accounted for.
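If you want to check the efficiency arithmetic, here it is in Python; the 21.1 MJ/l LHV in the second case is my own approximate figure for the “fair” scenario:

```python
# Solar-to-ethanol conversion efficiency: generous and "fair" cases.
HECTARE_M2 = 10_000   # square meters per hectare
KWH_TO_J = 3.6e6      # joules per kilowatt-hour

def solar_efficiency(yield_l_per_ha, mj_per_l, insolation_kwh_per_m2):
    fuel_j = yield_l_per_ha * mj_per_l * 1e6
    solar_j = insolation_kwh_per_m2 * KWH_TO_J * HECTARE_M2
    return fuel_j / solar_j

# Generous case from the text: 9,000 l/ha at HHV 23.4 MJ/l, 1,000 kWh/m^2
print(f"generous: {solar_efficiency(9_000, 23.4, 1_000):.2%}")
# Fairer case: 7,000 l/ha at LHV ~21.1 MJ/l (my figure), 1,750 kWh/m^2
print(f"fair:     {solar_efficiency(7_000, 21.1, 1_750):.2%}")
```

The generous case comes out just under 0.6%, the fair case under a quarter of a percent.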

You can do far better than this generating hydrogen in your backyard using a modern solar PV panel (20% efficient) to power a commercially available electrolyser (73% efficient). Under reasonable assumptions, today you can produce hydrogen via solar panels and electrolysis with more than 50 times the efficiency of sugarcane ethanol production. I'm not arguing for hydrogen-powered cars, merely illustrating the horrible inefficiency of ethanol production.

Speaking of cars, to ram the point home, let's suppose that we could magically transform all of the vehicles in the United States into flex-fuel vehicles capable of running on 100% ethanol. Let's further suppose that we could out-do the future Brazilians at their own game and obtain fantastic yields of 10 kl/ha (or 1 l/m^2). Let's further suppose that we could substitute ethanol 1:1 for gasoline, meaning that we would need 500 billion liters of ethanol per year just for gasoline, never mind other energy needs. How much land would that require? About 50 million hectares: roughly a third of the entire arable land area of the United States. If you use actual figures for US corn ethanol production (about 3,750 l/ha)? You need the best part of the entire arable land area of the USA, just to replace gasoline.
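The land arithmetic, using the same 500-billion-liter figure:

```python
# Land required to brew 500 billion liters of ethanol a year,
# at the two yields discussed in the text.
US_GASOLINE_L_PER_YEAR = 500e9

def hectares_needed(yield_l_per_ha):
    return US_GASOLINE_L_PER_YEAR / yield_l_per_ha

print(f"optimistic sugarcane (10,000 l/ha): "
      f"{hectares_needed(10_000) / 1e6:.0f} million hectares")
print(f"actual US corn (3,750 l/ha):        "
      f"{hectares_needed(3_750) / 1e6:.0f} million hectares")
```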

So when I see headlines like this, I say, “What breakthrough? It's less than 1% efficient.”

I don't know how we can address our need for a gasoline substitute (easily transportable, high energy density, short “recharge” time, etc.), but it seems pretty clear that biofuels are merely a distraction.

## Sunday, June 30, 2013

### Holy Fukushima: Scaremongering is Everywhere!

It seems like this kind of thing has been doing the rounds:

I got a message on Facebook saying “I'd really like you to do a blog piece on this”, so here it is.

The above image is, in fact, an ocean wave amplitude graphic for the March 2011 Fukushima earthquake from NOAA. It has nothing to do with the “fallout” from the Fukushima Daiichi nuclear plant. I repeat: it has nothing whatever to do with nuclear radiation of any kind — it is an ocean wave amplitude graphic. Somebody took this innocent graphic and maliciously emblazoned it with a scaremongering lie. A slew of ignorant anti-nuclear Internet Luddites then reposted this complete fabrication, which has been swallowed wholesale by some of the more gullible members of the public.

In many cases, the above graphic has been replaced by this one:

Now, this one is actually a particle simulation from the New Zealand-based ASR Ltd., a marine consulting company. If you actually go to the original page, it says (in their block capitals): “THIS IS NOT A REPRESENTATION OF THE RADIOACTIVE PLUME CONCENTRATION”. What this graphic actually tells us — if their computer simulation is accurate and reliable — is that, in the year after Fukushima, nothing (radioactive or otherwise) in the ocean surface currents could possibly have gotten much further than about halfway across the Pacific Ocean, which is quite a different thing from what the scaremongers would have you believe, and says nothing whatsoever about dilution or concentration. This was a publicity stunt by a private company showcasing their technology; it is neither peer-reviewed science, nor the report of a competent panel of experts.

So, is radiation from Fukushima killing Americans?

If you look at similar peer-reviewed science [Behrens et al., 2012], whose graphics have also been used for scaremongering, what you find is that the radioactivity of 137Cs — everyone's favorite radioisotope — off the coast of California (blue box IV) due to Fukushima peaks at about 1.2 Bq/m^3, while further north it might peak around 2 (cyan box II). To put this in perspective, the background of 137Cs in the Pacific is about 3 Bq/m^3, and the background level of radon in the air averages 5–15 Bq/m^3, depending on where you live. In other words, this kind of increase — another couple of becquerels per cubic meter — isn't going to make a whole lot of difference to anyone.

The expert consensus is similarly undramatic: by far the most pessimistic part of the WHO's assessment is that the lifetime risk of thyroid cancer for a 1-year-old female in the most affected parts of Fukushima prefecture may increase by 70%. Wow! 70%. But here's the thing: the baseline lifetime risk of thyroid cancer for women is 0.75%, so the additional lifetime risk (70% of that) is just over 0.5%. I'm not saying I'd like my risk of some kind of cancer to increase by half a percentage point, but it's not anything that I'm going to get my knickers in a knot over either.
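The risk arithmetic, spelled out with the same WHO figures:

```python
# A 70% relative increase on a small baseline is still small.
baseline_lifetime_risk = 0.0075   # 0.75% baseline thyroid-cancer risk (women)
relative_increase = 0.70          # worst case in the WHO assessment

additional_risk = baseline_lifetime_risk * relative_increase
total_risk = baseline_lifetime_risk + additional_risk
print(f"additional lifetime risk: {additional_risk:.3%}")
print(f"total lifetime risk:      {total_risk:.3%}")
```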

If you look around, you'll find out that the total Fukushima release was 900 PBq — by any standard an enormous amount of radiation — about one sixth of a Chernobyl, about equivalent to the fallout from a 2Mt nuclear warhead, or approximately bugger all compared to what the Americans, the British, the French, and the Russians were doing throughout the 50's, 60's and 70's in the Pacific, Siberia, and the Nevada desert.

You'll also find that a reasonable estimate of the total number of additional cancer-related deaths attributable to Fukushima is about 130. That's less than half the number of coal-mining deaths over the last 10 years in the US, less than a day-and-a-half's worth of road traffic fatalities, or a few weeks of coal-mining deaths in China. In other words, also bugger all.

The reality is that nuclear power plants are actually pretty safe in the grand scheme of things; it's just that when there is an accident, it's big and it makes a big splash on the news. It's a bit like the way a plane crash that kills 300 people is a major news event, but the 300 people who die on our roads every few days in an unnoticeable trickle never get on CNN.

So, to answer the question, the danger to Americans from Fukushima is essentially zero. If you're going to start washing your vegetables in filtered water — as some of the sensationalist anti-nuclear liars in the lede would have you do — think again… with a little more skepticism and balance.

## Monday, June 24, 2013

### Announcing “ftrace”

If you've ever had to understand someone else's largely undocumented code, you may have wished there was some way of narrowing down your search through the source to a simple case or two until you had learned your way around.

I'm in that situation with the Linux “perf tools”, which I'm trying to use in my research. I'm reasonably familiar with strace and ltrace, which trace system and library calls, respectively, but I don't really care about these: I just want to trace the “local” function calls so that I can get a handle on how perf works. That should be easy, right?

Mmm... no. It turns out that I'm not the first person to have this problem.

One of the suggestions on StackOverflow, from Johannes Schaub, is to use readelf to identify all the function symbols, set breakpoints for all of them in gdb, and retrieve the last frame of the backtrace at each breakpoint. Nice idea. Horribly slow, but a nice idea.
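For illustration, Johannes's suggestion can be sketched in a few lines of Python. This is my own simplified sketch, not the actual ftrace code: it parses `readelf -sW` output for FUNC symbols and emits a gdb command file that breaks on each one, logs the innermost backtrace frame, and continues:

```python
import subprocess

def parse_funcs(readelf_output):
    """Pull function names out of `readelf -sW` symbol-table output."""
    funcs = set()
    for line in readelf_output.splitlines():
        fields = line.split()
        # Symbol rows look like: Num: Value Size Type Bind Vis Ndx Name
        if len(fields) >= 8 and fields[3] == "FUNC":
            funcs.add(fields[7].split("@")[0])  # drop @GLIBC version suffixes
    return sorted(funcs)

def list_functions(binary):
    """All function symbols in an ELF binary, via readelf."""
    out = subprocess.run(["readelf", "-sW", binary], capture_output=True,
                         text=True, check=True).stdout
    return parse_funcs(out)

def gdb_script(funcs):
    """A gdb command file: break on every function, log one frame, go on."""
    lines = ["set pagination off"]
    for f in funcs:
        lines += [f"break {f}", "commands", "silent", "bt 1", "continue", "end"]
    lines.append("run")
    return "\n".join(lines)
```

Feed `gdb_script(list_functions("./a.out"))` to `gdb -x` and you get a crude call trace on stdout — which is also exactly why it's horribly slow: every call means a breakpoint trap and a context switch into gdb.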

I've been using OpenGrok to browse the kernel tools source, so it would be nice to be able to integrate with it, like this:

*[Screenshot: OpenGrok with 'ftrace']*
It would be really nice to have a callgraph from a particular program invocation, like this:

*[Image: Callgraph from 'ftrace' via GraphViz 'dot']*
So, I wrote a few hundred lines of Python to do what Johannes suggested, plus a bit more. After the pattern of strace and ltrace, I called it ftrace, and you can get it from github:
It's horribly slow and it's hacky, but it does what I need for the time being.

## Friday, June 21, 2013

### The NewLeaf Tragedy

Now, anyone who knows me knows that I'm opinionated: I'm not often ambivalent on “issues” because I think about them and decide which side I'm on, and genetically modified organisms — GMOs — are no different. So, here's the thing: I'm generally, though not unreservedly, in favor of GM technology subject to appropriate safeguards. I think most public opposition is a lazy bandwagon, fueled by a Luddite echo-chamber of mindless and ignorant fear-mongering activism that is impervious to evidence and reason. I think GMO-free products and “organic” produce are no more than a brilliant marketing tool to separate the gullible from their money.

One absolutely magnificent scientific achievement, an unqualified good for humanity, was killed by Luddism and ignorance: Monsanto's NewLeaf potato.

To understand what happened, the background begins with a family of pesticides collectively called “Bt” (for the harmless soil bacterium, Bacillus thuringiensis, from which they are derived). Bt is generally regarded as harmless, to the extent that it is approved for use on certified “organic” crops with zero wait-time between spraying and harvesting; it is produced by breeding vats of one of a handful of B. thuringiensis strains and extracting the spores or the active pesticidal proteins (the Cry family of δ-endotoxins), which are then sprayed on crops in suspension. Different Cry proteins are toxic to different ranges of insect species, so each Bt strain yields a fairly narrow-spectrum insecticide, depending on what exact Cry protein combination that strain expresses. The usual targets are coleoptera (beetles) and lepidoptera (butterflies and moths), the most pestiferous insect classes, but not hymenoptera (bees, wasps, and ants), which are generally benign or even useful (some Bt strains express proteins that are toxic to certain sawflies, but not to other hymenoptera). Cry proteins have been confirmed to be entirely non-toxic to vertebrates, but might be immunogenic to humans in high doses. Technical details aside, Bt is, or was, widely used on potatoes to control the Colorado potato beetle, an endemic pest that is devastating to potato crops.

The other piece of background information that you need is that the Russet Burbank potato is the most widely grown potato cultivar in the US, preferred for french fries by fast food chains, and is widely grown on a very large scale for this purpose.

Now, Monsanto succeeded in splicing the Cry3a gene from Bt into the Russet Burbank potato — this was the NewLeaf potato — so that it would express, in the leaves, a protein known to be toxic to the Colorado potato beetle at a few ppm (parts per million): sufficient to kill them, but totally harmless to us (the Cry3a protein concentration in the tuber is less than 180 parts per billion; at that concentration, it wouldn't matter if it were strychnine). Clever, eh? No, not just clever: a bloody marvel of modern science is what it was. We should've had a ticker-tape parade for these guys.

NewLeaf was a massive success. Half of Idaho grew it. It was in every french fry you ate for several years. Then the Luddite activists stepped in. It was “dangerous”, they said. It was “unnatural”, they said. It was “frankenfood”, they said. It was harmless and brilliant. It contained the same pesticide that's sprayed on their beloved “organic” produce by the ton, the only difference was how it got there.

Unfortunately, the PR campaign to demonize NewLeaf was a massive success too. Misinformed consumers revolted; fast food chains stopped buying NewLeaf potatoes; farmers stopped growing them; and, after a few iterations (there were later “versions” of NewLeaf with different genes), Monsanto stopped producing them altogether because there was no demand any more.

Now, there are basically two purposes for which genes are spliced into crops like potato, soy bean, and corn:
• Pesticide expression (such as the Cry3a protein in NewLeaf)
• Herbicide resistance (such as glyphosate resistant “Roundup Ready” soy)
I have no problem with either of these, and I see no way that either of them can plausibly cause any harm. I've explained the situation with NewLeaf in detail, and glyphosate is actually pretty innocuous stuff that breaks down readily in the soil.

My concern is that transgenic crops have been so massively successful that, in some cases, it can be difficult for farmers to get seeds that aren't genetically modified, and their choice of what to plant should be preserved. It seems like there may be a risk of a monoculture arising, which history (the Irish Potato Famine of the mid 19th century, for example) suggests is a really bad idea.

Provided that the availability of a wide range of cultivars is preserved, I see no reason to fear GM.

## Wednesday, June 19, 2013

### What's Cheap At Six Grand per Liter?

So, it seems that most dog owners dose their dogs with “spot-on” topical antiparasitics once a month to prevent fleas and ticks. It looks like I'll have to pick one for Scooter. There is a range of products to choose from; the popular brand names include Frontline, Advantage, and Advantix.

But, boy, are these things expensive!

Look at PetArmor, a generic for Frontline and one of the cheapest options, in the medium-dog size. You get 3 doses; each dose is a tiny 1.34 ml plastic pipette, and the cost at Walmart is $25. That's $25 for about 4 ml, or $6,250 per liter. OK, so maybe working out a cost per liter is not entirely fair, since it comes in three tiny plastic vials, but if you can get 1 ml glass vials for $100/1000, or 10¢ each, online, these little plastic ones can't be all that expensive, so the ingredients must be super-expensive or something, right?

Mmmm… no. The basic Frontline (or PetArmor or Fiproguard) is a 9.8% solution of fipronil in, presumably, some kind of oil (fipronil is only very slightly soluble in water, and the mechanism of action suggests a chemically inert edible oil, probably some kind of mineral oil like the stuff you buy in Ikea to rub into your wooden chopping boards at $10/liter, or 1¢/ml). Fipronil itself can't be all that expensive, since you can get it in a 9.1% suspension (in water) for about $100/liter for termite control (Termidor SC or Taurus SC). Even if the entire cost of these products were solely in the fipronil, each vial would contain about 15¢ worth. So the material cost for 3 vials of these topical antiparasitics, including packaging, can't reasonably exceed $1, so why does it cost 25 to 60 times that? Inquiring minds want to know.
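The arithmetic above is quick to sketch. The figures are the ones from the text; the assumption that the solution's density is roughly 1 g/ml (so 9.8% w/v ≈ 0.098 g of fipronil per ml) is mine:

```python
# Back-of-the-envelope check of the spot-on antiparasitic pricing.
doses, dose_ml, pack_price = 3, 1.34, 25.00

total_ml = doses * dose_ml                    # ~4.02 ml per pack
per_liter = pack_price / total_ml * 1000
print(f"${per_liter:,.0f} per liter")         # ≈ $6,219 per liter

# fipronil price implied by Termidor SC: ~$100 per liter at 9.1% w/v
fipronil_per_gram = 100 / (1000 * 0.091)      # ≈ $1.10 per gram
cost_per_dose = dose_ml * 0.098 * fipronil_per_gram
print(f"~{cost_per_dose * 100:.0f}¢ of fipronil per pipette")   # ≈ 14¢
```

Even rounding everything up generously, the active ingredient in a three-pack is well under a dollar.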

## Friday, June 14, 2013

### Introducing Scooter

I was considering a post that mentioned Scooter, and then realized that not everyone knows who, or what, Scooter is:
 [Image gallery: four pictures captioned “No”, then two pictures of Scooter captioned “Yes”]
Hope this clears it up.

## Thursday, June 13, 2013

### Leaving Certificate Mathematics 2013: Paper 2, Question 8 “Solution”

So, the State Examination Commission in Ireland made a giant cock-up. At issue is a particular mathematics question, so — as someone not entirely ignorant of basic mathematics — I thought I'd explain the problem.

 Leaving Cert Mathematics 2013, Paper 2, Question 8

The problem is that a triangle is completely specified by three values — either angles or lengths — provided that at least one of them is a length: one length and two angles, two lengths and the included angle, or three lengths. Given one of these, you can compute the missing values: two lengths and one angle, one length and two angles, or three angles, respectively.

The question text specifies two lengths: $$|HR|=80~\text{km}$$ and $$|RP|=110~\text{km}$$, and one angle: $$\angle{r}=124^\circ$$ (or $$\angle{HRP}=124^\circ$$ if you prefer the more verbose notation). The diagram shows one angle consistent with the text, $$r=124^\circ$$, and — in the English version of the exam — another angle, $$h=36^\circ$$, which is inconsistent with the values given in the text.

If we take the text as correct, and ignore the diagram, let's see what happens — the problem now is: find $$\angle h$$ given (dropping the units and angle symbols for convenience): $|HR|=80 \\ |RP|=110 \\ r=124$
We know that the sum of the internal angles of any triangle is $$180^\circ$$, so: $r+h+p=180 \Rightarrow h+p = 180-r = 56$
If we drop a line vertically from the apex, $$R$$, to a point, $$X$$, on the base, we then have two right-angle triangles “back-to-back”, i.e. sharing the line-segment $$RX$$.

From the right-angle triangle, $$HRX$$, on the left, we have: $|RX|=|HR|\sin(h)$ and from the right-angle triangle on the right, $$PRX$$, we have: $|RX|=|RP|\sin(p)$ Combining these two: $|HR|\sin(h)=|RP|\sin(p)$ But we know that $$h+p=56$$ or $$p=56-h$$, so $|HR|\sin(h) = |RP|\sin(56-h)\qquad\qquad(1)$ Now, a basic trigonometric relation (listed in the “log tables” that every candidate gets in the exam) is $\sin(A-B)=\sin(A)\cos(B)-\cos(A)\sin(B)$ Applying this to the RHS of (1), we get $|HR|\sin(h)=|RP|(\sin(56)\cos(h)-\cos(56)\sin(h))$ Rearranging: $(|HR|+|RP|\cos(56))\sin(h) = |RP|\sin(56)\cos(h)$ or $\tan(h) = \frac{\sin(h)}{\cos(h)} = \frac{|RP|\sin(56)}{|HR|+|RP|\cos(56)}$ or $h = \tan^{-1}\left( \frac{|RP|\sin(56)}{|HR|+|RP|\cos(56)}\right)$ Substituting in the values we know:  $h = \tan^{-1}\left( \frac{110\times 0.8290}{80+110\times0.5592}\right) = \tan^{-1}(0.6444) = 32.80^\circ$ Therefore $32.80\neq 36 \Rightarrow \text{The SEC are morons}$
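The derivation above is easy to check numerically. A quick sketch in Python, using the values from the question text:

```python
from math import sin, cos, atan, degrees, radians

HR, RP, r = 80.0, 110.0, 124.0   # values from the question text
hp = 180.0 - r                   # h + p = 56 (angle sum of a triangle)

# h = arctan( |RP| sin(56) / (|HR| + |RP| cos(56)) ), as derived above
h = degrees(atan(RP * sin(radians(hp)) / (HR + RP * cos(radians(hp)))))
p = hp - h

print(f"h = {h:.2f}, p = {p:.2f}")   # h ≈ 32.80°, not the diagram's 36°

# sanity check: both right-angle triangles give the same altitude |RX|
assert abs(HR * sin(radians(h)) - RP * sin(radians(p))) < 1e-9
```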

Now, the question that is actually asked is “Find the distance from R to HP”. There are at least two further problems:

• the distance from $$R$$ to the line-segment $$HP$$ is not uniquely defined (we must assume that the perpendicular distance, i.e. $$|RX|$$ is intended); and
• even under the simplifying assumption that the Earth is a sphere, latitude makes a significant difference to the answer (consider point $$R$$ being at the North pole vs. line-segment $$HP$$ lying on the equator), and non-Euclidean geometry is not on the syllabus.
If one uses the given value for $$h$$, the problem becomes utterly trivial: $|RX|=|HR|\sin(36^\circ)$ If we don't ignore the diagram, we have two angles and two lengths and must decide to discard either one of the angles or one of the lengths (in the foregoing, we discarded $$h$$ from the diagram and kept the two lengths and angle $$r$$ from the text), so there are actually four choices for how to proceed.
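Comparing the two candidate answers numerically (assuming, as above, that the perpendicular distance is what's intended):

```python
from math import sin, radians

HR = 80.0
# with the diagram's h = 36°, the presumably intended, trivial answer:
print(round(HR * sin(radians(36)), 1))     # ≈ 47.0 km
# with the h ≈ 32.8° implied by the values in the question text:
print(round(HR * sin(radians(32.8)), 1))   # ≈ 43.3 km
```

A difference of nearly 4 km, depending on which part of the question you choose to believe.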

All in all, a gargantuan cock-up and staggering incompetence from the SEC. Imagine that nobody there spotted any of this!

## Friday, June 7, 2013

### Accuracy in Newspapers

Peter Murtagh had an article in the Irish Times today about a firearm belonging to the late Lord Louis Mountbatten being returned to his family, entitled “Mountbatten handgun returned to family by Defence Forces and Garda”. The firearm is pictured (below) and the original text of the article said that it's a “Barretta” .22 caliber.

 A late model FN M1906 chambered for .25 ACP is not a “Barretta .22”

Everything about this description is wrong. First of all, the name of the Italian arms manufacturer in question is “Beretta”, with one “r” and an “a” only at the end. Since the original article was published, they've half-corrected the spelling: it now reads “Berretta”.

Secondly, the firearm pictured was manufactured by the Belgian Fabrique Nationale, better known as “FN”, so it's not a Beretta at all (the first clue is that it has an ordinary ejection port and not the open-top slide that you might expect for an older Beretta). It has the oval “intertwined ‘F’ and ‘N’” logotype of FN, not either of the circular logos used by Beretta (older models have a “PB”, for Pietro Beretta, logotype; more modern ones have a “three arrows” emblem). This can be verified by the simple expedient of turning the damn thing over and reading what it says on the other side:

FABRIQUE NATIONALE D'ARMES de GUERRE HERSTAL BELGIQUE
BROWNING'S PATENT-DEPOSE

Finally, it's chambered for .25 ACP and not .22 caliber.

The particular weapon pictured is actually the third iteration of FN's Model 1906: you can tell this from the flange on the front of the trigger, which was added in the third version. The first version, released in 1905, had only the Colt 1911-style grip safety that you can see in the photograph; a thumb safety was added (on the other side) in the second version, and enlarged in the third version when the front of the trigger was also widened. This is by far the most common version, accounting for over a million of the 1.2 million or so made. FN, amongst many other manufacturers, licensed the basic design from legendary firearm designer John Browning; a dozen or more of them produced “Baby Brownings” like this between 1905 and 1940 or so.

In fairness, Beretta produced a number of superficially similar pocket pistols during the same period, but they are markedly different in many respects, having an open-top slide, much different grip safety and trigger guard, and, most importantly, a completely different company logo!

Now, I'm not a gun nut by any means, but the ported (rather than open-top) slide and misspelling of the supposed manufacturer's name made me investigate a little further. It took me all of 15 minutes on the Internet to find out the above. If I can do it, so can a professional journalist.

## Tuesday, May 14, 2013

### Arithmetic Intensity Origins

Arithmetic intensity is a key concept in the Roofline Model [Williams et al., 2009], which is important to my own research, and has definitely gone mainstream, making it into the lexicon of computer and computational science and the latest editions of John Hennessy and Dave Patterson's highly-respected books, “Computer Architecture: A Quantitative Approach” and “Computer Organization and Design”. In both books — the former on p.286 et seq., and the latter on p.668 et seq. — the treatment cites the same “Williams et al. 2009”, which inadvertently kinda sorta gives the impression that the idea of arithmetic intensity is due to them (in this case, the “et al.” is Andrew Waterman and Dave Patterson). This seems odd — I would have thought that the term, “arithmetic intensity”, must have been around long before the Roofline Model — but, having been introduced to the term around that time, I couldn't be sure. Where does this term originate?

As it happens, “arithmetic intensity” is a surprisingly new term. Looking back through the literature with Google Scholar and CiteSeerX, it seems that the first occurrence in computer/computational science might have been as early as 1992 [Walter, 1992], but in that case, it seems most likely that it was a rhetorical flourish — synonymous with “containing lots of arithmetic” — rather than a term intended to be coined with any semblance of a consistent technical meaning. It essentially lay dormant for almost 10 years before resurfacing in a Stanford whitepaper by Bill Dally, Pat Hanrahan, and Ron Fedkiw in 2001, where it is loosely defined:
> This in turn enables architectures with a high degree of arithmetic intensity, that is applications with a high ratio of arithmetic to memory bandwidth.
In 2002, it appears about half a dozen times [Dally, Kapasi et al., Owens et al., Purcell et al., Schröder, Jayasena et al.], in all but one case in publications (formal or otherwise) about stream computing and GPGPU computing originating at Stanford. In the immediately following years, the term took off, with over 1,000 occurrences in 2012 (see Figure 1).

 Figure 1: Occurrences in Search vs. Year
So, it seems that someone in Stanford, perhaps Bill Dally, has a pretty good claim on coining the term “arithmetic intensity” with its current meaning around 2001.
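For what it's worth, the modern meaning is easy to pin down: arithmetic intensity is just the ratio of arithmetic operations performed to bytes of memory traffic. A minimal sketch, using my own illustrative flop/byte counts for two standard double-precision kernels (a simplification that ignores caches and write-allocate traffic):

```python
# Arithmetic intensity in the Roofline sense: flops per byte moved.

def arithmetic_intensity(flops, bytes_moved):
    return flops / bytes_moved

n = 1_000_000
# daxpy: y = a*x + y  ->  2n flops; read x, read y, write y = 3 * 8n bytes
daxpy = arithmetic_intensity(2 * n, 24 * n)

# naive dense matrix multiply: 2m^3 flops over three m-by-m matrices of doubles
m = 1000
dgemm = arithmetic_intensity(2 * m**3, 3 * 8 * m**2)

print(daxpy, dgemm)   # daxpy ≈ 0.083 flops/byte (memory-bound); dgemm ≈ 83
```

On a roofline plot, the first lands well left of the machine's ridge point (bandwidth-limited) and the second well to the right (compute-limited), which is exactly the distinction the term was coined to capture.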

## Thursday, March 28, 2013

### The Era of Personal Broadcasting

I'm a reluctant latecomer to the era of “personal broadcasting”. For many years, I have regarded blogs and twitter and suchlike with considerable disdain: it isn't a conversation; it isn't social; it's self-indulgent; in some sense, it embodies a certain hubris: that one has something to say that is worth hearing.
I know all this, yet I frequently find myself saying the same things over and over, and wishing that there was somewhere that I could write these things once and have them persist so that, rather than repeating myself, I could just point and say, “Look, I've already thought about this, and here's what I think”.
And so, my blog: Fuinneamh.
“Fuinneamh” is the Irish word for energy. It is pronounced ['fwɪn.jəv], approximately “fwinyev”. I was tempted to call it “Innealtóireacht Acmhainní Fuinnimh”, a direct translation of “Energy Resources Engineering” — my course of study at Stanford — but that's a helluva mouthful for non-Irish speakers, and I'm not that much of a gaelgóir (Irish speaker) anyway.
I don't expect that it'll be entirely, or even mostly, about energy, despite the title. I have eclectic interests — I have been called an “intellectual butterfly” — and I was born, and remain, a philosophaster, the most polite euphemism I could find for “bullshitter”.