Yoram Bauman is a trained economist and stand-up comedian. New Scientist spoke to him about his new book, The Cartoon Introduction to Economics.
You claim to be the first and only stand-up economist. How did that happen?
By accident. While I was at grad school I wrote a parody of an economics textbook and performed it during a humour session at an American Association for the Advancement of Science conference. One thing led to another: people wanted to hear economics jokes, then were willing to pay me to tell them, and then I finally got a website. I wrote on the site that I was the first and only stand-up economist - and if it says it on the Internet, then it must be true.
What’s so funny about money?
What's on everybody's mind right now is that people have less of it than they used to. I think during a time like this, being able to take weighty, difficult issues like economics and share a common laugh about them is very valuable.
The other part of it is that people have very strong stereotypes. They think that economics is dull and that economists are the kind of people who think supply and demand is a good answer to the question: “Where do babies come from?”. Stereotypes like that make for good comedy material.
You still do some research. Does it ever feature in your comedy?
I’m not a hardcore academic, but I have a PhD in environmental economics. I do an hour-long comedy routine, but at the end I throw in five or ten minutes about climate change and the terrific revenue-neutral carbon tax in British Columbia.
Your new book is The Cartoon Introduction to Economics Volume Two: Macroeconomics. What is it about cartoons that makes them so well suited to explaining economic concepts?
Most economics textbooks are about 600 pages long and full of graphs: they’re very daunting. A cartoon book is accessible. I have lots of fans - OK, actually, maybe not lots - I have some fans that send me emails telling me that their 10-year-old children are reading the books. I think the book appeals to the kid in all of us.
So who exactly is the book aimed at?
Well, the 10-year-olds are very precocious, let’s be clear about that. The book’s aimed at high school and college students as an introduction to economics, but also at the general public. I think at college there’s a tendency to focus on the mathematics of economics and to ignore the personalities and the history. The book fills some of those gaps.
A mathematician, an engineer and a psychologist go up to a buffet… No, it's not the start of a bad joke.
While most of us would dive into the sandwiches without thinking twice, these diners see a groaning table as a welcome opportunity to advance their research.
Look behind the salads, sausage rolls and bite-size pizzas and it turns out that buffets are a microcosm of greed, sexual politics and altruism - a place where our food choices are driven by factors we're often unaware of. Understand the science and you'll see buffets very differently next time you fill your plate.
The story starts with Lionel Levine of Cornell University in Ithaca, New York, and Katherine Stange of Stanford University, California. They were sharing food at a restaurant one day, and wondered: do certain choices lead to tastier platefuls when food must be divided up? You could wolf down everything in sight, of course, but these guys are mathematicians, so they turned to a more subtle approach: game theory.
Applying mathematics to a buffet is harder than it sounds, so they started by simplifying things. They modelled two people taking turns to pick items from a shared platter - hardly a buffet, more akin to a polite tapas-style meal. It was never going to generate a strategy for any occasion, but hopefully useful principles would nonetheless emerge. And for their bellies, the potential rewards were great.
First they assumed that each diner would have individual preferences. One might place pork pie at the top and beetroot at the bottom, for example, while others might salivate over sausage rolls. That ranking can be plugged into calculations by giving each food item a score, where higher-ranked foods are worth more points. The most enjoyable buffet meal would be the one that scores highest in total.
In some scenarios, the route to the most enjoyable plate was straightforward. If both people shared the same rankings, they should pick their favourites first. But Levine and Stange also uncovered a counter-intuitive effect: it doesn't always pay to take the favourite item first. To devise an optimum strategy, they say, you should take into account what your food rival considers to be the worst food on the table.
If that makes your brow furrow, consider this: if you know your fellow diner hates chicken legs, you know that can be the last morsel you aim to eat - even if it's one of your favourites. In principle, if you had full knowledge of your food rival's preferences, it would be possible to work backwards from their least favourite and identify the optimum order in which to fill your plate, according to the pair's calculations, which will appear in American Mathematical Monthly (arxiv.org/abs/1104.0961).
So how do you know what to select first? In reality, the buffet might be long gone before you'd worked it out. Even if you had, the researchers' strategy assumes a rather polite, turn-taking buffet, so it has its limitations. It does, however, provide practical advice in some scenarios. Imagine Amanda is up against Brian, who she knows has the opposite ranking of tastes to her. Amanda loves sausages, hates pickled onions, and is middling about quiche; Brian loves pickled onions, hates sausages, and shares her view of quiche. Having worked out that her favourites are safe, Amanda should prioritise the morsels where their taste-rankings match - the quiche, in other words.
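The turn-taking game is small enough to solve exactly by brute force. Here is a minimal sketch - my own, not the authors' formulation - in which two diners alternate picks and each plays to maximise their own total score; the food scores are invented for illustration:

```python
from functools import lru_cache

# Hypothetical scores (invented for illustration): higher = tastier.
AMANDA = {"sausage": 3, "quiche": 2, "pickled onion": 1}
BRIAN = {"sausage": 1, "quiche": 2, "pickled onion": 3}  # opposite ranking

def best_play(items, scores_a, scores_b):
    """Totals (A's score, B's score) when both pick optimally, A going first."""
    @lru_cache(maxsize=None)
    def solve(remaining, a_turn):
        if not remaining:
            return (0, 0)
        best = None
        for item in remaining:
            rest = tuple(x for x in remaining if x != item)
            a, b = solve(rest, not a_turn)
            if a_turn:
                a += scores_a[item]
            else:
                b += scores_b[item]
            # the mover keeps the choice that maximises their own total
            mine = a if a_turn else b
            if best is None or mine > (best[0] if a_turn else best[1]):
                best = (a, b)
        return best
    return solve(tuple(items), True)

print(best_play(tuple(AMANDA), AMANDA, BRIAN))  # (5, 3)
```

With these opposite rankings the search confirms the intuition above: Amanda ends up with the sausage and the quiche (5 points) whichever of the two she grabs first, because Brian never wants her favourites.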
Leonardo da Vinci's Vitruvian Man is the most widely recognised drawing on the planet. A study of human form and proportion, the iconic depiction of a man standing with arms outstretched, framed by circle and square, has come to epitomise the notion of imitation as the sincerest form of flattery: it has been used by Disney, appeared on Euro coins, and even been parodied on The Simpsons.
But the image wasn't a product of imagination alone. In his new book, Da Vinci's Ghost, Toby Lester uncovers its long and intricate history, revealing that da Vinci was in the business of imitation himself. Indeed, the drawing built on the idea that the human form was a precisely proportioned structure representing the measure of all things - a philosophy developed by the Roman architect Vitruvius during the rule of Caesar Augustus.
When it comes to environmental change, most people have an opinion. Few, however, have answers. But the University’s Environmental Change Institute (ECI) has been trying to change that – a process that has not only involved complex natural and social science, but squaring up to the task of communicating research to the outside world, too. Twenty years on and the world’s environmental problems are far from solved, but the ECI has helped inform government policy and increase public understanding.
‘When the ECI was set up through benefaction in 1991, it was extremely innovative and ambitious,’ explains Professor Andrew Goudie, who led the Institute’s founding task force. ‘It was designed to be problem-led and therefore interdisciplinary, running big research programmes across climate change, ecosystems and energy.’ Those same research areas are just as important today, and so far some 300 research projects have been completed.
Much of the work has been carried out in collaboration with academics from across the University, working with the ECI to address the world’s most pressing environmental problems. From fine-grained social studies in the UK to long-term monitoring in tropical forests and, most recently, the world’s largest climate forecasting experiment, that variety is part of the reason the ECI has achieved such success in influencing national and international policy.

Take, for instance, the UK Climate Impacts Programme (UKCIP), funded by the government since 1997. The programme began by working with individual regions of the UK to assess the impact climate change might have. ‘We worked with stakeholders around the country to help them understand how climate change could affect them and what they might want to do in response,’ explains Dr Chris West, Director of UKCIP. As well as geographical regions, UKCIP works closely with service providers like the healthcare sector and utilities companies – a process designed to engage key decision-makers with the risks of climate change. Indeed, UKCIP’s stakeholder-led approach is acknowledged worldwide as pioneering, and is regularly held up as a model for others.
If such a scheme was designed to raise awareness, then more recent work is capitalising upon it. Spearheaded by Professor Myles Allen, climateprediction.net is the world’s biggest climate prediction experiment to date, jointly run by Oxford e-Research Centre’s Volunteer Computing, the Department of Physics and the ECI. ‘Our original experiments were focused on idealised physical science questions,’ explains Professor Allen, ‘but our latest stream of research, the Weather at Home experiment, sponsored by The Guardian, is focused on the immediate impacts of climate change and how it affects extreme weather.’
Just outside Seville in Spain, a tower the size of a modest skyscraper stands proud amid a sea of mirrors, sprawling over 185 hectares. These adjustable mirrors, called heliostats, focus the sun's rays on a central receiver atop the tower, lighting it up like a beacon. But unlike conventional solar technology, this plant produces power all night long.
This is Gemasolar, the world's first commercial-scale solar power plant to keep its energy flowing for 24 hours. It stores solar energy by heating up molten salts. This heat is then converted into electricity in the daytime and during the hours of darkness.
"In the past, prototype plants have been designed to generate electricity when the sun is shining," says Greg Glatzmaier, senior engineer at the National Renewable Energy Laboratory in Golden, Colorado. "But Gemasolar uses large storage tanks [for molten salt], and a very large heliostat field, so when the sun is shining it collects enough solar energy to create electricity after the sun's gone down."
The huge array of 2650 heliostats reflects sunlight onto a receiver perched on the tower. Its energy is transferred to the salts – a mixture of sodium and potassium nitrates – which circulate in the receiver. This concentrated solar power raises the temperature of the salts to 500 °C, which in turn is used to raise steam to drive turbines. The plant boasts a maximum output of 20 megawatts.
The super-hot salt is stored in insulated tanks, providing capacity to produce electricity for up to 15 hours without sunlight. During a sunny spell in June 2011, the plant's owners Torresol Energy announced it had achieved the holy grail of solar energy production: generating electricity for 24 hours straight.
The plant had only been operating for a month at that point, and the eventual goal is to supply uninterrupted electricity throughout the summer. Over a whole year Gemasolar aims to produce the equivalent of 6400 full-capacity hours, says Torresol Energy's Santiago Arias Alonso. That's 73 per cent of a full year, with the gap accounted for by reduced sunlight in winter. "Our expected output will be more than 100 gigawatt-hours a year," he says. That's enough to provide 25,000 homes with electricity.
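The quoted figures hang together arithmetically; a quick check, using only the numbers stated above:

```python
# Quick check of the figures quoted above, using only the stated numbers.
max_output_mw = 20            # the plant's maximum output
full_capacity_hours = 6400    # target full-capacity hours per year

hours_per_year = 365 * 24     # 8760
capacity_factor = full_capacity_hours / hours_per_year
annual_output_gwh = max_output_mw * full_capacity_hours / 1000  # MWh -> GWh

print(round(capacity_factor * 100), "per cent")  # 73 per cent of the year
print(annual_output_gwh, "GWh")                  # 128.0 GWh - "more than 100"
```

At 100 GWh shared between 25,000 homes, each home gets about 4,000 kWh a year, a plausible household figure.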
Before the days of sleek surfaces and brushed aluminium, my mouse had three buttons. But that central selector has shrivelled into a runty little scroll wheel; the third nipple of the computing world. Where did it all go wrong?
Actually, the rise and fall of the three-button mouse is an emotional tale, one to tug at the heartstrings of even the most ardent technophobe. Well, I exaggerate a little, but when I started digging around to find out where my beloved third button had disappeared to, I found it a fascinating story, so I’m going to pass it on. It starts in an age where computer parts were made of wood. No, seriously.
The Mouse’s Tale
Long, long ago in a place far, far away…. OK, OK. A little over three decades ago in sunny California the first mouse was invented by the forward-looking Douglas Engelbart. His first prototype, built in 1963, was made of wood — wood! — with cute little metal wheels, and the device picked up its now universal name because of the cord that extended from its rear. He demonstrated the first production model on December 9 1968 at the Convention Centre in San Francisco, at an event that’s gone down in history as “The Mother of All Demos”.
It’s tough to compare the release of that mouse to modern product launches. Prior to that, people had only ever thought to use a keyboard with a computer. This thing was era-defining. What’s more, it had — you guessed it — three glorious buttons. Count ‘em, Mac fans.
Apparently during the design process Engelbart advocated “as many [buttons] as possible” and the only reason the team settled for three is that they “could not find anywhere to fit any more switches”. Sadly, Engelbart never made a single penny in royalties from his mouse. But he could at least console himself with kick-starting a completely different way of interacting with computers — one that hasn’t been bettered since.
Magic Number, or Just a Crowd?
That said, the mouse took time to catch on. Its big break is up for debate, sure, but I’m going to stick my neck out and suggest that it was copier kings Xerox who shoved it into the mainstream. Xerox used mice internally at their Palo Alto research centre throughout the ’70s, but when they decided to revolutionise office computing in the early ’80s — which, err, they didn’t quite manage — they decided to unleash the mouse on the public, too. It pains me to say that, by the time Xerox came to launch their revolutionary Star in 1981, its mouse featured just two buttons.
Things get worse. Inspired by the Star, Apple released the Apple Lisa in 1983 and then the Apple Macintosh in 1984, both of which also shipped with a mouse. But after experimenting with between one and four buttons — just imagine Douglas Engelbart’s happy little face at the prospect of four — Apple plumped for an unconventional single selector. They claimed it made the user experience more straightforward. I claim it was a stupid idea that forced users to rely on keyboard shortcuts. In fact, I have proof, because Apple mice can now be tweaked to function as four-button devices — an acknowledgement that they got it wrong in the ’80s. Sorry, fanboys.
But all hope was not lost! There were noble crusaders keen to fight the three-button cause, and they came in the shape of PC manufacturers. From their inception, Windows and Linux were designed to work well with three button mice. Hell, Linux was — and still is — better if you use one, as the central button can be used to paste text in terminal windows. Imagine the power; the lack of keyboard commands!
Oil and gas might be running out, but renewable power sucks so much it accounts for less than 10 per cent of all the energy we use. The answer? Recreate the sun using nuclear fusion, in a sleepy corner of the UK.

No, really. Over the past few decades scientists and engineers have been scratching their heads over how to solve our energy problems using the process that powers stars. In fact, Europe’s biggest nuclear fusion device, the Joint European Torus (JET), is leading the way, and it’s hidden away in the tranquil Oxfordshire countryside. “In effect we’re making a miniature version of the sun in the laboratory,” explains Nick Holloway from JET. But how the hell do they do it, and when will it be powering our laptops?
Alleviating Any Con-Fusion
To answer that, first a crash-course in nuclear physics using my love life as a metaphor. Much like many of my romantic encounters, fusion requires bringing together two objects that tend to repel each other. By joining two hydrogen nuclei, it’s possible to create a new helium nucleus, and simultaneously release serious amounts of energy. For some perspective, just 25g of hydrogen isotopes — the same weight as two iPod Shuffles — could produce enough electricity to last an average European a lifetime. Hear that, solar power?
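That 25 g claim survives a back-of-envelope check. The physical constants below are standard for deuterium-tritium fusion; the consumption figure (roughly 6,000 kWh of electricity a year over an 80-year life) is my assumption, not a number from JET:

```python
# Rough sanity check of the "25 g for a lifetime" claim above.
MEV_TO_J = 1.602e-13           # joules per MeV
U_TO_KG = 1.6605e-27           # kilograms per atomic mass unit

energy_per_reaction = 17.6 * MEV_TO_J  # D-T fusion releases ~17.6 MeV
fuel_per_reaction = 5 * U_TO_KG        # one deuteron (2 u) + one triton (3 u)

joules_per_kg = energy_per_reaction / fuel_per_reaction
kwh_from_25g = 0.025 * joules_per_kg / 3.6e6   # 3.6 MJ per kWh

# Assumed lifetime electricity need: ~6,000 kWh/year for 80 years.
lifetime_need_kwh = 6000 * 80

# Even at a pessimistic 40% heat-to-electricity efficiency, 25 g covers it.
print(kwh_from_25g * 0.4 > lifetime_need_kwh)  # True
```

The fuel yields around two million kWh of raw heat, several times a lifetime's electricity even after generous conversion losses.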
The tricky bit, though, is getting those damned nuclei close enough to fuse, because they both have a positive charge which means they repel each other. The solution isn’t elegant, but it does work: throw enough heat at the hydrogen, and it becomes plasma — a hot soup of nuclei and electrons, with enough energy to overcome the repulsion. A bit like the lubricating role alcohol used to play in my romantic encounters. But, unlike my love life, the reaction needs to reach a scorching 100 million °C to get going — which, if you’re wondering, is ten times hotter than the sun. So how on earth do they do that in a country where 20 °C is considered tropical?
Ironic, isn't it? Those distorted words that websites have you type to prove you aren't a machine are in fact easy for software to decode, mainly because the words are chosen with little insight into how secure they are.
"Many websites use CAPTCHAs, and there are a lot of designs floating around," explains Elie Bursztein from the Stanford Security Lab in California. These mimic the original puzzles developed by Luis von Ahn and colleagues at Carnegie Mellon University in Pittsburgh, Pennsylvania.
Bursztein and colleagues decided to investigate how the different methods fared across as many sites as possible to work out how to make them more effective.
The team's software, aptly named Decaptcha, works in stages. First it removes lines through letters, then it isolates each of the warped letters. Each character is processed to make it more legible, and software reads the letters and assembles them into the original word.
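The staged design can be sketched in a toy form. This is an illustration of the pipeline structure described above, not Decaptcha itself: the real tool works on pixel data, whereas here the "image" is just a string, with '|', '-' and '*' standing in for the lines drawn through the letters:

```python
# Toy sketch of a staged CAPTCHA-decoding pipeline (not Decaptcha's code).
NOISE = set("|-*")  # stand-ins for the lines drawn through the word

def remove_lines(image: str) -> str:
    """Stage 1: strip the clutter drawn over the word."""
    return "".join(ch for ch in image if ch not in NOISE)

def segment(image: str) -> list:
    """Stage 2: isolate the individual (still warped) letters."""
    return list(image)

def recognise(letters: list) -> list:
    """Stage 3: clean up each character and 'read' it."""
    return [ch.lower() for ch in letters]

def assemble(letters: list) -> str:
    """Stage 4: reassemble the letters into the original word."""
    return "".join(letters)

def decode(image: str) -> str:
    return assemble(recognise(segment(remove_lines(image))))

print(decode("e|B-a*Y"))  # ebay
```

The point of the staged layout is that each step can be attacked, and hardened, independently: a scheme whose lines are easy to strip falls at stage 1 no matter how warped its letters are.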
The team tested their software on 15 sites, including Google, eBay and Wikipedia. Schemes are usually deemed secure if they can be broken less than 0.01 per cent of the time. Bursztein easily cracked many of the CAPTCHAs he studied, breaking the likes of eBay 37 per cent of the time and Wikipedia 25 per cent of the time. The only sites to resist attack were Google and those using Google's more recent iteration, reCAPTCHA. The team presented their work this week at the Conference on Computer and Communications Security.
Oxford engineers may not seem the most likely people to mix with the likes of the film and television industries, but a team of researchers in the Department of Engineering Science is doing just that to develop ways of extracting huge quantities of information from moving images.
‘When I started using Google, about a decade ago, I was blown away by what it could do for the web,’ says Professor Andrew Zisserman. ‘I went away, understood how it worked, and decided I wanted to engineer something like that for vision. That way, I could effortlessly search images or videos for – well – anything, and get the results immediately. Just imagine: you could see someone on TV, click on their face, and instantly find out what else they’ve appeared in,’ he continues. ‘So that’s what I set out to do.’
His dream – to search huge quantities of footage and instantly find specific clips containing particular objects or people – sounds ambitious even now. But unperturbed by the magnitude of the problem or its possible impact for the film and television industry, Zisserman’s team of researchers has been working on the task by advancing a field known as computer vision: the science of making machines that can ‘see’ by recognising patterns and shapes.
The whole process starts with training computers to recognise specific objects amongst millions of images. ‘Basically, you measure visual features in the images,’ explains Zisserman. Based on those features – which might be sharp edges, shapes or textures – software can be taught to pick out images which illustrate an object, irrespective of the viewpoint, lighting or even partial obstruction. Then, when the software is shown a new set of images, it scores and ranks them depending on the presence of the key features, much like a Google search.
The problem is that searching video in this way places huge demands on computing resources – a problem that Zisserman’s team has been trying to overcome. ‘You can represent an object by a jumble of iconic bits, but it turns out that it doesn’t matter where they are. For a motorbike, you might have a wheel, a seat...and just that you have them somewhere in an image is enough to recognise an object,’ explains Zisserman. He and his student Josef Sivic dubbed this concept ‘visual words’ and it lies at the heart of making searches much more efficient. So efficient, in fact, that even Google now uses the technology in its image search system Google Goggles.
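The 'jumble of iconic bits' idea can be sketched in a few lines: treat each image as a bag of quantised features, ignore where they sit, and rank frames by similarity to the query, much as a text search engine ranks documents. The word labels below are illustrative stand-ins, not real descriptors:

```python
from collections import Counter
from math import sqrt

def bag(words):
    """An image as a histogram of its 'visual words' - position is ignored."""
    return Counter(words)

def similarity(a, b):
    """Cosine similarity between two bags of visual words."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = bag(["wheel", "wheel", "seat", "handlebar"])  # a motorbike query
frame1 = bag(["seat", "wheel", "wheel", "handlebar", "sky"])
frame2 = bag(["face", "window", "sky"])

# Frames are ranked by similarity to the query, like a text search.
print(similarity(query, frame1) > similarity(query, frame2))  # True
```

Throwing away position is what makes the search cheap: comparing two histograms is far faster than matching features geometrically across every frame.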
The team has used the technique to analyse a common gripe of Hollywood movie makers: continuity errors. These lapses in consistency, where two shots of the same scene don’t quite match, pop up all too often. So Zisserman, together with Dr Lynsey Pickup, has been playing what is effectively a giant game of spot-the-difference: developing software that automates the process of spotting the mistakes. By scanning frames of a movie that should theoretically contain objects in the same physical locations, the team can detect subtle differences – a job previously left for over-enthusiastic film fans.
While that is proof that finding objects within footage is possible, video search also needs to be able to identify humans – an altogether tougher task. Changing facial expression, differing hair styles and constant movement make actors extremely difficult for computers to identify. By using cutting-edge facial recognition, though, Dr Mark Everingham, working in Zisserman’s team, can identify the eyes, nose and mouth, using distinctive features around these areas to reliably spot faces time and again. Indeed, by following the detected face between frames, it is possible to track actors as they move around a set, and even automatically assign the correct character name to each face through space and time.
Unsurprisingly, this is making a big impression commercially as it allows video content to be labelled and searched automatically. ‘At VideoSurf, we run a scaleable video search engine. We do for video what Google does for text,’ explains Eitan Sharon, Chief Technology Officer and co-founder of VideoSurf. ‘We’ve developed a smartphone app that lets you point your phone at any screen – even live TV – and in a few seconds it can tell you what the video is, who’s in it, even recommend other videos you might like.’ All of this takes its cue from University of Oxford research. ‘Andrew Zisserman has really left his mark on computer vision over the last decade,’ he adds. ‘He’s changed the way we think about and tackle video, and shaped what we do.’
In the 1970s, sociologist David Bloor suggested - to reactions of perhaps equal horror and delight - that science is an inherently social activity. He believed we should recognise that, beneath the stereotypical white coats, scientists really are normal people too.
Thanks in part to insightful biographies of scientists, the world of research has been shown to be as full of competition, gossip and revelation as any other human endeavour. In the same spirit, Einstein on the Road provides a window into the great man's life.
More than any other scientist, Einstein had his life scrutinised, but this book, based on his detailed travel diaries, affords more personal glimpses. So we learn about his fondness for forging friendships while playing Mozart, his absolute disdain for press conferences and ignorant journalists, and his take on his superstardom, which once saw him saunter down a Hollywood red carpet with Charlie Chaplin.