At the tender age of 27, John Burn-Murdoch is one of the leading young lights of data journalism in the UK. His brief career to date has already taken in The Guardian, The Telegraph, Which? Magazine, and The Financial Times, where he’s been working in a coveted data journalist role since 2013. Raised in Yorkshire, Burn-Murdoch also channels his passion for spreadsheets and statistics as a visiting lecturer at London’s City University, sculpting the next generation of data enthusiasts. On a crisp December afternoon at Borough Market, he talked to Peter Yeung about the issue of objectivity in data, the risk of cronyism in the data journalism community, and how the FT are unique.
How does the FT differ from other publications?
I would say our newsroom is more numerate than most. That’s not to say everyone has a maths degree, but we have some people who have previously worked as analysts in banks, for example. That means a lot of the day-to-day data journalism, the quick-fire stuff, is handled by the reporters without them even thinking about it. You could say that a lot of the numerate data journalism that comes out of the FT won’t even cross our desk – it just happens. That means that those of us on the data, stats and interactive teams are afforded a bit more time to dig deeper into things. We might have more of a week-scale publication schedule, with some quicker articles in between, whereas places like The Guardian publish two or three pieces on the data blog on any given day. There are different ways of doing things, but most of it for us is having the capacity to take a little bit more time.
How did you get into data journalism?
I never really thought about journalism until the third year of my undergraduate geography degree. I was a bit disillusioned with the course, and needed to do something extra-curricular. I started working on the student paper and really enjoyed it. I didn’t even know data journalism was a thing then. My first taste of professional journalism was doing some work experience at The Guardian during the London riots, because they were suddenly looking for lots of people to come in and do some research. That was inherently data-related work. Then I started a Master’s in Data Science [the first in the UK] at Dundee University, but I only did the first term of that because it was impossible to fit in, since I was working full time. It was distance learning, but there was also a four-week period of intense sessions in Dundee. I absolutely loved it, but it proved too much of a struggle with time.
Why are you a lecturer at City University?
I’d say for two reasons. Number one, as trite as it sounds: giving something back. When I was studying at City, James Ball was lecturing at the same time as being a data journalist at The Guardian. And with something like data journalism, which is quite a rapidly evolving field, often it’s better to have someone who’s actively involved in the field teaching it. City got in touch – there weren’t many data journalists in London, to be honest, and I was obviously one of the ones they knew – and it sort of went from there. The other bonus for me is that it keeps my own skill set ticking over. I kind of feel like everyone wins.
The data journalism community is quite tight-knit. What are the advantages and drawbacks?
I think it’s mainly an advantage. There are obvious drawbacks in terms of cronyism: when people are interviewed for jobs there’s always a temptation to hire the people who are familiar. But I think there are big advantages in terms of collaboration. Digital journalism as a whole, and especially anything where data analysis and web development are involved, seems to be inherently very collaborative. The whole concept of open source is about riffing on other people’s work, taking something someone else has done and adding to it. That collaborative spirit is a massive help; without it, we wouldn’t move along as quickly. But as a counterpoint to the cronyism, because of the skill sets now required, we are seeing a lot of people come in from outside of that bubble. If anything, data journalism is less cronyistic than journalism as a whole.
With its history in computer-assisted reporting, data journalism has tended to be focussed on investigations. But should there be more quick, reactive data journalism?
There are obviously lots of cases where you can do good quality data journalism very quickly. Alberto Nardelli is one of the best at using a quantitative mindset and skill set, but with the breaking news agenda. But, having said that, I think inherently the best data journalism, if you judge it in terms of the level of analysis, and the ability to find a news line that other people don’t have, takes time. Quick-reaction pieces can only be done if you spend a hell of a lot of time familiarising yourself with your beat, and building up your data sources. It’s definitely possible, but to be knocking out really top level data journalism multiple times a week is really difficult.
Is data journalism more objective than other forms of journalism?
Like those who have answered it before me, and probably much better, I would say it’s not necessarily more objective. It certainly can be. Data journalism can be more objective than vox pop journalism, purely because of things like sample sizes. When you’re trying to extrapolate and talk about national or global trends, you can be more objective. But there are issues with data quality, and issues because your starting point is always a question you want to answer. There are few journalists of any type who start from a completely naive position. Some people might have an agenda even though it’s a completely unconscious one. It’s very difficult to go in completely blind.
How do you establish the line between pushing an agenda and finding a story?
The obvious one is talking to people in the know, especially those who you think might disagree with you. If you set out to ask a question of a data set and you go to an expert who has already written extensively from the same angle as you, it won’t help much. Even before that, you can do your own tests by interpreting the data in different ways, making sure there aren’t any counter-explanations in there.
Who is doing the most interesting data journalism right now?
That’s a really tough question. Obviously, the FT. No, no: all sorts of places. There are some obvious ones such as the New York Times, which does pure data journalism and visual journalism, constantly raising the bar and doing fantastic stuff. Berliner Morgenpost won the Information is Beautiful Award for best data visualisation team, and they do some amazing stuff. ProPublica, with their data-driven investigations, do incredible work. Bloomberg have been doing some amazing visual work recently, and the same goes for the Wall Street Journal. The good thing is that I’m having to think a lot harder than I would have, say, five years ago, when I could have reeled off two or three and there wasn’t anyone else.
If you could lead your own data team, what would it be like?
I’ve never actually thought about that; it probably speaks to a lack of ambition. Personally, I like the idea of a team of specialist-generalists – people who are skilled both technically and in terms of subject matter, with interests across all areas. Kind of like it is at the FT now: one week we are working on climate change, the next it’s Boko Haram terror attacks, then maybe it’s something about the global oil trade, and then something on tennis. You want everyone to have a base level of technical experience, but it’s always nice when people are pulling in their own directions to a certain extent. For me, the team would spend two-thirds of its time on ideas generated internally, and the other third doing amazing collaborations with other parts of the newsroom. Very roughly speaking, that’s what I’d look for.
This interview has been edited for brevity and clarity.