Should OC Transpo Spend Money on Real-time Arrival Signs?


About two weeks ago, OC Transpo GM John Manconi sent a memo to council about OC Transpo’s latest investment in its modernization, announcing that the service was installing 74 digital ‘transit information signs’ in many of the Transitway and O-Train stations. This is another step forward, so the argument would go, in the service’s progression towards preparing for the arrival of the Confederation line at some point in (likely late) 2018.

On the surface, this is a sensible decision—more info, happier riders, etc. The idea underlying it is also fairly common-sense: it is better not only for you, but for the service as a whole, if you know how long it will be until your bus arrives.

But this idea gets thrown around a lot in a way that tends to lack nuance or scrutiny. On the one hand, this makes sense: digital signage where no signage previously existed is a fairly controversy-free investment, especially in the shadow of a multi-billion dollar project like the Confederation line. Often, I’ve heard this idea from riders— “I don’t mind waiting for the bus as long as I know how long I will have to wait,” or something to that effect.

But on the other hand, investment in this area raises at least a few significant questions—efficacy, impact, and cost among them—all coming back to a core problem: is it worth it to buy these signs?

This is a problem that researchers from American universities — Yanbo Ge, Parastoo Jabbari and Don MacKenzie from the University of Washington and Jiarui Tao from the University of California, Irvine — tackled in a paper titled ‘Effects of a Public Real-Time Multi-Modal Transportation Information Display on Travel Behavior and Attitudes,’ published this month in the Journal of Public Transportation. The paper, while dense and filled with numbers, is nevertheless instructive when dealing with some of the key questions raised by OC Transpo’s investment in digital signage.


The 2017 transportation budget included $2 million for real-time information signs, with half of that money kicked in by the feds.

From the 2017 Transportation Capital Budget; the first column reflects total cost in 000’s, the second federal contribution and the third contribution from OC Transpo.

It’s here that it is worth noting what these costs are in a practical sense. This figure does not reflect the cost of generating the arrival-time data itself; it reflects only the upfront cost of hardware to display information that already exists. After all, OC Transpo arrival time data is available in a number of places—Transit App (aptly, or poorly named, depending) and a number of other real-time transit apps use OC Transpo’s API to provide the same information that OC Transpo is paying to put up on the screen.
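To make concrete what those third-party apps are doing, here is a minimal sketch of turning a real-time arrival feed into the “minutes until the next bus” figure a rider actually sees. The field names and payload shape are hypothetical stand-ins, not OC Transpo’s actual API schema; they just mirror the general structure (a stop, a list of trips, predicted arrival times) that real-time transit feeds expose.

```python
# Sketch: converting a real-time arrival feed into "minutes until arrival".
# The payload shape and field names below are hypothetical, not OC Transpo's
# actual API schema.
import json
from datetime import datetime, timezone

def minutes_until_arrivals(feed_json: str, now: datetime) -> dict:
    """Map each route at a stop to minutes remaining before its next arrival."""
    feed = json.loads(feed_json)
    result = {}
    for trip in feed["trips"]:
        arrival = datetime.fromisoformat(trip["arrival_time"])
        minutes = max(0, round((arrival - now).total_seconds() / 60))
        route = trip["route"]
        # Keep only the soonest upcoming arrival for each route.
        if route not in result or minutes < result[route]:
            result[route] = minutes
    return result

# Example payload, as a sign or phone app might receive it.
sample = json.dumps({
    "stop": "3017",
    "trips": [
        {"route": "95", "arrival_time": "2018-03-01T08:12:00+00:00"},
        {"route": "97", "arrival_time": "2018-03-01T08:05:00+00:00"},
        {"route": "95", "arrival_time": "2018-03-01T08:27:00+00:00"},
    ],
})
now = datetime(2018, 3, 1, 8, 0, tzinfo=timezone.utc)
print(minutes_until_arrivals(sample, now))  # {'95': 12, '97': 5}
```

The point of the sketch is that the expensive part—producing and distributing the predictions—is already done; everything downstream of the feed is cheap to build, which is exactly why so many apps already do it.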

This is not an insignificant point. Setting aside the question of who does and does not have the means to use a data-enabled smartphone, the point still stands: a lot of people have one, especially in a city where the average income is up around $90k a year and many people are using phones paid for by their employer. OC Transpo, then, is paying a great deal to communicate information that it has effectively already been communicating at a low cost by licensing its data out to developers.

How do you measure a transit service?

The second question worth asking is a conceptual one: what’s the best way to measure the quality of your transit service?

Customer satisfaction comes up a lot, in part because it is seen as an effective proxy for why ridership numbers are going up or down. If you have generally happy riders, it makes sense when your ridership goes up. If you have unhappy riders, declines in ridership (such as those OC Transpo has been experiencing intermittently over the past few years) start to make sense as well.

(Anecdotally, customer satisfaction numbers can be used to downplay declining ridership, a la “yes, we’re losing riders but our customer satisfaction remains high.” It’s easy to use that to say that the decline in ridership must be due to other factors. Logically, though, I’m less convinced; of course satisfaction remains high, since all the unhappy people might’ve just stopped taking the bus.)

There’s another way to look at this, though, which I think is worth reflecting on: whether your riders are happy or unhappy is a bullshit metric that misses the point of public transit—to get the most people where they need to go—nearly entirely.

Yes, this is an overly-reductive argument in almost every way, but the point, I think, stands. If your primary measure of how well you are doing from a transit perspective is how happy people are, you are likely missing some of the more important parts of transit.

(I will cede here that customer satisfaction is, though, a flexible stand-in for all sorts of other things which are harder to measure, such as efficiency of the service, suitability to riders’ needs, and how much it influences their transit decisions. But still.)

When you drill down, though, it’s a useful metric for transit services to consider because it, largely, is what is in their control. The big system-level decisions end up being made at different levels of government; you need look only to the ever-ridiculous Scarborough Subway Extension debate to see how transit planning is almost always the domain of regional politics. Transit agencies can, however, make investments within their existing systems that improve their customer satisfaction scores—things like new comfy seats, heated bus shelters, and real-time info displays.

This is all to say that it’s worth thinking not only about what an agency is saying when they cite their own satisfaction numbers—and to look closely at how and why they are deploying them—but to think further with regards to what role those numbers should play in the transit discussion in the first place.

My point is, in part, that the only question that matters is the one that Ge et al. were looking at: does investment in real-time information signage actually improve the service by making more people ride?

I’ll try to spare you the number crunching

The study sets out four hypotheses which are incredibly vanilla to the average reader:

That people will think a service with real-time signs has enough information.
That people will feel more familiar with a service with real-time signs.
That people will think the bus is a more efficient option compared to the car.
That people will prefer to take transit if signs are installed.
The first three ultimately all point to the fourth, which is to say that the authors set out to prove one core hypothesis: that the signs made more people take transit.

Skipping through the methodological and data analysis stuff, we can get to the results of the study.

The first finding is a big one, but one that is fairly obvious when you think about transit use: most people never used the screens. Roughly two-thirds of people reported never using the screens as part of their commute. (It’s worth noting that this two-thirds figure may overstate non-use; the results of the study were self-reported, and someone who glances at the screen every now and again to check the time may not even be aware of doing so.)

Only 7 per cent of people, in fact, reported using the signs.

Perhaps more significant, though, is this point: it didn’t actually get more people to use the service.

(One analysis of the data suggested that the signs actually made people more likely to drive alone, which the authors dismissed, saying “this may be a spurious correlation.”)

Only 3.7 per cent more people took transit in the treatment group, compared to 4.1 per cent more in the control group. Uhhh.

In the end, the study found:

“little evidence was found that the installation of a real-time multi-modal display screen in an office building lobby changed the building occupant travel choices, satisfaction, familiarity, or attitudes toward alternatives to private car travel over the course of a six-month study period.”

The reasons for the poor impact ultimately came down to factors about the riders themselves: most people who took transit already knew when to expect the next bus; most people already used their smartphones for real-time info; most people didn’t cluster around, or want to talk to, the signs; most people just weren’t that enthusiastic about bus times.

Though the authors were able to extrapolate a set of ‘best practices’ from the data—aim them at places where gaps in Wi-Fi and cell coverage exist, or where fewer buses are running; install them conveniently; market them to people as a sexy improvement—the ultimate finding was that “transportation researchers are urged to conduct more careful evaluations of interventions, using appropriate experimental or quasi-experimental research designs.”


There are some important contextual matters worth remembering here, when thinking about whether or not these kinds of investments are worth it for the City of Ottawa. The first is that this study, though only one of a handful on the subject, was done in a city where real-time data already existed. That matters, because the same is true of Ottawa. There is, as always, an app for that.

This study’s conclusion—that, essentially, real-time info screens don’t make much of a difference—does not bode especially well for the city’s investment. There are reasons, of course, that it might make sense to put the signs in, that don’t come with data attached to them: they make people feel reassured, especially late at night; they are an effective way to communicate disruptions; they are, as every Toronto subway rider knows, a lovely way to display the news, etc. etc. etc.

But what they aren’t, really, is a low-cost way to actually increase participation in the service. When people say things like “I’d take the bus more often if I just knew when it would come,” those people (according to this study anyways) are either outliers, or are bullshitting you (and very likely themselves) to justify why they are taking the car.

The best answer to ‘why put them in?’ is the one from the previous paragraph: you do it because they are useful for things other than increasing ridership. But if you find that argument unconvincing—as many people might—you are likely to see $2 million on digital signage as a $2-million vanity project, especially since that information is already available to riders.

And the larger point bears some thought as well: OC Transpo is in the middle of trying to turn itself into a more modern transit service with the launch of the LRT. Part of that process is a visionary one: what kind of service do you want to look and feel like? Investment in say, digital signage, seems common-sense because to many people it is. What is less common-sense is to forgo that in favour of spending $2 million elsewhere. But doing so might be the more visionary way forward.