Wednesday, October 28, 2015

Five reasons why they should test autonomous cars in Ontario

Did I say five? I meant six…

Paul Leroux
It was late and I needed to get home. So I shut down my laptop, bundled myself in a warm jacket, and headed out to the QNX parking lot. A heavy snow had started to fall, making the roads slippery — but was I worried? Not really. In Ottawa, snow is a fact of life. You learn to live with it, and you learn to drive in it. So I cleared off the car windows, hopped in, and drove off.

Alas, my lack of concern was short-lived. The farther I drove, the faster and thicker the snow fell. And then, it really started to come down. Pretty soon, all I could see out my windshield was a scene that looked like this, but with even less detail:



That’s right: a pure, unadulterated whiteout. Was I worried? Nope. But only because I was in a state of absolute terror. Fortunately, I could see the faintest wisp of tire tracks immediately in front of my car, so I followed them, praying that they didn’t lead into a ditch, or worse. (Spoiler alert: I made it home safe and sound.)

Of course, it doesn’t snow every day in Ottawa — or anywhere else in Ontario, for that matter. That said, we can get blanketed with the white stuff any time from October until April. And when we do, the snow can play havoc with highways, railways, airports, and even roofs.

Roofs, you say? One morning, a few years ago, I heard a (very) loud noise coming from the roof of QNX headquarters. When I looked out, this is what I saw — someone cleaning off the roof with a snow blower! So much snow had fallen that the integrity of the roof was being threatened:



When snow like this falls on the road, it can tax the abilities of even the best driver. But what happens when the driver isn’t a person, but the car itself? Good question. Snow and blowing snow can mask lane markers, cover street signs, and block light detection and ranging (lidar) sensors, making it difficult for an autonomous vehicle to determine where it should go and what it should do. Snow can even trick the vehicle into “seeing” phantom objects.
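How might a self-driving car cope with those phantoms? One widely discussed idea is temporal filtering: real obstacles show up in roughly the same spot frame after frame, while snowflakes produce fleeting, scattered returns. Here’s a minimal sketch of that idea in C; the structure, names, and thresholds are illustrative assumptions on my part, not anything from a particular production stack:

```c
/* A minimal sketch (not production code) of one common mitigation:
 * require a lidar detection to persist across several consecutive
 * frames before treating it as a real obstacle. Falling snow tends
 * to produce short-lived, spatially unstable returns; real objects
 * reappear in roughly the same place frame after frame. */

#include <math.h>
#include <stdbool.h>

#define CONFIRM_FRAMES 5     /* frames a track must survive (made-up) */
#define GATE_METERS    0.5f  /* max movement between frames (made-up) */

typedef struct {
    float x, y;     /* position of the detection, in meters */
    int   hits;     /* consecutive frames the track was seen */
} track_t;

/* Update a track with this frame's nearest detection.
 * Returns true once the track is confirmed as a real object. */
bool update_track(track_t *t, float det_x, float det_y)
{
    float dx = det_x - t->x;
    float dy = det_y - t->y;

    if (sqrtf(dx * dx + dy * dy) <= GATE_METERS) {
        /* Detection is consistent with the track: reinforce it. */
        t->x = det_x;
        t->y = det_y;
        t->hits++;
    } else {
        /* Inconsistent return, likely snow clutter: start over. */
        t->x = det_x;
        t->y = det_y;
        t->hits = 1;
    }
    return t->hits >= CONFIRM_FRAMES;
}
```

A real perception stack would maintain many such tracks at once and weigh lidar against radar, which sees through snow far better. But the basic instinct is the same: don’t brake for anything you’ve only seen once.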

And it’s not just snow. Off the top of my head, I can think of four other phenomena common to Ontario roads that pose a challenge to human and robot drivers alike: black ice, freezing rain, extreme temperatures, and moose. I am only half joking about the last item: autonomous vehicles must respond appropriately to local fauna, not least when the animal in question weighs half a ton.

To put it simply, Ontario would be a perfect test bed for advancing the state of autonomous technologies. So imagine my delight when I learned that the Ontario government has decided to do something about it.

Starting January 1, 2016, Ontario will become the first Canadian province to allow road testing of automated vehicles and related technology. The provincial government is also pledging half a million dollars to the Ontario Centres of Excellence Connected Vehicle/Automated Vehicle Program, in addition to the $2.45 million already provided.

The government has also installed some virtual guard rails. For instance, it insists that a trained driver stay behind the wheel at all times. The driver must monitor the operation of the autonomous vehicle and take over control whenever necessary.

Testing autonomous vehicles in Ontario simply makes sense, but not only because of the weather. The province also has a lot of automotive know-how. Chrysler, Ford, General Motors, Honda, and Toyota all have plants here, as do 350 parts suppliers. Moreover, the province has almost 100 companies and institutions involved in connected vehicle and automated vehicle technologies — including, of course, QNX Software Systems and its parent company, BlackBerry.

So next time you’re in Ontario, take a peek at the driver in the car next to you. But don’t be surprised if he or she isn’t holding the steering wheel.


A version of this post originally appeared on the Connected Car Expo blog.

Tuesday, October 20, 2015

ADAS: The ecosystem's next frontier

At DevCon last week, Renesas showcased their ADAS concept vehicle. It was just what you would expect from an advanced demonstration, combining radar, lidar, cameras, V2X, algorithms, multiple displays and a huge amount of software to make it all work. They were talking about sensor fusion and complete surround view and, well, you get the picture.

What isn’t readily obvious as you experience the demo is the investment made, and the collaboration required, by Renesas and their ADAS ecosystem.

Partnership is a seldom-recognized cornerstone of what will ultimately become true sensor fusion. It seems, to me at least, unlikely that anyone will be able to develop the entire system on their own. As processors become more and more powerful, the discrete ECUs will start to collapse into less distributed architectures, with much more functionality on each chip. The amount of data coming into and being transmitted by the vehicle will continue to grow, and the need to secure it will grow alongside it. V2X, high-definition map data, algorithms, specialized silicon, vision acceleration, and more will become the norm in every vehicle.
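To make “sensor fusion” a little less abstract, here is a toy sketch of its simplest form: combining two noisy range estimates, weighted by how much we trust each sensor. This is my own illustration, not Renesas code; real systems run full Kalman or particle filters over many states, and the sensor variances below are invented for the example:

```c
/* Inverse-variance weighting: the optimal linear fusion of two
 * independent estimates of the same quantity. The numbers are
 * placeholders for illustration only. */

#include <stdio.h>

typedef struct {
    float value;     /* measured distance to target, meters */
    float variance;  /* sensor noise; smaller = more trusted */
} estimate_t;

estimate_t fuse(estimate_t a, estimate_t b)
{
    float wa = 1.0f / a.variance;
    float wb = 1.0f / b.variance;
    estimate_t out;
    out.value    = (wa * a.value + wb * b.value) / (wa + wb);
    out.variance = 1.0f / (wa + wb);  /* fused estimate is tighter */
    return out;
}

int main(void)
{
    estimate_t radar  = { 42.0f, 0.25f };  /* radar: good range accuracy */
    estimate_t camera = { 40.5f, 4.00f };  /* camera: noisier range */
    estimate_t fused  = fuse(radar, camera);

    printf("fused range: %.2f m (variance %.2f)\n",
           fused.value, fused.variance);
    return 0;
}
```

Even this two-sensor toy hints at why partnership matters: someone has to supply the radar, someone the camera pipeline, someone the silicon it all runs on, and someone the safety-certified foundation underneath.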

How about QNX Software Systems? Are we going to do all of this on our own? I doubt it. Instead, we will continue to build on the same strategy that has helped take us to a leadership position in the infotainment market: collaborating with best-of-breed companies to deliver a solution on a safety-certified foundation that customers can leverage to differentiate their products.

The view from above at Renesas DevCon.

Wednesday, October 14, 2015

What does a decades-old thought experiment have to do with self-driving cars?

Paul Leroux
Last week, I discussed, ever so briefly, some ethical issues raised by autonomous vehicles — including the argument that introducing them too slowly could be considered unethical!

My post included a video link to the trolley problem, a thought experiment that has long served as a tool for exploring how people make ethical decisions. In its original form, the trolley problem is quite simple: You see a trolley racing down a track on which five people are tied up. Next to you is a lever that can divert the trolley to an empty track. But before you can pull the lever, you notice that someone is, in fact, tied up on the second track. Do you do nothing and let all five people die, or do you pull the lever and kill the one person instead?

The trolley problem has drawn criticism for failing to represent real-world problems, for being too artificial. But if you ask Patrick Lin, a Cal Poly professor who has delivered talks to Google and Tesla on the ethics of self-driving cars, it can serve as a helpful teaching tool for automotive engineers — especially if its underlying concept is framed in automotive terms.

Here is how he presents it:

“You’re driving an autonomous car in manual mode—you’re inattentive and suddenly are heading towards five people at a farmer’s market. Your car senses this incoming collision, and has to decide how to react. If the only option is to jerk to the right, and hit one person instead of remaining on its course towards the five, what should it do?”

Of course, autonomous cars, with their better-than-human driving habits (people tailgate; robot cars don’t), should help prevent such difficult situations from happening in the first place. In the meantime, thinking carefully through this and other scenarios is just one more step on the road to building fully autonomous, and eventually driverless, cars.
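For what it’s worth, the mechanics of such a decision are almost trivially easy to code; the hard part is agreeing on the numbers. Here is a deliberately naive sketch, with every name and cost value a placeholder of my own invention:

```c
/* The structure of the decision is simple: rank the available
 * maneuvers by expected harm and pick the minimum. The ethically
 * loaded part is not this loop -- it is how the harm values get
 * assigned, and by whom. */

#include <stddef.h>
#include <stdio.h>

typedef struct {
    const char *name;
    float expected_harm;  /* placeholder: the contested quantity */
} maneuver_t;

/* Pick the maneuver with the lowest expected harm. */
const maneuver_t *choose(const maneuver_t *options, size_t n)
{
    const maneuver_t *best = &options[0];
    for (size_t i = 1; i < n; i++) {
        if (options[i].expected_harm < best->expected_harm)
            best = &options[i];
    }
    return best;
}

int main(void)
{
    /* Made-up costs for Lin's farmer's-market scenario. */
    maneuver_t options[] = {
        { "stay on course", 5.0f },  /* five people in the car's path */
        { "jerk right",     1.0f },  /* one person in the new path */
    };
    const maneuver_t *m = choose(options, sizeof options / sizeof options[0]);
    printf("chosen maneuver: %s\n", m->name);
    return 0;
}
```

Ten lines of logic, in other words, and a lifetime of argument about the two constants.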

Read more about the trolley problem and its application to autonomous cars in a recent article in The Atlantic.

Speaking of robot cars, if you missed last week's webinar on the role of software when transitioning from ADAS to autonomous driving, don't sweat it. It's now available on demand at Techonline.

Wednesday, October 7, 2015

The ethics of robot cars

“By midcentury, the penetration of autonomous vehicles... could ultimately cause vehicle crashes in the U.S. to fall from second to ninth place in terms of their lethality ranking.” — McKinsey

Paul Leroux
If you saw a discarded two-by-four on the sidewalk, with rusty nails sticking out of it, what would you do? Chances are, you would move it to a safe spot. You might even bring it home, pull the nails out, and dispose of it properly. In any case, you would feel obliged to do something that reduces the probability of someone getting hurt.

Driver error is like a long sharp nail sticking out of that two-by-four. It is, in fact, the largest single contributor to road accidents. Which raises the question: If the auto industry had the technology, skills, and resources to build vehicles that could eliminate accidents caused by human error, would it not have a moral obligation to do so? I am speaking, of course, of self-driving cars.

Now, a philosopher I am not. I am ready to accept that my line of thinking on this matter has more holes than Swiss cheese. But if so, I’m not the only one with Emmenthal for brain matter. I am, in fact, in good company.

Take, for example, Bryant Walker Smith, a professor in the schools of law and engineering at the University of South Carolina. In an article in MIT Technology Review, he argues that, given the number of accidents that involve human error, introducing self-driving technology too slowly could be considered unethical. (Mind you, he also underlines the importance of accepting ethical tradeoffs. We already accept that airbags may kill a few people while saving many; we may have to accept that the same principle will hold true for autonomous vehicles.)

Then there’s Roger Lanctot of Strategy Analytics. He argues that government agencies and the auto industry need to move much more aggressively on active-safety features like automated lane keeping and automated collision avoidance. He reasons that, because the technology is readily available — and can save lives — we should be using it.

Mind you, the devil is in the proverbial details. In the case of autonomous vehicles, the ethics of “doing the right thing” is only the first step. Once you decide to build autonomous capabilities into a vehicle, you often have to make ethics-based decisions as to how the vehicle will behave.

For instance, what if an autonomous car could avoid a child running across the street, but only at the risk of driving itself, and its passengers, into a brick wall? Whom should the car be programmed to save? The child or the passengers? And what about a situation where the vehicle must hit either of two vehicles — should it hit the vehicle with the better crash rating? If so, wouldn’t that penalize people for buying safer cars? This scenario may sound far-fetched, but vehicle-to-vehicle (V2V) technology could eventually make it possible.

The “trolley problem” captures the dilemma nicely:



Being aware of such dilemmas gives me more respect for the kinds of decisions automakers will have to make as they build a self-driving future. But you know what? All this talk of ethics brings something else to mind. I work for a company whose software has, for decades, been used in medical devices that help save lives. Knowing that we do good in the world is a daily inspiration — and has been for the last 25 years of my life. And now, with products like the QNX OS for Safety, we are starting to help automotive companies build ADAS systems that can help mitigate driver error and, ultimately, reduce accidents. So I’m doubly proud.

More to the point, I believe this same sense of pride, of helping to make the road a safer place, will be a powerful motivator for the thousands of engineers and development teams dedicated to paving the road from ADAS to autonomous. It’s just one more reason why autonomous cars aren’t a question of if, but only of when.