Tuesday, August 28, 2012

New SAP video: the connected car means business

I always enjoy a good read on the connected car — a topic that is very near and dear to me. One article that particularly excites me is a recent contributed piece for Forbes, “Can Connected Cars Help Change The World?” by Judith Magyar, executive office, product GTM and mobile division at SAP.

Why the excitement? Well, for one, the attention-grabbing headline is backed up by an insightful analysis of the promise of the connected car — even touching on the notion of the connected car as a means of environmental change and the four factors that are essential to this vision becoming reality. What’s more, Magyar uses QNX Software Systems’ very own concept car as an example of how the connected car and its real-world use cases are coming to fruition!

Want to see QNX and SAP’s collaboration in action? Check out this video, which shows how all of this technology would come together in one (very beautiful) vehicle:




Thursday, August 23, 2012

The hidden cost of ethanol

Because of the drought plaguing the Midwest, about 2.2 billion fewer bushels of corn will be produced this year. That shortfall means a huge hike in corn prices, from $6/bushel in May to a record high of $8.50/bushel today, an increase of more than 40%. That fact got me thinking about ethanol.

Oil independence sounds like a good thing, right? Grow our own fuel, from a renewable resource, without strip mining the land or polluting the earth. Who wouldn’t want that?



There seems to be a good deal of debate about how ethanol is produced and what impact it actually has. Powerful lobbies sit on both sides—agribusiness on one, petroleum on the other—so it pays to look at where the information is coming from.

The unfortunate reality of current corn production is that it needs a lot of oil to keep it going. Fossil fuels are used for farm machinery, fertilizer, and pesticides. Raising corn uses a terrific amount of fresh water, which is not an unlimited resource. Because of these factors, raising corn for ethanol does not necessarily reduce the carbon footprint of your gas tank—in fact, it may increase it.

Some feedstocks are much better than corn when it comes to carbon footprint: switchgrass, algae, sawdust, and sugar cane. These all either use material that is already waste or make use of much more of the plant. Corn ethanol, as it's made today, uses at most 50% of the kernel—just the starch. The rest of the kernel, along with the stalk, husk, and cob, is cellulosic material that could be used, but current production methods can’t take advantage of it.

Unfortunately, you can’t pick where your ethanol comes from. I want a green tank, but I can’t choose the source of any ethanol I might buy. Because ethanol is primarily made from corn today, for now the balance seems to tilt away from ethanol as a truly green choice. That isn’t to say that all biofuels will always be problematic, and there’s certainly something to be said for supporting further ethanol development and breaking our dependency on oil. But I feel that in the current biofuel environment, voting for ethanol is really just lining the pockets of agribusiness. We’ve embraced the “green” message before understanding the bigger picture of what ethanol production actually entails.

(But if you want to be truly green, your best bet is to be a vegan who bikes everywhere. That’s a little ambitious—even for me. As a compromise, just drive an electric car and charge it up with your windmill.)

Tuesday, August 21, 2012

Autonomous cars by 1976?

By Paul Leroux

When you hear "Firebird," what image comes to mind? Chances are, it looks something like this:



Or this:



But did you know that the Firebird brand dates back to the 1950s? In those days, the Firebird looked like this:



Clearly, this wasn't a production car. Rather, GM designed it to promote a variety of forward-looking technologies, including a rear-view camera, a CRT-based instrument panel, and, yes, autonomous drive.

Speaking of which, here's a video from 1956 that shows how an "electronic control strip" embedded in the road allowed the Firebird II to drive itself. Jump to the :37 mark to catch the action:



My favorite part? The closing comment, "This may well be part of the American scene in 1976." The prediction was on the optimistic side, to say the least. But it does reflect our long-standing fascination with self-driving cars. In fact, it goes beyond that. The Firebird II also embodies a persistent belief that such cars are inherently safer than cars driven by humans.

Here, for example, is an excerpt from the Firebird II brochure, which extols the benefits of putting technology in the driver's seat:

    Not only do you relax and enjoy your journey, but you are as safe as modern science can make you. For, while human beings err in judgment, the electronic brain is completely foolproof.

Does that sound familiar? It does to me. A few weeks ago, I wrote about an article published in 1958 that claimed:

    Driving will one day be foolproof, and accidents unknown, when science finally installs the Electronic Highway of the Future.

Part of me laughs at the sheer naïveté of these statements. But you know what? They aren't all that far from the truth. I'd like to think I'm better than any "electronic brain" at driving safely, but the evidence is starting to suggest otherwise. According to data gathered by the Highway Loss Data Institute, automatic crash-avoidance systems in cars are, in fact, better than humans at responding to a variety of dangerous situations.

So, in some small way, I'm threatened by these statements. After all, who wants to think of themselves as Captain Dunsel? :-)



Wednesday, August 15, 2012

Am I crazy for talking to my car?

Earlier this afternoon, I participated in a connected car panel at SpeechTEK 2012, hosted by our friend Mazin Gilbert from AT&T. The other panelists included Greg Bielby of VoltDelta, Thomas Schalk of Agero, and Hakan Kostepen of Panasonic.

Even though Mazin did a fantastic job, not every panelist had a chance to answer every question. I was itching to answer some, so here are my responses to the questions that I didn't get to answer, or where I feel I could have provided a more complete response.

Have speech technologies matured to the point where they can be used robustly in the car? The general answer from the panel was yes, but I think the real answer is a qualified yes. The technologies exist, but often aren't applied, or may need auto-specific adaptations to handle in-cabin noise and other issues. Natural language recognition was often cited as a driving technology, but a missing piece of the puzzle is hybrid recognition. I don't mean pushing recognition wholesale to the cloud, like Siri does. I mean a true split of the recognition effort, where each part does what it’s best at: put the front half of acoustic processing in the vehicle to clean up the audio and convert the waveform to frequency-domain data, then send that data to the cloud-based server. The cloud server can then parse and interpret the data, and send back the result.

Hybrid speech rec solves three problems at once: better audio signals (the car can clean up audio specific to the in-cabin environment), lower cost (frequency data is far more compact than raw audio, so you pay less for data transfer), and better responsiveness (the server can start working on the utterance while it's still coming in, instead of waiting for the whole thing to finish).
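
To make the split concrete, here is a minimal sketch (not QNX's actual implementation; the endpoint, payload format, and feature extraction are purely hypothetical) of a vehicle-side front end that converts audio frames into frequency-domain data and streams them to a cloud recognizer as they arrive:

    // Vehicle-side front end (Node.js-style sketch). A real system would also apply
    // cabin-specific noise reduction and echo cancellation before this step.
    var https = require('https');

    // Convert one frame of PCM samples into frequency-domain magnitudes using a
    // naive DFT. The output is far smaller than the raw waveform, which is what
    // makes the hybrid split cheaper to transmit.
    function frameToSpectrum(samples) {
      var n = samples.length;
      var magnitudes = [];
      for (var k = 0; k < n / 2; k++) {
        var re = 0, im = 0;
        for (var t = 0; t < n; t++) {
          var angle = -2 * Math.PI * k * t / n;
          re += samples[t] * Math.cos(angle);
          im += samples[t] * Math.sin(angle);
        }
        magnitudes.push(Math.sqrt(re * re + im * im));
      }
      return magnitudes;
    }

    // Stream each frame to the recognizer as soon as it is produced, so the server
    // can start interpreting before the utterance is finished. A production system
    // would keep one persistent connection rather than one request per frame.
    function sendFrame(spectrum) {
      var request = https.request({
        hostname: 'speech.example.com',      // hypothetical cloud recognizer
        path: '/recognize/stream',
        method: 'POST',
        headers: { 'Content-Type': 'application/json' }
      }, function (response) {
        response.on('data', function (chunk) {
          console.log('partial result: ' + chunk);
        });
      });
      request.end(JSON.stringify({ frame: spectrum }));
    }

The particular feature extraction isn't the point; the point is that the bandwidth-heavy, cabin-specific work stays in the car while the interpretation stays in the cloud.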

Is driver distraction a major business driver, or is it the "Siri effect"? Currently, the car industry seems to use driver distraction as a reason to push a lot of features into speech, and many of those uses are gimmicky. Personally, I don't care if I can set my climate control system with voice — why would I when I can simply turn a dial? I once had someone ask me about the feasibility of adding voice commands for rolling down the windows. My response: "Sure, but wouldn't people just push the window button?"

We shouldn’t implement speech commands just because we can. They may have contributed to excitement in the early adopter crowd, but we're beyond that now. Mind you, there are some seriously useful ways to use voice. For instance, any time you need to pick from a huge number of choices, voice recognition is the natural way to go. Calling contacts ("Call Sarah Potter"), entering destinations ("Go to 3121 South Park Street"), or picking music ("Play Audioslave") are all much easier than using an HMI to enter the same information, and safer to boot. It just has to work consistently and accurately.

Will car makers see more speech moving to the cloud, or will it be a hybrid of cloud and embedded? I disagree with the majority of the panel on this one, and, I think, with the majority of people in the industry. Most auto people believe a hybrid of embedded and cloud offers the best of both worlds — good recognition and updatability when connected, and consistent reliability when not. My colleague Andrew Poliak champions this view with a memorable catchphrase: the Zombie Apocalypse. That is, you still want the system to work, albeit partially, when the infrastructure isn't available.

But if you ask me, everyone is missing the point — theirs is a technology-centric point of view. Everyday customer acceptance of a technology is notoriously harsh: if it doesn't work well, it gets rejected out of hand. Good cloud solutions beat an embedded solution hands-down; they just need some improvements (see my point about hybrid recognition above). Once a customer experiences a good solution, they will become frustrated with one that performs poorly. In my opinion, it's better not to offer the service at all than to attempt a graceful degradation of capability, because most customers won't understand or care. Spend the effort instead on making sure you always have an acceptable cloud connection — either through multiple redundant mechanisms or a powerful car-based antenna — and you'll be better off. Even when the car knows some data that the cloud doesn't (like a mobile's contact list or music selection), there's no need to handle that on the embedded side. The cloud recognition server is powerful enough not to require the data set a priori. And I think we can predict an eventual migration of phone data to cloud-based (or cloud-synchronized) data that makes the car's knowledge either easily transferable or less relevant.

Who makes money, and how, from voice-enabled agents or voice services? This was one of the best questions of the panel, because nobody really knows the exact model, but everybody agreed that customers' tolerance for paying is very low. The most likely candidate is ad-based revenue. This doesn't mean reading ads aloud to the driver, but rather, positively influencing search results for either active or temporary, situation-based points of interest (POIs). Depending on how valuable a service is to the driver, there will still be room for service-based payments and high-value apps.

Standards and building mobile apps — will they come? You need standards if you want to build an app platform that will promote application creation and adoption. That's what we're doing with the QNX CAR 2 application platform — creating a way for someone other than the car companies to join the ecosystem and deploy their apps to the car in a controlled way. But don't forget: you also need a standard way to deploy apps for the cloud half of the recognition.

To close, let me share two photos. One was taken outside the Marriott Marquis, the hotel hosting the conference just off Times Square in NYC. The other is from our PR agency, Breakaway Communications. What do they have in common? Wooden water towers. Sorry, I couldn't help myself; I just love those things. They look so quaint in a city full of glass and brick.






Monday, August 13, 2012

Will autonomous cars motivate more teenagers to get behind the wheel?

I know, it seems like an odd question. But allow me to provide some context.

A few months ago, my colleague Andy Gryc predicted that autonomous cars will, in a few years, start rolling off the assembly lines. To support this prediction, he cited several trends, including two demographic factors: 1) baby boomers are getting older and hence losing their ability to drive safely, and 2) young people today are much more interested in connecting than in driving; they prefer to live their lives online.

I must admit, I thought the second factor was anecdotal at best. But boy, was I wrong… I think.

According to a new study published in the journal Traffic Injury Prevention, the number of young drivers is, in fact, falling precipitously. For instance, in 1983, 87.3% of 19-year-olds had a driver’s license. By 2008 that number had fallen to 75.5%, and by 2010 it had tumbled to 69.5%.

Similar drops occurred in other age groups under 40, but the trend is far more pronounced among teenagers and twenty-somethings. Here’s a graph from the article:



So what accounts for the trend? According to the authors, Michael Sivak and Brandon Schoettle, the decrease in driver licensing is consistent with the increase in Internet usage — an interpretation that falls in line with Andy Gryc’s hypothesis. I, too, believe that the Internet is a factor. But is it the only one?

In July, Jordan Weissmann of The Atlantic wrote a short piece on Sivak and Schoettle’s article, and if the comments are anything to go by, the trend is the result of many contributing factors, not just one. Commenters noted that, since the 1980s, gas prices have gone up; teenagers face more restrictions when applying for licenses; parents have become more protective; and cars, with all their electronics, can no longer be maintained by a teenager with a wrench and a smattering of mechanical skills. And let’s not forget the elephant in the room: the lack of jobs available to young people.

So, to return to our original question, will autonomous cars spur more young people to get behind the wheel? If young people are losing interest in driving because they’d rather stay connected, possibly yes. But if serious economic factors are at play, probably not.

What do you think?

Wednesday, August 8, 2012

Want to be an automotive trivia superstar?


The first car manufacturer to record the sale of a million vehicles in one year was Ford, with the Model T in 1922. What was the next car to do so? If you guessed the 1973 Volkswagen Beetle, you are correct!

Get ready to test your automotive knowledge because we are excited to announce the QNX Software Systems automotive trivia sweepstakes.

Each Friday, starting on August 10, we’ll tweet out an automotive trivia question at 1:00 pm ET from the @QNX_Auto Twitter account and look to you for answers. You’ll have four hours to tweet us back with your guess; the official answer will be unveiled at 5:00 pm ET that same day.

Those who respond to the @QNX_Auto Twitter account with the correct guess will be entered into a monthly draw where they’ll be eligible to win a BlackBerry® PlayBook®. Participants get one entry per week – that’s a maximum of four entries per month. For a full overview of the rules, visit qnx.com.

So start brushing up on your automotive facts and stay tuned for Friday’s first question!

Tuesday, August 7, 2012

8 steps to building a lean and mean HTML5 application

Guest post from Marc Lapierre, HMI developer for the QNX CAR 2 application platform

Have you seen photos of the QNX reference vehicle? If so, you've already caught a glimpse of the rich user experience that HTML5 can bring to car infotainment systems. The vehicle's head unit, in particular, makes extensive use of HTML5.

The members of the QNX CAR 2 team have considerable experience with HTML5, and we follow a number of “best practices” to achieve optimal performance. If you use HTML5, here are 8 techniques proven to help applications perform as smoothly and responsively as possible (a short code sketch after the list illustrates a few of them):

1. Use 3D, rather than 2D, transformations — For example, instead of translateX(x), use translate3d(x,y,z). This will hardware-accelerate the translation. Similar methods exist for most other transformations. Also, avoid animating with JavaScript libraries!

2. Use opacity, rounded corners, and gradients sparingly — If you apply these effects only occasionally, and mostly to static objects, you should achieve decent performance. But when you mix them with animations, buttons, or anything else that gets redrawn often, performance will suffer. Consider using images for framing rather than building components out of many specific CSS attributes.

3. When modifying elements, remove them from the DOM — This technique is especially helpful when updating several DOM fields at once. For example, if you are scrolling through a list of 100 contacts and want to refresh them, updating them one by one will cause the list to redraw 100 times. But if you remove the entire list, update it in memory, and then re-add it, you will incur only 2 redraws.

4. Avoid canvas and SVG — Hardware acceleration for canvas isn’t always available in WebKit or other browsers, and might incur performance hits in some cases. Likewise, SVG isn’t always accelerated on mobile and embedded platforms.

5. Hide elements you don’t need — Adding display:none to elements that don’t need to be displayed will prevent them from being rendered.

6. Don’t link across pages — When developing websites, it is common to link across pages. But in mobile applications, this approach detracts from the user experience — when using an app, it can be jarring to see the white screen that often appears when moving from one page to another. For a better UX, use AJAX requests to pull in data dynamically, and update your interface accordingly when the result is received.

7. Avoid libraries intended for the desktop — Some JavaScript libraries are designed for use on a desktop browser with a powerful CPU. Try to limit the number of third-party JavaScript libraries included in your application or seek out versions optimized for mobile use.

8. Use image sprites for pre-loading active element states — For example, using sprites for buttons with a “pressed” state allows you to have the alternative state pre-cached and ready to display, rather than having to load or draw assets on demand.
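
Here is a minimal browser-side sketch illustrating three of these techniques: #1 (3D transforms), #3 (detaching a list before a bulk DOM update), and #6 (pulling in data with AJAX instead of a page load). This is not the QNX CAR 2 code itself, and the element IDs and the /contacts.json endpoint are hypothetical.

    // #1: Use translate3d() so the move can be hardware-accelerated by the compositor.
    function slidePanel(panel, x) {
      panel.style.webkitTransform = 'translate3d(' + x + 'px, 0, 0)';
      panel.style.transform = 'translate3d(' + x + 'px, 0, 0)';
    }

    // #3: Detach the list, rebuild it in memory, then re-attach it in its original
    // spot, so the page redraws twice instead of once per contact.
    function refreshContacts(listEl, contacts) {
      var parent = listEl.parentNode;
      var anchor = listEl.nextSibling;
      parent.removeChild(listEl);                 // redraw #1
      while (listEl.firstChild) {
        listEl.removeChild(listEl.firstChild);
      }
      for (var i = 0; i < contacts.length; i++) {
        var item = document.createElement('li');
        item.textContent = contacts[i];
        listEl.appendChild(item);                 // no redraws while detached
      }
      parent.insertBefore(listEl, anchor);        // redraw #2
    }

    // #6: Fetch data with an AJAX request and update the interface in place,
    // instead of linking to a new page and flashing a blank screen.
    function loadContacts() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/contacts.json', true);    // hypothetical data endpoint
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          var contacts = JSON.parse(xhr.responseText);
          refreshContacts(document.getElementById('contactList'), contacts);
        }
      };
      xhr.send();
    }

If you animate the panel, pair the transform with a CSS transition rather than a JavaScript animation loop, in keeping with the advice in #1.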

What about you? Do you have any resource-saving or performance-optimizing techniques that you’d like to share?


Marc Lapierre is an HMI developer on the QNX CAR 2 application platform team, where he focuses on development of user applications using HTML5, JavaScript and CSS3, and on improving coding efficiency and standards in this environment. Before joining QNX Software Systems, Marc worked at RIM, developing social networking and multimedia applications for smartphones and the BlackBerry PlayBook tablet.