With the average transaction price of new vehicles in the United States hitting nearly $35,000 at the end of 2014, drivers can at least be grateful that the cars they purchase are more durable and reliable than ever before. The average age of the more than 200 million vehicles on the road in the United States is now nearly 11.5 years. That longevity has a big potential downside, however: as computing and communications technology marches on to improve safety, efficiency, and reliability, many of those existing cars will be incapable of participating in these advances. Luckily, cloud computing could come to the rescue.
According to Navigant Research’s report, Autonomous Vehicles, full-function self-driving vehicles aren’t expected to be available in significant volumes until late in the 2020s. Until the fully self-driving car arrives, we’ll see a steady stream of incremental improvements in advanced driver assistance systems. Thanks to increasing connectivity in vehicles, we’re also less likely to be stuck with the capabilities that were built in when the vehicle rolled off the assembly line.
No Car Left Behind
General Motors (GM) and Audi are among the manufacturers that are already building 4G LTE radios into many of their new vehicles. When this capability is combined with advanced new microprocessors from companies like NVIDIA and Qualcomm, vehicles will be able to leverage cloud computing infrastructure to get smarter as they age, rather than being left behind.
At the 2015 Consumer Electronics Show in Las Vegas, NVIDIA unveiled a new-generation 256-core processor, the Tegra X1, along with electronic control units powered by the chip. One of the problems that driver assistance and autonomous systems must solve is recognizing and distinguishing the objects detected by a new vehicle’s many sensors. The human brain is remarkably adept at telling an animal from a pedestrian, or an ambulance from a delivery van.
Detection before Failure
This sort of image recognition is far more difficult for a computer, so the Tegra X1 is designed to collect image data from its 12 camera inputs and transmit it back to data centers, where it can be aggregated with information from other vehicles. By combining data from many vehicles, object recognition can be dramatically improved, and updated image libraries can be fed back to vehicles for better onboard sensing, even without changing hardware.
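To make the feedback loop concrete, here is a minimal sketch of how such a fleet-learning cycle could work: the vehicle runs an onboard classifier, uploads detections it is unsure about, and periodically pulls down an updated recognition model. All class names, thresholds, and methods here are illustrative assumptions, not NVIDIA’s or any automaker’s actual software.

```python
# Hypothetical sketch of the cloud-assisted recognition loop described above.
# Every name and threshold is an assumption for illustration only.

from dataclasses import dataclass, field

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "ambulance"
    confidence: float  # 0.0 - 1.0 from the onboard model

@dataclass
class CloudModelService:
    """Stands in for the data-center side: pools fleet data, retrains."""
    samples: list = field(default_factory=list)
    model_version: int = 1

    def upload(self, image_id: str, detection: Detection) -> None:
        self.samples.append((image_id, detection))
        # Once enough ambiguous samples arrive, "retrain" and bump the version.
        if len(self.samples) >= 3:
            self.model_version += 1
            self.samples.clear()

    def latest_version(self) -> int:
        return self.model_version

class Vehicle:
    UPLOAD_THRESHOLD = 0.80  # send anything the onboard model is unsure about

    def __init__(self, cloud: CloudModelService):
        self.cloud = cloud
        self.model_version = cloud.latest_version()

    def process_frame(self, image_id: str, detection: Detection) -> None:
        if detection.confidence < self.UPLOAD_THRESHOLD:
            # Ambiguous object: contribute it to the fleet-wide training pool.
            self.cloud.upload(image_id, detection)

    def sync_model(self) -> None:
        # Over-the-air update: newer image libraries, same hardware.
        self.model_version = self.cloud.latest_version()

cloud = CloudModelService()
car = Vehicle(cloud)
car.process_frame("cam3_f001", Detection("pedestrian", 0.55))
car.process_frame("cam7_f104", Detection("ambulance", 0.62))
car.process_frame("cam1_f033", Detection("delivery van", 0.48))
car.sync_model()
print(f"onboard model now at version {car.model_version}")
```

The point of the design is that the expensive learning happens in the data center across the whole fleet, while each car only needs enough connectivity to upload hard cases and download the improved model.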
GM is also harnessing the power of the cloud to provide drivers with predictive diagnostics through OnStar. Available for more than a decade, OnStar provides subscribers with vehicle health reports when faults are detected. Now, by monitoring critical systems such as the battery, starter, and fuel pump and sending that data back to the cloud, OnStar can detect subtle changes in performance that are known precursors to component failures. The OnStar Driver Assurance system can then notify the driver so an impending problem can be corrected before the driver is left stranded at the side of the road. This predictive diagnostic system will be available on several of GM’s 2016 model year vehicles.
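The article doesn’t describe OnStar’s actual algorithm, but the underlying idea of trend-based prediction is simple to sketch: compare recent telemetry against the component’s own historical baseline and warn when it drifts too far. The function, thresholds, and readings below are hypothetical stand-ins chosen purely for illustration.

```python
# A hypothetical trend check of the kind a predictive-diagnostics service
# might run on uploaded telemetry. The metric (cranking voltage) and the
# thresholds are illustrative assumptions, not OnStar's actual method.

from statistics import mean

def battery_warning(cranking_voltages: list[float],
                    baseline_count: int = 5,
                    drop_threshold: float = 0.5) -> bool:
    """Flag a battery whose cranking voltage has drifted below its own
    historical baseline by more than drop_threshold volts."""
    if len(cranking_voltages) <= baseline_count:
        return False  # not enough history to establish a baseline
    baseline = mean(cranking_voltages[:baseline_count])
    recent = mean(cranking_voltages[-3:])
    return (baseline - recent) > drop_threshold

# Readings uploaded over successive starts: a slow decline precedes failure.
history = [9.8, 9.9, 9.8, 9.7, 9.8, 9.5, 9.3, 9.1, 8.9]
if battery_warning(history):
    print("Notify driver: battery performance is degrading; service soon.")
```

Because each vehicle is compared against its own baseline rather than a fixed spec, the service can catch gradual degradation that would still pass a one-time pass/fail test.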
As automakers roll out new infotainment interfaces, such as Apple CarPlay and Google’s Android Auto, drivers will also benefit from improved voice recognition that leverages the massive data centers run by these technology companies. More robust and reliable voice control will help reduce driver frustration and keep their attention on the road, at least until the car can take over completely.