In the future, we'll get what we pay for when it comes to artificial intelligence.
I've been reading this interesting article from a group of leading artificial intelligence researchers on what it takes to make a 'good self-driving car'. They said that even "30 million samples" is not enough to produce a quality driving model. Their approach moving forward will be to:
> We propose exposing the learner to synthesized data in the form of perturbations to the expert's driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress -- the perturbations then provide an important signal for these losses and lead to robustness of the learned model.
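The quoted idea (an imitation loss augmented with penalty terms, where perturbed trajectories supply the "bad" examples) can be sketched roughly like this. Everything below is my own illustration, not the paper's actual implementation: the function names, the weights, and the simple random-jitter perturbation are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_trajectory(expert, scale=0.5):
    """Toy stand-in for the paper's structured perturbations:
    jitter the expert's path so the learner sees off-nominal states."""
    return expert + rng.normal(0.0, scale, size=expert.shape)

def augmented_imitation_loss(pred, expert, collided, off_road,
                             w_imitation=1.0, w_collision=10.0, w_offroad=5.0):
    """Imitation loss (squared error to the expert trajectory) plus
    penalty terms for undesirable events. Weights are illustrative."""
    imitation = np.mean((pred - expert) ** 2)
    penalty = w_collision * float(collided) + w_offroad * float(off_road)
    return w_imitation * imitation + penalty

# A perturbed rollout that ends in a collision is penalized even if it
# tracks the expert closely; that penalty is the extra training signal.
expert = np.array([0.0, 1.0, 2.0])
clean_loss = augmented_imitation_loss(expert, expert, collided=False, off_road=False)
crash_loss = augmented_imitation_loss(expert, expert, collided=True, off_road=False)
```

The point of the sketch is just the structure: pure imitation would drive `crash_loss` to zero too, whereas the penalty terms keep it high until the model learns to avoid the event itself.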
All of this raises the questions:
- What is 'good driving'?
- Will what one person thinks of as 'good driving' be the same as what another does?
Where various inputs are required to produce a certain outcome, the capitalist model fits well. My gut prediction is that, because of varying tolerances for risk and varying budgets, producers will offer consumers the option to purchase an 'A.I. driving system' that suits their budget:
| Quality | Safety | Price     |
|---------|--------|-----------|
| low     | lower  | cheap     |
| low     | higher | expensive |