Charles Ortiz, Senior Principal Manager of AI / Senior Research Scientist at Nuance Communications, wrote this article a few months ago on TechCrunch, arguing that there is no particular reason to believe AI will be bad for humanity – there are a vast number of possible outcomes.
That is undoubtedly true, and it should be borne in mind when reading some of the more apocalyptic articles about AI. However, few people are saying a bad outcome is the only one, or even the likely one. But, as highlighted in some of the comments, the fact that it is a possible outcome means we should at least plan for the eventuality: whilst unlikely, the impact if it did happen could be immense.
Part of the challenge with planning for this future is that whereas in the past all significant advances in technology have been very capital intensive, naturally keeping them in the hands of big business or governments, AI is predominantly software, where the marginal cost of production tends towards zero and the cost of computing power is constantly falling. This will be a great enabler, but it will also make it nearly impossible to control who has access to AI capabilities.