Some AI is troubling
Yes and no.
The pseudo-science of Psychology studies why people act the way they act. But people vary all over the map, as do their actions and their justifications, so it's very difficult to make a science out of it.
If we ever want to fully benefit from strong AI, we need to accept that it may not be always fully clear to us how it works. It's a choice we will need to make as a species.
> It's a choice we will need to make as a species.
Taken by the council of humans?
I rather think it is a decision taken by the makers of the AI.
Theologians and ethical philosophers can explain to the common man what to think about it, but they don't know what it will be like.
April 14th, 2017 4:40pm
Note the 'normal' training algorithm for a neural net is to present your training inputs at the input layer, present your desired outputs at the output layer, and 'percolate' the difference back through the net, so that after roughly 10,000 to 100,000 presentations the connection weights have been set. This means the neural net will 'recognize' your inputs and produce the appropriate outputs.
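The 'percolate the difference back' step described above is backpropagation. Here is a minimal sketch in plain Python; the network size (2-4-1), the learning rate, the iteration count, and the XOR task are all illustrative choices for the sketch, not anything from the comment:

```python
import math, random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-4-1 network: weights start random, then 'evolve' during training.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [random.uniform(-1, 1) for _ in range(4)]
b2 = 0.0

# Training set: XOR, a classic task a net with no hidden layer cannot learn.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.8

def forward(x):
    # Present the input, compute hidden activations, then the output.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial = loss()
for _ in range(20000):  # on the order of the 10,000-100,000 presentations mentioned
    for x, t in data:
        h, y = forward(x)
        # Percolate the difference back: output layer first, then hidden layer.
        dy = (y - t) * y * (1 - y)
        for i in range(4):
            dh = dy * W2[i] * h[i] * (1 - h[i])   # uses W2[i] before updating it
            W2[i] -= lr * dy * h[i]
            for j in range(2):
                W1[i][j] -= lr * dh * x[j]
            b1[i] -= lr * dh
        b2 -= lr * dy
final = loss()
print(initial, "->", final)  # the error typically shrinks toward zero
```

Note that after training, W1 and W2 are just arrays of numbers: they accomplish the task, but nothing in them tells the programmer *how* — which is exactly the opacity complained about below.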
The problem with neural nets and teaching algorithms is that you're never sure when 'edge conditions' are responsible for whatever success you see 'so far'.
This means you can never be sure when the 'edge conditions' will suddenly collapse when presented with new training data, or even REAL data (like visual depictions of "where the edge of the road is").
It's a side-effect of the neural-net training algorithm that the net CAN recognize things, but the creator of the net doesn't know how it's doing it. The reinforced links between the neural nodes have 'evolved' their values during training to accomplish the task.
Well, as long as the solution is 'evolved' and not 'designed', I don't think you're going to have the reliability and flexibility desired in the entity controlling an automotive death machine.
Good article for a change. It is troubling to rely on software that no one can debug.
Pie is a lot better than ...
April 15th, 2017 6:24pm