Some AI is troubling

Do we know why people act as they do act?
Permalink Yoda 
April 14th, 2017 10:41am
Yes and no.

The pseudo-science of Psychology studies why people act the way they act.  But people vary all over the map, as do their actions, as do their justifications, so it's very difficult to make a science out of it.
Permalink SaveTheHubble 
April 14th, 2017 3:03pm
If we ever want to fully benefit from strong AI, we need to accept that it may not be always fully clear to us how it works. It's a choice we will need to make as a species.
Permalink Yoda 
April 14th, 2017 4:17pm
>  It's a choice we will need to make as a species.

Taken by the council of humans?

I rather think it is a decision taken by the makers of the AI.

Theologians and ethical philosophers can explain to the common man what to think about it, but they don't know what it will be like.
Permalink Lotti Fuehrscheim 
April 14th, 2017 4:40pm
Note the 'normal' training algorithm for a set of neural nodes is to present to the input your training inputs, present to the output your desired outputs, and 'percolate' the difference back into the neural net, so that after about 10,000 to 100,000 presentations, the neural node strengths have been set.  This means the neural net will 'recognize' your inputs and create the appropriate outputs.
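The loop described above can be sketched in a few dozen lines of plain Python: backpropagation on a toy 2-3-1 network learning XOR. The network size, learning rate, and all the names here are illustrative, not any particular library's API.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny 2-3-1 network; the last weight in each row is a bias term.
w_ih = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
w_ho = [random.uniform(-1, 1) for _ in range(4)]

# Training set: XOR inputs paired with desired outputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_ih]
    o = sigmoid(sum(w_ho[i] * h[i] for i in range(3)) + w_ho[3])
    return h, o

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

err_before = total_error()

# ~10,000 presentations of the training set, as the post describes.
for _ in range(10000):
    for x, t in data:
        h, o = forward(x)
        # 'Percolate' the difference back: output delta, then hidden deltas.
        d_o = (t - o) * o * (1 - o)
        d_h = [d_o * w_ho[i] * h[i] * (1 - h[i]) for i in range(3)]
        # Nudge hidden->output weights, then input->hidden weights.
        for i in range(3):
            w_ho[i] += lr * d_o * h[i]
        w_ho[3] += lr * d_o
        for i in range(3):
            for j in range(2):
                w_ih[i][j] += lr * d_h[i] * x[j]
            w_ih[i][2] += lr * d_h[i]

err_after = total_error()
print(err_before, err_after)
```

After training, the weights encode *some* solution, but nothing in them says how the net is solving the problem, which is exactly the opacity at issue in this thread.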

The problem with neural nets and teaching algorithms is that you're never sure when 'edge conditions' are responsible for whatever success you see 'so far'.

This means you can never be sure when the 'edge conditions' will suddenly collapse when presented with new training data, or even REAL data (like visual depictions of "where the edge of the road is").

It's a side-effect of the neural-net training algorithm that the neural net CAN recognize things, but the creator of the net doesn't know how it's doing it.  The reinforced links between the neural nodes have 'evolved' values during training to accomplish the task.

Well, as long as the solution is 'evolved' and not 'designed', I don't think you're going to have the reliability and flexibility desired in the entity controlling an automotive death machine.
Permalink SaveTheHubble 
April 14th, 2017 4:41pm
Good article for a change. It is troubling to rely on software that no one can debug.
Permalink Shylock 
April 14th, 2017 5:42pm
Good article Pie.
Permalink Shylock 
April 15th, 2017 9:44pm

This topic is archived. No further replies will be accepted.
