Not to be taken too seriously, but a previous post presented
Karl Friston’s idea that consciousness is not a thing but a process, the process
of inference.
Conscious processing
is about inferring the causes of sensory states, and thereby navigating the
world to elude surprises. While natural selection performs inference by
selecting among different creatures, consciousness performs inference by
selecting among different states of the same creature (in particular, its
brain). There is a vast amount of anatomical and physiological evidence in support
of this notion. If one regards the brain as a self-evidencing organ of
inference, almost every one of its anatomical and physiological aspects seems
geared to minimise surprise.
Karl Friston
As far as I know this isn’t Friston’s view, but if his idea is
sound, then surely some machines are already conscious because inference is one of the things they do. From complex inputs they infer the best output. It may be a remote, alien and robotic consciousness and it may not
be intelligent as we understand it, but it can be adaptive with the ability to infer and learn enough to improve the next inference. Not all of us can do that consistently.
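For the flavour of it, here is a toy sketch of that kind of inference in Python. It is not Friston’s free-energy machinery, just a made-up illustration: an observer scores each sensation by its surprise (the negative log of the probability it assigned to it) and shifts its beliefs towards whichever explanation keeps surprise low, so the next inference is a little better than the last. The hypotheses and numbers are invented for the example.

```python
import math

# Toy illustration only, not Friston's free-energy formulation.
# The hypotheses and probabilities below are invented for the example.
hypotheses = {
    # P(sensation | hypothesis) for a single binary sensation
    "raining":     {"wet": 0.9, "dry": 0.1},
    "not_raining": {"wet": 0.2, "dry": 0.8},
}

belief = {"raining": 0.5, "not_raining": 0.5}  # prior belief over hypotheses

def surprise(sensation, belief):
    """Surprise = negative log of the probability assigned to the sensation."""
    p = sum(belief[h] * hypotheses[h][sensation] for h in belief)
    return -math.log(p)

def update(sensation, belief):
    """Bayes' rule: shift belief towards hypotheses that explain the sensation."""
    posterior = {h: belief[h] * hypotheses[h][sensation] for h in belief}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

for sensation in ["wet", "wet", "wet", "dry"]:
    print(f"sensation={sensation:>3}  surprise={surprise(sensation, belief):.3f}")
    belief = update(sensation, belief)

print("final belief:", {h: round(p, 3) for h, p in belief.items()})
```

Each repeat of the same sensation is a little less surprising than the last, while the unexpected one stands out, which is the modest sense in which the thing learns enough to improve its next inference.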
Many people, perhaps most, may dismiss Friston’s idea, either because they don’t like it anyway or because it can be adapted towards such a tricky conclusion. One obvious reason to dismiss the idea is that
machines merely follow algorithms and following an algorithm is not the same as
being conscious. It’s a good argument and deeply convincing because we do feel
as if we humans are fundamentally different from machines, as if we could do this or that in ways which are not mechanical.
How about the political convictions of Jeremy Corbyn and his
followers? In an interesting sense they follow political algorithms and that
may be part of the chap’s appeal. His concept of government is essentially a
socialist algorithm and his response to any political input is restricted to
whatever the algorithm allows. Even the way he assimilates input is dictated by
the algorithm.
Following a similar line of thought, it could be said that
Theresa May’s problems are caused by her following no obvious algorithm. One
could even claim that this is the problem with politics: it places too much weight on algorithms and too little on pragmatic flexibility.
None of this need be taken too seriously, but there are at
least two reasons why we might play around with the idea of machine
consciousness, however dubious it feels.
Firstly the obvious one – forewarned is forearmed. Many of
us must regard artificial intelligence with at least some degree of
trepidation, possibly mixed with scorn, scepticism or a hard-nosed tendency to
dismiss it all as hype. It may be more than hype though. If so then it may be
as well to adjust now and not have the adjustment forced upon us in the near future.
For example, if self-driving vehicles ever take to public roads, and it is not certain that they will, then one might say that these vehicles are able to drive themselves because they are conscious. They constantly infer the current state of the road from a range of sensory inputs and act on that inference – geared to minimise surprise. Not only that, but they do it within an unpredictable
environment – just as we do.
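To make that a little more concrete, here is a very rough sketch of the sort of recursive inference meant here. It is a toy Bayes filter with invented numbers (the sensor reliabilities and the braking threshold), nothing like a real driving stack: the vehicle keeps a running belief about whether the lane ahead is blocked, folds in each noisy sensor reading, and only acts once the inferred probability of an obstruction gets high enough.

```python
# Toy sketch of recursive state estimation, nothing like a real driving system.
# Sensor reliabilities and the braking threshold are invented for illustration.
P_DETECT = 0.95        # P(sensor reports an obstacle | obstacle really there)
P_FALSE_ALARM = 0.10   # P(sensor reports an obstacle | lane actually clear)
BRAKE_THRESHOLD = 0.8  # act once belief in an obstruction is high enough

def update_belief(p_obstacle: float, sensor_says_obstacle: bool) -> float:
    """Fold one noisy sensor reading into the current belief (Bayes' rule)."""
    if sensor_says_obstacle:
        like_obstacle, like_clear = P_DETECT, P_FALSE_ALARM
    else:
        like_obstacle, like_clear = 1 - P_DETECT, 1 - P_FALSE_ALARM
    numerator = like_obstacle * p_obstacle
    return numerator / (numerator + like_clear * (1 - p_obstacle))

belief = 0.01  # start out almost sure the lane is clear
for reading in [False, True, True, True, True]:
    belief = update_belief(belief, reading)
    action = "brake" if belief > BRAKE_THRESHOLD else "carry on"
    label = "obstacle" if reading else "clear"
    print(f"sensor={label:>8}  P(obstacle)={belief:.3f}  -> {action}")
```

The point is not the numbers but the loop: sense, update belief, act, repeat – inference rather than a fixed reaction to each reading.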
Admit this and the possibility of machine consciousness makes
some kind of sense, if only as a means to assess any threats it may pose. There is an important sense in which self-driving vehicles are more aware than human drivers, a sense in which they are much more conscious of their environment.
Secondly a linked problem – the wider issues of automation
and employment. As we all know automation kills off old ways of working and
consigns old forms of employment to history. This should not be
a problem if new jobs appear, jobs we probably haven’t thought of yet. Or so we are often told.
However, automation via conscious machines may be different
and for that reason the new jobs may not appear or they may be inaccessible to many
people. A key problem could be the rate of progress. In time, and that time may
be now, conscious machines may acquire new areas of expertise more quickly, cheaply and comprehensively than their human competitors. Millions may find themselves out-competed
by conscious machines.
I mean – look around you. How unlikely is it?
6 comments:
Interesting stuff, but probably a bit above my pay grade. Two questions.
1) I know you posted about Friston earlier this year, but a few years ago you were talking about a theory that involved our minimising or avoiding surprises. Was that also Friston, or is there another one?
2) According to Friston's idea of consciousness, what would suffering be, do you think?
I think it is a good question, and one I’d feel confident venturing a thoughtful opinion on a hundred years hence. Except we’ll all be dead. It is one of those rumbling, just sort-of under the general consciousness, memes. Are we, for all our assumed rights, airs and graces and vociferous entitlements, generally, just like historical curiosities; rather similar to, say, home-weavers pre-industrialisation? The equivalent of unknowable figures in quaint sepia photographs? Probably, almost certainly. But then we have to consider; just who then inherits the earth? Someone or more likely some grouping or system has to, that's for sure. And much, much more importantly, and also practically, how do they manage to hold on to enough of it to show a profit of sorts, without being lynched? I suspect, only suspect, that people are sufficiently inventive and awful to swallow this pill and manage it somehow, with concomitant casualties. Whether this world seems like much of an improvement to those left behind is moot. But who in truth gives a shit about them (us) anyway?
"if new jobs appear" - and if they don't, the political arguments are all about distribution of wealth.
Machines are designed, manufactured and programmed by humans, so isn't it fair to say that their ability to produce what they were designed for is a result of the designer's skill, and therefore that they're doing what they're told and nothing else?
I agree with what you say about dogma like that followed by Corbyn, as everything he does is coloured by that fantasy.
Seems to me all minds from ant to human are probably built from layer upon layer of instinctive reactions with some plasticity in the upper layers. There is no internal 'algorithm', just a fairly flexible lookup table, and the layers of instinct tend to trigger each other. Human vision for example does not (afaik) compute trigonometry to drive a car; it simply uses experience.
Hence successful politics is less of an algorithm with a grand theory behind it, more a process of just getting along without too much bother. Adherence to this or that theory merely causes wasted effort and trouble.
Building machine consciousness on the basis of layers of instincts looks possible. But what of events that are outwith the built-in layers, and how would the 'plasticity' be created? So far the mantra is 'build it big enough and plasticity will emerge'. The plasticity problem will probably be solved, and that is when the humans' problems start.
Recently returned from east of Suez and I immediately got the feeling that the denizens of a nearby town seemed by comparison to be pretty lumpen. We have more than just machines to worry about.
Sam - yes Friston is the avoiding surprises guy. I don't know what he would say about suffering because I don't come across him very often and don't seek him out to probe more deeply into his ideas. He's a working scientist so maybe he hasn't published much outside his field.
Clacket - I'm sure we won't inherit the earth because our labour is being sucked dry. Something else may turn up as it always has, but it all feels more and more trivial.
Sackers - which we see already and the problem seems likely to become acute. There is already much that we don't acknowledge, or don't acknowledge openly.
Scrobs - yes, designers' skill may turn out to be a permanent limitation, but chess computers play chess better than their designers could.
Roger - "plasticity in the upper layers" is a good description although the plasticity often seems very limited and much of it seems to be forced by the environment - micro adjustments to the basic response. I know what you mean about denizens of a nearby town seeming pretty lumpen. It worries me, particularly when I think about the grandchildren. Something bad is festering away.