
The problem of extinction or “unlearning” is especially critical for complex, hierarchical, learning. For, once a generalization about the past has been made, one is likely to build upon it. (Minsky, 1961, pp. 30–31)
In order to solve complex problems, the machine has to manage a set of interrelated subproblems. A step-by-step heuristic would involve subdividing the problem; selecting subproblems to be solved (estimating their relative difficulty and the centrality of the different subproblems); and selecting adequate methods for the solution of the subproblems. For difficult problems, however, the machine would have to analyze the problem as a whole, that is, to plan. Minsky presents a number of planning schemes, which include the use of analogy and of semantic or abstract models.
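As a rough illustration of this step-by-step heuristic, the sketch below recursively picks the most promising subproblem and dispatches a method to it. The Problem structure, the centrality-to-difficulty scoring, and the solve routine are assumptions introduced here for illustration, not code taken from Minsky (1961).

```python
# A schematic sketch of the heuristic outlined above; all names and the
# scoring rule are illustrative assumptions, not Minsky's own formulation.
from dataclasses import dataclass, field

@dataclass
class Problem:
    name: str
    difficulty: float                 # estimated relative difficulty
    centrality: float                 # estimated centrality to the whole problem
    subproblems: list = field(default_factory=list)

def select_subproblem(problem):
    # Prefer subproblems that are central and comparatively easy.
    return max(problem.subproblems, key=lambda p: p.centrality / p.difficulty)

def solve(problem):
    if not problem.subproblems:       # no further subdivision: apply a method directly
        print(f"applying a method to '{problem.name}'")
        return
    chosen = select_subproblem(problem)
    solve(chosen)                     # solve the selected subproblem first
    problem.subproblems.remove(chosen)
    if problem.subproblems:
        solve(problem)                # then continue with the remaining subproblems

solve(Problem("prove theorem", difficulty=1.0, centrality=1.0, subproblems=[
    Problem("establish lemma A", difficulty=2.0, centrality=0.9),
    Problem("establish lemma B", difficulty=1.0, centrality=0.5),
]))
```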
Enabling machines to solve complex problems means that they should have inductive capacity, that is, methods that make it possible for them to build general statements about events beyond their own experience. But for machines to answer questions about hypothetical events beyond their own experience, without actually trying out those experiments, the answers would have to come from an internal submachine contained within the original machine. The problem of inductive inference can thus be seen as the reconstruction of this internal machine, and this internal machine can be understood as a model of the world. Since the machine is part of the world, the internal model would have to include representations of the machine itself.
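A toy sketch of this architecture may help: a machine answers hypothetical questions by consulting an internal model rather than running the experiment, and that model also contains a representation of the machine itself. The Machine class and its one-entry world model below are hypothetical illustrations, not drawn from the original texts.

```python
# A toy sketch of the architecture described above; all names are illustrative.
class Machine:
    def __init__(self):
        # Internal submachine: a (drastically simplified) model of the world.
        self.world_model = {"if it rains, the ground": "gets wet"}
        # Since the machine is part of the world, the model also represents the machine.
        self.world_model["if asked a hypothetical, this machine"] = "consults its internal model"

    def answer(self, hypothetical):
        # The answer comes from the internal model, not from trying the experiment.
        return self.world_model.get(hypothetical, "unknown")

machine = Machine()
print(machine.answer("if it rains, the ground"))                # -> gets wet
print(machine.answer("if asked a hypothetical, this machine"))  # -> consults its internal model
```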
John von Neumann (1966) states that, from the point of view of physical organisms, it is not possible to be sure about the implicit mechanisms by which memory is established. A relevant difference between living organisms and machines is that living organisms constitute a system so integrated that it can keep running in spite of the occurrence of errors. Von Neumann describes human beings as similar to a complex self-organizing system:
The system is sufficiently flexible and well organized that as soon as an error shows up in any part of it, the system automatically senses whether this error matters or not. If it doesn’t matter, the system continues to operate without paying any attention to it. If the error seems to the system to be important, the system just blocks the regions off and by-passes it forever, and proceeds along other channels. (Von Neumann, 1966, p. 71)
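The behaviour described in this passage can be rendered schematically: when a component fails, the system decides the error matters, blocks that region off permanently, and proceeds along another channel. The Channel and Organism classes below are assumptions introduced here for illustration, not von Neumann's.

```python
# An illustrative sketch of the error-bypassing behaviour in the quote above;
# the class names and structure are assumptions, not from von Neumann (1966).
class Channel:
    def __init__(self, name, works=True):
        self.name, self.works = name, works

    def transmit(self, message):
        if not self.works:
            raise RuntimeError(f"{self.name} failed")
        return f"{message} via {self.name}"

class Organism:
    def __init__(self, channels):
        self.channels = channels
        self.blocked = set()              # regions blocked off "forever"

    def act(self, message):
        for channel in self.channels:
            if channel.name in self.blocked:
                continue                  # never revisit a blocked-off region
            try:
                return channel.transmit(message)
            except RuntimeError:
                # The error matters: block the region off and proceed elsewhere.
                self.blocked.add(channel.name)
        raise RuntimeError("no working channel left")

organism = Organism([Channel("channel-1", works=False), Channel("channel-2")])
print(organism.act("signal"))             # routes around the failed channel
```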
Von Neumann reinforces his description of living organisms as ‘complicated aggregations’, probabilistically ‘improbable’ amalgamations of elementary parts. Implicitly, it can be said that he is agreeing with Anderson (1972). Von Neumann goes further and describes his own surprise at the fact that organisms can generate other organisms that are even more ‘complicated’ than themselves, and that the recipes for these succeeding organisms are not necessarily contained within the original recipes, with no hints or ‘predictions’ about how the succeeding organism should be. This does not occur with artificial automata: the synthesis performed by an automaton must be entirely described.
Hopfield (1982) shows that computational properties such as the stability of memories and the construction of categories of generalization can emerge as a spontaneous result of the interaction of a large number of simple neurons. “Any physical system whose dynamics in phase space is dominated by a substantial number of locally stable states to which it is attracted can therefore be regarded as a general content-addressable memory” (Hopfield, 1982).
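A minimal sketch of such a system, assuming the standard Hebbian storage rule and asynchronous updates rather than any specific code from the 1982 paper, shows stored patterns acting as locally stable states: a corrupted cue relaxes back to the nearest stored pattern, so the network functions as a content-addressable memory.

```python
# A minimal Hopfield-style network (a sketch, not code from Hopfield, 1982).
# Stored patterns become locally stable states; a noisy cue relaxes back to
# the nearest stored memory.
import numpy as np

rng = np.random.default_rng(0)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))          # three random +/-1 memories

# Hebbian storage: sum of outer products, zero self-connections.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(cue, steps=5 * N):
    """Asynchronous updates: one randomly chosen unit at a time."""
    state = cue.copy()
    for _ in range(steps):
        i = rng.integers(N)
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = patterns[0].copy()
noisy[rng.choice(N, size=10, replace=False)] *= -1   # corrupt 10 of 64 bits
print(np.array_equal(recall(noisy), patterns[0]))    # typically True: memory retrieved
```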