
Evolution and Information Processing

Basic nerve encoding schemes seem to be predominantly of two types: labeled-line code and frequency code. Labeled-line code means that a different nerve is used for the arm than for the leg (different destination); a different nerve might also be used to tell the elbow to straighten rather than to bend further (different function). Frequency code means that more pulses per second are sent when the action should be faster or more intense. Both methods of encoding seem well suited to present-moment action or perception. We can see how they would pass messages from the brain to a limb while an action is in progress, and how they would pass messages from the limb back to the brain about resistance encountered or pain. They would also serve to identify a type of smell and to indicate its intensity. This is sufficient for simple discriminatory action: "if it smells good, walk to it, and if it smells bad, walk away. If it smells really bad, run, don't walk". Such decision making can be built as neural wiring, or it can be represented by symbolically encoded rules, like the "if ... then" statements above.
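To make the two schemes concrete, here is a small sketch in Python (the names and numbers are ours, purely illustrative): a nerve message is modelled as which line it arrives on (labeled-line code) and how many pulses per second it carries (frequency code), and the discriminatory rule above is written as plain "if ... then" statements.

    # Illustrative sketch: a nerve message is (line_id, pulses_per_sec).
    # Which line fires is the labeled-line code (destination/function);
    # the pulse rate is the frequency code (speed/intensity).

    def react_to_smell(quality, pulses_per_sec):
        """The discriminatory rule from the text as 'if ... then' statements."""
        if quality == "good":
            return "walk toward it"
        if pulses_per_sec > 50:          # a really intense bad smell
            return "run, don't walk"
        return "walk away"

    # A strong signal (80 pulses/sec) arriving on the 'bad smell' line:
    print(react_to_smell("bad", 80))     # -> run, don't walk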

Representing rules by abstracted codes (like language expressions) might be more compact than having a separate neural structure for each rule. The coded representation may allow more rules to be stored within the same brain complexity, i.e., by the same number of neurons. However, the coded representation would require additional code transformation mechanisms, which would themselves need to be evolved. The sections below describe a couple of such transformation mechanisms.
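One way to picture this trade-off in code (our own illustration, with an arbitrary rule set): hard-wired rules need one structure per rule, while encoded rules share a single compact table, but the table is useless without a separate interpretation mechanism, and that mechanism itself would have had to evolve.

    # Hard-wired: each rule is its own structure.
    def rule_good(smell):
        return "approach" if smell == "good" else None

    def rule_bad(smell):
        return "avoid" if smell == "bad" else None

    # Encoded: many rules fit in one compact table ...
    RULES = [("good", "approach"), ("bad", "avoid"), ("really bad", "flee")]

    # ... at the price of an additional transformation mechanism.
    def interpret(rules, smell):
        for condition, action in rules:
            if smell == condition:
                return action
        return None

    print(interpret(RULES, "really bad"))    # -> flee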

Vision and sound need more complex encoding schemes, along with encoding transformations. In vision, the initial image is mapped as distinct pixels, each representing hue and brightness, somewhat like a scanned image going into Adobe Photoshop. Next, features are extracted from the image, somewhat like taking the Photoshop image into Adobe Illustrator, with lines, vectors, and planes. The underlying encoding scheme changes the image from a rasterized form (like TV) to a vector-based form. Sound goes through a somewhat similar transformation, in which the complex sound is represented as sine waves with different amplitudes and phase angles. The point we are making is that we encounter multiple encoding schemes when observing how nerves and the neural system process information.
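For sound, the transformation described is essentially a Fourier decomposition. As a minimal sketch (a naive, inefficient transform, using only Python's standard library), a sampled waveform is re-encoded as a list of sine components, each with an amplitude and a phase angle:

    import cmath, math

    def to_sines(samples):
        """Re-encode a waveform as (frequency bin, amplitude, phase) triples
        via a naive discrete Fourier transform."""
        n = len(samples)
        components = []
        for k in range(n // 2 + 1):
            c = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            components.append((k, abs(c) / n, cmath.phase(c)))
        return components

    # One cycle of a pure sine tone, sampled 8 times.
    tone = [math.sin(2 * math.pi * t / 8) for t in range(8)]
    for k, amp, phase in to_sines(tone):
        if amp > 1e-9:    # a real signal splits amplitude with the conjugate bin
            print(f"bin {k}: amplitude {amp:.3f}, phase {phase:.3f}")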

Reacting versus planning

The simple rules above ("If it smells good, walk to it, and if it smells bad, walk away. If it smells really bad, run, don't walk") may be reactive, i.e., they are invoked whenever you smell something. However, we can also see them as part of purposive behaviour, or even of a plan. The creature we are imagining may see a dead animal and consider what it wants to do, and whether there is an opportunity to feed. It may first look and smell for other predators, and only then invoke this rule. It may even invoke a prior rule to first go downwind to better smell what there is: "If I want to smell, then I have to ...". Rules about meeting the preconditions of other rules are easier to represent if the rules are codified. In language we would put it as: "If the rule mentions smells, then get into position to smell and get a good whiff before applying the rule". Codified rules can be examined without actually being invoked. This distinction is also known as "use vs. mention", or "assertion vs. quote". This kind of capability is needed for the use of tools, and involves backward chaining: "To get the ants I can use a twig. To get a twig I can break off a branch and strip the leaves".
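Backward chaining of this "to get X, first do Y" kind can be sketched in a few lines. The rule table below is ours, purely illustrative: planning starts from the goal and recursively satisfies each rule's preconditions before the rule itself.

    # Each codified rule: goal -> the subgoals that must be met first.
    RULES = {
        "eat ants": ["get ants"],
        "get ants": ["get twig", "use twig on ant nest"],
        "get twig": ["break off branch", "strip leaves"],
    }

    def plan(goal, steps=None):
        """Backward-chain from a goal to an ordered list of primitive actions."""
        steps = [] if steps is None else steps
        for subgoal in RULES.get(goal, []):
            plan(subgoal, steps)
        if goal not in RULES:            # a primitive action, not a further goal
            steps.append(goal)
        return steps

    print(plan("eat ants"))
    # -> ['break off branch', 'strip leaves', 'use twig on ant nest']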

Representing time

For reactive action, time does not have to be represented, since the sequence of action is implicit in the rules. For plans, however, the order of actions is important: get twig, get ants, eat ants. The wrong order in time simply does not work.
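In code terms, the difference is between a set of reactive rules (order-free, triggered by whatever condition arises) and a sequence, where order is part of the representation itself. A small sketch, with prerequisites we have assumed for illustration:

    # Which steps presuppose which others (assumed for illustration).
    PREREQUISITES = {"get ants": {"get twig"}, "eat ants": {"get ants"}}

    def execute(plan):
        """Run an ordered plan; a step fails if its prerequisites are unmet."""
        done = set()
        for step in plan:
            if not PREREQUISITES.get(step, set()) <= done:
                return f"fails at '{step}'"
            done.add(step)
        return "succeeds"

    print(execute(["get twig", "get ants", "eat ants"]))    # -> succeeds
    print(execute(["get ants", "get twig", "eat ants"]))    # -> fails at 'get ants'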

Learning

More complex animals learn, i.e., present behaviour is a function of both genetics and prior experience. Without learning, all individuals with the same genetic makeup would exhibit exactly the same behaviour in equivalent circumstances.

Direct experience-based learning

A simple form of learning is captured well by neural networks. It assumes repeated similar conditions under which different alternative actions are chosen. Actions that lead to successful outcomes become more frequent. This model works very well where similar conditions recur frequently, and where optimizing the choices brings significant benefits.
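The essence of this model can be shown in a toy sketch (ours, not any particular neural-network formalism): keep a weight per action, choose actions in proportion to their weights, and strengthen whichever action led to a successful outcome.

    import random

    weights = {"approach": 1.0, "avoid": 1.0}    # initially no preference

    def choose():
        """Pick an action with probability proportional to its weight."""
        return random.choices(list(weights), weights=list(weights.values()))[0]

    def reinforce(action, success):
        """Actions that lead to successful outcomes become more frequent."""
        if success:
            weights[action] *= 1.5

    # Repeated similar conditions, where 'approach' happens to pay off:
    for _ in range(100):
        action = choose()
        reinforce(action, success=(action == "approach"))

    print(weights)    # the weight for 'approach' now dominates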

Other forms of experience-based learning, such as imprinting, are not as easily captured by the model above, and seem to fit an encoded representation better. In imprinting, young chicks see a target and then follow that target throughout a maturational period.
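A sketch of why imprinting fits a stored-code picture better (all details below are assumed): the target is encoded once, at first exposure, rather than strengthened gradually over many repeated trials.

    class Chick:
        def __init__(self):
            self.imprinted_target = None       # empty until first exposure

        def see(self, target):
            # One-shot encoding during the critical period.
            if self.imprinted_target is None:
                self.imprinted_target = target

        def follow(self, percept):
            return percept == self.imprinted_target

    chick = Chick()
    chick.see("mother hen")                    # the first target seen
    print(chick.follow("mother hen"))          # -> True
    print(chick.follow("fox"))                 # -> False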

Vicarious learning - through observation and mimicking

An action may be based on copying observed behaviour. The action may occur later in time, well after the behaviour was observed, so a memory trace is required. A symbolically encoded representation seems more likely than a neurally hard-wired action-plan memory. The second challenge is the mechanism by which the observation (perception) is translated into an action plan: somehow the observation "programs" the action. The simplest explanation might be that sequences of perceptions and actions are observed, somewhat like the simple "if smell, then act" type of rule, and that these simple sequences are then assembled into longer sequences with some kind of time-sequence representation.
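Such a stored sequence might look like the following sketch (the trace is our own example): observation records simple (percept, action) pairs, and the memory trace can be replayed well after the behaviour was observed.

    # Observation stores a time-ordered trace of (percept, action) pairs.
    observed_trace = [
        ("sees branch", "break off branch"),
        ("holds branch", "strip leaves"),
        ("holds twig", "poke ant nest"),
    ]

    def mimic(trace, current_percept):
        """Later, the stored trace 'programs' the action: look up the
        current percept in memory and replay the associated action."""
        for percept, action in trace:
            if percept == current_percept:
                return action
        return None

    # Much later, the same situation recurs; the trace supplies the action.
    print(mimic(observed_trace, "holds branch"))    # -> strip leaves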

Symbolic learning

An action may be based on being told what to do, i.e., on following instructions that are a symbolically encoded representation of the desired behaviour. This type of learning requires effective symbolic communication, and is likely restricted to very few species.
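A toy illustration of what instruction-following presupposes (the instruction syntax and the action vocabulary are ours): the instruction is decoded into an ordered plan, and decoding only works if instructor and learner share the symbolic code.

    KNOWN_ACTIONS = {"get twig", "get ants", "eat ants"}

    def follow_instructions(sentence):
        """Decode a symbolically encoded instruction into an ordered plan."""
        plan = [step.strip() for step in sentence.split("then")]
        unknown = [s for s in plan if s not in KNOWN_ACTIONS]
        if unknown:
            raise ValueError(f"symbols outside the shared code: {unknown}")
        return plan

    print(follow_instructions("get twig then get ants then eat ants"))
    # -> ['get twig', 'get ants', 'eat ants']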

Encoding and decoding mechanisms

The mechanisms discussed above include symbolic encoding, decoding, and self-programming with rules and meta-rules. To explore how this might work, we can simulate rule-to-action decoding through the use of animation. To explore mechanisms for the "if smell ... then action ..." type of rule, we first need to simulate language-based action (deliberate vs. impulsive action), and then perception resulting in language (action based on knowledge).
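A first step in such a simulation might decode a rule string into animation commands for a simulated creature, making the rule-to-action transformation explicit. Everything below (the rule syntax, the command set) is an assumption of ours, not a description of an existing simulator.

    # Hypothetical rule syntax: "if <percept> then <action>".
    # Hypothetical animation commands the decoder can emit.
    ANIMATION = {
        "walk toward": "play walk cycle, heading toward source",
        "walk away":   "play walk cycle, heading away from source",
        "run away":    "play run cycle, heading away from source",
    }

    def decode_rule(rule_text):
        """Transform a symbolically encoded rule into (percept, command)."""
        condition, action = rule_text[len("if "):].split(" then ")
        return condition.strip(), ANIMATION[action.strip()]

    percept, command = decode_rule("if smells really bad then run away")
    print(f"on percept '{percept}': {command}")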
