Salience

Salience, the selection of what is relevant and important to a given context and goal, is a key capability of intelligent systems.

This comes into play at different levels of cognition:

First, in autonomous data selection on input: which senses and features to process or ignore, and what level of importance to assign to them for processing. For example, most animals are wired to pay extra attention to fast-moving items in their visual field and to loud sounds. For AGI we have to assume that far more sensory input will be available than can (or should) reasonably be processed, and that relevant feature extractors, such as edge or shape detectors, must be prioritized. Some semi-automatic mechanism needs to perform this pre-selection. That mechanism should remain under overall high-level cognitive control that can preset its parameters, for example to bias it toward changes in color or pitch.
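
As a rough illustration, the sketch below shows how such a biasable pre-selection filter might work. The SalienceFilter class, its default weights, and the feature names are all hypothetical, chosen only to make the idea concrete.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str          # e.g. "motion", "loudness", "color_change"
    magnitude: float   # raw strength of the detected feature

@dataclass
class SalienceFilter:
    # Default weights mimic hard-wired biases: fast motion, loud sounds.
    weights: dict[str, float] = field(
        default_factory=lambda: {"motion": 2.0, "loudness": 2.0}
    )
    threshold: float = 1.0

    def bias(self, feature_name: str, weight: float) -> None:
        # High-level cognitive control presets a parameter, e.g. to
        # focus on changes in color or pitch.
        self.weights[feature_name] = weight

    def select(self, features: list[Feature]) -> list[Feature]:
        # Keep only features whose weighted salience clears the
        # threshold, most salient first.
        scored = [(self.weights.get(f.name, 1.0) * f.magnitude, f)
                  for f in features]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [f for score, f in scored if score >= self.threshold]

# Usage: bias the filter toward color changes, then pre-select input.
filt = SalienceFilter()
filt.bias("color_change", 3.0)
inputs = [Feature("motion", 0.2),
          Feature("color_change", 0.6),
          Feature("texture", 0.9)]
print([f.name for f in filt.select(inputs)])  # -> ['color_change']
```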

Once input has been appropriately selected and prioritized, pattern matching, categorization, and conceptualization mechanisms need to be selected according to contextual requirements. What matters currently? For example, are we trying to match incoming patterns against each other, or against some internal reference; are we interested in shape or texture patterns; or are we just interested in object collisions?
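
A minimal sketch of this kind of context-driven dispatch might look as follows. The matcher functions and context labels are invented for illustration; they stand in for real shape, texture, or collision detectors.

```python
from typing import Any, Callable

# Toy stand-ins for real matching mechanisms.
def mutual_match(patterns: list[Any]) -> bool:
    # Are the incoming patterns consistent with each other?
    return len(set(map(str, patterns))) <= 1

def reference_match(patterns: list[Any], reference: Any = "circle") -> bool:
    # Do the incoming patterns match an internal reference?
    return all(p == reference for p in patterns)

def collision_check(positions: list[Any]) -> bool:
    # Did any two objects end up at the same position?
    return len(positions) != len(set(positions))

# The current context ("what matters now") selects the mechanism.
MATCHERS: dict[str, Callable] = {
    "compare_inputs":    mutual_match,
    "match_reference":   reference_match,
    "detect_collisions": collision_check,
}

def select_matcher(context: str) -> Callable:
    return MATCHERS[context]

# Usage: in a context where we match inputs against each other.
matcher = select_matcher("compare_inputs")
print(matcher(["circle", "circle"]))  # -> True
```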

Higher-level goals also need to be selected and prioritized according to salience. What are we trying to achieve right now? What dependencies are there? What is most important in the current context?
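
One simple way to picture this is a goal store that only surfaces goals whose dependencies are met, ranked by contextual importance. The Goal record and the importance scores below are assumptions made purely for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    importance: float                       # salience in the current context
    depends_on: list[str] = field(default_factory=list)
    done: bool = False

def next_goal(goals: dict[str, Goal]) -> Goal | None:
    # A goal is actionable only when all of its dependencies are done;
    # among actionable goals, pick the most important one.
    actionable = [
        g for g in goals.values()
        if not g.done and all(goals[d].done for d in g.depends_on)
    ]
    return max(actionable, key=lambda g: g.importance, default=None)

goals = {
    "boil_water": Goal("boil_water", importance=0.4),
    "make_tea":   Goal("make_tea",   importance=0.9,
                       depends_on=["boil_water"]),
}
# make_tea is more important, but blocked by its dependency.
print(next_goal(goals).name)  # -> boil_water
```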

Finally, the overall architecture has to allow for consolidation and forgetting. What information or experience should be consolidated? What should be forgotten (or archived)?
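
The sketch below illustrates one possible policy, assuming a salience score built from use frequency and recency. The MemoryItem record, the half-life, and the thresholds are all illustrative assumptions, not a proposed standard.

```python
import time
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    content: str
    uses: int = 0
    last_used: float = field(default_factory=time.time)

    def salience(self, now: float, half_life: float = 3600.0) -> float:
        # Frequently and recently used items score high; the score
        # halves for every half_life seconds of disuse.
        recency = 0.5 ** ((now - self.last_used) / half_life)
        return self.uses * recency

def sweep(memory: list[MemoryItem],
          consolidate_at: float = 5.0,
          forget_below: float = 0.1):
    # Partition memory into items to consolidate, keep, or archive.
    now = time.time()
    consolidated, retained, archived = [], [], []
    for item in memory:
        score = item.salience(now)
        if score >= consolidate_at:
            consolidated.append(item)   # strengthen into long-term memory
        elif score < forget_below:
            archived.append(item)       # forget, or move to cold storage
        else:
            retained.append(item)
    return consolidated, retained, archived

# Usage: a well-used memory is consolidated; a stale one is archived.
memory = [MemoryItem("route to work", uses=40),
          MemoryItem("one-off phone number", uses=1,
                     last_used=time.time() - 7 * 24 * 3600)]
consolidated, retained, archived = sweep(memory)
print([m.content for m in consolidated])  # -> ['route to work']
print([m.content for m in archived])      # -> ['one-off phone number']
```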

An AGI needs to have mechanisms in place at each of these levels (and probably some others) to evaluate salience and to adjust cognition accordingly.