Sensor interpretation (SI) problems have often been approached in AI using blackboard frameworks, because the blackboard model supports both constructive problem solving and opportunistic control for dealing with uncertain data and uncertain problem-solving knowledge. Despite the power of the blackboard model, however, most blackboard-based interpretation systems have been limited to variations of incremental hypothesize-and-test.
The RESUN SI framework provides an alternative to conventional blackboard systems in that it supports the use of sophisticated strategies such as differential diagnosis. The RESUN framework has two key components: an evidential representation that includes explicit, symbolic encodings of the reasons why hypotheses are uncertain (the sources of uncertainty, or SOUs), and a script-based, incremental planner for control. Interpretation is viewed as an incremental process of gathering evidence to resolve particular sources of uncertainty. Control plans invoke actions that examine the symbolic SOUs associated with hypotheses and use the resulting information to post goals for resolving the uncertainty. These goals direct the system to expand methods appropriate for resolving the current sources of uncertainty in the hypotheses. The planner's refocusing mechanism makes it possible to postpone focusing decisions when there is insufficient information to make them, and it provides the opportunistic control capabilities of a blackboard system.
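The control cycle described above can be sketched as a small loop: each hypothesis carries symbolic SOUs, and the planner repeatedly examines one SOU, posts a goal, and expands a method to resolve it, refocusing when uncertainty remains. This is only an illustrative sketch, not the RESUN implementation; all names here (`Hypothesis`, the SOU labels, the method table, the FIFO focusing policy) are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    label: str
    belief: float                               # current confidence in the hypothesis
    sous: list = field(default_factory=list)    # symbolic sources of uncertainty

# Hypothetical evidence-gathering methods, keyed by the SOU type they resolve.
def seek_supporting_data(hyp):
    hyp.belief = min(1.0, hyp.belief + 0.2)     # stand-in for gathering supporting evidence

def rule_out_alternative(hyp):
    hyp.belief = min(1.0, hyp.belief + 0.3)     # stand-in for differential diagnosis

METHODS = {
    "partial-support": seek_supporting_data,
    "possible-alternative": rule_out_alternative,
}

def interpret(hypotheses, threshold=0.9):
    """Incrementally resolve SOUs until each hypothesis is believed strongly enough."""
    agenda = [h for h in hypotheses if h.sous]
    while agenda:
        hyp = agenda.pop(0)        # focusing decision (here: simple FIFO)
        sou = hyp.sous.pop(0)      # examine one symbolic source of uncertainty
        METHODS[sou](hyp)          # post a goal -> expand a resolving method
        if hyp.belief < threshold and hyp.sous:
            agenda.append(hyp)     # refocus: uncertainty remains, revisit later
    return hypotheses

track = Hypothesis("aircraft-track-7", belief=0.5,
                   sous=["partial-support", "possible-alternative"])
interpret([track])
print(round(track.belief, 2))  # -> 1.0
```

A real planner would choose which SOU to attack based on its type and the available methods, rather than in FIFO order; the point of the sketch is only that control is driven by the explicit SOUs rather than by a fixed hypothesize-and-test cycle.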
The RESUN framework has been implemented and experimentally evaluated using a simulated aircraft monitoring application. It forms the basis of the agents in the DRESUN testbed for distributed situation assessment. RESUN has also been used as the basic problem-solving architecture for IPUS, a testbed for research in knowledge-based signal processing/understanding. One of the main applications of IPUS is robotic audition (sound understanding).
"A New Framework for Sensor Interpretation: Planning to Resolve Sources of Uncertainty," N. Carver and V. Lesser, Proceedings of AAAI-91, 724--731, 1991.
"Blackboard-based Sensor Interpretation using a Symbolic Model of the Sources of Uncertainty in Abductive Inferences," N. Carver and V. Lesser, notes from the AAAI-91 workshop: Toward Domain-Independent Strategies for Abduction, 1991.
Sophisticated Control for Interpretation: Planning to Resolve Uncertainty, Ph.D. Thesis, Department of Computer Science, University of Massachusetts, 1990.
Norman Carver's home page.