Session chair: [Sophia]
Introduction to Multi-sensor Cognitive Systems (10 mins) [Fahad]
List of presentations:
- Data fusion from multiple sources (10 mins) [Leonid]
- Semiotic cognitive approach (10 mins) [Valerio]
- Performance evaluation of surveillance systems (10 mins) [Tahir]
- Discussion (20 mins)
Multi-sensor cognitive systems
A cognitive system is an intelligent system, inspired by the architecture of human cognition, that refines its rules of operation through continuous learning from sensory feedback. It is able to adapt its behavior to the environment and perform various tasks effectively and efficiently. According to Fuster, cognition comprises the functions of perception, memory, attention, intelligence and language. More specifically, cognitive systems may perform the following tasks, depending on the application of interest:
- Recognition and categorization
- Decision making and choice
- Perception and situation assessment
- Prediction and monitoring
- Problem solving and planning
- Reasoning and belief maintenance
- Execution and action
- Interaction and communication
A multi-sensor cognitive system, as the name implies, consists of multiple sensors arranged in space in a centralized, decentralized or distributed manner. This gives rise to several important research questions.
The first is how to integrate data from multiple sources (sensors) in order to expand the observation area beyond the capability of a single sensor, to improve the performance of the system, and to introduce redundancy (robustness).
Secondly, there is a need to reduce the operating costs of a multi-sensor system. For this, measurements from sensors that are correlated (close in space) can be exploited by automatic calibration methods.
Thirdly, as the development of a multi-sensor cognitive system is resource demanding, there is a need to model the operation of the sensors and the network as input to an optimization framework.
Finally, the system must satisfy multiple user requirements, some of which cannot be specified in advance. This gives rise to the problem of coordinating the interactions of the sensors.
The use of multiple sensors improves the robustness and reliability of a system. Moreover, it extends the coverage of a system both spatially and temporally while reducing uncertainty and ambiguity. This accounts for the improved performance of the system. In a human cognitive system, these tasks come naturally, but from an engineering and system design perspective they are considerably complicated. Thus, multi-sensor cognitive system design and development is an active area of research, covering topics such as:
- Multi-user / Multi-sensor systems
- Data and information fusion from multiple sources
- Automatic deployment and operation of sensors
- Modelling of sensors and network
- Coordination of networks of sensors
- Performance evaluation methods
Multi-user / Multi-sensor systems
A multi-sensor system is an integrated combination of a variety of sensors that gathers features or information of interest from the system's environment. Most multi-sensor/multi-user cognitive systems are application-specific and depend strongly on the design requirements of the system.
Multi-sensor cognitive systems involve the development of context-aware systems for applications such as imagery, object recognition, intelligent robotics [8, 9], multi-user cognitive radio for communication [2, 10], cognitive radar for remote sensing, intelligent surveillance systems [11, 12] for crowd behaviour analysis and monitoring, nuclear fusion device monitoring, vehicle safety and driver assistance, etc.
Intelligent Surveillance Systems
Surveillance and monitoring technologies aim at understanding what people do in a given environment. Video surveillance is one of the fastest growing sectors in the security market due to its wide range of potential applications, such as detecting crime in indoor or outdoor settings, monitoring the flow of large crowds through public spaces, analysing traffic behaviour, and supporting vehicle safety and driver assistance systems.
An intelligent surveillance system learns autonomously, and builds cognitive memories while continuously monitoring a scene from videos obtained by cameras along with information from other sensors installed in the monitored area. The aim of an automated visual surveillance system is to obtain the description of what is happening in a monitored area and to automatically take appropriate action like alerting a human supervisor, based on the perceived description.
Intelligent Robotics
Intelligent robots are programmed to take actions or make choices autonomously based on input from sensors.
Autonomous mobile robots are programmed to be aware of their real-world environment. Simultaneous Localization and Mapping (SLAM) is a widespread method used for this purpose; it exploits information from multiple sensors such as lasers, sonars, and proximity sensors. Likewise, humanoid robots acquire information from sensors such as gyroscopes and accelerometers for balance and stability control. Furthermore, depth and color information is exploited in cognitive robotics to build object recognition systems.
Cognitive Radio Networks
The cognitive radio network is a complex multiuser wireless communication system capable of emergent behaviour. It embodies the following functions [2, 10]:
- to perceive the radio environment (i.e., outside world) by empowering each user’s receiver to sense the environment on a continuous-time basis;
- to learn from the environment and adapt the performance of each transceiver (transmitter-receiver) to statistical variations in the incoming RF stimuli;
- to facilitate communication between multiple users through cooperation in a self-organized manner;
- to control the communication processes among competing users through the proper allocation of available resources;
- to create the experience of intention and self-awareness.
The primary objectives of a cognitive radio network are to provide highly reliable communication for all users of the network and to facilitate efficient utilization of the radio spectrum in a fair way.
Data and Information Fusion from Multiple Sources
The main purpose of data fusion from multiple sources is an enhancement of performance beyond the capabilities of the individual system components. More specifically, data fusion can be defined as the process of combining data from multiple sensors, which measure the same real-world phenomenon, to reliably obtain an accurate representation of this phenomenon. As real-world measurements often contain noise and in some situations do not provide signals of optimal quality, data fusion techniques were developed for integrating data from multiple sensors under the assumption that not all the sensors can fail at the same time. Strictly speaking, the assumptions of data fusion methods generally include:
- Homogeneous sensors (from signal-to-noise ratio point of view)
- Statistical independence among different sensor data
- A large number of samples, allowing fusion performance to be characterized asymptotically
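Under the independence assumption above, a minimal sketch of fusing redundant measurements of the same quantity is inverse-variance weighting (a standard illustrative technique; the sensor values and variances below are hypothetical, not taken from the text):

```python
def fuse(measurements, variances):
    """Inverse-variance weighted fusion of independent, unbiased
    measurements of the same real-world phenomenon."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * m for w, m in zip(weights, measurements)) / total
    fused_variance = 1.0 / total  # never larger than the best sensor's variance
    return estimate, fused_variance

# Two sensors observe the same phenomenon with different noise levels;
# the fused estimate weights the more reliable sensor more heavily.
est, var = fuse([10.2, 9.6], [0.5, 1.0])
```

Note how the fused variance (1/3 here) is smaller than either input variance, which is the formal sense in which fusion outperforms any individual component.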
Data fusion methods have found applications in a variety of areas: wireless sensor networks, geospatial information systems, bioinformatics, discovery science, intelligent transport systems, physiological computing, etc.
The process of data fusion can be explained through one of its applications, namely physiological computing. Physiological computing systems use real-time physiological data of users as input for a man-machine interaction interface. Many physiological signals sensed from the human body are used in physiological computing: cardiovascular, electrodermal, respiratory, and brain activities are commonly measured in physiological user interfaces, and skin temperature and muscle activity may also be monitored. The main purpose of physiological data collection is to determine the psychological state of the user from these data. The user's psychological state is thus the real-world phenomenon being observed with an array of physiological sensors. Physiological computing has many potential applications, for instance, determining cognitive load during learning or while piloting an airplane. It has also been used to automatically recognize the emotions of gamers during play. Psychological states of users correlate positively with their physiological signals; for example, higher values of skin conductance indicate states of high arousal.
Psychophysiological data fusion involves the integration of data from multiple physiological sensors into a feature vector and the assignment of a psychological label to it. The psychological label can be either categorical or continuous. Before the data fusion, however, it is generally advisable to complete several prerequisites. First, it is necessary to normalize the data, because physiological signals vary strongly with age, gender, and even time of day. Second, dimension reduction is necessary because the feature vector is likely to contain many elements that are correlated with each other. Data fusion is then commonly performed with a classification algorithm, whose effectiveness is usually judged by its accuracy. Popular classification methods employed in physiological computing include:
- k-nearest neighbor classifiers
- Naïve Bayes classifiers
- Discriminant analysis
- Support vector machines
- Classification trees
- Artificial neural networks
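As a minimal sketch of the first method in this list, the following assigns a categorical psychological label to a normalized feature vector via a k-nearest-neighbor vote. The feature names and training data are invented for illustration only:

```python
import math
from collections import Counter

def knn_classify(train, labels, query, k=3):
    """Assign the majority label of the k nearest (Euclidean)
    training examples to the query feature vector."""
    dists = sorted(
        (math.dist(features, query), label)
        for features, label in zip(train, labels)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical normalized features: [skin conductance, heart rate]
train = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]]
labels = ["high arousal", "high arousal", "low arousal", "low arousal"]
state = knn_classify(train, labels, [0.85, 0.75])  # "high arousal"
```

The normalization and dimension-reduction prerequisites mentioned above would be applied to the feature vectors before this classification step.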
Coordination of the sensor network
Sensors are connected together to build a network, very often communicating wirelessly. Three approaches are possible:
- Centralized approach: all the sensors transmit their measurements to a central processing point performing global data fusion; control information is passed from the central processor back to the sensors that also act as actuators. The approach has several disadvantages: (i) high latency, due to communication from distant nodes to the central processor and back; (ii) communication bottlenecks in the proximity of the central processor; (iii) a single point of failure (the central processor).
- Hierarchical approach: the network is partitioned into clusters and each cluster is assigned a clusterhead, responsible for local data fusion. The local processors communicate with each other to coordinate themselves, which reduces latency and communication overhead. Clusterheads can be identical to all other sensors (robustness: a clusterhead can be replaced if it fails) or have additional processing and communication capabilities.
- Distributed approach: there is no multi-hop communication; sensors communicate only with neighboring sensors. Typically, such an approach requires a smaller amount of data exchange and relies on iterative algorithms.
In the distributed approach, different distributed algorithms have been proposed:
- Average consensus is a synchronous distributed averaging algorithm: at each iteration, nodes exchange their current state with their neighbors. The states received from neighbors and the node's previous state are assigned weights such that the average of the states is preserved. After some iterations, the nodes converge to the average of the initial states.
- Randomized gossip is a distributed averaging algorithm based on pairwise averaging: each node transmits with a given probability and randomly selects one of its neighbors with which to exchange its current state and replace it with the pairwise average.
- Broadcast randomized gossip modifies randomized gossip so that nodes send a broadcast message instead of a unicast message. Receivers of this message do not reply to the sender, but update their state to a convex combination of the sender's state and their own.
The poster "Distributed averaging in wireless networks" (VT) offers a performance study of these algorithms under realistic physical constraints.
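The average-consensus iteration can be sketched as follows, using Metropolis weights, a standard average-preserving choice (the three-node path topology and the sensor readings are illustrative, not from the text):

```python
# Average consensus on a path graph 0-1-2. Metropolis weights keep the
# update matrix doubly stochastic, so the average is preserved at every
# iteration and all nodes converge to the mean of the initial states.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
x = {0: 3.0, 1: 6.0, 2: 9.0}                 # initial sensor readings
deg = {i: len(n) for i, n in neighbors.items()}

def consensus_step(x):
    new = {}
    for i in x:
        update = x[i]
        for j in neighbors[i]:
            # Metropolis weight: w_ij = 1 / (1 + max(deg_i, deg_j))
            w = 1.0 / (1 + max(deg[i], deg[j]))
            update += w * (x[j] - x[i])
        new[i] = update
    return new

for _ in range(100):
    x = consensus_step(x)
# All three nodes are now (numerically) at the average 6.0
```

Note that each node uses only its neighbors' states, matching the distributed approach above: no multi-hop communication and no central processor.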
Performance evaluation methods
The development of a multi-sensor cognitive system leads to the need to devise procedures and methods to evaluate different aspects of its performance. This includes the evaluation of the methods proposed to solve the abovementioned problems and challenges in a multi-sensor cognitive network. For example, this may involve the performance assessment of the data and information fusion algorithms used to combine data received from multiple sensors. It may also involve evaluating the effectiveness of camera placement techniques in terms of their ability to determine the number of cameras required to maximize the coverage of the monitored scene, and its effect on the energy resources of the network. Additionally, it may include evaluating the strengths and weaknesses of using different approaches (centralized, decentralized and distributed) for the coordination of sensors in a network.
Students: Tahir Nawaz, Fahad Tahir, Valerio Targon, Leonid Ivonin, Sophia Bano
Supervisor: Prof. Carlo Regazzoni
- S. Haykin, “Cognitive Dynamic Systems”, Proceedings of the IEEE, vol. 94, no. 11, pp. 1910-1911, Nov. 2006.
- S. Haykin, “Cognitive Dynamic Systems: Radar, Control, and Radio”, Proceedings of the IEEE, vol. 100, no. 7, pp. 2095-2103, Jul. 2012.
- P. Langley, J. E. Laird, and S. Rogers, “Cognitive architectures: Research issues and challenges”, Cognitive Systems Research, vol. 10, no. 2, pp. 141-160, Jun. 2009.
- J. M. Fuster, “Cortex and Mind, Unifying Cognition”, Oxford University Press, 2003.
- M. Taj and A. Cavallaro, “Distributed and decentralized multi-camera tracking”, IEEE Signal Processing Magazine, Vol. 28, Issue 3, May 2011.
- Q. M. Jonathan Wu, and Z. Q. Ji, “An improved artificial immune algorithm with application to multiple sensor systems”, Elsevier Information Fusion, vol. 11, no. 2, pp. 174–182, 2010.
- Ren C. Luo and Michael G. Kay, Multisensor Integration and Fusion for Intelligent Machines and Systems, pp. 7-10, Ablex Publishing Corp., NJ, 1995
- S. Chen, Y. Li, and N. M. Kwok, “Active vision in robotic systems: a survey of recent developments”, International Journal of Robotics Research, vol. 30, no. 11, pp. 1343-1377, 2011.
- D. Klimentjew, J. Zhang, “Adaptive sensor-fusion of depth and color information for cognitive robotics”, 2011 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 957-962, 2011.
- S. Haykin, “Cognitive Radio: Brain-Empowered Wireless Communications”, IEEE Journal on Selected Areas in Communications, vol. 23, no. 2, pp. 201-220, 2005.
- A. Dore, M. Pinasco, L. Ciardelli, C. Regazzoni, “A bio-inspired system model for interactive surveillance applications”, Journal of Ambient Intelligence and Smart Environments, vol. 3, no. 2, pp. 147-163, 2011.
- L. Bixio, L. Ciardelli, M. Ottonello, and C. S. Regazzoni, “Distributed cognitive sensor network approach for surveillance applications”, Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance, AVSS'09, pp. 232-237, 2009.
- Simone Chiappino, Pietro Morerio, Lucio Marcenaro, Elisabetta Fuiano, Giulia Repetto, Carlo S. Regazzoni, "A multi-sensor cognitive approach for active security monitoring of abnormal overcrowding situations", 15th International Conference on Information Fusion, FUSION2012, July 9th, 2012, Singapore.
- V. Martin,V. Moncada, J. M. Travere, T. Loarer, F. Bremond, G. Charpiat, and M. Thonnat, “A cognitive vision system for nuclear fusion device monitoring”, Springer Computer Vision Systems, pp. 163-172, 2011.
- P. Boyraz, X. Yang, J. H. L. Hansen, “Computer Vision Systems for “Context-Aware” Active Vehicle Safety and Driver Assistance”, Springer Digital Signal Processing for In-Vehicle Systems and Safety, pp. 217-227, 2012.
- S. C. A. Thomopoulos, “Sensor selectivity and intelligent data fusion”, Proceedings of the 1994 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI ’94), pp. 529–537, 1994.
- D. Novak, M. Mihelj, and M. Munih, “A survey of methods for data fusion and system adaptation using autonomic nervous system responses in physiological computing”, Interacting with Computers, vol. 24, no. 3, pp. 154–172, May 2012.