Session 3

From ICE PhD Wiki

User centered innovation and applications / Gaming

User-centered innovation (UCI) suggests that the way to create innovations is by starting from users: by getting closer to them and better understanding their current needs. User-centered innovation is not simply about developing and building products and services that people want. It is about listening to people, getting to the root of what their problems are or will be in the future, and then providing innovative solutions to those problems. UCI incorporates research methods that fuse practices from several different areas, such as anthropology, ergonomics, and usability studies. The use of participatory design techniques in UCI is also encouraged. Examples of such techniques are workshops and various creative tools that allow users to express their ideas freely and in this way contribute to product development. In UCI, design research and design practice are closely tied, in an effort to create a better experience and achieve maximum usability for the user. This means that users are, in some way, put in the position of co-designers of the solutions to their problems, next to the expert designers and engineers who can bring their ideas to life through prototypes. Participatory, user-centered design research techniques provide the framework for understanding and diagnosing the real issues and problems behind users’ experiences of the products, systems and services being developed. To innovate successfully, designers, researchers and developers have to act upon this framework: to imagine a better system, solution or experience, and to implement it through an iterative prototyping process. User-centered innovation is the main approach taken in the development of user applications for interactive and cognitive environments.

Topics:

  • Modeling interactive behavior in emergent ICT environments
  • Quality of life improvement and user experience assessment
  • Embedded user-centered design engineering
  • User adaptive/oriented systems and applications
  • Health and elderly applications of ICT systems
  • Platforms to study long-term interactions
  • Application driven datasets and testing environments for health-entertainment intelligent designs
  • Mixed responsive environments for care and fun through serious gaming
  • Social and physical serious gaming for rehabilitation/prevention
  • Connecting the virtual and the real

Students: Huang-Ming Chang, Danu Pranantha, Boris Takac, John Brown, Pongpanote Gongsook, Taufique Sayeed, Pasquale Ferrara

Supervisor: Andreu Català

Research and design practices

User-centered innovation is carried out in practice mainly through the user-centered design (UCD) process, a design philosophy with an associated set of methods. The basic UCD process involves the following steps[S.S]:

  • Know the users
  • Understand their needs and goals
  • Conceptual design
  • Prototyping & evaluation
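These steps are not a one-shot sequence; in practice they form a loop in which prototypes are evaluated with users and refined until the concept works. A minimal sketch of that loop (all names, scores and thresholds here are hypothetical, not a prescribed method):

```python
# Hypothetical sketch of the iterative UCD loop: prototype, evaluate with
# users, refine, repeat until a target usability score is reached.

def ucd_loop(user_needs, evaluate, max_iterations=5, target_score=0.8):
    """Iterate design -> prototype -> evaluation, feeding feedback back in."""
    concept = {"needs": list(user_needs), "revision": 0}
    for i in range(max_iterations):
        prototype = {"concept": dict(concept),
                     "fidelity": "low" if i == 0 else "high"}
        score, feedback = evaluate(prototype)   # the user-testing step
        if score >= target_score:
            return prototype, score
        concept["needs"] = feedback             # refine the concept
        concept["revision"] += 1
    return prototype, score

# Toy evaluation: each round of user testing resolves one outstanding need.
def toy_evaluate(prototype):
    needs = prototype["concept"]["needs"]
    return 1.0 - 0.2 * len(needs), needs[1:]

proto, score = ucd_loop(["reminders", "alerts", "privacy"], toy_evaluate)
```

In a real project the `evaluate` step is actual user testing with real participants; the toy version above only stands in for that feedback cycle.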

Product development begins with a vision of a product, which includes a vision of the users for that product. A vision, however, is not enough to start design. Every product has different users. Some products have many different types of users. Even new versions of old products have a changing user population[1]. At this stage, it is important to identify the target group for the envisioned product, look into existing documentation about this group, and then conduct investigations at all relevant levels. "Know Thy User" is a mantra for usability specialists: understand your audience and design for them. Don’t make assumptions about your users. Go out and meet them. Gather data. Perform user testing. Understand how they differ, what their goals are, what their needs are, and how they think and feel.[2]

To understand users' needs and goals more closely, a user/audience analysis is conducted. This includes listing the categories within the user group, followed by the construction of user profiles and personas. This process, known as user modelling, is primarily focused on collecting data directly from users. Different techniques can be employed to collect the necessary data, such as:

  • Contextual inquiry 
  • Ethnographic field studies 
  • Focus groups 
  • General interviews 
  • Depth interviews 
  • Questionnaires 

For user-centered technological innovation, it is very important to understand how people make use of existing technology in their lives, in order to gain insight into and inspire future technology. Field studies observing current behavior are an important source of information in this context.

Conceptual design is the process of transforming the user requirements or needs observed in the previous steps into a conceptual model. An often repeated definition of the conceptual model is given by Preece et al. in [20] as "a description of the proposed system in terms of a set of integrated ideas and concepts about what it should do, behave and look like, that will be understandable by the users in the manner intended." The conceptual design process implies an open-minded approach in which different solutions are proposed and analyzed. It is important to refine ideas through multiple iterations and not to move to the final solution too quickly. Keeping an eye open for alternatives at every moment is encouraged, and fast prototyping is recommended where possible. The usual design techniques at this stage include:

  • Brainstorming 
  • Card sorting
  • Sketching 
  • Storyboards (in interaction design) 

A prototype is an early sample or model built to test a concept or process, or to act as a thing to be replicated or learned from. Why make prototypes? Prototypes are built because evaluation and feedback are an important part of UCD. Stakeholders can see, hold, and interact with a prototype more easily than with a document or a drawing. Prototypes help team members working on a product to communicate more effectively, and they allow for reflection on the product, which is an important part of design. Prototypes answer some of the questions that arise during the design process and support designers in choosing between alternatives, if they exist. The prototyping process ends with a model of the product; the words prototype and model are sometimes used interchangeably. There are five basic categories of prototype:[3]

  • Proof-of-Principle Prototype (test some aspects of design)
  • Form Study Prototype (explores the basic size, look and feel of a product without simulating the actual function)
  • User Experience Prototype (tests interaction and is primarily used to support user focused research)
  • Visual Prototype (capture the intended design aesthetic and simulates visual appearance)
  • Functional Prototype (simulates the final design, aesthetics, materials and functionality of the intended design)


Personal Opinions

-JB -In my case, I'm taking user-centered a little further than it usually goes, by focussing on how the human brain perceives the world. To me this is a logical step towards truly user-centered innovation, as opposed to innovation that considers the user's needs or preferences, but is still built around a basic architectural scheme that centers on the engineer's or the company's traditional techniques and practices. I'm not saying that user-centered innovation isn't happening, just that, despite our best efforts, it is usually still centered on something other than the user. In some cases it goes as far as considering cultural and some physiological needs [18], but I still think it needs to meet basic perceptual needs to be really human centered.

What do you all think? Is that controversial enough to generate a discussion?

- BT -What interests me is user-centered innovation vs. user-centered design. What are the differences when we talk about these two? In his book "Design Driven Innovation: Changing the Rules of Competition by Radically Innovating What Things Mean", Roberto Verganti classifies innovation into three main types: user-centered, technology-driven (technology push) and design-driven innovation. As he himself writes in the introduction, it is a book about business strategies intended for managers, not about design. I got the feeling that the term innovation can be used as an overarching term when talking about business strategies for future product development. Then, in the execution of all of those strategies, it is possible to use user-centered design as a general methodology that helps achieve the goal of maximum product usability for the user. I have talked with a few designers and so far nobody has been able to give me a clear answer. It is one possible topic for the debate. -JB -That sounds like a good debate to me, Boris.

User centered innovation applications

-BT -What are the needs of people that interactive and cognitive environments can address? So far work has been done on: physical well-being (health applications), emotional well-being (maybe not so much), entertainment (games and media), and learning (serious games). What else can we add to this list?

-JB - How about assistance in the performance of both day-to-day and specialised tasks? This could involve things as simple as scheduling reminders (calendars) or performance reminders (the "door is open" alert in a car), or as complex as the task-specific systems used for automatic flight correction in all modern passenger aircraft.

-PF - e-Health is the meeting point between user-centered technologies and health care, and it can be employed to offer patients many services. An example of such a service is monitoring elderly persons or patients staying at home. The main advantages are a better quality of life, because a familiar environment is more appreciated by elderly persons and patients in general; enhancing and restoring health; preventing disease; limiting illness (e.g. diabetes); and reduced health care costs. A plethora of e-Health systems have been built: a first example is a diabetes home telemedicine system [44]. Other technologies focus on assisting elderly persons with the difficulties of aging, rather than on a particular disease. The EasyLine+ project [45] developed an AmI kitchen with advanced white-goods prototypes close to market, which would increase the autonomy of the elderly and people with disabilities in their everyday activities. Another possible outcome that I can suggest is the monitoring of people with Parkinson’s disease: evaluating their difficulty in moving their body would be an index of disease progression. Furthermore, an alarm system could be implemented for when the patient is in difficulty. Any other possible applications?


Health and Aging

The breadth of the applications in this area can be seen from the projects currently undertaken by the EU. Here are some examples:

Ubiquitous care system to support independent living (CONFIDENCE)[4] - Mobility problems

The proposed system deals with the fear of falling in elderly people. The user wears tags whose positions are determined using radio technology. This information is used to reconstruct the user's posture, and based on that an alarm can be triggered.
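A heavily simplified sketch of that pipeline (coarse posture from tag heights, then an alarm decision) might look like the following; all tag names, heights and thresholds are invented for illustration, not the CONFIDENCE project's actual algorithm:

```python
# Hypothetical sketch of a CONFIDENCE-style pipeline: tag positions ->
# coarse posture -> alarm decision. Heights in metres; thresholds invented.

def classify_posture(tag_heights):
    """Classify posture from the vertical positions of body-worn tags."""
    head = tag_heights["head"]
    waist = tag_heights["waist"]
    if head < 0.5 and waist < 0.4:        # whole body near the floor
        return "lying"
    if head < 1.2:                        # head lowered but body not flat
        return "sitting"
    return "standing"

def should_alarm(postures, min_lying_samples=3):
    """Trigger an alarm after a sustained 'lying' reading (possible fall)."""
    recent = postures[-min_lying_samples:]
    return len(recent) == min_lying_samples and all(p == "lying" for p in recent)

# Example stream: a person standing, then suddenly on the floor.
stream = [
    {"head": 1.7, "waist": 1.0},
    {"head": 0.3, "waist": 0.2},
    {"head": 0.3, "waist": 0.2},
    {"head": 0.3, "waist": 0.2},
]
postures = [classify_posture(s) for s in stream]
alarm = should_alarm(postures)
```

Requiring several consecutive "lying" samples before alarming is one simple way to avoid false alarms when the person merely bends down.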

Advanced multi-paRametric Monitoring and analysis for diagnosis and Optimal management of epilepsy and Related brain disorders (ARMOR)[5] - Neural disorders

Reliable diagnosis of epilepsy requires state of the art monitoring and communication technologies providing real-time, accurate and continuous brain and body multi-parametric data measurements, suited to the patient's medical condition and normal environment and facing issues of patient and data security, integrity and privacy. This project manages and analyses a large number of already acquired and new multimodal and advanced technology data from brain and body activities of epileptic patients and controls (MEG, multichannel EEG, video, ECG, GSR, EMG, etc) aiming to design a more holistic, personalized, medically efficient and economical monitoring system.

Autonomy and social inclusion through mixed reality Brain-Computer Interfaces: Connecting the disabled to their physical and social world (BRAINABLE)[6] - Disability

The project is focused on the development of an ICT-based human computer interface (HCI) composed of Brain-Neural-Computer-Interaction (BNCI) sensors combined with affective computing and virtual environments. BNCI information is used for indirect interaction, such as changing interface or overall system parameters based on measures of boredom, confusion, frustration, or information overload. BrainAble's HCI is complemented by an intelligent Virtual Reality-based user interface with avatars and scenarios that can help disabled people to move around in their wheelchairs, interact with all sorts of devices, create self-expression assets using music, pictures and text, communicate online and offline with other people, play games to counteract cognitive decline, and get trained in new functionalities and tasks.

COntinuous Multi-parametric and Multi-layered analysis Of DIabetes TYpe 1 & 2 (COMMODITY12)[7] - Chronic diseases

The system exploits multi-parametric data to provide healthcare workers and patients with clinical indicators for the treatment of diabetes types 1 and 2. COMMODITY12 focuses on the interaction between diabetes and cardiovascular diseases. The system develops a four-layered platform:

  • Body Area Network Layer (BAN): this layer employs sensors from the BodyTel PHS and additional Bluetooth sensors to monitor the patient's physiological signals. This layer performs multi-parametric aggregation of data for the Smart Hub layer.
  • The Smart Hub Layer (SHL): the BodyTel PHS at this layer receives aggregated data from the BAN and applies machine learning to classify the signals and provide indications about abnormalities in the curves. SHL communicates with DRR over the cell-phone network.
  • The Data Representation And Retrieval Layer (DRR): this layer, based on the Portavita PHS to manage EHR, interfaces to the SHL and utilises existing medical data to perform information retrieval and produce structured information for the agents at the AIL.
  • The Artificial Intelligence Layer (AIL): this layer uses the DRR layer to retrieve structured background knowledge of the patient for intelligent agents applying diagnostic reasoning to the patient's condition.
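The four layers can be pictured as a simple chained pipeline. The sketch below is a toy illustration only: the layer interfaces, the glucose threshold and the output strings are invented, not the project's actual API.

```python
# Toy sketch of the COMMODITY12 four-layer flow: BAN -> Smart Hub -> DRR -> AI.
# All interfaces and the classification rule are hypothetical.

def ban_layer(raw_readings):
    """Body Area Network: aggregate raw sensor readings per parameter."""
    aggregated = {}
    for name, value in raw_readings:
        aggregated.setdefault(name, []).append(value)
    return aggregated

def smart_hub_layer(aggregated):
    """Smart Hub: flag abnormalities (here, a crude glucose threshold)."""
    glucose = aggregated.get("glucose_mmol_l", [])
    abnormal = any(v > 11.0 for v in glucose)     # invented threshold
    return {"aggregated": aggregated, "abnormal": abnormal}

def drr_layer(hub_output, ehr):
    """Data Representation & Retrieval: join signals with EHR context."""
    return {**hub_output, "history": ehr}

def ai_layer(structured):
    """AI layer: diagnostic reasoning over signals plus patient background."""
    if structured["abnormal"] and "diabetes_type_2" in structured["history"]:
        return "review medication with clinician"
    return "no action"

readings = [("glucose_mmol_l", 7.2), ("glucose_mmol_l", 12.5), ("heart_rate", 78)]
advice = ai_layer(drr_layer(smart_hub_layer(ban_layer(readings)),
                            ehr=["diabetes_type_2"]))
```

The point of the layering is that each stage only consumes the previous stage's output, so sensors, classifiers and reasoning agents can evolve independently.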

A Computational Distributed System to Support the Treatment of Patients with Major Depression (HELP4MOOD)[8] - Psychological problems

Research shows that psychological therapies for Major Depression (MD) can be delivered effectively without face-to-face contact: computerised cognitive behavioural therapy (CCBT) is suitable for self-guided treatment in the individual's own home. This project proposes to significantly advance the state of the art in computerised support for people with MD by monitoring mood, thoughts, physical activity and voice characteristics, prompting adherence to CCBT, and promoting behaviours in response to monitored inputs. These advances will be delivered through a Virtual Agent (VA) which can interact with the patient through a combination of enriched prompts, dialogue, body movements and facial expressions. Monitoring will combine existing (movement sensors, psychological ratings) and novel (voice analysis) technologies as inputs to a pattern-recognition-based decision support system for treatment management.

Knowledgable SErvice Robots for Aging (KSERA) [9] - Entertainment and wellbeing

The project provides:

  • a mobile assistant to follow and monitor the health and behavior of a senior
  • useful communication (video, internet) services including needed alerts to caregivers and emergency personnel
  • a robot integrated with smart household technology to monitor the environment and advise the senior or caregivers of anomalous or dangerous situations

The problems to be addressed by the research and field trials include: robot mobile behavior (machine navigation and following a target person through a variable and cluttered environment), ubiquitous monitoring of physiological and behavioral data through direct measurements and interaction with household sensors, and human-robot interaction including new developments in shared environmental processing, affective technology, and adaptable multimodal interfaces. A single robotic device hosting entertainment and communication aids, and at the same time providing an assistant that monitors the environment and the user's behavior, contributes to the user's health and quality of life (QoL).

Personal Health Device for the Remote and Autonomous Management of Parkinson’s Disease (REMPARK)[10] - Neurodegenerative disease

The aim of this project is to develop a Personal Health care System (PHS) for the improved management of Parkinson’s disease. To achieve this goal, a system will be developed that identifies the motor status in real time, provides gait guidance to the patient, and includes a user interface and a server for long-term monitoring to keep track of the evolution of the patient’s condition. The long-term analysis will allow doctors to accurately measure the motor state and prescribe medication in the proper doses.
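A toy illustration of the real-time motor-status idea (the window size, thresholds and labels are invented, not REMPARK's actual algorithm): classify each window of wearable-accelerometer magnitudes and trigger gait guidance when movement collapses.

```python
# Toy sketch of real-time motor-status detection from a wearable accelerometer,
# in the spirit of REMPARK. Window size and thresholds are invented.

def motor_status(accel_magnitudes, low=0.05, high=0.6):
    """Classify one window of accelerometer magnitude samples (in g)."""
    mean_activity = sum(accel_magnitudes) / len(accel_magnitudes)
    if mean_activity < low:
        return "freezing"        # candidate gait-freeze episode
    if mean_activity > high:
        return "dyskinetic"      # excessive involuntary movement
    return "normal"

def gait_guidance(status):
    """Decide whether to start the auditory gait-cueing aid."""
    return "start rhythmic cue" if status == "freezing" else "no cue"

window = [0.02, 0.03, 0.01, 0.04]      # near-motionless window
status = motor_status(window)
action = gait_guidance(status)
```

A real system would of course use learned classifiers over richer features rather than a mean threshold, but the structure (windowed classification feeding a guidance decision) is the same.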

Multispectral Image Diagnostics of Skin Tumors (MIDST) [11]

The aim of the project is to develop a multispectral imaging device to evaluate the risk that a pigmented skin lesion could be a melanoma, i.e. to evaluate a suspected skin lesion objectively. This is important for two reasons:

  • Early diagnosis and excision are the only tools we have to successfully treat patients;
  • Many people underestimate the risk of melanoma.

The end users of such a device are physicians (i.e. dermatologists and histopathologists). They are involved in the project in order to provide an interpretation and a new semeiotics of lesions and structures.

Calm Technology

JB- The following text is based on a series of invited talks I gave in Australia in July 2012, starting with an "expert talk" at ICME in Melbourne, and continuing at the Queensland University of Technology in Brisbane and at NICTA in Sydney. A fourth invited talk has been scheduled as an IEEE event at the University of Malta on October 19th, 2012. As a result, some portions of this text have been or will be published elsewhere.

Mark Weiser and John Seeley Brown predicted in the early 1990s that the internet and coming advances in distributed computing would lead to an era of Ubiquitous Computing (UC) [2] [3] [4] [5]. Some say that it has, but what we now call UC is not what Weiser and Brown described, and an important part of their message has been largely forgotten, ignored or misunderstood.

“The most potentially interesting, challenging and profound change implied by the ubiquitous computing era is a focus on calm.”

Calm Technology (CT) has come to mean a number of things, usually focused on the task of helping the user to be(come) calm [6][7][8][9][10][11][12]. That is not the intent expressed by Weiser and Brown. They describe technology that behaves in a calm manner in order to support human-computer interaction based on the way that humans naturally take in information. CT is based on the two ways that humans process information. Trying to focus on more than one thing at once is stressful, but humans can take in much more information if it is presented peripherally, in a way that allows the individual to judge whether or not to give it more attention. Basic physiology and neuroanatomy show that we naturally examine things closely while at the same time using other senses to keep track of subtle changes in our environment, warning us when the peripheral becomes important. What’s more, the process of plucking things from the periphery, examining them and then deciding how to re-sort them is a comforting activity. It makes us feel at home and in control.

I propose that CT has been all but abandoned because it is harder to design and implement than traditional multi-media interaction. This means that, instead of deliberate calm, we have constant text message alerts, ring tones and email pop-ups demanding the immediate attention of everyone within earshot. Imagine instead that, when appropriate, your cell phone would subtly let you know who is trying to reach you without pulling your attention away from the task at hand. It could be as gentle as the sound of familiar footsteps or soft laughter, and it could be as harsh as a scream for help, depending on the nature of the interruption.
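One way to make that cell-phone example concrete is a policy that maps the nature of an interruption to how peripherally or focally it is presented. The sketch below is purely illustrative; the urgency levels, cue names and function are invented, not an existing API.

```python
# Illustrative calm-notification policy: route a message to a peripheral or
# focal channel depending on its urgency. Levels and cues are invented.

PERIPHERAL_CUES = {
    "low": "soft familiar sound, no visual change",
    "medium": "gentle glow at the edge of the display",
}

def present(message, urgency, user_is_busy):
    """Choose how to surface a message without needlessly seizing attention."""
    if urgency == "high":
        return ("focal", f"ALERT: {message}")     # the 'scream for help' case
    if user_is_busy:
        # Stay in the periphery; the user decides whether to attend to it.
        return ("peripheral", PERIPHERAL_CUES[urgency])
    return ("focal", message)

channel, cue = present("Call from Maria", urgency="low", user_is_busy=True)
```

The key design choice, in Weiser and Brown's sense, is that the system defaults to the periphery and only claims focal attention when the interruption genuinely warrants it.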

I further propose that, since hardware and software are now more than good enough, and rich multimedia can be customized, stored, accessed and processed quickly and cheaply, it is time for rich and textured human-computer interaction: interaction that doesn't treat the user like one machine in a series, but behaves instead with all of the range and depth of input and output systems in the natural world.

What do you think, Huang-Ming?

-HM - I think it's great. So I'll just continue from your abstract and write two paragraphs according to my previous mail. Please have a look and see if anything needs correction.

It has been found that the human mind assigns more cognitive resources to the task at hand while blocking useless information from the environment [13]. The main task users are engaged in, and the senses they use for it, exploit most of their cognitive resources, while other information distributed in the environment becomes the background of the context, which is what we call the peripheral. The peripheral of the context is constantly changing. The proportion of cognitive resources assigned to the environment changes in response to users’ conscious or non-conscious thoughts and feelings. Although users automatically ignore information not related to their current task in their conscious thinking, there is growing evidence that their unconscious minds still receive and process information they are not aware of [14]. Furthermore, contrary to conventional wisdom, when users confront tasks that are beyond their capacity for cognitive processing, unconscious mental processes can deliver better performance than conscious thinking [15]. This evidence shows great potential for using multimodal channels in the peripheral of the context to deliver a better user experience in a nonintrusive and indirect manner.

However, the crucial challenge at the core of this direction is how to activate interaction between users and the peripheral, since users are not focusing on the information in the background of the context. Would it be possible for users to perceive useful information even when they are not aware of it? One interesting idea proposes that emotion might be the representation of the output of a higher level of mental information processing, the unconscious mind [16]. It has also been shown that people can still be influenced by their emotions even when these emotions are unconscious [15] [17]. As the very first step in developing calm technology systems, we propose to start with affective interaction between users and the intelligent environment. We argue that a calm technology system should be capable of sensing users’ emotions and reacting in accordance with the user's current main task. By understanding users’ use of their senses, the environment should utilize the proper channels to deliver feedback in the peripheral. As a result, users are able to interact with the environment using secondary senses, without being interrupted, and receive important information in a smoother and more elegant way.

-JB -That's great Huang-Ming! Very, very interesting!! Two ideas come to mind right away:

1) that we should be very, very careful in deliberately manipulating human emotion in a subconscious manner, because we rarely carry out user tests that truly reflect the breadth and depth of use, and;

2) that we are already doing this in video games, through the use of (sub)sonics, colours, storylines and other means of subliminal affective stimulation.

Huang-Ming, I think you and I could discuss this for hours, and I hope it will lead to some very interesting work together once your remaining contract and work permit issues are resolved.

-HM

1) This concern actually relates to another session. Since users are not aware of, or cannot infer, why they feel a given emotion, it is extremely difficult to observe and evaluate this interaction via users' subjective reports. To the best of my knowledge, two approaches might work: 1. qualitative research: this usually requires a professional psychologist to participate, but some methods are broadly used, especially experience sampling [24] and the day reconstruction method [25]; 2. using physiological signals (heart rate, skin conductance, facial detection, respiration, etc.), which is actually what some of our ICE PhD candidates are working on.

2) Very true. It has been debated for decades whether emotions are universal or culturally varied [26]. Elfenbein and Ambady [26] proposed an interesting idea, "emotion dialects": emotions are universal in general, but people from the same cultural background can interpret and express emotions to each other better than cross-cultural pairs. Video games and commercial movies are apparently aimed at selling globally, and some of them are remarkably successful even with similar storylines. A very good example is the hero's journey [27]: a huge number of games and movies are about heroes. Extending Carl Jung's theory of archetypes [28], Campbell [27] formulated the stages of the hero's journey, which can be seen in ancient Greek myths and even in modern superheroes. It might be a very promising starting point to investigate more subtle emotions beyond the basic emotions [29], such as joy, anger, disgust, etc.

(I have to spend some time on my teaser. I would try to keep on this discussion if I have extra time.)

-JB I'm off to watch the football, but it's been a real pleasure reading your thoughts on this, Huang-Ming. I'm looking forward to working with you.

Gaming for Education and Health

Subtopics:

A) Mixed responsive environments for care and fun through serious gaming
B) Social and physical serious gaming for rehabilitation/prevention
C) Connecting the virtual and the real

PG - Game and VR

If someone asks me ‘what should we gain from playing a game?’, I would ask one question in return: ‘Do we have to participate in a real-world situation when we learn something?’ The answer could be both 'yes' and 'no'. On the one hand it is 'yes' if what we want to practice relies heavily on our physical body, such as running or martial arts. On the other hand it is 'no' if we only need to understand a non-action-related subject or a highly abstract topic such as philosophy.

I believe that gamers gain something from playing, but what they gain depends on which side we look at. For example, First Person Shooter (FPS) and simulator games have long been used as training tools by the US Marines [32]. One game that is widely used is America's Army. This kind of game is also used to train firefighters and rescue teams. Another benefit of using games as training tools is that they keep inexperienced learners out of dangerous situations. However, like a double-edged sword, there are questions about the violence a player might absorb while playing such games. A non-violent game like Tetris can encourage a player to think about where to put a block and whether it fits the space below, and a shooting game like Galaga can train eye-hand coordination, even though the player uses only three fingers to control the spaceship: go left, go right, and fire a missile.

Games give players fun and entertainment, but not all games can give players the feeling of presence. Would it not be better to merge a game with something that could give the player immersion? Virtual reality could be what we are looking for. Virtual reality is a medium that supports a high level of presence: the feeling of being in a world that exists outside the self; more importantly, knowledge obtained in virtual reality can later be applied in real life [40]. Now that we have better graphics cards, teraflops of computing power, faster network connections, grid computing and more, these technologies bring virtual reality to an achievable state. Virtual reality can be used as a rehabilitation or treatment tool for people with disabilities. Schneider’s team found that virtual reality is an effective distraction that draws patients' attention away from chemotherapy, making them more tolerant of the treatment [41]. Virtual reality is also used in psychological settings: for example, it can present cognitive tasks targeting attention performance better than traditional methods. Its better control of the perceptual environment provides reliable attention assessment. In addition, it provides more consistent stimulus presentation and more precise scoring [42]. Bioulac et al. used a virtual reality classroom to measure the performance of children with and without ADHD [43], and Passig used virtual reality to teach time perception to children with mild to moderate mental retardation [44].

Gaming in virtual reality could give us a double benefit: fun while learning, and knowledge transfer. This combination would connect the virtual to the real, and it is also used in rehabilitation settings. Reflecting on our first question: it is not only entertainment we gain from playing a game; we can extend its potential to gain knowledge and to use it as a medical treatment.

-DP Learning is frequently seen as an awkward activity that is necessary to acquire needed knowledge and skills. Playing, on the other hand, is considered a fun and engaging activity, which gives people strong motives to do it voluntarily. Therefore, Prensky [31] argued that conventional learning is outdated and that the modern way of learning is through real gameplay, so that students learn while having fun. Since computer games are abundant in terms of types, objectives, and effects, Zyda [32] used the term serious games to define games which provide a mental contest, played with a computer in accordance with specific rules, for government or corporate training, education, health, public policy, and strategic communication. Thus, any game built to be more than pure entertainment can be considered a serious game. In relation to user-centered applications, serious games as part of pedagogy should be approached from the perspectives of content design and content delivery. First, content should be designed to suit the learning objectives [33]. Moreover, its mechanics and interaction should allow users to internalize the learning material with less cognitive load [34]. This relies heavily on the content creators and the instructors, who are responsible for capturing the learners’ requirements.

Second, content delivery should conform to users’ current knowledge and skills, which means it should encourage users to stay within the flow [35]. This leads to personalized delivery, which can be regulated using a formal model derived from: 1) performance-based measures, or 2) usability metrics. Performance-based measures for productivity applications are not applicable here, since the focus is the emotional experience of the players regardless of performance [36]. Usability metrics based on a theoretical mental model are somewhat relevant to this issue, though they actually reflect more of the interaction experience. Moreover, despite the significant progress in human factors research over the last decade, emotion, which is equally essential to design [37], is still not well understood, especially when the objectives are to challenge and entertain the users. These human-factors-based formal models have practical merit, yet they are tied to the game used in designing the metrics. Furthermore, designing metrics able to effectively capture the emotional states of various players is a challenging task, and their merit may diminish as the number of players grows. An alternative is the use of Brain-Computer Interfaces (BCIs), which are currently becoming more advanced and accessible [38]. A BCI captures the brainwaves emitted by a human, where each particular wave reflects a different emotional state. One of the first works on brainwave classification in games [39] showed that using EEG and peripheral signals (GSR and cardiovascular) can achieve approximately 63% overall accuracy in recognizing three different emotional states, i.e. boredom, engagement, and anxiety, with the engagement state having the lowest accuracy at 39%. Though it still seems some way off, the possibility of implementing this technology is quite attractive.
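The classification step described in [39] can be illustrated with a minimal nearest-centroid classifier over toy EEG/GSR features. The feature names, values and training points below are invented for illustration; they are not the study's data or method.

```python
# Minimal nearest-centroid classifier over toy physiological features,
# illustrating the kind of emotional-state classification described in [39].
# Feature vectors are (eeg_beta_power, gsr_level); all values are invented.

import math

# Invented training examples for three states: boredom, engagement, anxiety.
TRAINING = {
    "boredom":    [(0.2, 0.1), (0.3, 0.2)],
    "engagement": [(0.6, 0.4), (0.7, 0.5)],
    "anxiety":    [(0.5, 0.9), (0.6, 0.8)],
}

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(features):
    """Assign the emotional state whose centroid is nearest in feature space."""
    return min(CENTROIDS,
               key=lambda label: math.dist(features, CENTROIDS[label]))

state = classify((0.65, 0.45))   # high EEG beta power, moderate GSR
```

Real systems use far richer features and learned models, but the structure is the same: map a physiological feature vector to the closest known emotional state.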

Publications Reflecting the State of the Art For Our Topics

--- Modeling interactive behavior in emergent ICT environments ---

-JB [1] R. Wojciechowski, "Modeling Interactive Augmented Reality Environments," Interactive 3D Multimedia Content, pp. 137-170, 2012.

-BT R. Casas et al., "User Modelling in Ambient Intelligence for Elderly and Disabled People," in Proceedings of the 11th International Conference on Computers Helping People with Special Needs (ICCHP '08), 2008, pp. 114-122

--- Quality of life improvement and user experience assessment ---

--- Embedded user-centered design engineering ---

-JB [1] K. A. Jeong, R. W. Proctor and G. Salvendy, "Smart‐home interface design: Layout organization adapted to Americans' and Koreans' cognitive styles," Human Factors and Ergonomics in Manufacturing & Service Industries, 2012

--- User adaptive/oriented systems and applications ---

--- Health and elderly applications of ICT systems ---

-BT O. A. Blanson Henkemans et al., "Medical Monitoring for Independent Living: User-Centered Smart Home Technologies for Older Adults", Med-e-Tel 2007, April 2007.

--- Platforms to study long-term interactions ---

-JB [1] E. Karapanos, J. Jain and M. Hassenzahl, "Theories, methods and case studies of longitudinal HCI research," in Proceedings of the 2012 ACM Annual Conference Extended Abstracts on Human Factors in Computing Systems Extended Abstracts, 2012, pp. 2727-2730

--- Application driven datasets and testing environments for health-entertainment intelligent designs ---

--- Mixed responsive environments for care and fun through serious gaming ---

--- Social and physical serious gaming for rehabilitation/prevention ---

-JB [1] R. Steinmetz and S. Göbel, "Challenges in Serious Gaming as Emerging Multimedia Technology for Education, Training, Sports and Health," Advances in Multimedia Modeling, pp. 3-3, 2012.

-PG [2] S. M. Schneider, C. K. Kisby, and E. P. Flint, “Effect of virtual reality on time perception in patients receiving chemotherapy,” Supportive Care in Cancer, vol. 19, no. 4, pp. 555–564, 2010.

-PG [3] T. D. Parsons, T. Bowerly, J. G. Buckwalter, and A. A. Rizzo, “A controlled clinical comparison of attention performance in children with ADHD in a virtual reality classroom compared to standard neuropsychological methods.,” Child Neuropsychol, vol. 13, no. 4, pp. 363–81, Jul. 2007.

--- Connecting the virtual and the real ---

-JB [1] B. Koles and P. Nagy, "Who is portrayed in Second Life: Dr. Jekyll or Mr. Hyde? The extent of congruence between real life and virtual identity," Journal of Virtual Worlds Research, vol. 5, 2012.

-PG [2] G. Riva, F. Mantovani, and A. Gaggioli, “Presence and rehabilitation: toward second-generation virtual reality applications in neuropsychology,” J Neuroeng Rehabil, vol. 1, no. 1, p. 9, 2004.

REFERENCES

[1] G. Riva, F. Mantovani, and A. Gaggioli, “Presence and rehabilitation: toward second-generation virtual reality applications in neuropsychology,” J Neuroeng Rehabil, vol. 1, no. 1, p. 9, 2004.

[2] M. Weiser and J. S. Brown, “Designing calm technology”, Xerox PARC, December 21, 1995. Available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.135.9788&rep=rep1&type=pdf

[3] M. Weiser and J. S. Brown, “Designing calm technology,” Powergrid J., vol. 1, no. 1, July, 1996.

[4] M. Weiser and J. S. Brown, “The Coming Age of Calm Technology”, October 5, 1996. Available at: http://www.cs.ucsb.edu/~ebelding/courses/284/papers/calm.pdf

[5] M. Weiser, "Hot Topics: Ubiquitous Computing," Computer, Oct. 1993, pp. 71-72.

[6] N. Davies and H.-W. Gellersen, "Beyond prototypes: challenges in deploying ubiquitous systems," IEEE Pervasive Computing, vol. 1, no. 1, Jan-Mar 2002, pp. 26-35

[7] G. D. Abowd, E. D. Mynatt, “Charting past, present, and future research in ubiquitous computing”, ACM Transactions on Computer- Human Interaction 7(1), 2000, pp. 29–58.

[8] Y. Rogers, “Moving on from Weiser's vision of calm computing: engaging UbiComp experiences”. In Ubicomp 2006 Proceedings, LNCS 4206, Dourish P, Friday A (eds). Springer-Verlag: Heidelberg; 2006, pp. 404–421

[9] G. D. Abowd, E. D. Mynatt, and T. Rodden, "The human experience [of ubiquitous computing]," Pervasive Computing, IEEE , vol.1, no.1, Jan-Mar 2002, pp. 48- 57.

[10] A. Tugui, “Calm technologies in a multimedia world”, in Ubiquity, ACM, vol. 5, no. 4, 2004.

[11] A. Tugui, “Calm technologies: A new trend for educational technologies”, World Future Review, Spring 2011, pp. 64-73.

[12] J. Fiaidhi, “Towards Developing Installable e-Learning Objects utilizing the Emerging Technologies in Calm Computing and Ubiquitous Learning”, International Journal of u- and e- Service, Science and Technology, vol. 4, no. 1, 2011, pp. 1-12.

[13] S. B. Most, D. J. Simons, B. J. Scholl, R. Jimenez, E. Clifford, and C. F. Chabris, “How not to be seen: the contribution of similarity and selective ignoring to sustained inattentional blindness,” Psychological Science, vol. 12, no. 1, pp. 9–17, 2001.

[14] P. Winkielman and K. C. Berridge, “Unconscious emotion,” Current Directions in Psychological Science, vol. 13, no. 3, pp. 120–123, Jun. 2004.

[15] A. A. Dijksterhuis, M. W. Bos, L. F. Nordgren, and R. B. van Baaren, “On making the right choice: the deliberation-without-attention effect.,” Science, vol. 311, no. 5763, pp. 1005–7, 2006.

[16] M. Rauterberg, “Emotions: The voice of the unconscious,” in Entertainment Computing-ICEC 2010, 2010, pp. 205–215.

[17] J. F. Kihlstrom, S. Mulvaney, B. A. Tobias, and I. P. Tobis, “The Emotional Unconscious,” in Counterpoints: Cognition and Emotion, no. September, E. Eich, J. F. Kihlstrom, G. H. Bower, J. P. Forgas, and P. M. Niedenthal, Eds. New York: Oxford University Press, 2000, pp. 30–86.

[18] F. Lima and L. K. Araújo, "Infant feeding: the interfaces between interaction design and cognitive ergonomics in user-centered design," Work: A Journal of Prevention, Assessment and Rehabilitation, vol. 41, pp. 1086-1093, 2012.

[19] R. Wojciechowski, "Modeling Interactive Augmented Reality Environments," Interactive 3D Multimedia Content, pp. 137-170, 2012.

[20] K. A. Jeong, R. W. Proctor and G. Salvendy, "Smart‐home interface design: Layout organization adapted to Americans' and Koreans' cognitive styles," Human Factors and Ergonomics in Manufacturing & Service Industries, 2012

[21] E. Karapanos, J. Jain and M. Hassenzahl, "Theories, methods and case studies of longitudinal HCI research," in Proceedings of the 2012 ACM Annual Conference Extended Abstracts on Human Factors in Computing Systems Extended Abstracts, 2012, pp. 2727-2730

[22] R. Steinmetz and S. Göbel, "Challenges in Serious Gaming as Emerging Multimedia Technology for Education, Training, Sports and Health," Advances in Multimedia Modeling, pp. 3-3, 2012.

[23] B. Koles and P. Nagy, "Who is portrayed in Second Life: Dr. Jekyll or Mr. Hyde? The extent of congruence between real life and virtual identity," Journal of Virtual Worlds Research, vol. 5, 2012.

[24] C. Napa Scollon, C. Kim-Prieto, and E. Diener, “Experience Sampling: Promises and Pitfalls, Strength and Weaknesses,” Assessing Well-Being, vol. 39, pp. 157–180, 2009.

[25] D. Kahneman, A. B. Krueger, D. A. Schkade, N. Schwarz, and A. A. Stone, “A survey method for characterizing daily life experience: the day reconstruction method.,” Science, vol. 306, no. 5702, pp. 1776–80, Dec. 2004.

[26] H. A. Elfenbein and N. Ambady, “On the Universality and Cultural Specificity of Emotion Recognition: A Meta-Analysis,” Psychological Bulletin, vol. 128, no. 2, pp. 203–235, Mar. 2002.

[27] J. Campbell, The Hero with A Thousand Faces, 3rd ed. New World Library, 2008, p. 432.

[28] C. G. Jung, The archetypes and the collective unconscious, 2nd ed. Princeton, N.J.: Princeton University Press, 1968, p. 451.

[29] P. Ekman, “Are there basic emotions?,” Psychological Review, vol. 99, no. 3, pp. 550–553, 1992.

[30] J. Preece, Y. Rogers, and H. Sharp, Interaction Design: Beyond Human-Computer Interaction, New York: Wiley, p. 40, 2002

[31] M. Prensky, “The Motivation of Gameplay: The Real Twenty-first Century Learning Revolution,” On the Horizon, 10(1), pp. 5-11, 2002.

[32] M. Zyda, “From visual simulation to virtual reality games,” Computer, Vol. 38, no. 9, pp. 25-32, 2005.

[33] D. Pranantha, C. Luo, F. Bellotti, A. de Gloria. “Designing Contents for a Serious Game for Learning Computer Programming with Different Target Users,” Proceedings of IIIS International Conference on Design and Modeling in Science, Education, and Technology: DeMSET 2011, Orlando, Florida, USA, Nov-Dec. 2011.

[34] J. Sweller, “Cognitive load during problem solving: Effects on learning,” Cognitive Science, vol. 12, pp. 257-285, 1988.

[35] M. Csikszentmihalyi, Flow: The Psychology of Optimal Experience. New York: Harper Perennial, 1990.

[36] R. J. Pagulayan, K. Keeker, D. Wixon, R. Romero, and T. Fuller, “Player-centered design in games,” in J. Jacko and A. Sears (Eds.), Handbook for Human-Computer Interaction in Interactive Systems, NJ: Lawrence Erlbaum Associates, Inc., pp. 883-906, 2002.

[37] D.A. Norman, “Emotion and design: attractive things work better,” Interactions 9, 36-42, 2002.

[38] A. Nijholt and D. Tan, “Brain-Computer Interfacing for Intelligent Systems,” Intelligent Systems, IEEE , vol.23, no.3, pp.72-79, May-June 2008.

[39] G. Chanel, C. Rebetez, M. Bétrancourt, and T. Pun, “Emotion Assessment From Physiological Signals for Adaptation of Game Difficulty,” IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 41, no. 6, pp. 1052-1063, Nov. 2011

[40] G. Riva, F. Mantovani, and A. Gaggioli, “Presence and rehabilitation: toward second-generation virtual reality applications in neuropsychology,” J Neuroeng Rehabil, vol. 1, no. 1, p. 9, 2004.

[41] S. M. Schneider, C. K. Kisby, and E. P. Flint, “Effect of virtual reality on time perception in patients receiving chemotherapy,” Supportive Care in Cancer, vol. 19, no. 4, pp. 555–564, 2010.

[42] T. D. Parsons, T. Bowerly, J. G. Buckwalter, and A. A. Rizzo, “A controlled clinical comparison of attention performance in children with ADHD in a virtual reality classroom compared to standard neuropsychological methods.,” Child Neuropsychol, vol. 13, no. 4, pp. 363–81, Jul. 2007.

[43] S. Bioulac, S. Lallemand, A. Rizzo, P. Philip, C. Fabrigoule, and M. P. Bouvard, “Impact of time on task on ADHD patients’ performances in a virtual classroom,” European Journal of Paediatric Neurology, no. 0, pp. 1–8, Jan. 2012.

[44] D. Passig, “Improving the Sequential Time Perception of Teenagers with Mild to Moderate Mental Retardation with 3D Immersive Virtual Reality (IVR),” Journal of Educational Computing Research, vol. 40, no. 3, p. 18, 2009.

[45] O. A. Blanson Henkemans, K. E. Caine, W. A. Rogers, A. D. Fisk, M. A. Neerincx, and B. de Ruyter, “Medical Monitoring for Independent Living: User-centered design of smart home technologies for older adults,” in Proceedings of the Med-e-Tel Conference for eHealth, Telemedicine and Health Information and Communication Technologies, 2007.

[46] R. Casas, R. B. Marín, A. Robinet, A. R. Delgado, A. R. Yarza, J. Mcginn, R. Picking, and V. Grout, “User modelling in ambient intelligence for elderly and disabled people,” in Proceedings of the 11th International Conference on Computers Helping People with Special Needs (ICCHP '08), LNCS 5105, Springer-Verlag, 2008, pp. 114-122.