Search results
A review of augmented reality applications for history education and heritage visualisation
Augmented reality is a versatile technology with applications in many fields, including recreation and education. Continual technological development over the last decade has drastically improved the viability of augmented reality projects, now that most of the population possesses a mobile device capable of supporting the required graphics rendering. Education in particular has benefited from these advances, and many lines of research now explore how augmented reality can be used in schools. For Holocaust education, however, there has been remarkably little research into how augmented reality can enhance its delivery or impact. The purpose of this study is to consider the following questions: How is augmented reality currently being used to enhance history education? Does the use of augmented reality assist in developing long-term memories? Is augmented reality capable of conveying the emotional weight of historical events? Is augmented reality appropriate for teaching a complex subject such as the Holocaust? To address these questions, multiple studies have been analysed for their research methodologies and how their findings may assist the development of Holocaust education.
Interval relations in lexical semantics of verbs
Numerous temporal relations of verbal actions have been analysed in terms of various grammatical means of expressing verbal temporalisation such as tense, aspect, duration and iteration. Here the temporal relations within verb semantics, particularly ordered pairs of verb entailment, are studied using Allen's interval-based temporal formalism. Their application to the compositional visual definitions in our intelligent storytelling system, CONFUCIUS, is presented, including the representation of procedural events, achievement events and lexical causatives. In applying these methods we consider both language modalities and visual modalities since CONFUCIUS is a multimodal system.
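The interval-based temporal formalism mentioned in this abstract can be made concrete with a small sketch. The function below classifies two time intervals into one of Allen's thirteen qualitative relations; the function name and interval encoding are illustrative assumptions, not taken from the CONFUCIUS system.

```python
# Minimal sketch of Allen's interval algebra: classify the qualitative
# relation between two intervals a = (a1, a2) and b = (b1, b2).
# The relation names follow Allen's standard terminology; the helper
# itself is an illustrative assumption, not CONFUCIUS code.

def allen_relation(a, b):
    """Return one of Allen's 13 relations, assuming a1 < a2 and b1 < b2."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:
        return "before"
    if b2 < a1:
        return "after"
    if a2 == b1:
        return "meets"
    if b2 == a1:
        return "met-by"
    if a1 == b1 and a2 == b2:
        return "equals"
    if a1 == b1:
        return "starts" if a2 < b2 else "started-by"
    if a2 == b2:
        return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2:
        return "during"
    if a1 < b1 and b2 < a2:
        return "contains"
    # Remaining cases are partial overlaps.
    return "overlaps" if a1 < b1 else "overlapped-by"
```

For verb entailment pairs, such a classifier lets one state, for example, that the interval of "snore" lies *during* the interval of "sleep": `allen_relation((1, 3), (0, 5))` yields `"during"`.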
A perception-based emotion contagion model in crowd emergent evacuation simulation
With the increasing number of emergencies, crowd simulation technology has attracted wide attention in recent years. Past emergencies have shown that individuals are easily influenced by others' emotions during an evacuation, which makes people more likely to aggregate and increases security risks. Existing evacuation models that do not consider emotion are therefore unsuitable for describing crowd behavior in emergencies. We propose a perception-based emotion contagion model and use multi-agent technology to simulate crowd behavior. Navigation points are introduced to guide the movement of the agents. Based on the proposed model, a prototype simulation system for crowd emotion contagion is developed. Comparative simulation experiments verify that the model can effectively deduce evacuation times and crowd emotion contagion. The proposed model could serve as an assistive analysis method for crowd management in emergencies.
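The core idea of perception-based contagion, where each agent's emotion is pulled toward that of the agents it can perceive, can be sketched in a few lines. The perception radius, contagion rate, and decay constant below are illustrative assumptions, not the parameters from this paper.

```python
import math

# Illustrative sketch of one perception-based emotion contagion step.
# Each agent perceives neighbours within `radius`, moves its emotion
# toward the local mean, then decays toward calm. All constants and
# the agent encoding are assumptions for illustration only.

def contagion_step(agents, radius=2.0, rate=0.3, decay=0.05):
    """agents: list of dicts with 'pos' (x, y) and 'emotion' in [0, 1]."""
    new_emotions = []
    for a in agents:
        neighbours = [b["emotion"] for b in agents
                      if b is not a and math.dist(a["pos"], b["pos"]) <= radius]
        e = a["emotion"]
        if neighbours:
            # Contagion: shift toward the mean perceived emotion.
            e += rate * (sum(neighbours) / len(neighbours) - e)
        # Natural decay toward a calm state.
        e = max(0.0, e - decay)
        new_emotions.append(e)
    # Synchronous update so all agents perceive the same previous state.
    for a, e in zip(agents, new_emotions):
        a["emotion"] = e
    return agents
```

One step with a panicked agent next to a calm one raises the calm agent's emotion while the panicked agent's emotion drifts downward, which is the qualitative behavior such models aim to reproduce.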
Prosocial video game as an intimate partner violence prevention tool among youth: A randomised controlled trial
Evidence demonstrates that exposure to prosocial video games can increase players' prosocial behaviour, prosocial thoughts, and empathic responses. Prosocial gaming has also been used to reduce gender-based violence among young people, but the use of video games to this end, as well as evaluations of their effectiveness, remains rare. The objective of this study was to assess the effectiveness of a context-specific, prosocial video game, Jesse, in increasing affective and cognitive responsiveness (empathy) towards victims of intimate partner violence (IPV) among children and adolescents (N = 172, age range 9–17 years, M = 12.27, SD = 2.26). A randomised controlled trial was conducted in seven schools in Barbados. Participants were randomly assigned to an experimental (prosocial video game) or control (standard school curriculum) condition; the experimental and control groups enrolled 86 participants each. Girls and boys in the experimental condition, but not their counterparts in the control condition, recorded a significant increase in affective responsiveness after the intervention. This change was sustained one week after game exposure. No significant effects were recorded for cognitive responsiveness. Findings suggest that Jesse is a promising new IPV prevention tool among girls and boys, which can be used in educational settings.
Virtual human animation in natural language visualisation
Simulating the motion of Virtual Reality (VR) objects and humans has seen important developments in the last decade. However, generating realistic virtual human animation remains a major challenge, even though applications are numerous, from VR games to medical training. This paper proposes several methods for animating virtual humans, including blending simultaneous animations of various temporal relations across multiple animation channels, minimal visemes for lip synchronisation, and space sites on virtual human and 3D object models for object grasping and manipulation. We present our work in our natural language visualisation (animation) system, CONFUCIUS, and describe how the proposed approaches are employed in CONFUCIUS' animation engine. © 2007 Springer Science+Business Media B.V.
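The "minimal visemes" idea mentioned in this abstract rests on the observation that many phonemes share the same visible mouth shape, so lip synchronisation only needs a handful of keyframes. The grouping below is a common simplification used for illustration; it is an assumption, not the exact viseme set used in CONFUCIUS.

```python
# Illustrative minimal viseme table: many phonemes collapse onto a few
# mouth shapes. The grouping and phoneme symbols are assumptions chosen
# for illustration, not the CONFUCIUS viseme inventory.

MINIMAL_VISEMES = {
    "rest":      ["sil"],               # silence
    "closed":    ["p", "b", "m"],       # lips pressed together
    "teeth-lip": ["f", "v"],            # lower lip to upper teeth
    "open":      ["aa", "ae", "ah"],    # open jaw
    "round":     ["ao", "uw", "ow", "w"],  # rounded lips
    "wide":      ["iy", "ey", "eh"],    # spread lips
}

# Invert the table for phoneme lookup.
PHONEME_TO_VISEME = {p: v for v, ps in MINIMAL_VISEMES.items() for p in ps}

def visemes_for(phonemes):
    """Map a phoneme sequence to viseme keyframes, merging repeats."""
    keyframes = []
    for p in phonemes:
        v = PHONEME_TO_VISEME.get(p, "open")  # default shape for unknowns
        if not keyframes or keyframes[-1] != v:
            keyframes.append(v)
    return keyframes
```

For example, the phoneme sequence for a word like "Pam" (`["p", "aa", "m"]`) reduces to the keyframes `["closed", "open", "closed"]`, which is all a lip-sync channel needs to drive.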
A Novel Speech to Mouth Articulation System for Realistic Humanoid Robots
A significant ongoing issue in realistic humanoid robots (RHRs) is inaccurate speech-to-mouth synchronisation. Even the most advanced robotic systems cannot authentically emulate the natural movements of the human jaw, lips and tongue during verbal communication. These visual and functional irregularities have the potential to propagate the Uncanny Valley Effect (UVE) and reduce speech understanding in human-robot interaction (HRI). This paper outlines the development and testing of a novel Computer Aided Design (CAD) robotic mouth prototype with buccinator actuators for emulating the fluid movements of the human mouth. The robotic mouth system incorporates a custom Machine Learning (ML) application that measures the acoustic qualities of speech synthesis (SS) and translates this data into servomotor triangulation for triggering jaw, lip and tongue positions. The objective of this study is to improve current robotic mouth design and provide engineers with a framework for increasing the authenticity, accuracy and communication capabilities of RHRs for HRI. The primary contributions of this study are the engineering of a robotic mouth prototype and the programming of a speech processing application that achieved 79.4% syllable accuracy, 86.7% lip synchronisation accuracy and a 0.1 s speech-to-mouth articulation differential.
An Information Perception-Based Emotion Contagion Model for Fire Evacuation
In fires, people can easily panic, and panic leads to irrational behavior and irreparable tragedy. Making contingency plans for crowd evacuation in fires is therefore of great practical significance. However, existing studies of crowd simulation have paid much attention to crowd density but little to the emotional contagion that may cause panic. Based on settings for information space and information sharing, this paper proposes an emotional contagion model for crowds in panic situations. With the proposed model, a behavior mechanism is constructed for agents in the crowd and a prototype simulation system is developed. Experiments are carried out to verify the proposed model. The results show that the spread of panic relates not only to crowd density and individual comfort level, but also to people's prior knowledge of fire evacuation. The model provides a new way to support safety education and evacuation management, making it possible to avoid and reduce unsafe factors in a crowd at the lowest cost.
Design and development of a spatial mixed reality touring guide to the Egyptian museum
Many public services and entertainment industries utilise Mixed Reality (MR) devices to develop highly immersive and interactive applications, and recent advancements in MR processing have prompted the tourism and events industry to invest in and develop commercial applications. The museum environment provides an accessible platform for MR guidance systems, taking advantage of the ergonomic freedom of spatial holographic Head-Mounted Displays (HMDs). MR systems in museums can enhance the typical visitor experience by presenting interactive historical visualisations alongside related physical artefacts and displays. Current MR guidance research primarily focuses on visitor engagement with specific content. This paper describes the design and development of a novel museum guidance system based on immersion and presence theory. The approach examines the influence of interactivity, spatial mobility, and perceptual awareness of individuals within MR environments. The developmental framework of a prototype MR tour guide named MuseumEye incorporates the sociological needs, behavioural patterns, and accessibility of the user. This study aims to create an alternative tour guidance system that enhances the visitor experience and reduces the need for human tour guides in museums. The data-gathering procedure examines the functionality of the MuseumEye application in conjunction with pre-existing pharaonic exhibits in a museum environment, using a qualitative questionnaire sampling 102 random visitors to the Egyptian Museum in Cairo. Results indicate a high rate of positive responses to the MR tour guide system and to the functionality of AR HMDs in a museum environment. This outcome reinforces the suitability of the touring system for enhancing the visitor experience in museums, galleries and cultural heritage sites.
A Comparison of Immersive and Non-Immersive VR for the Education of Filmmaking
Nowadays, both IVR (Immersive VR) and NIVR (Non-Immersive VR) have been adopted by filmmakers and used in filmmaking education, but few studies have examined their differences in supporting learning. This article compares the two forms of technology as educational tools for learning filmmaking and offers suggestions on how to choose between them. Two applications with the same purpose and content were developed using IVR and NIVR technologies respectively, and a within-subjects experiment was conducted using the two versions as experimental material. Thirty-nine subjects participated, and the quantitative measures included presence, motivation and usability. SPSS was used for data analysis. The statistical results, together with interview reports, showed that both technologies led to positive learning experiences. IVR performed better in presence (especially the “sensory & realism” and “involvement” subscales) and intrinsic motivation (especially the “enjoyment” subscale), while NIVR was more accessible to the public and may provide more complex and powerful functions through a sophisticated GUI. In conclusion, both technologies can support the learning of filmmaking effectively when chosen for appropriate educational missions.
Augmented Reality in Holocaust Museums and Memorials
Augmented reality (AR) is a new medium with the potential to revolutionize education in both schools and museums by offering methods of immersion and engagement that would not be attainable without technology. Utilizing augmented reality, museums can combine the atmosphere of their buildings and exhibits with interactive applications to create an immersive environment, changing the way audiences experience them and enabling additional historical perspective-taking. Holocaust museums and memorials are candidates for augmented reality exhibits; however, using this technology for them is not without concerns due to the sensitive nature of the subject. Ethically, should audiences be immersed in a setting like the Holocaust? How is augmented reality currently being used within Holocaust museums and memorials? What measures should be taken to ensure that augmented reality experiences are purely educational and neither disrespectful to the victims nor a cause of secondary trauma? These are the questions that this chapter seeks to answer in order to further develop the field of augmented reality for Holocaust education. To achieve this, previous AR apps in Holocaust museums and memorials have been reviewed, and a series of studies on the usage of AR for Holocaust education have been examined to identify the ethical considerations that must be made and the ramifications of utilizing AR technology to recreate tragic periods of history.
The multimodal turing test for realistic humanoid robots with embodied artificial intelligence
Alan Turing developed the Turing Test as a method to determine whether artificial intelligence (AI) can deceive human interrogators into believing it is sentient by competently answering questions at a confidence rate of 30%+. However, the Turing Test is concerned with natural language processing (NLP) and neglects the significance of appearance, communication and movement. The theoretical proposition at the core of this paper: ‘can machines emulate human beings?’ is concerned with both functionality and materiality. Many scholars consider the creation of a realistic humanoid robot (RHR) that is perceptually indistinguishable from a human as the apex of humanity’s technological capabilities. Nevertheless, no comprehensive development framework exists for engineers to achieve higher modes of human emulation, and no current evaluation method is nuanced enough to detect the causal effects of the Uncanny Valley (UV) effect. The Multimodal Turing Test (MTT) provides such a methodology and offers a foundation for creating higher levels of human likeness in RHRs for enhancing human-robot interaction (HRI).
Immersive Storytelling in Augmented Reality: Witnessing the Kindertransport
Although hardware and software for Augmented Reality (AR) have advanced rapidly in recent years, there is a paucity of research on the design of immersive storytelling in augmented and virtual realities, especially in AR. To fill this gap, we designed and developed a HoloLens-based immersive experience for the National Holocaust Centre and Museum in the UK to tell visitors the Kindertransport story. We propose an interactive narrative strategy, an input model for the Immersive Augmented Reality Environment (IARE), a pipeline for asset development, and the design of a character behavior and interactive props module, and we provide guidelines for developing immersive storytelling in AR. In addition, evaluations were conducted in the lab and in situ at the National Holocaust Centre and Museum, and participants' feedback was collected and analysed.
Interactive Narrative in Augmented Reality: An Extended Reality of the Holocaust
In this research, the author describes a new narrative medium, the Immersive Augmented Reality Environment (IARE), using HoloLens. Aarseth's narrative model [17] and the available input designs in IARE were reviewed and summarised. Based on these findings, The AR Journey, a HoloLens app offering interactive narrative for moral education, was developed and assessed. Qualitative methods of interview and observation were used and the results were analysed. In general, narrative in IARE proved valid for moral education purposes, and findings including a valid narrative structure, an input model, and design guidelines are reported.
User experience design for mixed reality: A case study of HoloLens in museum
In recent years, applications of mixed reality (MR) have become highly apparent in academia and the manufacturing industry with the release of innovative technologies such as the Microsoft HoloLens. However, the HoloLens restricts the user's field of view (FOV) to a narrow window of 34 degrees, inhibiting natural peripheral vision (Kress and Cummings, 2017). This visual limitation results in a loss of pre-set functions and projected visualisations in the AR application window. This paper presents an innovative methodology for designing a spatial user interface (UI) that minimises the adverse effects of the HoloLens' narrow FOV. The spatial UI is a crucial element of a museum-based MR system, which was evaluated by nine experts in human-computer interaction (HCI), visual communication and museum studies. Results indicate a positive user reaction towards the accessibility of the spatial UI system and an enhanced user experience. This approach can help current and future HoloLens developers extend their application functions without visual restrictions or missing content.
3D visual simulation of individual and crowd behavior in earthquake evacuation
Simulation of behaviors in emergencies is an interesting subject that helps in understanding evacuation processes and informing contingency plans. Individual and crowd behaviors in an earthquake differ from those under normal circumstances: panic spreads through the crowd and causes chaos. Most existing behavioral simulation methods analyze the movement of people from the point of view of mechanics, without considering emotion. After summarizing existing studies, a new simulation method is discussed in this paper. First, 3D virtual scenes are constructed with the proposed platform. Second, an individual cognitive architecture integrating perception, motivation, behavior, emotion, and personality is proposed. Typical behaviors are analyzed and individual evacuation animations are realized with data captured by motion capture devices. Quantitative descriptions are presented for emotional changes in individual evacuation, and facial expression animation is used to represent individuals' emotions. Finally, a crowd behavior model is designed on the basis of a social force model. Experiments are carried out to validate the proposed method. Results show that individuals' behavior, emotional changes, and crowd aggregation can be well simulated, and users can observe evacuation processes from many angles. The method can be an intuitive approach to safety education and crowd management.
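The social force model that this abstract builds on represents pedestrians as particles subject to attractive and repulsive forces. The sketch below shows only the standard repulsive term between two pedestrians, in the form popularised by Helbing; the constants and the function name are illustrative assumptions, not this paper's parameters.

```python
import math

# Illustrative sketch of the pairwise repulsive term of a social force
# model: f_ij = A * exp((r_ij - d_ij) / B) * n_ij, where d_ij is the
# distance between centres, r_ij the sum of body radii, and n_ij the
# unit vector pointing from j to i. Constants A, B and radius_sum are
# assumptions for illustration.

def social_repulsion(pos_i, pos_j, A=2.0, B=0.5, radius_sum=0.6):
    """Repulsive force (fx, fy) exerted on pedestrian i by pedestrian j."""
    dx = pos_i[0] - pos_j[0]
    dy = pos_i[1] - pos_j[1]
    d = math.hypot(dx, dy)
    magnitude = A * math.exp((radius_sum - d) / B)
    # Force acts along the unit vector from j toward i (pushes i away).
    return (magnitude * dx / d, magnitude * dy / d)
```

The exponential form gives the characteristic behavior of such models: the force always pushes pedestrians apart, and it grows sharply as the separation shrinks below the sum of the body radii.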
User Experience of Markerless Augmented Reality Applications in Cultural Heritage Museums: ‘MuseumEye’ as a Case Study
This paper explores the User Experience (UX) of Augmented Reality applications in museums. UX as a concept is vital to effective visual communication and interpretation in museums, and to enhancing usability during a museum tour. In the project ‘MuseumEye’, the augmentations generated were localised by a hybrid system that combines markerless SLAM tracking with indoor Bluetooth Low Energy (BLE) beacons. These augmentations comprise multimedia content and the different levels of visual information required by museum visitors. Using mobile devices to pilot the application, we developed a UX design model able to evaluate the user experience and usability of the application. This paper focuses on the multidisciplinary outcomes of the project from both technical and museological perspectives based on public responses. A field evaluation of the AR system was conducted after the UX model was applied: twenty-six participants were recruited at Leeds Museum and another twenty at the Egyptian Museum in Cairo. Results showed positive responses to experiencing the system after adopting the UX design model. This study contributes a synthesised UX design model for AR applications that aims to reach the optimum level of user interaction, which ultimately reflects on the entire museum experience.
SceneMaker: Creative technology for digital storytelling
The School of Creative Arts & Technologies at Ulster University (Magee) has brought together the subject of computing with creative technologies, cinematic arts (film), drama, dance, music and design in terms of research and education. We propose here the development of a flagship computer software platform, SceneMaker, acting as a digital laboratory workbench for integrating and experimenting with the computer processing of new theories and methods in these multidisciplinary fields. We discuss the architecture of SceneMaker and relevant technologies for processing within its component modules. SceneMaker will enable the automated production of multimodal animated scenes from film and drama scripts or screenplays. SceneMaker will highlight affective or emotional content in digital storytelling with particular focus on character body posture, facial expressions, speech, non-speech audio, scene composition, timing, lighting, music and cinematography. Applications of SceneMaker include automated simulation of productions and education and training of actors, screenwriters and directors.