A dataset of continuous affect annotations and physiological signals for emotion analysis

Abstract

From a computational viewpoint, emotions continue to be intriguingly hard to understand. In research, a direct and real-time inspection in realistic settings is not possible. Discrete, indirect, post-hoc recordings are therefore the norm. As a result, proper emotion assessment remains a problematic issue. The Continuously Annotated Signals of Emotion (CASE) dataset provides a solution, as it focuses on real-time continuous annotation of emotions, as experienced by the participants, while watching various videos. For this purpose, a novel, intuitive joystick-based annotation interface was developed that allowed for simultaneous reporting of valence and arousal, which are otherwise often annotated independently. In parallel, eight high-quality, synchronized physiological recordings (1000 Hz, 16-bit ADC) were obtained from ECG, BVP, EMG (3x), GSR (or EDA), respiration and skin temperature sensors. The dataset consists of the physiological and annotation data from 30 participants, 15 male and 15 female, who watched several validated video-stimuli. The validity of the emotion induction, as exemplified by the annotation and physiological data, is also presented.

Measurement(s) electrocardiogram data • respiration trait • blood flow trait • electrodermal activity measurement • temperature • muscle electrophysiology trait
Technology Type(s) electrocardiography • Hall effect measurement system • photoplethysmography • Galvanic Skin Response • skin temperature sensor • electromyography
Factor Type(s) sex • age
Sample Characteristic - Organism Homo sapiens

Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.9891446

Background & Summary

The field of Artificial Intelligence (AI) has rapidly advanced in the last decade and is on the cusp of transforming several aspects of our daily existence. For example, services like customer support and patient care, that were until recently only accessible through human–human interaction, can now be offered through AI-enabled conversational chatbots1 and robotic daily assistants2, respectively. These advancements in interpreting explicit human intent, while highly commendable, often overlook implicit aspects of human–human interactions and the role emotions play in them. Addressing this shortcoming is the aim of the interdisciplinary field of Affective Computing (AC, also known as Emotional AI), which focuses on developing machines capable of recognising, interpreting and adapting to human emotions3,4.

A major hurdle in developing these affective machines is the internal nature of emotions that makes them inaccessible to external systems5. To overcome this limitation, the standard AC processing pipeline6 involves: (i) acquiring measurable indicators of human emotions, (ii) acquiring subjective annotations of internal emotions, and (iii) modelling the relation between these indicators and annotations to make predictions about the emotional state of the user. For undertaking steps (i) and (ii) several different strategies are used. For example, during step (i) different modalities like physiological signals5,7,8, speech9 and facial expressions10,11 can be acquired. Similarly, the approaches to step (ii) vary along the following two main aspects. First, on the kind of annotation scale employed, i.e., either discrete or continuous. Second, on the basis of the emotion model used, i.e., either discrete emotion categories (e.g., joy, anger, etc.) or dimensional models (e.g., the Circumplex model12). Traditionally, approaches based on discrete emotional categories were commonly used. However, these approaches were insufficient for capturing the intensity6,8,11 and accounting for the time-varying nature13 of emotional experiences. Therefore, continuous annotation based on dimensional models is preferred and several annotation tools for undertaking the same have been developed14,15,16. Notwithstanding these efforts at improving the annotation procedure, a major impediment in the AC pipeline is that steps (i) and (ii) require direct human involvement in the form of subjects from whom these indicators and annotations are acquired8,17. This makes undertaking these steps a fairly time-consuming and expensive exercise.

To address this issue, several (uni- and multi-modal) datasets that contain continuous annotation have been developed. Principal among these are DEAP18, SEMAINE19, RECOLA20, DECAF21 and SEWA22. The annotation strategies used in these datasets have the following common aspects. First, the two dimensions of the Circumplex model (i.e., valence and arousal) were annotated separately. Second, in all datasets except SEWA, which uses a joystick, mouse-based annotation tools were used. In recent years, both these aspects have been reported to have major drawbacks22,23,24. These being, that separate annotation of valence and arousal does not account for the inherent relationship between them23,25, and that mouse-based annotation tools are generally less ergonomic than joysticks22,23,24,26. To address these drawbacks, we developed a novel Joystick-based Emotion Reporting Interface (JERI) that facilitates simultaneous annotation of valence and arousal16,27,28,29. A testament to the efficacy of JERI is that in recent years, several similar annotation setups have been presented25,30. However, currently there are no openly available datasets that feature JERI-like setups.

To address this gap, we developed the Continuously Annotated Signals of Emotion (CASE) dataset31,32. It contains data from several physiological sensors and continuous annotations of emotion. This data was acquired from 30 subjects while they watched several video-stimuli and simultaneously reported their emotional experience using JERI. The physiological measures are from the Electrocardiograph (ECG), Blood Volume Pulse (BVP), Galvanic Skin Response (GSR), Respiration (RSP), Skin Temperature (SKT) and Electromyography (EMG) sensors. The annotation data has been previously used for several publications aimed at introducing, analysing and validating this approach to emotion annotation16,28,29. However, it has not been previously released. To the best of our knowledge, this is the first dataset that features continuous and simultaneous annotation of valence and arousal, and as such can be useful to the wider Psychology and AC communities.

Methods

Participants

Thirty volunteers (15 males, age 28.6 ± 4.8 years and 15 females, age 25.7 ± 3.1 years; age range 22–37 years) from different cultural backgrounds participated in the data collection experiment. These participants were recruited through an organisation-wide call for volunteers sent at the Institute of Robotics and Mechatronics, DLR. Upon registering for the experiment, an e-mail containing general information and instructions for the experiment was sent to the participants. In this email, they were asked to wear loose clothing and men were asked to preferably shave facial hair, to facilitate the placement of sensors. All participants had a working proficiency in English and were communicated with in the same. More information on the sex, age-group, etc., of the participants is available in the metadata to the dataset31,32.

Ethics statement

This experiment is compliant with the World Medical Association's Declaration of Helsinki, which pertains to the ethical principles for medical research involving human subjects, latest version, as approved at the 59th WMA General Assembly, Seoul, October 2008. Data collection from participants was approved by the institutional board for protection of data privacy and by the works council of the German Aerospace Center (DLR). A physician is part of the council that approved the experiment.

Experiment design

The experiment was set up with a within-subjects design. Accordingly, repeated measures were made and all participants watched and annotated the different video-stimuli used for the experiment. To avoid carry-over effects, the order of the videos in a viewing session was modified in a pseudo-random manner, such that the resulting video sequence was different for every participant. To isolate the emotional response elicited by the different videos, they were interleaved with a two-minute long blue screen. This two-minute period also allowed the participants to rest in-between annotating the videos. More information on the video-sequences is available in the dataset31,32.

Experiment protocol

On the day of the experiment, the participants were provided an oral and a written description of the experiment. The written description is available as a supplementary file to this document. After all questions regarding the experiment were addressed, the participants were asked to sign the informed consent form. Then, a brief introduction to the 2D circumplex model was provided and any doubts about the same were clarified. Following this, physiological sensors were attached and the participant was seated facing a 42" flat-panel TV (see Fig. 1, left). Detailed information was then provided on the annotation procedure. It was emphasised to the participants that they should annotate their emotional experience resulting from the videos, and not the emotional content of the videos. To accustom the participants to the annotation interface, they were asked to watch and annotate five short practice videos. During this practice session, the experiment supervisor intervened whenever the participant asked for help and, if required, provided suggestions at the end of every video. This session was also used to inspect the sensor measurements and, if required, make appropriate corrections. After the practice session, the experiment was initiated and lasted for approximately 40 minutes. During the experiment, the supervisor sat at a computer table placed roughly 2 meters behind the participants to monitor the data acquisition, but not to specifically observe the annotation style of the participants. The participants were informed about the reasons for his presence. At the end of the experiment, feedback on the annotation system was acquired using the SUS questionnaire16,33. Then, the sensors were removed and refreshments were offered. The participants were also encouraged to share any further insights they had on the experiment.

Fig. 1

The typical experiment setup shows a participant watching a video and annotating using JERI. The central figure shows the video-playback window with the embedded annotation interface. The right-most figure shows the annotation interface in detail, where the Self-Assessment Manikins that were added to the valence and arousal axes can be seen.

Annotation interface

Figure 1 (right) shows the design of the annotation interface. It is based on the 2D circumplex model that has been supplemented with the Self-Assessment Manikin (SAM)34 on its coordinate axes. These manikins depict different valence (on the X-axis) and arousal (on the Y-axis) levels, thereby serving as a non-verbal guide to the participants during annotation. The red pointer in the figure shows the resting/neutral position. The participants were instructed to annotate their emotional experience by moving/holding the red pointer in the appropriate region of the interface. The position of the annotation interface within the video-playback window is shown in Fig. 1 (center). This position can be easily changed, but since none of the participants requested that, it was retained as shown for all participants. Since the annotation was done over the entire length of a video, it results in a continuous 2D trace of the participant's emotional experience (see Fig. 2). The annotation interface was developed in the National Instruments (NI) LabVIEW programming environment.

Fig. 2

The plot on the left shows the annotations from one participant for the different videos (see Table 1) in the experiment. The annotations for the 'scary-2' video by the first 5 participants (labelled as p1–p5) can be seen in the plot on the right.

Videos

Practice videos

As previously mentioned, the aim of the practice session was to accustom the users to the annotation interface and not to explicitly train them on the different types of emotional videos they would encounter in the main experiment. Accordingly, 5 short (duration: ~1 minute each) videos, interleaved by a 10-second long transitioning blue screen, were used for this session. These videos aimed to elicit exciting, distressing, relaxing, scary and happy emotional states. Thus, the expected emotional content of these videos was often similar to the videos used in the main experiment, but not in all cases. The data acquired during the practice session was not recorded and is therefore not a part of this dataset.

Experiment videos

In the main experiment, the aim was to elicit amusing, boring, relaxing and scary emotional states through video-stimuli. To this end, 20 videos previously used by other studies were shortlisted35,36,37. The emotional content of these videos was then verified in a pre-study, where 12 participants (no overlap with the participants of this study) viewed and rated these videos remotely using a web-based interface. Based on the results of this pre-study and further internal reviews, 8 videos were selected for the main experiment, such that there were 2 videos each for the emotional states that we wanted to elicit. Additionally, 3 other videos were also used in the experiment, i.e., the start-video, the end-video and the interleaving blue-screen videos. The start-video is a relaxing documentary excerpt aimed at calming participants before the presentation of the emotional videos. The end-video was added for the same purpose, i.e., to act as a 'cool-down' phase before the end of the experiment. More information on all these videos is available in Table 1, in the Usage Notes section and in the dataset31,32.

Table 1 The source, label, ID used, intended valence-arousal attributes and the duration of the videos used for the dataset.

Sensors & instruments

The physiological sensors used for the experiment were selected based on their prevalence in AC datasets and applications3,6,10,18. Other sensors and instruments were chosen based on either the recommendations of the sensor manufacturer or on how interfaceable they were with the data acquisition setup. More details on these sensors and instruments are provided in this subsection and Table 2.

Table 2 The type, number (No.), manufacturer and model of the different sensors and instruments used in the experiment.

ECG sensor

The electrical signal generated by the heart muscles during contraction can be detected using an ECG sensor. The procedure used involves placement of three electrodes in a triangular configuration on the chest of the participant. First, the skin placement site was prepared by (i) if required, removing any excess hair around the site, (ii) abrading the skin using the Nuprep abrasive gel, and (iii) cleaning the skin using an alcohol (70% isopropanol) pad. Then, pre-gelled electrodes were placed on the cleaned sites, such that two electrodes rest on the right and left coracoid processes and the third on the xiphoid process38. This sensor also pre-amplifies and filters the detected electrical signal.

Respiration sensor

The expansion and contraction of the chest cavity can be measured using a Hall effect sensor placed around the pectoralis major muscle38. Accordingly, a respiration sensor belt was placed high on the torso (i.e., under the armpits but above the breasts). The tautness of the sensor belt was set such that it was comfortable for the participants38. Since this sensor was worn over clothing, no skin preparation was required.

BVP sensor

Also known as a Photoplethysmography (PPG) sensor, it emits light into the tissue and measures the reflected light. The amount of observed reflected light varies according to the blood flowing through the vessels, thus serving as a measure of cardiac activity. A hand-based BVP sensor was used for this experiment and was placed on the middle finger of the non-dominant hand38. The use of this sensor does not mandate any specific skin preparation. However, to prevent any impediment in the operation of this sensor resulting from dirt on the skin, the participants were asked to wash their hands with water. Following which, the sensor was placed and secured using an elastic band.

GSR sensor

Also known as an Electrodermal Activity (EDA) sensor, it measures the variation in electrical conductance resulting from sweat released by the glands on the skin. Since these glands are regulated by the sympathetic nervous system, changes in electrical conductance serve as a good indicator of physiological arousal. This sensor was placed on the index and ring fingers of the non-dominant hand38. The placement procedure followed for the same involved: (i) rinsing the skin sites with water to remove any dirt, followed by pat-drying, (ii) applying conductance paste (Ten20®, which has 12.5% NaCl content) to the sites, and (iii) attaching the silver chloride (Ag-AgCl) electrodes in a bipolar configuration to the sites using finger straps. The hydration level and the electrolytic concentration of the skin influence the level of electrodermal activity. Thus, a waiting period of 5 to 15 minutes was allowed to stabilize the skin-electrolyte interface. During this waiting period, other sensors were placed and the sensitivity of this sensor was verified by asking the participants to take a deep breath and hold it for a few seconds. A good GSR signal showed a spike in skin conductance within a couple of seconds of the breath being initiated39.

Skin temperature sensor

Small variations in skin temperature were measured and converted to electrical signals using an epoxy rod thermistor. This sensor was placed on the pinky finger of the non-dominant hand38, where it was secured using Coban self-adhesive tape that allowed for a snug placement.

EMG sensors

The surface voltage associated with muscle contractions can be measured using a surface-Electromyography (sEMG, simply EMG) sensor. Previous research in AC has mostly focused on three muscles. These are the zygomaticus major and the corrugator supercilii muscle groups on the face, and the trapezius muscle on the upper back. Accordingly, a total of three EMG sensors (one each for the aforementioned muscles) were used for this experiment. The skin at the placement sites for these sensors was prepared by gently abrading it using the Nuprep gel. The details on the placement of the pre-gelled electrodes for these sensors are as follows:

  1. zygomaticus major – The first electrode was fixed midway along an imaginary line joining the cheilion and the preauricular depression, i.e., at the bony dimple above the posterior edge of the zygomatic arch. The second electrode was placed 1 cm inferior and medial to the first, i.e., approximately at the point where the horizontal sub-nasale line first intersects the zygomaticus major muscle. The third electrode serves as a reference electrode and was placed at the left/right upper edge of the forehead, i.e., ~1 cm below the hairline.

  2. corrugator supercilii – The first electrode (area: ~6 cm2) was fixed above the eyebrow on an imaginary vertical line that extends over the endocanthion (i.e., the point at which the inner ends of the upper and lower eyelid meet). The second electrode was placed 2 cm to the right of and slightly superior to the first electrode, such that it sits at the upper edge of the eyebrow. The third electrode serves as a reference electrode and was placed at the central upper end of the forehead, i.e., ~2 cm below the hairline.

  3. trapezius – A triode electrode (surface area: ~30 cm2) was placed at the border between the superior and middle fibres of the trapezius muscle, approximately 10 cm left/right from the first thoracic vertebra for a 180 cm tall adult.

After the placement of each sensor's electrodes, the impedance between them was checked and verified to be within the range specified by the sensor manufacturer (i.e., 0–15 kΩ for a reliable assessment). Following this evaluation, the sensor leads from the main sensor unit were connected to the electrodes. The main sensor unit pre-amplifies and filters the acquired raw EMG signal, and also performs an analog Root Mean Square (RMS) conversion of the same38.

Sensor isolators

The sensor manufacturer recommends using a 'sensor isolator' to ensure electrical isolation between the participants and the powered sensors. Accordingly, the physiological sensors were indirectly connected to the data acquisition module, through these sensor isolators (see Fig. 3).

Fig. 3

The schematic shows the various aspects of the experiment and the data acquisition setup. The arrows indicate the direction of the data flow. The solid and the dotted lines indicate the primary and secondary tasks of the acquisition process, respectively.

Data acquisition modules

A 32-channel (16-channel differential) Analog-to-Digital Conversion (ADC) module with 16-bit resolution was used to acquire the output voltages from the sensor isolators (indirectly, the sensors). This module was connected to a Data Acquisition (DAQ) system that transfers the data to the acquisition PC.

Joystick

The joystick was the only instrument in the experiment that is directly controlled by the participants. The joystick used is a generic digital gaming peripheral that features a return spring. This provided the user proprioceptive feedback about the location of the pointer in the interface, thereby helping to mitigate the cognitive load associated with the simultaneous tasks of watching the video and annotating the emotional experience16,25.

Data acquisition

Figure 3 shows the experiment and the data acquisition setup. The video-playback, the annotation interface and the data acquisition components were all directly managed through LabVIEW. This allowed for a seamless integration of all these different components. The open-source VLC media player was used for video-playback. The joystick was directly connected to the acquisition PC over a USB port. The physiological data was acquired over Ethernet using the DAQ system. The acquisition rate for the annotation and the physiological data was 20 Hz and 1000 Hz, respectively. The acquired data was augmented with the timestamp provided by the acquisition PC and logged in two different text files, i.e., one each for the physiological and the annotation data. The same procedure was repeated for all participants, resulting in 60 (30 × 2) log files.

Data preprocessing

The procedure used for processing the raw log files is summarized in this subsection. In the following, step 1 was performed once and steps 2–6 were iteratively applied to the log files of each participant.

  1. Duration of the videos: using the ffprobe tool from the FFmpeg multimedia framework40, the exact duration (in milliseconds) of the videos was determined and has been made available in the dataset31,32.

  2. Initial files: the raw log files for the physiological data were generally large in size (~200 MB) and hence manipulating these files in MATLAB was very slow. To this end, the raw data from both the annotation and the physiological files was first extracted and then saved in a single, MATLAB-preferred .mat format file. The subsequent steps in preprocessing were implemented on the data stored in these mat files.

  3. Transforming raw data: the sensor input received by the sensor isolators gets modified before being transferred to the DAQ system. To rectify the effects of this modification, the logged voltages need to be transformed. This was accomplished by applying the equations presented in Table 2 to yield the desired output with specific units/scales (Table 2, last column). Similarly, the logged annotation data, which was in the integer interval [-26225 .. 26225], was also rescaled to the annotation interface interval [0.5 .. 9.5] using the equations presented in Table 2.

  4. Data interpolation: a common problem in data acquisition and logging is the latencies that can be introduced during any of these processes. This was also evident in our data, where, e.g., the time between subsequent samples of the annotation data was occasionally more than the expected 50 ms. To address this issue, linear interpolation was performed on the physiological and the annotation data. For undertaking the same, first, two time-vectors with sampling intervals of 1 ms (for the physiological data) and 50 ms (for the annotation data) were generated based on the time-duration of the logged data. These vectors serve as the query points for the interpolator, which determines the value at these points by fitting a line between the respective discrete samples in the logged data. As a result of the interpolation process, the resulting sampling intervals for the physiological and the annotation data were 1 ms and 50 ms, respectively (a small illustrative sketch of this step and of the rescaling in step 3 follows this list). In case other researchers prefer to use either the non-interpolated data or different interpolation methods, the original non-interpolated data are also available in the dataset31,32.

  5. Adding the video-IDs: the log files contain timestamps, but do not have information identifying the duration and the order of the videos. Hence, the extracted video-durations and a lookup table of the video-sequences were used to identify the data segments pertaining to each video. Then, this information was added as an extra column in the log files, containing the different video-IDs (see Table 1). This process was also undertaken for the non-interpolated data.

  6. Saving the data: the resulting data from the aforementioned steps was saved into two different comma-separated values (csv) files, i.e., one each for the physiological and the annotation data. The csv format was chosen as it is natively accessible by different programming and scientific computing frameworks.

Data Records

The presented CASE dataset is available in three variants:

  • CASE_full: includes raw, initial and processed data from all 30 participants, and also the preprocessing code and other metadata. This version of the dataset is the most comprehensive and should ideally be the first choice for users interested in performing downstream analyses on the data. This dataset is hosted as a single archive file on the figshare data repository31.

  • CASE_snippet: includes raw, initial and processed data from only 2 participants, and also the preprocessing code and other metadata. The size of the CASE_full archive is 5 gigabytes (GB). Hence, to allow users to examine CASE before downloading the full dataset, we created this snippet of the dataset. The size of the snippet archive file is only ~0.3 GB and it is also hosted on the figshare data repository31.

  • CASE git repository: includes raw data from all 30 participants, the preprocessing code and other metadata. This repository offers users yet another convenient method to examine CASE and its preprocessing code. Users can hence easily browse through the preprocessing code on the repository website and, if desired, clone the repository and reproduce the processed data. Users can also verify the processing pipeline by comparing the reproduced processed data with the processed data available in CASE_full. This git repository is hosted on GitLab32 and more information on it is available in the Usage Notes section.

The directory structure implemented across the aforementioned versions of the CASE dataset is the same. At the root of the dataset, it is organised into the following three main directories: (i) data, (ii) metadata and (iii) scripts. Detailed README files that explain the contents of each directory and any subsequent sub-directories are available in these directories.

Some pointers that are essential to understand this section and the dataset in general are as follows:

  • The following description of the data records uses the letters XX to denote the IDs of the participants, where XX are natural numbers in the set {1, 2, …, 30}. Thus, a combination of XX with a specific filename (e.g., sub_XX.csv) is used to denote that the file exists for all 30 participants.

  • The jstime and the daqtime variables mentioned in the following subsections contain timestamps provided by a common clock on the logging computer. They have been named and stored separately due to the different sampling intervals used for logging the annotation and physiological data, i.e., 50 ms and 1 ms, respectively.

Data

The data directory is further divided into the following sub-directories: (i) raw, (ii) initial, (iii) interpolated and (iv) non-interpolated. These sub-directories and the data contained in them pertain to the different stages of the data acquisition and preprocessing pipeline.

Raw

This directory contains the raw data logs acquired using LabVIEW (see Data Acquisition). It is further divided into: (i) the annotations and (ii) the physiological sub-directories that contain the participant-wise annotation and physiological data, respectively. An overview of the data files in these sub-directories is provided below.

Annotations/subXX_joystick.txt – contains 30 raw annotation data files titled subXX_joystick.txt. Each file contains the following three variables (one variable per column):

  • Column 1. Time in seconds from the start of the video-viewing session to the end.

  • Column 2. The X-axis value (i.e., valence) of the joystick position in the interface. The values lie in the integer range from −26225 to +26225.

  • Column 3. The Y-axis value (i.e., arousal) of the joystick position in the interface. The values lie in the integer range from −26225 to +26225.

Physiological/subXX_DAQ.txt – contains 30 raw physiological data files titled subXX_DAQ.txt. Each file contains the following nine variables (one variable per column):

  • Column 1. Time in seconds from the start of the video-viewing session to the end.

  • Columns 2–9. Contain the input voltages (measured in volts) from the ECG, BVP, GSR, Respiration, Skin temperature, EMG-zygomaticus, EMG-corrugator and EMG-trapezius sensors, respectively.

Initial

As previously mentioned in Data Preprocessing, the raw physiological and annotation data from each participant was loaded into MATLAB and later saved into a participant-wise mat file (e.g., sub_XX.mat). The mat files for all 30 participants are stored in the initial directory. Each mat file contains 12 variables. Three of these variables, i.e., jstime (joystick time), val (valence) and aro (arousal), pertain to the raw annotation data. The rest, i.e., daqtime (DAQ time), ecg, bvp, gsr, rsp (respiration), skt (skin temperature), emg_zygo (EMG-zygomaticus), emg_coru (EMG-corrugator) and emg_trap (EMG-trapezius), pertain to the raw physiological data. The data contained in these mat files is the same as the data in the raw directory. Nevertheless, it was added to the dataset to offer a convenient starting point for MATLAB users.

Interpolated and non-interpolated

These directories contain data that results from steps 3–5 mentioned in the Data Preprocessing section. They are structured like the raw directory and hence are further divided into: (i) the annotations and (ii) the physiological sub-directories. The only difference between the data contained in these directories is that the generation of the interpolated data involves an extra processing step (see Data Preprocessing). Hence, unless stated otherwise, the description of the data records provided below is applicable to the files in both of these directories.

Annotations/sub_XX.csv – the annotation data files (titled sub_XX.csv) from all 30 participants are contained in this directory. The names (variable-names) and the contents of the 4 columns in each file are as follows:

  • Column 1: jstime. Time in milliseconds from the beginning of the video-viewing session to the end.

  • Column 2: valence. The scaled X-axis value of the joystick position in the interface (see Table 2).

  • Column 3: arousal. The scaled Y-axis value of the joystick position in the interface (see Table 2).

  • Column 4: video. Contains the sequence of video-IDs that indicates the ordering and duration of the different video-stimuli for the given participant.

Physiological/sub_XX.csv – contains the physiological data files (titled sub_XX.csv) for all 30 participants. The names (variable-names) and the contents of the 10 columns in each file are as follows (a minimal loading sketch follows this list):

  • Column 1: daqtime. Time in milliseconds from the beginning of the video-viewing session to the end.

  • Columns 2–9: ecg, bvp, gsr, rsp, skt, emg_zygo, emg_coru and emg_trap. The transformed sensor output values for each of the 8 physiological sensors used in the experiment. More information on the sensors, the transformations applied, and the output units for these values is available in Table 2 and the README files in these directories.

  • Column 10: video. Contains the sequence of video-IDs that indicates the ordering and duration of the different video-stimuli for the given participant.
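
As a convenience, a minimal MATLAB loading sketch for these processed files is given below. The relative paths, the participant number, the choice of the first video and the assumption of numeric video-IDs are illustrative only; the column names follow the descriptions above.

    % Load one participant's processed annotation and physiological csv files.
    ann  = readtable('interpolated/annotations/sub_1.csv');
    phys = readtable('interpolated/physiological/sub_1.csv');

    % Segment the physiological data by video-ID using the 'video' column.
    vid_ids   = unique(phys.video, 'stable');        % video-IDs in viewing order
    first_vid = phys(phys.video == vid_ids(1), :);   % e.g., the first video shown

    % jstime (50 ms steps) and daqtime (1 ms steps) come from the same logging
    % clock, so each annotation sample can be paired with the nearest
    % physiological sample, e.g., for the ECG channel:
    idx = interp1(phys.daqtime, (1:height(phys))', ann.jstime, 'nearest', 'extrap');
    ecg_at_annotation = phys.ecg(idx);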

Metadata

This directory contains auxiliary information about the experiment that has been organised into the following files:

  • participants.xlsx – this Excel file contains the participant-ID, sex, age-group and ID of the video-sequence used, for all participants in the experiment.

  • seqs_order.txt, seqs_order_num.mat and seqs_order_num.xlsx – the video-stimuli were shown in a unique sequence to every participant. The columns of these files contain either the video-labels (in seqs_order.txt) or the video-IDs (otherwise) that indicate the ordering of the video-stimuli in these sequences. The seqs_order_num.mat file is created during preprocessing from seqs_order.txt. The data in seqs_order_num.mat and seqs_order_num.xlsx is the same and the latter has been added to the dataset only for the convenience of users.

  • videos_duration.txt, videos_duration_num.mat and videos_duration_num.xlsx – contain the duration in milliseconds of the different video-stimuli used for this dataset. As was the case in the previous point, these files contain the same information but in different formats, where video-labels are used in videos_duration.txt and video-IDs otherwise. Similarly, videos_duration_num.xlsx has been included in the dataset only for the convenience of users.

  • videos.xlsx – in addition to the attributes already presented in Table 1, this Excel file contains further information on the used video-stimuli. This includes the videos' durations in milliseconds, links to the IMDb/YouTube entries for the videos' sources, URLs to the videos and the time-window for the videos at these URLs. More information on how to acquire these videos is presented in the Usage Notes section.

Scripts

This directory contains the code (MATLAB scripts) used for preprocessing the acquired raw data. Since preprocessing is a multi-step process, the code has been appropriately divided across the following scripts:

  1. s01_extractVidData.m – the duration information acquired from step 1 of Data Preprocessing is saved in videos_duration.txt. Similarly, seqs_order.txt contains the sequence of videos for all participants. This script extracts the data from these txt files, converts the video-labels to video-IDs and saves the converted data to mat files.

  2. s02_extractSubData.m – implements the preprocessing measures mentioned in the second step of Data Preprocessing. The resulting files are saved in the initial data directory.

  3. s03_v1_transformData.m – is used to generate the non-interpolated data. It implements steps 3, 5 and 6 of Data Preprocessing, in that order.

  4. s03_v2_interTransformData.m – is used to generate the interpolated data. It first inter-/extra-polates the raw data to standardize the sampling intervals (see step 4 of Data Preprocessing). Then, in a similar manner to s03_v1_transformData.m, steps 3, 5 and 6 of Data Preprocessing are implemented.

  5. f_labelData.m – implements step 5 of Data Preprocessing. This script is used as a helper function by the scripts s03_v1_transformData.m and s03_v2_interTransformData.m to label the transformed data.

Technical Validation

Annotation data

The quality and the reliability of the annotation data have been thoroughly addressed in our previous works16,27,28,29. A summary of the relevant highlights from these works is presented below.

In27,28 several different exploratory data analyses were presented. These analyses provided an initial intuition into the annotation patterns for the different video-stimuli. For example, the annotations for the two scary videos had in general low valence and high arousal values. They were thus different from the annotations for the amusing videos, which had relatively high valence and medium arousal. These differences can also be seen in the annotations presented in Fig. 2 (left). The initial exploratory results presented in27,28 were then formally validated in16, where Multivariate ANOVA (MANOVA) was used to quantify the statistical significance of the differences in the annotations for these videos. The 'usability' of our annotation approach was validated using the System Usability Scale (SUS) questionnaire. According to the ratings received on the same, the annotation setup had 'excellent' usability, as the participants found it to be consistent, intuitive and simple to use16. In16,29, several different methods for analysing the annotation patterns in continuous 2D annotations were presented. The results of these continuous methods were concurrent with the results of the MANOVA. Also in16,29 several different methods for extracting additional information from these continuous annotations have been presented. For example, the Change Point Analysis method in16 automatically detects the major change-points in the annotation data, which can be used to segment the annotations into several salient segments. For comparison with the physiological data, some results for the annotation data are presented in the next subsection.

Physiological data

In the Background & Summary section, the typical AC processing pipeline was presented. The final objective of this pipeline is to develop machine learning models that can infer the emotional state of humans from (in the given case) physiological signals. To achieve the same, it is critical that the physiological responses to the different video-stimuli are discernible from each other and are ideally correlated to the annotation data. If indeed these patterns exist, they would validate the quality and the value of this data. To determine the same, we extracted several features from the interpolated physiological data and performed Principal Component Analysis (PCA) on these features. The details and results of this analysis are presented as follows.

Feature extraction

The feature extraction was performed iteratively over the physiological data files for each participant. First, the data for a given participant was segmented into chunks for the different video-stimuli. Then, from the sensor data pertaining to each of these video-chunks, several features were extracted (see Table 3).

Table 3 The sensors and the various features extracted from the sensor signals.

For the technical validation presented here, one predominantly used feature for each sensor was selected and, where applicable, the mean of this feature across the given video-chunk was calculated. Similarly, the mean valence and arousal values across each video-chunk were calculated. The selected physiological features are presented in Table 4. As a result, for the 30 participants who all watched 8 emotional video-stimuli, we have 240 (30 × 8) values for each of the selected features. Due to inter-personal differences, the participants have a different baseline value for each of these extracted features. These differences can be detrimental to the comparison of the features across all participants and were therefore removed using Z-score standardisation across each participant. The same was also done for the annotation data. The violin plots in Fig. 4 show the distributions of the selected features and annotation data across the four different video-labels (see Table 1). From the figure, it is apparent that some of the physiological features (consider the top eight panels) characterise specific types of videos. For instance, scary videos result in high values of SCR and elevated HR, while amusing videos elicit accelerated respiration rates and activity of the zygomaticus muscles. Boring and relaxing videos, as expected, elicit similar values of all features. These results are in line with previous research5,6, where, e.g., HR and SCR were determined to be positively correlated to arousal. This effect can also be seen in our data, where the reported arousal levels (see bottom-left panel) for scary videos are higher than for the other videos. Similarly, zygomaticus activity, which has been reported to be positively correlated to valence (see bottom-left panel), also exhibits similar patterns in our data.
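
A rough MATLAB sketch of this per-participant standardisation is given below. The feature matrix F (240 rows, one per participant/video-chunk combination, one column per selected feature) and the participant-ID vector pid are assumed names for illustration only and are not part of the released dataset.

    % Remove inter-personal baseline differences by Z-scoring each
    % participant's feature values separately (columns of F are features).
    Z = zeros(size(F));
    for p = unique(pid)'                 % loop over participant IDs
        rows = (pid == p);               % the 8 video-chunks of this participant
        Z(rows, :) = zscore(F(rows, :)); % zscore: Statistics and Machine Learning Toolbox
    end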

Table 4 The sensors and the features selected from each sensor.

Fig. 4

"Violin" plots of the distribution of the selected features and the mean annotation (valence & arousal) values across the different types of videos. The box plots embedded in each violin plot show the Interquartile Range (IQR) for each considered variable, while a yellow diamond marks the mean of the distribution.

PCA

PCA is a commonly used dimensionality reduction technique41,42. Thus, it allows for visualization of the given data in a lower-dimensional space, where, e.g., spatial distributions of the data can be analysed. To this end, PCA was undertaken on the selected Z-scored features listed in Table 4. For the analysis presented here, which compares the distribution of the PCA scores for these features to the 2D annotation data (see Fig. 5), only the first two Principal Components (PC) were retained. These PCs explain 43% of the variance in the selected features (1. PC: 27% and 2. PC: 16%). The scatter plot on the left in Fig. 5 shows the mean valence and arousal values across the different video-labels. The data ellipses show the standard deviation of the data pertaining to these video-labels. As is evident from this figure, the physiological and the annotation data form concurrent clusters. These two figures validate the data in an even more prominent way than Fig. 4. Valence and arousal values (left panel) of scary videos are concentrated in the upper-left quadrant, those for the amusing videos are in the upper-right, and the others lie in the middle with low arousal values, as one would expect. This is confirmed by the right panel, in which the four types of videos are represented analogously on the plane obtained using the first two principal components of the physiological features. This seems to indicate that the physiological features somehow "match" the joystick annotations. Of course, this serves only as an initial investigation and a more rigorous analysis is required to fully exploit the potential of the database. Nevertheless, the results provided here show that the presented dataset has several feasible characteristics that would make it of interest to our research community.
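
The article performed this step in R with the prcomp function (see Usage Notes). For consistency with the earlier sketches, an equivalent MATLAB sketch is shown here; it assumes the per-participant Z-scored feature matrix Z from the previous sketch and is illustrative only.

    % Project the Z-scored features onto their first two principal components
    % and report the variance they explain (cf. the right panel of Fig. 5).
    [coeff, score, ~, ~, explained] = pca(Z);   % pca: Statistics and Machine Learning Toolbox
    pc12 = score(:, 1:2);
    fprintf('Variance explained by PC1+PC2: %.1f%%\n', sum(explained(1:2)));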

Fig. 5

Scatter plots of the mean annotation data and the first two principal components of the physiological data, labelled according to the types of videos. Ellipses denote one standard deviation.

Usage Notes

Videos

Due to copyright issues, we cannot directly share the videos as a part of this dataset31,32. Nonetheless, to assist users in ascertaining the emotional content of these videos in more detail and, if required, to replicate the experiment, we have provided links to websites where these videos are currently hosted (see /metadata/videos.xlsx). We are aware that these links might become unusable in the future and that this can cause inconvenience to the users. In such an eventuality, we encourage the users to contact us, so that we can assist them in acquiring/editing the videos.

GitLab repository

The code contained in this repository32 will not be updated, such that it remains identical to the code available in CASE_full and CASE_snippet. To ensure the same, as an extra measure, we have created a 'release' for this repository. This release points to a snapshot of the repository that is concurrent with the compilation of CASE_full and CASE_snippet. This release is available at: https://gitlab.com/karan-shr/case_dataset/tree/ver_SciData_0.

Also, users who require help with the dataset are welcome to contact us using the 'issues' feature on GitLab.

Feature extraction and downstream analysis

The code used for the technical validation of the dataset was developed in MATLAB 2014b and the R language (version 3.3.3). The feature extraction was done in MATLAB using open-source toolboxes/code like TEAP43 and an implementation of the Pan–Tompkins QRS detector44. The PCA was performed in R using the prcomp function from the stats package. This code is available to interested researchers upon request. Users of the dataset31,32 interested in leveraging the continuous nature of the provided annotations are advised to check our previous works16,29. The analysis presented in these works was primarily undertaken in the R language and can be easily reproduced.

Code Availability

The LabVIEW-based graphical code for the experiment and data acquisition is highly specific to the sensors and equipment used in our experiment. It has therefore not been made available with this dataset31,32. Nevertheless, readers who wish to replicate the experiment can contact the corresponding author for further help. For readers who want to reproduce the experiment, we hope the detailed description provided in this article will suffice.

The data preprocessing steps outlined in the Data Preprocessing section were implemented in MATLAB 2014b. The linear interpolation was performed using the interp1 function. The raw log files and the data preprocessing code are available in the dataset31,32. Hence, readers who wish to reproduce and/or improve upon our preprocessing pipeline can easily undertake the same.

References

  1. van den Oord, A. et al. WaveNet: A generative model for raw audio. Preprint at http://arxiv.org/abs/1609.03499 (2016).

  2. Hagengruber, A., Leidner, D. & Vogel, J. EDAN: EMG-controlled Daily Assistant. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction 409–409 (ACM, 2017).

  3. Picard, R. W. Affective Computing (MIT Press, 1997).

  4. McStay, A. Emotional AI: The Rise of Empathic Media (SAGE, 2018).

  5. van den Broek, E. L. et al. Affective man-machine interface: Unveiling human emotions through biosignals. In Biomedical Engineering Systems and Technologies 21–47 (Springer Berlin Heidelberg, 2010).

  6. van den Broek, E. L. Affective Signal Processing (ASP): Unraveling the mystery of emotions. (University of Twente, 2011).

  7. Hanke, M. et al. A studyforrest extension, simultaneous fMRI and eye gaze recordings during prolonged natural stimulation. Sci. Data 3, 160092 (2016).

  8. Gatti, E., Calzolari, E., Maggioni, E. & Obrist, M. Emotional ratings and skin conductance response to visual, auditory and haptic stimuli. Sci. Data 5, 180120 (2018).

  9. Schwenker, F. & Scherer, S. Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction, vol. 10183 of Lecture Notes in Computer Science (Springer, 2017).

  10. Taamneh, S. et al. A multimodal dataset for various forms of distracted driving. Sci. Data 4, 170110 (2017).

  11. Soleymani, M., Koelstra, S., Patras, I. & Pun, T. Continuous emotion detection in response to music videos. In Proceedings of the International Conference on Automatic Face and Gesture Recognition 803–808 (IEEE, 2011).

  12. Russell, J. A. Core affect and the psychological construction of emotion. Psychological Review 110, 145–172 (2003).

  13. Soleymani, M., Asghari Esfeden, S., Fu, Y. & Pantic, M. Analysis of EEG signals and facial expressions for continuous emotion detection. IEEE Transactions on Affective Computing 7, 17–28 (2015).

  14. Cowie, R. et al. FEELTRACE: An instrument for recording perceived emotion in real time. In SpeechEmotion-2000 19–24 (2000).

  15. Nagel, F., Kopiez, R., Grewe, O. & Altenmueller, E. EMuJoy: Software for continuous measurement of perceived emotions in music. Behavior Research Methods 39, 283–290 (2007).

  16. Sharma, K., Castellini, C., Stulp, F. & van den Broek, E. L. Continuous, real-time emotion annotation: A novel joystick-based analysis framework. IEEE Transactions on Affective Computing, 1–1 (2017).

  17. Kächele, M., Schels, M. & Schwenker, F. The influence of annotation, corpus design, and evaluation on the outcome of automatic classification of human emotions. Frontiers in ICT 3, 27 (2016).

  18. Koelstra, S. et al. DEAP: A Database for Emotion Analysis using Physiological Signals. IEEE Transactions on Affective Computing 3, 18–31 (2012).

  19. McKeown, G., Valstar, M., Cowie, R., Pantic, M. & Schroder, M. The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent. IEEE Transactions on Affective Computing 3, 5–17 (2012).

  20. Ringeval, F., Sonderegger, A., Sauer, J. & Lalanne, D. Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions. In 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), 1–8 (2013).

  21. Abadi, M. K. et al. DECAF: MEG-based multimodal database for decoding affective physiological responses. IEEE Transactions on Affective Computing 6, 209–222 (2015).

  22. Ringeval, F. et al. AVEC 2017: Real-life depression, and affect recognition workshop and challenge. In Proceedings of the 7th Annual Workshop on Audio/Visual Emotion Challenge 3–9 (ACM, 2017).

  23. Metallinou, A. & Narayanan, S. Annotation and processing of continuous emotional attributes: Challenges and opportunities. In 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG) 1–8 (IEEE, 2013).

  24. Baveye, Y., Dellandréa, E., Chamaret, C. & Chen, L. Deep learning vs. kernel methods: Performance for emotion prediction in videos. In International Conference on Affective Computing and Intelligent Interaction (ACII) 77–83 (IEEE, 2015).

  25. Girard, J. M. & Wright, A. G. C. DARMA: Software for dual axis rating and media annotation. Behavior Research Methods 50, 902–909 (2017).

  26. Yannakakis, G. N. & Martinez, H. P. Grounding truth via ordinal annotation. In International Conference on Affective Computing and Intelligent Interaction (ACII) 574–580 (IEEE, 2015).

  27. Antony, J., Sharma, K., van den Broek, E. L., Castellini, C. & Borst, C. Continuous affect state annotation using a joystick-based user interface. In Proceedings of Measuring Behavior 2014: 9th International Conference on Methods and Techniques in Behavioral Research 268–271 (2014).

  28. Sharma, K., Castellini, C. & van den Broek, E. L. Continuous affect state annotation using a joystick-based user interface: Exploratory data analysis. In Proceedings of Measuring Behavior 2016: 10th International Conference on Methods and Techniques in Behavioral Research 500–505 (2016).

  29. Sharma, K. et al. A functional data analysis approach for continuous 2-D emotion annotations. Web Intelligence 17, 41–52 (2019).

  30. Karashima, M. & Nishiguchi, H. Continuous Affect Rating in Cartesian Space of Pleasure and Arousal Scale by Joystick Without Visual Feedback. In HCI International 2017 – Posters' Extended Abstracts 316–323 (Springer International Publishing, 2017).

  31. Sharma, K., Castellini, C., van den Broek, E. L., Albu-Schaeffer, A. & Schwenker, F. A dataset of continuous affect annotations and physiological signals for emotion analysis. figshare. https://doi.org/10.6084/m9.figshare.c.4260668 (2019).

  32. Sharma, K. Source Code for: CASE Dataset. GitLab, https://gitlab.com/karan-shr/case_dataset (2019).

  33. Sauro, J. & Lewis, J. R. When designing usability questionnaires, does it hurt to be positive? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2215–2224 (ACM, 2011).

  34. Bradley, M. M. & Lang, P. J. Measuring emotion: the self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry 25, 49–59 (1994).

  35. Gross, J. J. & Levenson, R. W. Emotion elicitation using films. Cognition & Emotion 9, 87–108 (1995).

  36. Hewig, J. et al. A revised film set for the induction of basic emotions. Cognition and Emotion 19, 1095–1109 (2005).

  37. Bartolini, E. E. Eliciting Emotion with Film: Development of a Stimulus Set. (Wesleyan University, 2011).

  38. Physiology Suite 5.1. Thought Technology, http://www.thoughttechnology.com/pdf/manuals/SA7971.

  39. Mendes, W. B. In Methods in Social Neuroscience Vol. 1 (ed. Harmon-Jones, E.) Ch. 7 (The Guilford Press, 2009).

  40. ffprobe Development Team. FFmpeg, https://ffmpeg.org/ (2016).

  41. Jolliffe, I. Principal Component Analysis (Wiley Online Library, 2005).

  42. Ringnér, M. What is principal component analysis? Nat. Biotechnol. 26, 303–304 (2008).

  43. Soleymani, M., Villaro-Dixon, F., Pun, T. & Chanel, G. Toolbox for emotional feature extraction from physiological signals (TEAP). Frontiers in ICT 4, 1 (2017).

  44. Sedghamiz, H. Source code for: Complete Pan Tompkins Implementation ECG QRS detector. MATLAB Central File Exchange, https://de.mathworks.com/matlabcentral/fileexchange/45840-complete-pan-tompkins-implementation-ecg-qrs-detector (2018).


Acknowledgements

The authors would like to thank the participants of the pre-study and the main experiment. We also want to thank Mr. Jossin Antony for co-developing the data acquisition software and assisting in data collection, and Dr. Freek Stulp for his help and support in developing this dataset.

Author information

Affiliations

Contributions

K.S. co-developed the data acquisition software, collected the data, developed the dataset, contributed to the technical validation and composed the manuscript. C.C. co-designed the experiment, helped with participant enrolment, supervised the data collection, contributed to the technical validation and the composition of the manuscript. E.L. assisted in the design and the development of the annotation interface, supervised the data processing and edited the manuscript. A.A. verified the dataset and edited the manuscript. F.S. co-designed the experiment, supervised the technical validation and edited the manuscript.

Corresponding author

Correspondence to Karan Sharma.

Ethics declarations

Competing Interests

The authors declare no competing interests.

Additional information

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Data

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

The Creative Commons Public Domain Dedication waiver http://creativecommons.org/publicdomain/zero/1.0/ applies to the metadata files associated with this article.


Cite this article

Sharma, K., Castellini, C., van den Broek, E. L. et al. A dataset of continuous affect annotations and physiological signals for emotion analysis. Sci Data 6, 196 (2019). https://doi.org/10.1038/s41597-019-0209-0

