Customer Emotion Technology Learning Method

ISBN-13: 9798638555276
Pages: 78
Language: English
Published: 2020-04-19
Author: Johnny Ch Lok

Description

Computers were used in place of human coders to detect vocal behaviors (e.g. time spent speaking, influence over conversation partners, variation in pitch and volume, and behavior mirroring) during a negotiation task. The results imply that the speech features extracted during the first five minutes of negotiation are highly predictive of future outcomes. The researchers also noted that using computers to code speech features offers advantages such as high test-retest reliability and real-time feedback.

As a cost-effective and relatively accurate method to detect, track, and create models for behavior classification and prediction, automatic facial expression analysis has the potential to be applied across multiple disciplines. Capturing behavioral data from participants may be a more accurate representation of how and what they feel, and a better alternative to self-report questionnaires, which interrupt participants' affective and cognitive processes and are subject to bias. Our model goes further to predict future behavior within a given task (e.g. a virtual car accident or an error in performance). This opens up the possibility of such models becoming a common methodology in social scientific and behavioral research.

Data synchronization and time-series statistics calculation. In the next phase of analysis, video recordings are synchronized with data collected from experimental tasks such as surveys or simple motor tasks. This is done to map the extracted facial geometry information to behavioral output data. In the experiments, three- to five-second intervals of facial expressions were taken one to two seconds before each instance of the behavior to be predicted and used as the input data.
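The windowing step described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the function name `extract_windows`, the frame rate, and the array layout are assumptions; the window length (3-5 s) and the lead time before each event (1-2 s) come from the text.

```python
import numpy as np

def extract_windows(landmarks, event_times, fps=30.0,
                    window_s=4.0, lead_s=1.5):
    """Cut a fixed-length window of facial-landmark frames that ends
    a short lead time before each behavior event.

    landmarks   : array of shape (n_frames, n_coords)
    event_times : behavior-event timestamps in seconds
    window_s    : window length (the text uses 3-5 s intervals)
    lead_s      : gap before the event (the text uses 1-2 s)
    """
    win_frames = int(window_s * fps)
    windows = []
    for t in event_times:
        end = int((t - lead_s) * fps)   # frame index where the window ends
        start = end - win_frames
        if start >= 0:                  # skip events too close to the start
            windows.append(landmarks[start:end])
    if windows:
        return np.stack(windows)
    return np.empty((0, win_frames, landmarks.shape[1]))

# Example: 60 s of synthetic landmark data at 30 fps, two behavior events
rng = np.random.default_rng(0)
frames = rng.normal(size=(1800, 10))    # 10 landmark coordinates per frame
X = extract_windows(frames, event_times=[20.0, 45.0])
print(X.shape)                          # (2, 120, 10): 2 events, 4 s windows
```

Each resulting window can then be fed to a classifier as one training example for the behavior that followed it.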
After data synchronization we also computed a series of time-domain statistics on coordinates in each interval to use as additional inputs to our classifiers. The input data for this study consisted of videotapes of forty-one participants watching films that elicited the emotions of either amusement or sadness, along with measures of their cardiovascular activity. It should be noted that the recorded expressions were spontaneous, unlike the photographs of deliberately posed faces often used in prior facial expression research.
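The text does not list which time-domain statistics were computed, so the following is a hedged sketch using four common choices (mean, standard deviation, range, and mean absolute frame-to-frame change) applied per coordinate over one interval; the function name `time_domain_stats` is hypothetical.

```python
import numpy as np

def time_domain_stats(window):
    """Summarize one interval of landmark coordinates with simple
    time-domain statistics computed per coordinate.

    window : array of shape (n_frames, n_coords)
    returns a flat feature vector of length 4 * n_coords
    """
    feats = np.concatenate([
        window.mean(axis=0),                           # average position
        window.std(axis=0),                            # variability
        window.max(axis=0) - window.min(axis=0),       # range of motion
        np.abs(np.diff(window, axis=0)).mean(axis=0),  # frame-to-frame movement
    ])
    return feats

rng = np.random.default_rng(1)
w = rng.normal(size=(120, 10))   # one 4 s window of 10 coordinates at 30 fps
f = time_domain_stats(w)
print(f.shape)                   # (40,): 4 statistics x 10 coordinates
```

Concatenating such a vector with the raw coordinate window (or using it alone) gives the classifier a compact description of how much, and how fast, each facial landmark moved during the interval.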