Are there useful applications for three channel consumer EEG?

I stumbled upon the consumer EEG Melon (at Kickstarter). It has three electrodes and is advertised as measuring how "focused" you are. In the FAQ it says:

The Melon headband has three electrodes. Our primary electrode is on the forehead region known as FP1, where Melon can monitor brainwave activity from the prefrontal cortex.

I wonder: What can I measure with such an EEG? What applications would be unlikely?

  • How can "focus" be quantified?
  • Would it be possible to identify (different) states of sleep?
  • Could it be possible to control a cursor/game (left/right movements)?

Point by point:

The Melon headband has three electrodes. Our primary electrode is on the forehead region known as FP1, where Melon can monitor brainwave activity from the prefrontal cortex.

The problem with this is that electricity doesn't work like that. Current always flows between two points, and our electrodes measure the potential difference between two locations. Since its introduction by Tönnies around 1938, the differential amplifier has been the standard design for EEG amplifiers. That means every electrode recording involves three different sites: the Ground electrode, the Reference, and the target site. What is reported as activity at FPz is really ((FPz - Ground) - (Reference - Ground)).
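To make the subtraction concrete, here is a minimal numeric sketch (synthetic signals with made-up amplitudes, not the Melon's actual electronics) showing that the ground cancels out and the recorded channel is really the target site minus the reference:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 250)                       # 2 s at 250 Hz (illustrative sampling rate)

# Hypothetical scalp potentials in microvolts, relative to some absolute zero.
ground = 5.0 + rng.normal(0, 0.5, t.size)          # common-mode offset and noise
fpz    = ground + 10 * np.sin(2 * np.pi * 6 * t)   # target site: weak 6 Hz "theta-like" signal
ref    = ground + 40 * np.sin(2 * np.pi * 10 * t)  # reference site: strong 10 Hz "alpha-like" signal

# What a differential amplifier reports for the "FPz" channel:
recorded = (fpz - ground) - (ref - ground)

# The ground terms cancel: the channel is FPz minus the reference, so the
# reference's alpha shows up (sign-flipped) in the supposedly frontal trace.
assert np.allclose(recorded, fpz - ref)
```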
What this entails is that our measurement at the so-called active electrode is not an isolated measure of the activity directly under that electrode alone, but a mix of the activity underlying multiple sites. So what you observe in such a setup is highly dependent on where your Reference is located. I couldn't find where the Melon's Reference sits, but it has to be somewhere on the headband. The headband encircles the skull, and all of the positions allowed by the headband are over cortical areas.
Typical locations for the Reference are Cz, the nose, or an average reference, none of which is possible or meaningful in a three-electrode headband. Possibly, the reference is located over the temporal or occipital lobes.

So what the Melon will report as activity at FPz will in truth be a mix between activity at FPz, and activity at some further site also above the cortex. (It is possible they placed their Ground very close to the Reference, which is a very unusual setup… )

Furthermore, individual electrodes never pick up isolated brain sites. Electrical fields, such as those projected by cortical clusters firing in synchrony, propagate across the whole head. Every recording site therefore picks up a distinct mix of activity from all of the brain. Sites closer to the electrode contribute somewhat more strongly, since field strength falls off with the square of distance, but the patch of cortex directly under the electrode is still overwhelmed by the summed activity of everything next to and below it.
All the more so because the strongest cortical activity comes from the occipital lobes - the so-called alpha rhythm. Alpha usually dominates the recorded activity, because it reflects large populations firing in simple synchrony.

Moreover, alpha will most likely be artificially reinforced on the Melon by the placement of the reference. Wherever the reference sits on the headband, it will necessarily be closer to the occipital lobe than FPz. FPz picks up alpha with inverted polarity relative to most of the scalp, so subtracting the reference's alpha from the residual inverted alpha at FPz produces a substantial alpha effect.
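A toy numeric illustration of that referencing effect, with made-up amplitudes:

```python
# Toy amplitudes in microvolts (made up): alpha at FPz is weak and polarity-inverted
# relative to most of the scalp; alpha near a posterior reference is strong.
alpha_fpz = -5    # residual, inverted alpha at FPz
alpha_ref = 20    # alpha picked up by a reference sitting closer to the occipital lobe

reported = alpha_fpz - alpha_ref   # differential channel: target minus reference = -25

# The reported "frontal" alpha swing (25 uV) is larger than either site alone:
# the referencing scheme itself manufactures a sizeable alpha effect.
print(reported)
```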

Both of these problems have been known basically since the beginning of EEG: Hans Berger himself, discoverer of the EEG, already stumbled on the phenomenon that regardless of where he put his two electrodes (Berger never really got around to appreciating Tönnies' proposal), he'd see very similar activity. Berger's interpretation was that the whole brain is partially doing the same thing - that everything in the brain is partially united by one huge, shared oscillatory pattern: alpha.
The activity associated specifically with frontal sites, especially the theta rhythm, is much weaker. Extracting just prefrontal theta activity can be a very hard task, as one can tell from the elaborate methods employed by papers trying to isolate just that, such as this and this.

What the second paper in particular will show you, however, is the most dominant activity at FPz: eye blinks. Here is activity from FPz in an experiment of mine.

What you mostly see in the top-right image are the red dots. Each of these is a blink by our subject, and the dots totally overwhelm the rest of the activity. The spectrum below gives the same information: it's a simple power-law spectrum with almost all power at very low frequencies and no distinct peak in a frequency range such as theta or alpha.
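For intuition, here is a rough sketch of how that kind of spectrum comes about, using synthetic data standing in for an FPz recording (the blink amplitude, width, and rate are invented for illustration):

```python
import numpy as np
from scipy.signal import welch

fs = 250                                   # Hz, illustrative
t = np.arange(0, 60, 1 / fs)               # one minute of synthetic "FPz"
rng = np.random.default_rng(1)

eeg = rng.normal(0, 5, t.size)             # broadband background, in microvolts
eeg += 3 * np.sin(2 * np.pi * 6 * t)       # a weak theta component

# Add slow, large eye blinks every few seconds (roughly 100 uV, ~300 ms wide).
for onset in np.arange(2, 58, 4):
    idx = (t > onset) & (t < onset + 0.3)
    eeg[idx] += 100 * np.hanning(idx.sum())

f, psd = welch(eeg, fs=fs, nperseg=4 * fs)

# Almost all power sits below a few Hz (the blinks); the theta peak is tiny by
# comparison, so the spectrum looks close to a plain power law.
print(f"power < 2 Hz: {psd[f < 2].sum():.1f}, power 4-8 Hz: {psd[(f >= 4) & (f < 8)].sum():.1f}")
```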

I wonder: What can I measure with such an EEG? What applications would be unlikely? How can "focus" be quantified?

Interestingly, this might even be possible, since one of the best EEG predictors of focus is alpha amplitude. The distinctly prefrontal signals, however, are not what matters here; the alpha signal is.
Alpha is a very rough indicator of attentiveness, and you'll always get less information out of a single sensor than from simple introspection ("do I feel droopy?"). But you might get a rough correlate of current alertness by looking at alpha power, assuming you position the device so it picks up as much alpha as possible (i.e., not over FPz). The research here is still at an exploratory stage, well before the point where you'd rely on such systems. See, for a few examples: a three-channel system like the Melon; a 4-channel system; 14 electrodes.
When I say it "is possible" to do this with a 3-channel system, I do not mean it is possible right now - nobody has that technology yet - but that it might become possible in the future, and even then I'd hope for a larger sensor array. Note also that these are still very rough guesses: they can tell you whether you're drowsy, or whether you're currently falling asleep, but they won't distinguish more subtle states.
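If you wanted to build such a rough alertness correlate yourself, the computation would boil down to something like the sketch below: relative power in an 8-12 Hz alpha band from a single channel. The band limits, window length, and any interpretation threshold are assumptions, not validated parameters:

```python
import numpy as np
from scipy.signal import welch

def relative_alpha_power(signal: np.ndarray, fs: float) -> float:
    """Fraction of 1-30 Hz power that falls in the 8-12 Hz alpha band."""
    f, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    total = psd[(f >= 1) & (f <= 30)].sum()
    alpha = psd[(f >= 8) & (f <= 12)].sum()
    return float(alpha / total)

# Rough interpretation (an assumption, not a validated threshold): rising relative
# alpha with eyes open is a crude correlate of disengagement or drowsiness.
fs = 250
rng = np.random.default_rng(2)
segment = rng.normal(0, 5, 30 * fs) + 8 * np.sin(2 * np.pi * 10 * np.arange(30 * fs) / fs)
print(f"relative alpha: {relative_alpha_power(segment, fs):.2f}")
```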

Would it be possible to identify (different) states of sleep?

I'll have to pass on this one; I'm not a sleep expert. I suspect the Melon's electrode placement is wrong for it, though.

Could it be possible to control a cursor/game (left/right movements)?

Possibly - if you place it so that your eye movements show up as strong positive and negative deflections. From brain activity? No. Movement decoding requires many more sensors over very different sites. Motor areas sit roughly around the middle of the head, and that's where you'd need very dense electrode coverage to see anything.

Postscript edit: a few references for decoding movement intentions using (multichannel) EEG. As you see, even with multichannel, high-quality EEG, performance is not especially good. http://journal.frontiersin.org/Journal/10.3389/fnins.2014.00222/abstract http://journal.frontiersin.org/Journal/10.3389/fnhum.2013.00124/full


This type of sensor is hardly revolutionary; integration and miniaturization seem to be the key differences between, say, the Melon, or this, or a 7-sensor device such as Muse, which supposedly monitors alpha and beta waves and is already in production. Note that the 10/20 system is as old as I am: 30+ years. Additionally, the sensor itself is an analog, dumb device, without interpretation or adaptive algorithms.

The most obvious unlisted application for this device is training yourself to be a better video gamer. Seriously, monitoring the prefrontal cortex, real-time executive function, and a "focused state" would have all kinds of useful applications, especially for real-time activities involving executive function: video games, race-car driving, piloting in extreme weather, or one-on-one sports such as tennis. This may be considered the "future" in some neuroscience textbook like the one copied and pasted above, but it is very much reality: the military, MIT, and other serious research institutions have been using this kind of technology for years for exactly that purpose.

I am not a neuroscientist; I have a background in computing and liberal arts/research. However, according to http://www.brainm.com/help/Positions_and_brain_function.htm, Fp1 is associated with:

  • Logical Attention
  • Orchestrate network
  • Interactions planning
  • Decision making
  • Task completion
  • Working memory

It appears that they are attempting to integrate a tilt sensor, which could be used to control the left-right axis, such as in a video game. The documentation and specifications are lacking or vague.


Materials and Methods

Participants and Design

The final sample consisted of 35 healthy volunteers (15 women and 20 men, mean age = 25, SD = 5 years) recruited from the city where the lab is located. The initial sample comprised 47 subjects, but after an examination of the dataset, 12 participants were removed because some of the acquired signals from their experimental sessions were corrupted. All participants had normal or corrected-to-normal vision and hearing. They were asked to pay attention to the documentary as they would in an everyday situation; no mention of the importance of the ads was made. The study was approved by the Institutional Review Board of the Polytechnic University of Valencia, with written informed consent from all subjects in accordance with the Declaration of Helsinki.

The experiment was conducted in a neuromarketing lab of a large European university and comprised the three parts shown in Figure 1. In Parts 1 and 2, participants sat comfortably in a reclining chair wearing a 32-channel EEG device, two electrodes to measure heart-rate variability, and an eye-tracker (Figure 1). In Part 1, participants listened to a mindfulness audio track designed by experts to help them relax and disconnect from past experiences of the day (Fjorback et al., 2011; Demarzo et al., 2014). In Part 2 they were shown a 30-min documentary with three commercial breaks of three ads lasting about 30 s each; the first break occurred after 7 min, the second in the middle of the documentary, and the third 7 min before the end, as Figure 2 depicts. At the end of this second part, participants were informed that an interview would be held 2 h later (Part 3).

FIGURE 1. Participant in the study. Top: the EEG cap is visible, with ECG electrodes placed on the chest and the TMSI equipment. Bottom: the eye-tracking equipment is shown.


Arithmetic in the Bilingual Brain

Nicole Y. Wicha, Amanda Martinez-Lincoln, in Language and Culture in Mathematical Cognition, 2018

Electrophysiology and Event-Related Potentials

For this reason, electroencephalography (EEG) has been successfully used to study multiple stages of arithmetic processing in both monolinguals and bilinguals. EEG is a direct measure of neural activity, typically recorded from the scalp, which reflects the electrical changes over time of large populations of neurons with millisecond precision. From intracranial recordings, we know that EEG generally captures postsynaptic potentials from cortical pyramidal neurons (for review, see Luck & Kappenman, 2011). The ongoing EEG can be time-locked to particular events of interest and then averaged across trials and subjects to generate the average brain response, or event-related potential (ERP), to an experimental condition of interest. These derived ERPs are a time-sensitive, multidimensional measure of brain electrical activity, with functionally specific effects (i.e., changes in the recorded activity compared to a baseline). The amplitude of the change in voltage, the polarity (whether the difference in voltage between conditions is negative or positive), the latency (the timing of the effect), and the scalp distribution (which electrodes show the effect) of the waveform are all independently informative about the nature of ongoing cognitive activities. Importantly, ERPs can provide information about processing without depending on explicit responses or self-reports of strategies, which have been a particular issue of contention in the domain of arithmetic problem solving (e.g., Fayol & Thevenot, 2012).
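A minimal sketch of the time-locking and averaging step described above, written against plain arrays standing in for a continuous recording and a list of event markers (sampling rate and epoch window are illustrative choices, not taken from the chapter):

```python
import numpy as np

def average_erp(eeg: np.ndarray, event_samples: np.ndarray, fs: float,
                tmin: float = -0.1, tmax: float = 0.8) -> np.ndarray:
    """Cut epochs around each event, baseline-correct, and average across trials.

    eeg: 1-D continuous recording from one electrode (microvolts).
    event_samples: sample indices at which the stimuli of one condition appeared.
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for s in event_samples:
        if s - pre < 0 or s + post > eeg.size:
            continue                                  # skip events too close to the edges
        epoch = eeg[s - pre : s + post].astype(float)
        epoch -= epoch[:pre].mean()                   # subtract the pre-stimulus baseline
        epochs.append(epoch)
    return np.mean(epochs, axis=0)                    # trial-averaged ERP

# Usage sketch: erp_correct = average_erp(eeg, correct_answer_onsets, fs=500)
```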

In the first 300 ms after the onset of a stimulus, ERP components (the typically observed modulations in amplitude) generally reflect modality-specific sensory processing. For example, with visual stimuli, early components are modulated by contrast, size, brightness, and attention (Pratt, 2011). After about 300 ms, more modality-independent effects emerge, including well-characterized cognitive components that have been the most relevant to studies of arithmetic processing—the N400 (see Fig. 1) and a subsequent positive shift or late positive component.

Fig. 1. Grand average ERPs from monolingual adults recorded in response to expected and unexpected sentence-final words (top, from Wicha, Moreno, & Kutas, 2004) and in response to correct and incorrect multiplication solutions presented as number words (middle) or Arabic digits (bottom) (unpublished data from the Wicha lab). The data come from a representative electrode, with 1 s of time in milliseconds along the X-axis and voltage (in microvolts, with negative plotted up) on the Y-axis.

ERP studies of arithmetic have typically measured two effects: the congruency effect, primarily observed in verification tasks during recordings of the brain response to a provided answer (e.g., Niedeggen, Rösler, & Jost, 1999), and the problem-size effect, observed during recordings of the brain response either to provided answers or after presentation of the operands (e.g., 7 × 6) but prior to the answer itself (Jost, Hennighausen, & Rösler, 2004; Zhou et al., 2006). We focus here on the congruency effect, found in comparisons of brain responses for congruent solutions (correct answers) and incongruent solutions (incorrect answers), as this has been the primary effect measured in bilingual populations. Importantly, typical studies of the congruency effect report the time-locked brain response to the answer presented in isolation, after retrieval/calculation processes have already been initiated by prior presentation of the operands.

The congruency effect is characterized by a more negative response to incorrect solutions (e.g., 7 × 6 = 36) compared with correct solutions (e.g., 7 × 6 = 42), with a maximum peak amplitude difference occurring about 350–400 ms after the presentation of the answer (Niedeggen et al., 1999). This arithmetic congruency effect has drawn comparisons to similar effects in other domains that study the processing of potentially meaningful items (e.g., language and object processing). Notably, words that are unexpected/incongruent with a prior sentential context (e.g., the brain response to “dog” given the prior context, “he takes his coffee with cream and ___”) also elicit a more negative brain response than words that are expected/congruent with a prior context (e.g., the brain response to “sugar” given the same context), around 400 ms after the congruent/incongruent word is presented. Thus, the effect of answer congruency has traditionally been labeled an N400 effect (a negative-going wave sensitive to semantics/meaningfulness, peaking at 400 ms; see Kutas & Federmeier, 2011, for a review of N400 response properties).

Both the timing and the size of this congruency effect (i.e., the average difference in voltage across a time window surrounding the ERP effect) can be taken as independent measures of the brain's readiness to categorize an arithmetic problem as correct or not. If the maximal difference occurs later in time (i.e., at 400 ms instead of 350 ms), then it is reasonable to infer that the brain had to do additional processing before the correctness judgment could be fully rendered (much like an RT delay). Moreover, if the average voltage difference is smaller or larger in response to different contrasts of answer subtypes (e.g., table-related or table-unrelated incorrect solutions) or to the same answer subtypes under different contexts (e.g., digits/words and fast/slow presentation latencies), then the size of the difference itself can be taken as an indication of the brain's ability to distinguish between the correct and incorrect solutions. This initial and short-lived (~200 ms) effect is generally followed by a slow positive ERP selective to unambiguously incorrect answers. Notably, this later effect is not as consistent or as well characterized, and it also seems to reflect sensitivity to more subtle differences in answer types (as in traditional language studies; see Van Petten & Luka, 2012, for review).
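A sketch of how the congruency effect is typically quantified from such averaged ERPs: the mean voltage difference between incorrect- and correct-solution waveforms in a window around the effect. The 300–500 ms window below is a common analysis choice, not one prescribed by this chapter:

```python
import numpy as np

def mean_amplitude(erp: np.ndarray, fs: float, tmin_epoch: float,
                   win: tuple = (0.30, 0.50)) -> float:
    """Mean voltage of a trial-averaged ERP inside a time window (seconds, relative to stimulus onset)."""
    start = int((win[0] - tmin_epoch) * fs)
    stop = int((win[1] - tmin_epoch) * fs)
    return float(erp[start:stop].mean())

# Congruency effect = incorrect-solution ERP minus correct-solution ERP in the window;
# a more negative value indicates a larger N400-like effect.
# effect = mean_amplitude(erp_incorrect, fs=500, tmin_epoch=-0.1) - mean_amplitude(erp_correct, fs=500, tmin_epoch=-0.1)
```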


1) Small Data Is The New Big Data

The research of ‘Nextstage Evolution’ concluded, as noted in their Facebook and LinkedIn profiles:

Companies are realizing ‘big data’ isn’t as useful as they were told, and that smaller, precise data sets answer questions quicker and cheaper.

In that case, a Behavioral Economist can help companies reduce the costs and time spent on ‘big data’. Through their research, they can find which variables lead to these precise ‘small data’. Their ability to separate data is reflected in a Behavioral Economics tool: ‘conceptual models’.

Behavioral scientist Alain Samson, editor of ‘The Behavioral Economics Guide 2015’, suggests in his guide that these models are used to identify consumer groups, classified by their needs and wants.

The most important factor, in that case, is human psychology.

In order to classify those groups, Behavioral Economists analyze descriptive characteristics, such as gender, income, age and education, and behavioral dimensions, such as benefits, usage rates and loyalty status.

Behavioral economists could also effectively apply their analysis and understanding of consumer behavior.

It includes the consumer’s behavior when face-to-face with the seller, or the pre-purchase behavior of the consumer, based on the information available up to that point. More emphasis could also be given to the post-purchase outcomes and reactions of existing consumers, in order to evoke positive feelings (about the performance of the good) that will lead to additional word of mouth and loyalty to the company’s brand.


Discussion

In this review, we summarize previous studies analyzing EEG signals as biological markers of affective mechanisms and for affect recognition in the marketing area (as shown in Table 1). The majority of these studies, especially those using machine learning techniques and algorithms, have been published in the last 10 years. This review provides new directions for neuromarketing data analyses and fosters cooperation among scholars from diverse disciplines, such as information science, neuroscience, marketing, and psychology. Although the number of EEG-based AC studies in marketing has recently increased with no signs of slowing down, theoretical and operational challenges must be settled before moving forward.

Table 1. Summary of current findings on EEG-based affective computing in marketing.

First, it would be helpful to pay more attention to multiclass affective classification. As this review shows, most previous studies are based on dimensional emotion theory, typically concerning the dimensions of arousal, valence, liking, and dominance. The state of the art usually relies on the affective polarity of these components (e.g., positive or negative) and proposes approaches that mostly focus on binary affective classification. However, to study the affective states of consumers, it would be more interesting to go deeper into the classification and detect subtle affective changes in marketing. Furthermore, marketing scenarios may induce multiple emotions in customers, and this coexistence should be considered in affective tagging. We recommend that future studies focus on two issues to develop a more accurate affective definition and conduct better forecasting: (1) they should aim for a deeper understanding of consumer ambivalence, characterized by the co-occurrence of positive and negative emotions (Kreibig and Gross, 2017; Hu et al., 2019), and (2) they should consider emotion dyads, namely mixes of primary emotions, as proposed by Plutchik (1980).

Multidimensional and multimodal feature fusion can yield better recognition performance. When studying EEG-affect relationships, EEG-based AC studies assume that EEG signals can sufficiently depict and predict human affective states. However, this hypothesis cannot always be taken as true, because the relationship between physiological responses and psychological states can be very complex (Cacioppo and Tassinary, 1990; Hu et al., 2019). To achieve precise prediction and improved generalization, we first suggest reducing the large number of features extracted from EEG signals and then performing feature selection and fusion. The most widely used features include differential asymmetry, GFP, PSD, and ERPs. It may be that fusing features derived from different EEG signal types leads to better recognition performance (Hakim and Levy, 2019). It is worth noting that future studies should be more cautious regarding the reliability and validity of “one-to-one” relationships (one affective state associated with one and only one EEG feature) (Bridwell et al., 2018; Hu et al., 2019). Second, recent studies have revealed that multimodal frameworks can effectively increase emotion recognition accuracy and robustness compared to unimodal frameworks (Guixeres et al., 2017; Avinash et al., 2018; Kumar et al., 2019). The advantage of multiple modalities (for example, vision, sound, or smell) is that the weaknesses of one modality are offset by the strengths of another, increasing validity and usability. Future studies may derive features from modalities other than EEG while collecting and analyzing data using machine learning, natural language processing, and automatic speech recognition technology, evolving from unimodal analyses to multimodal fusion.
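As a concrete, if simplified, picture of the feature-extraction-plus-classification pipelines the review refers to, the sketch below computes log band-power features per channel and cross-validates a generic classifier on a binary valence label; the band definitions, classifier, and labels are placeholder assumptions rather than any particular study's setup:

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}   # assumed band limits

def band_power_features(trial: np.ndarray, fs: float) -> np.ndarray:
    """Per-channel log band power for one trial; trial shape = (n_channels, n_samples)."""
    f, psd = welch(trial, fs=fs, nperseg=int(2 * fs), axis=-1)
    feats = [np.log(psd[:, (f >= lo) & (f < hi)].mean(axis=-1)) for lo, hi in BANDS.values()]
    return np.concatenate(feats)

def valence_classification_accuracy(trials: np.ndarray, labels: np.ndarray, fs: float) -> float:
    """trials: (n_trials, n_channels, n_samples); labels: 0 = negative, 1 = positive valence."""
    X = np.array([band_power_features(tr, fs) for tr in trials])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, labels, cv=5).mean()
```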

The use of portable wireless EEG devices and virtual reality (VR) technology can alleviate the lack of ecological validity in marketing studies. On the hardware side, the whole-brain coverage, the time-consuming preparation procedure, and the prohibitive cost of a professional headset with wet electrodes make it impractical to transfer laboratory setups to real-world marketing applications. A group of recent studies has confirmed the feasibility of using consumer-level EEG headsets for AC, with promising results. For example, the widely used wireless EPOC headset (e.g., Kuan et al., 2014; Lin et al., 2014; Friedman et al., 2015; Yang et al., 2015; Gauba et al., 2017; Yadava et al., 2017; Kumar et al., 2019) shows promise due to its light weight, low price, and ease of use. Studies on the EPOC headset generally agree that it can be used to acquire reliable EEG signals in marketing, but researchers should pay attention to its relatively low signal-to-noise ratio and poor signal stability (Friedman et al., 2015). We suggest that researchers evaluate the performance of consumer-level devices using the standard testing procedures proposed by Hu et al. (2019). In addition, to bridge the gap between the laboratory environment and real market scenarios, the use of VR is an important trend that can effectively enhance the sense of immersion. It enables consumers to get a direct, intuitive, and concrete understanding of the appearance, quality, and performance of products (Guo and Elgendi, 2013). Furthermore, VR makes it possible to simulate and assess retail and consumption environments under controlled laboratory conditions (Marín-Morales et al., 2017), allowing variables to be isolated and manipulated in a cost-effective manner.

Studying interactions among multiple customers is critical for understanding the marketing ecosystem, which consists of interrelated trends that shape consumer behaviors. Most AC studies in marketing have concentrated mainly on a single consumer’s EEG activity and may ignore the socio-affective interactions and processes related to consumer behavior (Hasson et al., 2012). The EEG-based hyperscanning technique [for a recent review, see Liu et al. (2018)] provides a way to explore dynamic brain activity between two or more interacting customers and its underlying neural affective mechanisms. In previous hyperscanning studies, interpersonal neural synchronization (INS) has been verified to be a crucial neural marker for different kinds of social interaction, such as communication (Stephens et al., 2010), collaborative decision making (Montague et al., 2002; Hu et al., 2018), and imitation (Pan et al., 2017). As consumer behavior is inherently social and interactive in nature, EEG-based INS could be used to study the biological mechanisms of shared intentionality in consumption, panic buying, collective emotion, and group purchasing.
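One common INS metric in such hyperscanning work is the phase-locking value between signals recorded simultaneously from two people; a minimal sketch follows, where the frequency band and the single-channel setup are assumptions made for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x: np.ndarray, y: np.ndarray, fs: float,
                        band: tuple = (8.0, 12.0)) -> float:
    """Inter-brain phase-locking value between two single-channel signals in one band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    # Mean resultant length of the phase difference: 0 = no locking, 1 = perfect locking.
    return float(np.abs(np.exp(1j * (phase_x - phase_y)).mean()))

# Usage sketch: ins = phase_locking_value(eeg_customer_a, eeg_customer_b, fs=256)
```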


Affiliations

University of Tübingen, Tübingen, Germany

University of Mainz, Mainz, Germany

Gerd Grübler & Elisabeth Hildt

University of Dresden, Dresden, Germany

Clinique Romande de Réadaptation, Sion, Switzerland

École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

Fondazione Santa Lucia, Rome, Italy

Iolanda Pisotta & Angela Riccio

Spinal Cord Injury Center of Heidelberg University Hospital, Heidelberg, Germany

Forschungsstelle Neuroethik/Neurophilosophie, Johannes Gutenberg-Universität Mainz, Gresemund-Weg 4, Raum 2.437, 55099, Mainz, Germany



10 Real Life Examples Of BCI Devices That You Can Control With Your Thoughts

Since Hans Berger's first electroencephalography (EEG) experiments on humans, published in 1929, the idea that brain activity could be used as a communication channel has steadily gained ground. EEG is a technique that measures, on the scalp and in real time, small electrical signals that reflect brain activity. Its discovery enabled researchers to measure brain activity in humans and to start trying to decode that activity.

In the early days of BCI research, a substantial barrier to using EEG as a brain-computer interface was the extensive training required before users could work with the technology.

EEG signals are easily recorded non-invasively through electrodes placed on the scalp, and for that reason EEG is by far the most widespread recording modality. However, the signals have to cross the skull, scalp, and many other layers, so by the time they reach the electrodes they are weak, hard to acquire, and of poor quality. The technique is moreover severely affected by background noise generated either inside the brain or externally on the scalp.
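The standard first pass at this noise problem is band-pass and power-line notch filtering; a minimal sketch with typical, but not mandatory, cutoff choices:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def clean_eeg(raw: np.ndarray, fs: float, line_freq: float = 50.0) -> np.ndarray:
    """Band-pass 1-40 Hz and notch out power-line interference from a 1-D EEG trace."""
    b, a = butter(4, [1 / (fs / 2), 40 / (fs / 2)], btype="band")   # band-pass 1-40 Hz
    out = filtfilt(b, a, raw)
    bn, an = iirnotch(line_freq, Q=30, fs=fs)                       # 50 Hz (or 60 Hz) notch
    return filtfilt(bn, an, out)

# Usage sketch: cleaned = clean_eeg(raw_channel, fs=256, line_freq=50.0)
```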

EEG is the most studied non-invasive interface, mainly due to its fine temporal resolution, ease of use, portability and low set-up cost.

Recently a number of companies have scaled back medical-grade EEG technology to create inexpensive BCIs, and this technology has been built into toys and gaming devices. Let us have a look at the best consumer EEG devices released.

1. In 2007 NeuroSky released the first affordable consumer EEG, along with the game NeuroBoy.

The Adventures of NeuroBoy is a game that monitors your brain activity via a Bluetooth headset (called a “MindSet”) and uses that data to interact with virtual objects. The technology is still limited to measuring “concentration” and “meditation,” so the game itself still relies on keyboard and mouse commands to manoeuvre NeuroBoy on screen. In order to handle any object, it must first be selected with a mouse click. Players have a choice of four abilities: pushing, pulling, levitating and burning.

This was also the first large-scale EEG device to use dry sensor technology.

2. In 2008 OCZ Technology developed a device for use in video games that relies primarily on electromyography. The Neural Impulse Actuator carries out specific commands that your noggin wants to perform, in conjunction with a mouse and keyboard, in FPS games.

The inside of the NIA headband has three visible sensor pads that pick up activity, which the device classifies into three classes of signals: electrooculographic, electroencephalographic, and electromyographic, said to reflect the behaviour of the extraocular muscles, the brain, and the facial muscles respectively.

3. In 2008, Japanese publisher Square Enix released a mind-controlled video game known as “Judecca”. Judecca was designed for use with the NeuroSky MindSet.

4. In 2009 Mattel partnered with NeuroSky to release one of the first commercial brain wearables. The device was an EEG headset used to play Mattel's game Mindflex, in which users move a ball around a small obstacle course using their “brain power”: increased concentration raises the ball in the air via a motorized fan, and relaxation lowers it.

5. In 2009 Uncle Milton Industries partnered with NeuroSky to release the Star Wars Force Trainer, a game designed to create the illusion of possessing The Force.

6. In 2009, Emotiv Systems released a headset called the EPOC that allows the user to play video games using only their brainwaves. The device can read 4 mental states, 13 conscious states, facial expressions, and head movements. The EPOC's sensors are dampened with a saline solution for a better connection.

7. In November 2011, Japanese company Neurowear created a pair of supposedly mind-reading plush cat ears, called Necomimi, that react to the wearer’s moods. Necomimi runs on the same kind of EEG technology used to detect seizures and measure brain activity. The ears are powered by four AAA batteries and feature two motors along the headband, which move the ears in response to signals from the wearer’s brain.

8. In March 2012 g.tec introduced the intendiX-SPELLER, the first commercially available BCI system for home use, which can be used to control computer games and apps. It can detect different brain signals with an accuracy of 99%. g.tec has hosted several workshop tours to demonstrate the intendiX system and other hardware and software to the public, such as a tour of the US West Coast in September 2012.

9. In February 2014, They Shall Walk (a nonprofit organization focused on constructing exoskeletons, dubbed LIFESUITs, for paraplegics and quadriplegics) began a partnership with James W. Shakarji on the development of a wireless BCI.

10. In 2016, a group of hobbyists developed an open-source BCI board that sends neural signals to the audio jack of a smartphone, dropping the cost of entry-level BCI to less than Rs. 2000. The basic diagnostic software is available for Android devices, as well as a text entry app for Unity.


Application of EEG and Interactive Evolutionary Design Method in Cultural and Creative Product Design

In order to design a cultural and creative product that matches a target image, this paper proposed using EEG, an interactive genetic algorithm (IGA), and a back-propagation neural network (BPNN) to analyze users’ image preferences. Firstly, pictures of cultural elements were grouped according to pleasantness value and emotional state using the PAD emotion scale, and the brain waves induced by pictures of cultural elements with different degrees of pleasantness were recorded with an electroencephalograph. The preference for cultural elements was then obtained according to the theory of frontal alpha asymmetry. Secondly, the semantic differential method was used to carry out a questionnaire survey of users, and factor analysis was used to statistically analyze the survey results and extract users’ perceptual image semantics for cultural and creative products. Thirdly, an interactive evolutionary design system based on IGA and BPNN was constructed. Based on the cultural elements preferred by users, the designer designed the initial set of morphological characteristics, and the fitness value was determined according to the degree of user preference for the image semantics. Meanwhile, in order to reduce the fatigue caused by users’ interactive evaluation, a BPNN was introduced to simulate artificial evaluation. Finally, the proposed method was verified through the practice of flavoring-bottle design. User preference requirements can be used as feedback to help designers understand users’ emotional design needs and generate design schemes that satisfy users’ perceptual image.

1. Introduction

With the advent of the experience economy, obtaining user preferences quickly and accurately is a key link in innovative product design [1]. Therefore, this paper attempted to use EEG technology and an interactive evolutionary design method to analyze users’ emotional experience. Product shape design was conducted based on user preferences, and users participated in cultural and creative product design through computer-aided design, which could directly and effectively help designers obtain information about design requirements.

In the process of cultural and creative product design, many scholars have studied the extraction of cultural genes and design elements. For example, Gou et al. [2] extracted the cultural genes of Banpo painted pottery based on genetic theory. Wang et al. [3] extracted the form, color, and connotation factors contained in traditional culture. Zhu and Luo [4] interpreted and excavated cultural elements along four dimensions: semantic, syntactic, contextual, and pragmatic. Chai et al. [5] analyzed the impact of different cultural elements on consumer satisfaction using a continuous fuzzy Kano model. Liu et al. [6] studied the extraction of color features from a traditional pattern library and recommended to designers the color scheme that best reflected the original features of the culture. Luo and Dong [7] introduced the concept of “ontology” from knowledge engineering and developed a cultural-artifact knowledge management system for cultural creative design. The cultural and creative products designed with the above methods could reflect traditional cultural elements, but whether the products met users’ emotional needs required further research.

When a user observes a product, the external visual stimulation induces changes in the EEG. Analyzing EEG signals can therefore measure users’ perception, preference, and emotion accurately and objectively, and reveal users’ psychological needs. At present, EEG technology has been widely applied in the field of industrial design, for example in commercial advertising design [8], seat design [9], interface design [10], and comfort evaluation [11]. In the field of emotional measurement, Li et al. [12] studied the influence of apple-tree leaves and flowers on human brain waves and found that different visual stimuli induce different emotions. Zhuang [13] selected two shopping websites with large differences in appearance and usability as objects of study and, taking EEG and eye-movement indices as independent variables, constructed an emotional measurement model using partial least squares regression. Addressing the scheme-selection problem in automotive industrial design, Tang et al. [14] proposed to objectively evaluate user experience through EEG data. This paper therefore draws on the above research in cultural and creative product design and uses EEG technology for quantitative research to help designers develop products that meet users’ emotional needs.
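The frontal alpha asymmetry index mentioned in the abstract is usually computed as the difference of log alpha power between homologous frontal electrodes; a minimal sketch follows, where F3/F4 and the 8-13 Hz band are conventional choices rather than parameters taken from this paper:

```python
import numpy as np
from scipy.signal import welch

def frontal_alpha_asymmetry(left_f3: np.ndarray, right_f4: np.ndarray, fs: float) -> float:
    """ln(alpha power at F4) - ln(alpha power at F3).

    Higher values are conventionally read as relatively greater left-frontal
    activation, i.e. approach motivation / positive preference.
    """
    def alpha_power(sig: np.ndarray) -> float:
        f, psd = welch(sig, fs=fs, nperseg=int(2 * fs))
        return float(psd[(f >= 8) & (f <= 13)].mean())

    return float(np.log(alpha_power(right_f4)) - np.log(alpha_power(left_f3)))

# Usage sketch: faa = frontal_alpha_asymmetry(eeg_f3, eeg_f4, fs=500)
```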

In traditional cultural and creative product shape design, designers rely on personal experience and subjective speculation to infer users’ emotional needs. Because of the lack of a scientific and objective evaluation mechanism, users’ image preferences cannot be reflected in product design accurately and quickly. Integrating decision makers’ strengths into evolutionary design methods is a research hotspot. Gong et al. [15] presented a novel evolutionary algorithm based on the theory of preference polyhedra that interacts with a decision maker during the optimization process to obtain the most preferred solution. Among such algorithms, IGA is an effective method for solving optimization problems with implicit criteria, because it incorporates a user’s intelligent evaluation into traditional evolution mechanisms [16]. In IGA, decision makers assign individual fitness evaluations interactively. It has very high application value and extensive practical significance in artistic creation, design, and other areas that lean on human subjective feeling. IGA has been applied to clothing design [17], chair design [18], building design [19], logo design [20], product color planning [21], personalized search [22], and so on. However, IGA also has shortcomings: during evaluation, users are prone to fatigue. How to reduce the fatigue caused by users’ interactive evaluation is therefore a research focus for many scholars. For example, Xu and Sun [23] put forward a product modeling design method based on an orthogonal-interactive genetic algorithm. Gong et al. [24] introduced the idea of stratification into interactive evolutionary computation and proposed hierarchical interactive evolutionary computation. Sun et al. [25] proposed a new surrogate-assisted IGA, where the uncertainty in subjective fitness evaluations is exploited both in training the surrogates and in managing them.

In order to reduce users’ fatigue during the interactive evaluation in IGA, this paper introduced a neural network to assist the IGA. A neural network is a nonlinear algorithm often used to establish relationships between complex input and output variables, and it has been successfully applied to perceptual-image-based product shape design. Using a fuzzy neural network, Hsiao and Tsai [26] constructed the correspondence between the modeling data of a conceptual product and perceptual image vocabulary. Diego-Mas and Alcaide-Marzal [27] proposed a consumer emotional response model based on neural networks. Through experimental research on mobile phones, Yeh and Lin [28] established an artificial neural network model between product image and consumers’ perceptual image based on the concept of Kansei engineering. In view of the feasibility of neural networks in Kansei engineering design, this paper combined a BPNN to simulate user evaluation and realize automatic solution of the scheme.
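A compact sketch of the loop described in this and the preceding paragraph: an interactive GA whose early generations are scored by the user and whose later generations are scored by a small neural-network surrogate trained on those ratings. The design encoding, population size, and preference function below are invented for illustration, not taken from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N_GENES, POP, GENERATIONS, USER_RATED_GENS = 8, 12, 20, 5

def crossover_and_mutate(parents: np.ndarray) -> np.ndarray:
    children = parents[rng.integers(0, len(parents), (POP, 2))].mean(axis=1)   # blend crossover
    return np.clip(children + rng.normal(0, 0.1, children.shape), 0, 1)        # mutation

def ask_user(designs: np.ndarray) -> np.ndarray:
    # Placeholder for interactive evaluation: in the real system the user scores
    # rendered design schemes; here a made-up preference function stands in.
    return 10 - np.abs(designs - 0.7).sum(axis=1)

population = rng.random((POP, N_GENES))            # each row encodes one design scheme
surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
history_x, history_y = [], []

for gen in range(GENERATIONS):
    if gen < USER_RATED_GENS:                      # early generations: real user ratings
        fitness = ask_user(population)
        history_x.append(population)
        history_y.append(fitness)
        surrogate.fit(np.vstack(history_x), np.concatenate(history_y))
    else:                                          # later generations: BPNN predicts ratings
        fitness = surrogate.predict(population)
    parents = population[np.argsort(fitness)[-POP // 2:]]   # keep the top half
    population = crossover_and_mutate(parents)

print("best predicted design:", population[np.argmax(surrogate.predict(population))])
```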

This paper took the flavoring bottle design as an example to explore the innovative shape design scheme. Based on the cognition of users and designers, the product scheme was optimized by combining IGA and BPNN, to reflect the users’ emotional need in product design accurately.

2. Outline of the Proposed Method

As shown in Figure 1, this paper first extracts the cultural elements preferred by users and sets them as the source of morphological characteristics for the subsequent evolutionary design (see Section 3). Secondly, we extract users’ perceptual image of cultural and creative products, which is set as the target image of the subsequent evolutionary design (see Section 4). On this basis, interactive evolutionary design is carried out: IGA is used to evolve the product shape, and the fatigue problem of interactive evaluation is addressed by combining it with a BPNN (see Section 5). Finally, the process is verified with the example of flavoring-bottle design (see Section 6).



Semantic clustering reveals the core concepts and themes of consumer neuroscience.

We propose a framework to assess the evidence from consumer neuroscience.

Evidence is growing on the role of several brain regions in consumption.

Important obstacles exist against the integration of consumer behaviour theories.

Neuroimaging cannot replace traditional consumer research techniques.


Materials and Methods

Participants and Design

Final sample consisted of 35 randomly healthy volunteers (15 women and 20 men, mean age = 25 SD = 5 years) recruited from the city where the lab is located. Initial sample measured was 47 subjects but after an examination of the dataset was carried out, 12 participants were removed due to corrupted data from experimental sessions in some of the acquired signals. All of the participants showed corrected-to-normal vision and hearing. They were asked to pay attention to the documentary as in a common situation. No mention of the importance of the ads was made. The study was approved by the Institutional Review Board of the Polytechnic University of Valencia with written informed consent from all subjects in accordance with the Declaration of Helsinki.

The experiment was conducted in a neuromarketing lab of a large European university and comprises the three parts shown in Figure 1. In Parts 1 and 2, participants sat comfortably on a reclining chair with a 32-channel EEG device, with two electrodes to measure heart variability and an eye-tracker (Figure 1). In Part one, participants were exposed to a mindfulness audio designed by experts to help them relax and disconnect from past experiences of the day (Fjorback et al., 2011 Demarzo et al., 2014). Then, in Part two they were shown a 30-min long documentary with three commercial breaks of three ads lasting about 30 s each the first break occurred after 7 min, the second in the middle of the documentary, and the third 7 min before the end as Figure 2 depicts. At the end of this second part, participants were informed that an interview would be held 2 h later (Part 3).

FIGURE 1. Participant in the study. (Top: The EEG cap is visible. ECG electrodes placed on the chest and TMSI equipment). (Bottom: Eye tracking equipment is shown).


1) Small Data Is The New Big Data

The research of ‘Nextstage Evolution’ concluded, as referred in their Facebook and LinkedIn profiles:

Companies are realizing ‘big data’ isn’t as useful as they were told, and that smaller, precise data sets answer questions quicker and cheaper.

In that case, a Behavioral Economist can help companies reduce their costs and time spent on ‘big data’. In their research, they could find which variables lead to these precise ‘small data’. Their ability of separating data is reflected through a Behavioral Economics tool, the ‘conceptual models’.

Behavioral scientist Alain Samson, editor of ‘The Behavioral Economics Guide 2015’, suggests in his guide that these models are used to identify consumer groups, classified by their needs and wants.

The most important factor, in that case, is human psychology.

In order to classify those groups, Behavioral Economists analyze descriptive characteristics, such as gender, income, age and education, and behavioral dimensions, such as benefits, usage rates and loyalty status.

Behavioral economists could effectively apply the analysis and understanding of consumer behavior, as well.

It includes the consumer’s behavior when face-to-face with the seller, or the pre-purchase behavior of the consumer, according to the available information collected up to that point. More emphasis could also be given to the post-purchase outcomes and reactions of the existing consumers, in order to evoke positive feelings (about the performance of the good) that will lead to additional word of mouth and loyalty on the company’s brand.


Discussion

In this review, we summarize previous studies analyzing EEG signals as biological markers in affective mechanism and recognition in the marketing area (as shown in Table 1). The majority of the studies, especially those using machine learning techniques and algorithms, have been published in the last 10 years. This review provides new directions regarding neuromarketing data analyses and fosters cooperation among scholars from miscellaneous disciplines, such as information science, neuroscience, marketing, and psychology. Although there has been a recent increase in the number of EEG-based AC studies in marketing with no signs of slowing down, theoretical and operational challenges must be settled before moving forward.

Table 1. Summary of current findings on EEG-based affective computing in marketing.

First, it would be helpful to pay more attention to multiclass affective classification. As this review shows, most of the previous studies are based on dimensional emotion theory, typically concerning the dimensions of arousal, valence, liking, and dominance. The state of the art usually relies on the affective polarity of its components (e.g., positive or negative) and proposes approaches that mostly focus on binary affective classification. However, to study the affective states of consumers, it would be more interesting to go deeper into the classification and detect subtle affective changes in marketing. Furthermore, marketing scenarios may induce multiple emotions in customers. The phenomena of coexistence should be considered in affective tagging. We recommend that future studies should focus on two issues to develop a more accurate affective definition and conduct better forecasting: (1) they should aim for a deeper understanding of consumer ambivalence, characterized by the co-occurrence of positive and negative emotions (Kreibig and Gross, 2017 Hu et al., 2019) and (2) consider emotion dyads, namely, a mix of primary emotions, raised by Plutchik (1980).

The multidimensional and multimodal feature fusion can obtain better recognition performance. When studying EEG-affect relationships, EEG-based AC studies assume that EEG signals can sufficiently depict and predict human affective states. However, this hypothesis cannot always be assumed to be true because the relationship between physiological responses and psychological states could be very complex (Cacioppo and Tassinary, 1990 Hu et al., 2019). To achieve precise prediction and improved generalization, first, we suggest decreasing the abundant number of features from EEG signals and further perform feature selection and fusion. The most widely used features include differential symmetry, GFP, PSD, and ERPs. It might be the case that a fusion of features derived from different EEG signal types will lead to better recognition performance (Hakim and Levy, 2019). It is worth noting that future studies should be more cautious regarding the reliability and validity of “one-to-one” relationships (one affective state is associated with one and only one EEG feature) (Bridwell et al., 2018 Hu et al., 2019). Second, recent studies have revealed that multimodal frameworks can effectively increase emotion recognition accuracy and robustness compared to unimodal frameworks (Guixeres et al., 2017 Avinash et al., 2018 Kumar et al., 2019). The advantage of multiple modalities (for example, vision, sound, or smell) helps to increase the validity and usability since the weaknesses of one modality are offset by the strengths of another. Future studies may derive features from modalities other than EEG while collecting and analyzing data by using machine learning, natural language processing, and automatic speech recognition technology, evolving from unimodal analyses to multimodal fusion.

The use of portable wireless EEG devices and virtual reality (VR) technology can alleviate the lack of ecological validity in marketing studies. For EEG hardware devices, the whole-brain coverage, the time-consuming preparation procedure, and the prohibitive cost of a professional headset with wet electrodes make it impractical and difficult to transfer the laboratory to real-world applications in marketing. A group of recent studies has confirmed the feasibility of using consumer-level EEG headsets for AC with promising results. For example, the widely used wireless EPOC headset (e.g., Kuan et al., 2014 Lin et al., 2014 Friedman et al., 2015 Yang et al., 2015 Gauba et al., 2017 Yadava et al., 2017 Kumar et al., 2019), due to its light weight, low price, and ease of use, shows promise. Studies on the EPOC headset seem to agree that it can be applied to acquire reliable EEG signals in marketing, but researchers should pay attention to its relatively low signal-to-noise ratio and poor signal stability (Friedman et al., 2015). We suggest that researchers evaluate the performance of consumer-level devices using the standard testing procedures proposed by Hu et al. (2019). In addition, to bridge the gap between the laboratory environment and real market scenarios, the use of VR is an important trend that can effectively enhance the experience of immersive sensation. It enables consumers to get a direct, intuitive, and concrete understanding of the appearance, quality, and performance of products (Guo and Elgendi, 2013). Furthermore, VR makes it possible to simulate and assess retail and consumption environments under controlled laboratory conditions (Marín-Morales et al., 2017), allowing the isolation and modification of variables in a cost-effective manner.

Studying interactions among multiple customers is critical for understanding the marketing ecosystem, which consists of interrelated trends that shape consumer behaviors. Most AC studies in marketing have concentrated mainly on a single consumer’s EEG activity and may ignore the socio-affective interaction and processes related to consumer behavior (Hasson et al., 2012). The EEG-based hyperscanning technique [for a recent review, see Liu et al. (2018)] provides a way to explore dynamic brain activities between two or more interactive customers and their underlying neural affective mechanisms. In previous hyperscanning studies, interpersonal neural synchronization (INS) has been verified to be a crucial neural marker for different kinds of social interactions, such as communication (Stephens et al., 2010), collaborative decision making (Montague et al., 2002 Hu et al., 2018), and imitation (Pan et al., 2017). As consumer behavior is inherently social and interactive in nature, EEG-based INS could be used to study the biological mechanism for shared intentionality of consumption, panic buying, collective emotion, and group purchase.


Affiliations

University of Tübingen, Tübingen, Germany

University of Mainz, Mainz, Germany

Gerd Grübler & Elisabeth Hildt

University of Dresden, Dresden, Germany

Clinique Romande de Réadaptation, Sion, Switzerland

École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

Fondazione Santa Lucia, Rome, Italy

Iolanda Pisotta & Angela Riccio

Spinal Cord Injury Center of Heidelberg University Hospital, Heidelberg, Germany

Forschungsstelle Neuroethik/Neurophilosophie, Johannes Gutenberg-Universität Mainz, Gresemund-Weg 4, Raum 2.437, 55099, Mainz, Germany

You can also search for this author in PubMed Google Scholar

You can also search for this author in PubMed Google Scholar

You can also search for this author in PubMed Google Scholar

You can also search for this author in PubMed Google Scholar

You can also search for this author in PubMed Google Scholar

You can also search for this author in PubMed Google Scholar

You can also search for this author in PubMed Google Scholar

Corresponding author


10 Real Life Examples Of BCI Devices That You Can Control With Your Thoughts

Since the first experiments of Electroencephalography (EEG) on humans by Hans Berger in 1929, the idea that brain activity could be used as a communication channel rapidly emerged. EEG is a technique which measures, on the scalp and in real-time, small electrical currents that reflect brain activity. As such, EEG discovery has enabled researchers to measure brain activity in humans and to start trying to decode this activity.

In the early days of BCI research, another substantial barrier to using EEG as a brain-computer interface was the extensive training required before users can work with the technology.

EEG signals are easily recorded in a non-invasive manner through electrodes placed on the scalp, for which that reason it is by far the most widespread recording modality. However, it provides very poor-quality signals as the signals have to cross the scalp, skull, and many other layers. This means that EEG signals in the electrodes are weak, hard to acquire and of poor quality. This technique is moreover severely affected by background noise generated either inside the brain or externally over the scalp.

EEG is the most studied non-invasive interface, mainly due to its fine temporal resolution, ease of use, portability and low set-up cost.

Recently a number of companies have scaled back medical grade EEG technology to create inexpensive BCIs. This technology has been built into toys and gaming devices. Let us have a look at the best consumer-based EEGs released.

1. In 2007 NeuroSky released the first affordable consumer based EEG along with the game NeuroBoy.

The Adventures of NeuroBoy is a game that monitors your brain activity via a Bluetooth headset (called a “MindSet”) and uses that data to interact with virtual objects. The technology is still limited to measuring “concentration” and “meditation,” so the game itself still relies on keyboard and mouse commands to manoeuvre NeuroBoy on screen. In order to handle any object, it must first be selected with a mouse click. Players have a choice of four abilities: pushing, pulling, levitating and burning.

This was also the first large-scale EEG device to use dry sensor technology.

2. In 2008 OCZ Technology developed a device for use in video games relying primarily on electromyography. Neural Impulse Actuator carries out specific commands that your noggin wants to do in conjunction with a mouse and keyboard in FPS games.

The inside of the NIA headband has three visible neural sensor pads that pick up the neural activity and classifies the activity into three classes of neural and electromyography signals. The three classes are as follows: electro oculographic, electroencephalographic and electromyographic signals which are said to reflect the behaviour of the extraocular muscles, brain, and facial muscles.

  1. In 2008, Japanese publisher Square Enix released a mind-controlled video game, known as “Judecca”. Judecca was designed for use with the NeuroSky Mindset.

  1. In 2009 Mattel partnered with NeuroSky to release one of the first commercial brain wearables. The device was an EEG headset that could be used to play a game called Mindflex, from Mattel, in which users move a ball around a small obstacle course using their “brain power.” Increased concentration raises the ball in the air, via a motorized fan, and relaxation lowers the ball

  1. In 2009 Uncle Milton Industries partnered with NeuroSky to release the Star Wars Force Trainer, a game designed to create the illusion of possessing The Force.

  1. In 2009, Emotiv Systems released a headset called the EPOC that allows the user to play video games with only their brainwaves. The device can read 4 mental states, 13 conscious states, facial expressions, and head movements. The EPOC is the first commercial BCI to use dry sensor technology, which can be dampened with a saline solution for a better connection.

  1. In November 2011, Japanese company Neurowear created a pair of supposedly mind-reading plush cat ears, called Necomimi, that could react to a wearer’s moods. Neocomimi runs on the EEG technology used to detect seizures and measure brain activity. The ears are powered by four AAA batteries and feature two motors along the headband, which help the ears react to signals being sent out by the wearer’s brain.

  1. In March 2012 g.tec introduced the intendiX-SPELLER, the first commercially available BCI system for home use which can be used to control computer games and apps. It can detect different brain signals with an accuracy of 99%. has hosted several workshop tours to demonstrate the intendiX system and other hardware and software to the public, such as a workshop tour of the US West Coast during September 2012.

9. In February 2014, They Shall Walk (a nonprofit organization fixed on constructing exoskeletons, dubbed LIFESUITs, for paraplegics and quadriplegics) began a partnership with James W. Shakarji on the development of a wireless BCI.

10. In 2016, a group of hobbyists developed an open-source BCI board that sends neural signals to the audio jack of a smartphone, dropping the cost of entry-level BCI to less than Rs. 2000. The basic diagnostic software is available for Android devices, as well as a text entry app for Unity.


Application of EEG and Interactive Evolutionary Design Method in Cultural and Creative Product Design

In order to design a cultural and creative product that matches a target image, this paper proposed using EEG, an interactive genetic algorithm (IGA), and a back-propagation neural network (BPNN) to analyze users’ image preferences. Firstly, pictures of cultural elements were grouped according to pleasantness value and emotional state using the PAD emotion scale, and the brain waves induced by pictures of cultural elements with different pleasantness levels were recorded with an electroencephalograph. The preference for cultural elements was then obtained according to the theory of frontal alpha asymmetry. Secondly, the semantic differential method was used to survey users, and factor analysis was applied to the survey results to extract users’ perceptual image semantics for cultural and creative products. Thirdly, an interactive evolutionary design system based on IGA and BPNN was constructed. Starting from the cultural elements preferred by users, the designer created the initial set of morphological characteristics, and fitness values were determined by the degree of user preference for the image semantics. Meanwhile, to reduce the fatigue caused by users’ interactive evaluation, a BPNN was introduced to simulate the manual evaluation. Finally, the proposed method was verified in the design of a flavoring bottle. User preference requirements can serve as feedback to help designers understand users’ emotional design needs and generate design schemes that satisfy the users’ perceptual image.
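As a rough, hypothetical illustration of the frontal-alpha-asymmetry index mentioned above (not the authors’ actual pipeline), the index is commonly computed as the difference in log alpha power between a right and a left frontal electrode (e.g., F4 and F3). The sketch below assumes NumPy/SciPy and simulated channel data; the electrode names, sampling rate, and alpha band limits are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density in the alpha band, estimated with Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_alpha_asymmetry(left_frontal, right_frontal, fs):
    """FAA = ln(right alpha power) - ln(left alpha power).
    Because alpha power is inversely related to cortical activation, higher values
    are typically read as greater relative left-frontal activation, i.e. a more
    positive / approach-oriented (preferred) response."""
    return np.log(alpha_power(right_frontal, fs)) - np.log(alpha_power(left_frontal, fs))

# Hypothetical example: 10 s of simulated F3/F4 data sampled at 250 Hz
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
f3 = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)        # stronger left alpha
f4 = 0.6 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # weaker right alpha
print(f"FAA index: {frontal_alpha_asymmetry(f3, f4, fs):.3f}")
```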

1. Introduction

With the advent of the experience economy, obtaining user preferences quickly and accurately has become a key link in innovative product design [1]. Therefore, this paper attempted to use EEG technology and an interactive evolutionary design method to analyze users’ emotional experience. Product shape design was conducted on the basis of user preferences, and users participated in cultural and creative product design through computer-aided design, which can directly and effectively help designers obtain information about design requirements.

In the process of cultural and creative product design, many scholars have studied the extraction of cultural genes and design elements. For example, Gou et al. [2] extracted the cultural genes of Banpo painted pottery based on genetic theory. Wang et al. [3] extracted the form, color, and connotation factors contained in traditional culture. Zhu and Luo [4] interpreted and excavated cultural elements along four dimensions: semantic, syntactic, contextual, and pragmatic. Chai et al. [5] analyzed the impact of different cultural elements on consumer satisfaction using a continuous fuzzy Kano model. Liu et al. [6] studied the extraction of color features from a traditional pattern library and recommended to designers the color scheme that best reflected the original features of the culture. Luo and Dong [7] introduced the concept of “ontology” from knowledge engineering and developed a cultural-artifact knowledge management system for cultural creative design. The cultural and creative products designed with the above methods can reflect traditional cultural elements, but whether they meet the emotional needs of users requires further research.

When a user is observing a product, the external visual stimulation induces changes in the EEG. Analyzing EEG signals can accurately and objectively measure users’ perception, preference, and emotion, and reveal users’ psychological needs. EEG technology has already been widely applied in the field of industrial design, for example in commercial advertising design [8], seat design [9], interface design [10], and comfort evaluation [11]. In the field of emotional measurement, Li et al. [12] studied the influence of apple-tree leaves and flowers on human brain waves and found that different visual stimuli induce different emotions. Zhuang [13] selected two shopping websites with large differences in appearance and usability as objects of study and, taking EEG and eye-movement indices as independent variables, constructed an emotional measurement model using partial least squares regression. To address the problem of scheme selection in automotive design, Tang et al. [14] proposed evaluating the user experience objectively through EEG data. This paper therefore drew on the above research and used EEG technology for quantitative analysis in cultural and creative product design, to help designers develop products that meet users’ emotional needs.

In traditional cultural and creative product shape design, designers rely on personal experience and subjective speculation to infer users’ emotional needs. Because of the lack of a scientific and objective evaluation mechanism, users’ image preferences cannot be reflected in product design accurately and quickly. Integrating the decision maker’s judgment into evolutionary design methods is therefore an active research topic. Gong et al. [15] presented a novel evolutionary algorithm based on the theory of preference polyhedron that interacts with a decision maker during the optimization process to obtain the most preferred solution. Among such algorithms, IGA is an effective method for solving optimization problems with implicit criteria, because it incorporates the user’s intelligent evaluation into traditional evolutionary mechanisms [16]. In IGA, decision makers assign fitness values to individuals through interactive evaluation. The method has high application value and broad practical significance in artistic creation, design, and other areas that depend on human subjective judgment. IGA has been applied to clothing design [17], chair design [18], building design [19], logo design [20], product color planning [21], personalized search [22], and so on. However, IGA also has a deficiency: users tire easily during the evaluation process, so reducing the fatigue caused by interactive evaluation has become a focus for many researchers. For example, Xu and Sun [23] put forward a product modeling design method based on an orthogonal-interactive genetic algorithm. Gong et al. [24] introduced the idea of stratification into interactive evolutionary computation and proposed hierarchical interactive evolutionary computation. Sun et al. [25] proposed a new surrogate-assisted IGA, in which the uncertainty in subjective fitness evaluations is exploited both in training the surrogates and in managing them.
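A minimal sketch of the interactive-genetic-algorithm loop described above, assuming a simple integer encoding of morphological features, truncation selection, and console-entered user ratings in place of a rendered design interface; none of these details come from the paper.

```python
import random

GENE_LENGTH = 8      # assumed: each gene indexes one variant of a morphological feature
POP_SIZE = 8
MUTATION_RATE = 0.1

def random_individual():
    return [random.randint(0, 4) for _ in range(GENE_LENGTH)]

def crossover(a, b):
    # single-point crossover between two parent encodings
    point = random.randint(1, GENE_LENGTH - 1)
    return a[:point] + b[point:]

def mutate(ind):
    return [random.randint(0, 4) if random.random() < MUTATION_RATE else g for g in ind]

def user_rating(ind):
    """In a real IGA the user rates how well the rendered design matches the target
    image (e.g. on a 1-9 scale); here we simply ask on the console."""
    return float(input(f"Rate design {ind} (1-9): "))

def evolve(generations=3):
    population = [random_individual() for _ in range(POP_SIZE)]
    for _ in range(generations):
        scored = sorted(population, key=user_rating, reverse=True)  # user supplies fitness
        parents = scored[: POP_SIZE // 2]                           # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return population[0]
```

Note that the user must rate every individual in every generation, which is exactly the source of the fatigue problem the surrogate network is meant to relieve.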

To reduce users’ fatigue during interactive evaluation in IGA, this paper introduced a neural network to assist the IGA. A neural network is a nonlinear model that is often used to establish relationships between complex input and output variables, and it has been applied successfully to perceptual-image-based product shape design. Using a fuzzy neural network, Hsiao and Tsai [26] constructed the correspondence between the modeling data of a conceptual product and perceptual image vocabulary. Diego-Mas and Alcaide-Marzal [27] proposed a consumer emotional response model based on neural networks. Through experimental research on mobile phones, Yeh and Lin [28] established an artificial neural network model linking product form to consumers’ perceptual image, based on the concepts of Kansei engineering. Given the demonstrated feasibility of neural networks in Kansei engineering design, this paper used a BPNN to simulate user evaluation and automate the scoring of candidate schemes.
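The role of the surrogate network can be sketched as follows; here scikit-learn’s MLPRegressor stands in for the back-propagation network, and the design encodings and ratings are invented for illustration. Once trained on the evaluations the user has already given, the surrogate can score later generations automatically, which is how the fatigue problem is reduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training set: design encodings the user has already rated interactively.
X_rated = np.array([
    [0, 3, 1, 4, 2, 0, 1, 3],
    [4, 1, 2, 0, 3, 2, 4, 1],
    [1, 1, 3, 3, 0, 4, 2, 2],
    [2, 4, 0, 1, 1, 3, 3, 0],
])
y_rated = np.array([7.0, 3.0, 5.0, 6.0])   # user ratings on an assumed 1-9 scale

# BPNN surrogate: learns the mapping from design encoding to predicted rating.
surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
surrogate.fit(X_rated, y_rated)

# Later generations can then be scored automatically instead of asking the user each time.
new_designs = np.array([[0, 2, 1, 4, 2, 1, 1, 3],
                        [3, 0, 4, 1, 2, 2, 0, 4]])
print(surrogate.predict(new_designs))
```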

This paper took the design of a flavoring bottle as an example to explore innovative shape design schemes. Based on the cognition of both users and designers, the product scheme was optimized by combining IGA and BPNN, so that users’ emotional needs are reflected accurately in the product design.

2. Outline of the Proposed Method

As shown in Figure 1, this paper first extracts the cultural elements preferred by users and sets them as the source of morphological characteristics for the subsequent evolutionary design (see Section 3). Secondly, we extract users’ perceptual image of cultural and creative products, which is set as the target image of the subsequent evolutionary design (see Section 4). On this basis, interactive evolutionary design is carried out: IGA drives the evolution of the product shape, and the fatigue problem of interactive evaluation is addressed by combining it with BPNN (see Section 5). Finally, the process is verified with the example of the flavoring bottle design (see Section 6).



- Semantic clustering reveals the core concepts and themes of consumer neuroscience.
- We propose a framework to assess the evidence from consumer neuroscience.
- Evidence is growing on the role of several brain regions in consumption.
- Important obstacles exist against the integration of consumer behaviour theories.
- Neuroimaging cannot replace traditional consumer research techniques.


Arithmetic in the Bilingual Brain

Nicole Y. Wicha and Amanda Martinez-Lincoln, in Language and Culture in Mathematical Cognition, 2018

Electrophysiology and Event-Related Potentials

Electroencephalography (EEG) has been used successfully to study multiple stages of arithmetic processing in both monolinguals and bilinguals. EEG is a direct measure of neural activity, typically recorded from the scalp, which reflects the electrical changes over time of large populations of neurons with millisecond precision. From intracranial recordings, we know that EEG largely captures postsynaptic potentials from cortical pyramidal neurons (for a review, see Luck & Kappenman, 2011). The ongoing EEG can be time-locked to particular events of interest and then averaged across trials and subjects to generate the average brain response, or event-related potential (ERP), for an experimental condition of interest. These derived ERPs are a time-sensitive, multidimensional measure of brain electrical activity, with functionally specific effects (i.e., changes in the recorded activity relative to a baseline). The amplitude of the change in voltage, the polarity (whether the difference in voltage between conditions is negative or positive), the latency (the timing of the effect), and the scalp distribution (which electrodes show the effect) of the waveform are all independently informative about the nature of ongoing cognitive activity. Importantly, ERPs can provide information about processing without depending on explicit responses or self-report of strategies, which have been a point of contention in the domain of arithmetic problem solving (e.g., Fayol & Thevenot, 2012).
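To make the time-locking and averaging procedure concrete, here is a minimal sketch in Python; the sampling rate, epoch window, and simulated data are assumptions, and in practice a dedicated toolbox such as MNE-Python would also handle filtering and artifact rejection.

```python
import numpy as np

fs = 500              # sampling rate in Hz (assumed)
pre, post = 0.2, 0.8  # epoch window: 200 ms before to 800 ms after each event

def epoch_and_average(eeg, event_samples, fs, pre, post):
    """Cut fixed-length epochs around each event, baseline-correct them on the
    pre-stimulus interval, and average across trials to obtain the ERP."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for ev in event_samples:
        if ev - n_pre < 0 or ev + n_post > eeg.size:
            continue                          # skip events too close to the recording edges
        epoch = eeg[ev - n_pre: ev + n_post].copy()
        epoch -= epoch[:n_pre].mean()         # baseline correction
        epochs.append(epoch)
    return np.mean(epochs, axis=0)            # average across trials -> ERP

# Simulated single-channel EEG with one event per second
rng = np.random.default_rng(1)
eeg = rng.standard_normal(fs * 60)
events = np.arange(fs, fs * 59, fs)
erp = epoch_and_average(eeg, events, fs, pre, post)
print(erp.shape)   # (500,) samples, i.e. 1 s of averaged activity
```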

In the first 300 ms after the onset of a stimulus, ERP components (the typically observed modulations in amplitude) generally reflect modality-specific sensory processing. For example, with visual stimuli, early components are modulated by contrast, size, brightness, and attention (Pratt, 2011). After about 300 ms, more modality-independent effects emerge, including the well-characterized cognitive components that have been most relevant to studies of arithmetic processing: the N400 (see Fig. 1) and a subsequent positive shift or late positive component.

Fig. 1. Grand average ERPs from monolingual adults recorded in response to expected and unexpected sentence-final words (top; from Wicha, Moreno, & Kutas, 2004) and in response to correct and incorrect multiplication solutions presented as number words (middle) or Arabic digits (bottom) (unpublished data from the Wicha lab). The data come from a representative electrode, with 1 s of time in milliseconds along the X-axis and voltage (in microvolts, with negative plotted up) on the Y-axis.

ERP studies of arithmetic have typically measured two effects: the congruency effect, primarily observed in verification tasks during recordings of the brain response to a provided answer (e.g., Niedeggen, Rösler, & Jost, 1999), and the problem-size effect, observed during recordings of the brain response either to provided answers or after presentation of the operands (e.g., 7 × 6) but prior to the answer itself (Jost, Hennighausen, & Rösler, 2004; Zhou et al., 2006). We focus here on the congruency effect, found in comparisons of brain responses to congruent solutions (correct answers) and incongruent solutions (incorrect answers), as this has been the primary effect measured in bilingual populations. Importantly, typical studies of the congruency effect report the time-locked brain response to the answer presented in isolation, after retrieval/calculation processes have already been initiated by prior presentation of the operands.

The congruency effect is characterized by a more negative response to incorrect solutions (e.g., 7 × 6 = 36) compared with correct solutions (e.g., 7 × 6 = 42), with a maximum peak amplitude difference occurring about 350–400 ms after the presentation of the answer (Niedeggen et al., 1999). This arithmetic congruency effect has drawn comparisons to similar effects in other domains that study the processing of potentially meaningful items (e.g., language and object processing). Notably, words that are unexpected/incongruent with a prior sentential context (e.g., the brain response to “dog” given the prior context, “he takes his coffee with cream and ___”) also elicit a more negative brain response than words that are expected/congruent with a prior context (e.g., the brain response to “sugar” given the same context), around 400 ms after the congruent/incongruent word is presented. Thus, the effect of answer congruency has been traditionally labeled an N400 effect (a negative-going wave sensitive to semantics/meaningfulness, peaking around 400 ms; see Kutas & Federmeier, 2011, for a review of N400 response properties).

Both the timing and the size of this congruency effect (i.e., the average difference in voltage across a time window surrounding the ERP effect) can be taken as independent measures of the brain's readiness to categorize an arithmetic problem as correct or not. If the maximal difference occurs later in time (i.e., at 400 ms instead of 350 ms), then it is reasonable to infer that the brain had to do additional processing before the correctness judgment could be fully rendered (much like an RT delay). Moreover, if the average voltage difference is smaller or larger in response to different contrasts of answer subtypes (e.g., table-related or table-unrelated incorrect solutions) or to the same answer subtypes under different contexts (e.g., digits/words and fast/slow presentation latencies), then the size of the difference itself can be taken as an indication of the brain's ability to distinguish between the correct and incorrect solutions. This initial and short-lived (~200 ms) effect is generally followed by a slow positive ERP selective to unambiguously incorrect answers. Notably, this later effect is not as consistent or as well characterized, and it also seems to reflect sensitivity to more subtle differences in answer types (as in traditional language studies; see Van Petten & Luka, 2012, for a review).
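As a rough sketch of how the mean-amplitude (effect size) and peak-latency measures described above might be quantified from two condition-average ERPs, assuming a 300–500 ms analysis window and simulated waveforms:

```python
import numpy as np

fs = 500                               # assumed sampling rate in Hz
times = np.arange(-0.2, 0.8, 1 / fs)   # epoch time axis in seconds

def congruency_effect(erp_incorrect, erp_correct, times, window=(0.3, 0.5)):
    """Return the mean amplitude difference (effect size) within the analysis window
    and the latency (in ms) of the maximal incorrect-minus-correct difference."""
    diff = erp_incorrect - erp_correct
    mask = (times >= window[0]) & (times <= window[1])
    mean_amplitude = diff[mask].mean()
    peak_latency_ms = times[mask][np.argmin(diff[mask])] * 1000  # N400 is negative-going
    return mean_amplitude, peak_latency_ms

# Simulated ERPs: incorrect solutions carry an extra negativity around 400 ms
rng = np.random.default_rng(2)
erp_correct = 0.2 * rng.standard_normal(times.size)
erp_incorrect = erp_correct - 3.0 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))

amp, lat = congruency_effect(erp_incorrect, erp_correct, times)
print(f"Mean congruency effect: {amp:.2f} microvolts, peak latency {lat:.0f} ms")
```

A later or smaller difference in such a measure would be interpreted along the lines described above, much like a reaction-time delay or a reduced ability to distinguish correct from incorrect solutions.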
