Non-Invasive BCI through EEG
1. Abstract
- Electroencephalography: the measurement and recording of the electrical signals produced by neurons in the brain, captured with sensors placed across the scalp
- BCI: Brain Computer Interface
- Applications: Prosthetic Devices, Communication, Military uses, video gaming, virtual reality, robotic control, and assistance to disabled persons
- Current issues: Inaccuracies, detection, thought and action delays, cost, and invasive surgery
2. Introduction
- This paper will cover the process of creating a suitable BCI that uses the Emotiv EPOC System to measure EEG waves and control a robot called the Parallax Scribbler
2.1 Electroencephalography
- EEGs were first measured in dogs in 1912, then in humans in 1924
- Postsynaptic potentials are measured by electrodes that are sensitive to these changes; the potentials are a combination of inhibitory and excitatory potentials in the dendrites
- Excitatory postsynaptic potentials increase the likelihood of the postsynaptic neuron firing an action potential
- Inhibitory postsynaptic potentials decrease that likelihood
- The primary research for this system uses rhythmic activity, which reflects mental state, comparing the brain in active and passive states
2.2 Brain-Computer Interface
- Open loop system: responds to a user's thoughts without giving feedback
- Closed loop system: Gives feedback to the user (goal of the BCI research)
- By focusing on the motor cortex of the brain, responses can be translated into robotic activity
- Invasive methods are highly debated
3. Previous EEG BCI Research
- The model of EEG-BCI systems: read in EEG data, translate it into output, and feed feedback back to the user interface. The main challenge is the need for real-time extraction and classification of the data
- Extraction separates the necessary data, and classification decides what it represents
- There are various filters that can show the normal slope of the data, among other methods
- Problems: No standard way of classification
- Feedback is essential and can be auditory cues or haptic sensations
- Synchronous: the computer cues the user to perform a certain mental action and records the user's EEG patterns in a fixed time window (easier to construct than asynchronous)
- Asynchronous: driven by the user instead of the computer; passively and continuously monitors the user's EEG data and classifies it on the fly
4. The Emotiv System
- Based around the EPOC headset for recording EEG measurements and a software suite that processes and analyzes the data
- Specifics on the makeup of the headset are described
4.1 Control Panel
- This is a GUI that serves as the gateway to the headset
- Its twelve recognizable artifacts are blink, right/left wink, look right/left, raise brow, furrow brow, smile, clench, right/left smirk, and laugh
- Affectiv monitors emotional states, and Cognitiv monitors the user's conscious thoughts
- These are prototype action thoughts created with test cases
- Only four trained thoughts can be active at a time
4.2 TestBench
- Shows real-time data from any of the sensors and can record the data in .edf files
4.3 The Emotiv API
- The EmoEngine is the next step after the raw headset data; the application polls it in a loop for EmoStates (snapshots of the latest facial movements or brain activity)
- Check page 16 again for the helpful visual
5. The Parallax Scribbler Robot and IPRE Fluke
- The Parallax Scribbler Robot: a fully assembled, reprogrammable robot with various sensors and two wheels. It can be programmed through a GUI or the BASIC Stamp Editor
- IPRE Fluke: the Institute for Personal Robots in Education Fluke is an add-on that gives the robot more functions and software packages
- The Myro software package communicates over Bluetooth and can reprogram the robot in Python (YES)
6. Control Implementation
- Written in Microsoft Visual C++
- Four basic parts to this code: connecting to the headset with the Emotiv API, connecting to the Scribbler with the Myro libraries, reading and decoding the events, and closing the connections afterward
- Another visual of this process is on page 18
6.1 Emotiv Connection
- The robot can be controlled with thoughts detected by the headset itself or with generated signals from the EmoComposer emulator (which helps train the headset and works as a prototype)
- The EE_EngineConnect call always comes back true even when the headset is off, so EE_EngineRemoteConnect is used instead to connect through the Control Panel, which verifies the power and connection status
- The port in the call has to be switched when using the EmoComposer (see the sketch below)
- It's like a database connection call; go over your database notes for a while
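- A minimal connection sketch; the ports (3008 for Control Panel, 1726 for EmoComposer) are the SDK defaults as I remember them, so double-check against the paper:
#include <iostream>
#include "edk.h"
#include "edkErrorCode.h"

// Connect through Control Panel or EmoComposer instead of calling
// EE_EngineConnect(), which reports success even with the headset off.
bool connectToEmotiv(bool useEmoComposer) {
    const char* ip = "127.0.0.1";
    unsigned short port = useEmoComposer ? 1726 : 3008; // assumed defaults
    if (EE_EngineRemoteConnect(ip, port) != EDK_OK) {
        std::cerr << "Emotiv Engine connect failed on port " << port << "\n";
        return false;
    }
    return true;
}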
6.2 Scribbler Connection
- Three steps to initialize the robot: install Python, load the Myro libraries, and connect to the robot using the initialize() command
- C/C++ code can make direct calls to the Python interpreter (I wonder if I could just use Python instead of going this roundabout way)
- Four lines of code are needed first (there are other hints earlier; go over the paper again closer to actually writing the software):
Py_Initialize();
PySys_SetArgv(argc, argv);
main_module = PyImport_AddModule("__main__");
main_dict = PyModule_GetDict(main_module);
- More specific ways to use the Scribbler are detailed on page 20; a fuller sketch is below
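- A rough end-to-end sketch of the embedded-interpreter approach; the COM port and the exact Myro calls (initialize, forward, stop) are my assumptions for illustration:
#include <Python.h>

int main(int argc, char* argv[]) {
    PyObject *main_module, *main_dict;

    // The four initialization lines from the paper
    Py_Initialize();
    PySys_SetArgv(argc, argv);
    main_module = PyImport_AddModule("__main__");
    main_dict = PyModule_GetDict(main_module); // kept for later PyObject calls

    // Drive the Scribbler through Myro as if typing at a Python prompt
    PyRun_SimpleString("from myro import *");
    PyRun_SimpleString("initialize('COM2')"); // Bluetooth serial port (assumed)
    PyRun_SimpleString("forward(1, 1)");      // full speed for one second
    PyRun_SimpleString("stop()");

    Py_Finalize();
    return 0;
}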
6.3 Decoding and Handling EmoStates
- 4 major steps in reading and decoding info from the headset:
- Create the EmoEngine and EmoState handles, query for the most recent EmoState, determine if it is new, and decode it
- the API is used to make these new handles:
EmoEngineEventHandle eEvent = EE_EmoEngineEventCreate();
EmoStateHandle eState = EE_EmoStateCreate();
- To get the most recent event:
EE_EngineGetNextEvent(eEvent);
- The kinds of events that can be seen: hardware-related events, new EmoState events, and suite-related events
- decoding code is on page 21
- If the headset is disconnected, suspend activity until a reconnect EmoState is published
- These calls determine whether the headset is still connected:
ES_GetWirelessSignalStatus(eState)
ES_GetHeadsetOn(eState)
- Three possible cognitive thoughts: push, turn left, and turn right
- The current thought is retrieved as an integer value from the Cognitiv suite
- Specific values determine the action taken; the strength of the thought does not apply in this situation. A decode-loop sketch follows
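- A sketch of the decode loop combining the calls above; the enum names (EE_EmoStateUpdated, COG_PUSH, COG_LEFT, COG_RIGHT, NO_SIGNAL) are from the SDK headers as best I recall, so verify before use:
EmoEngineEventHandle eEvent = EE_EmoEngineEventCreate();
EmoStateHandle eState = EE_EmoStateCreate();
bool running = true;

while (running) {
    if (EE_EngineGetNextEvent(eEvent) != EDK_OK) continue; // no new event yet

    if (EE_EmoEngineEventGetType(eEvent) == EE_EmoStateUpdated) {
        EE_EmoEngineEventGetEmoState(eEvent, eState);

        // Suspend activity while the headset is off or out of range
        if (!ES_GetHeadsetOn(eState) ||
            ES_GetWirelessSignalStatus(eState) == NO_SIGNAL) continue;

        switch (ES_CognitivGetCurrentAction(eState)) {
            case COG_PUSH:  /* forward(...) on the Scribbler */ break;
            case COG_LEFT:  /* turnLeft(...) */                 break;
            case COG_RIGHT: /* turnRight(...) */                break;
            default:        /* neutral thought: do nothing */   break;
        }
    }
}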
6.4 Modifications
- At first, inputs arrived far faster than the robot could complete the corresponding actions
- The interface was unusable because actions piled up after the first, and the robot would be completing tasks from a minute ago instead of current ones
- The solution was a sampling variable so that only every tenth EmoState was decoded; when that caused issues, the code fell back to decoding every EmoState (sketched below)
- An additional mode was added that changes the output after a distinct facial movement is made
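- A minimal sketch of the sampling fix as I understand it (the count of ten comes from the paper):
const int SAMPLE_EVERY = 10; // decode only every tenth EmoState
int sampleCounter = 0;

// inside the event loop, before decoding an EmoState:
if (++sampleCounter % SAMPLE_EVERY != 0)
    continue; // skip this EmoState so commands don't pile up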
7. Blink Detection
- Eye blinks can be used as control inputs, else they must be filtered out
- Detection is centered on one specific channel, which makes the recognition easier
- Building the data set for this new mode took a lot of testing and recording, with the subject blinking in some recordings and not blinking in others
- There was a pattern to the blinks, but the neural net was thrown off because the recordings weren't normalized in time, so time was treated as an attribute
- Blinks correspond to spikes in the EEG data and can be used with the neural net to improve the API's analysis of the recordings (a simple threshold illustration is below)
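- The paper trains a neural net on these recordings; as a much simpler illustration of "blinks correspond to spikes", a single-channel threshold check might look like this (the threshold is a made-up placeholder):
#include <cmath>
#include <vector>

// Flag a blink when any sample in the window deviates sharply
// from the window's mean on the chosen frontal channel.
bool containsBlink(const std::vector<double>& window, double threshold = 75.0) {
    double mean = 0.0;
    for (double v : window) mean += v;
    mean /= window.size();

    for (double v : window)
        if (std::fabs(v - mean) > threshold) return true; // spike found
    return false;
}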
8. Conclusions
- The author was able to create a system to control a robot with thoughts, accurate blink-recognition software, and a method of switching modes with an eyebrow movement
- These innovations could help disabled users drive a wheelchair
Emotiv Experimenter
Abstract
- The Experimenter is an application that can record brain data and attempt online analysis and classification of the incoming data stream
4. Introduction
- Emotiv EPOC headset has simple Brain Computer Interface (BCI) abilities
- Inexpensive experiments without the need to go into a lab
4.1 EEG
- EEG is a non-invasive technique that produces waveforms referred to as "brain waves". These can be used to diagnose diseases and to study neurological activity after specific stimuli
4.2 The Emotiv Headset
- A relatively affordable headset that brings EEG capabilities to the masses
- Downside: there are only 14 channels; however, that is a lot for a consumer device
- 10-20 positioning scheme
- Three event-based classification suites (Expressiv, Affectiv, and Cognitiv)
- Many online applications build on these three suites
- In reality, the facial-expression detection system is successful, while the mind-reading capabilities are unreliable and do not function well in the games associated with the headset
4.3 Previous Work
- Lillian Zhou ran experiments with the headset using Emotiv's native C++ API, but a more full-featured application was needed
5 The Experimenter Application
- This is the design of the app that the author created
5.1 Initial Work
- The first version of the app allowed folders of images to be loaded and treated as stimulus types, ran an experiment in which images of these types were shown, and collected the EEG data from the headset
- This allows presentation and analysis
5.2 Online Classification Capabilities
- This couldn't process the data in real time, so online classification of the EEG signals was added
- This was more interesting for subjects and relied on the portable aspect of the headset
- Subject-verifiable mind reading: a setup that lets the user verify that the app is truly relying on their specific EEG data
- Some demonstrations of this are not real because they are easily cheated or rely on muscle action instead of neural activity
- True subject-verifiable mind reading must present the user with a decision that the app could only solve with the EEG data
5.3 Experimental setup
- This section describes a highly configurable stimulus-presentation experiment with real-time data collection and classification
5.3.1 Test
- There are lists of stimuli that are randomly shuffled, and one stimulus from each list is presented
- Shuffling prevents bias and allows for different combinations of stimuli
- One stimulus class is shown on the left and a different one on the right, then an arrow points to one of them (at this point the class names are still being shown)
- The class names are replaced with fixation crosses after a period of time
- A question is presented with two possible answers, plus an optional "I Don't Know" option (choosing it invalidates the trial)
- This is repeated until the set of online classifiers is trained
- Next, the setup is the same except that the arrow points both ways (the subject can focus on either)
- Then the trained classifiers identify which class of images the subject looked at, and the prediction is confirmed with the subject
- Other variables like the timing and the size of the images can be set and changed
5.3.2 Justification
- This experiment is very close to the ideal: the subject makes a mental choice that the application doesn't know until its prediction is complete
- The random placement also eliminates bias
5.3.3 Variations
- The images may also be superimposed on top of each other, or only text might be shown
- The text mode permits experiments on the subject's response to words
5.3.4 A Note About Timing
- The presentation timing is not highly accurate, but the data carries correct time stamps
5.4 Features and Design
- This shows application capabilities and feature set
5.4.1 Goals
- This is supposed to be useful to anyone, even those who cannot modify its code base, and it still preserves the capabilities of the original TestBench application provided by the company
5.4.2 User Interface Design and Functionality
- The GUI has three distinct regions: Experiment Settings, Classifiers, and Classes section
- Each has save and load functions, which help with large cases
5.4.3 Software Design
- This justifies the software design and acts as a high level guide for those modifying the code
5.4.3.1 Platform Choice
- The application is written in C#, while Emotiv's native SDK is C++
- Use is restricted to .NET, which only runs on the Windows operating system; however, more recent iterations of the headset and software are compatible with Mac operating systems
5.4.3.2 Interfacing with Emotiv
- The raw data is hard to deal with directly: incoming samples go into a fixed-size buffer to be polled by the application, and if polling is infrequent some data will be lost
- Experimenter wraps the Emotiv API with a publish/subscribe design pattern
- Listeners receive the data on their own threads, so that polling stays open to new data
- I believe this is similar to a sort of dam that lets some water through, while keeping the rest in one place.
- EEG data can go to many listeners after going through the thread, which is beneficial for a large experiment that needs many watchers
- A test data signal can also be sent, so the headset is not constantly required (pattern sketched below)
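- The Experimenter itself is C#, but the publish/subscribe idea looks roughly like this in C++ (all names here are illustrative): each listener owns a queue and a worker thread, so the polling loop draining the headset's fixed-size buffer never blocks on a slow consumer
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class ListenerThread {
public:
    explicit ListenerThread(std::function<void(double)> handler)
        : handler_(std::move(handler)), worker_([this] { run(); }) {}

    ~ListenerThread() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    // Called by the polling thread; never blocks on the handler.
    void push(double sample) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(sample); }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return done_ || !q_.empty(); });
            if (q_.empty()) return; // done_ is true and queue is drained
            double s = q_.front();
            q_.pop();
            lk.unlock();
            handler_(s); // heavy analysis runs off the polling thread
        }
    }

    std::function<void(double)> handler_;
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<double> q_;
    bool done_ = false;
    std::thread worker_; // declared last so it starts after everything else
};
- A publisher would then keep a list of these listeners and call push() on each for every sample it polls, including samples from the test-signal generator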
5.4.3.3 The Yield View Presentation System
- The logic could have been put into a sequence of GUI events with timers and button presses triggering changes (however, this would make the code complex and unmanageable)
- Managing the resources is difficult because the operating system limits them for the process
- The application also needs proper cleanup to keep users from accidentally corrupting each other's data
- This is solved with two C# features
- The yield keyword suspends the sequence, effectively pausing it until the caller asks for the next element
- The using statement handles disposal
- A view is a temporary display of images or other graphical components; when an event causes it to terminate, its resource-intensive components are disposed (rough analogue below)
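- A rough C++ analogue of the cleanup guarantee (the real code uses C#'s yield and using; this only shows the dispose-on-exit idea via RAII):
#include <iostream>
#include <string>

// A view owns its resource-intensive components; leaving scope
// guarantees they are disposed, even if the trial ends early.
class View {
public:
    explicit View(std::string name) : name_(std::move(name)) {
        std::cout << "load " << name_ << "\n";    // e.g. decode images
    }
    ~View() {
        std::cout << "dispose " << name_ << "\n"; // free GUI resources
    }
private:
    std::string name_;
};

void runTrial() {
    {
        View stimulus("stimulus pair");
        // ...display until a timer or key press ends this view...
    } // stimulus disposed here, like C#'s using
    View question("question prompt");
    // ...collect the subject's answer...
} // question disposed here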
5.4.3.4 The Parameter-Description System
- C# attributes specify display names, descriptions, default values, and valid value ranges for the parameters (see the sketch at the end of this subsection)
- This system saves programming time and it makes the code neater and easier to modify
- The System.Windows.Forms.Timer makes the Experimenter's presentation displays less precise, but it carries out its function with the GUI in the background
- Also, accurate data time stamping is more important than the display time stamps
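- A sketch of the parameter-description idea in C++ terms (the real system uses C# attributes; every name below is made up for illustration): each parameter carries its own display metadata, so the settings GUI can be generated instead of hand-built
#include <string>

struct ParamDescription {
    std::string displayName;
    std::string description;
    double defaultValue;
    double minValue;
    double maxValue;
};

// e.g. a stimulus-duration parameter the GUI could render automatically
const ParamDescription kStimulusTimeMs = {
    "Stimulus Time", "How long each image pair is displayed (ms)",
    1000.0, 100.0, 10000.0
};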
6. Experiments
- The experiments test Emotiv in general and the uses of the Experimenter’s abilities
6.1 Eyes Open vs. Eyes Closed
- When a person’s eyes are closed, the EEG signals at the occipital electrodes have high amplitude waves
- This makes this experiment a good one to start with
- The subject was given an instruction for their eyes for each trial, and a tone signaled the end of the trial. The computer then checked with the subject whether they followed the instruction (if they did not, the trial is invalidated)
- The data was then stored in folders of stimulus classes, one for eyes open and one for eyes closed. This makes the stimuli very easy to classify (see the sketch below)
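- Not the Experimenter's actual classifier, just a sketch of why this task is easy: eyes-closed windows have much higher amplitude on the occipital channels, so even an RMS threshold separates the classes (the threshold value is a placeholder):
#include <cmath>
#include <vector>

// Classify a window from an occipital channel by raw amplitude (RMS).
bool looksEyesClosed(const std::vector<double>& window,
                     double rmsThreshold = 20.0) {
    double sumSq = 0.0;
    for (double v : window) sumSq += v * v;
    double rms = std::sqrt(sumSq / window.size());
    return rms > rmsThreshold; // high-amplitude waves -> eyes closed
}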
6.2 Faces vs. Places
- Distinguishes between the neural processing of faces and scenes (normally done with fMRI)
6.2.1 Version 1
- This only involved data collection, and had many faults
- This was tedious because nothing was required of the subject, the time between trials was very long, and the subject's mind could easily wander in this setup
- The color could also have affected the results
6.2.2 Version 2
- This involved superimposed gray-scale images, and during the test phase the subject would choose which image to focus on
- They could choose between man or woman for faces, and indoor or outdoor for places
- The classifiers performed poorly in this setup
6.2.3 Version 3
- This version finally showed the side-by-side images as in the Experimenter, and it worked well
6.3 Artist vs. Function
- This shows a noun, and the subject either draws it or lists its uses
- The setup is to make a long list of concrete nouns from a linguistic database (source is in the paper) and put it in a text file with one word on each line
- The classes are labelled, and all stimuli are left unclassified for now
- The question mode is set to ask, and the superimposed option is turned off
6.4 Faces vs. Expressions
- This distinguishes between two different kinds of processing