Thursday, December 17, 2015

Progress Report

Progress:

  • Downloaded and am now using the TestBench software
  • Collected data for the OpenViBE group and for my own education
  • Experimented with the connection command in the sample code created by Emotiv
  • Finished my presentation and posted it on the blog
  • Learned an easier way to put on the headset
Problems:
  • I've forgotten a lot of Objective-C since I studied it near the beginning of the year
  • Strange errors from the code
  • Scheduling conflicts between my other classes and my project work
Plan:
  • Get my app to connect to the headset by looking more closely through the code (I have been working on this and think I should be able to finish it soon)
  • Collect more headset data of myself doing various tasks, or whatever the group needs from me (this will benefit their research and also give me as much time with the headset as possible)
  • Use Lynda.com to review Objective-C
Puns! 
  • What kind of fish performs brain surgery? A Neuro-sturgeon!
  • What do you call an empty skull? A no brainer!

Monday, December 14, 2015

December Seminar Presentation

This is the link to my seminar presentation. I tried to simplify the terms so that everyone could understand, and I may update it soon to improve it.


https://docs.google.com/presentation/d/11Ef6HuEKUuWeCziFFKuvMfBO7NiTaNh_Ak-tnhkVrFw/edit?usp=sharing

Sunday, December 13, 2015

New Progress Report Will Be Made Tomorrow

I need the headset to do this new report, but it is still at school. My Sunday update will be posted on Monday instead. Thank you for your patience.

Sunday, December 6, 2015

Progress Report: Insight Testing



This week, with the arrival of the headset, I have been experimenting with the capabilities of the already created app. I found that putting the headset on is the initial challenge, as it requires very direct sensor-to-scalp contact. This makes sense given the data it collects, but it can be frustrating if you are not used to it. This is how the headset looks once placed correctly, and a white light shows on the side when it is turned on. The connection was a little spotty, but it can be fixed by rehydrating the sensors or parting my hair further, which can be very difficult with curly hair.


There are various tests and challenges you can complete within the app. First, it requires you to record a baseline, where it asks you to do certain tasks such as closing your eyes and relaxing for ten seconds, or behaving normally with a normal amount of blinking. The Insight can record artifacts such as blinking, so it is important to keep that in mind. Every task has a somewhat long loading time, and when I look more closely at the code I might be able to tell why. Here are some screenshots from the app. Green means a sensor is fully connected, and yellow means it is only partially connected.



The emotional states that Emotiv classifies the waves into are attention, focus, engagement, interest, excitement, affinity, relaxation, and stress. From the few tests that I tried, the attention, focus, interest, relaxation, and stress levels appear the most. It could be that only those classifications were needed for the tests that I took; I will be looking into this further. Just as a personal note, I might need to work on lowering stress. I could post my individual results more often, but I find them unnecessary. The GUI is well organized and easy to look at, and it gives me the most important information at a glance.



The main problems have been connection issues. My plan now is to test the headset even more, finish my seminar presentation, think about what I am presenting during Hour of Code, and consider incorporating some of Emotiv's design ideas into my own app.


RE: Patent Project

Even though many companies already hold patents in this field, as an app developer you can still keep an eye on possible innovation opportunities, especially when you feel the existing technology is not good enough or not convenient. Any need is an indication of potential innovation.

Monday, November 30, 2015

Progress Report

I have caught up with tutorials online and gone through all of Emotiv's available online resources. I finished the patent project. I had some problems with this, as applications themselves are difficult to patent, but I found some very solid patents. We also have confirmation that the headset will arrive on Friday, December 4th. This allows me to set very clear goals for this week. I want to test the headset with Emotiv's pre-existing app and also see how it connects with some sample code. I can create a document compiling the setup information and the connection procedure so that my time in class will be easier. In conclusion, this week will be a lot of planning and testing.

Patent Project

I am posting the patent project that I put together. The methods and hardware that I found gave me a good grasp of what is already out there and how I can possibly improve on it. It can be found through this link:

https://docs.google.com/document/d/1KaAjASdmBJBczaU4nepWqAb25COpO5mGc9UdJpGy6nk/edit?usp=sharing

Wednesday, November 25, 2015

Progress Report

I have a lot of news in terms of the headset and what I am doing now. Firstly, Mr. Lin received a response from the company regarding the shipping of the headset. It should arrive in the next couple of weeks, which is very exciting. Because of this, I have:

  • Downloaded the app that has already been made
  • Watched the videos online by Emotiv
  • Looked through the Lynda.com tutorials to refresh my app development skills
  • Begun a new Gantt chart to help plan around the headset shipment
  • Found 9 patents so far for the patent project
One problem is that I did not finish a lot of the goals that I set for myself because of other classes and commitments. To fix this, I know that I have to do two things:


  • Be more realistic with my goals
  • Consider the influence of my schoolwork
This will not limit the product that I am making or hurt the project at all. Keeping these ideas in mind as I write these reports will simply improve my planning.

My plan now is:
  • Finish the Gantt Chart
  • Finish the patent project
  • Do some more work on the app, hopefully have a prototype soon (this goal may be pushed back a little longer)

I know what I have to do, and now I know that I have far to go. If I want to succeed, I have to be more serious about my work and focus on my set deadlines.

Tuesday, November 17, 2015

Emotiv Response #3

We received these links after Mr. Lin's payment was confirmed. I am definitely going to look over the links for the next couple of days.

Useful Links:
Google+ Community
Early Access Page
YouTube videos
Knowledge Base
Newly released iOS Insight app
iOS Mental Commands app

Browser Based Apps:
Control Panel
EEG Viewer using Bluetooth
EEG Viewer using Dongle

SDK: GitHub

To rehydrate sensors:
this

Sunday, November 15, 2015

Temporary Progress Report 3

I have to add my new Gantt chart and completed work when I get to school, so this progress report will be updated on November 16th. I am sorry that it has to be delayed.

-Isabelle Greenberg

Tuesday, November 10, 2015

Emotiv Response #2

This is the second email that I have received. I asked them about the order and gave them the order number. For some reason, they didn't say anything specific about the order; I think the main point is that it will take more time than expected. Here is the email exactly:

Hi Isabelle,
We’ll be using FedEx International Economy method to ship directly from China.
Unfortunately, we are experiencing a little delay in shipping due to FedEx service. We are hoping the issue will be resolved quickly (preferably within this week) so we can continue shipping.
Also, in the meantime, we have released iOS and Android Insight app.
- This is the link to iOS app: https://itunes.apple.com/us/app/emotiv-insight/id1031805596?l=vi&ls=1&mt=8
- This is the link to Android app: https://play.google.com/store/apps/details?id=com.emotiv.insightapp
- Insight app allows you to record sessions and review your performance metrics and Mental Commands app allows you to control a virtual object with your mind.
We thank you for your support and apologize for any inconvenience the delay in delivery of this order may have caused. We hope this instance has not tainted your experience with Emotiv and if it has, we hope to continue to work with you to improve your outlook in the future.
Kind regards,
Emy
EMOTIV Customer Support

Sunday, November 8, 2015

Progress Report 3 (11/8/15)

Progress:

  • I have found an extremely useful wireframe website where you can test out the positioning of various graphics for apps
  • It can be used for many things, and the free trial is plenty as of now (I do not believe we will have to purchase a membership)
  • This is the preview as of now; expect more soon! There are always ways to improve
  • https://www.fluidui.com/editor/live/preview/p_aSUz6vve8epg6JOBypSbMUA22DpTPVUC.1447037055658
  • With Lynda.com, there are plenty of tutorials and reviews for Objective C and App Design in general
  • I am going to use this whenever I have time
Problems:
  • A lot of the problems from last time haven't been resolved, and I could have done more studying
  • No response on the forum specifically, but the response from the Emotiv official is a good thing
  • While the wireframe has been made, the code itself hasn't been written yet
  • I will improve!!!
Plan:
  • Finish the wireframe
  • Do some concrete studying
  • Get at least started on the code, if not finished

Wednesday, November 4, 2015

Emotiv Response

I sent a couple of questions to Emotiv on October 20th about the differences between the EEG and non-EEG versions. I also told them a little bit about my app ideas and asked about their Performance Metrics Suite. This is their response. I will try to work around the issues that it presents.

Hello Isabelle,

They have different firmware in the headset. EEG headset will allow the viewing of raw EEG data in our EEG viewer browser app as well as Emotiv’s proprietary metrics (Facial expressions, Performance Matrix, Mental Commands) in our CPanel browser app. It allows researchers to look at data at these specific locations and calculate their own metrics should they desire. Non-EEG headset will show metrics but no raw data. You won’t be able to do research but can still track your cognitive performance and develop apps.
If you want the software to work with OpenVibe, I'd recommend EPOC/EPOC+. 
Insight and EPOC are designed for different purposes. Clearly EPOC has more sensors and therefore measures more regions of the brain, giving a more complex view of mental activity. It is more appropriate for research where information density is critical. Insight is positioned over key brain regions found to be important for our detections but there is less signal redundancy which can be used to reduce the effects of noise in more extreme situations, therefore EPOC retains higher accuracy in more situations. In good operating conditions both headsets perform well in detecting our key emotional outputs. The reduced sensor count also means that distinguishing 3 or more Mental Commands is slightly more difficult and some facial expressions cannot be detected with Insight. 
Or let's say Insight detections have been reworked significantly and are different to EPOC detections. The lower sensor count produces a lower spatial resolution and significantly reduces the available redundancy in the available information. Expressiv has less detections, Affectiv detections are a different list by and large, and Cognitiv self-trains so the detections are similar in principle. The devices are designed for different markets and applications.
But if you still stick to Insight, we have released iOS and Android Insight app.
- This is the link to iOS app: https://itunes.apple.com/us/app/emotiv-insight/id1031805596?l=vi&ls=1&mt=8
- This is the link to Android app: https://play.google.com/store/apps/details?id=com.emotiv.insightapp
Insight app allows you to record sessions and review your performance metrics and Mental Commands app allows you to control a virtual object with your mind.
We thank you for your support and apologize for any inconvenience the delay in responding may have caused. We hope this instance has not tainted your experience with Emotiv and if it has, we hope to continue to work with you to improve your outlook in the future.
Kind regards,
Emy
EMOTIV Customer Support

Sunday, November 1, 2015

Progress Report 2 (11-1-15)

Progress:


  • The headset has been ordered, which is extremely exciting (thank you Mr. Lin!)
  • I will be working with the Insight, and now there is a clearer goal in mind for the project
  • More posts have been made on the Emotiv forum, and I have emailed my remaining confusions to their account
  • The framework for the app has begun, and there should be a concrete set of code that I will post soon
  • I have watched various tutorials for the kind of app I want to make, so a good portion of my reviewing has been completed already

Problems:

  • I have not received any responses from the Emotiv forum, but the questions I posted are not too important (this isn't a major problem, just an annoyance)
  • I need to refresh a bit more on app design, as it has been a while since I've worked with Objective-C
  • I do not think I can use Swift since all the example code is in Objective-C (I might be able to just translate it into Swift if necessary)
  • The headset will ship out in 3 weeks, which is a long time to wait.

Plan:

I will use this next week to design the app and get something up and running to test. I can use Tuesday to my advantage, as we will not be having school. Also, on Tuesday I am touring an app development company, which could only benefit my endeavors on this project. Reviewing and such is important, but as of now it is not my key objective. I need to remember to post an updated Gantt chart. The present Gantt chart does not accurately reflect the work I have done so far.

Monday, October 26, 2015

Progress Report 1 (10-25-15)

Progress:

  • Downloaded the example code from the Emotiv database
  • Found a more extensive description of the differences between the two headset models on the Emotiv website, giving me more detail on what comes with the EEG version of the headset.
  • The EEG version gives the raw data and other programs, while the non-EEG version just has their own algorithms. Here are the details:

    Basic SDK
    The Emotiv SDK provides an effective development environment that integrates well with new and existing frameworks and is now available to independent developers. It includes our proprietary software toolkit that exposes our APIs and detection libraries. 
    Facial Expressions
    This suite uses the signals measured by the Emotiv Brainwear to interpret user facial expressions in real-time.  Artificial intelligence can now respond to users naturally, in ways only humans have been able to until now. When a user smiles, their avatar can mimic the expression even before they are aware of their own feelings.
    Performance Metrics & Emotional States
    This suite monitors user emotional states in real-time. It provides an extra dimension in human computer interaction by allowing the application/game to respond to a user's emotions. Characters can transform in response to the user's feeling. Music, scene lighting and effects can be tailored to heighten the experience for the user in real-time. These algorithms can be used to monitor user state of mind and allow developers to adjust difficulty to suit each situation.
    Mental Commands
    This detection suite reads and interprets a user's conscious thoughts and intent. Users can manipulate virtual objects using only the power of their thought! For the first time, the fantasy of magic and supernatural power can be experienced.

    EEG Access
    When you purchase EEG data access, your headset will be equipped with EEG Firmware that allows the real-time display of raw EEG data stream, contact quality, FFT, motion sensors, wireless packet acquisition/loss display, and marker events in our exclusive TestBench software:
    TestBench™ software provides:
    Real-time display of the Emotiv headset data stream, including EEG, contact quality, FFT, gyro, wireless packet acquisition/loss display, marker events, headset battery level.
    Record and replay files in binary EEGLAB format. Command line file converter included to produce .csv format.
    Define and insert timed markers into the data stream, including on-screen buttons and defined serial port events. Markers are stored in EEG data file
    Marker definitions can be saved and reloaded. Markers are displayed in real time and playback modes. 
    Export screenshot for documentation

    TestBench™ features include:
    EEG display:
    5 second rolling time window (chart recorder mode)
    ALL or selected channels can be displayed
    Automatic or manual scaling (individual channel display mode)
    Adjustable channel offset (multi-channel display mode)
    Synchronized marker window

    FFT display:
    Selected channel only
    ALL or selected channels can be displayed
    Adjustable sampling window size (in samples)
    Adjustable update rate (in samples)
    dB mode – power or amplitude calculations
    dB scale
    FFT window methods: Hanning, Hamming, Hann, Blackman, Rectangle
    Predefined and custom sub-band histogram display – Delta, Theta, Alpha, Beta, custom bands
    Gyro display:
    5 second rolling time window (chart recorder mode)
    X and Y deflection
    Data Packet display:
    5 second rolling graph of Packet Counter output
    Packet loss – integrated count of missing data packets
    Verify data integrity for wireless transmission link
    Data Recording and Playback:
    Fully adjustable slider, play/pause/exit controls.
    Subject and record ID, date, start time recorded in file naming convention.



  • The FFT display can be made by OpenViBE, so technically, if it is compatible, we could purchase the cheaper headset (a rough sketch of this band-power calculation is below, after this list)
  • Created an account on the OpenViBE forum to ask various questions about the compatibility of the platform
  • Received a response from the OpenViBE team (this is described more in the problems section)
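For my own reference, here is a rough sketch (in Python, since that is the language I know best) of how raw EEG samples can be turned into the Delta/Theta/Alpha/Beta band powers that TestBench and OpenViBE display. This is not Emotiv or OpenViBE code, and the 128 Hz sampling rate is my assumption about the headset:

# Illustrative sketch (not Emotiv or OpenViBE code): band power from one
# channel of raw EEG using a plain FFT. The 128 Hz sample rate is an assumption.
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(samples, fs=128):
    """Return the summed FFT power in each classic EEG band for one channel."""
    samples = np.asarray(samples, dtype=float)
    samples = samples - samples.mean()            # remove the DC offset
    spectrum = np.abs(np.fft.rfft(samples)) ** 2  # power at each frequency bin
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# One second of fake data dominated by a 10 Hz (alpha) oscillation:
t = np.arange(128) / 128.0
fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(128)
print(band_powers(fake_eeg))  # the "alpha" entry should be the largest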

Problems:
  • The example code requires a headset for it to be able to run, and produces errors if not set up
  • The Emotiv forum has not responded to my posts, and the answered questions are vague and confusing
  • While it is great that OpenViBE responded to me, they did not give any good news. They said that they do not have the Insight in their offices, but they guess that it is not compatible. This is for two reasons: the channel characteristics are different, and the API is different from the EPOC's.
Plan:

I might have no choice but to use the EPOC, for various reasons. Firstly, a major component of the project is that I am supposed to collaborate with the OpenViBE group. They would be extremely helpful, but what am I supposed to do if the headset doesn't even work with the platform? The order will also take almost three weeks to process, pushing my development timeline back farther and farther, and I do not know when the headset could even be shipped after that. I need to go over this progress and the details available to me on Tuesday. After I talk to Mr. Lin about all of this, I will make a choice and then update the blog on Sunday.






Tuesday, September 29, 2015

RE: Gantt Project

Feedback:
  1. There are quite a few overlapping tasks in your chart. Since you are working alone at the moment, make sure you can handle them.
  2. Most of your tasks are week long. Can you reduce the granularity by subdividing the tasks into sub-tasks? 
  3. Should "Study Example Code" and "Review Coding Language" happen at the same time?
  4. Based on the chart, it's not clear what I can expect to see at the end of this period. Maybe you can rearrange the activities under the problem-solving steps.

Thursday, September 24, 2015

Gantt Project

https://drive.google.com/folderview?id=0B6jiE_ff0anPM1N3dDVxLTZqVEE&usp=sharing

As a side note, this is still a work in progress and will be for a day or two; however, I wanted to post what I had. I am sorry if it still seems incomplete.

http://www.ntu.edu.sg/home/EOSourina/Papers/EmoRecVis2010.pdf

http://www.ntu.edu.sg/home/EOSourina/Papers/RealtimeEEGEmoRecog.pdf


Tuesday, September 22, 2015

Presentation from 9/17/15

This is my slideshow from the presentation:

https://docs.google.com/presentation/d/1NtyjgU_Kjo6AUZ8EfKDfIbpKdvQGlzYpJYjaPbbDuSQ/edit?usp=sharing


Sunday, September 20, 2015

Resource from Last Year

Team Progress Report Blog: http://advstem2.blogspot.com/
Project Resource including Gantt Chart: http://stem14-15.blogspot.com/2015/02/project-resource-biofeedback-games.html

RE: Initial Planning & Coordination

Project Description and Merits:

  • I would like to create an app that uses the EEG data from the Emotiv Insight headset to read the user's mood. The app will then show a color that corresponds to the user's mood in real time and change depending on how they are feeling.
  • This will be useful in therapy situations to help the user and their therapist understand how the user is feeling. It will also be useful for people who cannot describe how they are feeling because of a disability. People could practice controlling their emotions more easily if they have visual cues of their feelings.
  • It sounds like an interesting application. The only concern is that other people have done similar research. You can start by understanding and replicating their design, and then try to improve it.
Group/Team Communication:
  • Teams 4, 5, and 6 make up this group; however, at present there are only teams 4 and 6.
  • My team consists of myself.
  • Collaboration can only work if everyone involved understands each other, and allows for all ideas and opinions to be shared. To summarize, good communication is key and an open mind can only help.
  • Since team 4 is focusing on the classification mechanism, you should discuss and collaborate with them early on about your topic. They should help you develop the algorithms.
Prior Work/Resource Inventory:
I tried to update it, but I just posted it onto my blog by mistake. I will fix this at a later time.
  • There is a menu item called "Pages" where you can edit your Resource page.

Technology Analysis:
  • A strong understanding of neuroscience and the lobes of the brain.
  • Understanding of the headset's API and the coding involved with it.
  • You can be even more specific. For example: types of EEG signals, brainwave channels, headset/probes, brainwave analysis process, classification algorithms, IDE and language (you need to pick a development platform), subject test procedure, etc.
Competence:
  • At present, I know Python and some C, but I will have to learn a mobile app language like Swift. I also need to brush up on the EEG information that I read over the summer. I have not looked at the papers in a couple of weeks and I need to look over them again.
Safety: The headset, when it arrives, should be stored correctly, but other than that everything should be fine.

Equipment, Materials, and Budget: Once again, other than the headset, there shouldn't have to be any extra equipment. The online courses that I will be taking are free, so that isn't an issue.
  •  Depending on the platform you choose, you might need an iPhone/iPad, an Android phone/tablet, or a Windows/Linux/Mac laptop.

Schedule: In the next week, I want to strengthen my neuroscience background and solidify my ideas. I may also have to change the idea that I have as of now, since I do not know yet how good of an idea it is. Only after that I want to dive into the programming.
  •  You can also use this topic as your learning curve, and target a newer, more challenging topic.

Initial note: I am not certain how successful and/or feasible this project is with me working on my own. I also have a lot of skills that I will need to learn, as described earlier in this post. I very much welcome suggestions and/or feedback.
  • You can always discuss the issues with me. As far as the programming part, there will be several teams involved in programming, so you can always support each other. Furthermore, there is a huge online community which provides an enormous amount of resources!

Saturday, September 12, 2015

Initial Planning and Coordination (Isabelle)

Project Description and Merits:

  • I would like to create an app that uses the EEG data from the Emotiv Insight headset to read the user's mood. The app will then show a color that corresponds to the user's mood in real time and change depending on how they are feeling.
  • This will be useful in therapy situations to help the user and their therapist understand how the user is feeling. It will also be useful for people who cannot describe how they are feeling because of a disability. People could practice controlling their emotions more easily if they have visual cues of their feelings.
Group/Team Communication:
  • Teams 4, 5, and 6 make up this group; however, at present there are only teams 4 and 6.
  • My team consists of myself.
  • Collaboration can only work if everyone involved understands each other, and allows for all ideas and opinions to be shared. To summarize, good communication is key and an open mind can only help.
Prior Work/Resource Inventory:
I tried to update it, but I just posted it onto my blog by mistake. I will fix this at a later time.

Technology Analysis:
  • A strong understanding of neuroscience and the lobes of the brain.
  • Understanding of the headset's API and the coding involved with it.
Competence:
  • At present, I know Python and some C, but I will have to learn a mobile app language like Swift. I also need to brush up on the EEG information that I read over the summer. I have not looked at the papers in a couple of weeks and I need to look over them again.
Safety: The headset, when it arrives, should be stored correctly, but other than that everything should be fine.

Equipment, Materials, and Budget: Once again, other than the headset, there shouldn't have to be any extra equipment. The online courses that I will be taking are free, so that isn't an issue.

Schedule: In the next week, I want to strengthen my neuroscience background and solidify my ideas. I may also have to change the idea that I have as of now, since I do not know yet how good of an idea it is. Only after that I want to dive into the programming.


Initial note: I am not certain how successful and/or feasible this project is with me working on my own. I also have a lot of skills that I will need to learn, as described earlier in this post. I very much welcome suggestions and/or feedback.
Another resource (This may or may not be useful, I have yet to fully read/understand the text):
Evaluating a Brain-Computer Interface to Categorise Human Emotional Response: http://www.brainathletesports.com/pdf/WP_KatieCrowley.pdf

Tuesday, August 11, 2015

Research Task 2, Part 2 (Isabelle Greenberg)

Brain-Activity-Driven Real-Time Music Emotive Control:
Abstract:

  • Current active music systems allow a user to control aspects like the playback and volume.
  • This project will use the Emotiv EPOC headset to receive EEG data and map it to emotional states. Then, the music will transform based on the user's mood
Introduction:
  • This description is very similar to the abstract
Background:
  • "GoTo" classifies music systems by their ability to control playback, touchup, retrieval, and browsing
  • This system will have two parts; a real time system and a system able to adapt based on the real time data
2.1 Emotion Detection
  • Methods of emotion detection are voice and expression, skin conductance, heart rate, and pupil dilation
  • Indicators for arousal and valence can be derived by measuring the alpha and beta activity over the prefrontal cortex
  • This can classify emotions such as happiness, anger, sadness, and calm
2.2 Active Music Listening
  • Interactive systems allow the listener to control the music like a conductor using hand gestures
  • The parameters are set from high to low, and they can affect the music and the influence of the gestures being made
2.3 Expressive Music Performance
  • The KTH music system has a set of thirty rules that control different aspects of expressive music
  • The magnitude is controlled by the parameter "k", and different combinations of k parameters can create different styles, stylistic conventions, or emotional intentions
  • An example of this situation was shown in 2006 with an arousal/valence plane and 7 of the rules
3 Methodology
  • First, the process begins with the Emotiv EPOC headset collecting data over the prefrontal cortex (the sensors are F3, F4, AF3, AF4)
  • That cortex regulates emotion and deals with our conscious experience
  • Affective states result from activation or deactivation (this is arousal) and pleasure or displeasure (this is valence)
  • The results are happiness, anger, relaxation, and sadness and each has a different combination of arousal and valence
3.1 Signal Preprocessing
  • Alpha waves are dominant in relaxed and awake states of mind
  • Beta waves are indicators of excited mind states
  • The signal has to first be filtered to get these waves separated
  • The sampled waves are put through a logarithmic equation that I do not completely understand yet but will look into further; it finds the band power
3.2 Arousal and Valence Calculation
  • Arousal is found by adding the beta band power of F3 and F4 and dividing it by the sum of the alpha band power of the same electrodes; valence is found with a different equation
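To make sure I understand the arousal calculation, here is a minimal Python sketch of it as described in my notes. The paper also applies a logarithm to the band power, which I have left out here, and the valence formula shown is only my guess at a typical left/right prefrontal asymmetry, not something I have confirmed from the paper:

# Sketch of the arousal/valence idea above. Arousal follows my notes
# (beta power at F3+F4 divided by alpha power at F3+F4); the valence
# formula is an assumed left/right asymmetry and needs to be checked.
import numpy as np

def band_power(filtered):
    """Mean squared amplitude of an already band-pass-filtered signal."""
    filtered = np.asarray(filtered, dtype=float)
    return np.mean(filtered ** 2)

def arousal(alpha_f3, alpha_f4, beta_f3, beta_f4):
    return (band_power(beta_f3) + band_power(beta_f4)) / \
           (band_power(alpha_f3) + band_power(alpha_f4))

def valence(alpha_f3, alpha_f4, beta_f3, beta_f4):
    # Assumption: compare relative activation of the right (F4) and left (F3) sides.
    return band_power(alpha_f4) / band_power(beta_f4) - \
           band_power(alpha_f3) / band_power(beta_f3)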
3.3 Synthesis
  • The use of the rules comes in after the pDM program is used to analyze the values found so far
3.4 Experiments
  • Two types of experiments were performed, one while sitting and not moving and one while playing an instrument
  • The subject was asked to try to change their emotion, and then the results from the software would be checked to see whether they were correct
4 Results
  • The EEG signal came through and the arousals were mapped out on a graph, but the signal was not very smooth
5 Conclusions
  • Controlling music with EEG data is possible, because the arousal and valence values do change in accordance with emotion
  • However, the signal is not smooth and more experiments would have to be done for emotions other than happy or sad

Algorithm Notes:
Decision Trees:
  • Different patterns are divided into single pattern subsamples in a decision tree
  • When there is only one pattern, the decision tree has a leaf
  • Threshold: Purity measure of each node to improve feature selection
  • The gain is the information value of all the features minus the information value of the nodes
  • Sample sets are divided into two sets, and the threshold can affect how the tree functions
  • The entropy is low, or technically zero, when all elements are of the same class; this means a higher gain (I wrote a small worked example of this; it is after this list)
  • At present, this math is confusing. The logs are understandable but how the equation is set up is unclear
  • The leaf does not need to be processed any further
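Here is the tiny worked example in Python I mentioned above, to make the entropy and information-gain math less confusing for myself. It is only an illustration, not code from the videos:

# Tiny worked example of entropy and information gain (illustrative only).
import math

def entropy(labels):
    """Entropy in bits of a list of class labels."""
    counts = {c: labels.count(c) for c in set(labels)}
    total = len(labels)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def information_gain(parent, left, right):
    """Entropy of the parent node minus the weighted entropy of its children."""
    total = len(parent)
    weighted = (len(left) / total) * entropy(left) + (len(right) / total) * entropy(right)
    return entropy(parent) - weighted

parent = ["happy", "happy", "sad", "sad"]         # mixed node: entropy = 1.0 bit
left, right = ["happy", "happy"], ["sad", "sad"]  # pure children: entropy = 0
print(entropy(parent), information_gain(parent, left, right))  # 1.0, 1.0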
Self Organizing Maps:
  • The inputs are in a table separated by lines and governed by different maximums and minimums
  • The values are updated on a graph for all the input data and a longer process is gone through
  • The neurons are displaced?
K-Means Algorithms:
  • Works well with compact clusters, is sensitive to outliers and noise, and only works with numerical attributes
  • The iteration is repeated again and again by increasing a counter each time it runs
  • If the classification of one scenario is less than the convergence, it will repeat from an earlier step
  • K is given a value, and the algorithm will find that many clusters
  • It uses randomly defined initial means and assigns each element to the nearest mean
  • The means shift around between the elements from iteration to iteration, but the algorithm still classifies the elements
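To make the k-means loop concrete for myself, here is a minimal one-dimensional sketch in Python. It is illustrative only; a real project would use a library implementation:

# Minimal 1-D k-means sketch (illustrative only).
import random

def kmeans(points, k, iterations=20):
    means = random.sample(points, k)                  # start from k random points
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:                              # assign each point to its nearest mean
            nearest = min(range(k), key=lambda i: (p - means[i]) ** 2)
            clusters[nearest].append(p)
        means = [sum(c) / len(c) if c else means[i]   # update each mean
                 for i, c in enumerate(clusters)]
    return means, clusters

points = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
print(kmeans(points, 2))   # the means should settle near 1.0 and 5.1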
Artificial Neural Network Algorithm:
  • The inputs are 1 or 0
  • The middle layers are hidden; two neurons make up the first hidden layer and another neuron composes the second
  • Propagating is how the value is found
  • The step function in the neuron is a simple graph measured by an equation
  • Same inputs can have different weights
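Here is a single artificial neuron with a step activation written out in Python, just to see how the 0/1 inputs, weights, and threshold fit together. This is my own illustration, not code from the video:

# One neuron with a step activation, matching the "inputs are 1 or 0" idea above.
def step(x):
    return 1 if x >= 0 else 0                 # the simple threshold graph

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return step(total)

# Same inputs, different weights/bias give different outputs (an AND-like neuron):
print(neuron([1, 1], [0.6, 0.6], -1.0))  # 1: both inputs active
print(neuron([1, 0], [0.6, 0.6], -1.0))  # 0: weighted sum stays below the threshold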
Support Vector Machine Algorithm:
  • Research hyperplanes
  • The larger margin for a hyperplane makes that hyperplane better
  • research weight vectors
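As a note to myself on the hyperplane and margin ideas, here is a small numeric illustration in Python. The weight vector and bias are made-up numbers, not anything from the video:

# Illustration of a hyperplane, its decision function, and its margin (2 / ||w||).
import numpy as np

w = np.array([2.0, 1.0])   # weight vector: perpendicular to the hyperplane
b = -4.0                   # bias/offset

def classify(x):
    return np.sign(np.dot(w, x) + b)   # which side of the hyperplane x lies on

margin = 2.0 / np.linalg.norm(w)       # a larger margin means a better separator
print(classify(np.array([3.0, 2.0])), classify(np.array([0.5, 0.5])), margin)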
Random Forest Algorithm
  • The combination of learning models increases the classification accuracy
  • This is the idea behind "bagging," which is to average noisy but unbiased models to create a model with low variance
  • Random Forest: Large collection of decorrelated decision trees
  • Research "matrix"
  • This involves many features
  • Random subsets are used with the different matrices to create class predictions
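To see why averaging reduces variance (the idea behind bagging), here is a very small Python illustration I wrote. It leaves out the bootstrap-resampling part of real bagging, so each "model" here is just the same noisy estimator sampled again:

# Minimal illustration of the variance-reduction idea behind bagging.
import random

def noisy_model(x):
    return 2 * x + random.gauss(0, 1.0)        # true answer is 2x, plus noise

def bagged_prediction(x, n_models=100):
    return sum(noisy_model(x) for _ in range(n_models)) / n_models

print(noisy_model(3), bagged_prediction(3))    # the averaged value hugs 6.0 much more closely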
Hopefully more basic research will allow me to process these videos better.

Sunday, August 9, 2015

Research Task 2, Part 1 (Isabelle Greenberg)

Mind-Controlled Keyboard:

Introduction:

  • Although the subjects for this project are relatively immobile, their eye movements may help to give more possibilities for the research. In the previous papers, extra movements like blinking would move to the next slide and mean something for the program.
  • This will rely more on neurophysiologic signals as an access method, or the neuron system in general
  • Factors that have to be considered are the users, the environment, and the algorithm behind the process
  • There are many ways to make an EEG controlled keyboard and one of them is the RSVP (Rapid Serial Visual Presentation) system. It allows the user to study options one after another by showing the alphabet one letter at a time until the user chooses one
  • The headgear must recognize the user's intent to select one of the signals with some form of EEG data
  • One of the waves is called P300 (this is probably described more later)
  • The language will be in Thai, and the target group is the disabled
2.1 Brain-Computer Interface
  • The BCI connects directly between the human brain and the computer, so it will be useful for people with locked in syndrome or ALS
  • The user's habits, homes, and environments will be considered, and they may or may not be considered in our project (the environment may be the most useful in our situation)
2.2 Mind-Controlled Keyboard Interfaces
  • Rapid Serial Visual Presentation: Shows one letter at a time and when the expected letter arrives, the positive intent value stands out in comparison to the other letters.
  • Matrix Speller: Letters are in rows and columns like a chess board, and moves over the columns one by one. When the expected column arrives, the positive intent signal will distinguish it, and then the speller goes letter by letter in that column
  • Hex-o-Spell: This is more visual with groupings of letters in circles. It highlights each circle and loops until the positive intent signal, with which it further breaks down that circle.
  • The same idea is used repeatedly but with different setups and appearances.
2.3 Emotiv EPOC
  • 14 sensors that track the user's EEG
  • Accurately detects when given the user's gender, age, handedness, intentional control, vividness of visual imagery, and mental rotation ability (the ability to move a 2D or 3D image in your mind)
  • TestBench is going to be used mostly with this project, and I believe it was described more thoroughly in an earlier paper
2.4 EEG Data
  • There is a constantly changing electric field on the scalp because of the signals fired by neurons in the brain
  • There are 4 types of EEG data:
    • P300:
    • Detectable peak in activity that occurs 300 ms after some stimulus is presented. This signal helps choose the specific highlighted value when the computer hovers over it (I sketched how this could pick a value; see after this list)
    • Slow Cortical Potential:
    • Shown by changing voltage in the brain which can be controlled after a long period of time
    • Sensorimotor Rhythms:
    • Rhythms detected when relaxing while not thinking about movement.
    • Steady State visual evoked potential:
    • If flashing stimulus is presented to the user, brainwave modulations of the same frequency as the flashing rate of stimulus is detected in the visual cortex.
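Here is the rough Python sketch I mentioned of how a speller might use the P300 peak: average the EEG epochs time-locked to each candidate's highlight and pick the candidate with the largest response around 300 ms. The 128 Hz sampling rate and the 250-450 ms window are my assumptions, and this is only an illustration of the idea, not code from the proposal:

# Hedged sketch of the core P300 selection idea (illustrative only).
import numpy as np

def pick_target(epochs_by_candidate, fs=128):
    """epochs_by_candidate: {candidate: array of shape (n_epochs, n_samples)}."""
    window = slice(int(0.25 * fs), int(0.45 * fs))    # roughly 250-450 ms after the highlight
    scores = {c: np.mean(np.asarray(e), axis=0)[window].max()
              for c, e in epochs_by_candidate.items()}
    return max(scores, key=scores.get)                # candidate with the largest average peak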
4.1 Objective and Outputs
  • They want to study the signal activities in the brain and get useful info from it, make software with the headset, create new algorithms, and make a new method to the keyboard system
  • The outputs will be a program, a report, a new algorithm, and the result of the experiments.
  • The new algorithm has to give the fastest solution and the most accurate result
4.2 Benefits
  • With this project, people can communicate in a new way that is better than before, and the headset will make the setup relatively inexpensive
  • In practice, the technology we currently have is insufficient, slow, and sometimes unreliable
Literature Review
  • Brain-computer interfaces are a possible assistive technology (AT) for the disabled, classified as an augmentative and alternative communication (AAC) possibility
  • Previous AAC tech has been the joystick, mouse, binary switch, eye gaze, and head control; however, this does not help a good portion of the disabled
  • BCI research for this purpose has 5 components
    • Input modalities for the device
    • Processing demand of the device
    • Language representation
    • Output modalities
    • Functional gain of the device
  • BCI technology has various components
    • Stimulus presentation paradigm (check 2.4)
    • Signal acquisition (Info received from the headset)
    • Preprocessing (filtering out the noise)
    • Dimensionality (reducing random variables)
    • EEG evidence
    • Contextual evidence
    • Joint inference
Available Tech
  • Matrix Speller
    • First goes column by column, then value by value within the chosen column
    • Works as a loop until the sentence is completed
    • The matrix can be rearranged like how a keyboard is set up in a specific way
    • 7.8 characters per minute with 80% accuracy
  • Study Case for the Matrix Speller
    • This is called the Brain-Computer Interface Virtual Keyboard for accessibility
    • 95 keys that had groups then sections then the values themselves, using the drill-down approach
    • The accuracy was 61.25%, thus showing how a massive amount of variation in the matrix makes users spend more time selecting characters
    • Ways to improve: many groups with the same type of keys inside each group, thus there is less variation
  • Rapid Serial Visual Presentation (RSVP)
    • Symbols are presented one at a time in the center of the screen, rapidly and seemingly randomly (it is not actually random, it just appears to be)
    • Depends less on eye-gaze control than the previous methods, but only shows one character at a time
    • 5 characters a minute, which cannot be used for conversations
    • Must be improved
    • A good filter in the program to take out the extra noise is essential (but how will we make a good filter?)
    • Symbol selection can increase more after use
  • Balanced-Tree Visual Presentation
    • Groups in circles balanced in probability according to a Huffman tree
    • When the group with the desired symbol is selected the symbols distribute themselves, and there is a "back" symbol in case a mistake is made
  • The matrix speller was chosen for this project, but now the algorithm suggests probable words after a word is started.
6.1 Approach
  • The approach begins with the input, which in this situation is obtained with the Emotiv EPOC headset. There are 14 channels that are communicated over Bluetooth to the computer
  • Denoising is the first part of the software; the data received has to be cleared of the clutter that affects signal quality
  • Data mining then detects the user's intention, but many instances of the intention have to be recorded so that a relative pattern is found. Afterwards, the pattern triggers the command of the virtual keyboard in real time
  • The GUI is made to suit Thai-speaking people with disabilities and has a word-guess function
  • The output is shown and the user decides if it is okay, or decides to pass
6.2 Tools and Techniques
  • They used the Eclipse IDE and Java because they are most comfortable with them
  • The testbench and control panel were a major help
The last portion of the paper was examples of previous works and images of those projects.


The next two sets of notes will be posted tomorrow; my internet isn't working that well right now.
Sorry!

Thursday, July 23, 2015

Research Task 2 (07/27/15 - 08/09/15)

Great to see you making substantial progress! Here is your second task for the summer. The main goal of this period is to learn brainwave application development through example projects. Pay special attention to the project flow and the major components of the projects. Also, identify the missing links (knowledge and skills) for conducting your brainwave research. At the same time, you can start brainstorming your own project ideas and drafting your proposals. In addition, there is a series of nice, short videos which will give you an overview of some of the classification techniques you might encounter while conducting your research.

Brainwave Projects
  1. Project Proposal: Mind-Controlled Keyboard (2014), Putthipee Chattaris, Supree Srimanchanda. [20 pages] It is a pretty decent project proposal for developing a brainwave application. Learn in detail from the proposal how they plan and organize their project in its many aspects. It will help you a lot in planning and organizing your own project.
  2. Brain-Activity-Driven Real-Time Music Emotive Control (2013), Sergio Giraldo, Rafael Ramirez. [6 pages] Use this paper as an example to see how people conduct brainwave research and report their results. 
  3. Come up with a few project ideas, and view them through the framework of the example projects. Write a very brief (preliminary, non-formal) project proposal based on the ideas you have.
Classification Algorithms

  1. How classification algorithms work (2015), YouTube videos, Thales Sehn Körting. Classification is one of the major steps in developing brainwave applications. Since classification involves lots of math, it can be overwhelming when you encounter it in technical papers. This series of 9 video tutorials gives you a light introduction to classification algorithms, though some of them may still seem pretty difficult or not clear enough. You will have a chance to zoom in and study them in more depth later. At this point, just get familiar with them and understand them as much as possible.
* Whenever you come across good papers/websites/reports, don't forget to add them to the "Project Resource" page.

* Please take electronic notes while you are studying the materials, watching the videos, or browsing through the web.

* Make PowerPoint presentations based on your notes. We will start presentation at the beginning of the year.
 

Tuesday, July 7, 2015

Research Task 1 (Isabelle Greenberg)

Non-Invasive BCI through EEG

1.Abstract
  • Electroencephalography: The measurement and recording of the electrical signals produced by neurons in the brain. This is recorded with sensors across the scalp
  • BCI: Brain Computer Interface
  • Applications: Prosthetic Devices, Communication, Military uses, video gaming, virtual reality, robotic control, and assistance to disabled persons
  • Current issues: Inaccuracies, detection, thought and action delays, cost, and invasive surgery


2. Introduction
  • This paper will cover the process of creating a suitable BCI that uses the Emotiv EPOC System to measure EEG waves and control a robot called the Parallax Scribbler


2.1 Electroencephalography
  • EEGs were first measured in 1912 in dogs, then 1922 for humans
  • Postsynaptic potentials are measured by electrodes that are sensitive to these changes, and they are made up of a combination of inhibitory and excitatory potentials in the dendrites
  • Excitatory postsynaptic potentials increase the likelihood of an action potential occurring
  • Inhibitory postsynaptic potentials decrease the likelihood of an action potential occurring
  • The primary research for this system uses rhythmic activity that is based on mental state, and this compares the brain at an active and passive state


2.2 Brain-Computer Interface
  • Open-loop system: Responds to a user's thoughts
  • Closed loop system: Gives feedback to the user (goal of the BCI research)
  • By focusing on the motor cortex of the brain, responses could be translated into robotic activity
  • Invasive methods are highly debated


3. Previous EEG BCI Research
  • The model of EEG-BCI systems follows reading in EEG data, translating into output, and then giving feedback back into the user interface. The issues involve the necessity for real time extraction and classification of the data
  • Extraction separates the necessary data, and classification decides what it represents
  • There are various filters that can show the normal slope of the data, among other methods
  • Problems: No standard way of classification
  • Feedback is essential and can be auditory clues or haptic sensations
  • Synchronous: Gives the user a cue to perform a certain mental action and records the user's EEG patterns in a fixed time window (easier to construct than asynchronous)
  • Asynchronous: Driven by the user instead of the computer; passively and continuously monitors the user's EEG data to classify it on the fly


4. The Emotiv System
  • Based around the EPOC headset for recording EEG measurements and a software suite which processes and analyzes the data
  • Specifics on the makeup of the headset are described


4.1 Control Panel
  • This is a GUI that is a gateway to the headset
  • Its twelve recognizable artifacts are blink, right/left wink, look right/left, raise brow, furrow brow, smile, clench, right/left smirk, and laugh
  • Affectiv monitors emotional states, and Cognitiv monitors the user's conscious thoughts
  • These are prototype action thoughts created with test cases
  • Only four thoughts can run at a time


4.2 TestBench
  • Shows real-time data from any of the sensors and records the data in .edf files


4.3 The Emotiv API
  • The EmoEngine is the next step after the raw data from the headset, which then uses the EmoState (the new facial movement or brain activity) to go through a loop
  • Check page 16 again for the helpful visual


5. The Parallax Scribbler Robot and IPRE Fluke
  • The Parallax Scribbler Robot: fully assembled reprogrammable robot with various sensors and two wheels. A GUI could be used to program it or the Basic Stamp Editor
  • IPRE Fluke: The Institute for Personal Robots in Education Fluke is an add on to give the robot more functions and software packages
  • The Myro software package interacts over Bluetooth and can be reprogrammed with Python (YES)


6. Control Implementation
  • Written in Microsoft Visual C++
  • Four basic parts to this code: connecting to the headset with the Emotiv API, connecting to the Scribbler with the Myro libraries, reading and decoding the events, and closing the connections afterward
  • Another visual of this process is on 18


6.1 Emotiv Connection
  • The robot can be controlled with thoughts detected from the headset itself or from generated signals from the EmoComposer emulator (helps to train the headset and work as a prototype)
  • The EE_EngineConnect call always comes back as true even when the headset is off, so EE_EngineRemoteConnect is used to connect through the Control Panel, which makes certain of the power status and connection status
  • The port has to be switched in the call if using the EmoComposer
  • It's like a database call; go over your database notes for a while


6.2 Scribbler Connection
  • Three steps to initialize the robot: install Python, load the Myro libraries, and connect to the robot using the initialize() command
  • C/C++ code can make direct calls to the Python interpreter (I wonder if I could just use Python instead of going this roundabout way)
  • Four lines of code are needed at first (there are other hints earlier; go over the paper again when you are closer to actually writing the software):
Py_Initialize();                                   // start the embedded Python interpreter
PySys_SetArgv(argc, argv);                         // pass the program's arguments to Python's sys.argv
main_module = PyImport_AddModule("__main__");      // get the __main__ module
main_dict = PyModule_GetDict(main_module);         // get its namespace dictionary for running code
  • More specific ways to use the Scribbler are detailed; it is all on page 20


6.3 Decoding and Handling EmoStates
  • 4 major steps in reading and decoding info from the headset:
  • Creating the EmoEngine and EmoState handles, querying for the most recent EmoState, determining if it is new, and decoding it
  • The API is used to make these new handles:

EmoEngineEventHandle eEvent = EE_EmoEngineEventCreate();
EmoStateHandle eState = EE_EmoStateCreate();
  • To get the most recent event:

EE_EngineGetNextEvent(eEvent);
  • The kinds of events that can be seen: hardware related events, new emostate events, and suite related events
  • Decoding code is on page 21
  • If the headset is disconnected, activity is suspended until a reconnect EmoState is published
  • These calls determine whether or not the headset is still connected:

ES_GetWirelessSignalStatus(eState)
ES_GetHeadsetOn(eState)
  • three possible cognitive thoughts: push, turn left, and turn right
  • get the integer value with the CognitivSuite 
  • Specific values determine the action taken, and the strength of the thought does not apply in this situation



6.4 Modifications
  • At first, the rate of input and the time to complete the actions were far too long
  • The interface was unusable because the actions piled up after the first, and it would be completing tasks from a minute ago instead of current tasks
  • The solution was to introduce a sampling variable so it would only decode every ten EmoStates, but when that caused issues it went back to decoding every EmoState
  • An additional mode was added to change the output after a different facial movement was made


7. Blink Detection
  • Eye blinks can be used as control inputs; otherwise, they must be filtered out
  • This is centered on one specific channel which makes the recognition easier
  • The data set to train this new mode took a lot of testing and recording, with the subject blinking for some recordings and not blinking for others
  • There was a pattern to the blinks, but the neural net was thrown off because they weren't normalized in time; time was treated as an attribute
  • Blinks correspond to spikes in the EEG data and can be used with the neural net to improve the API's analysis of the recordings


8. Conclusions
  • The author was able to create a system to control a robot with thoughts and use accurate blink-recognition software, plus a method of switching modes with a movement of eyebrows
  • These innovations could help the disabled to move a wheelchair


Emotiv Experimenter

Abstract
  • The Experimenter is an application that can record brain data and attempt online analysis and classification of the incoming data stream



4. Introduction
  • Emotiv EPOC headset has simple Brain Computer Interface (BCI) abilities
  • Inexpensive experiments without the need to go into a lab


4.1 EEG
  • EEG is a non-intrusive method that results in waveforms referred to as "brain waves". They can be used to diagnose diseases and to study neurological activity after specific stimuli


4.2 The Emotiv Headset
  • A relatively affordable headset that brings EEG capabilities to the masses
  • Downside: there are only 14 channels, however it is a lot for the public
  • 10-20 positioning scheme
  • Event based classification suites
  • Many online applications that build on these three suites
  • In reality, the facial expression detection system is successful, while the mind-reading capabilities are unreliable and do not function for the games associated with the headset


4.3 Previous Work 
  • Lillian Zhou ran experiments with the headset, and used Emotiv’s native C++ API, but she needed a full-featured application


5 The Experimenter Application
  • This is the design of the app that the author created


5.1 Initial Work
  • The first version of the app allowed folders of images to be loaded and treated as types of stimuli, ran an experiment where images of these types were shown, and collected the EEG data from the headset
  • This allows presentation and analysis


5.2 Online Classification Capabilities
  • This couldn't process the data in real time, so online classification of the EEG signals was instituted
  • This was more interesting for subjects, and relied on the portable aspect of the headset
  • Subject-verifiable mind reading: a setup that enables the user to verify that the app is relying on their specific EEG data
  • Some methods of this are not real because they are easily cheated or rely on muscle action instead of neural
  • True subject verifiable mind reading must present the user with a decision that the app could only solve with the EEG data


5.3 Experimental setup
  • This exemplifies a highly configurable stimulus presentation experiment with real time data collection and classification.


5.3.1 Test
  • There are lists of stimuli that are randomly shuffled and one stimulus from each list will be presented
  • Shuffling prevents bias and allows for different combination of stimuli
  • One stimulus is shown on the left, then a different one on the right, and then an arrow will point to one of the images (at this point the class of the stimuli is still being shown)
  • The class names are replaced with fixation crosses after a period of time
  • A question is presented with two answers that are possible, with an optional I Don’t Know option (this invalidates the trial)
  • This is repeated until the set of online classifiers are trained
  • Next, the set up is the same, except that the arrow is pointing both ways (the subject could focus on either)
  • Then, the trained classifiers identify what class of images the subject looked at, and it is confirmed with the subject
  • Other variables like the time and the size of the images can be set and changed


5.3.2 Justification
  • This experiment is very close to the ideal, and it shows how the subject makes a mental choice that the application doesn't know until its prediction is complete
  • The random placement also eliminates bias


5.3.3 Variations
  • The images may also be superimposed on top of each other, or only text might be shown
  • The text permits experiments regarding the subject's response to words


5.3.4 A Note About Timing
  • The display timing is not perfectly accurate, but the data has correct time stamps


5.4 Features and Design
  • This shows application capabilities and feature set


5.4.1 Goals
  • This is supposed to be useful to anyone, even if they cannot modify its code base, and it still maintains the functionality of the original TestBench application provided by the company


5.4.2 User Interface Design and Functionality
  • The GUI has three distinct regions: Experiment Settings, Classifiers, and Classes section
  • Each has a sort and load function and helps with large cases


5.4.3 Software Design
  • This justifies the software design and acts as a high level guide for those modifying the code


5.4.3.1 Platform Choice
  • The application is written in C#, and the native SDK is C++
  • Its use is restricted to .NET, which only runs on a Windows operating system; however, more recent iterations of the headset and design are compatible with Mac operating systems
5.4.3.2 Interfacing with Emotiv
  • The original raw data is hard to deal with, so incoming raw data is put in a fixed-size buffer to be polled by the application; if polling is infrequent, some data will be lost
  • Experimenter wraps the Emotiv API with a publish/subscribe design pattern (I sketched the idea after this list)
  • There are listeners that receive the data on separate threads, so that polling stays free for new data
  • I believe this is similar to a sort of dam that lets some water through, while keeping the rest in one place.
  • EEG data can go to many listeners after going through the thread, which is beneficial for a large experiment that needs many watchers
  • a test data signal can also be sent, so the headset is not constantly required
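To understand the publish/subscribe pattern (my "dam" analogy above), I wrote this rough Python sketch. It is only an analogy for the design; the real Experimenter is written in C# and I have not seen its code:

# Rough sketch of publish/subscribe: the poller publishes each chunk of data,
# and any number of listeners receive it (illustrative only).
class DataPublisher:
    def __init__(self):
        self.listeners = []

    def subscribe(self, callback):
        self.listeners.append(callback)

    def publish(self, chunk):
        for callback in self.listeners:     # every subscriber sees every chunk
            callback(chunk)

publisher = DataPublisher()
publisher.subscribe(lambda chunk: print("logger got", len(chunk), "samples"))
publisher.subscribe(lambda chunk: print("classifier got", len(chunk), "samples"))
publisher.publish([0.1, 0.2, 0.3])          # both listeners are called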


5.4.3.3 The Yield View Presentation System
  • The logic could have been put into a sequence of GUI events with timers and button pushes triggering change (however this would make the code complex and unmanageable)
  • The management of the resources is difficult because the operating system limits this process
  • The application also needs proper cleanup to limit users accidentally corrupting each other’s data
  • This is solved with two C# features
  • The yield keyword suspends execution and hands a value to the caller, effectively pausing the sequence until the caller asks for the next item (a Python analogy is sketched after this list)
  • The using statement helps with disposal
  • A view is a temporary look at images or other graphical components, and when an event causes it to terminate, its resource-intensive components are disposed of
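Since I know Python better than C#, here is the Python generator analogy I mentioned for the yield-based view sequence. It is just an analogy I wrote, not the Experimenter's actual code:

# Python analogy for the yield-based view sequence: the generator pauses after
# handing out each "view" and resumes only when the next one is requested.
def experiment_views():
    yield "show stimulus on the left"
    yield "show stimulus on the right"
    yield "show fixation cross"
    yield "ask the question"

views = experiment_views()
print(next(views))   # "show stimulus on the left"; the function is now paused
print(next(views))   # resumes exactly where it left off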


5.4.3.4 The Parameter-Description System
  • C#  attributes specify display names, descriptions, default values, and valid value ranges for the parameters
  • This system saves programming time and it makes the code neater and easier to modify
  • The System.Windows.Forms.Timer makes the Experimenter's presentation displays less precise, but it carries out its function while the GUI runs in the background
  • Also, accurate data time stamping is more important than the display time stamps


6. Experiments
  • The experiments test Emotiv in general and the uses of the Experimenter’s abilities


6.1 Eyes Open vs. Eyes Closed
  • When a person’s eyes are closed, the EEG signals at the occipital electrodes have high amplitude waves
  • This makes this experiment a good one to start with
  • The subject was given instruction for their eyes for each trial, and a tone signals the end of the trial. The computer then checks with the subject whether they followed the instruction (if they did not the trial is invalidated)
  • Then the data was stored in folders of stimulus classes, one for eyes open and one for eyes closed. This classifies the stimuli very easily


6.2 Faces vs. places
  • Distinguishes between the neural processing of faces and scenes (normally done with MRI)


6.2.1 Version 1
  • This only involved data collection, and had many faults
  • This was tedious because nothing was required of the subject, the time in between the trials was very long, and the subject's mind could easily wander in this setup
  • The color could have also affected the results


6.2.2
  • This involved superimposed gray-scale images, and during the test phase the subject would choose what image to focus on
  • They could choose between man or woman for faces, and indoor or outdoor for places
  • The classifiers messed up in this setup


6.2.3
  • This finally showed the side by side images like in the Experimenter, and it worked well


6.3 Artist vs. Function
  • This shows a noun, and the subject either draws it or lists its uses
  • The setup is to make a long list of concrete nouns from a linguistic database (source is in paper) and put it in a text file with one word on each line
  • The classes are labelled, and all stimuli are marked as unclassified for now
  • The question mode is set to ask, and the superimposed option is turned off


6.4 Faces vs. Expressions
  • This distinguishes between two different kinds of processing





Thursday, June 18, 2015

Research Task 1 (06/22/15 - 07/03/15)

EEG and BCI
  1. Non-Invasive BCI through EEG (2010), Daniel J. Szafir. [32 pages] The report serves as a brief introduction to EEG and the Emotiv EPOC headset.
  2.  Emotiv Experimenter: An experimentation and mind-reading Application for the Emotiv EPOC (2011), Michael Adelson. [section 1-7, 29 pages]

Emotiv
  1. Emotiv SDK Wiki. Visit the Wiki page, and browse through the various resources. You can register and download the example code.

Please take electronic notes while you are studying the materials, watching the videos, or browsing through the web. Each team will present their learning later in the summer meeting.