Monday, October 26, 2015

Progress Report 1 (10-25-15)

Progress:

  • Downloaded the example code from the Emotiv database
  • Found a more extensive description of the differences between the two headset models on the Emotiv website, which gave me more detail about what comes with the EEG version of the headset.
  • The EEG version gives access to the raw data and additional software, while the non-EEG version only includes their detection-suite algorithms. Here are the details:

    Basic SDK
    The Emotiv SDK provides an effective development environment that integrates well with new and existing frameworks and is now available to independent developers. It includes our proprietary software toolkit that exposes our APIs and detection libraries. 
    Facial Expressions
    This suite uses the signals measured by the Emotiv Brainwear to interpret user facial expressions in real-time.  Artificial intelligence can now respond to users naturally, in ways only humans have been able to until now. When a user smiles, their avatar can mimic the expression even before they are aware of their own feelings.
    Performance Metrics & Emotional States
    This suite monitors user emotional states in real-time. It provides an extra dimension in human computer interaction by allowing the application/game to respond to a user's emotions. Characters can transform in response to the user's feeling. Music, scene lighting and effects can be tailored to heighten the experience for the user in real-time. These algorithms can be used to monitor user state of mind and allow developers to adjust difficulty to suit each situation.
    Mental Commands
    This detection suite reads and interprets a user's conscious thoughts and intent. Users can manipulate virtual objects using only the power of their thought! For the first time, the fantasy of magic and supernatural power can be experienced.

    EEG Access
    When you purchase EEG data access, your headset will be equipped with EEG Firmware that allows the real-time display of raw EEG data stream, contact quality, FFT, motion sensors, wireless packet acquisition/loss display, and marker events in our exclusive TestBench software:
    TestBench™ software provides:
    Real-time display of the Emotiv headset data stream, including EEG, contact quality, FFT, gyro, wireless packet acquisition/loss display, marker events, headset battery level.
    Record and replay files in binary EEGLAB format. Command line file converter included to produce .csv format.
    Define and insert timed markers into the data stream, including on-screen buttons and defined serial port events. Markers are stored in EEG data file
    Marker definitions can be saved and reloaded. Markers are displayed in real time and playback modes. 
    Export screenshot for documentation

    TestBench™ features include:
    EEG display:
    5 second rolling time window (chart recorder mode)
    ALL or selected channels can be displayed
    Automatic or manual scaling (individual channel display mode)
    Adjustable channel offset (multi-channel display mode)
    Synchronized marker window

    FFT display:
    Selected channel only
    Adjustable sampling window size (in samples)
    Adjustable update rate (in samples)
    dB mode – power or amplitude calculations
    dB scale
    FFT window methods: Hanning, Hamming, Hann, Blackman, Rectangle
    Predefined and custom sub-band histogram display – Delta, Theta, Alpha, Beta, custom bands
    Gyro display:
    5 second rolling time window (chart recorder mode)
    X and Y deflection
    Data Packet display:
    5 second rolling graph of Packet Counter output
    Packet loss – integrated count of missing data packets
    Verify data integrity for wireless transmission link
    Data Recording and Playback:
    Fully adjustable slider, play/pause/exit controls.
    Subject and record ID, date, start time recorded in file naming convention.



  • The FFT display can be reproduced with OpenVIBE, so technically, if the platform is compatible, we could purchase the cheaper headset (a rough band-power sketch follows after this list).
  • Created an account on the OpenVIBE forum to ask various questions about the platform's compatibility
  • Received a response from the OpenVIBE team (this is described more in the problems section)
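
To sanity-check the point about the FFT display, here is a minimal Python sketch of the kind of band-power computation behind TestBench's sub-band histogram. It assumes a .csv file produced by the TestBench command line converter; the file name, the "AF3" column name, and the 128 Hz sampling rate are my assumptions for illustration, not confirmed details of the export format.

    # Minimal sketch: Delta/Theta/Alpha/Beta band power for one EEG channel,
    # similar to TestBench's sub-band histogram. File name, channel column,
    # and 128 Hz sampling rate are assumptions, not confirmed specifics.
    import numpy as np
    import pandas as pd

    SAMPLE_RATE = 128  # Hz (assumed)
    BANDS = {"Delta": (1, 4), "Theta": (4, 8), "Alpha": (8, 12), "Beta": (12, 30)}

    def band_powers(samples, fs=SAMPLE_RATE):
        """Average spectral power per band using a Hanning-windowed FFT."""
        samples = np.asarray(samples, dtype=float)
        windowed = samples * np.hanning(len(samples))
        spectrum = np.abs(np.fft.rfft(windowed)) ** 2
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
        return {name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
                for name, (lo, hi) in BANDS.items()}

    if __name__ == "__main__":
        data = pd.read_csv("testbench_export.csv")       # assumed export file
        window = data["AF3"].values[-5 * SAMPLE_RATE:]   # last 5 s, like the rolling display
        print(band_powers(window))

If this works on a recorded file, it suggests the sub-band display itself is easy to reproduce outside TestBench; the open question remains whether OpenVIBE can acquire the raw stream from the Insight at all.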

Problems:
  • The example code requires a connected headset to run and produces errors if one is not set up (a possible workaround is sketched after this list)
  • The Emotiv forum has not responded to my posts, and the answers to existing questions there are vague and confusing
  • While it is great that OpenVIBE responded to me, they did not give any good news. They said that they do not have the Insight in their offices, but they suspect it is not compatible, for two reasons: the channel characteristics are different, and the API is different from the EPOC's.
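
One workaround I am considering for the first problem is to guard the connection step and fall back to replaying a recorded file, so the rest of the example code can still be exercised. This is only a sketch: connect_headset() is a hypothetical stand-in for whatever connect call the Emotiv example code actually makes, and the CSV file name and format are assumptions.

    # Sketch: run the processing loop without hardware by falling back to a
    # recording. connect_headset() is a hypothetical placeholder, NOT a real
    # Emotiv SDK call; the CSV file name and format are assumptions.
    import csv
    import os

    def connect_headset():
        """Stand-in for the real SDK connection call; always fails here."""
        raise ConnectionError("no Emotiv headset attached")

    def stream_from_csv(path):
        """Replay rows of a recorded TestBench-style CSV export."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                yield row

    def get_sample_source():
        try:
            return connect_headset()
        except Exception as err:
            print("No headset (%s); falling back to a recording." % err)
            recording = "recorded_session.csv"   # assumed file name
            if not os.path.exists(recording):
                raise SystemExit("No headset and no recording to replay.")
            return stream_from_csv(recording)

    if __name__ == "__main__":
        for sample in get_sample_source():
            print(sample)   # stand-in for the example code's real processing
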
Plan:

I might have no choice but to use the EPOC, for several reasons. First, a major component of the project is that I am supposed to collaborate with the OpenVIBE group. They would be extremely helpful, but what am I supposed to do if the headset doesn't even work with the platform? The order will also take almost three weeks to process, pushing my development timeline back further and further, and I do not know whether the headset could even be shipped after that time. I need to go over this progress and the details available to me on Tuesday. After I talk to Mr. Lin about all of this, I will make a choice and then update the blog on Sunday.






Tuesday, September 29, 2015

RE: Gantt Project

Feedback:
  1. There are quite a few overlapping tasks in your chart. Since you are working alone at the moment, make sure you can handle them.
  2. Most of your tasks are a week long. Can you reduce the granularity by subdividing the tasks into sub-tasks? 
  3. Should "Study Example Code" and "Review Coding Language" happen at the same time?
  4. Based on the chart, it's not clear what I can expect to see at the end of this period. Maybe you can rearrange the activities under the problem-solving steps.

Thursday, September 24, 2015

Gantt Project

https://drive.google.com/folderview?id=0B6jiE_ff0anPM1N3dDVxLTZqVEE&usp=sharing

As a side note, this is still a work in progress and will be for a day or two; however, I wanted to post what I had. I am sorry if it still seems incomplete.

http://www.ntu.edu.sg/home/EOSourina/Papers/EmoRecVis2010.pdf

http://www.ntu.edu.sg/home/EOSourina/Papers/RealtimeEEGEmoRecog.pdf


Tuesday, September 22, 2015

Presentation from 9/17/15

This is my slideshow from the presentation:

https://docs.google.com/presentation/d/1NtyjgU_Kjo6AUZ8EfKDfIbpKdvQGlzYpJYjaPbbDuSQ/edit?usp=sharing


Sunday, September 20, 2015

Resource from Last Year

Team Progress Report Blog: http://advstem2.blogspot.com/
Project Resource including Gantt Chart: http://stem14-15.blogspot.com/2015/02/project-resource-biofeedback-games.html

RE: Initial Planning & Coordination

Project Description and Merits:

  • I would like to create an app that uses the EEG data from the Emotiv Insight headset to read the user's mood. The app will then show a color that corresponds to the user's mood in real time, changing as their feelings change (a rough sketch of this mapping follows after this list).
  • This will be useful in therapy situations, helping the user and their therapist understand how the user is feeling. It will also be useful for those who cannot describe how they are feeling because of a disability. People could practice controlling their emotions more easily if they had visual cues of their feelings.
  • It sounds like an interesting application. The only question is that some other people have done similar research. You can start by understanding and replicating their design, and then try to improve it.
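
As a first pass at the color idea above, here is a small Python sketch of turning a single mood score into a display color. The valence score in [-1, 1] and the red-to-green gradient are placeholder choices of mine; the real input would come from whatever emotion metric the headset or classifier ends up providing.

    # Sketch: map a mood score to an RGB color for the app's display.
    # The [-1, 1] score and the red-to-green blend are placeholder choices.

    def mood_to_color(valence):
        """Blend from red (valence -1, negative) to green (valence +1, positive)."""
        valence = max(-1.0, min(1.0, valence))
        t = (valence + 1.0) / 2.0          # rescale to [0, 1]
        return (int(255 * (1.0 - t)), int(255 * t), 0)

    if __name__ == "__main__":
        for v in (-1.0, -0.5, 0.0, 0.5, 1.0):
            print(v, mood_to_color(v))
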
Group/Team Communication:
  • Teams 4, 5, and 6 make up this group; however, at present there are only teams 4 and 6.
  • My team consists of myself.
  • Collaboration can only work if everyone involved understands each other and allows all ideas and opinions to be shared. To summarize, good communication is key, and an open mind can only help.
  • Since team 4 is focusing on the classification mechanism, you should discuss and collaborate with them earlier on about your topic. They should help you develop the algorithms.
Prior Work/Resource Inventory:
I tried to update it, but I just posted it onto my blog by mistake. I will fix this at a later time.
  • There is a menu item called "Pages" where you can edit your Resource page.

Technology Analysis:
  • A strong understanding of neuroscience and the lobes of the brain.
  • Understanding of the headset's API and the coding involved with it (a rough sketch of one possible processing pipeline follows after this list).
  • Can be even more specific. For example: types of EEG signals, brainwave channels, headset/probes, brainwave analysis process, classification algorithms, IDE & language (you need to pick a development platform), subject test procedure, etc.
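
To make the "classification algorithms" item from the feedback above a bit more concrete, here is a rough sketch of the kind of pipeline it points to: per-channel band-power features fed into an off-the-shelf classifier. The synthetic data, the 0/1 "calm"/"stressed" labels, and the scikit-learn choice are all illustrative assumptions, not a chosen design.

    # Rough sketch: band-power features -> standard classifier. The data here
    # is synthetic; the features, labels, and scikit-learn model are only
    # illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Pretend features: alpha and beta band power for 5 channels per window.
    X = rng.normal(size=(200, 10))
    y = rng.integers(0, 2, size=200)      # made-up labels: 0 = calm, 1 = stressed

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy (random data, so about 0.5):", clf.score(X_test, y_test))
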
Competence:
  • At present, I know Python and some C, but I will have to learn a mobile app language like Swift. I also need to brush up on the EEG information that I read over the summer; I have not looked at the papers in a couple of weeks and need to review them again.
Safety: The headset, when it arrives, should be stored correctly, but other than that everything should be fine.

Equipment, Materials, and Budget: Once again, other than the headset, there shouldn't be any extra equipment needed. The online courses that I will be taking are free, so that isn't an issue.
  •  Depending on the platform you choose, you might need an iPhone/iPad, an Android phone/tablet, or a Windows/Linux/Mac laptop.

Schedule: In the next week, I want to strengthen my neuroscience background and solidify my ideas. I may also have to change my current idea, since I do not yet know how good it is. Only after that do I want to dive into the programming.
  •  You can also use this topic as your learning curve, and then target a newer, more challenging topic.

Initial note: I am not certain how successful and/or feasible this project will be with me working on my own. There are also a lot of skills that I will need to learn, as described earlier in this post. I very much welcome suggestions and/or feedback.
  • You can always discuss the issues with me. As for the programming part, there will be several teams involved in programming, so you can always support each other. Furthermore, there is a huge online community which provides an enormous amount of resources!