Redefining LookOmotive

When we started developing LookOmotive, we all agreed that it should be as generic as possible, so that it could be applied to many applications besides the wheelchair. We began by focusing on the features the LookOmotive SDK should have and the architecture that would best serve that goal … being generic.

Recently we decided to widen our scope and focus on the LookOmotive SDK more than on the application (the wheelchair): we now aim to build a powerful Pattern Recognition and Digital Signal Processing SDK that can be used in many applications, BCI or otherwise.

Thus, here’s the new definition: LookOmotive is a generic SDK for developing Digital Signal Processing and Pattern Recognition based applications. As a demonstration, we’ll be working on controlling a wheelchair according to certain mental (cognitive) tasks.

How does the whole thing work?

You might have asked an important question after reading the previous post: “How does the whole thing work?” You might even want to tell us: “Guys, you talked about EEG signals and how they behave during a specific action, but I’m still confused about how everything works. How can you recognize the signal that corresponds to a specific action for a specific person? Does it act like a thought-reading device, as we see in science fiction movies, or something? :D” We wrote this post to answer all these questions :)

Most of us have heard about recognition systems before: speech, voice, fingerprint, or even face recognition. Have you ever asked how recognition systems work? Any recognition system recognizes input data by comparing it against certain pre-saved data; by applying some sort of “matching algorithm” between the input and the pre-saved data, recognition can be done and the system works properly.

Let’s take speech recognition systems, for instance. Some let the user spell out certain words several times before using the system; in others, the pre-saved data is provided by the system developers so users can start right away without supplying any input data. Either way, they are all based on “pre-saved data”. This approach is well known as “Machine Learning”, and we are working on the same idea, but with a different data type and different features to focus on.

So, simply put, we are developing a “recognition system” designed to recognize different actions through EEG signals. The input to any recognition system comes in the form of signals: data expressed as a function of an independent variable (most often time), digitized for further processing on a computer or any other processing element. As we mentioned in the previous post, EEG signals are voltage values measured over time, which can be processed and used in the system, so our main work begins after the digitization phase.
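To make “digitized” concrete, here’s a minimal toy sketch (not our actual acquisition code): a synthetic 10 Hz sine wave stands in for a real EEG channel, and a made-up 128 Hz sample rate turns it into discrete (time, voltage) pairs.

```python
import math

SAMPLE_RATE = 128  # samples per second (hypothetical; real headsets vary)

def digitize(signal, duration_s, rate=SAMPLE_RATE):
    """Sample a continuous-time signal into discrete (time, voltage) pairs."""
    n = int(duration_s * rate)
    return [(i / rate, signal(i / rate)) for i in range(n)]

def alpha_like(t):
    """A synthetic 10 Hz wave standing in for one real EEG channel."""
    return 50e-6 * math.sin(2 * math.pi * 10 * t)  # amplitude ~50 microvolts

samples = digitize(alpha_like, duration_s=1.0)
print(len(samples))   # → 128 samples for one second of signal
print(samples[0])     # → (0.0, 0.0)
```

Everything after this point in the pipeline, feature extraction and classification alike, works on lists of samples like these.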

Our project is divided into two main parts, which correspond to two core principles of pattern recognition:

  1. Feature extraction and selection
  2. Learning/Classification

1. Feature extraction and selection:

Feature extraction means extracting certain properties of the input signal to be used in learning and classification, instead of using the whole signal. But why do we need this? When the input data is too large to be processed, it is usually highly redundant (much data, but not much information), so the input data is transformed into a reduced set of features. The expected result of feature extraction is that it keeps the relevant information from the input data for further processing. (Feature extraction and selection is used in both the learning and classification phases.)
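As a toy illustration of the idea (these are not the features we will actually use), here’s a sketch that reduces a raw list of voltage samples to just three summary features: mean, variance, and zero-crossing rate.

```python
def extract_features(samples):
    """Reduce a raw signal (list of voltage values) to a few summary features."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / n
    # Zero-crossing rate: how often the signal changes sign around its mean.
    centered = [x - mean for x in samples]
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    return [mean, variance, crossings / (n - 1)]

# A made-up 8-sample recording, just to show the reduction in size.
raw = [0.0, 1.0, -1.0, 2.0, -2.0, 1.0, -1.0, 0.5]
features = extract_features(raw)
print(features)  # three numbers instead of eight samples
```

A one-second recording of hundreds of samples thus shrinks to a short feature vector, which is what the learning phase actually consumes.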

2. Learning/Classification:

Every single action to be detected by the system is called a “class”. For each class, some pre-saved data has to be provided to the system in order to perform the learning phase.

Learning: the phase in which the system takes the extracted features of the data samples for a specific action and identifies the “strong features” shared between the samples, to be used in the classification phase.

Classification: the data to be classified is analyzed, and by applying a classification algorithm we find the class to which the input data is most highly correlated.
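To make the two phases concrete, here’s a minimal sketch using a nearest-centroid classifier, one of the simplest classification algorithms. The class names and training numbers below are made up for illustration; our real classifier and features will be chosen to fit the EEG data.

```python
def learn(training_data):
    """Learning phase: average each class's feature vectors into one
    centroid, a crude stand-in for picking the "strong features"."""
    centroids = {}
    for label, vectors in training_data.items():
        dim = len(vectors[0])
        centroids[label] = [
            sum(v[i] for v in vectors) / len(vectors) for i in range(dim)
        ]
    return centroids

def classify(centroids, features):
    """Classification phase: pick the class whose centroid is closest
    (squared Euclidean distance) to the input feature vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], features))

# Hypothetical training data: a few feature vectors per mental task ("class").
training = {
    "forward": [[1.0, 0.2], [0.9, 0.3], [1.1, 0.1]],
    "stop":    [[0.1, 0.9], [0.2, 1.0], [0.0, 1.1]],
}
model = learn(training)
print(classify(model, [0.95, 0.25]))  # → forward
```

In practice the feature vectors would come from the feature extraction step above, and both the distance metric and the algorithm itself would be tuned to the EEG features.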

Now I think you have an overview of how LookOmotive works. Later posts will explain each phase in detail, and we will go deeper into the technical details of the implementation, so stay tuned ;) and catch you later :)

Inside your brain

So what’s going on in there? .. what happens when you think, move, blink, smile, etc.?

Well .. let’s see ..

Each action inside your brain is associated with electrical activity of a certain intensity, measured in microvolts .. electricity! .. yeah, that’s how it goes ..

So when you think of a certain action, perform a muscular movement, or even have a certain feeling (sad, happy, etc.), your brain fires a series of signals with an intensity that corresponds to that action.

Brain Rhythms:
EEG Signals are grouped into 5 categories based on their frequencies:

  • Delta waves (0.5–4 Hz): Mainly associated with sleeping.
  • Theta waves (4–7.5 Hz): They fire when your consciousness slips towards drowsiness, in other words when you feel sleepy or even take a short nap.
  • Alpha waves (8–13 Hz): Relaxed awareness without attention or concentration.
  • Beta waves (14–26 Hz): Active thinking and attention.
  • Gamma waves (30–45 Hz): Often used to detect the occurrence of certain brain diseases.

Source: EEG Signal Processing, Saeid Sanei and J.A. Chambers
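The list above can be captured in a few lines of code. Here’s a toy lookup that maps a dominant frequency to its rhythm name; note the small gaps between adjacent bands (e.g. 26–30 Hz), which this sketch simply leaves unclassified.

```python
# Band edges (Hz) taken from the list above.
BANDS = [
    ("Delta", 0.5, 4.0),
    ("Theta", 4.0, 7.5),
    ("Alpha", 8.0, 13.0),
    ("Beta", 14.0, 26.0),
    ("Gamma", 30.0, 45.0),
]

def rhythm_for(freq_hz):
    """Map a dominant frequency to its EEG rhythm name, or None if it
    falls outside every band (including the gaps between bands)."""
    for name, low, high in BANDS:
        if low <= freq_hz <= high:
            return name
    return None

print(rhythm_for(10.0))  # → Alpha (relaxed awareness)
print(rhythm_for(20.0))  # → Beta (active thinking)
```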

First Seminar

Three weeks ago, we had our first seminar .. It went great!

Posting this now might be a little late .. we’re actually preparing for the next seminar .. next week! But you know .. a seminar .. midterms .. Eid .. a BREAK .. and a post!

LookOmotive

Three months ago our lovely team was formed: five different yet enthusiastic students determined to deliver an outstanding graduation project ..

We had our first meeting last August, and we came to an agreement that our project should help solve a social/humanitarian problem. We focused on that scope, and here it is .. LookOmotive!

LookOmotive is an assistive technology that provides people with disabilities a means of controlling their wheelchairs according to their cognitive state of mind .. so yes, when you think “forward”, the wheelchair should obey!

This blog is intended to provide an informal live record of the project .. so keep up with it and wish us luck ;)!

Team members:

  • Mina Fayek – CS
  • Mona Mohamed – SC
  • Mostafa Saeed – SC
  • Nour Galal – IS
  • Osama Moussa – SC