PsyLink

Adding some AI

Most neural interfaces I've seen so far require the human to learn how to use the machine: memorize unintuitive rules like "Contract muscle X to perform action Y", and so on. But why can't we just stick a bunch of artificial neurons on top of the human's biological neural network, and make the computer train them for us?

While we're at it, why not replace the entire signal processing code with a bunch more artificial neurons? Surely a NN can figure out how to do a bandpass filter and moving averages, and hopefully it will come up with something more advanced than that. The more I pretend that I know anything about signal processing, the worse this thing is going to get, so let's just leave it to the AI overlords.
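To make that concrete, here's roughly the kind of model I have in mind: a plain dense network that eats a window of raw samples from every electrode and spits out key probabilities, with no hand-written filters in between. This is only a sketch; the channel count, window length and layer sizes are made-up placeholders, not numbers from the actual project.

```python
# Minimal sketch: a dense network that maps a window of raw sensor
# samples straight to key probabilities, with no hand-written filters.
# CHANNELS, WINDOW and the layer sizes are illustrative guesses.
import tensorflow as tf

CHANNELS = 4   # number of analog pins sampled (assumption)
WINDOW = 50    # samples per channel fed into one prediction (assumption)
KEYS = 3       # keys the model can "press", including "no key"

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW * CHANNELS,)),   # flattened raw window
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(KEYS, activation="softmax"),    # one probability per key
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```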

The Arduino Part

The Arduino Nano 33 BLE Sense supports TensorFlow Lite, so I was eager to move the neural network prediction code onto the microcontroller itself, but that would have slowed down development, so for now I just do it all on my laptop.

The Arduino code now just passes the values of the analog pins through to the serial port.
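On the laptop side, reading that stream is equally boring. Here's a sketch of what the reader can look like, assuming the Arduino prints one comma-separated line of analog readings per sample; the port name and baud rate are assumptions that depend on your setup.

```python
# Sketch of the laptop-side reader, assuming the Arduino prints one
# comma-separated line of analog readings per sample. Port name and
# baud rate are assumptions; adjust for your machine.
import serial  # pyserial

PORT = "/dev/ttyACM0"   # hypothetical; whatever the Nano shows up as
BAUD = 115200

with serial.Serial(PORT, BAUD, timeout=1) as conn:
    while True:
        line = conn.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            sample = [int(v) for v in line.split(",")]  # one value per analog pin
        except ValueError:
            continue  # skip partial or garbled lines
        print(sample)
```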

Calibrating with a neural network

For this, I built a simple user interface: mostly an empty window with a menu to select actions, and a key grabber. (source code)
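The key grabber is the part that matters: whatever key is currently held down in that window becomes the label for the sensor samples recorded at that moment. Roughly like this (a simplified sketch, not the actual Calibrator code):

```python
# Rough sketch of a Tkinter key grabber: remember which key is currently
# held so incoming sensor samples can be labeled with it. Not the actual
# Calibrator code; release handling is simplified to a single key.
import tkinter as tk

pressed_key = None  # label attached to samples recorded right now

def on_press(event):
    global pressed_key
    pressed_key = event.keysym          # e.g. "Left", "Right", "Escape"

def on_release(event):
    global pressed_key
    if pressed_key == event.keysym:
        pressed_key = None              # back to "no key"

root = tk.Tk()
root.title("Calibrator sketch")
root.bind("<KeyPress>", on_press)
root.bind("<KeyRelease>", on_release)
root.mainloop()
```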

The idea is to correlate hand/arm movements with the keys that should be pressed when you perform them. To train the AI to understand you, perform the following calibration steps:

  1. Put on the device and jack it into your laptop
  2. Start the Calibrator
  3. Select the action "Start/Resume Recording" to start gathering training data for the neural network
  4. Now, for as long as you're comfortable (30 seconds worked for me), move your hand around a bit. Hold it in various neutral positions, as well as in positions that should produce a certain action. Press the key on your laptop whenever you intend your hand movement to produce that key press (e.g. wave to the left, and hold the left arrow key on the laptop at the same time). The better you do this, the better the neural network will understand wtf you want from it.
    • Holding two keys at the same time is theoretically supported, but I used Tkinter, which has an unreliable key grabbing mechanism. Better stick to single keys for now.
    • Tip: The electric signals change when you hold a position for a couple seconds. If you want the neural network to take this into account, hold the positions for a while during recording.
  5. Press Esc to stop recording
  6. Save the recordings, if desired
  7. Select the action "Train AI" and watch the console output. It will train for 100 epochs by default. If you're not happy with the result yet, you can repeat this step until you are. (See the training sketch after this list.)
  8. Save the AI model, if desired
  9. Select the action "Activate AI". If everything worked out, the AI overlord will now try to recognize the input patterns with which you associated certain key presses, and press the keys for you. =D
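For the curious, the "Train AI" step boils down to turning the recording into (window of samples, key held at the time) pairs and running a plain Keras fit for 100 epochs. A sketch, with hypothetical file and variable names standing in for whatever the Calibrator actually stores:

```python
# Sketch of the "Train AI" step: turn the recording into (window, key)
# pairs and fit for 100 epochs. The .npy file names and variable names
# are hypothetical stand-ins for the Calibrator's real storage format.
import numpy as np
import tensorflow as tf

KEYS = ["none", "Left", "Right"]   # "none" covers the neutral positions

windows = np.load("recording_windows.npy")   # shape (N, WINDOW * CHANNELS)
held_keys = np.load("recording_keys.npy")    # shape (N,), key name per window

labels = tf.keras.utils.to_categorical(
    [KEYS.index(k) for k in held_keys], num_classes=len(KEYS))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(windows.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(KEYS), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

model.fit(windows, labels, epochs=100, batch_size=32, validation_split=0.2)
model.save("calibration_model.keras")
```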

Results

I used this to walk left and right in 2020game.io and it worked pretty well. With zero manual signal processing and zero manual calibration! The mathemagical incantations just do it for me. This is awesome!
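In case you're wondering what "Activate AI" amounts to under the hood: classify each incoming window and synthesize the corresponding key event. The sketch below uses pynput for the key presses, which is my stand-in choice rather than necessarily what the Calibrator does, and get_window() is a placeholder for the serial-reading code from earlier.

```python
# Sketch of the "Activate AI" loop: classify each new window of raw
# samples and press/release the matching key. pynput is an assumption,
# not necessarily what the Calibrator uses.
import numpy as np
import tensorflow as tf
from pynput.keyboard import Controller, Key

KEYS = ["none", "Left", "Right"]
KEY_MAP = {"Left": Key.left, "Right": Key.right}

model = tf.keras.models.load_model("calibration_model.keras")
keyboard = Controller()
current = "none"

def get_window():
    """Placeholder: return the latest flattened window of raw samples
    (in the real thing this comes from the serial reader)."""
    return np.zeros(model.input_shape[1])

while True:
    window = np.array([get_window()])              # shape (1, WINDOW * CHANNELS)
    predicted = KEYS[int(np.argmax(model.predict(window, verbose=0)))]
    if predicted != current:
        if current in KEY_MAP:
            keyboard.release(KEY_MAP[current])     # let go of the old key
        if predicted in KEY_MAP:
            keyboard.press(KEY_MAP[predicted])     # hold the new one
        current = predicted
```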

Some quick facts:

Video demo:

Still a lot of work to do, but I'm happy with the software for now. Will tweak the hardware next.

Now I'm wondering whether I'm just picking low-hanging fruit here, or if non-invasive neural interfaces really are just that easy. How could CTRL-Labs sell their wristband to Facebook for $500,000,000-$1,000,000,000? Was it one of those scams where decision-makers were hypnotized by buzzwords and screamed "Shut up and take my money"? Or do they really have some secret sauce that sets them apart? Well, I'll keep tinkering. Just imagine what this is going to look like a few posts down the line!