Neural Networks

Published April 09, 2018

Because neural networks are such a massive topic, crossing so many disciplines (computer science, mathematics, statistics, and data science), I wanted to focus this week on building a foundational understanding of their structure and operation. To that end, I recreated the “Toy Neural Network” described in this series of videos on the Coding Train YouTube channel. Rather than using a higher-level framework like ml5.js, TensorFlow.js, or ml4a, these videos build an entire working neural network from scratch in JavaScript. Following along with the videos, I ended up with a neural network able to solve XOR (one of the simplest non-linearly-separable problems, as I understand it):
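To give a feel for the structure involved, here is a minimal sketch of just the feedforward pass of a 2–2–1 network that solves XOR. The weights here are hand-picked for illustration (with large weights, sigmoid approximates a step function), not the weights the toy network actually learns through training:

```javascript
// Sigmoid activation squashes any input into the range (0, 1).
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// 2 inputs → 2 hidden units → 1 output, with hand-picked weights:
// h1 ≈ OR(x1, x2), h2 ≈ AND(x1, x2), output ≈ h1 AND NOT h2 = XOR.
function predictXOR(x1, x2) {
  const h1 = sigmoid(20 * x1 + 20 * x2 - 10); // ≈ OR
  const h2 = sigmoid(20 * x1 + 20 * x2 - 30); // ≈ AND
  return sigmoid(20 * h1 - 20 * h2 - 10);     // ≈ OR AND NOT AND
}

for (const [a, b] of [[0, 0], [0, 1], [1, 0], [1, 1]]) {
  console.log(`${a} XOR ${b} ≈ ${predictXOR(a, b).toFixed(3)}`);
}
// → 0.000, 1.000, 1.000, 0.000
```

What training actually does is discover weights like these (not necessarily these exact ones) by nudging random initial weights toward lower error.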

It works!

After hours of work building and debugging the neural network and the accompanying matrix-math scripts, this felt like a big win! Having heard about the incredible ability of neural networks to mimic human-like intelligence, however, I wanted to see if I couldn’t at least begin to tackle a somewhat ‘softer’ problem with one. So I began incorporating a neural network as a ‘brain’ to help one of my boids from earlier in the term steer to avoid obstacles (inspired in large part by Jabril’s Neural Network Running Game). This type of system would use a genetic algorithm rather than back-propagation to learn the obstacle-avoidance steering behavior. The steps required were as follows:

Alone and without companionship, the boid searches desperately for meaning...

Code here!
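The genetic-algorithm approach described above — mutating network weights across generations instead of back-propagating error — can be sketched roughly like this. Everything here is illustrative, not the actual project code: the brain is a flat array of weights, and `runSimulation` stands in for actually letting a boid fly and scoring how well it avoids obstacles:

```javascript
const POPULATION = 50;
const WEIGHTS = 16;        // e.g. a tiny sensors→steering network, flattened
const MUTATION_RATE = 0.1; // chance that any given weight gets nudged

function randomBrain() {
  return Array.from({ length: WEIGHTS }, () => Math.random() * 2 - 1);
}

// A small random 'mutation': nudge a few weights, leave the rest alone.
function mutate(brain) {
  return brain.map(w =>
    Math.random() < MUTATION_RATE ? w + (Math.random() * 0.2 - 0.1) : w
  );
}

// Stand-in for "let the boid fly and measure how long it survives".
// This placeholder just rewards weights close to zero.
function runSimulation(brain) {
  return brain.reduce((sum, w) => sum + 1 - Math.abs(w), 0);
}

// Score every brain, keep the best half, refill with mutated copies.
function nextGeneration(brains) {
  const scored = brains
    .map(brain => ({ brain, fitness: runSimulation(brain) }))
    .sort((a, b) => b.fitness - a.fitness);
  const parents = scored.slice(0, POPULATION / 2).map(s => s.brain);
  return parents.concat(parents.map(mutate));
}

let brains = Array.from({ length: POPULATION }, randomBrain);
for (let gen = 0; gen < 100; gen++) brains = nextGeneration(brains);
```

Because the best brains survive unmutated each generation, fitness can only ratchet upward — whether that ratchet is enough to train a real steering network is exactly the open question below.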

I didn’t complete this evolving neural network, but a few interesting questions and ideas arose for me in the process:

  • Will a small random ‘mutation’ to the neural network’s weights and biases allow a trained network to emerge (after many generations)? Or is a cost function / gradient descent necessary? In other words, does the inter-relatedness of a network’s weights and biases preclude using the type of ‘dumb’ mutations used in other genetic algorithms?

  • Could each boid ‘train’ itself as it moves by using the result of an obstacle-avoidance behavior (as described by Craig Reynolds) to establish steering-force targets away from the obstacle? Could this type of training be expanded to mimic other already-established behaviors as neural networks?
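That second idea could look something like this: each frame, compute the classic Reynolds-style avoidance force (desired velocity minus current velocity) and use it as a supervised training target for the network. This is only a sketch of the target computation — the object shapes and the commented-out `nn.train` call are hypothetical, not the toy network’s actual API:

```javascript
// Reynolds-style steering: desired velocity points away from the
// obstacle at max speed; steering force = desired − current velocity.
function avoidanceTarget(boid, obstacle) {
  const dx = boid.x - obstacle.x;
  const dy = boid.y - obstacle.y;
  const d = Math.hypot(dx, dy) || 1; // avoid division by zero
  const maxSpeed = 2;
  const desired = { x: (dx / d) * maxSpeed, y: (dy / d) * maxSpeed };
  return { x: desired.x - boid.vx, y: desired.y - boid.vy };
}

// Each frame could then yield one training example:
//   inputs  = what the boid senses (offset and distance to the obstacle)
//   targets = the force the hand-written behavior would have applied
const boid = { x: 0, y: 0, vx: 1, vy: 0 };
const obstacle = { x: 10, y: 0 };
const steer = avoidanceTarget(boid, obstacle);
// nn.train([dx, dy, d], [steer.x, steer.y]); // hypothetical call
```

In effect, the hand-written behavior acts as the teacher, so the same recipe could in principle distill any already-established steering behavior into a network.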