After hours of work building and debugging the neural network and accompanying matrix math scripts, this felt like a big win! After hearing about the incredible abilities of neural networks to mimic human-like intelligence, however, I wanted to see if I couldn’t at least begin to tackle a somewhat ‘softer’ problem using a neural network. So I began incorporating a neural network as a ‘brain’ to help one of my boids from earlier in this term steer to avoid obstacles (inspired in large part by Jabril’s Neural Network Running Game). This type of system would use a genetic algorithm rather than back-propagation to learn the obstacle-avoidance steering behavior. The steps required were as follows:
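To make the genetic-algorithm idea concrete, here is a minimal sketch of what evolving boid ‘brains’ might look like. Everything here is hypothetical and simplified — the brain is a single weight matrix mapping sensor readings to a steering force, and `fitness` would in practice come from simulating each boid (e.g. how long it survives without hitting an obstacle):

```python
import random

# Hypothetical sizes: a few obstacle-distance sensors in, a 2D steering force out.
N_SENSORS = 5
N_OUTPUTS = 2

def random_brain():
    # A brain is just a weight matrix with small random initial weights.
    return [[random.uniform(-1, 1) for _ in range(N_SENSORS)]
            for _ in range(N_OUTPUTS)]

def steer(brain, sensors):
    # Plain matrix-vector product: each output is a weighted sum of sensor values.
    return [sum(w * s for w, s in zip(row, sensors)) for row in brain]

def mutate(brain, rate=0.1, scale=0.5):
    # 'Dumb' mutation: nudge a few randomly chosen weights, leave the rest alone.
    return [[w + random.gauss(0, scale) if random.random() < rate else w
             for w in row] for row in brain]

def evolve(fitness, pop_size=20, generations=50):
    # Standard GA loop: score, keep the fittest, refill with mutated copies.
    population = [random_brain() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[:pop_size // 4]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)
```

In a real boid simulation, `fitness` would run each brain through the obstacle course; the sketch above just assumes some scoring function is available.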
I didn’t complete this evolving neural network, but a few interesting questions / ideas arose for me in the process:
Will a small random ‘mutation’ to the neural network’s weights and biases allow a trained network to emerge (after many generations)? Or is a cost function / gradient descent necessary? In other words, does the inter-relatedness of a network’s weights and biases preclude using the type of ‘dumb’ mutations used in other genetic algorithms?
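One way to start probing this question is to race the two approaches on the same toy problem. The sketch below (a deliberately tiny one-weight ‘network’, nothing like the full boid brain) compares accept-if-better random mutation against an analytic gradient step on the same squared-error loss:

```python
import random

def loss(w, data):
    # Mean squared error of a one-weight 'network': prediction = w * x.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def mutate_climb(w, data, steps=200, scale=0.1):
    # 'Dumb' mutation: propose a random nudge, keep it only if loss drops.
    for _ in range(steps):
        cand = w + random.gauss(0, scale)
        if loss(cand, data) < loss(w, data):
            w = cand
    return w

def gradient_descent(w, data, steps=200, lr=0.05):
    # Analytic gradient of the same loss: d/dw (w*x - y)^2 = 2x(w*x - y).
    for _ in range(steps):
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        w -= lr * grad
    return w
```

On a convex loss like this both converge, so the toy can’t settle the question — but it gives a harness for asking whether mutation keeps up as the network (and the interdependence between its weights) grows.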
Could each boid ‘train’ itself as it moved by using the result of an obstacle-avoidance behavior (as described by Craig Reynolds) to establish steering-force targets away from the obstacle? Could this type of training be expanded to mimic other already-established behaviors as neural networks?
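That self-training idea amounts to supervised learning with the hand-coded behavior as the teacher. Below is a hedged sketch of the pattern, with made-up details: a stand-in Reynolds-style rule (steer directly away from the obstacle’s offset) generates target forces, and a small linear ‘brain’ is fit to those targets by gradient descent:

```python
import random

def avoid_target(dx, dy):
    # Stand-in hand-coded rule: desired steering is the unit vector away
    # from the obstacle offset (dx, dy). A real Reynolds behavior would
    # also weight by distance, forward projection, etc.
    mag = (dx ** 2 + dy ** 2) ** 0.5 or 1.0
    return (-dx / mag, -dy / mag)

def train(steps=2000, lr=0.05):
    # Fit a 2x2 weight matrix so the network's output mimics the rule.
    w = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(2)]
    for _ in range(steps):
        dx, dy = random.uniform(-1, 1), random.uniform(-1, 1)
        tx, ty = avoid_target(dx, dy)
        ox = w[0][0] * dx + w[0][1] * dy   # network's steering output
        oy = w[1][0] * dx + w[1][1] * dy
        # Gradient step on squared error for a linear layer.
        for j, inp in enumerate((dx, dy)):
            w[0][j] -= lr * 2 * (ox - tx) * inp
            w[1][j] -= lr * 2 * (oy - ty) * inp
    return w
```

After training, the network should push away from obstacles much like the teacher rule does (here that just means strongly negative diagonal weights). The same pattern could, in principle, be repeated for any behavior that already produces a steering force — the open question is whether the network learns anything the hand-coded rule doesn’t already encode.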