TomBolton.io

Tom Bolton’s AI and Machine Learning Lab Notebook.


Machine Learning

It’s been an eventful few days for my model. This entry is going to be a bit of a saga, but here goes. I had implemented the weighting function as described in my last post. I was getting great results…

Machine Learning

Okay. Last post I wrote that it's tricky to do anything beyond a simplistic, naive implementation of taking all the attempts into account. Actually, though, it's not tricky at all, and when I originally thought about it weeks ago, I was so…

Machine Learning

Built a quick (slow) framework to track the board state and the proposed legal and illegal moves for every game, and to save out the game history for any game where the illegal-move percentage spiked above 98%. The vast majority…
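
The excerpt doesn't include the tracking code itself; a minimal sketch of that kind of bookkeeping, using hypothetical names (GameLog, maybe_save) and the 98% threshold mentioned above, might look something like this in Python:

```python
from dataclasses import dataclass, field
import json

@dataclass
class GameLog:
    """Per-game record of board states and proposed moves (names are hypothetical)."""
    states: list = field(default_factory=list)
    legal_moves: int = 0
    illegal_moves: int = 0

    def record(self, board_state, was_legal: bool):
        # Store the board as plain lists so the history stays JSON-serializable.
        self.states.append(board_state)
        if was_legal:
            self.legal_moves += 1
        else:
            self.illegal_moves += 1

    @property
    def illegal_pct(self) -> float:
        total = self.legal_moves + self.illegal_moves
        return 100.0 * self.illegal_moves / total if total else 0.0

def maybe_save(log: GameLog, game_id: int, threshold: float = 98.0) -> None:
    """Dump the full game history only when the illegal-move rate spikes above the threshold."""
    if log.illegal_pct > threshold:
        with open(f"game_{game_id}.json", "w") as f:
            json.dump({"states": log.states,
                       "legal": log.legal_moves,
                       "illegal": log.illegal_moves}, f)
```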

Machine Learning

So, with my new softmax backpropagation sorted, I was excited to see how my model would perform, especially since softmax with MSE was my first hypothesis. Well, MSE was a disaster, although I didn’t push it too far. Softmax with…
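
The entry doesn't show the backprop code, but for reference, a minimal NumPy sketch of a softmax layer and its backward pass (function names are my own, not the post's) applies the Jacobian ds_i/dz_j = s_i(delta_ij - s_j) as a vector product rather than building the full matrix:

```python
import numpy as np

def softmax(z):
    # Shift by the max for numerical stability before exponentiating.
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / np.sum(e, axis=-1, keepdims=True)

def softmax_backward(s, grad_s):
    """Given s = softmax(z) and dL/ds, return dL/dz.

    Applies the Jacobian ds_i/dz_j = s_i * (delta_ij - s_j) as a
    Jacobian-vector product instead of forming the full matrix.
    """
    dot = np.sum(grad_s * s, axis=-1, keepdims=True)
    return s * (grad_s - dot)
```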

Machine Learning

A couple of days ago, I got all the machinery of my network in place and did my first test. I decided that for starters, using a network I wasn’t even sure worked for a problem I’ve never tackled before…

Machine Learning

Having established in my previous post that softmax looks like the way to go for my final activation layer, it’s time to think about the cost function. And this one is trickier. Hypothesis: Use Mean Squared Error Cost Function. The…
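
As a reference point for that hypothesis (a sketch, not the post's actual code), an MSE cost on a softmax output and its gradient chained back to the logits can be written as:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / np.sum(e, axis=-1, keepdims=True)

def mse_cost(s, y):
    """Mean squared error between the softmax output s and a one-hot target y."""
    return 0.5 * np.sum((s - y) ** 2, axis=-1).mean()

def mse_grad_logits(s, y):
    """Per-example dL/dz for MSE applied to a softmax output.

    dL/ds = s - y, chained through the softmax Jacobian
    ds_i/dz_j = s_i * (delta_ij - s_j).
    """
    g = s - y
    return s * (g - np.sum(g * s, axis=-1, keepdims=True))
```

For comparison, pairing softmax with a cross-entropy cost collapses the combined gradient to simply s - y, which is one reason that pairing is the more common choice.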

Machine Learning

So with these classifiers and their unambiguous accuracy numbers when compared to ground truth, it’s dead simple to evaluate success. But what about this checkers game? Unlike the Pong-from-pixels example, my game won’t be playing against a functioning, trained…