TomBolton.io

Tom Bolton’s AI and Machine Learning Lab Notebook.


Okay. In the last post I wrote that a non-simplistic, non-naive implementation of taking all the attempts into account would be tricky. Actually, it isn't tricky at all. When I originally thought about it weeks ago, I was so buried in the cost formula itself (MSE) that I wasn't thinking outside of it, which is where the answer actually lies. Since it's the same example repeated over and over, and the per-example costs are summed, you can simply weight the cost of that one example by the number of attempts it took to reach a legal move. That multiplier carries through intact to the derivative used for the gradient. Easy peasy.
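Here is a minimal NumPy sketch of the idea, assuming a plain MSE cost with a 1/2 factor and a mean over the batch. The function names, array shapes, and exact cost form are my own assumptions for illustration, not the notebook's actual code:

```python
import numpy as np

def weighted_mse_cost(predictions, targets, attempt_counts):
    """MSE cost where each example's contribution is weighted by the
    number of attempts it took to produce a legal move.
    predictions, targets: shape (m, n_outputs); attempt_counts: shape (m,)."""
    errors = predictions - targets
    per_example_cost = 0.5 * np.sum(errors ** 2, axis=1)
    return np.sum(attempt_counts * per_example_cost) / len(targets)

def weighted_mse_gradient(predictions, targets, attempt_counts):
    """Gradient of the weighted cost with respect to the predictions.
    The attempt-count multiplier carries through to the gradient unchanged."""
    errors = predictions - targets
    return (attempt_counts[:, None] * errors) / len(targets)

# Example: the first example took 3 tries to reach a legal move,
# so its cost (and gradient) count 3x relative to the second.
preds = np.array([[0.2, 0.8], [0.6, 0.4]])
targets = np.array([[0.0, 1.0], [1.0, 0.0]])
attempts = np.array([3, 1])
cost = weighted_mse_cost(preds, targets, attempts)
grad = weighted_mse_gradient(preds, targets, attempts)
```

The point is just that weighting the cost by the attempt count is equivalent to summing the same example that many times, and the same scalar shows up in front of the gradient term for that example.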


