User talk:Xor
- Archived talks
- Archived talk:Xor 20180221
Hey there!
Contents
| Thread title | Replies | Last modified |
|---|---|---|
| Thoughts on entropy | 0 | 17:56, 1 June 2022 |
| Manifold Learning | 2 | 17:20, 29 July 2021 |
| Good to see robowiki is back! | 0 | 14:04, 12 June 2018 |
- Targeting maximizes cross-entropy
- Wave-surfing minimizes cross-entropy
- Random movement maximizes self-entropy
- But random movement doesn't minimize cross-entropy
- Flattener minimizes "self" cross-entropy
- But flattener doesn't maximize self-entropy
What deeper insight can you get from this?
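To make those statements concrete, here is a minimal Java sketch of the two quantities involved, computed over binned GuessFactor visit counts (the way most GF guns and surfers store their statistics). The class name, method names, and the example counts are purely illustrative and not taken from any actual bot.

```java
// Illustrative sketch: self-entropy and cross-entropy over binned
// GuessFactor distributions. Nothing here comes from an existing bot.
public class EntropyStats {

    /** Shannon (self-)entropy H(p) = -sum p * log p, in nats. */
    static double selfEntropy(double[] p) {
        double h = 0;
        for (double pi : p) {
            if (pi > 0) h -= pi * Math.log(pi);
        }
        return h;
    }

    /** Cross-entropy H(p, q) = -sum p * log q: expected surprise of q under p. */
    static double crossEntropy(double[] p, double[] q) {
        double h = 0;
        for (int i = 0; i < p.length; i++) {
            if (p[i] > 0) h -= p[i] * Math.log(Math.max(q[i], 1e-12));
        }
        return h;
    }

    /** Turn raw visit counts into a probability distribution. */
    static double[] normalize(double[] counts) {
        double sum = 0;
        for (double c : counts) sum += c;
        double[] p = new double[counts.length];
        for (int i = 0; i < counts.length; i++) p[i] = counts[i] / sum;
        return p;
    }

    public static void main(String[] args) {
        // p: where the enemy actually goes (binned by GuessFactor)
        // q: where the gun predicts it will go
        double[] p = normalize(new double[]{1, 2, 8, 3, 1});
        double[] q = normalize(new double[]{1, 3, 7, 3, 1});
        System.out.println("self-entropy of movement:      " + selfEntropy(p));
        System.out.println("cross-entropy gun vs movement: " + crossEntropy(p, q));
    }
}
```

In these terms, the bullets above compare the self-entropy of the movement distribution p with the cross-entropy between p and the gun's (or one's own) distribution q.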
Guess Factor based methods generalize well, thanks to prior knowledge about robots moving in circles and the max escape angle. Refinements such as precise max escape angle help greatly. However, given enough samples, I wonder whether a deep enough model could learn the shape of the escape envelope, as well as the precise max escape angle, etc., and generalize even better.
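For reference, below is a minimal sketch of the prior knowledge a classic GuessFactor gun encodes, which a learned model would have to rediscover from data. The formulas (bullet speed = 20 - 3 * power, max bot speed = 8) are standard Robocode physics; the class and method names are just illustrative.

```java
// Illustrative sketch of the prior knowledge behind GuessFactors.
public class GuessFactorUtil {

    /** Bullet speed from fire power (Robocode physics). */
    static double bulletSpeed(double power) {
        return 20 - 3 * power;
    }

    /** Classic (non-precise) max escape angle: asin(maxBotSpeed / bulletSpeed). */
    static double maxEscapeAngle(double power) {
        return Math.asin(8.0 / bulletSpeed(power));
    }

    /**
     * GuessFactor in [-1, 1]: the observed bearing offset at wave-break time,
     * signed by the enemy's lateral direction and normalized by the MAE.
     */
    static double guessFactor(double bearingOffset, int lateralDirection, double power) {
        double gf = lateralDirection * bearingOffset / maxEscapeAngle(power);
        return Math.max(-1, Math.min(1, gf));
    }
}
```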
I could imagine developing some sort of "LearnedFactor" function that takes as input the firing angle along with the enemy's position, velocity, and maybe more complex features like precise MAE, etc. As long as the function is invertible with respect to the firing angle, you could then do KNN with those instead of GuessFactors, as sketched below.
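A rough sketch of what that contract might look like; everything here (the LearnedFactor name, the methods, the feature vector) is hypothetical and only meant to make the invertibility requirement concrete.

```java
// Hypothetical sketch of the proposed "LearnedFactor" idea. None of these
// names exist in any bot or library; they only illustrate the contract.
interface LearnedFactor {

    /** Map a candidate firing angle to a normalized factor, given the current situation features. */
    double toFactor(double firingAngle, double[] features);

    /** Inverse of toFactor for the same features: recover a firing angle from a stored factor. */
    double toAngle(double factor, double[] features);
}
```

The workflow would then be the same as with GuessFactors: when a wave breaks, store the situation features together with toFactor(observedAngle, features); at fire time, find the k nearest stored situations and convert each stored factor back to an angle with toAngle(). The invertibility requirement is exactly what makes that last step possible.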