User talk:Xor
- Archived talks
- Archived talk:Xor 20180221
Hey there!
Contents
| Thread title | Replies | Last modified |
|---|---|---|
| Thoughts on entropy | 0 | 17:56, 1 June 2022 |
| Manifold Learning | 2 | 17:20, 29 July 2021 |
| Good to see robowiki is back! | 0 | 14:04, 12 June 2018 |
- Targeting maximizes cross-entropy
- Wave-surfing minimizes cross-entropy
- Random movement maximizes self-entropy
- But random movement doesn't minimize cross-entropy
- Flattener minimizes "self" cross-entropy
- But flattener doesn't maximize self-entropy
What deeper insight can you get from this?
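For reference, here is one way to pin down the two quantities being contrasted above (the symbols p and q are my own labels, not from the post): let p be the mover's actual distribution over firing angles and q the gun's learned model of it.

```latex
\begin{align*}
  H(p)   &= -\sum_x p(x)\,\log p(x)
            && \text{self-entropy of the movement profile } p \\
  H(p,q) &= -\sum_x p(x)\,\log q(x)
            && \text{cross-entropy of the gun's model } q \text{ against } p \\
         &= H(p) + D_{\mathrm{KL}}(p \parallel q)
            && \text{so } H(p,q) \ge H(p) \text{ always holds}
\end{align*}
```

The last line is the standard decomposition: it separates how unpredictable the movement is (the H(p) term) from how well the gun has modeled it (the KL term), which is one way to read the distinctions drawn in the list.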
Guess Factor based methods generalize well, thanks to prior knowledge that robots move in circles and have a maximum escape angle. Refinements such as precise max escape angle help greatly. However, given enough samples, I wonder whether a deep enough model could learn the shape of the escape envelope, the precise max escape angle, etc., and generalize even better.
I could imagine developing some sort of "LearnedFactor" function that takes as input the firing angle along with the enemy's position, velocity, and maybe more complex features like precise MAE. As long as the function is invertible with respect to the firing angle, you could then do KNN with those factors instead of GuessFactors.
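As a very rough sketch of what that could look like in Java (every class and method name here is hypothetical, not part of any existing Robocode framework), the key requirement is just a pair of mappings that invert each other in the firing angle, with the classic GuessFactor as the simplest special case:

```java
// Hypothetical sketch of the "LearnedFactor" idea above; all names are
// made up for illustration.

/** Minimal stand-in for the situation recorded at fire time. */
class Situation {
    double direction;       // +1 or -1: sign of the enemy's lateral velocity
    double maxEscapeAngle;  // (precise) MAE in radians
    double distance;        // example of an extra feature for the KNN key

    /** Feature vector used as the KNN search key. */
    double[] features() {
        return new double[] { distance, direction * maxEscapeAngle };
    }
}

/** A factor mapping that must be invertible with respect to the firing angle. */
interface LearnedFactor {
    double toFactor(double firingAngle, Situation s);
    double toAngle(double factor, Situation s);  // inverse of toFactor for fixed s
}

/** Classic GuessFactor as the simplest special case: a linear, MAE-scaled map. */
class GuessFactorMapping implements LearnedFactor {
    public double toFactor(double firingAngle, Situation s) {
        return s.direction * firingAngle / s.maxEscapeAngle;
    }
    public double toAngle(double factor, Situation s) {
        return s.direction * factor * s.maxEscapeAngle;
    }
}
```

At aim time you would fetch the k nearest recorded factors (keyed on Situation.features(), as with a normal kd-tree gun) and map each back through toAngle under the current situation; the invertibility requirement is exactly what keeps that last step well defined.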
The biggest challenge will be how to deal with the situation differing between recording time and aiming time. Guess Factors handle this with the orbital-movement assumption, and PIF by assuming the replayed movement doesn't run into walls.
I'm thinking about some end-to-end deep model, where the transformations between recorded and aiming angles can be learned automatically. E.g., given a sequence of historical wave intersect locations, movement, and bullet hits, try to predict the next wave intersect location.
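A minimal sketch of what the input and output of such a model might look like (again, a made-up schema with illustrative names, not an existing API):

```java
// Purely illustrative schema for one step of such an end-to-end model.

/** One historical wave, as the model would see it. */
class WaveObservation {
    double intersectOffset;   // where the wave actually crossed the enemy (radians)
    double lateralVelocity;   // enemy movement recorded when the wave was fired
    double advancingVelocity;
    boolean bulletHit;        // whether a real bullet riding that wave hit
}

/** Sequence model: history of waves in, distribution over the next intersect out. */
interface IntersectPredictor {
    // Returns e.g. a softmax over discretized firing-angle bins; the
    // recorded-to-aiming transformation is learned instead of hand-coded.
    double[] predictNextOffsetDistribution(java.util.List<WaveObservation> history);
}
```

Whether the output is a softmax over angle bins or parameters of a continuous density is an open design choice; either way the recorded-to-aiming transformation would be learned end to end rather than hard-coded as an MAE division or a PIF replay.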