User talk:Jdev/Questions

RhinoScriptEngine - is it fair play or not?

I found out today that there is an embedded script engine that can be used to reduce codesize. Example:

package lxx.test.js;

import robocode.AdvancedRobot;
import com.sun.script.javascript.RhinoScriptEngine;

import javax.script.ScriptContext;
import javax.script.ScriptException;
import javax.script.SimpleScriptContext;

/**
 * User: jdev
 * Date: 11.11.2009
 */
public class JSTest extends AdvancedRobot {

    public void run() {
        RhinoScriptEngine rse = new RhinoScriptEngine();
        try {
            ScriptContext ctx = new SimpleScriptContext();
            rse.eval("var a = 100;", ctx);
            Double i = (Double) ctx.getAttribute("a");
            System.out.println("a = " + i);
        } catch (ScriptException e) {
            e.printStackTrace();
        }
    }
}

So I can rewrite most of my code in JS and only call it from a nanobot. But I'm not sure that it is fair. What do you think about it? --Jdev 16:37, 11 November 2009 (UTC)

I think it was generally agreed that using interpreted languages to bypass the codesize utility isn't exactly fair, so although you can put your bot in the rumble to see where it would score, it would be better if you didn't leave it there, because based on the amount of code you have, it probably isn't a nanobot anyways. Look at White Whale and Talk:White Whale for more discussion on this =) It's quite cool that there is an included script engine in Java, I didn't even know about it =) --Skilgannon 17:19, 11 November 2009 (UTC)

Why am I so honest? =) I'm glad that I could show something new in Java =) There are also interpreters for PHP and Python, as far as I know. And I'm sure that there are engines for other languages too. --Jdev 17:28, 11 November 2009 (UTC)

Aside from what Skilgannon said... I'm very doubtful it can actually be used to reduce codesize, for the following reason: I'm 95% certain that Robocode does not let robots access packages that are not part of the core Java standard. In particular, I'm pretty certain you can't access the com.sun package from inside robots, maybe not even the javax package. Nice try though, hah. --Rednaxela 17:35, 11 November 2009 (UTC)

I checked it before publishing - it works =) --Jdev 17:55, 11 November 2009 (UTC)

I'm running OpenJDK, which doesn't include Rhino. So I can't compile this bot, and any bot based on this would get 0 on my system =) --Skilgannon 19:15, 11 November 2009 (UTC)

Yes, you're right =) I tried to use

new ScriptEngineManager().getEngineByName("JavaScript")

but then "Preventing lxx.UltraMarine 1.0 from access (java.lang.RuntimePermission createClassLoader)" appears. So the question is closed =) --Jdev 19:29, 11 November 2009 (UTC)

It's not possible to get around it by going the other way either:

   package jk;

   import robocode.AdvancedRobot;

   import javax.script.*;
   import java.util.*;

   public class JSTest extends AdvancedRobot {
       public void run() {
           ScriptEngineManager manager = new ScriptEngineManager();
           List<ScriptEngineFactory> l = manager.getEngineFactories();
           ScriptEngine engine = null;
           for (ScriptEngineFactory sef : l) {
               engine = sef.getScriptEngine();
               System.out.println("Engine found: " + sef.getEngineName());
               try {
                   Double i = (Double) engine.eval("var a = 100; a;");
                   System.out.println("Result: " + i);
               } catch (ScriptException e) {
                   e.printStackTrace();
               }
           }
       }
   }


jk.JSTest: Exception: Preventing jk.JSTest from access: (java.lang.RuntimePermission createClassLoader) Preventing jk.JSTest from access: (java.lang.RuntimePermission createClassLoader)
    at Source)
    at java.lang.SecurityManager.checkCreateClassLoader(
    at java.lang.ClassLoader.<init>(
    at org.mozilla.javascript.DefiningClassLoader.<init>(
    at org.mozilla.javascript.ContextFactory.createClassLoader(
    at org.mozilla.javascript.Context.createClassLoader(
    at org.mozilla.javascript.SecurityController.createLoader(
    at org.mozilla.javascript.optimizer.Codegen.defineClass(
    at org.mozilla.javascript.optimizer.Codegen.createScriptObject(
    at org.mozilla.javascript.Context.compileImpl(
    at org.mozilla.javascript.Context.compileString(
    at org.mozilla.javascript.Context.compileString(
    at org.mozilla.javascript.Context.evaluateString(
    at com.sun.script.javascript.RhinoTopLevel.<init>(
    at com.sun.script.javascript.RhinoScriptEngine.<init>(
    at com.sun.script.javascript.RhinoScriptEngineFactory.getScriptEngine(
    at Source)

--Skilgannon 19:59, 11 November 2009 (UTC)

The improved security manager is partly my fault, as a while ago I demonstrated hackbots which statically accessed the internal classes in Robocode. I also took the route of showing I could access parts of the JDK which allowed web access (which I later tested by downloading a file from my server). This in turn caused FNL to beef up the security manager, which for the most part is for the best, as running one of the built-in script engines in your bot could allow it to run unsigned or hazardous code which could get around the security manager. --Chase 20:42, 11 November 2009 (UTC)

Which fire situation attributes (like lateral velocity, distance, near wall, etc.) do you use to segment your data?

I understand that it may be a "professional secret", but if any of the top bot authors share this info, it would be cool =) Thank you =) --Jdev 10:32, 19 February 2010 (UTC)

Easy -- lateral velocity seems to be the most important. Distance and near wall can distinguish movement types in some situations. Try looking at top bot source code =) --Nat Pavasant 11:12, 19 February 2010 (UTC)

It's important for me not to look at other bots' source code =) --Jdev 11:30, 19 February 2010 (UTC)

Then it is what I said, plus some experimentation. This information varies between different implementations. --Nat Pavasant 12:02, 19 February 2010 (UTC)

Yep, lateral velocity, distance (I'd use "bullet flight time" instead), wall distance, and velocity change (accelerating/decelerating or time since velocity change) are the most important ones. Somewhere on the wiki is a table of what attributes/how many slices some top bots use (starting with Skilgannon/DrussGT), but I'm failing to find it with a quick Google search right now... --Voidious 12:40, 19 February 2010 (UTC)
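For readers who want to see how these attributes are typically derived, here is a minimal sketch assuming a standard Robocode setup; the class and method names are illustrative, not taken from any particular bot:

```java
public class FireSituation {
    // Lateral velocity: enemy speed component perpendicular to the
    // line from our bot to the enemy.
    static double lateralVelocity(double enemyVelocity, double enemyHeading,
                                  double absoluteBearing) {
        return Math.abs(enemyVelocity * Math.sin(enemyHeading - absoluteBearing));
    }

    // Bullet flight time: ticks a bullet needs to cover the distance,
    // using Robocode's bullet speed rule (speed = 20 - 3 * power).
    static double bulletFlightTime(double distance, double bulletPower) {
        return distance / (20 - 3 * bulletPower);
    }

    // Wall distance: distance from the enemy to the nearest wall,
    // normalized so values are comparable across field sizes.
    static double wallDistance(double x, double y,
                               double fieldWidth, double fieldHeight) {
        double d = Math.min(Math.min(x, fieldWidth - x),
                            Math.min(y, fieldHeight - y));
        return d / (Math.min(fieldWidth, fieldHeight) / 2);
    }
}
```

Velocity change (time since acceleration/deceleration) is usually just a tick counter that is reset whenever |velocity| changes between scans.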

I think the tables you're referring to are here: --Darkcanuck 14:47, 20 February 2010 (UTC)
Thanks! I swear I spent like 20 minutes Googling and wiki-searching for those. =) (I thought they were on the new wiki, too...) --Voidious 15:23, 22 February 2010 (UTC)

I'm considering the same issues right now as well. Right now LBB only segments on distance - but it segments its movement as well as its gun. I'm currently investigating lateral velocity and averaged acceleration, combined with distance, and the results are decent. I'm not yet sold though, as my firepower management still needs improvement. Funny thing though, even distance by itself does OK against simpler bots. --Miked0801 17:48, 19 February 2010 (UTC)

A quick little note about distance: in some experiments, I was finding that distance was actually ranking as very unimportant, however those experiments were in a 'Targeting Challenge' environment where only the enemy moves and distances stay relatively stable. As soon as I put it on a movement and into the rumble, I found that adding some more distance-related (bullet travel time) segmentation helped significantly. --Rednaxela 18:45, 19 February 2010 (UTC)

Thanks, everyone. Now I'm developing an auto-segmented gun, which currently uses about 40 fire situation attributes, and it looks not bad. --Jdev 11:57, 22 February 2010 (UTC)

Question for native English speakers.

Which text is more readable:

1) I found out new application for google translator: i write text on english, but I do watching to how, google translate it on russian. This text is written by this way

2) I found a new use for an interpreter Google: I am writing in English, but look how he takes what I write in Russian. But this text was translated to english from russian by Google Translator

--Jdev 16:39, 15 September 2010 (UTC)

I think #1 is a little better. And if it helps, I would write it like this:

  • I found a new way to use Google translator: I write text in English and then examine how Google translates it to Russian.

--Voidious 17:45, 15 September 2010 (UTC)

Thank you:)

And it would be cool if native English speakers would show me my mistakes. Especially in tenses (past simple, present simple, etc.) - I didn't study them (I was a lazy schoolboy and student :)) in 4th grade and still haven't :) --Jdev 18:00, 15 September 2010 (UTC)

Just a note though: watching Google Translate translate between languages is not really a good way to learn; perhaps some pairs are translated correctly (from what I've seen, the French-English pair is usually accurate). And do I need to be a native English speaker to help you? --Nat Pavasant 18:18, 15 September 2010 (UTC)

I use Google Translator not for learning, but to be more understandable :)

No, but your English must be at an expert level, because switching one mistake for another isn't a good idea :) --Jdev 18:36, 15 September 2010 (UTC)

I remember that there was a discussion about migrating RoboRumble to Robocode version 1.7, but I cannot find it. When will the migration happen?

--Jdev 09:45, 19 November 2010 (UTC)

When the community decides that Robocode 1.7 is stable enough. But I think we must find someone responsible for this migration first. The RoboWiki seems quiet lately. --Nat Pavasant 12:14, 19 November 2010 (UTC)

Thanks. It seems like fate thinks that UltraMarine isn't ready for release :) --Jdev 12:35, 19 November 2010 (UTC)

Null victim name in BulletHitEvent

I noticed that in version BulletHitEvent.getBullet().getVictim() returns null. I heard about the "hide enemy name" feature, but the idea was to return "Enemy 1" etc. Is it a bug or normal behavior?

I did some more research and found out that if the "Hide enemy names" checkbox is set, then ScannedEvent.getName returns "1#", but BulletHitEvent.getName() returns the non-hidden enemy name. I think this bug will seriously impact melee bots. --Jdev 09:31, 6 June 2011 (UTC)

Those two things definitely sound like a bug. Could you please report it at the bug tracker? --Rednaxela 11:18, 6 June 2011 (UTC)

Bullet.equals semantics have changed

In my code I have the condition bulletObjectReturnedBySetFire.equals(bulletObjectInBulletHitEvent), which doesn't work in (it returns false for all recorded bullets). But in an earlier version (I don't remember the exact number) it worked fine. Is it a bug or normal behavior? --Jdev 08:49, 6 June 2011 (UTC)

BulletHitBulletEvent.getHitBullet().getName() and HitByBulletEvent.getBullet().getName() return the non-hidden enemy name too --Jdev 09:38, 6 June 2011 (UTC)

The equals method changing behavior for bullets in that way would definitely be a bug as well. Could you report that at the bug tracker as well? Thanks. --Rednaxela 11:18, 6 June 2011 (UTC)

Thank you for the answers, Rednaxela. I've submitted the bugs:

--Jdev 11:30, 6 June 2011 (UTC)


What is LRP in the problem bots graph? --Jdev 09:37, 10 August 2011 (UTC)

Those graphs are on the roborumble robot details page. See oldwiki:RoboRumble/LRP. --Chase-san 10:26, 10 August 2011 (UTC)
Thank you, Chase. I forgot about the old wiki.

Edit conflict... but... The LRP shows your problem bots by comparing the score you get against them vs. what you were expected to score. This makes it easy to see whether you tend to do better or worse than expected against good or bad bots. --Skilgannon 10:46, 10 August 2011 (UTC)
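As a rough sketch of the comparison Skilgannon describes (actual score against an opponent versus expected score), an "expected score" can come from the classic ELO expectation formula scaled to 0-100. The formula choice here is an illustrative assumption, not necessarily what the rumble server actually uses:

```java
public class ProblemBots {
    // Classic ELO expectation, scaled to a 0-100 score. This is an
    // illustrative assumption; the rumble's exact formula may differ.
    static double expectedScore(double myRating, double enemyRating) {
        return 100.0 / (1 + Math.pow(10, (enemyRating - myRating) / 400));
    }

    // Positive = scoring better than expected; negative = a problem bot.
    static double problemIndex(double actualScore, double myRating,
                               double enemyRating) {
        return actualScore - expectedScore(myRating, enemyRating);
    }
}
```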

ok, thank you too:) --Jdev 10:52, 10 August 2011 (UTC)


Thread title | Replies | Last modified
Missing bots request | 2 | 15:44, 7 January 2013
What avg bullet dmg of robot in battle against himself may mean? | 12 | 02:23, 22 June 2012
Multiply wave suffering | 6 | 09:53, 23 November 2011
Rumble Kings | 2 | 07:40, 17 October 2011
Test bed with stable results | 10 | 05:02, 28 September 2011

Missing bots request

Can anyone share with me the currently missing bots from the rumble?

Download bot agrach.Dalek 1.0 (13/969)... Failed:
Download bot agrach.MicroDalek 1.0 (14/969)... Failed:
Download bot agrach.RobotSlayer 1.0 (15/969)... Failed:
Download bot com.timothyveletta.FuzzyBot 1.1 (161/969)... Failed:
Download bot davidalves.Firebird 0.25 (201/969)... Failed:
Download bot davidalves.Phoenix 1.02 (202/969)... Failed:
Download bot davidalves.PhoenixOS 1.1 (203/969)... Failed:
Download bot froh.micro.Aversari 0.1 (304/969)... Failed:
Download bot fruits.NanoStrawbery 1.3 (308/969)... Failed: Server returned HTTP response code: 403 for URL:*/0B0bFe9FCJ2HSbjNJb2FkVnExYzg?e=download
Download bot rampancy.Durandal 2.2d (684/969)... Failed: Server returned HTTP response code: 403 for URL:
Download bot rampancy.micro.Epiphron 1.0 (685/969)... Failed: Server returned HTTP response code: 403 for URL:
Download bot serenity.serenityFire 1.29 (753/969)... Failed:
Download bot testantiswapgun.AntiSwap 1.0 (870/969)... Failed:
Download bot uccc.Dorito 1.12 (899/969)... Failed:
Download bot uccc.MilkyWay 1.01 (900/969)... Failed:
Download bot uccc.RingDing 1.12 (901/969)... Failed:
Download bot uccc.Scrapple 1.0 (902/969)... Failed:
Download bot yarghard.Y101 1.0 (952/969)... Failed: Server returned HTTP response code: 403 for URL:
Download bot zzx.Serunyr 2.0.2 (969/969)... Exists!


Jdev 12:49, 7 January 2013

Try the robot database at RoboRumble/Starting With RoboRumble.

If they are incomplete, I can upload another zip pack later.

MN 15:07, 7 January 2013

Thanks, I had skipped the latest link :)

Jdev 15:44, 7 January 2013

What avg bullet dmg of robot in battle against himself may mean?

I did some fast and rough research and found out that in battles against themselves, Tomcat gets ~1200 bullet damage, DrussGT gets ~1100 bullet damage and Diamond gets ~1000 bullet damage. Could it mean that Tomcat has a better gun relative to his movement, Diamond has better movement relative to his gun, and DrussGT is well balanced? What do you think, guys? And a related question: how do you decide what to improve next, or what the weakness of your robot is?

Jdev 15:46, 19 June 2012

My first thought would be to look at distancing. If Diamond keeps a larger average distance than DrussGT, and DrussGT more than Tomcat, the hit percentage would naturally be lower. Also, Tomcat might just have the most aggressive bullet power strategy - higher bullet power is always a trade-off of bullet damage vs. survival.

Figuring out what to improve is a much tougher question. :-) Recently I've been refactoring Diamond and adding lots of tests, which has given me lots of little bugs to fix and ideas of what might work better. I think just going through your code and writing tests is a good way to get ideas, because you start really getting a feel for how things work. But other than that, I don't have any great advice.

The big ideas kind of come out of the blue, immediately kick ass in tests, and then you can just polish them and release. But usually it's not like that. =) One general thing is I try to think of how to make things more accurate, like data collection or how I'm normalizing my values. I'm really trying to avoid the "tiny tweak and test for hours" cycle recently and focus on significant behavior changes, which I find a lot more fun and which has a better chance of being a big improvement.

Voidious 16:20, 19 June 2012

Distance and bullet power, sure, thanks :) Looks like I'm still stuck with my gun...

About tests - I did start work on ConceptA because Tomcat now has high coupling, so testing it is a difficult task :) But anyway, thanks for the advice :)

I agree that big things are better, but I think the APS gap between Tomcat and Diamond/DrussGT lies just in "tiny tweaks", which require hours of tests. But testing is a problem for me with my low-end first-generation mobile i3 :)

Jdev 07:14, 20 June 2012

I'm on a 3 year old MacBook Pro with Core 2 Duo 2.8 GHz, so I feel your testing pain. =) But I just ordered a Core i7 box, so not for long. =) Though I've come to accept that no amount of CPU power is enough to satisfy a Robocoder's appetite.

I feel like when I focus on tweaks, I just lose my mind and never have any real insights. I just feel like every minute that my CPU isn't running some Robocode benchmark is wasted, so I just keep thinking of the next thing to fiddle with and run the next test. There's a place for that if you have a good idea something could be improved, but it also clouds my mind and keeps my focus away from the bigger picture. I'm sure you do have lots of little things to tweak in Tomcat, but I bet you also have some significant bugs and feature gaps. Do you do Gun Heat Waves and precise intersection in the surfing? Do your implementations actually help? I think I gained about 0.3 APS from each of those.

Maybe you can attach Diamond's gun for a hybrid bot test and see how much of the difference is in gun vs movement? It should be pretty easy to plug in. Might just need to be careful to make Bullet Shadows still work, but it's a pretty simple interface.

Voidious 16:20, 20 June 2012

I do not know much about hardware, but I think my i3 at 1.3 GHz is much worse :)

Tomcat uses gun heat waves. It's not completely clear to me how precise intersection can help in surfing. Actually, Tomcat starts to surf the next wave when the closest wave is within two ticks of Tomcat's center. But thanks, I will add it to my todo list :) I'm not sure, but it seems Tomcat was released with gun heat waves, and some tuning of them gave him some points.

Thanks for Diamond's gun, but I'd rather spend the time on Tomcat development - I have a super secret big idea which must hit :) I hope it hits :)

Jdev 20:50, 20 June 2012

I've always found it optimal to surf until the wave is one tick from center. Effectively, this is when it passes your center, since it will move and then check for collisions before you can move again. While there still could be firing angles that would hit you after this, most of them would have already hit you, so it makes sense.

I guess you're just using the firing angle on the last tick you surf that wave? Of all the firing angles that would hit your bot for that movement option, using precise intersection will let you use the angle at the center of that range as the input to your danger calculation. (Or the whole range, depending on how your danger formula stuff works.) I was surprised by how much I gained by it. Pretty sure I argued against it having value for a while before I tried it. =))

Voidious 21:21, 20 June 2012
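A much-simplified sketch of the precise intersection idea: the range of absolute firing angles that would hit a stationary bot, treated as an axis-aligned square. Real implementations sweep the wave across every tick it overlaps the bot; this single-snapshot version just shows the geometry. All names are illustrative:

```java
public class PreciseIntersection {
    // Returns {min, max} absolute firing angles (Robocode convention:
    // Math.atan2(dx, dy), clockwise from north) spanning the bot's corners.
    // Note: a naive min/max breaks if the range straddles +/-PI; a real
    // bot would normalize angles relative to the bot-center bearing first.
    static double[] hitAngleRange(double sourceX, double sourceY,
                                  double botX, double botY, double halfWidth) {
        double min = Double.POSITIVE_INFINITY;
        double max = Double.NEGATIVE_INFINITY;
        for (int sx = -1; sx <= 1; sx += 2) {
            for (int sy = -1; sy <= 1; sy += 2) {
                double angle = Math.atan2(botX + sx * halfWidth - sourceX,
                                          botY + sy * halfWidth - sourceY);
                min = Math.min(min, angle);
                max = Math.max(max, angle);
            }
        }
        return new double[] {min, max};
    }
}
```

The danger input Voidious describes would then be the center of this range, (min + max) / 2, instead of the single angle to the bot's center on the last surfed tick.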

I cannot understand your point, because of my English skill or a simple misunderstanding. Currently Tomcat uses this algorithm (roughly):

  • predict movement in the CW and CCW directions until the closest wave's travel time > 0
  • for each point, calculate its danger as: (number of possible bullets within +/- botWidthInRadians * 0.75) * (high danger) + (number of possible bullets within +/- botWidthInRadians * 2.25) * (low danger) (the distance in radians between the point and the bullet bearing offset is also taken into account; botWidthInRadians is calculated exactly)
  • move in the direction of the safest position, or if the distance to the safest point <= stopping distance, go there directly (yes, I know it's my own poorly reinvented wheel, and it's another reason for the birth of ConceptA :) )

Where can I add precise intersection? Maybe it can only be applied to true surfing? Do you know whether DrussGT uses it in movement?

Jdev 21:53, 20 June 2012

DrussGT doesn't use precise intersection, but it would work and is something I've been contemplating adding. I'm just worried that it would be very slow.

Skilgannon 02:23, 22 June 2012

Hi, maybe this is interesting for you if you need more power for testing. Most of the big server companies (HP, IBM) have some sort of test/develop programs where you can get a so-called free "shell account". You can use these systems for non-commercial use. I'm not sure what the current restrictions are, but you should get an SSH port where you can up/download your stuff. I guess you could easily install Robocode on such a system - run your tests and download the results. The CPU power should be way above anything you can get at home :)

I had one of these years ago... not sure if they changed some conditions: IBM Shell Account

Wompi 17:32, 20 June 2012

Thanks, I will look into it.

Jdev 20:51, 20 June 2012

Oh that's interesting. I actually worked for IBM until last year (on System z, even) and didn't know about that. If my experience in our dev environments is any indication, the horsepower might not be as high as you'd think. =) My other concern is that the CPU is shared with all the other virtual machines, so you couldn't rely on a constant CPU allocation, so things would get weird with Robocode's CPU constant. Either lots of skipped turns or SlowBots given infinite time and taking forever.

Some folks have played with App Engine and EC2: Talk:RoboRumble#Client_on_Google_Apps_Engine. Seems like it can't be long before these options are cheaper/easier/better than buying your own big box for things like Robocode.

Voidious 18:31, 20 June 2012

Hmm, normally those shell accounts are for people who want to try whether the hardware is appropriate for their software, and as far as I know you get fixed CPU power, and most of them should run on real hardware. It has been years since I dealt with this stuff, so it might have changed. And you might be right about the horsepower :) I have no idea what today's home CPUs can do - it just came to my mind reading the posts from you both. I'm happy with my 4-year-old MacBook :)

Wompi 19:13, 20 June 2012

Let's talk when you develop a megabot with multiple wave surfing, 30 R-trees with 500-5000 entries, 7 kD-trees with 50000 entries, bullet shielding and a million other computations :) Actually, I sometimes seriously think about spending $1000-1500 on a new computer just because of Robocode :)

Jdev 20:58, 20 June 2012

Multiply wave suffering

I have implemented multiple wave surfing, but my local tests show only +0.17 APS. Then I searched the top-20 bots' version histories to find out others' gains, but only 2 bots have relatively accurate records:

0.2: APS:83.19 PL:1366 - 699 pairings

    Added HOT and LT to the weighting schemes (acts as pre-loaded HOT and LT shots)
    Changed the weighting system to give the most accurate scheme all the score, the others 0 (with a rolling average, so it evens out)
    Now surfs the second wave, and differently I believe than to other TrueSurfing algorithms: at each tick along the second wave I check what the danger would be if I decelerated at that point, and then take the minimum danger of all those decelerations, as well as continuing, as the actual danger. Also, the second wave is weighted equally to the first. (This may be changed at a later date.)
    Takes ram damage into account for enemy energy drop 

0.1: APS:82.92 PL:1350 - 698 pairings 


V. 0.6i: rank: 33 PL-rank: 24 rating: 1944.58 date: 04.01.2007

    fixed movement bug (introduced in V. 0.6f)
    now surfs the first two waves
    moves some degrees away form the enemy instead of staying perpendicular 

V. 0.6h: rank: 47 PL-rank: 37 rating: 1891.7 date: 24.12.2006

So the question is: "Does a +0.17 APS gain correspond to multiple wave surfing, or are there still bugs or room for improvement in my implementation?"

Jdev 11:11, 22 November 2011
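For what it's worth, the second-wave trick quoted from the DrussGT changelog above (check the danger of decelerating at each tick along the second wave, and take the minimum of those and of continuing) reduces to something like this sketch. The danger values are assumed to come from your own precise prediction; the names are illustrative:

```java
public class MultiWaveDanger {
    // Minimum over the "decelerate at tick t" options and the
    // "keep going" option, as described in the changelog.
    static double secondWaveDanger(double[] dangerIfDecelerateAtTick,
                                   double dangerIfContinue) {
        double best = dangerIfContinue;
        for (double d : dangerIfDecelerateAtTick) {
            best = Math.min(best, d);
        }
        return best;
    }

    // The changelog weights the second wave equally to the first,
    // i.e. secondWaveWeight = 1.0.
    static double combinedDanger(double firstWaveDanger,
                                 double secondWaveDanger,
                                 double secondWaveWeight) {
        return firstWaveDanger + secondWaveWeight * secondWaveDanger;
    }
}
```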

For GresSuffurd this was a long time ago, but it has reasonably reliable stats between 0.1.8 and 0.2.1. The difference is 18 ELO points, which is almost 1 full APS point. Note that at that time the only surfing attribute was lateral velocity, and no-one had heard of precise prediction.

0.2.1 (20070117) member of The2000Club 
gun: GV 0.2.2 move: WS 0.1.5 
Rating: 2000 (21nd), PL: 448-40 (37th) 
bugfix weighting of waves 
0.2.0 (20070114) 
gun: GV 0.2.2 move: WS 0.1.4 
Rating: 1992 (22nd), PL: 444-43 (38th) 
removed nearwall segmentation 
also evaluate second wave 
0.1.9 (20070102) 
gun: GV 0.2.2 move: WS 0.1.3 
Rating: 1980 (23nd), PL: 439-43 (40th) 
segment movement also on nearwall (3 segments) 
0.1.8 (20061212) 
gun: GV 0.2.2 move: WS 0.1.2
Rating: 1982 (22nd), PL: 436-44 (41st)  
GrubbmGait 15:22, 22 November 2011

Chalk also gained a lot of points in v2.5 (oldwiki:Chalk/VersionHistory) - I think about 50 ELO points, based on some chatter on Chalk/Archived Talk. There were multiple changes, but he attributed it mostly to changing his multiple wave surfing from a very basic approach to more like what I do in my bots.

This has come up a few times lately, so maybe I should release a one-wave version of Diamond or Dookious to the rumble and see where it ends up...

Voidious 17:38, 22 November 2011

I think I only gained maybe a quarter of an APS point from my multi-wave surfing. I'm planning on going back and working on it more sometime in the future -- first, to just verify what difference it currently makes, and then to see if I can improve it. I'm also curious how much of a difference multi-wave surfing makes in true surfing vs. go-to surfing. My drive is of the go-to variety, though it may change its mind once or twice per wave depending on the changing battlefield conditions.

Skotty 18:18, 22 November 2011

I think that using bullet shadows is the reason that second-wave evaluation has less impact than it used to have. The influence of bullet shadows is so big that second-wave differences are just marginal.

GrubbmGait 18:26, 22 November 2011

But I think that Diamond and DrussGT get ~1 APS from multiple wave surfing and ~1 APS from bullet shadows. I want to get 2 APS from these things too :)

Jdev 09:53, 23 November 2011

Thanks to all for the responses.

Jdev 07:54, 23 November 2011

Rumble Kings

Veterans, I think it would be interesting for the community to know the history of the rumble throne. As far as I know, it was something like this:

Please correct me if there are mistakes, and add dates if you know them. Then, I think, this list could be moved to a separate page.

Jdev 07:17, 17 October 2011

This page should really be migrated to the new wiki =)

Skilgannon 07:36, 17 October 2011

Oh, thank you, I always forget about the old wiki :)

Jdev 07:40, 17 October 2011

Test bed with stable results

Hi to all. Which test beds do you use to test your robots? I cannot find a test bed which gives stable results (+/- 0.05 APS). Now I use every 5th bot from the roborumble with 5 seasons against each, and the results are within +/- 0.3 APS.

Jdev 06:43, 20 September 2011

I mean results that are stable within a reasonable time and meaningful in APS terms.

Jdev 07:05, 20 September 2011

Well, I suspect it's total number of battles that matters most. A recent version of Diamond dropped 0.12 APS after 2000 battles in the rumble, but perhaps that was a fluke due to someone's client skipping turns. My APS test beds are 100 bots and I run 5 seasons (500 battles) to get an idea, 10 seasons I consider accurate enough even for pretty minor tweaks, or 20 seasons if I want to feel very confident. I'm not sure what the statistical accuracy is, but +/- 0.05 for 20 seasons (2000 battles) would be about my guess. That's pretty much my experience with the accuracy of RoboRumble results after 2k battles, too.

I've been thinking about adding to RoboResearch the feature to calculate the confidence interval of the overall result, assuming normal distributions. It should be pretty easy.

Voidious 14:32, 20 September 2011
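The confidence-interval feature Voidious mentions is indeed straightforward. Assuming battle scores are roughly normally distributed, a 95% interval for the mean APS of a test run looks like this sketch (names illustrative):

```java
public class Confidence {
    // Returns {low, high}: mean APS +/- 1.96 standard errors.
    // 1.96 is the 95% normal quantile; for small samples a t-value
    // would be slightly more accurate.
    static double[] meanWith95Interval(double[] scores) {
        double sum = 0;
        for (double s : scores) {
            sum += s;
        }
        double mean = sum / scores.length;
        double sq = 0;
        for (double s : scores) {
            sq += (s - mean) * (s - mean);
        }
        // sample variance / n, square-rooted = standard error of the mean
        double stdErr = Math.sqrt(sq / (scores.length - 1) / scores.length);
        return new double[] {mean - 1.96 * stdErr, mean + 1.96 * stdErr};
    }
}
```

This matches the rule of thumb in the thread: the interval shrinks like 1/sqrt(n), so going from 500 to 2000 battles halves the noise.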

I run 30 seasons of 35 rounds against 40 bots (taking approx 18 hours). No idea if it is stable though, it is my first testbed. I do know my testbed does not reflect the rumble correctly, because 0.3.2 and 0.3.5 score on par, and 0.3.7 approx 1.5 APS lower.

GrubbmGait 18:16, 20 September 2011

GrubbmGait, you're a hero :) For me, 4 hours for a test is the limit, and I want a test which gives me results in 2 hours. And maybe you could try using Distributed_Robocode - I can share my home netbook (i3 1.3 GHz x 2) with you, and this week I plan to set up an old Duron 1.6 as a dedicated Robocode server. So, I guess, your test would take at most 6 hours (but that strongly depends on which robots are in your test bed).

Jdev 19:35, 20 September 2011

One quick little thought: theoretically, it should be possible to use PCA to come up with the most significant axes of the roborumble, and rank robots by how well they correlate with each axis. Then, you also rank robots by their standard deviation. You then pick robots which simultaneously have a low standard deviation and the highest correlation with the axes that the PCA determined. Then you can use some linear regression to determine the weight to give each of the robots selected.

That, I think, would probably be a good way to find a testbed which simultaneously represents the rumble well and has low noise... Hmm... maybe I should make a patch to Voidious' testbed maker that uses the algorithm I describe in the paragraph above...

Rednaxela 20:33, 20 September 2011

I found a good test bed - 7 seasons against every second bot from the roborumble (~2900 battles). It produces repeatable results within +/- 0.05 APS, and the error against the real RoboRumble is within +/- 0.1 APS.

Jdev 05:59, 27 September 2011

I'd think 1 battle against each bot from the rumble would probably give better results... maybe not as reproducible but closer correlated to the rumble.

Skilgannon 11:38, 27 September 2011

No, Skilgannon, even 3 battles against every bot is worse in terms of stability and correlation with the RoboRumble.

Jdev 04:56, 28 September 2011

I totally agree with using very large test beds (lately ~100 bots for me), but I think you're wasting some CPU cycles testing against bots that are unlikely to be affected by your changes. Bots you get 98+ against probably won't be affected unless you're testing changes to your core surfing algorithm or something. Most changes to surf stats are not going to affect HOT bots at all, and flattener tweaks won't affect any bots that have no chance of triggering your flattener.

Voidious 15:23, 27 September 2011

My last release showed that you never know what will be affected :) So it's better if the test runs for 7 hours instead of 6.5, but gives confidence in the results :)

Jdev 05:02, 28 September 2011