Speed Up Your Macros
Revision as of 10:58, 2 December 2011
THIS IS AN ADVANCED ARTICLE
Introduction
If you start creating your own framework and you enjoy the process, then most likely you will get to a point where some of your more advanced macros start to become a drag. MT script isn't the fastest of languages and there are a couple of functions and methods that can really slow things down. Fortunately a couple of users (like Aliasmask) have started testing different methods to speed up their code. Below you can find the results; some tips are based on conjecture, others have been thoroughly tested to be faster. If you find a new, faster method, don't hesitate to put it here.
Macro vs UDF vs directly
This is again something to take into consideration. Sometimes you need to split up a macro because of the CODE nesting limit, sometimes to prevent a stack overflow and sometimes because it's easier. Here's the impact of your choices. I've tested the macro:
[r:"Test"]
In four ways: once directly, once through a User Defined Function (UDF) Test(), once through the macro call "Test" and once through eval():
[r:"Test"]
[r:Test()]
[r,macro("Test@lib:cifStopwatch"):""]
[r:eval("Test()")]
The result (with 10,000 cycles): directly 8.5 seconds, UDF 14.3s, macro 18.5s and eval 15.4s. In short, if speed is of the essence, try to keep everything in one macro. If you need to split up: use UDFs as much as possible. If you feed it one variable (argument), the total time for both the UDF and the macro increases by about 1 second.
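For completeness, the UDF variant above presupposes that Test() was registered with defineFunction(). A minimal sketch, assuming the "Test" macro lives on the same lib:cifStopwatch token used in the macro() call:
[h: defineFunction("Test", "Test@lib:cifStopwatch")]
<!-- from now on [r: Test()] runs that macro as a UDF -->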
Storing and Retrieving Variables
You can store a variable in three ways:
- on a token using setProperty()
- on a lib:token using setLibProperty()
- on one of the token's identifiers (token.name, token.gm_name, token.label)
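A minimal sketch of the three methods (the property name "HP", the value and the lib token name "lib:Store" are made-up examples, not from the benchmark):
<!-- 1. property on a token -->
[h: setProperty("HP", 10)]
[h: HP = getProperty("HP")]
<!-- 2. property on a lib:token -->
[h: setLibProperty("HP", 10, "lib:Store")]
[h: HP = getLibProperty("HP", "lib:Store")]
<!-- 3. one of the token identifiers -->
[h: token.gm_name = 10]
[h: HP = token.gm_name]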
The fastest method to retrieve a simple value is from the identifiers. If retrieving a value from an identifier takes 1 second, then the same value takes (on average) 1.2 seconds using getLibProperty() and 1.8 seconds using getProperty(). The same test with a heavy json object: with the identifier again set at 1 (still the fastest), getLibProperty() is still 1.2, but the getProperty() time has increased to 2.8 seconds. The json used was constructed out of 1000 identifiers, and the average was taken over 10,000 loops.
Now the surprising part: to set a value one would expect similar results, but that isn't the case. Using the same heavy json, it turns out that token.gm_name was the fastest and token.label the slowest! If gm_name is set at 1 second, then the rest is: 2 seconds for both setProperty() and setLibProperty() (yes, equal speed) and 2.4 seconds for token.label. Again, 10,000 loops were used to test.
I've run more tests to see which method is faster for storing multiple simple variables on a lib:token and retrieving them again:
- Using json
- Using strProps list
- Each variable separately.
The last method is by far the slowest (10x the time of the other methods). Using json or strProps does not make a lot of difference, though strProps are faster. What I tested:
[testJson = json.set("{}", "test0",test0,"test1",test1,"test2",test2,...,"test9",test9)]
[testJson = json.fromStrProp(strPropFromVars("test0,test1,test2,...,test9","UNSUFFIXED"))]
[testStrProp = strPropFromVars("test0,test1,test2,...,test9","UNSUFFIXED")]
I also tried the strformat trick (though I could not properly retrieve the json object from the lib with this method):
[testJson = strformat('{"test0":"%{test0}","test1":"%{test1}","test2":"%{test2}",...,"test9":"%{test9}"}')]
[testStrProp = strformat('test0=%{test0};test1=%{test1}...;test9=%{test9}')]
Of these 5 methods, the strPropFromVars() and strformat() methods were the fastest: 9.1 seconds (10,000 cycles), and json.set() was the slowest: 13.1s. The json.fromStrProp() method was only slightly slower at 9.6s.
Retrieving the data showed roughly the same result; strProps are a bit faster:
[result = getLibProperty("testJson","lib:OntokenMove")]
[varsFromStrProp(json.toStrProp(result))]
[result = getLibProperty("testStrProp","lib:OntokenMove")]
[varsFromStrProp(result)]
Using another method to retrieve the json vars, e.g.
[foreach(item, result): set(item, eval(item))]
is considerably slower.
Another interesting thing is that, using the above varsFromStrProp() and strPropFromVars(), it hardly matters how many variables you set. I've tested this by setting 2 and 100 variables in one go. It turned out that strPropFromVars() took 4x longer (4ms to set 100 vars vs 1ms to set 2) and varsFromStrProp() was equally fast for both 2 and 100 (ok, a very small difference: 2 takes 0.9ms and 100 takes 1.1ms). This was tested again with 10,000 cycles (I divided the results by 10,000 to get the times in ms).
jsons
- try to avoid nested json objects (i.e. a json object within a json object); objects within a json array are likely better
- when storing a json as a property on a token, try to limit the get/setProperty calls: do it once, store the result in a local variable and pass it along into submacros. This also applies if you're changing a property directly (i.e. without get/setProperty), e.g.:
<!-- this (using get/setProperty) -->
[HP = getProperty("Hitpoints", tokenName)]
[HP = HP - 1]
[setProperty("Hitpoints", HP, tokenName)]
<!-- is the same as this (changing the property directly) -->
[Hitpoints = Hitpoints - 1]
- it might be faster to convert a json to a string (using encode()) before storing it on a token, and to run it through decode() when retrieving it
- if you want to store a huge and complex json variable temporarily on a token, don't use a property but use token.gm_name (or token.label or token.name) to store it (using a lib token for that). It goes without saying that this is a bit of an extreme method, in other words a hack. If you were to e.g. use the token.name variable on a lib token, interesting stuff (that you don't want) will happen
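A sketch of the encode()/decode() round trip suggested above (the property name "bigJson" and the variable myJson are made-up examples):
[h: setProperty("bigJson", encode(myJson))]
[h: myJson = decode(getProperty("bigJson"))]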
json object vs json array vs lists
For simple operations:
slower ------------------------------------------------------> faster
json object operations --> json array operations --> list operations
The operations tested were building the structure and retrieving all values. The speed differences are significant!
These tests were done by comparing getting and setting 1000 x and y coordinates:
- 1 list with x items, where every item is a list of y items, using different separators: "1,2,3,..; 1,2,3,..; 1,2,3,.."
- 1 array with x items, where every item contains y items: [[1,2,3,...],[1,2,3,...],[1,2,3,...],etc]
- 1 json object containing x*y keys: {"x1y1":{"x":1,"y":1}, "x1y2":{"x":1,"y":2},etc}
Obviously there are situations where a json object or array will be faster, simply because it's smarter coding or much easier to use them. So only give value to this test if you want to do something similar to what was done here.
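For illustration, the three layouts on a small scale (a sketch, not the actual benchmark code):
<!-- 1. list of lists, using two separators -->
[h: coords = "1,2; 3,4; 5,6"]
[h: y0 = listGet(listGet(coords, 0, ";"), 1)]
<!-- 2. json array of arrays -->
[h: coords = "[[1,2],[3,4],[5,6]]"]
[h: y0 = json.get(json.get(coords, 0), 1)]
<!-- 3. json object with one key per coordinate pair -->
[h: coords = json.set("{}", "x1y1", json.set("{}", "x", 1, "y", 2))]
[h: y0 = json.get(json.get(coords, "x1y1"), "y")]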
functions
Nested functions
It seems I had it wrong before; I had it from hearsay, but now I've benchmarked it myself. Nested is much faster than unnested. First I tried one nested function vs unnested for 10,000 cycles; the result was 10s for nested and 15s for unnested. For the next test I ran a really nested function
[varsFromStrProp(json.toStrProp(json.fromStrProp(strPropFromVars(theList,"UNSUFFIXED"))))]
vs unnested
[tmp = strPropFromVars(theList,"UNSUFFIXED")]
[testJson = json.fromStrProp(tmp)]
[tmp1 = json.toStrProp(testJson)]
[varsFromStrProp(tmp1)]
Running both 10,000 times resulted in: nested 14s and unnested 31s. It might not help the readability of your code, but nesting your functions can be more than twice as fast!
Loop speeds
The following loops: count(), for() and foreach() take exactly the same amount of time to roll 1d100 10,000 times. In other words, they're equally fast.
- CIF's stopwatch was used to measure this
This means that you can and should use the right loop function for the right reason. Some examples of proper use:
- use foreach() to loop through a list or json array
- use count(n) if you want to execute a routine n times
- use for(i, n, 0, -2) if you want an atypical but regular countdown from n to 0, using i in your routine.
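The three loop flavours in use (a sketch; theList and doStuff() are placeholder names):
[h, foreach(item, theList): doStuff(item)]
[h, count(10): doStuff(roll.count)]
[h, for(i, 10, 0, -2): doStuff(i)]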
macros
When getting arguments within a UDF (user defined function)
<!-- Slow -->
[h: var1 = json.get(macro.args,0)]
[h: var2 = json.get(macro.args,1)]
<!-- Faster -->
[h: var1 = arg(0)]
[h: var2 = arg(1)]
Notes:
- If you use the macro() function you can only make use of the macro.args method (the slow way).
- This method doesn't work the other way around: if you set macro.return within a UDF, you cannot use arg(0) from within the function that called the UDF. E.g.:
<!-- after calling some UDF: -->
[h: doSomething(var)]
<!-- this works -->
[resultOfDoSomething = macro.return]
<!-- this won't -->
[resultOfDoSomething = arg(0)]
<!-- actually it will most likely 'work', but it won't contain the value you want -->
Tokens
Though this isn't really about macros, it is about speed. What you put on your tokens will also affect the snappiness of the game play:
- having a lot (guesstimate: >100) of macro buttons on a token will influence dragging it on the map (slow it down). Note: this issue has been partially fixed in MT by Rumble around b70-75. It still has an impact on speed, but not nearly as much as it used to.
- having a token with lots of data stored on it will affect how token movement updates on other PCs connected to the server
- a large image on a token will also influence speed; try to keep them at 200x200 pixels or lower.
<!-- Comments -->
There are two ways to put comments in MT script:
<!-- this is open comment -->
Note the space " " after '<!--'; this is essential or it won't be seen as a comment. Or:
[H:'<!-- this is hidden comment -->']
Note the quotes ' ' at the beginning and end; again, you get errors if you forget them.
These two methods each have a big pro and a big con. The open comment is processed very fast: on a moderately fast pc it takes about 100ms to process 10,000 lines (100ms is roughly the threshold at which you start to notice macro execution time). In short, you can use these freely. Do keep in mind though that if you put a comment in e.g. a count(1000): loop, this adds 1000 lines of comment to your code! The big con of the open comment however is stack. I've benchmarked this as well and it turns out to be completely system dependent, but I noticed that about half a page of book text ported straight to the chat will produce a stack overflow with a stack set to 5! That is not a lot of text. The best way to avoid this issue is to suppress the macro's output by default in the UDF definition and use macro.return = result at the end. Another method is to make sure that at least all your loops are hidden (h, foreach(), CODE:{}) so all the comments you put inside can be open.
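A sketch of the first workaround: defineFunction() takes an optional third argument that discards the function's output, so only macro.return comes back (the macro and lib names here are made up):
[h: defineFunction("doStuff", "doStuff@lib:MyLib", 1)]
<!-- inside doStuff, open comments no longer pile onto the stack: -->
[h: macro.return = result]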
The hidden comment thus has the big advantage that it does not add to the stack, so the chances of a stack overflow are a lot smaller. The big drawback however is that it's relatively slow. Mind you, it's still pretty fast: on (again) a moderately fast pc it takes 4ms to execute, which means it gets noticeable after around 250 lines. If you keep slower systems in mind as well, this number might easily become half that! Another big advantage for the more experienced coders among us: if you use the console to check the running code, [h:'<!-- -->'] shows up; <!-- --> doesn't! So to track which routine is currently active, I always start my macros with [h:'<!-- macro name -->'].
What I personally do is use [h:'<!-- -->'] outside any loops and if()/CODE statements, and <!-- --> inside these loops and if statements. I obviously make sure that these routines are all hidden.
--Wolph42 08:52, 12 August 2010 (UTC)