There is now an online version that you can try, here: LSL-PyOptimizer online
LSL PyOptimizer is an LSL2 script optimizer written in Python 2.7. Currently it only supports code memory optimization (no speed optimization), only for Mono (no LSO), and only for the Second Life flavour of LSL (no OpenSim etc.).
The original LSL compiler does not do any optimizations whatsoever. Either the programmer does the optimization, or the scripts take more memory than necessary when writing something as simple as a = -1; (yes, the sign takes code memory!).
Given the 64K total code+data memory limit that applies to Mono scripts, this becomes a problem, leading either to difficult-to-read scripts, if optimized by hand, or, in the case of large scripts, to code taking up valuable memory that could be used to embed more code or data.
The aim of this program is to act as a filter that performs the optimizations automatically, letting the programmer focus on writing readable code.
It also implements several syntax extensions that help improve the readability of scripts and the productivity of the programmer. It works well when combined with a C preprocessor such as Boost::Wave (the one embedded in Firestorm and other TPVs) or GNU cpp.
Firestorm already incorporates an optimizer, but it is limited to removing unused global variables and functions, and does so by simple string analysis, not by syntactic analysis (e.g. if a local variable with the same name as a global is defined, the global isn't optimized out even if not used). In contrast, the program presented here does full syntax analysis and implements many more optimizations, including removing unused locals, simplifying many expressions, removing dead code, and more.
It is now in the beta stage. Command line options are still subject to change, and new features may be added without notice. Please help by reporting any bugs you find, using the project's issue tracker at GitHub.
The syntax extensions implemented include:
- break and continue statements.
- &=, |=, ^=, <<=, >>= assignment operators.
- Extended type casts, e.g. (string)++x.
- switch() statements, for compatibility with Firestorm.
- Lazy lists (identifier[index]), for compatibility with Firestorm.
- Support for cpp and for mcpp (since 0.1.3alpha).
- #pragma and #line preprocessor directives (0.2.1beta).
The next sections explain these features in detail.
The project is hosted on GitHub. The latest version is available at: https://github.com/Sei-Lisa/LSL-PyOptimizer
In order to run the program, you need Python installed. It should work with both Python 2.7 and Python 3.x. Note that Python 3.x support is still in beta; if you run into any error while using it, please file a report.
break and continue
Support for break and continue statements, working as their C equivalents. It also supports break n; and continue n; for a constant integer n, which indicates the number of nested loops to exit from (in the case of break) or to continue at. The default n when not specified is 1, meaning to break or continue at the current loop.
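For instance, a minimal sketch of break n; in nested loops (hypothetical variables, not taken from the documentation):
integer i;
integer j;
for (i = 0; i < 10; ++i)
{
    for (j = 0; j < 10; ++j)
    {
        if (i * j > 20)
            break 2;    // exits both loops at once; a plain break would only exit the inner one
    }
}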
Allow arbitrary expressions in globals, as long as they resolve to a single constant. The optimizer will evaluate the expression and substitute the result in the final script. For example, you can have a line like:
float a = llPow(llSin(40*DEG_TO_RAD), 2);
in the globals section, and the optimizer will output the line:
float a = 0.41317588;
instead. Needs constant folding optimization to be enabled. If the expression does not resolve to a constant, a warning will be emitted, and the compilation will fail at that line.
C-like &=, |=, ^=, <<=, >>= assignment operator support.
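For example, a small sketch using two of these operators (hypothetical flags variable):
integer flags;
flags |= 0x4;     // equivalent to: flags = flags | 0x4;
flags &= ~0x1;    // equivalent to: flags = flags & ~0x1;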
In LSL, keys are automatically cast to string in most cases. For example:
llOwnerSay(llGetOwner());
works perfectly. However, one prominent case where that does not happen automatically is in string concatenation. Confusingly, while the above works, this doesn't:
llOwnerSay("Your key is: " + llGetOwner());
and instead produces a type mismatch error. This syntax extension allows you to concatenate strings and keys, making the type cast transparent.
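With this extension enabled, the line above is accepted; a sketch of the equivalent code, with the cast made explicit:
llOwnerSay("Your key is: " + (string)llGetOwner());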
For example "a" "b"
resolves to the string "ab"
. Very useful when combined with preprocessor macros, for not needing to add strings together to form a larger string. For example:
#define VERSION "1.13"
#define REVISION "b"
...
llOwnerSay("Program version " VERSION
", revision " REVISION);
or even
#define VERSION 1.13
#define REVISION b
#define VERBATIM_STRINGIFY(...) #__VA_ARGS__
#define STRINGIFY(...) VERBATIM_STRINGIFY(__VA_ARGS__)
...
llOwnerSay("Program version " STRINGIFY(VERSION)
", revision " STRINGIFY(REVISION));
will resolve to:
llOwnerSay("Program version 1.13, revision b");
instead of something like
llOwnerSay("Program version " + "1.13" + ", revision " + "b");
The latter can also be resolved by the optimizer, but by default that optimization is disabled as it can be counter-productive (see String constant concatenation below for more information).
For some reason, LSL only allows postfix expressions after a type cast. This feature extends the syntax to allow prefix expressions as well. This means that you can write e.g. (string)(integer)a instead of (string)((integer)a), or (string)++x instead of (string)(++x), sparing you from having to enter the extra parentheses.
Allows duplicate labels obeying the LSL scope rules. This one is tricky to understand. In LSL, syntax-wise, label identifiers obey the same scope rules as variables: when opening a new block, the labels defined inside it are invisible to any of the outer blocks. This means that you can have labels with the same name in different blocks, just like you can have variables with the same name in different blocks.
For example, you can have:
default
{
state_entry()
{
{
integer i;
@label1;
}
{
integer i;
@label1;
}
}
}
Just like variables, labels are visible only within their own block or any block nested inside it. Unlike variables, labels are also visible if they are declared after the jump instruction that uses them, as long as the jump is within a block that encloses the label.
However, that's only the theory. The compiler is prepared to work like that, but when there is more than one label with the same name, the underlying assembler either jumps to the last label (in the case of LSO) or fails and doesn't let the script be saved (in the case of Mono).
This syntax extension fixes that situation by renaming the duplicate labels in the output, giving each a different name, so that Mono scripts can be saved and the compiler works as intended.
Again, one use is in preprocessor macros. Consider this example:
#define DO {@dowhile;
#define WHILE(expr) if (expr) jump dowhile;}
Without this option, you would only be able to have one DO...WHILE() loop per function.
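For instance, a sketch (hypothetical counter variable) with two such loops in the same event, which works once the extension renames each expanded dowhile label:
integer i;
DO
    ++i;        // expands inside a block containing @dowhile;
WHILE (i < 3)
DO
    --i;
WHILE (i > 0)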
The optimizer option ProcessPre enables processing of certain preprocessor directives that control compilation. Only #pragma and #line are supported for now. Typically these directives pass through the preprocessor unchanged (not always; see the note below about GNU cpp and #line), so they will reach the optimizer when a preprocessor is used. The rest of the preprocessor directives are ignored.
The #line directive is not supposed to be used by the programmer. C/C++ preprocessors, including Boost::Wave, mcpp and GNU cpp, generate it automatically whenever the lines in the original file lose synchronization with the preprocessor's output, in order to keep track of which original line produced a certain output line, for error reporting and debugging purposes. As of version 0.2.2beta, the optimizer uses this directive to track which input line generated a certain output line, reporting errors with a line number that refers to the original file rather than the preprocessed/optimized file, and including the name of the file where the error was found when it differs from the initial source.
#pragma is a special C preprocessor directive that allows compilers to extend the language with certain options. In the case of this optimizer, the #pragma directive is used to enable or disable the parsing options and extensions on the fly, taking precedence over all others. Even ProcessPre itself can be disabled (it can't be enabled this way, because when it is disabled, #pragma directives aren't processed).
To use this feature, write the directive like this:
#pragma OPT [+/-]option1[,[+/-]option2,...]
Note that it's important that OPT is in upper case. The options list is not case sensitive, but it can't have any spaces.
For example, this code activates the break and continue statements extension and the C-like string juxtaposition extension, regardless of the options passed on the command line:
#pragma OPT +BreakCont,+AllowMultiStrings
default
{
while (TRUE) break;
llOwnerSay("TEA" "TIME");
}
As on the command line, prepending a + sign (or nothing at all) to an option activates it, while prepending a - sign deactivates it.
The list of options usable in #pragma directives is:
ExtendedGlobalExpr
ExtendedTypeCast
ExtendedAssignment
ExplicitCast
AllowKeyConcat
AllowMultiStrings
ProcessPre
EnableSwitch
BreakCont
ErrMissingDefault
LazyLists
DupLabels
ShrinkNames
FuncOverride
Inline
For a description of each, you can invoke the program from the command line with python main.py -O help (that's the upper-case letter O, not the number zero). Note, however, that the only options that can be used in #pragma directives inlined in the code are the ones listed above, which are the options that affect parsing, not optimization.
Important: Changing options in the middle of the program may not be well supported. In particular, activating and then deactivating certain options like BreakCont within the same program may crash the optimizer. Changing the options at the beginning of the script should be safe.
Note: GNU cpp actually generates a preprocessor command that only contains the line number and the file name, pretty much like a #line directive does, but without the line part. The parser deals with that format correctly, treating it as if it were a #line directive.
Note: Firestorm prepends a // comment to the #line directives that Boost::Wave generates; since the optimizer doesn't use Firestorm's output directly, it doesn't interpret these lines specially.
Note: Pragma operators of the form _Pragma("pragma string") introduced in C99 are not supported.
In versions 0.1.3alpha and below, there was a similar option called skippreproc that merely skipped any preprocessor directives. That option has now been removed.
The command line option inline enables a syntax extension that allows you to use functions as if they were macros. This option is disabled by default, due to its effect on while and for loops.
This feature is in an experimental stage. Use at your own risk.
To declare a function as inline, add the word inline after the closing parenthesis of the parameter list. For example, this definition:
say(string s) inline
{
llOwnerSay(s);
}
allows you to use say as if it were llOwnerSay, without actually defining a new function in the optimized output. Of course, the same can be done with a preprocessor macro.
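For example, a sketch of how the definition above could be used; the exact code the optimizer generates may differ, but a call such as:
say("Door locked");
ends up expanded in place to the equivalent of:
llOwnerSay("Door locked");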
Caveats: the inlining machinery generates internal identifiers of the form ___letters__numbers. If you use names like these, you risk causing a name collision.
These extensions are implemented for compatibility with the syntax extensions originally integrated in Emerald and currently present in Firestorm. Their use is discouraged and they are considered legacy extensions.
switch() statements
Enables use of C-like switch statements. These produce very awkward code, hard to optimize, and the argument is evaluated multiple times (as many times as there are case labels).
The syntax of the switch statement, as implemented, has two restrictions compared to its C counterpart:
- case labels can't appear in nested blocks. That's because they are replaced by LSL labels, and as discussed in the Multiple labels with the same name section above, label scope rules prevent their visibility in an outer block, so once converted to labels, the corresponding jump instructions would not be able to find them. This limitation means that Duff's device or similar constructs can't be implemented with this optimizer.
- switch() needs to be followed by a block, not by a single statement. For example, while this works in C, it won't work in this optimizer: switch(1) case 1: break; because case is treated by this parser as a statement, rather than as a label prefix, making break fall outside the switch and thus fail at that point. This limitation is probably only of theoretical importance and should have no practical implications, since single-statement switch clauses are of little or no practical use (known to the author). Of course, it works perfectly when enclosed in braces: switch(1) { case 1: break; }
As an extension, and for compatibility with Firestorm, if there is a block beginning right after a case or default statement, the colon is optional. For example, all of these are valid:
switch (x) { case 1: ; default: ; } // normal syntax
switch (x) { case 1: {} default: {} }
switch (x) { case 1 {} default {} } // extended syntax
but this will cause an error:
switch (x) { case 1 ; default ; }
"Lazy lists" is what Firestorm calls an extended syntax for subindex values, referencing individual list elements.
The syntax for assignment is:
mylist[index] = value;
which is designed to be roughly a shortcut for this:
mylist = llListReplaceList(mylist, (list)value, index, index);
The implementation, however, includes creating a function that performs the replacement, which prevents the index from being evaluated twice but also uses more memory. The function is called lazy_list_set, and it can be overridden by the user. If you define a function with this prototype:
list lazy_list_set(list target, integer index, list value)
which returns the list with the element replaced, then the optimizer will use yours rather than adding its own. Note that a preprocessor macro won't work in its place.
For compatibility with Firestorm, when the index is greater than the number of elements in the list, the intermediate values are filled with integer zeros. If you don't want that, you may have a reason to override it.
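For example, a sketch of a possible override that pads with empty strings instead of integer zeros (a hypothetical preference, not part of the optimizer):
list lazy_list_set(list target, integer index, list value)
{
    // Ensure an element exists at the requested index, padding with empty strings.
    while (llGetListLength(target) <= index)
        target += "";
    // Replace that element with the new value.
    return llListReplaceList(target, value, index, index);
}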
Note that the value of the assignment as an expression is the whole list, not the element. For example, this will fail because it's assigning a list to an integer:
list a;
integer b = a[5] = 4;
To see why, look at what that is expanded to:
list a;
integer b = a = lazy_list_set(a, 5, (list)4);
which will obviously fail. But this will work:
list a;
integer b;
a[5] = b = 4;
The syntax for reading an element is the same as for assigning, but the subindex expression has no type by itself, therefore a type cast is mandatory. For example:
list a;
integer b = (integer)a[3];
That is converted at parsing time to:
list a;
integer b = llList2Integer(a, 3);
If the type it's cast to is list, it needs two parameters (starting and ending index), not one:
list a;
a = (list)a[3, 3];
That is a requirement of the underlying llList2List function used in this case.
If two or more functions with the same name are defined, the latest definition takes effect. This is done for compatibility with Firestorm; apparently that "feature" is used by some people.
Note that Firestorm also allows calling undefined functions, as long as the function that has the calls is optimized out. That feature is not implemented by this optimizer, as it would complicate expression parsing a lot. It's also considered a misfeature; writing code that relies on it is discouraged.
The optimizer simplifies expressions as much as it knows how, which is a fair amount, though there's still room for improvement in this area. Expressions that evaluate to a constant are replaced with that constant. Most calculation functions are implemented; note, however, that the JSON functions do not follow the broken LSL behaviour too closely, and that llJsonSetValue in particular is not yet implemented.
Other expressions such as a+3+1 are replaced with a+4, and so on. Note, however, that for floats, (a+b)+c may not equal a+(b+c), so that optimization is not always done for floats. Also, as of this writing this optimization is only partial, so some expressions may not be optimized; e.g. 2+a+b+3 is not optimized to a+b+5. Many boolean expressions are simplified too (more are on the way). For example, (TRUE&&(expression)) is simplified to (expression), and (FALSE&&(expression)) is simplified to (FALSE) provided the expression has no side effects. The famous replacement of if (llListFindList(...)!=-1) with if (~llListFindList(...)) is also performed automatically.
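For instance, a small sketch (hypothetical variables) of input, with comments showing roughly what the folding produces:
list lst;
integer a = llGetUnixTime();
integer b = a + 3 + 1;                    // folded to: a + 4
if (TRUE && (a > 0))                      // simplified to: if (a > 0)
    llOwnerSay("positive");
if (llListFindList(lst, (list)a) != -1)   // rewritten as: if (~llListFindList(lst, (list)a))
    llOwnerSay("found");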
The constant folding optimizer is also responsible for simplifying certain statements, e.g. if (FALSE) { statements; } is removed completely, and if (TRUE) { statements1; } else { statements2; } is replaced with just { statements1; }, removing { statements2; }. do...while(constant) loops and other loops are optimized similarly.
This enables using many preprocessor tricks, like creating an assert() macro similar to the one in C:
#define assert(...) do { if (DEBUG) if (__VA_ARGS__) ; \
else llOwnerSay("ASSERTION FAILED: " #__VA_ARGS__); } while (0)
without worrying about the extra memory that it will take in production code once DEBUG is switched off, or about the loop taking up actual code memory.
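A usage sketch, assuming a DEBUG macro and a queue list defined elsewhere (both hypothetical):
assert(llGetListLength(queue) > 0);
// With DEBUG defined as 0, the whole expansion folds away and takes no code memory.
// With DEBUG non-zero, a failing check reports: ASSERTION FAILED: llGetListLength(queue) > 0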
This part of the optimizer tracks execution and usage of statements and variables, removing the ones that aren't used. It performs a function similar to that of the Firestorm optimizer, but it can also remove unused locals and dead code after a return or jump, and isn't confused by a global and a local having the same name.
It also replaces references to integer, string and key variables that are never written to (whether global or local) with their value. For example:
integer ANSWER = 42;
default
{
state_entry()
{
llOwnerSay((string)ANSWER);
}
}
will produce:
default
{
state_entry()
{
llOwnerSay("42");
}
}
after dead code removal (DCR) and constant folding. This optimization has one of the largest impacts, as variables and parameters in general seem to take a lot of memory in Mono, and removing as many of them as possible produces good savings.
Long variable and parameter names are nice and readable, but each character in a global identifier (including function and state names) or in a function parameter takes at least one byte of code memory. In large programs, this can add up to a significant amount. This option replaces global and parameter identifiers with the shortest possible names, reusing as many as it can, including some system ones. The savings from this alone can be very significant in programs with a large number of globals or states.
Floats under Mono are internally double precision, so float constants take four more bytes than integers. On the other hand, a type cast from integer to float takes only one byte. This optimization substitutes floats with integers where possible, for an overall saving of three bytes per number. For example, it transforms llSetTimerEvent(0.0) into llSetTimerEvent(0).
The sign at the beginning of a number constant (except in globals) takes up one byte of code, unless prefixed by a type cast (which, under Mono, does not itself take up code memory if the destination type is the same). It's a small saving, but it adds up. Such numbers are therefore output with a type cast and surrounded by parentheses, e.g. ((float)-1.5) instead of -1.5.
This optimization is turned off by default, as it can be counter-productive. It enables concatenating string constants together. However, consider this situation:
string a = "A very long string that appears in this script once" + ", or not";
string b = "A very long string that appears in this script once" + " or twice";
Since Mono keeps only one copy of each constant string in the code, making the optimizer concatenate them would be counter-productive, generating two long strings that would take more code than the original string plus the shorter ones.
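Conversely, a sketch of a case where enabling the optimization does save memory, because each piece appears only once in the script:
llOwnerSay("Hello, " + "world!");    // concatenated to: llOwnerSay("Hello, world!");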
This program is designed to work as a filter. It can read from standard input if the file name argument is "-", and it can (and does by default) output the result to standard output. Any errors and warnings go to standard error always, to not interfere with the script being output.
The input script must be encoded in UTF-8. If you're using an editor, make sure it saves UTF-8.
Running it by hand to optimize your scripts can be cumbersome. The intention is for it to act as a filter that is transparent to the user; however, as of this writing there's no support for any viewer or IDE. Run it without parameters to see the invocation help, for example with python main.py. Redirect the output to a file if you want to store the result, possibly to open it with an editor and copy it to the clipboard. Or, under X Window, if you install the xclip package, you can pipe the output directly to xclip -quiet -selection clipboard to copy it to the clipboard rather than using a file, so you can paste it into the viewer. It's a good idea to use the option --bom to include a UTF-8 byte order mark that other applications can use to recognize the encoding. Examples:
python main.py --bom myscript.lsl | xclip -quiet -selection clipboard
will, under X Window, read myscript.lsl, optimize it, and copy the optimized result to the clipboard, ready to be pasted into the viewer.
python main.py --bom myscript.lsl -o temp.opt
notepad temp.opt
will, under any system which has an editor called notepad, read myscript.lsl, optimize it, and write the optimized result to temp.opt, then open it in the editor, enabling you to copy it and paste it into the viewer. Under Windows Vista and above, there's a command line application called clip that does the same as xclip does for X Window, enabling you to use this:
python main.py --bom myscript.lsl | clip
to copy the optimized output to the clipboard. Under OS X, pbcopy does the same as xclip and clip.
The clip application does not recognize the byte order mark, therefore Windows users may need to execute chcp 65001 before using the optimizer, to switch their console to UTF-8 and make the clip program work properly with non-ASCII characters.
An external preprocessor is supported. If your system has a GNU C Compiler suite already installed, then the cpp that comes with it (or gcc with the -E option) should be enough. If you don't have it, the recommended preprocessor is mcpp, because it's a standalone executable that is easy to install. Download it from http://mcpp.sourceforge.net/download.html, unpack the executable somewhere in the system path (or specify the path to the executable every time with the --precmd option of the program) and you're ready to go.
Future plans include writing a patch for Firestorm to enable it to run external script filtering programs instead of Firestorm's internal preprocessor and optimizer. That would allow this program to be run using Firestorm's integrated machinery, making usage pretty transparent to the programmer.
Support for other IDEs like Eclipse is not planned, but the author encourages others to write it. Please notify Sei Lisa if you write one, so that it can be listed on this page.
The program uses two external files. One is builtins.txt, which is in the same format as needed by lslint; an up-to-date copy can be obtained from the kwdb project: https://bitbucket.org/Sei_Lisa/kwdb/raw/tip/outputs/builtins.txt. The other is fndata.txt, which is a list of LSL functions and events, with data related to the semantics of each, allowing the optimizer to apply additional optimizations that are particular to them.
Making the optimizer smarter is one primary objective. Conversion to SSA form is not currently performed; it would be nice to have, for one thing, to enable more optimizations where values are not used. There are a number of TODO items in the code about optimizations pending implementation.
Lastly, implementing some kind of machine-readable way to inform the invoker about the available options is also in the plans. That would allow a plugin, viewer or other front end to present a dialog with the options for invoking the optimizer.
This program is distributed under the terms of the GNU General Public License (GPL) version 3.
© Copyright 2015-2023 Sei Lisa. All rights reserved.
Just to put it explicitly, processing code with this optimizer does NOT automatically make the result fall under the terms of the license. Only the optimizer itself is subject to the terms of the GPL, similarly to how the C code compiled by a GPL-licensed compiler is not subject to the licensing requirements of the compiler even if the compiler itself is.
When using lazy lists, a fragment of code (a small function called lazy_list_set) may be included in the output. This function, written by Sei Lisa, is in the public domain and is therefore not subject to any copyright restrictions.
Written by Sei Lisa. Sei Lisa is the author's pseudonym and user name in the Second Life virtual world.
Happy coding!