Page edited by Robin Garner: "Documented new options, and more of the TRACE options."
Overview
The MMTk harness is a debugging tool. It allows you to run MMTk with a simple client, written in a simple Java-like scripting language, which can explicitly allocate objects, create and delete references, etc. This allows MMTk to be run and debugged stand-alone, without the entire VM, greatly simplifying initial debugging and reducing the edit-debug turnaround time. This is all accessible through the command line or an IDE such as Eclipse.
Running the test harness
The harness can be run standalone or via Eclipse (or other IDE).
Standalone
ant mmtk-harness
java -jar target/mmtk/mmtk-harness.jar <script-file> [options...]
There is a collection of sample scripts in the MMTk/harness/test-scripts directory, and a simple wrapper script that runs all the available scripts against all the collectors:
bin/test-mmtk [options...]
This script prints a PASS/FAIL line as it goes, and puts detailed output in results/mmtk.
In Eclipse
ant mmtk-harness-eclipse-project
Or, in versions before 3.1.1:
ant mmtk-harness && ant mmtk-harness-eclipse-project
Refresh the project (or import it into Eclipse), and then run 'Project > Clean'.
Define a new run configuration with main class org.mmtk.harness.Main:
Click Run (actually the down-arrow next to the green button) and choose 'Run Configurations...'.
Select "Java Application" from the left-hand panel, and click the "new" icon (top left).
Fill out the Main tab, setting the main class to org.mmtk.harness.Main.
Fill out the Arguments tab, giving the script to run (and any harness options) as program arguments.
The harness makes extensive use of the Java 'assert' keyword, so you should run the harness with '-ea' in the VM options.
Click 'Apply' and then 'Run' to test the configuration. Eclipse will prompt for a value for the 'script' variable - enter the name of one of the available test scripts, such as 'Lists', and click OK. The scripts provided with MMTk are in the directory MMTk/harness/test-scripts.
You can configure Eclipse to display vmmagic values (Address/ObjectReference/etc.) using their toString method through the Eclipse -> Preferences... -> Java -> Debug -> Detail Formatters menu. The simplest option is to check the box to use toString 'As the label for all variables'.
Test harness options
Options are passed to the test harness as 'keyword=value' pairs. The standard MMTk options that are available through JikesRVM are accepted (leave off the "-X:gc:"), as well as the following harness-specific options:
plan: The MMTk plan class. Defaults to org.mmtk.plan.marksweep.MS.
collectors: The number of concurrent collector threads (default: 1).
initHeap: Initial heap size. It is also a good idea to use 'variableSizeHeap=false', since the heap growth manager uses elapsed time to make its decisions, and time is seriously dilated by the MMTk Harness.
maxHeap: Maximum heap size (default: 64 pages).
trace: Debugging messages from the MMTk Harness. Useful trace options include:
  ALLOC - trace object allocation
  AVBYTE - mutations of the 'available byte' in each object header
  COLLECT - detailed information during GC
  HASH - hash code operations
  MEMORY - page-level memory operations (map, unmap, zero)
  OBJECT - trace object mutation events
  REFERENCES - reference type processing
  REMSET - remembered set processing
  SANITY - detailed information during Harness sanity checking
  TRACEOBJECT - traces every call to traceObject during GC (requires MMTk support)
  See the class org.mmtk.harness.lang.Trace for more details and trace options - most of the remaining options are only of interest to maintainers of the Harness itself.
watchAddress: Set a watchpoint on a given address or comma-separated list of addresses. The harness will display every load and store to that address.
watchObject: Watch modifications to a given object or comma-separated list of objects, identified by object ID (sequence number).
gcEvery: Force frequent GCs. Options are:
  ALLOC - GC after every object allocation
  SAFEPOINT - GC at every GC safepoint
scheduler: Optionally use the deterministic scheduler. Options are:
  JAVA (default) - threads in the script are Java threads, scheduled by the host JVM
  DETERMINISTIC - threads are scheduled deterministically, with yield points at every memory access
schedulerPolicy: Select from several scheduling policies:
  FIXED - threads yield at every 'nth' yield point
  RANDOM - threads yield according to a pseudo-random policy
  NEVER - threads only yield at mandatory yieldpoints
yieldInterval: For the FIXED scheduling policy, the yield frequency.
randomPolicyLength, randomPolicySeed, randomPolicyMin, randomPolicyMax: Parameters for the RANDOM scheduler policy. Whenever a thread is created, the scheduler fixes a yield pattern of 'length' integers between 'min' and 'max'. These numbers are used as yield intervals in a circular manner.
policyStats: Dump statistics for the deterministic scheduler's yield policy.
bits: Select between 32-bit and 64-bit memory models (bits=32 or bits=64).
dumpPcode: Dump the pseudo-code generated by the harness interpreter.
timeout: Abort collection if a GC takes longer than this value (seconds). Defaults to 30.
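For example, a typical invocation might combine a plan, a collector count and a trace option. The script path (including its extension) and the particular values below are illustrative only:
java -jar target/mmtk/mmtk-harness.jar MMTk/harness/test-scripts/Lists.script plan=org.mmtk.plan.semispace.SS collectors=2 initHeap=64 variableSizeHeap=false trace=COLLECT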
Scripting language
Basics
The language has three types: integer, object and user-defined. The object type behaves essentially like a pair of arrays, one of pointers and one of integers (odd, I know, but the scripting language is basically concerned with filling up the heap with objects of a certain size and reachability). User-defined types are like Java objects without methods, C structs, Pascal record types, etc.
Objects and user-defined types are allocated with the 'alloc' statement: alloc(p,n,align) allocates an object with 'p' pointers, 'n' integers and the given alignment; alloc(type) allocates an object of the given type. Variables are declared C-style, and are optionally initialized at declaration.
User-defined types are declared as follows:
type list {
int value;
list next;
}
and fields are accessed using Java-style "dot" notation, e.g.
list l = alloc(list);
l.value = 0;
l.next = null;
At this stage, fields can only be dereferenced to one level, e.g. 'l.next.next' is not valid syntax - you need to introduce a temporary variable to achieve this.
Object fields are referenced using syntax like "tmp.int[5]" or "tmp.object[i*3]", i.e. like a struct of arrays of the appropriate types.
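Putting these pieces together, the following is a small illustrative script written in the style of the samples in MMTk/harness/test-scripts. It is not one of the shipped samples; in particular, the assumption that execution begins at a method named main follows the sample scripts, and comments are omitted because the grammar below does not define a comment syntax. The script builds a linked list of 1000 nodes and then forces a collection:
type list {
  int value;
  list next;
}

main() {
  list head = null;
  int i = 0;
  while (i != 1000) {
    list node = alloc(list);
    node.value = i;
    node.next = head;
    head = node;
    i = i + 1;
  }
  gc();
}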
Syntax
script ::= (method|type)...
method ::= ident "(" [ type ident { "," type ident }... ] ")"
( "{" statement... "}"
| "intrinsic" "class" name "method" name "signature" "(" java-class { "," java-class } ")" )
type ::= "type" ident "{" field... "}"
field ::= type ident ";"
statement ::=
"if" "(" expr ")" block { "elif" "(" expr ")" block } [ "else" block ]
| "while "(" expr ")" block
| [ [ type ] ident "=" ] "alloc" "(" expr "," expr [ "," expr ] ")" ";"
| [ ident "=" ] "hash" "(" expr ")" ";"
| "gc" "(" ")"
| "spawn" "(" ident [ "," expr ]... ")" ";"
| type ident [ "=" expr ] ";"
| lvalue "=" expr ";"
lvalue ::= ident
| ident "." type "[" expr "]"
| ident "." ident
type ::= "int" | "object" | ident
expr ::= expr binop expr
| unop expr
| "(" expr ")"
| ident
| ident "." type "[" expr "]"
| ident "." ident
| int-const
| intrinsic
intrinsic ::= "alloc" ( "(" expr "," expr ["," expr] ")
| type
)
| "(" expr ")"
| "gc " "(" ")"
binop ::= "+" | "-" | "*" | "/" | "%" | "&&" | "||" | "==" | "!="
unop ::= "!" | "-"
MMTk Unit Tests
There is a small set of unit tests available for MMTk, using the harness as scaffolding. These tests can be run in the standard test infrastructure using the 'mmtk-unit-tests' test set, or the shell script 'bin/unit-test-mmtk'. Possibly more usefully, they can be run from Eclipse.
To run the unit tests in Eclipse, build the MMTk harness project (see above), and add the directory testing/tests/mmtk/src to your build path (navigate to the directory in the package explorer pane in Eclipse, then right-click > Build Path > Use as Source Folder). Either open one of the test classes, or highlight it in the package explorer, and press the 'run' button.
Page edited by David Grove
This section describes the architecture of Jikes RVM. The RVM can be divided into the following components:
Core Runtime Services: (thread scheduler, class loader, library support, verifier, etc.) This element is responsible for managing all the underlying data structures required to execute applications and interfacing with libraries.
Magic: The mechanisms used by Jikes RVM to support low-level systems programming in Java.
Compilers: (baseline, optimizing, JNI) This component is responsible for generating executable code from bytecodes.
Memory managers: This component is responsible for the allocation and collection of objects during the execution of an application.
Adaptive Optimization System: This component is responsible for profiling an executing application and judiciously using the optimizing compiler to improve its performance.
Page edited by David Grove
If you have extended Jikes RVM and would like to contribute your extension back to the community, please use the patch tracker to submit a patch. When submitting a patch, please include the following:
The patch file for your contribution
The appropriate Statement of Origin
A description of the functionality you are contributing
The version of Jikes RVM used to create your patch
Your contribution will be licensed under the EPL (Eclipse Public License), the license used for Jikes RVM. The license has been approved by the OSI (Open Source Initiative) as a fully certified open source license. If your contribution is included in the system, you will be acknowledged on the contributors web page, along with getting the satisfaction of making the world a better place.
Statement of origin
All contributions must include one of the Statements of Origin below. Insert your name(s) in the first blank(s) and a high-level summary in the remaining blank. Examples of a high-level summary are "Fixed bug in scheduler", "Extended type propagation in optimizing compiler", or "Added new garbage collector".
If your contribution is owned by your employer, someone authorized by your employer to make such a decision must add a comment to the patch in the tracker stating that you have permission to contribute it.
Statement of Origin: Single Contributor, Single Contributor for all Contributions, or Multiple Contributors.
Page edited by David Grove
Jikes RVM is free, open source software, distributed and freely redistributable under the Eclipse Public License v1.0 (EPL-1.0). The EPL has been certified by the Open Source Initiative as an open source license. The EPL meets the Debian Free Software Guidelines.
Note: some code in the libraryInterface tree is distributed under other open source licenses. See the various LICENSE files in that tree for details.
Note: rvm/src-generated/opt-burs/jburg contains a tool, jburg, which was derived from iburg and is not distributed under the EPL. See rvm/src-generated/opt-burs/jburg/LICENSE for details.
Page edited by Filip Pizlo
The Jikes™ RVM project is a collaborative software development project dedicated to providing an open source state-of-the-art infrastructure, freely available for performing research on virtual machine technologies for the Java™ programming language. This document describes the composition of the project and the roles and responsibilities of the participants.
Roles in the Jikes RVM Project
There are various roles people play in the Jikes RVM project. The more you contribute, and the higher the quality of your contribution, the more responsibility you can obtain.
User
Users are the people who use Jikes RVM, without contributing code or documentation to the project. Users are encouraged to participate through the mailing lists, asking questions, providing suggestions, and helping other users. Users are also encouraged to report problems using the bug tracking system. Anyone can be a user.
Contributor
A user who contributes code or documentation becomes a contributor. Contributors are the people who contribute enhancements, bug fixes, documentation, or other work that is incorporated into the system. Anyone can be a contributor.
Project Member
Project members are users or contributors who are also members of the Jikes RVM sourceforge project. Project members do not have write access to the svn repository. However, project members can be given "technician" access to one or more of the project trackers so they are able to accept and process tracker items (for example, bug reports or feature requests).
If you are interested in becoming a project member, you should contact a core team member and indicate in what role(s) you want to contribute to the project. A contributor or user can become a project member by the following sequential process:
they contact a core team member and explain what role they want to fulfill in the project (and thus what privileges they need)
at least 3 other core team members support their addition as project member, and
the Jikes RVM Steering Committee approves the addition by majority vote
Core Team Member
A contributor or project member who gives frequent and valuable contributions can be promoted to a core team member. Core team members have write access to the source code repository, and voting rights allowing them to affect the future of the project. The members of the core team are responsible for virtually all of the day-to-day technical decisions associated with the project. They are the gatekeepers, deciding what new code is added to the system. All contributions will be processed by one or more core team members before potential inclusion into the svn repository.
A contributor or project member can become a core team member by the following sequential process:
they are nominated by an existing core team member,
at least 3 other core team members support their nomination, and
the Jikes RVM Steering Committee approves the nomination by majority vote
Becoming a core team member is a privilege that is earned by contributing and showing good judgment. It is a responsibility that should be neither given nor taken lightly. Active participation on the mailing lists is a responsibility of all core team members, and is critical to the success of the project. Core team members are responsible for proactively reporting problems in the bug tracking system, and annotating problem reports with status information, explanations, clarifications, or requests for more information from the submitter. The core team also ensures that nightly regression tests are run on all supported platforms, monitors the results of the tests, and opens defects to track regression test failures. A subset of the core team does most of this monitoring; however, all core team members are expected to investigate regression test failures that might have been caused by a source code change they committed.
At times, core team members may go inactive for a variety of reasons. The project relies on active core team members who respond to discussions in a constructive and timely manner. A core team member that is disruptive, does not participate actively, or has been inactive for an extended period may have his or her commit status removed by the Jikes RVM Steering Committee.
Current Jikes RVM Core Team
Steve Blackburn, Australian National University
Michael Bond, UT Austin
Peter Donald, La Trobe University
Daniel Frampton, Australian National University
Robin Garner, Australian National University
David Grove, IBM Research
Michael Hind, IBM Research
Andrew John Hughes, University of Sheffield
J. Eliot B. Moss, University of Massachusetts
Filip Pizlo, Fiji Systems LLC
Steering Committee
The Jikes RVM Steering Committee (SC) is a small group that is responsible for the strategic direction and success of the project. This governing and advisory body is expected to ensure the project's welfare and guide its overall direction.
The initial Jikes RVM SC was selected by the core team. Thereafter, to become a member of the SC, an individual must be nominated by a member of the SC, and unanimously approved by all SC members. The goal is to keep the membership of the SC very small. In the unlikely event that a member of the SC becomes disruptive to the process or ceases to contribute for an extended period, the member may be removed by unanimous vote of remaining SC members.
Current Steering Committee
Steve Blackburn, Australian National University
David Grove, IBM Research
Michael Hind, IBM Research
Page edited by Filip Pizlo
The garbage collectors for Jikes RVM are provided by MMTk. The document "MMTk: The Memory Manager Toolkit" describes MMTk, gives a tutorial on how to use and edit it, and is the best place to start.
The RVM can be configured to employ various allocation managers taken from the MMTk memory management toolkit. Managers divide the available space up as they see fit. However, they normally subdivide the available address range to provide:
a metadata area which enables the manager to track the status of allocated and unallocated storage in the rest of the heap.
an immortal data area used to service allocations of objects which are expected to persist across the whole lifetime of the RVM runtime.
a large object space used to service allocations of objects which are larger than some specified size (e.g. a virtual memory page) - the large object space may employ a different allocation and reclamation strategy to that used for other objects.
a small object allocation area which may be divided into, e.g., two semi-spaces, a nursery space and a mature space, a set of generations, a non-relocatable buddy hierarchy, etc., depending upon the allocation and reclamation strategy employed by the memory manager.
Virtual memory pages are lazily mapped into the RVM's memory image as they are needed.
The main class which is used to interface to the memory manager is called Plan. Each flavor of the manager is implemented by substituting a different implementation of this class. Most plans inherit from class StopTheWorldGC which ensures that all active mutator threads (i.e. ones which do not perform the job of reclaiming storage) are suspended before reclamation is commenced. The argument passed to -X:processors determines the number of parallel collector threads that will be used for collection.
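For example, assuming the usual rvm launcher script produced by the build and a hypothetical application class, four parallel collector threads could be requested as follows:
rvm -X:processors=4 MyApplication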
Generational collectors employ a plan which inherits from class Generational. Inter alia, this class ensures that a write barrier is employed so that updates from old to new spaces are detected.
The RVM does not currently support concurrent garbage collection.
Jikes RVM may also use the GCSpy visualization framework. GCSpy allows developers to observe the behavior of the heap and related data structures.
Page edited by Filip Pizlo
This section provides some explanation of how Java™ threads are scheduled and synchronized by Jikes™ RVM.
All Java threads (application threads, garbage collector threads, etc.) derive from RVMThread. Each RVMThread maps directly to one native thread, which may be implemented using whichever C/C++ threading library is in use (currently either pthreads or Harmony threads). Unless -X:forceOneCPU is used, native threads are allowed to be arbitrarily scheduled by the OS using whatever processor resources are available; Jikes™ RVM does not attempt to control the thread-processor mapping at all.
Using native threading gives Jikes™ RVM better compatibility for existing JNI code, as well as improved performance, and greater infrastructure simplicity. Scheduling is offloaded entirely to the operating system; this is both what native code would expect and what maximizes the OS scheduler's ability to optimally schedule Java™ threads. As well, the resulting VM infrastructure is both simpler and more robust, since instead of focusing on scheduling decisions it can take a "hands-off" approach except when Java threads have to be preempted for sampling, on-stack-replacement, garbage collection, Thread.suspend(), or locking. The main task of RVMThread and other code in org.jikesrvm.scheduler is thus to override OS scheduling decisions when the VM demands it.
The remainder of this section is organized as follows. The management of a thread's state is discussed in detail. Mechanisms for blocking and handshaking threads are described. The VM's internal locking mechanism, the Monitor, is described. Finally, the locking implementation is discussed.
Tracking the Thread State
The state of a thread is broken down into two elements:
Should the thread yield at a safe point?
Is the thread running Java code right now?
The first mechanism is provided by the RVMThread.takeYieldpoint field, which is 0 if the thread should not yield, or non-zero if it should yield at the next safe point. Negative versus positive values indicate the type of safe point to yield at (epilogue/prologue, or any, respectively).
But this alone is insufficient to manage threads, as it relies on all threads being able to reach a safe point in a timely fashion. New Java threads may be started at any time, including at the exact moment that the garbage collector is starting; a starting-but-not-yet-started thread may not reach a safe point if the thread that was starting it is already blocked. Java threads may terminate at any time; terminated threads will never again reach a safe point. Any Java thread may call into arbitrary JNI code, which is outside of the VM's control, and may run for an arbitrary amount of time without reaching a Java safe point. As well, other mechanisms of RVMThread may cause a thread to block, thereby making it incapable of reaching a safe point in a timely fashion. However, in each of these cases, the Java thread is "effectively safe" - it is not running Java code that would interfere with the garbage collector, on-stack-replacement, locking, or any other Java runtime mechanism. Thus, a state management system is needed that would notify these runtime services when a thread is "effectively safe" and does not need to be waited on.
RVMThread provides for the following thread states, which describe to other runtime services the state of a Java thread. These states are designed with extreme care to support the following features:
Allow Java threads to execute either Java code, which periodically reaches safe points, or native code, which is "effectively safe" by virtue of not having access to VM services.
Allow other threads (either Java threads or VM threads) to asynchronously request a Java thread to block. This overlaps with the takeYieldpoint mechanism, but adds the following feature: a thread that is "effectively safe" does not have to block.
Prevent race conditions on state changes. In particular, if a thread running native code transitions back to running Java code while some other thread expects it to be either "effectively safe" or blocked at a safe point, then it should block. As well, if we are waiting on some Java thread to reach a safe point but it instead escapes into running native code, then we would like to be notified that even though it is not at a safe point, it is not effectively safe, and thus, we do not have to wait for it anymore.
The states used to put these features into effect are listed below.
NEW. This means that the thread has been created but is not started, and hence is not yet running. NEW threads are always effectively safe, provided that they do not transition to any of the other states.
IN_JAVA. The thread is running Java code. This almost always corresponds to the OS "runnable" state - i.e. the thread has no reason to be blocked, is on the runnable queue, and if a processor becomes available it will execute, if it is not already executing. An IN_JAVA thread will periodically reach safe points at which the takeYieldpoint field will be tested. Hence, setting this field will ensure that the thread will yield in a timely fashion, unless it transitions into one of the other states in the meantime.
IN_NATIVE. The thread is running either native C code, or internal VM code (which, by virtue of Jikes™ RVM's metacircularity, may be written in Java). IN_NATIVE threads are "effectively safe" in that they will not do anything that interferes with runtime services, at least until they transition into some other state. The IN_NATIVE state is most often used to denote threads that are blocked, for example on a lock.
IN_JNI. The thread has called into JNI code. This is identical to the IN_NATIVE state in all ways except one: IN_JNI threads have a JNIEnvironment that stores more information about the thread's execution state (stack information, etc), while IN_NATIVE threads save only the minimum set of information required for the GC to perform stack scanning.
IN_JAVA_TO_BLOCK. This represents a thread that is running Java code, as in IN_JAVA, but has been requested to yield. In most cases, when you set takeYieldpoint to non-zero, you will also change the state of the thread from IN_JAVA to IN_JAVA_TO_BLOCK. If you don't intend on waiting for the thread (for example, in the case of sampling, where you're opportunistically requesting a yield), then this step may be omitted; but in the cases of locking and garbage collection, when a thread is requested to yield using takeYieldpoint, its state will also be changed.
BLOCKED_IN_NATIVE. BLOCKED_IN_NATIVE is to IN_NATIVE as IN_JAVA_TO_BLOCK is to IN_JAVA. When requesting a thread to yield, we check its state; if it's IN_NATIVE, we set it to be BLOCKED_IN_NATIVE.
BLOCKED_IN_JNI. Same as BLOCKED_IN_NATIVE, but for IN_JNI.
TERMINATED. The thread has died. It is "effectively safe", but will never again reach a safe point.
The states are stored in RVMThread.execStatus, an integer field that may be rapidly manipulated using compare-and-swap. This field uses a hybrid synchronization protocol, which includes both compare-and-swap and conventional locking (using the thread's Monitor, accessible via the RVMThread.monitor() method). The rules are as follows:
All state changes except for IN_JAVA to IN_NATIVE or IN_JNI, and IN_NATIVE or IN_JNI back to IN_JAVA, must be done while holding the lock.
Only the thread itself can change its own state without holding the lock.
The only asynchronous state changes (changes to the state not done by the thread that owns it) that are allowed are IN_JAVA to IN_JAVA_TO_BLOCK, IN_NATIVE to BLOCKED_IN_NATIVE, and IN_JNI to BLOCKED_IN_JNI.
The typical algorithm for requesting a thread to block looks as follows:
thread.monitor().lockNoHandshake();
if (thread is running) {
thread.takeYieldpoint=1;
// transitions IN_JAVA -> IN_JAVA_TO_BLOCK, IN_NATIVE->BLOCKED_IN_NATIVE, etc.
thread.setBlockedExecStatus();
if (thread.isInJava()) {
// Thread will reach safe point soon, or else notify us that it left to native code.
// In either case, since we are holding the lock, the thread will effectively block
// on either the safe point or on the attempt to go to native code, since performing
// either state transition requires acquiring the lock, which we are now holding.
} else {
// Thread is in native code, and thus is "effectively safe", and cannot go back to
// running Java code so long as we hold the lock, since that state transition requires
// acquiring the lock.
}
}
thread.monitor().unlock();
Most of the time, you do not have to write such code, as the cases of blocking threads are already implemented. For examples of how to utilize these mechanisms, see RVMThread.block(), RVMThread.hardHandshakeSuspend(), and RVMThread.softHandshake(). A discussion of how to use these methods follows in the section below.
Finally, the valid state transitions are as follows.
NEW to IN_JAVA: occurs when the thread is actually started. At this point it is safe to expect that the thread will reach a safe point in some bounded amount of time, at which point it will have a complete execution context, and its stack will be able to be scanned by the GC.
IN_JAVA to IN_JAVA_TO_BLOCK: occurs when an asynchronous request is made, for example to stop for GC, do a mutator flush, or do an isync on PPC.
IN_JAVA to IN_NATIVE: occurs when the code opts to run in privileged mode, without synchronizing with GC. This state transition is only performed by Monitor, in cases where the thread is about to go idle while waiting for notifications (such as in the case of park, wait, or sleep), and by org.jikesrvm.runtime.FileSystem, as an optimization to allow I/O operations to be performed without a full JNI transition.
IN_JAVA to IN_JNI: occurs in response to a JNI downcall, or return from a JNI upcall.
IN_JAVA_TO_BLOCK to BLOCKED_IN_NATIVE: occurs when a thread that had been asked to perform an async activity decides to go to privileged mode instead. This state always corresponds to a notification being sent to other threads, letting them know that this thread is idle. When the thread is idle, any asynchronous requests (such as mutator flushes) can instead be performed on behalf of this thread by other threads, since this thread is guaranteed not to be running any user Java code, and will not be able to return to running Java code without first blocking, and waiting to be unblocked (see the BLOCKED_IN_NATIVE to IN_JAVA transition).
IN_JAVA_TO_BLOCK to BLOCKED_IN_JNI: occurs when a thread that had been asked to perform an async activity decides to make a JNI downcall, or return from a JNI upcall, instead. In all other regards, this is identical to the IN_JAVA_TO_BLOCK to BLOCKED_IN_NATIVE transition.
IN_NATIVE to IN_JAVA: occurs when a thread returns from idling or running privileged code to running Java code.
BLOCKED_IN_NATIVE to IN_JAVA: occurs when a thread that had been asked to perform an async activity while running privileged code or idling decides to go back to running Java code. The actual transition is preceded by the thread first performing any requested actions (such as mutator flushes) and waiting for a notification that it is safe to continue running (for example, the thread may wait until GC is finished).
IN_JNI to IN_JAVA: occurs when a thread returns from a JNI downcall, or makes a JNI upcall.
BLOCKED_IN_JNI to IN_JAVA: same as BLOCKED_IN_NATIVE to IN_JAVA, except that this occurs in response to a return from a JNI downcall, or as the thread makes a JNI upcall.
IN_JAVA to TERMINATED: the thread has terminated, and will never reach any more safe points, and thus will not be able to respond to any more requests for async activities.
Blocking and Handshaking
Various VM services, such as the garbage collector and locking, may wish to request a thread to block. In some cases, we want to block all threads except for the thread that makes the request. As well, some VM services may only wish for a "soft handshake", where we wait for each thread to perform some action exactly once and then continue (in this case, the only thread that blocks is the thread requesting the soft handshake, but all other threads must "yield" in order to perform the requested action; in most cases that action is non-blocking). A unified facility for performing all of these requests is provided by RVMThread.
Four types of thread blocking and handshaking are supported:
RVMThread.block(). This is a low-level facility for requesting that a particular thread blocks. It is inherently unsafe to use this facility directly - for example, if thread A calls B.block() while thread B calls A.block(), the two threads may mutually deadlock.
RVMThread.beginPairHandshake(). This implements a safe pair-handshaking mechanism, in which two threads become bound to each other for a short time. The thread requesting the pair handshake waits until the other thread is at a safe point or else is "effectively safe", and prevents it from going back to executing Java code. Note that at this point, neither thread will respond to any other handshake requests until RVMThread.endPairHandshake() is called. This is useful for implementing biased locking, but it has general utility anytime one thread needs to manipulate another thread's execution state (a sketch of this pattern follows this list).
RVMThread.softHandshake(). This implements soft handshakes. In a soft handshake, the requesting thread waits for all threads to perform some action exactly once, and then returns. If any of those threads are effectively safe, then the requesting thread performs the action on their behalf. softHandshake() is invoked with a SoftHandshakeVisitor that determines which threads are to be affected, and what the requested action is. An example of how this is used is found in org.jikesrvm.mm.mmtk.Collection and org.jikesrvm.compilers.opt.runtimesupport.OptCompiledMethod.
RVMThread.hardHandshakeSuspend(). This stops all threads except for the garbage collector threads and the thread making the request. It returns once all Java threads are stopped. This is used by the garbage collector itself, but may be of utility elsewhere (for example, dynamic software updating). To resume all stopped threads, call RVMThread.hardHandshakeResume(). Note that this mechanism is carefully designed so that even after the world is stopped, it is safe to request a garbage collection (in that case, the garbage collector will itself call a variant of hardHandshakeSuspend(), but it will only affect the one remaining running Java thread).
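As an illustration of the pair-handshake mechanism described above, here is a minimal sketch (not code taken from the VM). It assumes that beginPairHandshake() and endPairHandshake() are invoked on the thread being paired with, and the work done inside the handshake is purely hypothetical:
void inspectThread(RVMThread victim) {
  victim.beginPairHandshake(); // waits until 'victim' is at a safe point or "effectively safe"
  // 'victim' cannot resume running Java code until the handshake ends, so it is safe
  // to examine or adjust its execution state here (biased-lock revocation, described
  // below, is one user of this pattern).
  victim.endPairHandshake();   // allow 'victim' to continue
}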
The Monitor API
The VM internally uses an OS-based locking implementation, augmented with support for safe lock recursion and awareness of handshakes. The Monitor API provides locking and notification, similar to a Java lock, and may be implemented using either a pthread_mutex and a pthread_cond, or using Harmony's monitor API.
Acquiring a Monitor lock, or awaiting notification, may cause the calling RVMThread to block. This prevents the calling thread from acknowledging handshakes until the blocking call returns. In some cases, this is desirable. For example:
In the implementation of handshakes, the code already takes special care to use the RVMThread state machine to notify other threads that the caller may block. As such, acquiring a lock or waiting for a notification is safe.
If acquiring a lock that may only be held for a short, guaranteed-bounded length of time, the fact that the thread will ignore handshake requests while blocking is safe - the lock acquisition request will return in bounded time, allowing the thread to acknowledge any pending handshake requests.
But in all other cases, the calling thread must ensure that the handshake mechanism is notified that the thread will block. Hence, all blocking Monitor methods have both a "NoHandshake" and a "WithHandshake" version. Consider the following code:
someMonitor.lockNoHandshake();
// perform fast, bounded-time critical section
someMonitor.unlock(); // non-blocking
In this code, lock acquisition is done without notifying handshakes, which makes the acquisition faster. It is safe here because the critical section is bounded-time; we also require that any other critical sections protected by someMonitor are bounded-time. If, on the other hand, the critical section were not bounded-time, we would do:
someMonitor.lockWithHandshake();
// perform potentially long critical section
someMonitor.unlock();
In this case, the lockWithHandshake() operation will transition the calling thread to the IN_NATIVE state before acquiring the lock, and then transition it back to IN_JAVA once the lock is acquired. This may cause the thread to block, if a handshake is in progress. As an added safety provision, if the lockWithHandshake() operation blocks due to a handshake, it will ensure that it does so without holding the someMonitor lock.
A special Monitor is provided with each thread. This monitor is of the type NoYieldpointsMonitor and will also ensure that yieldpoints (safe points) are disabled while the lock is held. This is necessary because any safe point may release the Monitor lock by waiting on it, thereby breaking atomicity of the critical section. The NoYieldpointsMonitor for any RVMThread may be accessed using the RVMThread.monitor() method.
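For example, a fast, bounded-time update to a thread's own state might be guarded by that thread's monitor, as in the minimal sketch below (getCurrentThread() is assumed to be the usual accessor for the running RVMThread):
RVMThread me = RVMThread.getCurrentThread(); // assumed accessor for the running thread
me.monitor().lockNoHandshake();  // NoYieldpointsMonitor: yieldpoints are disabled while held
// ... short, bounded-time manipulation of this thread's state ...
me.monitor().unlock();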
Additional information about how to use this API is found in the following section, which discusses the implementation of Java locking.
Thin and Biased Locking
Jikes™ RVM uses a hybrid thin/biased locking implementation that is designed for very high performance under any of the following loads:
Locks only ever acquired by one thread. In this case, biased locking is used, and no atomic operations (like compare-and-swap) need to be used to acquire and release locks.
Locks acquired by multiple threads but rarely under contention. In this case, thin locking is used; acquiring and releasing the lock involves a fast inlined compare-and-swap operation. It is not as fast as biased locking on most architectures.
Contended locks. Under sustained contention, the lock is "inflated" - the lock will now consist of data structures used to implement a fast barging FIFO mutex. A barging FIFO mutex allows threads to immediately acquire the lock as soon as it is available, or otherwise enqueue themselves on a FIFO and await its availability.
Thin locking has a relatively simple implementation; roughly 20 bits in the object header are used to represent the current lock state, and compare-and-swap is used to manipulate it. Biased locking and contended locking are more complicated, and are described below.
Biased locking makes the optimistic assumption that only one thread will ever want to acquire the lock. So long as this assumption holds, acquisition of the lock is a simple non-atomic increment/decrement. However, if the assumption is violated (a thread other than the one to which the lock is biased attempts to acquire the lock), a fallback mechanism is used to turn the lock into either a thin or contended lock. This works by using RVMThread.beginPairHandshake() to bring both the thread that is requesting the lock and the thread to which the lock is biased to a safe point. No other threads are affected; hence this system is very scalable. Once the pair handshake begins, the thread requesting the lock changes the lock into either a thin or contended lock, and then ends the pair handshake, allowing the thread to which the lock was biased to resume execution, while the thread requesting the lock may now contend on it using normal thin/contended mechanisms.
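A hedged sketch of this revocation path appears below. The lock-word manipulation is elided, and the exact form of the pair-handshake calls (static versus invoked on the biased thread) should be checked against RVMThread and ThinLock; the sketch assumes they are invoked on the thread to which the lock is biased.
// Sketch only: thread 'me' wants a lock currently biased to thread 'owner'.
owner.beginPairHandshake();   // bring 'owner' to a safe point (or "effectively safe") and hold it there
// ... rewrite the lock word in the object header from "biased to owner"
//     to a thin or fat (contended) lock state ...
owner.endPairHandshake();     // 'owner' resumes; 'me' now contends using the normal thin/fat path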
Contended locks, or "fat locks", consist of three mechanisms:
A spin lock to protect the data structures.
A queue of threads blocked on the lock.
A mechanism for blocked threads to go to sleep until awoken by being dequeued.
The spin lock is a org.jikesrvm.scheduler.SpinLock. The queue is implemented in org.jikesrvm.scheduler.ThreadQueue. And the blocking/unblocking mechanism leverages org.jikesrvm.scheduler.Monitor; in particular, it uses the Monitor that is attached to each thread, accessible via RVMThread.monitor(). The basic algorithm for lock acquisition is:
spinLock.lock();
while (true) {
  if (lock available) {
    acquire the lock;
    break;
  } else {
    queue.enqueue(me);
    spinLock.unlock();
    me.monitor().lockNoHandshake();
    while (queue.isQueued(me)) {
      // put this thread to sleep waiting to be dequeued, and do so while the thread
      // is IN_NATIVE to ensure that other threads don't wait on this one for
      // handshakes while we're blocked.
      me.monitor().waitWithHandshake();
    }
    me.monitor().unlock();
    spinLock.lock();
  }
}
spinLock.unlock();
The algorithm for unlocking dequeues the thread at the head of the queue (if there is one) and notifies its Monitor using the lockedBroadcastNoHandshake() method. Note that these algorithms span multiple methods in org.jikesrvm.scheduler.ThinLock and org.jikesrvm.scheduler.Lock; in particular, lockHeavy(), lockHeavyLocked(), unlockHeavy(), lock(), and unlock().
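In the same pseudocode style, the corresponding unlock path might look roughly like the sketch below; it illustrates the description above rather than reproducing the literal code in Lock.
spinLock.lock();
release the lock;
toAwaken = queue.dequeue();   // may be null if no thread is waiting
spinLock.unlock();
if (toAwaken != null) {
  // wake the dequeued thread; it will see that it is no longer queued and retry acquisition
  toAwaken.monitor().lockedBroadcastNoHandshake();
}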
Posted over 16 years ago by Filip Pizlo. Page edited by Filip Pizlo.
This section provides some tips on collecting performance numbers with Jikes RVM.
Which boot image should I use?
To make a long story short, the best performing configuration of Jikes RVM will almost always be production. Unless you really know what you are doing, don't use any other configuration to do a performance evaluation of Jikes RVM.
Any boot image you use for performance evaluation must have the following characteristics for the results to be meaningful:
config.assertions=none. Unless this is set, the runtime system and optimizing compiler will perform fairly extensive assertion checking. This introduces significant runtime overhead. By convention, a configuration with the Fast prefix disables assertion checking.
config.bootimage.compiler=opt. Unless this is set, the boot image will be compiled with the baseline compiler and virtual machine performance will be abysmal. Jikes RVM has been designed under the assumption that aggressive inlining and optimization will be applied to the VM source code.
What command-line arguments should I use?
For best performance we recommend the following:
-X:processors=all: By default, Jikes™ RVM uses only one processor for garbage collection. Setting this option tells the garbage collection system to utilize all available processors.
Set the heap size generously. We typically set the heap size to at least half the physical memory on a machine.
Compiler Replay
The compiler-replay methodology is deterministic and eliminates variations in memory allocation and mutator behavior that are caused by the non-deterministic application of the adaptive compiler. This methodology is needed because the non-determinism of the adaptive compilation system makes it a difficult platform for detailed performance studies: we cannot determine whether a variation is due to the system change being studied or merely to a different application of the adaptive compiler. The information recorded and used consists of the hot methods and hot basic blocks, plus the dynamic call graph with a calling frequency on each edge, which drives inlining decisions.
Here is how to use it:
Generate the profile information, using the following command line arguments:
For edge profile
-X:base:edge_counters=true
-X:base:edge_counter_file=my_edge_counter_file
For adaptive compilation profile
-X:aos:enable_advice_generation=true
-X:aos:cafo=my_compiler_advice_file
For dynamic call graph profile (used by adaptive inlining)
-X:aos:dcfo=my_dynamic_call_graph_file
-X:aos:final_report_level=2
Typically you might run a benchmark several times and choose the set of replay data that produced the best performance.
Use the profile you generated for compiler replay, using the following command line arguments:
-X:aos:enable_replay_compile=true
-X:vm:edgeCounterFile=my_edge_counter_file
-X:aos:cafi=my_compiler_advice_file
-X:aos:dcfi=my_dynamic_call_graph_file
Measuring GC performance
MMTk includes a statistics subsystem and a harness mechanism for measuring its performance. If you are using the DaCapo benchmarks, the MMTk harness can be invoked using the '-c MMTkCallback' command line option, but for other benchmarks you will need to invoke the harness by calling the static methods
org.mmtk.plan.Plan.harnessBegin()
org.mmtk.plan.Plan.harnessEnd()
at the appropriate places. Other command line switches that affect the collection of statistics are
Option
Description
-X:gc:printPhaseStats=true
Print statistics for each mutator/gc phase during the run
-X:gc:xmlStats=true
Print statistics in an XML format (as opposed to human-readable format)
-X:gc:verbose
This is incompatible with MMTk's statistics system.
-X:gc:variableSizeHeap=false
Disable dynamic resizing of the heap
Unless you are specifically researching flexible heap sizes, it is best to run benchmarks in a fixed size heap, using a range of heap sizes to produce a curve that reflects the space-time tradeoff. Using replay compilation and measuring the second iteration of a benchmark is a good way to produce results with low noise.
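As a concrete illustration of the harness calls and the "measure the second iteration" approach described above, a benchmark driver might look like the following minimal sketch (runOneIteration() is a hypothetical workload method):
runOneIteration();                  // warm-up: compilation happens here and is not measured
org.mmtk.plan.Plan.harnessBegin();  // start MMTk statistics collection
runOneIteration();                  // the measured iteration
org.mmtk.plan.Plan.harnessEnd();    // stop collection and report the statistics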
There is an active debate among memory management and VM researchers about how best to measure performance, and this section is not meant to dictate or advocate any particular position, simply to describe one particular methodology.
Jikes RVM is really slow! What am I doing wrong?
Perhaps you are not seeing stellar Jikes™ RVM performance. If Jikes RVM configured as described above is not competitive with product JVMs, we recommend you test your installation with the DaCapo benchmarks. We expect Jikes RVM performance to be very close to Sun's HotSpot 1.5 server on the DaCapo benchmarks (see our Nightly DaCapo performance comparisons page for the daily data). Of course, running DaCapo well does not guarantee that Jikes RVM runs all codes well.
Some kinds of code will not run fast on Jikes RVM. Known issues include:
Jikes RVM start-up may be slow compared to some product JVMs.
Remember that the non-adaptive configurations (-X:aos:enable_recompilation=false -X:aos:initial_compiler=opt) opt-compile every method the first time it executes. With aggressive optimization levels, opt-compiling will severely slow down the first execution of each method. For many benchmarks, it is possible to test the quality of generated code by either running for several iterations and ignoring the first, or by building a warm-up period into the code. The SPEC benchmarks already use these strategies. The adaptive configuration does not have this problem; however, we cannot promise that the adaptive system will compete with product JVMs on short-running codes of a few seconds.
Performance on tight loops may suffer. The Jikes RVM mechanism for safe points (thread preemption for garbage collection, on-stack-replacement, profiling, etc) relies on the insertion of a yield test on every back edge. This will hurt tight loops, including many simple microbenchmarks. We should someday alleviate this problem by strip-mining and hoisting the yield point out of hot loops, or implementing a safe point mechanism that does not require an explicit check.
The load balancing in the system is naive and unfair. This can hurt some styles of codes, including bulk-synchronous parallel programs.
The Jikes RVM developers wish to ensure that Jikes RVM delivers competitive performance. If you can isolate reproducible performance problems, please let us know.
Posted over 16 years ago by Filip Pizlo. Page edited by Filip Pizlo.
Jikes RVM includes a testing framework for running functional and performance tests, and it also includes a number of actual tests. See External Test Resources for details on downloading prerequisites for the tests. The tests are executed using an Ant build file and produce results that conform to the definition below. The results are aggregated and processed to produce a high level report defining the status of Jikes RVM.
The testing framework was designed to support continuous and periodic execution of tests. A "test-run" occurs every time the testing framework is invoked. Every "test-run" will execute one or more "test-configuration"s. A "test-configuration" defines a particular build "configuration" (see Configuring the RVM for details) combined with a set of parameters that are passed to the RVM during the execution of the tests. For example, a particular "test-configuration" may pass parameters such as -X:aos:enable_recompilation=false -X:aos:initial_compiler=opt -X:irc:O1 to test the Level 1 Opt compiler optimizations.
Every "test-configuration" will execute one or more "group"s of tests. Every "group" is defined by a Ant build.xml file in a separate sub-directory of $RVM_ROOT/testing/tests. Each "test" has a number of input parameters such as the classname to execute, the parameters to pass to the RVM or to the program. The "test" records a number of values such as execution time, exit code, result, standard output etc. and may also record a number of statistics if it is a performance test.
The project includes several different types of test runs; a description of each test run and its purpose is given in Test Run Descriptions.
Note
The buildit script provides a fast and easy way to build and test the system. The script is simply a wrapper around the mechanisms described below.
Ant Properties
There are a number of Ant properties that control the test process. Besides the properties that are already defined in Building the RVM, the following properties may also be specified.
Property
Description
Default
test-run.name
The name of the test-run. The name should match one of the files located in the build/test-runs/ directory minus the '.properties' extension.
pre-commit
results.dir
The directory where Ant stores the results of the test run.
${jikesrvm.dir}/results
results.archive
The directory where Ant gzips and archives a copy of test run results and reports.
${results.dir}/archive
send.reports
Define this property to send reports via email.
(Undefined)
mail.from
The from address used when emailing report.
[email protected]
mail.to
The to address used when emailing report.
[email protected]
mail.host
The host to connect to when sending mail.
localhost
mail.port
The port to connect to when sending mail.
25
<configuration>.built
If set to true, the test process will skip the build step for specified configurations. For the test process to work the build must already be present.
(Undefined)
skip.build
If defined the test process will skip the build step for all configurations and the javadoc generation step. For the test process to work the build must already be present.
(Undefined)
skip.javadoc
If defined the test process will skip the javadoc generation step.
(Undefined)
Defining a test-run
A test-run is defined by a number of properties located in a property file located in the build/test-runs/ directory.
The property test.configs is a whitespace-separated list of test-configuration "tags". Every tag uniquely identifies a particular test-configuration. Every test-configuration is defined by a number of properties in the property file that are prefixed with test.config.<tag>.; the following table defines the possible properties.
Property
Description
Default
tests
The names of the test groups to execute.
None
name
The unique identifier for test-configuration.
""
configuration
The name of the RVM build configuration to test.
<tag>
target
The name of the RVM build target. This can be used to trigger compilation of a profiled image
"main"
mode
The test mode. May modify the way test groups execute. See individual groups for details.
""
extra.args
Extra arguments that are passed to the RVM.
""
extra.rvm.args
Extra arguments that are passed to the RVM. These may be varied for different runs using the same image.
""
Note
The order of the test-configurations in test.configs is the order in which the test-configurations are tested. The order of the groups in test.config.<tag>.tests is the order in which the tests are executed.
The simplest test-run is defined in the following figure. It will use the build configuration "prototype" and execute tests in the "basic" group.
build/test-runs/simple.properties
test.configs=prototype
test.config.prototype.tests=basic
The test process also expands properties in the property file, so it is possible to define a set of tests once but use them in multiple test-configurations, as occurs in the following figure. The groups basic, optests and dacapo are executed in both the prototype and prototype-opt test-configurations.
build/test-runs/property-expansion.properties
test.set=basic optests dacapo
test.configs=prototype prototype-opt
test.config.prototype.tests=${test.set}
test.config.prototype-opt.tests=${test.set}
Test Specific Parameters
Each test can have additional parameters specified that will be used by the test infrastructure when starting the Jikes RVM instance to execute the test. These additional parameters are described in the following table.
Parameter
Description
Default Property
Default Value
initial.heapsize
The initial size of the heap.
${test.initial.heapsize}
${config.default-heapsize.initial}
max.heapsize
The maximum size of the heap.
${test.max.heapsize}
${config.default-heapsize.maximum}
max.opt.level
The maximum optimization level for the tests or an empty string to use the Jikes RVM default.
${test.max.opt.level}
""
processors
The number of processors to use for garbage collection for the test or 'all' to use all available processors.
${test.processors}
all
time.limit
The time limit for the test in seconds. After the time limit expires the Jikes RVM instance will be forcefully terminated.
${test.time.limit}
1000
class.path
The class path for the test.
${test.class.path}
extra.args
Extra arguments that are passed to the RVM.
${test.rvm.extra.args}
""
exclude
If set to true, the test will be not be executed.
""
To determine the value of a test-specific parameter, the following mechanism is used:
Search for one of the following Ant properties, in order:
test.config.<build-configuration>.<group>.<test>.<parameter>
test.config.<build-configuration>.<group>.<parameter>
test.config.<build-configuration>.<parameter>
If none of the above properties are defined then use the parameter that was passed to the <rvm> macro in the ant build file.
If no parameter was passed to the <rvm> macro then use the default value which is stored in the "Default Property" as specified in the above table. By default the value of the "Default Property" is specified as the "Default Value" in the above table, however a particular build file may specify a different "Default Value".
Excluding tests
Sometimes it is desirable to exclude tests. A test may be excluded because it is known to fail on a particular target platform or build configuration, or simply because it takes too long. To exclude a test, set the test-specific parameter "exclude" to true, either in .ant.properties or in the test-run properties file.
For example, at the time of writing Jikes RVM does not fully support volatile fields, and as a result the test named "TestVolatile" in the "basic" group will always fail. Rather than being notified of this failure, we can disable the test by adding a property such as "test.config.basic.TestVolatile.exclude=true" to the test-run properties file.
Executing a test-run
The tests are executed by the Ant driver script test.xml. The test-run.name property defines the particular test-run to execute and, if not set, defaults to "sanity". The command ant -f test.xml -Dtest-run.name=simple executes the test-run defined in build/test-runs/simple.properties. When this command completes you can point your browser at ${results.dir}/tests/${test-run.name}/Report.html for an overview of the test run, or at ${results.dir}/tests/${test-run.name}/Report.xml for an XML document describing the test results.
View Online
Changes between revision 22
and revision 23:
Jikes RVM includes a testing framework for running functional and performance tests and it also includes a number of actual tests. See [External Test Resources] for details or downloading prerequisites for the tests. The tests are executed using an Ant build file and produce results that conform to the definition below. The results are aggregated and processed to produce a high level report defining the status of Jikes RVM.
The testing framework was designed to support continuous and periodical execution of tests. A "_test-run_" occurs every time the testing framework is invoked. Every "_test-run_" will execute one or more "_test-configuration_"s. A "_test-configuration_" defines a particular build "_configuration_" (See [Configuring the RVM] for details) combined with a set of parameters that are passed to the RVM during the execution of the tests. i.e. a particular "_test-configuration_" may pass parameters such as {{\-X:aos:enable_recompilation=false \-X:aos:initial_compiler=opt \-X:irc:O1}} to test the Level 1 Opt compiler optimizations.
Every "_test-configuration_" will execute one or more "_group_"s of tests. Every "_group_" is defined by a Ant build.xml file in a separate sub-directory of {{$RVM_ROOT/testing/tests}}. Each "_test_" has a number of input parameters such as the classname to execute, the parameters to pass to the RVM or to the program. The "_test_" records a number of values such as execution time, exit code, result, standard output etc. and may also record a number of statistics if it is a performance test.
The project includes several different types of \_test run_s and the description of each the test runs and their purpose is given in [Test Run Descriptions].
{note:title=Note}
The [buildit|Using buildit] script provides a fast and easy way to build and the system. The script is simply a wrapper around the mechanisms described below.
{note}
h2. Ant Properties
There is a number of ant properties that control the test process. Besides the properties that are already defined in [Building the RVM] the following properties may also be specified.
|| Property || Description || Default ||
| test-run.name | The name of the _test-run_. The name should match one of the files located in the [build/test-runs/|http://svn.sourceforge.net/viewvc/jikesrvm/rvmroot/trunk/build/test-runs/] directory minus the '.properties' extension. | pre-commit |
| results.dir | The directory where Ant stores the results of the test run. | {{${jikesrvm.dir\}/results}} |
| results.archive | The directory where Ant gzips and archives a copy of test run results and reports. | {{$\{results.dir\}/archive}} |
| send.reports | Define this property to send reports via email. | (Undefined) |
| mail.from | The from address used when emailing report. | [email protected] |
| mail.to | The to address used when emailing report. | [email protected] |
| mail.host | The host to connect to when sending mail. | localhost |
| mail.port | The port to connect to when sending mail. | 25 |
| <configuration>.built | If set to true, the test process will skip the build step for specified configurations. For the test process to work the build must already be present. | (Undefined) |
| skip.build | If defined the test process will skip the build step for all configurations and the javadoc generation step. For the test process to work the build must already be present. | (Undefined) |
| skip.javadoc | If defined the test process will skip the javadoc generation step. | (Undefined) |
h2. Defining a test-run
A _test-run_ is defined by a number of properties located in a property file located in the [build/test-runs/|http://svn.sourceforge.net/viewvc/jikesrvm/rvmroot/trunk/build/test-runs/] directory.
The property _test.configs_ is a whitespace separated list of _test-configuration_ "tags". Every tag uniquely identifies a particular _test-configuration_. Every _test-configuration_ is defined by a number of properties in the property file that are prefixed with _test.config.<tag>._ and the following table defines the possible properties.
|| Property || Description || Default ||
| tests | The names of the test groups to execute. | None |
| name | The unique identifier for _test-configuration_. | "" |
| configuration | The name of the RVM build configuration to test. | <tag> |
| target | The name of the RVM build target. This can be used to trigger compilation of a profiled image | "main" |
| mode | The test mode. May modify the way test groups execute. See individual groups for details. | "" |
| extra.args | Extra arguments that are passed to the RVM. | "" |
| extra.rvm.args | Extra arguments that are passed to the RVM. These may be varied for different runs using the same image. | "" |
{info:title=Note}
The order of the test-configurations in _test.configs_ is the order that the test-configurations are tested. The order of the groups in _test.config.<tag>.test_ is the order that the tests are executed.
{info}
The simplest _test-run_ is defined in the following figure. It will use the build configuration "_prototype_" and execute tests in the "_basic_" group.
{noformat:title=build/test-runs/simple.properties}
test.configs=prototype
test.config.prototype.tests=basic
{noformat}
The test process also expands properties in the property file so it is possible to define a set of tests once but use them in multiple test-configurations as occurs in the following figure. The groups basic, optests and dacapo are executed in both the prototype and prototype-opt test\configurations.
{noformat:title=build/test-runs/property-expansion.properties}
test.set=basic optests dacapo
test.configs=prototype prototype-opt
test.config.prototype.tests=${test.set}
test.config.prototype-opt.tests=${test.set}
{noformat}
h3. Test Specific Parameters
Each test can have additional parameters specified that will be used by the test infrastructure when starting the Jikes RVM instance to execute the test. These additional parameters are described in the following table.
|| Parameter || Description || Default Property || Default Value ||
| initial.heapsize | The initial size of the heap. | $\{test.initial.heapsize\} | $\{config.default-heapsize.initial\} |
| max.heapsize | The initial size of the heap. | $\{test.max.heapsize\} | $\{config.default-heapsize.maximum\} |
| max.opt.level | The maximum optimization level for the tests or an empty string to use the Jikes RVM default. | $\{test.max.opt.level\} | "" |
| processors | The number of processors to use for the test or 'all' to use all available processors. | $\{test.processors\} | all |
| processors | The number of processors to use for garbage collection for the test or 'all' to use all available processors. | $\{test.processors\} | all |
| time.limit | The time limit for the test in seconds. After the time limit expires the Jikes RVM instance will be forcefully terminated. | $\{test.time.limit\} | 1000 |
| class.path | The class path for the test. | $\{test.class.path\} | |
| extra.args | Extra arguments that are passed to the RVM. | $\{test.rvm.extra.args\} | "" |
| exclude | If set to true, the test will be not be executed. | | "" |
To determine the value of a test specific parameters, the following mechanism is used;
# Search for one of the the following ant properties, in order.
## test.config.<build-configuration>.<group>.<test>.<parameter>
## test.config.<build-configuration>.<group>.<parameter>
## test.config.<build-configuration>.<parameter>
## test.config.<build-configuration>.<group>.<test>.<parameter>
## test.config.<build-configuration>.<group>.<parameter>
# If none of the above properties are defined then use the parameter that was passed to the <rvm> macro in the ant build file.
# If no parameter was passed to the <rvm> macro then use the default value which is stored in the "Default Property" as specified in the above table. By default the value of the "Default Property" is specified as the "Default Value" in the above table, however a particular build file may specify a different "Default Value".
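For example, the following sketch of a test-run properties file shows how the search order lets a setting be narrowed from a whole build configuration down to a single test (the test name "TestFoo" and the values are purely illustrative):
{noformat}
# applies to every test run against the "prototype" build configuration
test.config.prototype.time.limit=2000
# applies to every test in the "basic" group under "prototype"
test.config.prototype.basic.max.heapsize=200
# applies only to the hypothetical "TestFoo" test in the "basic" group
test.config.prototype.basic.TestFoo.time.limit=3000
{noformat}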
h3. Excluding tests
Sometimes it is desirable to exclude tests. A test may be excluded because it is known to fail on a particular target platform or build configuration, or simply because it takes too long. To exclude a test, define the test-specific parameter "exclude" to true, either in .ant.properties or in the test-run properties file.
For example, at the time of writing Jikes RVM does not support suspending and resuming threads, so the test named "TestSuspend" in the "basic" group will always fail. Rather than being notified of this failure we can disable the test by adding a property such as "test.config.basic.TestSuspend.exclude=true" to the test-run properties file.
Similarly, at the time of writing Jikes RVM does not fully support volatile fields, so the test named "TestVolatile" in the "basic" group will always fail. It can be disabled by adding a property such as "test.config.basic.TestVolatile.exclude=true" to the test-run properties file.
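Both exclusions can live in the test-run properties file alongside the rest of the test-run definition, for example:
{noformat}
# known failures: thread suspension and volatile fields are not yet supported
test.config.basic.TestSuspend.exclude=true
test.config.basic.TestVolatile.exclude=true
{noformat}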
h2. Executing a test-run
The tests are executed by the Ant driver script _test.xml_. The _test-run.name_ property defines the particular test-run to execute; if not set, it defaults to "_sanity_". The command {{ant \-f test.xml \-Dtest-run.name=simple}} executes the test-run defined in _build/test-runs/simple.properties_. When this command completes you can point your browser at {{$\{results.dir\}/tests/$\{test-run.name\}/Report.html}} for an overview of the test run, or at {{$\{results.dir\}/tests/$\{test-run.name\}/Report.xml}} for an XML document describing the test results.
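For instance, to execute the simple test-run defined earlier and then locate its report:
{noformat}
ant -f test.xml -Dtest-run.name=simple
# when the run completes, open ${results.dir}/tests/simple/Report.html in a browser
{noformat}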
Posted
over 16 years
ago
by
Filip Pizlo
Page
edited by
Filip Pizlo
Jikes[™|Trademarks] RVM executes Java virtual machine byte code instructions from {{.class}} files. It does _not_ compile Java[™|Trademarks] source code. Therefore, you must compile all Java source files into bytecode using your favorite Java compiler.
For example, to run class {{foo}} with source code in file {{foo.java}}:
{noformat}
% javac foo.java
% rvm foo
{noformat}
The general syntax is
{noformat}
rvm [rvm options...] class [args...]
{noformat}
You may choose from a myriad of options for the {{rvm}} command-line. Options fall into two categories: _standard_ and _non-standard_. Non-standard options are preceded by "{{*\-X:*}}".
h3. Standard Command-Line Options
We currently support a subset of the JDK 1.5 standard options. Below is a list of all options and their descriptions. Unless otherwise noted each option is supported in Jikes RVM.
|| Option || Description ||
| \{-cp or \-classpath\} <directories and zip/jar files separated by ":"> | set search path for application classes and resources |
| \-D<name>=<value> | set a system property |
| \-verbose:\[ class \| gc \| jni \] | enable verbose output |
| \-version | print current VM version and terminate the run |
| \-showversion | print current VM version and continue running |
| \-fullversion | like "-version", but with more information |
| \-? or \-help | print help message |
| \-X | print help on non-standard options |
| \-jar | execute a jar file |
| \-javaagent:<jarpath>\[=<options>\] | load Java programming language agent, see java.lang.instrument |
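As a sketch of how these standard options combine on a single command line (the class path entries, system property and main class are hypothetical):
{noformat}
rvm -classpath myapp.jar:lib/util.jar -Dmyapp.mode=test -verbose:gc org.example.Main arg1 arg2
{noformat}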
h3. Non-Standard Command-Line Options
The non-standard command-line options are grouped according to the subsystem that they control. The following sections list the available options in each group.
h4. Core Non-Standard Command-Line Options
|| Option || Description ||
| \-X:verbose | Print out additional low-level information for GC and hardware trap handling |
| \-X:verboseBoot=<number> | Print out additional information while VM is booting, using verbosity level <number> |
| \-X:sysLogfile=<filename> | Write standard error messages to <filename> |
| \-X:ic=<filename> | Read boot image code from <filename> |
| \-X:id=<filename> | Read boot image data from <filename> |
| \-X:ir=<filename> | Read boot image ref map from <filename> |
| \-X:vmClasses=<path> | Load the com.ibm.jikesrvm.\* and java.\* classes from <path> |
| \-X:processors=<number\|"all"> | The number of processors that the garbage collector will use |
h4. Memory Non-Standard Command-Line Options
|| Option || Description ||
| \-Xms<number><unit> | Initial size of heap, where <number> is an integer, an extended-precision floating point value or a hexadecimal value, and <unit> is one of T (Terabytes), G (Gigabytes), M (Megabytes), pages (of size 4096), K (Kilobytes) or <no unit> for bytes |
| \-Xmx<number><unit> | Maximum size of heap. See above for definition of <number> and <unit> |
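For example, to start with a 50 megabyte heap that may grow to at most 200 megabytes (the main class is hypothetical):
{noformat}
rvm -Xms50M -Xmx200M org.example.Main
{noformat}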
h4. Garbage Collector Non-Standard Command-Line Options
These options are all prefixed by {{\-X:gc:}}.
Boolean options.
|| Option || Description ||
| protectOnRelease | Should memory be protected on release? |
| echoOptions | Echo when options are set? |
| printPhaseStats | When printing statistics, should statistics for each gc-mutator phase be printed? |
| xmlStats | Print end-of-run statistics in XML format |
| eagerCompleteSweep | Should we eagerly finish sweeping at the start of a collection |
| fragmentationStats | Should we print fragmentation statistics for the free list allocator? |
| verboseFragmentationStats | Should we print verbose fragmentation statistics for the free list allocator? |
| verboseTiming | Should we display detailed breakdown of where GC time is spent? |
| noFinalizer | Should finalization be disabled? |
| noReferenceTypes | Should reference type processing be disabled? |
| fullHeapSystemGC | Should a major GC be performed when a system GC is triggered? |
| ignoreSystemGC | Should we ignore calls to java.lang.System.gc? |
| variableSizeHeap | Should we shrink/grow the heap to adjust to application working set? |
| eagerMmapSpaces | If true, all spaces are eagerly demand zero mmapped at boot time |
| sanityCheck | Perform sanity checks before and after each collection? |
Value options.
|| Option || Type || Description ||
| markSweepMarkBits | int | Number of bits to use for the header cycle of mark sweep spaces |
| verbose | int | GC verbosity level |
| stressFactor | bytes | Force a collection after this much allocation |
| metaDataLimit | bytes | Trigger a GC if the meta data volume grows to this limit |
| boundedNursery | bytes | Bound the maximum size of the nursery to this value |
| fixedNursery | bytes | Fix the minimum and maximum size of the nursery to this value |
| debugAddress | address | Specify an address at runtime for use in debugging |
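Combining the {{\-X:gc:}} prefix with the options above, an illustrative invocation that disables heap resizing, prints per-phase statistics and raises GC verbosity (the main class is hypothetical) is:
{noformat}
rvm -X:gc:variableSizeHeap=false -X:gc:printPhaseStats=true -X:gc:verbose=2 org.example.Main
{noformat}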
h4. Base Compiler Non-Standard Command-Line Options
Boolean options
|| Option || Description ||
| edge_counters | Insert edge counters on all bytecode-level conditional branches |
| invocation_counters | Select methods for optimized recompilation by using invocation counters |
h4. Opt Compiler Non-Standard Command-Line Options
Boolean options.
|| Option || Description ||
| local_constant_prop | Perform local constant propagation |
| local_copy_prop | Perform local copy propagation |
| local_cse | Perform local common subexpression elimination |
| global_bounds | Perform global Array Bound Check elimination on Demand |
| monitor_removal | Try to remove unnecessary monitor operations |
| invokee_thread_local | Compile the method assuming the invokee is thread-local |
| no_callee_exceptions | Assert that any callee of this compiled method will not throw exceptions? |
| simple_escape_ipa | Eagerly compute method summaries for simple escape analysis |
| field_analysis | Eagerly compute method summaries for flow-insensitive field analysis |
| scalar_replace_aggregates | Perform scalar replacement of aggregates |
| reorder_code | Reorder basic blocks for improved locality and branch prediction |
| reorder_code_ph | Reorder basic blocks using Pettis and Hansen Algo2 |
| inline_new | Inline allocation of scalars and arrays |
| inline_write_barrier | Inline write barriers for generational collectors |
| inline | Inline statically resolvable calls |
| guarded_inline | Guarded inlining of non-final virtual calls |
| guarded_inline_interface | Speculatively inline non-final interface calls |
| static_splitting | CFG splitting to create hot traces based on static heuristics |
| redundant_branch_elimination | Eliminate redundant conditional branches |
| preex_inline | Pre-existence based inlining |
| ssa | Should SSA form be constructed on the HIR? |
| load_elimination | Should we perform redundant load elimination during SSA pass? |
| coalesce_after_ssa | Should we coalesce move instructions after leaving SSA? |
| expression_folding | Should we try to fold expressions with constants in SSA form? |
| live_range_splitting | Split live ranges using LIR SSA pass? |
| gcp | Perform global code placement |
| gcse | Perform global common subexpression elimination |
| verbose_gcp | Perform noisy global code placement |
| licm_ignore_pei | Assume PEIs do not throw or state is not observable |
| unwhile | Turn whiles into untils |
| loop_versioning | Loop versioning |
| handler_liveness | Store liveness for handlers to improve dependence graph at PEIs |
| schedule_prepass | Perform prepass instruction scheduling |
| no_checkcast | Should all checkcast operations be (unsafely) eliminated? |
| no_checkstore | Should all checkstore operations be (unsafely) eliminated? |
| no_bounds_check | Should all bounds check operations be (unsafely) eliminated? |
| no_null_check | Should all null check operations be (unsafely) eliminated? |
| no_synchro | Should all synchronization operations be (unsafely) eliminated? |
| no_threads | Should all yield points be (unsafely) eliminated? |
| no_cache_flush | Should cache flush instructions (PowerPC SYNC/ISYNC) be omitted? NOTE: Cannot be correctly changed via the command line\! |
| reads_kill | Should we constrain optimizations by enforcing reads-kill? |
| monitor_nop | Should we treat all monitorenter/monitorexit bytecodes as nops? |
| static_stats | Should we dump out compile-time statistics for basic blocks? |
| code_patch_nop | Should all patch point be (unsafely) eliminated (at initial HIR)? |
| instrumentation_sampling | Perform code transformation to sample instrumentation code. |
| no_duplication | When performing inst. sampling, should it be done without duplicating code? |
| processor_specific_counter | Should there be one CBS counter per processor for SMP performance? |
| remove_yp_from_checking | Should yieldpoints be removed from the checking code (requires finite sample interval). |
Value options.
|| Option || Description ||
| ic_max_target_size | Static inlining heuristic: Upper bound on callee size |
| ic_max_inline_depth | Static inlining heuristic: Upper bound on depth of inlining |
| ic_max_always_inline_target_size | Static inlining heuristic: Always inline callees of this size or smaller |
| ic_massive_method_size | Static inlining heuristic: If root method is already this big, then only inline trivial methods |
| ai_max_target_size | Adaptive inlining heuristic: Upper bound on callee size |
| ai_min_callsite_fraction | Adaptive inlining heuristic: Minimum fraction of callsite distribution for guarded inlining of a callee |
| edge_count_input_file | Input file of edge counter profile data |
| inlining_guard | Selection of guard mechanism for inlined virtual calls that cannot be statically bound |
| fp_mode | Selection of strictness level for floating point computations |
| exclude | Exclude methods from being opt compiled |
| unroll_log | Unroll loops. Duplicates the loop body 2^n times. |
| cond_move_cutoff | How many extra instructions will we insert in order to remove a conditional branch? |
| load_elimination_rounds | How many rounds of redundant load elimination will we attempt? |
| alloc_advice_sites | Read allocation advice attributes for all classes from this file |
| frequency_strategy | How to compute block and edge frequencies? |
| spill_cost_estimate | Selection of spilling heuristic |
| infrequent_threshold | Cumulative threshold which defines the set of infrequent basic blocks |
| cbs_hotness | Threshold at which a conditional branch is considered to be skewed |
| ir_print_level | Only print IR compiled above this level |
h4. Adaptive System Non-Standard Command-Line Options
Boolean options
|| Option || Description ||
| enable_recompilation | Should the adaptive system recompile hot methods? |
| enable_advice_generation | Do we need to generate advice file? |
| enable_precompile | Should the adaptive system precompile all methods given in the advice file before the user thread is started? |
| enable_replay_compile | Should the adaptive system use the pseudo-adaptive system that solely relies on the advice file? |
| gather_profile_data | Should profile data be gathered and reported at the end of the run? |
| adaptive_inlining | Should we use adaptive feedback-directed inlining? |
| early_exit | Should AOS exit when the controller clock reaches early_exit_value? |
| osr_promotion | Should AOS promote baseline-compiled methods to opt? |
| background_recompilation | Should recompilation be done on a background thread or on next invocation? |
| insert_yieldpoint_counters | Insert instrumentation in opt recompiled code to count yieldpoints executed? |
| insert_method_counters_opt | Insert intrusive method counters in opt recompiled code? |
| insert_instruction_counters | Insert counters on all instructions in opt recompiled code? |
| insert_debugging_counters | Enable easy insertion of (debugging) counters in opt recompiled code. |
| report_interrupt_stats | Report stats related to timer interrupts and AOS listeners on exit. |
| disable_recompile_all_methods | Disable the ability for an app to request all methods to be recompiled. |
Value options
|| Option || Description ||
| method_sample_size | How many timer ticks of method samples to take before reporting method hotness to controller. |
| initial_compiler | Selection of initial compiler. |
| recompilation_strategy | Selection of mechanism for identifying methods for optimizing recompilation. |
| method_listener_trigger | What triggers us to take a method sample? |
| call_graph_listener_trigger | What triggers us to take a call graph sample? |
| logfile_name | Name of log file. |
| compilation_advice_file_output | Name of advice file. |
| dynamic_call_file_output | Name of dynamic call graph file. |
| compiler_dna_file | Name of compiler DNA file (no name ==> use default DNA). Discussed in a comment at the head of VM_CompilerDNA.java. |
| compiler_advice_file_input | File containing information about the methods to Opt compile. |
| dynamic_call_file_input | File containing information about the hot call sites. |
| logging_level | Control amount of event logging (larger ==> more). |
| final_report_level | Control amount of info reported on exit (larger ==> more). |
| decay_frequency | After how many clock ticks should we decay. |
| dcg_decay_rate | What factor should we decay call graph edges hotness by. |
| dcg_sample_size | After how many timer interrupts do we update the weights in the dynamic call graph? |
| ai_seed_multiplier | Initial edge weight of call graph is set to ai_seed_multiplier * (1/ai_control_point). |
| offline_inline_plan_name | Name of offline inline plan to be read and used for inlining. |
| early_exit_time | Value of controller clock at which AOS should exit if early_exit is true. |
| invocation_count_threshold | Invocation count at which a baseline compiled method should be recompiled. |
| invocation_count_opt_level | Opt level for recompilation in invocation count based system. |
| counter_based_sample_interval | What is the sample interval for counter-based sampling. |
| ai_hot_callsite_threshold | What percentage of the total weight of the dcg demarcates warm/hot edges. |
| max_opt_level | The maximum optimization level to enable. |
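As an illustration only: assuming the adaptive system options take an {{\-X:aos:}} prefix analogous to {{\-X:gc:}} above (check {{rvm \-X}} on your build for the authoritative prefixes), turning off recompilation and raising the AOS logging level might look like:
{noformat}
# the -X:aos: prefix is an assumption; verify it with "rvm -X"
rvm -X:aos:enable_recompilation=false -X:aos:logging_level=2 org.example.Main
{noformat}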
h4. Virtual Machine Non-Standard Command-Line Options
Boolean Options
|| Option || Description ||
| measureCompilation | Time all compilations and report on exit. |
| measureCompilationPhases | Time all compilation sub-phases and report on exit. |
| stackTraceFull | Stack traces to consist of VM and application frames. |
| stackTraceAtExit | Dump a stack trace (via VM.syswrite) upon exit. |
| verboseTraceClassLoading | More detailed tracing than \-verbose:class. |
| errorsFatal | Exit when non-fatal errors are detected; used for regression testing. |
Value options
|| Option || Description ||
| maxSystemTroubleRecursionDepth | If we get deeper than this in one of the System Trouble functions, try to die. |
| interruptQuantum | Timer interrupt scheduling quantum in ms. |
| schedulingMultiplier | Scheduling quantum = interruptQuantum * schedulingMultiplier. |
| traceThreadScheduling | Trace actions taken by thread scheduling. |
| verboseStackTracePeriod | Trace every nth time a stack trace is created. |
| edgeCounterFile | Input file of edge counter profile data. |
| CBSCallSamplesPerTick | How many CBS call samples (Prologue/Epilogue) should we take per time tick. |
| CBSCallSampleStride | Stride between each CBS call sample (Prologue/Epilogue) within a sampling window. |
| CBSMethodSamplesPerTick | How many CBS method samples (any yieldpoint) should we take per time tick. |
| CBSMethodSampleStride | Stride between each CBS method sample (any yieldpoint) within a sampling window. |
| countThreadTransitions | Count, and report, the number of thread state transitions. This works better on IA32 than on PPC at the moment. |
| forceOneCPU | Force all threads to run on one CPU. The argument specifies which CPU (starting from 0). |
h2. Running Jikes RVM with valgrind
Jikes RVM can run under valgrind, as of SVN revision 6791 (29-Aug-2007). Applying a patch of this revision to release 3.2.1 should also produce a working system. CVS versions of valgrind prior to release 3.0 are also known to have worked.
To run a Jikes RVM build with valgrind, use the {{\-wrap}} flag to invoke valgrind, e.g.
{code}
rvm -wrap "path/to/valgrind --smc-check=all <valgrind-options>" <jikesrvm-options> ...
{code}
This will insert the invocation of valgrind at the appropriate place for it to operate on Jikes RVM proper rather than on a wrapper script.
Under some circumstances, valgrind will load shared object libraries or allocate memory in areas of the heap that conflict with Jikes RVM. Using the flag \-X:gc:eagerMmapSpaces=true will prevent and/or detect this. If this flag reveals errors while mapping the spaces, you will need to rearrange the heap to avoid the addresses that valgrind is occupying.
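Combining the two pieces of advice above, a complete invocation might look like the following (the valgrind path and main class are illustrative):
{code}
rvm -wrap "/usr/bin/valgrind --smc-check=all" -X:gc:eagerMmapSpaces=true org.example.Main
{code}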