This post gives some new results on GraniteDS vs. BlazeDS raw AMF3 (de)serialization performance. Running the benchmarks shows that GraniteDS 3.1 can be up to 2 to 3 times faster than BlazeDS 4.0 for AMF3 serialization and up to 4 to 5 times faster for AMF3 deserialization.
Let’s start at the end and review the results.
These results were obtained by running the full benchmark 5 consecutive times and calculating mean times (the variation between two runs on the same machine is very low). The setup / environment giving these results is as follows:
Model Name: MacBook Pro
Model Identifier: MacBookPro8,2
Processor Name: Intel Core i7
Processor Speed: 2.2 GHz
Number of Processors: 1
Total Number of Cores: 4
L2 Cache (per Core): 256 KB
L3 Cache: 6 MB
Memory: 8 GB
System Version: OS X 10.9.2 (13C1021)
Kernel Version: Darwin 13.1.0
Boot Volume: Macintosh HD
Boot Mode: Normal
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
Apache Ant(TM) version 1.8.2 compiled on December 20 2010
BlazeDS Version (flex-messaging-common and flex-messaging-core):
Implementation-Title: BlazeDS - Common Library
Implementation-Version: 220.127.116.1131
Implementation-Vendor: Adobe Systems Inc.
Implementation-Title: BlazeDS - Community Edition
Implementation-Version: 18.104.22.16831
Implementation-Vendor: Adobe Systems Inc.
GraniteDS Version (granite-server-core):
The benchmark was executed in console mode after a full reboot, with network disabled.
What kind of data is benchmarked?
The benchmark uses two beans, DataObject1 and DataObject2. Each instance of DataObject2 contains a HashSet of DataObject1 (between 10 and 20 DataObject1 per DataObject2 instance). DataObject1 and DataObject2 also have other properties of type String, Date, int, boolean, and double.
The creation of the data is handled by the CreateRandomData class.
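To make the data model concrete, here is a minimal sketch of what the two beans and the random-data creation could look like. The field names and the DataModelSketch / createRandom names are illustrative assumptions, not the actual code; see CreateRandomData in the sources for the real implementation.

```java
import java.io.Serializable;
import java.util.Date;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Hypothetical sketch of the benchmark's data model; field names are
// illustrative, only the overall shape matches the description above.
public class DataModelSketch {

    public static class DataObject1 implements Serializable {
        public String name;
        public Date created;
        public int count;
        public boolean active;
        public double weight;
    }

    public static class DataObject2 implements Serializable {
        public String label;
        public Set<DataObject1> children = new HashSet<>();
    }

    // Builds one DataObject2 holding between 10 and 20 DataObject1 instances.
    public static DataObject2 createRandom(Random rnd) {
        DataObject2 parent = new DataObject2();
        parent.label = "obj-" + rnd.nextInt(1000);
        int n = 10 + rnd.nextInt(11); // 10..20 children
        for (int i = 0; i < n; i++) {
            DataObject1 child = new DataObject1();
            child.name = "child-" + i;
            child.created = new Date();
            child.count = rnd.nextInt(100);
            child.active = rnd.nextBoolean();
            child.weight = rnd.nextDouble();
            parent.children.add(child);
        }
        return parent;
    }
}
```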
The benchmark uses the following collections:
- Big List of Objects: an ArrayList that contains 10,000 distinct instances of DataObject2,
- Small List of Objects: an ArrayList that contains 50 distinct instances of DataObject2,
- Big List of Strings: an ArrayList that contains 10,000 distinct Strings, each less than 100 characters long.
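As an example of how such a collection could be built, here is a sketch for the "Big List of Strings" case: 10,000 distinct strings, each well under 100 characters. This is an assumption for illustration, not the benchmark's actual code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: builds a list matching the "Big List of Strings"
// description (10,000 distinct strings, each under 100 characters).
public class StringListSketch {
    public static List<String> bigListOfStrings() {
        List<String> list = new ArrayList<>(10_000);
        for (int i = 0; i < 10_000; i++) {
            list.add("string-" + i); // distinct and well under 100 chars
        }
        return list;
    }
}
```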
The most “real-world” test is the one using a list of 50 beans (aka “Small List of Objects”): serializing a collection of 10,000 beans or strings is very unusual.
How does this benchmark work?
The benchmark is run through an Apache Ant build file that spawns a new JVM for each test.
Basically, it first creates random Java data (e.g. a list of random Strings) and saves it to a file using standard Java serialization (through ObjectOutputStream). Then it calls a benchmark class (GraniteDS or BlazeDS), which reads the serialized data (through ObjectInputStream) and repeatedly (e.g. 10,000 times) encodes it in the AMF3 format. The benchmark class then decodes the AMF3-encoded data the same number of times.
Each benchmark then prints out the total time spent repeatedly encoding and repeatedly decoding the data in the AMF3 format.
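The timing loop can be sketched as follows. The Encoder interface and the use of plain Java serialization are stand-ins so the sketch runs anywhere; the real benchmark plugs in the GraniteDS or BlazeDS AMF3 codec at that point instead.

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the timing harness. Plain Java serialization is used
// below only as a stand-in for the AMF3 encoder under test.
public class TimingHarnessSketch {

    interface Encoder {
        byte[] encode(Object data) throws Exception;
    }

    // Encodes the same data 'iterations' times and returns the elapsed time.
    static long timeEncodeMillis(Encoder encoder, Object data, int iterations) throws Exception {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            encoder.encode(data);
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        List<String> data = new ArrayList<>();
        for (int i = 0; i < 100; i++) data.add("string-" + i);

        // Stand-in encoder: standard Java serialization to a byte array.
        Encoder javaSerialization = obj -> {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(obj);
            }
            return bytes.toByteArray();
        };

        long ms = timeEncodeMillis(javaSerialization, data, 1_000);
        System.out.println("encoded 1,000 times in " + ms + " ms");
    }
}
```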
Finally, the benchmark goes through a cross-checking step: data encoded with BlazeDS is deserialized with GraniteDS, and vice versa.
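The cross-check boils down to a round-trip equality test: whatever one implementation encodes, the other must decode back to an equal object graph. In this sketch, plain Java serialization stands in for the two AMF3 implementations so the example runs anywhere.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Arrays;
import java.util.List;

// Sketch of the cross-check idea; Java serialization is a stand-in for
// encoding with one AMF3 implementation and decoding with the other.
public class CrossCheckSketch {

    static byte[] encode(Object obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.toByteArray();
    }

    static Object decode(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> original = Arrays.asList("a", "b", "c");
        Object roundTripped = decode(encode(original));
        System.out.println(original.equals(roundTripped)); // prints "true"
    }
}
```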
If you want to go into details, check the full benchmark sources on GitHub here.
How to get the benchmark sources and run them?
First, you need to clone the benchmark project:
$ git clone https://github.com/fwolff/amf-benchmark.git
Then, go to the newly created amf-benchmark directory and run ant:
$ cd amf-benchmark
$ ant
This will compile the benchmark sources, run the benchmark, and print the results to the standard output.
You can also customize the benchmark with a model of your choice. Comments are welcome!