Targeting SIMD in Java

Vectorised instruction execution can be targeted directly in C++, but with Java there are extra layers of abstraction to go through. Folklore aside, when does vectorisation or SIMD execution actually happen in Java? I investigate when SIMD instructions are actually used in Java 9, and how easy it is to disable vectorisation by programming badly.

Building a Benchmark Harness

I use JMH to write benchmarks. For the uninitiated, it is the standard Java micro-benchmarking framework, and it handles various pitfalls, the most salient being that it ensures your code actually executes (rather than being eliminated as dead code) and that measurements are taken against a monotonic clock. Benchmarks produced without JMH are unlikely to be correct and should arouse suspicion, especially if used during a sales process.

Averages always lie, so minimisation of 99.9th percentile execution time is a better objective to have in mind. As a general performance indicator I measure throughput in operations per time unit, which is useful to correlate with JVM compilation events.

To tune performance, before even worrying about achieving SIMD execution, we need to be aware of recompilation and failures to inline. These diagnostics are enabled by the arguments below, and my simple harness has a specific mode to support them:

-XX:+PrintCompilation 
-XX:+UnlockDiagnosticVMOptions 
-XX:+PrintInlining

The proof that code is actually being vectorised is the emission of AVX instructions. To see whether this has happened (and when it does, it correlates with striking performance numbers), we need to look at the generated assembly, so I run the benchmark in a mode that prints it via the arguments:

-XX:+UnlockDiagnosticVMOptions
-XX:+PrintAssembly
-XX:PrintAssemblyOptions=hsdis-print-bytes 
-XX:CompileCommand=print

However, SIMD execution only happens when SuperWord parallelism is enabled (and by default, it is) so we won’t even need to look at the assembly unless we see a clear difference when running the benchmark without the UseSuperWord option:

-XX:-UseSuperWord

What Gains Can Be Expected?

Is vectorised code execution a panacea? Assuming we can force the JVM to use SIMD, how much performance can be expected? It turns out that java.util.Arrays.fill can be vectorised, so we can get a taste of the difference it makes. We can observe its impact by benchmarking throughput with and without SuperWord instruction level parallelism.

    private void benchmark(String... jvmArgs) throws RunnerException {
            Options opt = new OptionsBuilder()
                    .include(this.getClass().getName() + ".*")
                    .mode(Mode.Throughput)
                    .timeUnit(TimeUnit.MILLISECONDS)
                    .warmupIterations(5)
                    .measurementIterations(5)
                    .forks(1)
                    .shouldFailOnError(true)
                    .shouldDoGC(true)
                    .operationsPerInvocation(1_000_000)
                    .jvmArgs(jvmArgs)
                    .build();
            new Runner(opt).run();
    }

    @Benchmark
    public void fillArray(Blackhole bh)  {
        double[] array = new double[1 << 10];
        for(int i = 0; i < 1_000_000; ++i) {
            Arrays.fill(array, i);
            bh.consume(array);
        }
    }
# VM version: JDK 9-ea, VM 9-ea+166
...
# VM options: -XX:-UseSuperWord
...

Benchmark                 Mode  Cnt    Score     Error   Units
TestBenchmark.fillArray  thrpt    5  966.947 ± 596.705  ops/ms

# VM version: JDK 9-ea, VM 9-ea+166
...
# VM options: -XX:+UseSuperWord
...

Benchmark                 Mode  Cnt     Score      Error   Units
TestBenchmark.fillArray  thrpt    5  4230.802 ± 5311.001  ops/ms

The difference in throughput is palpable and astonishing.

Intrinsics

Various intrinsics, such as Arrays.fill above, compile down to vectorised instructions. With SuperWord parallelism enabled they run markedly faster, as the benchmark above shows. The JIT compiler will also vectorise simple hand-written loops, but intrinsics are a safer bet.

Adding Arrays Together

Addition of arrays will either use SIMD or not; it depends how you go about writing your code. If your code is too complicated then you will not achieve vectorised execution. I will start from a method which does not vectorise, transform it to one that does by a process of simplification, and then break it again by trying to be too clever. The code which does vectorise is the same naive code I would have written in C++ as a student, ignorant of JVM internals. All of the code is available at github.

To start off with I created a highly contrived class called BadCode which is actually inspired, albeit seriously exaggerated, by a generic function in an API I have seen in a professional setting. The so called object oriented (sic) API seeks to operate on any type of primitive array, so it takes the lowest common denominator (java.lang.Object) as its parameters and performs instanceof checks to recover the correctly typed array. While this bloats the code, it provides as much flexibility as possible, which suits the supplier of the API – which has many clients with disparate use cases – but not necessarily the performance constrained caller.

The code is too bloated to include here but can be seen on github. It has two major performance bugs which will be addressed in turn:

  1. It’s verbose enough not to inline
  2. It has more than one exit point
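To give a flavour of both problems, here is a heavily condensed, hypothetical sketch of the shape of such a method. It is illustrative only; the real BadCode::bigMethod on github handles many more array types and combinations, which is what inflates its size.

// Hypothetical, heavily condensed sketch only: the real method repeats this pattern
// for many more primitive array types, which is what inflates the bytecode.
public static Object addGeneric(Object left, Object right) {
    if (left instanceof double[] && right instanceof double[]) {
        double[] l = (double[]) left;
        double[] r = (double[]) right;
        double[] result = new double[l.length];
        for (int i = 0; i < l.length; ++i) {
            if (i >= r.length) {
                return result; // one of several exit points
            }
            result[i] = l[i] + r[i];
        }
        return result;
    }
    if (left instanceof float[] && right instanceof float[]) {
        // ... the same loop again for float[], and again for int[], long[], mixed types...
        throw new UnsupportedOperationException("elided for brevity");
    }
    throw new IllegalArgumentException("unsupported operand types");
}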

Because the method supports so many use cases, its bytecode is very large (2765 bytes). By default, a hot method larger than 325 bytes of bytecode (the FreqInlineSize threshold) will fail to inline, so this method has no chance. Inlining is at the very base of our hierarchy of performance needs. We can see that the JIT compiler has failed to inline the method by printing inlining and compilation:

   2193  715 %     4       com.openkappa.simd.generated.TestBenchmark_badClass_jmhTest::badClass_thrpt_jmhStub @ 13 (55 bytes)
                              @ 19   com.openkappa.simd.TestBenchmark::badClass (9 bytes)   force inline by CompileCommand
                                @ 2   com.openkappa.simd.TestBenchmark$BadClassState::compute (16 bytes)   inline (hot)
                                  @ 12   com.openkappa.simd.BadCode::bigMethod (2765 bytes)   hot method too big

Taking a look at our throughput, we can see there is a problem. The code isn’t getting any faster, so clearly no optimisations are being applied.

# Run progress: 0.00% complete, ETA 00:00:40
# Fork: 1 of 1
# Warmup Iteration   1: 0.193 ops/us
# Warmup Iteration   2: 0.176 ops/us
# Warmup Iteration   3: 0.216 ops/us
# Warmup Iteration   4: 0.183 ops/us
# Warmup Iteration   5: 0.294 ops/us
Iteration   1: 0.260 ops/us
Iteration   2: 0.224 ops/us
Iteration   3: 0.170 ops/us
Iteration   4: 0.169 ops/us
Iteration   5: 0.169 ops/us

Our three nines is terrible, taking up to six microseconds to add two arrays:

TestBenchmark.badClass:badClass·p0.999         sample       6.296           us/op

Code size aside, the approach taken in the API forces multiple exit points into the method, which will always prevent vectorisation. Running with SuperWord disabled does not worsen performance, implying our code is not getting vectorised.

Smaller Code

Having noticed that the method is not getting inlined, let alone compiling to AVX instructions, we need to make the code smaller first. In this scenario our values are always double[] so the API provider effectively forces us to pay for the disparate use cases they must support, and this taxation without representation harms performance. We can rewrite it to be smaller, targeting our own specific use case. The code is concise enough to include here, and is the code any student would write to perform element-wise addition of two arrays. Notice the loop condition.

public class SmallerCode {

    public double[] smallMethod(double[] left, double[] right) {
        double[] result = new double[left.length];
        // the loop condition bounds both arrays, so no second exit point is needed inside the body
        for(int i = 0; i < left.length && i < right.length; ++i) {
            result[i] = left[i] + right[i];
        }
        return result;
    }
}

Let’s benchmark SmallerCode, whose method is only 43 bytes of bytecode. The method is indeed inlined.

   1374  647       3       com.openkappa.simd.TestBenchmark$SmallerCodeState::compute (16 bytes)   made not entrant
                              @ 12   com.openkappa.simd.SmallerCode::smallMethod (43 bytes)   inline (hot)

Throughput is doubled and we see evidence of dynamic optimisation.

# Run progress: 25.00% complete, ETA 00:02:42
# Fork: 1 of 1
# Warmup Iteration   1: 0.372 ops/us
# Warmup Iteration   2: 0.340 ops/us
# Warmup Iteration   3: 0.432 ops/us
# Warmup Iteration   4: 0.497 ops/us
# Warmup Iteration   5: 0.499 ops/us
Iteration   1: 0.398 ops/us
Iteration   2: 0.364 ops/us
Iteration   3: 0.408 ops/us
Iteration   4: 0.544 ops/us
Iteration   5: 0.401 ops/us

This code is twice as fast, and our three nines is better, just by virtue of keeping the code simple.

TestBenchmark.smallerCode:smallerCode·p0.999             sample       2.164          us/op

But are we getting SIMD execution? Possibly – disabling SuperWord yields noticeably worse results.

# Run progress: 25.00% complete, ETA 00:02:22
# Fork: 1 of 1
# Warmup Iteration   1: 0.261 ops/us
# Warmup Iteration   2: 0.343 ops/us
# Warmup Iteration   3: 0.294 ops/us
# Warmup Iteration   4: 0.320 ops/us
# Warmup Iteration   5: 0.316 ops/us
Iteration   1: 0.293 ops/us
Iteration   2: 0.276 ops/us
Iteration   3: 0.304 ops/us
Iteration   4: 0.291 ops/us
Iteration   5: 0.279 ops/us

It’s worth inspecting the assembly to see if we can observe the emission of AVX instructions once we reenable UseSuperWord. Assembly is easier to read if inlining is disabled; in JMH this can be controlled with the @CompilerControl(CompilerControl.Mode.DONT_INLINE) annotation. With the annotation applied, the assembly is printed using the appropriate JVM args:

-XX:+UnlockDiagnosticVMOptions 
-XX:+PrintAssembly
-XX:PrintAssemblyOptions=hsdis-print-bytes 
-XX:CompileCommand=print

0x00000154edb691e0: vmovdqu ymm0,ymmword ptr [rbp+r10*8+10h]
0x00000154edb691e7: vaddpd  ymm0,ymm0,ymmword ptr [rdx+r10*8+10h]
0x00000154edb691ee: vmovdqu ymmword ptr [r8+r10*8+10h],ymm0
0x00000154edb691f5: add     r10d,4h
0x00000154edb691f9: cmp     r10d,ebx
0x00000154edb691fc: jl      154edb691e0h

It’s true – this code is indeed vectorised: the vmovdqu and vaddpd instructions above operate on 256-bit ymm registers.

Ruining It

Having witnessed vectorisation when adding arrays together, let’s try and break it. There are a few patterns we can apply to break our vectorised code:

  • Putting an OR condition as the loop condition
  • Putting a non-inlined method inside the loop
  • Putting an arbitrary method as the loop condition
  • Manually unrolling the loop
  • Using a long as the loop variable (see the sketch after this list)
  • Multiple exit points
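To illustrate, here is a minimal, hypothetical sketch of the long loop variable pattern, using the same element-wise addition as SmallerCode::smallMethod. It is not taken from the benchmark suite on github.

// Hypothetical sketch: the same addition as SmallerCode::smallMethod, but with a
// long loop variable, which is enough to stop the loop being vectorised.
public double[] smallMethodLongCounter(double[] left, double[] right) {
    double[] result = new double[left.length];
    for (long i = 0; i < left.length && i < right.length; ++i) {
        result[(int) i] = left[(int) i] + right[(int) i];
    }
    return result;
}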

The list goes on, but now let’s really break it. We were using double[], so let’s see what happens if we use a DirectByteBuffer as the backing for a homegrown vector construct instead. Instead of returning a new heap allocated array, we will write our doubles into a byte buffer, and use an offset and length to delineate each array. For instance, the code below writes the sum of two arrays stored in a byte buffer back into the same byte buffer. We can abstract vectors by creating small on-heap objects for each array which know the offset and length of that array in the buffer.

public int add(ByteBuffer byteBuffer,
               int leftOffset, int leftLength,
               int rightOffset, int rightLength,
               int resultOffset) {
    int resultIndex = resultOffset;
    // all offsets and lengths are in bytes, hence the stride of 8 (one double) per element
    for (int l = leftOffset, r = rightOffset;
         l < leftOffset + leftLength && r < rightOffset + rightLength;
         l += 8, r += 8, resultIndex += 8) {
        byteBuffer.putDouble(resultIndex, byteBuffer.getDouble(l) + byteBuffer.getDouble(r));
    }
    return resultIndex;
}
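The wrapper objects themselves are not shown in the snippet above; a hypothetical sketch of what they might look like follows (the names are illustrative, not taken from the github code).

import java.nio.ByteBuffer;

// Hypothetical wrapper: a "vector" is nothing more than an offset and a length
// into a shared (possibly direct) buffer of doubles.
public class BufferVector {

    private final ByteBuffer buffer;
    private final int offset; // byte offset of the first element
    private final int length; // length in bytes, 8 bytes per double

    public BufferVector(ByteBuffer buffer, int offset, int length) {
        this.buffer = buffer;
        this.offset = offset;
        this.length = length;
    }

    public double get(int i) {
        return buffer.getDouble(offset + 8 * i);
    }

    public void set(int i, double value) {
        buffer.putDouble(offset + 8 * i, value);
    }

    public int elementCount() {
        return length / 8;
    }
}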

Is this clever? We’ve beaten the garbage collector, and it certainly feels like a clever engineering story. But performance wise, we are back to where we were with the remarkably convoluted catch-all function. Entertainingly, had we only ever benchmarked against BadCode::bigMethod we may not have noticed this performance regression.

# Run progress: 0.00% complete, ETA 00:00:40
# Fork: 1 of 1
# Warmup Iteration   1: 0.156 ops/us
# Warmup Iteration   2: 0.160 ops/us
# Warmup Iteration   3: 0.198 ops/us
# Warmup Iteration   4: 0.190 ops/us
# Warmup Iteration   5: 0.272 ops/us
Iteration   1: 0.220 ops/us
Iteration   2: 0.242 ops/us
Iteration   3: 0.216 ops/us
Iteration   4: 0.248 ops/us
Iteration   5: 0.351 ops/us

TestBenchmark.byteBuffer:byteBuffer·p0.999     sample       6.552           us/op

The obvious indicator that this is not being vectorised is that performance does not degrade when setting

-XX:-UseSuperWord

And to be sure we can inspect the assembly code emitted (without disabling UseSuperWord!).

  0x000001869216cb67: mov     edx,ecx
  0x000001869216cb69: add     edx,r9d
  0x000001869216cb6c: cmp     ecx,edx
  0x000001869216cb6e: jnl     1869216cb1dh
  0x000001869216cb70: mov     r13d,dword ptr [r12+r11*8+8h]
  0x000001869216cb75: cmp     r13d,0f8007ed8h
  0x000001869216cb7c: jne     1869216d015h
  0x000001869216cb82: lea     r13,[r12+r11*8]
  0x000001869216cb86: movzx   eax,byte ptr [r13+29h]
  0x000001869216cb8b: test    eax,eax
  0x000001869216cb8d: je      1869216d015h

The reality is that whenever sun.misc.Unsafe is used, directly or indirectly, we lose access to SIMD. The bottom line is if you want to exploit instruction level parallelism, then be prepared to write code even a beginner could understand instantly. Bizarre off-heap data structures and vectorised execution? Unlikely.

Microsecond Latency Rules Engine with RoaringBitmap

Implementing a rules engine can shorten development time and remove a lot of tedious if statements from your business logic. Unfortunately, rules engines are almost always slow and often bloated. A simple rules engine can be implemented by assigning an integer salience to each line in a truth table, with rule resolution treated as an iterative intersection of ordered sets of integers. Since the engine’s core is just sorted sets of integers, it would be remiss not to consider RoaringBitmap for it. The code is at github.

Classification Table and Syntax

This rules engine builds on the simple idea of a truth table, usually used to teach predicate logic and computer hardware. Starting with a table and some attributes, and interpreting one attribute as a classification, we get a list of rules. It is trivial to load such a table from a database. Since classifications can overlap, we prioritise by putting the rules we care about most – the most salient rules – at the top of the table. When multiple rules match a fact, the most salient match wins. So that we don’t always have to specify all of the attributes to get a classification, attributes are ranked by importance from left to right: any attribute to the left of a specified attribute must also be specified when matching a fact against a rule set.

It’s possible to define rules containing wildcards. Wildcard rules will match any query (warning: if these are marked as high salience they will hide more specific rules with lower salience). It’s also possible to specify a prefix with a wildcard, which will match any query that matches at least the prefix.

Below is an example table consisting of rules for classification of regional English accents by phonetic feature.

English Accent Rules

thought  cloth  lot   palm  plant  bath  trap  accent
/ɔ/      /ɒ/    /ɑ/   /ɑː/  /ɑː/   /ɑː/  /æ/   Received Pronunciation (UK)
/ɔ/      /ɔ/    /ɑ/   /ɑ/   /æ/    /æ/   /æ/   Georgian (US)
/ɑ/      /ɑ/    /ɑ/   /ɑ/   /æ/    /æ/   /æ/   Canadian
*        *      /ɑ/   /ɑ/   /æ/    /æ/   /æ/   North American
*        *      *     *     *      *     /æ/   Non Native
*        *      *     *     *      *     *     French

In the example above, the vowel sounds used in words differentiating speakers of several English accents are configured as a classification table. The accent column is the classification of any speaker exhibiting the properties specified in the seven leftmost columns. UK Received Pronunciation is the most specific rule and has the highest salience, whereas various North American accents differ from RP in their use of short A vowels. A catch all for North American accents wildcards the sounds in thought and cloth (contrast Boston pronunciations with Texas). So long as trap has been pronounced with a short A (which all English speakers do), and no other rule recognises the sounds used in the first six words, the rule engine will conclude the speaker is using English as a second language. If not even the word trap is recognisable, then the speaker is probably unintelligible, or could be French.

Implementation

A rule with a given salience can be represented by creating a bitmap index on salience by the attribute values of the rules. For instance, to store the rule {foo, bar} → 42, with salience 10, create a bitmap index on the first attribute of the rule, and set the 10th bit of the “foo” bitmap; likewise for the “bar” bitmap of the second index. Finding rules which match both attributes is a bitwise intersection, and since we rank by salience, the rule that wins is the first in the set. An obvious choice for fast ordered sets is RoaringBitmap.
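As a rough illustration of this idea only – using the public RoaringBitmap API (and assuming a version with RoaringBitmap.first()) rather than the Container and trie machinery described below, and ignoring wildcard and prefix matching – an index and query could look something like this sketch:

import org.roaringbitmap.RoaringBitmap;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Rough sketch only: one bitmap per attribute value, with the bit position encoding
// the rule's salience (bit 0 = most salient). Wildcards and prefixes are omitted.
public class TinyRulesIndex {

    private final List<Map<String, RoaringBitmap>> indices; // one index per attribute
    private final List<String> classifications;             // classification per salience

    public TinyRulesIndex(int attributeCount) {
        indices = new ArrayList<>();
        for (int i = 0; i < attributeCount; ++i) {
            indices.add(new HashMap<>());
        }
        classifications = new ArrayList<>();
    }

    // Add rules in descending priority: the next free bit position is the rule's salience.
    public void addRule(String[] attributeValues, String classification) {
        int salience = classifications.size();
        for (int i = 0; i < attributeValues.length; ++i) {
            indices.get(i)
                   .computeIfAbsent(attributeValues[i], v -> new RoaringBitmap())
                   .add(salience);
        }
        classifications.add(classification);
    }

    // Classify a fact: intersect the bitmap for each specified attribute value and
    // take the most salient surviving rule (the lowest set bit).
    public Optional<String> classify(String... fact) {
        RoaringBitmap matches = null;
        for (int i = 0; i < fact.length; ++i) {
            RoaringBitmap forValue = indices.get(i).get(fact[i]);
            if (forValue == null) {
                return Optional.empty();
            }
            matches = matches == null ? forValue.clone() : RoaringBitmap.and(matches, forValue);
        }
        return matches == null || matches.isEmpty()
                ? Optional.empty()
                : Optional.of(classifications.get(matches.first()));
    }
}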

RoaringBitmap consists of containers, which are fast, cache-friendly sorted sets of 16-bit values, each able to hold up to 2^16 - 1 shorts. In RoaringBitmap, containers are indexed by keys consisting of the most significant 16 bits of the integer. For a rules engine, if you have more than 2^16 - 1 rules you have a much bigger problem anyway, so a single container could index all the rules you could ever need, and a full RoaringBitmap would be overkill. While RoaringBitmap indexes containers by shorts (it does so for the sake of compression), we can implement wildcard and prefix matching by associating containers with Strings rather than shorts. As the core data structure of the rules engine, a RoaringBitmap container is placed at each node of an Apache commons PatriciaTrie. It’s really that simple – see the source at github.

When the rules engine is queried, a set consisting of all the rules that match is intersected with the container found at the node in the trie matching the value specified for each attribute. When more than one rule matches, the rule with the highest salience is accessed via the Container.first() method, one of the features I have contributed to RoaringBitmap. See example usage at github.

 

 

HTTP Content Negotiation

Ever had the situation where you’ve implemented a REST API, and then another team or potential customer comes along and asks if you could just change the serialisation format? Or maybe you are using a textual media type and want to experiment to see if performance would improve with a binary format? I see this done wrong all the time.

There has been a mechanism for content negotiation built into HTTP since the early days. It’s a very simple mechanism, so this won’t be a long post. Below is a short Java example using Jackson and Jersey which allows parametric content negotiation to be performed transparently to both the server and the client.

Never Ever Do This

One option would be to implement a completely new service layer for each content type you need to serve. This not only costs you time, and probably test coverage, but moving between serialisation formats as a client of your API will probably constitute a full-blown migration. Keeping the implementations synchronised will become difficult, and somebody needs to tell the new guy he needs to implement the change in the JSON version as well as the SMILE version, and if the XML version could be updated by the end of the week, that would be great. I have seen commercial products that have actually done this.

Don’t Do This (Unless You’ve Already Done It)

Another option that I see in various APIs is using a query parameter for GET requests. This offends my sensibilities slightly, because it has the potential to conflate application and transport concerns, but it’s not that bad. The SOLR REST API has a query parameter wt which allows the client to choose between JSON and XML, for instance. It’s not exactly cumbersome and the client can easily change the serialisation format by treating it as a concern of the application. But what if you want to query the API from a browser? Browsers speak HTTP ex cathedra, not the local patois your API uses. The browser will have to put up with whatever your defaults are, and hope it can consume that format. It’s unnecessary complexity in the surface area of your API but it’s quite common, and at least it isn’t weird.

1996 Calling

All you have to do is set the Accept header of the request to the media type (for instance, application/cbor) you want to consume in the response. If the server can produce the media type, it will do so, and will set the Content-Type header on the response. If it can’t produce the media type (and this should come out in integration testing), the server will respond with a 406 Not Acceptable status code, typically with a body listing the representations it can produce.
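For example, here is a minimal client sketch driving the negotiation; it assumes the demo server from the next section is running on localhost:8080 and exposing the resource at /content.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch of proactive content negotiation from the client's side.
public class NegotiatingClient {

  public static void main(String[] args) throws Exception {
    HttpURLConnection connection =
        (HttpURLConnection) new URL("http://localhost:8080/content").openConnection();
    connection.setRequestProperty("Accept", "application/cbor"); // ask for CBOR
    try (InputStream body = connection.getInputStream()) {
      int length = 0;
      while (body.read() != -1) {
        ++length;
      }
      // the server advertises the format it actually chose in the Content-Type header
      System.out.println(connection.getResponseCode() + " " + connection.getContentType());
      System.out.println("read " + length + " bytes");
    }
  }
}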

Content Negotiation with Jersey and Jackson JAX-RS providers

Jackson makes this very easy to do in Java. We want to be able to serialise the response below into a MIME type of the client’s choosing – they can choose from JSON, XML, CBOR, SMILE and YAML. The response type produced is not constrained by annotations on the method; none of the formats need ever be formally acknowledged by the programmer and can be added or removed by modifying the application’s classpath.

@Path("content")
public class ContentResource {

  @GET
  public Response getContent() {
    Map<String, Object> content = new HashMap<>();
    content.put("property1", "value1");
    content.put("property2", 10);
    return Response.ok(content).build();
  }
}

There are Jackson JAX-RS providers included by the maven dependencies below. Their presence is transparent to the application: they are discovered and registered via the standard JAX-RS provider mechanism. Provider resolution works as a chain of responsibility, where the first provider that can produce a media type acceptable to the Accept header serialises the body of the response and sets the Content-Type header.

        <dependency>
            <groupId>com.fasterxml.jackson.jaxrs</groupId>
            <artifactId>jackson-jaxrs-json-provider</artifactId>
            <version>2.8.7</version>
        </dependency>

        <dependency>
            <groupId>com.fasterxml.jackson.jaxrs</groupId>
            <artifactId>jackson-jaxrs-xml-provider</artifactId>
            <version>2.8.7</version>
        </dependency>

        <dependency>
            <groupId>com.fasterxml.jackson.jaxrs</groupId>
            <artifactId>jackson-jaxrs-smile-provider</artifactId>
            <version>2.8.7</version>
        </dependency>

        <dependency>
            <groupId>com.fasterxml.jackson.jaxrs</groupId>
            <artifactId>jackson-jaxrs-cbor-provider</artifactId>
            <version>2.8.7</version>
        </dependency>

        <dependency>
            <groupId>com.fasterxml.jackson.jaxrs</groupId>
            <artifactId>jackson-jaxrs-yaml-provider</artifactId>
            <version>2.8.7</version>
        </dependency>

There’s some simple coding to do to configure the Jersey resource and run it embedded with Jetty.

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.servlet.ServletContainer;

public class ServerRunner {

  public static void main(String[] args) throws Exception {
    ResourceConfig config = new ResourceConfig();
    // scan the Jackson JAX-RS package so whichever providers are on the classpath register themselves
    config.packages(true, "com.fasterxml.jackson.jaxrs");
    config.register(ContentResource.class);
    ServletHolder servletHolder = new ServletHolder(new ServletContainer(config));
    Server server = new Server(8080);
    ServletContextHandler handler = new ServletContextHandler(server, "/*");
    handler.addServlet(servletHolder, "/*");
    server.start();
    server.join();
  }
}

If you do this, then you can leave the decision up to your client and just add and remove provider jars from your classpath. A nice bonus is that you can test the mechanism yourself from IntelliJ’s REST client:

[Screenshots: the request with the Accept header set, and the negotiated response]

And you can get non-technical users to check from a browser:

[Screenshot: the negotiated response rendered in a browser]