Can Java 9 Leverage AVX-512 Yet?

Advanced Vector Extensions (AVX) has gone through several iterations, doubling the width of the available SIMD registers several times. The state of the art is currently AVX-512, which offers 512-bit registers. In parallel, support for auto-vectorisation of handwritten Java has been improving. With an early access JDK9 on an Intel(R) Core(TM) i7-6700HQ CPU I have observed usage of both 256-bit registers (vector addition) and 128-bit registers (sum of squares). I was very kindly granted access to a Knights Landing server where I ran the same code. I was hoping to see evidence of 512-bit register usage; while I didn't see it directly, I did see a strong indication that it occurs.

SIMD Registers

There is a hierarchy of register types from several iterations of SIMD instruction sets: 128-bit xmm registers, originally from Streaming SIMD Extensions (SSE); 256-bit ymm registers, originally from AVX; and 512-bit zmm registers, introduced with AVX-512. A 512-bit register is enormous: it can hold 16 ints and can be used as a single operand to various vector instructions, allowing up to 16 ints to be operated on in parallel. Each of these registers can be used as if it were its predecessor by masking off the upper bits.

On my laptop I have seen Java 9 use 128-bit xmm registers in a sum of squares calculation on doubles:

0x000001aada783fca: vmovsd  xmm0,qword ptr [rbp+r13*8+10h] ...


As an aside, the weird thing about this code is that only a 64-bit qword, as opposed to an xmmword, is being loaded into xmm0 here. The code generated for floats is very interesting in this respect, because it seems to load a full 256-bit ymmword of the array and supply it as an operand to vmulps.

0x00000245b32aa0dc: vmovdqu ymm1,ymmword ptr [r8+r9*4+10h]
0x00000245b32aa0e3: vmulps  ymm1,ymm1,ymm1


I have also seen 256-bit ymm registers in use in the simpler float vector addition:

0x000002759215a17a: vmovdqu ymm0,ymmword ptr [rcx+rax*4+50h]
0x000002759215a186: vmovdqu ymmword ptr [rbx+rax*4+50h],ymm0


Here a 256-bit ymmword is used to load chunks of the array into the register. The interesting thing about running the float vector addition case is that you can trace the optimisations being applied as the code runs: it starts off with scalar instructions, graduates to AVX instructions on xmm registers, and finally learns to utilise the ymm registers. This is quite an impressive feat of the optimiser.
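The loops being benchmarked have roughly the following shape (a sketch only; the class and method names here are mine, and the actual benchmark source is in the github repository mentioned below):

```java
// Sketch of the kind of handwritten loops C2 will auto-vectorise.
// On AVX2-capable hardware, the addition loop was observed compiling to
// vmovdqu over ymm registers, and the float multiply to vmulps.
public class VectorLoops {

    // Vector addition: a simple countable loop over same-length arrays.
    public static float[] add(float[] left, float[] right) {
        float[] result = new float[left.length];
        for (int i = 0; i < left.length; ++i) {
            result[i] = left[i] + right[i];
        }
        return result;
    }

    // Sum of squares: the multiply can vectorise; the reduction into a
    // single accumulator is harder for the compiler to vectorise.
    public static float sumOfSquares(float[] data) {
        float sum = 0f;
        for (int i = 0; i < data.length; ++i) {
            sum += data[i] * data[i];
        }
        return sum;
    }
}
```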

What’s next? 512-bit zmm registers!

Knights Landing

Knights Landing has 32 512-bit zmm registers. I tried to use them by running the code at github and observing the assembly code emitted.

The Knights Landing server has 256 logical processors, each with a modest clock speed, but with support for the core AVX-512 instruction set and several of its extensions (see the flags below). It is designed for massively parallel workloads and, for single threaded code, would not compare favourably with a commodity desktop. But if you can write code that keeps 256 logical processors busy concurrently, it will likely profit from running on a Knights Landing.

processor       : 255
vendor_id       : GenuineIntel
cpu family      : 6
model           : 87
model name      : Intel(R) Xeon Phi(TM) CPU 7210 @ 1.30GHz
stepping        : 1
microcode       : 0x130
cpu MHz         : 1299.695
cache size      : 1024 KB
physical id     : 0
siblings        : 256
core id         : 73
cpu cores       : 64
apicid          : 295
initial apicid  : 295
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms avx512f rdseed adx avx512pf avx512er avx512cd xsaveopt
bogomips        : 2600.02
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:


I am running on this server simply because it supports AVX-512, and I am interested to see whether Java can use it. The code is purposefully single threaded, and the point is not to confirm that one of 256 1.3GHz processors is twice as slow as one of my eight 2.7GHz processors on a single threaded workload; the interesting aspect is the instructions generated.

The operating system is CentOS Linux release 7.2.1511 (Core), the Java version used was Java HotSpot(TM) 64-Bit Server VM (build 9+177, mixed mode).

Observations

My observations were mixed: I didn't see any 512-bit registers being used, and some code I expected to vectorise didn't, but, critically, I also saw evidence that the hsdis-amd64.so disassembler did not understand some of the instructions emitted. I hope, just because I like things to work, that this is hiding evidence of 512-bit register use. There are a lot of new instructions in AVX-512, which may explain these black holes in the output.

I ran vector addition and sum of squares across a range of vector sizes and primitive types and saw little use of AVX instructions in my handwritten code, let alone evidence of 512-bit zmm register operands, yet extensive use of 256-bit registers for String intrinsics.

The first data point to look at is the relative timings (remember: absolute timings are meaningless given the Knights Landing design motivations, even more so because assembly was being printed during this run). They are interesting: some kind of optimisation is being applied to sum of squares, but only for ints.

Benchmark                          (size)   Mode  Cnt   Score    Error   Units
c.o.s.ss.SumOfSquares.SS_Doubles   100000  thrpt   10   2.145 ±  0.004  ops/ms
c.o.s.ss.SumOfSquares.SS_Doubles  1000000  thrpt   10   0.206 ±  0.001  ops/ms
c.o.s.ss.SumOfSquares.SS_Floats    100000  thrpt   10   2.150 ±  0.002  ops/ms
c.o.s.ss.SumOfSquares.SS_Floats   1000000  thrpt   10   0.212 ±  0.001  ops/ms
c.o.s.ss.SumOfSquares.SS_Ints      100000  thrpt   10  16.960 ±  0.043  ops/ms
c.o.s.ss.SumOfSquares.SS_Ints     1000000  thrpt   10   1.015 ±  0.019  ops/ms
c.o.s.ss.SumOfSquares.SS_Longs     100000  thrpt   10   6.379 ±  0.014  ops/ms
c.o.s.ss.SumOfSquares.SS_Longs    1000000  thrpt   10   0.429 ±  0.033  ops/ms


I looked at how SS_Ints was being executed and saw the usage of the vpaddd (addition of packed integers) instruction with xmm register operands, in between two instructions the disassembler (hsdis) seemingly could not read. Maybe these unprintable instructions are from AVX-512 or extensions? It’s impossible to say. The same mechanism, using vpaddq, was not used for longs, but is used on my laptop.

  0x00007fc31576edf1: (bad)                     ;...0e



The case of vector addition is less clear: there are many uninterpreted instructions in the output, whereas on my laptop it clearly vectorises using the largest registers available. The only vector instruction present was vzeroupper, which zeroes the upper 128 bits of a 256-bit register and is usually used to avoid transition penalties when mixing AVX with legacy SSE code. Incidentally, there is explicit advice from Intel against using this instruction on Knights Landing. I saw assembly for the scalar implementation of this code (this will always happen prior to optimisation), but there's no evidence the code was vectorised. However, there were 110 (bad) instructions in the output for float array addition alone.

An interesting presentation by one of the key HotSpot compiler engineers outlines some of the limits of SIMD support in Java. It neither includes examples of 512-bit register usage nor states that this is unsupported. Possibly Project Panama's Long8 type will utilise 512-bit registers. I will recompile the disassembler and give JDK9 the benefit of the doubt for now.

Publishing Dropwizard Metrics to Kafka

This post is about combining Dropwizard metrics with Kafka to create self instrumenting applications which produce durable streams of application metrics that can be processed (and re-processed) in many ways. The combination is appealing: Kafka is increasingly popular, and therefore likely to be available as infrastructure, and Dropwizard metrics likewise, being leveraged by many open source frameworks, with plugins for common measurements such as JVM and web application metrics.

Dropwizard

Dropwizard metrics allows you to add application metrics as an aspect of your application quickly. An application instrumented with Dropwizard consists of a MetricRegistry – basically an in-memory key-value store of the state of named metrics – and one or more Reporters. There are several built-in reporters, including ConsoleReporter, CsvReporter, GangliaReporter and GraphiteReporter (the Ganglia and Graphite reporters require that you are actually running those services). An unofficial reporter designed for Ambari Metrics is hosted here. Nobody really wants to work with JMX anymore but, just in case you're working with prehistoric code, there is also a JMXReporter available out of the box. Reporters are very loosely coupled with instrumentation cut points throughout your code, so it's very easy to change reporting strategy. Instrumenting an application manually is extremely simple – you just can't go wrong following the getting started page – and there are several annotation processing mechanisms for instrumenting methods; for instance, there are numerous integrations on Github for frameworks like Spring. Indeed, I wrote my own annotation binding using Guice type listeners on a recent project, which was certainly easy enough (using the techniques in this post on type listeners).

Kafka

The only work that needs to be done is to extend the Reporter mechanism to use Kafka as a destination. Kafka is fast, but the real beauty of writing metrics to it is that you can do whatever you want with them afterwards. If you want to replicate them in real time onto ZeroMQ topics, you can do that just as easily as you can run Spark Streaming or a scheduled Spark batch job over your application metrics. If you're building your own monitoring dashboard, you could imagine having a real time latest value alongside hourly or daily aggregations. In fact, you can process the metrics at whatever frequency you wish within Kafka's retention period. I truly believe your application metrics belong in Kafka, at least in the short term.

Extending ScheduledReporter

The basic idea is to extend ScheduledReporter, composing a KafkaProducer. ScheduledReporter is, unsurprisingly, invoked repeatedly at a specified rate. On each invocation, the idea is to loop through all gauges, counters, histograms, meters and timers, serialise them (there may be a performance boost available from CBOR), and send them to Kafka via the KafkaProducer on a configurable topic. Then, wherever in your application you would have created, say, an Slf4jReporter, just create a KafkaReporter instead.

Code

<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-core</artifactId>
    <version>3.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.2.0</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.8.6</version>
</dependency>


Whether you like them or not, all metrics reporters come with builders, so to be consistent you need to implement one. The builder needs to collect some details about Kafka so it knows where to send the metrics. The reporter is going to be responsible for creating a format in this example, but that can be factored out, in which case it would need to be exposed on the builder. In common with all reporters, there are configuration parameters relating to default units etc. which must be exposed for the sake of consistency.

public static class KafkaReporterBuilder {

    private final MetricRegistry registry;
    private final KafkaProducer<String, byte[]> producer;
    private final String topic;
    private String name = "KafkaReporter";
    private TimeUnit timeUnit = TimeUnit.MILLISECONDS;
    private TimeUnit rateUnit = TimeUnit.SECONDS;
    private ObjectMapper mapper;

    public KafkaReporterBuilder(MetricRegistry registry,
                                KafkaProducer<String, byte[]> producer,
                                String topic) {
        this.registry = registry;
        this.producer = producer;
        this.topic = topic;
    }

    public KafkaReporterBuilder withName(String name) {
        this.name = name;
        return this;
    }

    public KafkaReporterBuilder withTimeUnit(TimeUnit timeUnit) {
        this.timeUnit = timeUnit;
        return this;
    }

    public KafkaReporterBuilder withRateUnit(TimeUnit rateUnit) {
        this.rateUnit = rateUnit;
        return this;
    }

    public KafkaReporterBuilder withMapper(ObjectMapper mapper) {
        this.mapper = mapper;
        return this;
    }

    public KafkaReporter build() {
        return new KafkaReporter(registry,
                name,
                MetricFilter.ALL,
                rateUnit,
                timeUnit,
                mapper == null ? new ObjectMapper() : mapper,
                topic,
                producer);
    }
}


Here we will use the metric name as the key of the message, because we need all messages for the same metric to go to the same partition to guarantee chronological order. We take a KafkaProducer with String keys and byte[] values: the name will be the key, and the serialised metric will be the value. For testability it's better to defer construction of the KafkaProducer to the caller, so the producer can be mocked, but KafkaProducers are really easy to construct from properties files; for instance, see the Javadoc.
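Keying by metric name preserves per-metric ordering because any deterministic hash of the key maps a given name to the same partition every time. Kafka's default partitioner uses a murmur2 hash of the key bytes modulo the partition count; the effect can be sketched with any deterministic hash (the class and method names here are illustrative, and Arrays.hashCode stands in for the real hash):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Sketch of key-based partitioning: a deterministic hash of the metric
// name always selects the same partition, so messages for one metric
// arrive in chronological order on that partition.
public class PartitionByKey {

    public static int partitionFor(String metricName, int numPartitions) {
        byte[] keyBytes = metricName.getBytes(StandardCharsets.UTF_8);
        int hash = Arrays.hashCode(keyBytes) & 0x7fffffff; // force non-negative
        return hash % numPartitions;
    }
}
```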

The next step is to implement the reporter.

public class KafkaReporter extends ScheduledReporter {

    private final KafkaProducer<String, byte[]> producer;
    private final ObjectMapper mapper;
    private final String topic;

    protected KafkaReporter(MetricRegistry registry,
                            String name,
                            MetricFilter filter,
                            TimeUnit rateUnit,
                            TimeUnit durationUnit,
                            ObjectMapper mapper,
                            String topic,
                            KafkaProducer<String, byte[]> producer) {
        super(registry, name, filter, rateUnit, durationUnit);
        this.producer = producer;
        this.mapper = mapper;
        this.topic = topic;
    }

    @Override
    public void report(SortedMap<String, Gauge> gauges,
                       SortedMap<String, Counter> counters,
                       SortedMap<String, Histogram> histograms,
                       SortedMap<String, Meter> meters,
                       SortedMap<String, Timer> timers) {
        report(gauges);
        report(counters);
        report(histograms);
        report(meters);
        report(timers);
    }

    private void report(SortedMap<String, ?> metrics) {
        metrics.entrySet()
               .stream()
               .map(kv -> toRecord(kv.getKey(), kv.getValue(), this::serialise))
               .forEach(producer::send);
    }

    private <T> ProducerRecord<String, byte[]> toRecord(String metricName, T metric, Function<T, byte[]> serialiser) {
        return new ProducerRecord<>(topic, metricName, serialiser.apply(metric));
    }

    private byte[] serialise(Object value) {
        try {
            return mapper.writeValueAsBytes(value);
        } catch (JsonProcessingException e) {
            throw new RuntimeException("Value not serialisable: " + value, e);
        }
    }
}


To use it to publish all application metrics to Kafka in CBOR format, once every five seconds:

MetricRegistry registry = ...
Properties kafkaProperties = ...
KafkaProducer<String, byte[]> producer = new KafkaProducer<>(kafkaProperties);
KafkaReporter reporter = new KafkaReporter.KafkaReporterBuilder(registry, producer, "topic")
        .withMapper(new ObjectMapper(new CBORFactory()))
        .build();
reporter.start(5, TimeUnit.SECONDS);
...
reporter.stop();


A Quick Look at RoaringBitmap

This article is an introduction to the data structures found in the RoaringBitmap library, which I have been making extensive use of recently. I wrote some time ago about the basic idea of bitmap indices, which are used in various databases and search engines, with the caveat that no traditional implementation is optimal across all data scenarios (in terms of size of the data set, sparsity, cardinalities of attributes and global sort orders of data sets with respect to specific attributes). RoaringBitmap is a dynamic data structure which aims to be that one-size-fits-all solution across all scenarios.

Containers

A RoaringBitmap should be thought of as a set of unsigned integers, consisting of containers which cover disjoint subsets. Each container covers a range of $2^{16}$ values and is indexed by a 16 bit key: the 16 most significant bits of the values it contains. Within a container only the low 16 bits need to be stored, as Java shorts, so in the worst case it takes only 16 bits to represent a single 32 bit value. The choice of container size also means that, in the worst case, a container will still fit in the L1 cache of a modern processor.
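The decomposition of a 32 bit value across the container structure can be sketched in a few lines (the class and method names here are mine, not the library's):

```java
// How a 32 bit value maps onto the container structure: the high 16 bits
// select the container (the key), the low 16 bits are what gets stored
// inside the container as a Java short.
public class ContainerKey {

    public static short highBits(int value) {
        return (short) (value >>> 16); // the container key
    }

    public static short lowBits(int value) {
        return (short) value; // the value stored inside the container
    }

    public static int recombine(short high, short low) {
        return ((high & 0xFFFF) << 16) | (low & 0xFFFF);
    }
}
```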

The implementation of the container covering a disjoint subset is free to vary between RunContainer, BitmapContainer and ArrayContainer, depending entirely on properties of the subset. When inserting data into a RoaringBitmap, it is decided whether to create a new container or to mutate an existing container, depending on whether the values fit in the range covered by an existing container's key. When performing a set operation, for instance intersecting two bitmaps or computing their symmetric difference, a new RoaringBitmap is created by operating container by container, and it is decided dynamically which container implementation is best suited for each result. For cases where it is too difficult to determine the best implementation automatically, the method runOptimize is available for the programmer to invoke explicitly.

When querying a RoaringBitmap, the query can be executed container by container (which incidentally makes queries naturally parallelisable, though this hasn't been implemented yet), and each pair in the cartesian product of container implementations must be implemented separately. This is manageable because there are only three implementations, and there won't be any more. There is less work to do for symmetric operations, such as union and intersection, than for asymmetric operations such as contains.

RunContainer

When a section of a bitmap contains long runs of consecutive values, the best choice of container is run length encoding. The implementation of RunContainer is simple and very compact. It consists of an array of shorts (not ints: the most significant 16 bits are in the container key) where the values at even indices are the starts of runs, and the values at odd indices are the lengths of the respective runs. Membership queries can be implemented simply using a binary search, and quantile queries can be answered in constant time. Computing the container's cardinality requires a pass over the entire run array.
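A membership query over such a run array can be sketched as a binary search over the run starts (this is a sketch under the assumption that a run `(start, length)` covers the values `[start, start + length]`; the class and method names are mine, not the library's):

```java
// Sketch of membership in a run length encoded container: runs[2k] is the
// start of the k-th run, runs[2k+1] its length, and the runs are sorted,
// so a binary search over the starts answers a membership query.
public class RunMembership {

    public static boolean contains(short[] runs, short value) {
        int v = value & 0xFFFF; // treat the short as unsigned
        int lo = 0;
        int hi = runs.length / 2 - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            int start = runs[2 * mid] & 0xFFFF;
            int length = runs[2 * mid + 1] & 0xFFFF;
            if (v < start) {
                hi = mid - 1;
            } else if (v > start + length) {
                lo = mid + 1;
            } else {
                return true; // v falls inside this run
            }
        }
        return false;
    }
}
```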

ArrayContainer

When data is sparse within a section of the bitmap, the best implementation is an array (short[]). For very sparse data this isn't theoretically optimal, but for most cases it is very good, and the array will fit in L1 cache for mechanical sympathy. Cardinality is very fast because it is precomputed, and operations are fast by virtue of the small size of the set, regardless of their precise implementation (that said, the actual implementations are fast). As an ArrayContainer fills up, it is converted to a bitmap for better compression.

BitmapContainer

BitmapContainer is the classic implementation of a bitset. There is a fixed length long[] which should be interpreted bitwise, alongside a precomputed cardinality. Operations on BitmapContainers tend to be very fast, despite typically touching each element in the array, because they fit in L1 cache and make extensive use of Java intrinsics. If you find a method name in here and run your JVM on a reasonably modern processor, your code will quickly get optimised by the JVM, sometimes even down to a single instruction. A much-hackneyed example, explained better by Michael Barker quite some time ago, is Long.bitCount, which translates to the single instruction popcnt and has various uses when operating on BitmapContainers. When intersecting with another container, the cardinality can only decrease or remain the same, so there is a chance an ArrayContainer will be produced.
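The bitset layout and the popcnt-based cardinality can be sketched as follows (a sketch only, with my own names; a container range of $2^{16}$ bits needs exactly 1024 longs):

```java
// Sketch of a bitset container: a fixed long[1024] covers 2^16 bits, and
// cardinality is a sum of Long.bitCount calls, each of which HotSpot
// compiles down to a single popcnt instruction on modern processors.
public class BitmapCardinality {

    public static long[] emptyContainer() {
        return new long[1024]; // 1024 * 64 = 65536 bits
    }

    public static void set(long[] bits, int value) {
        bits[value >>> 6] |= 1L << value; // shift is taken mod 64
    }

    public static int cardinality(long[] bits) {
        int count = 0;
        for (long word : bits) {
            count += Long.bitCount(word);
        }
        return count;
    }

    // Convenience for demonstration: cardinality after setting some values.
    public static int cardinalityAfterSetting(int... values) {
        long[] bits = emptyContainer();
        for (int v : values) {
            set(bits, v);
        }
        return cardinality(bits);
    }
}
```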

Examples

There is a really nice Scala project on github which functions as a DSL for creating RoaringBitmaps – it allows you to create an equality encoded (see my previous bitmap index post) RoaringBitmap in a very fluid way. The project is here.

I have implemented bit slice indices, both equality and range encoded, in a data quality tool I am building. That project is hosted here. Below is an implementation of a range encoded bit slice index as an example of how to work with RoaringBitmaps.

public class RangeEncodedOptBitSliceIndex implements RoaringIndex {

    private final int[] basis;
    private final int[] cumulativeBasis;
    private final RoaringBitmap[][] bitslice;

    public RangeEncodedOptBitSliceIndex(ProjectionIndex projectionIndex, int[] basis) {
        this.basis = basis;
        this.cumulativeBasis = accumulateBasis(basis);
        this.bitslice = BitSlices.createRangeEncodedBitSlice(projectionIndex, basis);
    }

    @Override
    public RoaringBitmap whereEqual(int code, RoaringBitmap existence) {
        RoaringBitmap result = existence.clone();
        int[] expansion = expand(code, cumulativeBasis);
        for (int i = 0; i < cumulativeBasis.length; ++i) {
            int component = expansion[i];
            if (component == 0) {
                result.and(bitslice[i][0]);
            } else if (component == basis[i] - 1) {
                result.andNot(bitslice[i][basis[i] - 2]);
            } else {
                result.and(FastAggregation.xor(bitslice[i][component], bitslice[i][component - 1]));
            }
        }
        return result;
    }

    @Override
    public RoaringBitmap whereNotEqual(int code, RoaringBitmap existence) {
        RoaringBitmap inequality = existence.clone();
        inequality.andNot(whereEqual(code, existence));
        return inequality;
    }

    @Override
    public RoaringBitmap whereLessThan(int code, RoaringBitmap existence) {
        return whereLessThanOrEqual(code - 1, existence);
    }

    @Override
    public RoaringBitmap whereLessThanOrEqual(int code, RoaringBitmap existence) {
        final int[] expansion = expand(code, cumulativeBasis);
        final int firstIndex = cumulativeBasis.length - 1;
        int component = expansion[firstIndex];
        int threshold = basis[firstIndex] - 1;
        RoaringBitmap result = component < threshold ? bitslice[firstIndex][component].clone() : existence.clone();
        for (int i = firstIndex - 1; i >= 0; --i) {
            component = expansion[i];
            threshold = basis[i] - 1;
            if (component != threshold) {
                result.and(bitslice[i][component]);
            }
            if (component != 0) {
                result.or(bitslice[i][component - 1]);
            }
        }
        return result;
    }

    @Override
    public RoaringBitmap whereGreaterThan(int code, RoaringBitmap existence) {
        RoaringBitmap result = existence.clone();
        result.andNot(whereLessThanOrEqual(code, existence));
        return result;
    }

    @Override
    public RoaringBitmap whereGreaterThanOrEqual(int code, RoaringBitmap existence) {
        RoaringBitmap result = existence.clone();
        result.andNot(whereLessThan(code, existence));
        return result;
    }
}


The library has been implemented under an Apache License by several contributors, the most significant contributions coming from computer science researcher Daniel Lemire, who presented RoaringBitmap at Spark Summit 2017. The project site is here and the research paper behind the library is freely available.

How a Bitmap Index Works

Bitmap indices are used in various data technologies for efficient query processing. At a high level, a bitmap index can be thought of as a physical materialisation of a set of predicates over a data set, is naturally columnar and particularly good for multidimensional boolean query processing. PostgreSQL materialises a bitmap index on the fly from query predicates when there are multiple attributes constrained by a query (for instance in a compound where clause). The filter caches in ElasticSearch and Solr are implemented as bitmap indices on filter predicates over document IDs. Pilosa is a distributed bitmap index query engine built in Go, with a Java client library.

Bitmap indices are not a one-size-fits-all data structure, and in degenerate cases can take up more space than the data itself; using a bitmap index in favour of a B-tree variant on a primary key should be considered an abuse. Various flavours of bitmap implementation exist, but the emerging de facto standard is RoaringBitmap led by Daniel Lemire. RoaringBitmap is so ubiquitous that it is handled as a special case by KryoSerializer – no registration required if you want to use Spark to build indices.

Naive Bitmap Index

To introduce the concept, let's build a naive uncompressed bitmap index. Say you have a data set and some way of assigning an integer index – the record index from now on – to each record (the simplest example would be the line number in a CSV file), and you have chosen some attributes to index. For each distinct value of each indexed attribute, compute the set of indices of records matching the predicate, and for each attribute create a map from the attribute values to the sets of corresponding record indices. The format used for the set doesn't matter yet: either an int[] or a BitSet could be suitable, depending on properties of the data and the predicate (the cardinality of the data set, its sort order, the cardinality of the records matching the predicate, and so on). Using a BitSet, encoding the nth record index as the nth bit, can be 32x smaller than an int[] in some cases, but can also be much larger when an attribute has many distinct values, which results in sparse bitmaps.
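Building such an index is a single pass over the column; a minimal sketch using java.util.BitSet (the class name is mine):

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

// A naive, uncompressed bitmap index on a single attribute: one BitSet per
// distinct value, with the n-th bit set when the n-th record matches.
public class NaiveBitmapIndex {

    public static Map<String, BitSet> index(String[] column) {
        Map<String, BitSet> bitmaps = new HashMap<>();
        for (int record = 0; record < column.length; ++record) {
            bitmaps.computeIfAbsent(column[record], v -> new BitSet()).set(record);
        }
        return bitmaps;
    }
}
```

Indexing the Country column of the example below yields the bitmap 10001 (records 0 and 4) for "GB".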

The tables below demonstrate the data structure. The first table represents a simple data set. The second and third tables represent bitmap indices on the data set, indexing the Country and Sector attributes.

Record Index  Country  Sector
0             GB       Financials
1             DE       Manufacturing
2             FR       Agriculturals
3             FR       Financials
4             GB       Energies

The bitmap index consists of the two tables below:

Country
Value  Record Indices  Bitmap
GB     0,4             0b10001
DE     1               0b00010
FR     2,3             0b01100

Sector
Value          Record Indices  Bitmap
Financials     0,3             0b01001
Manufacturing  1               0b00010
Agriculturals  2               0b00100
Energies       4               0b10000

It’s worth noting three patterns in the tables above:

1. The number of bitmaps for an attribute is the attribute’s distinct count.
2. There are typically runs of zeroes or ones, and the lengths of these runs depend on the sort order of the data with respect to the indexed attribute.
3. A bitmap index on the record index attribute itself would be as large as the data, and a much less concise representation. Bitmap indices do not compete with B-tree indices for primary key attributes.

Query Evaluation

This simple scheme effectively materialises the result of predicates on the data and is particularly appealing because these predicates can be composed by performing efficient logical operations on the bitmaps. Query evaluation is most efficient when both the number of bitmaps and size of each bitmap are as small as possible. An efficient query plan will touch as few bitmaps as possible, regardless of bitmap size. Here are some examples of queries and their evaluation plans.

Single Attribute Union

select *
from records
where country = "GB" or country = "FR"

1. Access the country index and read the bitmaps for the values “GB” and “FR”
2. Apply a bitwise logical OR to get a new bitmap
3. Access the data stored by record id with the retrieved indices

Multi Attribute Intersection

select *
from records
where country = "GB" and sector = "Energies"

1. Access the country index and read the bitmap for the value “GB”
2. Access the sector index and read the bitmap for the value “Energies”
3. Apply a bitwise logical AND to get a new bitmap
4. Access the data stored by record id with the retrieved indices

Single Attribute Except Clause

select *
from records
where country <> "GB"

1. Access the country index and read the bitmap for the value “GB”
2. Negate the bitmap
3. Access the data stored by record id with the retrieved indices
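The bitmap steps of these plans can be sketched with java.util.BitSet over the example index (class and method names are mine):

```java
import java.util.BitSet;

// The bitmap operations behind the three query plans: union for an OR of
// equality predicates, intersection for an AND across attributes, and a
// flip over the record count for an except clause.
public class QueryEvaluation {

    public static BitSet of(int... recordIndices) {
        BitSet bitmap = new BitSet();
        for (int i : recordIndices) {
            bitmap.set(i);
        }
        return bitmap;
    }

    public static BitSet union(BitSet left, BitSet right) {
        BitSet result = (BitSet) left.clone();
        result.or(right); // e.g. country = "GB" or country = "FR"
        return result;
    }

    public static BitSet intersection(BitSet left, BitSet right) {
        BitSet result = (BitSet) left.clone();
        result.and(right); // e.g. country = "GB" and sector = "Energies"
        return result;
    }

    public static BitSet except(BitSet bitmap, int recordCount) {
        BitSet result = (BitSet) bitmap.clone();
        result.flip(0, recordCount); // e.g. country <> "GB"
        return result;
    }
}
```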

The index lends itself to aggregate queries (and aggregates on predicates):

Count

select country, count(*)
from records
group by country

1. Access the country index
2. Iterate over the keys
   • Count the bits in each bitmap, storing the count against the key in a map

Count with Filter

select country, count(*)
from records
where sector <> "Financials"
group by country

1. Access the sector index and read the bitmap for “Financials”
2. Negate the bitmap, calling the negated bitmap without_financials
3. Access the country index
4. Iterate over the keys
   • Intersect each bitmap with without_financials
   • Count the bits in the resultant bitmap, storing the count against the key in a map

The two main factors affecting the performance of query processing are the number of bitmaps that need to be accessed, and the size of each bitmap (which concerns both memory/disk usage and CPU utilisation) – both should be minimised. Choosing the correct encoding for expected queries (one should expect range queries for dates, but equality and set membership queries for enumerations) can reduce the number of bitmap accesses required to evaluate a query; whereas compression can reduce the bitmap sizes.

Encoding

Only predicates for equality are efficient with the scheme so far. Suppose there is a well defined sort order for an attribute $a$. In order to evaluate

select *
from records
where a > x and a < y


every bitmap in the range $(x, y)$ would have to be accessed and united. This could easily become a performance bottleneck. The encoding can be adapted to evaluate range predicates. Instead of setting the nth bit if the nth record has $a = y$ (equality encoding), it could be set if the nth record has $a \le y$ (range encoding). In this encoding only one bitmap needs to be accessed to evaluate a predicate like $a \le y$, rather than the $|[a_{min}, y]|$ bitmaps required with the equality encoding. In order to evaluate $a \in [x, y]$, only the bitmaps for $y$ and the predecessor of $x$ are needed. Not much is lost in supporting equality predicates under a range encoding either: only the bitmap for the value and that of its predecessor are required.
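Range encoding over a small integer attribute can be sketched as follows (a sketch with my own names; `bitmaps[v]` has the nth bit set when record n has value at most v, so $a \in [x, y]$ is one andNot of two bitmaps):

```java
import java.util.BitSet;

// Sketch of a range encoded index: bitmaps[v] marks records with value <= v.
// The predicate a <= y is one lookup; x <= a <= y is bitmaps[y] minus
// bitmaps[x - 1].
public class RangeEncodedIndex {

    public static BitSet[] build(int[] column, int maxValue) {
        BitSet[] bitmaps = new BitSet[maxValue + 1];
        for (int v = 0; v <= maxValue; ++v) {
            bitmaps[v] = new BitSet();
        }
        for (int record = 0; record < column.length; ++record) {
            // a record with value c contributes to every bitmap from c upward
            for (int v = column[record]; v <= maxValue; ++v) {
                bitmaps[v].set(record);
            }
        }
        return bitmaps;
    }

    public static BitSet between(BitSet[] bitmaps, int x, int y) {
        BitSet result = (BitSet) bitmaps[y].clone();
        if (x > 0) {
            result.andNot(bitmaps[x - 1]); // remove records with value < x
        }
        return result;
    }
}
```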

Compression

The scheme presented so far works as a toy model but the bitmaps are just too large to be practical. A bitmap index on a single attribute with $m$ distinct values over a data set consisting of $n$ records, using a BitSet would consume $mn$ bits, using an int[] would consume $32mn$ bits. Therefore, some kind of compression is required to make the approach feasible.

Often in real world data sets, there are attributes with skewed histograms, a phenomenon known as Zipf’s Law. In a typical financial data set exhibiting this property, most trades will be in very few instruments (EURGBP for instance), and there will be very few trades in the rest of the traded instruments. The bitmaps for the values at both the head and tail of these histograms become less random and therefore compressible. At the head, the bits are mostly set; at the tail mostly unset. Compression seeks to exploit this.

One well understood compression strategy is run length encoding. If there are $m$ bits set in a row, followed by $n$ unset bits, followed by another $p$ set bits, the bit string $1^m0^n1^p$ of length $m + n + p$ could be replaced by the counts $(m, n, p)$, which is typically a lot smaller (though worse if the runs are very short). Since there are only two possible values, only the runs of set bits need to be represented: storing the start index and length of each run, the bitmap becomes the set of tuples $\{(0, m), (m+n, p)\}$. Notably, the sort order of the data with respect to the indexed attribute affects the compression ratio for run length encoding, because it can make runs longer or shorter.
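Extracting the (start, length) tuples from a bitmap is a single scan; a minimal sketch (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

// Run length encoding of the set bits of a bitmap as (start, length)
// pairs: only the runs of ones are stored, as described above.
public class RunLengthEncoding {

    public static List<int[]> encode(boolean[] bits) {
        List<int[]> runs = new ArrayList<>();
        int i = 0;
        while (i < bits.length) {
            if (bits[i]) {
                int start = i;
                while (i < bits.length && bits[i]) {
                    ++i; // consume the run of ones
                }
                runs.add(new int[] {start, i - start});
            } else {
                ++i; // zeroes are implicit, nothing to store
            }
        }
        return runs;
    }
}
```

Encoding the bit string 11001 produces the tuples (0, 2) and (4, 1), matching the $\{(0, m), (m+n, p)\}$ form above.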

In reality, run length encoding on bits is not practical since modern processors operate on words not bits. Instead of counting runs of bits, several algorithms count runs of bytes (BBC – Byte-aligned Bitmap Compression) or words (WAH – Word Aligned Hybrid, EWAH – Enhanced Word Aligned Hybrid). These algorithms are faster at the expense of reduced compression. In these schemes compression is improved when there are long runs of clean words (where all the bits in the word are the same), and the compression ratio is degraded when there are many dirty words, which cannot be compressed at all. The number of clean words and hence the compression ratio for a bitmap index depends on the order of the record index attribute. However, an optimal sort order with respect to an index on one attribute will generally be sub-optimal for an index on another.

In order to maintain the attractive boolean query processing capabilities, the OR, AND, XOR, NAND, NOR and NOT operations each need to be redefined to operate on the compressed form of the bitmap, and in the case of algorithms like EWAH these adapted operations are more efficient – more CPU- and cache-friendly – than the same operations on naive uncompressed bitmaps.

Previously I was ambivalent between the use of BitSet and int[] to encode the sets of record indices (Set was not proposed because of the inherent cost of wrapped integers). This is because neither type is appropriate in all cases. An uncompressed BitSet incurs high memory usage for a large data set, even if most of the bits are unset, which is often compressible at the word level. With very sparse data, when most of the bits would be zero, it takes less space to store the record indices in an int[] instead. Dynamically choosing between integers, uncompressed bits, and compressed words is roughly how the RoaringBitmap library optimises performance. More about that here.
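The array-or-bitset choice can be illustrated with a simplified sketch. The threshold of 4096 matches Roaring's array-to-bitmap cutover for its 2^16-bit containers, at which point both representations cost the same (Roaring stores 16-bit values per container; ints are used here for simplicity, and the class is hypothetical):

```java
import java.util.BitSet;

public class HybridSet {

    // Below the threshold a sorted array is smaller;
    // above it, one bit per possible element wins
    static final int THRESHOLD = 4096;

    static Object encode(int[] sortedIndices, int universeSize) {
        if (sortedIndices.length <= THRESHOLD) {
            return sortedIndices; // sparse: store the indices themselves
        }
        BitSet bits = new BitSet(universeSize); // dense: one bit per element
        for (int i : sortedIndices) {
            bits.set(i);
        }
        return bits;
    }

    public static void main(String[] args) {
        int[] sparse = {1, 5, 9};
        System.out.println(encode(sparse, 1 << 16).getClass().getSimpleName());
    }
}
```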

Reducing the Number of Bitmaps

Query evaluation performance degrades with the number of bitmaps that need to be accessed. Choosing the right encoding for query patterns and reducing the size of each bitmap are both key for performance and practicality, but it can help save storage space to have fewer bitmaps per attribute. So far each value has been encoded as a single bitmap, and each bitmap has been associated with only one value. The total number of bitmaps can be reduced by applying a factorisation on values with a bitmap per factor, so each bitmap will be associated with several values and each value will be associated with several bitmaps. To this end, mapping values into integers by some means would allow integer arithmetic to be used for index construction. A simple mechanism to map a set of objects to integers would be a dictionary, a more complicated but better mechanism might be a perfect hash function like CHM (an order preserving transformation!) or BZW (which trades order preservation for better compression).

Bit Slicing

Supposing a mapping between a set of values and the natural numbers has been chosen and implemented, we can define a basis to factorise each value. The number 571, for example, can be written down as either $5*10^2 + 7*10^1 + 1*10^0$ in base-10 or $1*8^3 + 0*8^2 + 7*8^1 + 3*8^0$ in base-8. Base-10 draws its coefficients from a larger set, whereas base-8 requires more digits. Bit slice indexing is analogous to this arithmetic expansion of integers: each power of the base maps to a set, or slice, of bitmaps, and each digit to a bitmap within the slice.
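The expansion of a value into digits in an arbitrary base is just repeated division and remainder. A minimal sketch (the class and method names are mine):

```java
import java.util.Arrays;

public class BaseExpansion {

    // Expand value into numDigits digits in the given base,
    // most significant digit first
    static int[] digits(int value, int base, int numDigits) {
        int[] digits = new int[numDigits];
        for (int i = numDigits - 1; i >= 0; --i) {
            digits[i] = value % base;
            value /= base;
        }
        return digits;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(digits(571, 10, 3))); // [5, 7, 1]
        System.out.println(Arrays.toString(digits(571, 8, 4)));  // [1, 0, 7, 3]
    }
}
```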

Mapping a set of objects $S$ into base $n$, where $m \approx \log_n(|S|)$ components are needed, we can use $mn$ bitmaps to construct the index. The bases of the components do not need to be identical (to work with date buckets we could choose to have four quarters, three months, and 31 days, for example), but if they are, the index is said to be uniform. An example of a uniform base-3 index on currencies is below:

Records

  Record Index | Currency
  0            | USD
  1            | GBP
  2            | GBP
  3            | EUR
  4            | CHF

Currency Encoding

  Currency | Code | Base 3 Expansion
  USD      | 0    | 0*3 + 0
  GBP      | 1    | 0*3 + 1
  EUR      | 2    | 0*3 + 2
  CHF      | 3    | 1*3 + 0

Single Component Currency Index

  Currency | Bitmap
  USD      | 0x1
  GBP      | 0x110
  EUR      | 0x1000
  CHF      | 0x10000

Bit Sliced Currency Index

  Power of 3 | Remainder | Bitmap
  1          | 0         | 0x1111
  1          | 1         | 0x10000
  1          | 2         | 0x0
  0          | 0         | 0x10001
  0          | 1         | 0x110
  0          | 2         | 0x1000

Here we have actually used six bitmaps instead of four, but the factorisation comes into its own when more currencies are added. With a 2-component base-3 index, we can use six bitmaps to encode up to nine values.

To evaluate a query, map the value to its integer representation, factorise the number with respect to the bases of the index, and then intersect at most $m$ bitmaps together. This is slower than a single bitmap access, but as well as trading query performance for storage space it has some useful properties when data is hierarchically bucketed. To evaluate a query at the top level of bucketing, only one bitmap access is required; at the second level, two bitmap accesses are required, and so on. So there is a trade off: degraded performance for granular values, improved performance for roll-ups.
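The whole procedure – encode, slice, intersect – can be sketched for the currency example above. This is a toy, using a single long per bitmap and hard-coding a two-component base-3 index (a production index would use compressed bitmaps per slice):

```java
public class BitSlicedIndex {

    static final int BASE = 3;
    static final int COMPONENTS = 2;

    // slices[power][remainder] is the bitmap of records whose encoded
    // value has that remainder at that power of the base
    final long[][] slices = new long[COMPONENTS][BASE];

    void add(int recordIndex, int encodedValue) {
        for (int power = 0; power < COMPONENTS; ++power) {
            slices[power][encodedValue % BASE] |= 1L << recordIndex;
            encodedValue /= BASE;
        }
    }

    // intersect one slice per component to recover the records for a value
    long query(int encodedValue) {
        long result = -1L;
        for (int power = 0; power < COMPONENTS; ++power) {
            result &= slices[power][encodedValue % BASE];
            encodedValue /= BASE;
        }
        return result;
    }

    public static void main(String[] args) {
        // USD = 0, GBP = 1, EUR = 2, CHF = 3, as in the encoding table
        BitSlicedIndex index = new BitSlicedIndex();
        int[] currencyByRecord = {0, 1, 1, 2, 3};
        for (int record = 0; record < currencyByRecord.length; ++record) {
            index.add(record, currencyByRecord[record]);
        }
        System.out.println(Long.toBinaryString(index.query(1))); // GBP -> records 1 and 2
        System.out.println(Long.toBinaryString(index.query(3))); // CHF -> record 4
    }
}
```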

Here are some links on the topic that I found interesting:

Roaring Bitmap – http://roaringbitmap.org/
Bitmap Index Design and Evaluation – http://www.comp.nus.edu.sg/~chancy/sigmod98.pdf
Sorting improves word-aligned bitmap indexes – https://arxiv.org/pdf/0901.3751.pdf
Consistently faster and smaller compressed bitmaps with Roaring – https://arxiv.org/pdf/1603.06549.pdf

Advanced AOP with Guice Type Listeners

There are cross-cutting concerns, or aspects, in any non-trivial program. These blocks of code tend to be repetitive, unrelated to business logic, and don’t lend themselves to being factored out. If you have ever added the same statement at the start of several methods, you have encountered an aspect. For instance, audit, instrumentation, authentication, authorisation could all be considered aspects. If you’d use a sledgehammer to crack a walnut, Spring can help you with AOP by using proxies. Guice can also perform AOP out of the box allowing you to bind implementations of MethodInterceptor. In fact, tutorials were being written about doing that before I wrote my first line of Java. However, it gets more complicated when you need a separate (potentially stateful) interceptor per usage of an annotation, making it infeasible to bind the interceptor statically. If only you could bind the interceptor dynamically, when the intercepted type is first requested, it would be so easy to do. This is exactly what the interface TypeListener allows.

TypeListener is a simple interface with a single method

  <I> void hear(TypeLiteral<I> type, TypeEncounter<I> encounter);


This method gets invoked the first time a type requested for injection is encountered. At this point you can introspect the TypeLiteral and bind a new MethodInterceptor instance to the TypeEncounter. The mechanics of detecting and binding requested interception is common, so factor it out into a base listener class, deferring creation of the MethodInterceptor until later.

abstract class MethodInterceptorBinder implements TypeListener {

    @Override
    public <T> void hear(TypeLiteral<T> literal, TypeEncounter<T> encounter) {
        Arrays.stream(literal.getRawType().getDeclaredMethods())
              .filter(m -> !m.isSynthetic())
              .forEach(m -> bindInterceptor(m, encounter));
    }

    private void bindInterceptor(Method method, TypeEncounter<?> encounter) {
        final MethodInterceptor interceptor = getInterceptor(method);
        if (interceptor != null) {
            encounter.bindInterceptor(Matchers.only(method), interceptor);
        }
    }

    protected abstract MethodInterceptor getInterceptor(Method method);
}


Suppose we would like to audit calls to a method, associating an audit topic with each method. Then we can just extend MethodInterceptorBinder as below, and bind the listener in a module somewhere. Every method annotated for audit will be audited, and audited separately.

public class AuditBinder extends MethodInterceptorBinder {

    private final Auditor auditor;

    public AuditBinder(Auditor auditor) {
        this.auditor = auditor;
    }

    @Override
    protected MethodInterceptor getInterceptor(Method method) {
        Audited audited = method.getAnnotation(Audited.class);
        return null != audited ?
               new AuditingInterceptor(auditor, audited.topic()) :
               null;
    }
}

public class AuditModule extends AbstractModule {

    private final Auditor auditor;

    public AuditModule(Auditor auditor) {
        this.auditor = auditor;
    }

    @Override
    protected void configure() {
        bindListener(Matchers.any(), new AuditBinder(auditor));
    }
}


Lifecycle Management with Guice Provision Listeners

Typically in a Java web application you will have services with resources which need lifecycle management – at the very least, closing gracefully at shutdown. If you'd use a sledgehammer to crack a walnut, there's Spring, which will do this for you with init and destroy methods. I'll explain why I dislike Spring in another post. You could also add a shutdown hook to every class you implement, but this is repetitive, and what happens if you extend a class which already has its own shutdown hook? I like Guice as a DI framework because it is minimal and type-safe, and it interoperates with JSR-330, but it doesn't contain lifecycle management functionality. Since Guice 4.0, this has been very easy to add as a DIY add-on using a ProvisionListener.

The ProvisionListener interface has a single method, void onProvision(ProvisionInvocation<T> provisionInvocation), which gets called each time an object is provisioned. This is your chance to check whether the instance needs closing and whether it should live for the entire application lifetime. For the sake of simplicity, the listener below just checks that the instance implements Closeable and that the provision is an eager or scoped singleton, but you can execute arbitrary Java code here to do something more sophisticated.

public class CloseableListener implements ProvisionListener {

    private final LifeCycleObjectRepository repo;

    public CloseableListener(LifeCycleObjectRepository repo) {
        this.repo = repo;
    }

    @Override
    public <T> void onProvision(ProvisionInvocation<T> provisionInvocation) {
        T provision = provisionInvocation.provision();
        if (provision instanceof Closeable && shouldManage(provisionInvocation)) {
            repo.register((Closeable) provision);
        }
    }

    private boolean shouldManage(ProvisionInvocation<?> provisionInvocation) {
        return provisionInvocation.getBinding().acceptScopingVisitor(new BindingScopingVisitor<Boolean>() {
            @Override
            public Boolean visitEagerSingleton() {
                return true;
            }

            @Override
            public Boolean visitScope(Scope scope) {
                return scope == Scopes.SINGLETON;
            }

            @Override
            public Boolean visitScopeAnnotation(Class<? extends Annotation> scopeAnnotation) {
                return scopeAnnotation.isAssignableFrom(Singleton.class);
            }

            @Override
            public Boolean visitNoScoping() {
                return false;
            }
        });
    }
}


Here LifeCycleObjectRepository has the responsibility of registering and holding onto an instance until it is closed itself.

public class LifeCycleObjectRepository {

    private static final Logger LOGGER = LoggerFactory.getLogger(LifeCycleObjectRepository.class);

    private final Set<Closeable> closeableObjects = Sets.newConcurrentHashSet();

    void register(Closeable closeable) {
        LOGGER.info("Register {} for close at shutdown", closeable);
        closeableObjects.add(closeable);
    }

    public synchronized void closeAll() {
        closeableObjects.forEach(c -> {
            try {
                LOGGER.info("Close {}", c);
                c.close();
            } catch (IOException e) {
                LOGGER.error("Error closing object", e);
            }
        });
        closeableObjects.clear();
    }
}


This is almost a complete solution; now we need to make sure the LifeCycleObjectRepository is closed when we receive a SIGTERM, and register the CloseableListener so it can collect provisions of singletons, without leaking these details everywhere. To stop the details of the CloseableListener leaking, we can wrap it in a module which binds the listener and installs the client module.

public class LifeCycleAwareModule extends AbstractModule {

    private final Module module;
    private final LifeCycleObjectRepository repo;

    protected LifeCycleAwareModule(LifeCycleObjectRepository repo, Module module) {
        this.repo = repo;
        this.module = module;
    }

    @Override
    protected void configure() {
        bindListener(Matchers.any(), new CloseableListener(repo));
        install(module);
    }
}


Finally, implement a LifeCycleManager to own – and close in a shutdown hook – a LifeCycleObjectRepository. The LifeCycleManager receives all Guice modules required to bind the application, and wraps them with the LifeCycleObjectRepository to enable lifecycle management.

public class LifeCycleManager {

    private final LifeCycleObjectRepository repo = new LifeCycleObjectRepository();
    private final Injector injector;

    public LifeCycleManager(Module... modules) {
        this(ImmutableList.copyOf(modules));
    }

    public LifeCycleManager(Iterable<Module> modules) {
        this.injector = Guice.createInjector(enableLifeCycleManagement(repo, modules));
    }

    public <T> T getInstance(Class<T> type) {
        return injector.getInstance(type);
    }