Welcome *standalone* Eclipse MicroProfile specifications

Introduction to the standalone specifications

Almost two years ago I wrote an introduction to the Eclipse MicroProfile specification; you can find it here (part 1) and here (part 2). Since version 3.3 (we are now at version 4.0), four new specifications have become available, and I would like to introduce these to you as well.

To be a MicroProfile-compliant implementation, a vendor must implement all 12 core specifications. To prevent the core platform from growing too big, and thus becoming an obstacle for any new MicroProfile implementation, all new MicroProfile specifications are placed outside this core platform.

Up until now there are four of these so-called standalone specifications:

  1. Reactive Streams Operators
  2. Reactive Messaging
  3. Context Propagation
  4. GraphQL

1. Reactive Streams Operators

The Reactive Streams Operators specification provides flow control and error handling when subscribing to, and processing streams of, events. It bundles a reactive streams API with a set of standard operators for streams.

The idea behind this specification is to provide the equivalent of java.util.stream, with methods like map, flatMap, filter and forEach, but asynchronous, with support for back-pressure and propagation of error and completion signals.

A simple example:

PublisherBuilder<String> publisherBuilder = ReactiveStreams.of("these", "are", "microprofile", "reactive", "stream", "operators")
        .onError(System.out::println)
        .onComplete(() -> System.out.println("Publisher finished!"));

SubscriberBuilder<String, List<String>> subscriberBuilder = ReactiveStreams.<String>builder()
        .map(String::toUpperCase)
        .filter(s -> s.length() > 5)
        .toList();

CompletionRunner<List<String>> completionRunner = publisherBuilder.to(subscriberBuilder);

completionRunner.run().whenComplete((strings, throwable) -> {
    System.out.println("Strings with more than 5 characters: " + strings);
});

This example creates a publisher which streams a list of words to a subscriber. The subscriber converts the words to uppercase and keeps only the words with more than 5 characters. When the stream is complete, the result is printed to the console.

The publisher also has onError and onComplete handlers: when, for some reason, the publisher fails, the exception is printed to the console; otherwise the text ‘Publisher finished!’ is printed.

The result of this example is:

Publisher finished!
Strings with more than 5 characters: [MICROPROFILE, REACTIVE, STREAM, OPERATORS]
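For comparison, the same transformation written with the synchronous java.util.stream API looks like this, without back-pressure or error and completion signals:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StreamComparison {

    public static List<String> longWordsUpperCase() {
        return Stream.of("these", "are", "microprofile", "reactive", "stream", "operators")
                .map(String::toUpperCase)     // same operator as in the reactive pipeline
                .filter(s -> s.length() > 5)  // keep words with more than 5 characters
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // prints: [MICROPROFILE, REACTIVE, STREAM, OPERATORS]
        System.out.println(longWordsUpperCase());
    }
}
```

The operators line up one to one; what Reactive Streams Operators adds is the asynchronous execution model around them.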

Currently there are three implementations of this specification:

Akka Streams
Zero Dependency
SmallRye Reactive Streams Operators

2. Reactive Messaging

Reactive Messaging is an easy-to-use way to send, receive, and process messages to and from streams of messages or events. It does this by connecting methods and connectors through named channels.

Under the hood it uses the Reactive Streams Operators specification described in the previous section.
For example, to calculate the square of an incoming stream of numbers and send the result out on another channel, we can use the following snippet of code:

public class ReactiveMessagingExample {

    @Channel("destination")
    @OnOverflow(Strategy.DROP)
    Emitter<Integer> emitter;

    @Outgoing("source")
    public PublisherBuilder<Integer> source() {
       return ReactiveStreams.of(1,2,3,4,5,6,7,8,9,10);
    }

    @Incoming("source")
    public void square(Integer input) {
       emitter.send(input * input);
    }

    @Incoming("destination")
    public void result(Integer result) {
       System.out.println("Result of square is: " + result);
    }
}

In this example a stream of Integers is published to a channel named source, using the @Outgoing annotation on the source method. Another method, square, listens to this channel using the @Incoming annotation and emits the result of the square function to the destination channel using an emitter.

A third method, result , listens to this channel and prints the result, as follows:

Result of square is: 1
Result of square is: 4
Result of square is: 9
Result of square is: 16
Result of square is: 25
Result of square is: 36
Result of square is: 49
Result of square is: 64
Result of square is: 81
Result of square is: 100

The emitter has an @OnOverflow annotation with the DROP strategy, so the most recent value of the stream is dropped if the downstream subscriber cannot keep up.

This is a fairly simple (in memory) example, but if we use it to interact with various messaging technologies like JMS or Apache Kafka, this can be very useful. For this, Reactive Messaging provides a Connector API to connect to external messaging systems. Some of these connectors are already provided by the vendors of these implementations.
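As a sketch of what such a connector configuration could look like, assuming the SmallRye Kafka connector: an outgoing channel like destination from the snippet above can be bound to a Kafka topic through MicroProfile Config (the topic name squares is made up for illustration):

```properties
# Bind the 'destination' channel to a Kafka topic via the SmallRye Kafka connector
mp.messaging.outgoing.destination.connector=smallrye-kafka
mp.messaging.outgoing.destination.topic=squares
```

The application code stays the same; only the configuration decides whether a channel is in-memory or backed by an external broker.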

Some implementations of this specification are:

Lightbend Alpakka
SmallRye Reactive Messaging (for example used in Quarkus)

This specification is also included in the MicroProfile implementations of:

Open Liberty
Helidon

3. Context Propagation

CompletionStage  and CompletableFuture  enable you to chain together pipelines of dependent actions, where execution of each dependent stage is triggered by the completion of the stage(s) upon which that stage depends.

But the context in which the dependent stages are executed is unpredictable: because these stages are likely executed on different threads, you cannot use, for example, a ThreadLocal to store context information as you would in a traditional blocking application.
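A minimal, plain-JDK sketch of the problem: a value stored in a ThreadLocal on the calling thread is simply not there in a dependent stage that runs on a pool thread.

```java
import java.util.concurrent.CompletableFuture;

public class ThreadLocalLoss {

    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    static String contextInAsyncStage() {
        CONTEXT.set("request-42"); // visible only on the current thread
        // The async stage runs on a ForkJoinPool worker thread,
        // where this ThreadLocal was never set.
        return CompletableFuture
                .supplyAsync(() -> String.valueOf(CONTEXT.get()))
                .join();
    }

    public static void main(String[] args) {
        // prints: Context in async stage: null
        System.out.println("Context in async stage: " + contextInAsyncStage());
    }
}
```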

MicroProfile Context Propagation provides two interfaces to solve this problem. The first, ManagedExecutor, provides managed instances of CompletableFuture that are backed by the managed executor's underlying thread pool and use its default mechanism for thread context propagation.

The second interface, ThreadContext, provides methods for individually contextualizing dependent stage actions. This gives the user more fine-grained control over the capture and propagation of thread context.

Instances of the ManagedExecutor and ThreadContext interfaces can be built using a fluent builder pattern. The builders can also be used with dependency injection.

For example, use the following code snippet to create a ManagedExecutor which propagates the application context and CDI context to all of the dependent stage actions:

ManagedExecutor managedExecutor = ManagedExecutor.builder()
       .maxAsync(5)
       .propagated(ThreadContext.APPLICATION, ThreadContext.CDI)
       .build();

In addition, to create a ThreadContext instance that propagates the application context and clears the security and transaction context, use:
ThreadContext threadContext = ThreadContext.builder()
       .propagated(ThreadContext.APPLICATION)
       .cleared(ThreadContext.SECURITY, ThreadContext.TRANSACTION)
       .unchanged(ThreadContext.ALL_REMAINING)
       .build();

Here’s an example, in which I use the ManagedExecutor to propagate a CDI Bean to the different stages:
@Inject
ManagedExecutor managedExecutor;

@Inject
MdcLogContext mdcLogContext;

@RestClient
ErgastClient ergastClient;

@GET
@Path("/drivers")
@Produces("application/json")
public CompletionStage<List<Driver>> drivers() throws ExecutionException, InterruptedException {
    logger.info("Storing context data");
    MDC.put("context", "context data");
    mdcLogContext.setContext(MDC.getCopyOfContextMap());

    return managedExecutor.supplyAsync(() -> {
            MDC.setContextMap(mdcLogContext.getContext());
            logger.info("Retrieving race");
            return ergastClient.raceResultsAsync();
        })
        .thenApplyAsync(result -> {
            MDC.setContextMap(mdcLogContext.getContext());
            logger.info("retrieving drivers from race");
            return result.thenApplyAsync(result1 -> result1.getMrData().getRaceTable().getRaces().stream()
                    .flatMap(race -> race.getResults().stream())
                    .map(RaceResult::getDriver)
                    .collect(Collectors.toList()));
        })
        .get();
}

In this example, which you can find on my GitHub, a property context with the value “context data” is stored in the Mapped Diagnostic Context (MDC), to be added to every logged line. Since the MDC is thread-local, I use an MdcLogContext bean to store all the properties of the MDC, and I retrieve these properties in the dependent stages to add them to the MDC again for use in the log statements.

This CDI bean is propagated to every stage by the injected ManagedExecutor. When the endpoint is called the following lines are logged:

INFO  executor-thread-195 Storing context data []
INFO  executor-thread-202 Retrieving race [context data]
INFO  executor-thread-204 retrieving drivers from race [context data]

You can find implementations of this specification in:

SmallRye Context Propagation (used in Quarkus)

And in the MicroProfile implementation of:

Open Liberty

4. GraphQL

GraphQL is an open-source data query and manipulation language for APIs, and a runtime for fulfilling queries with existing data.

The purpose of GraphQL is to:

  • Give clients the power to ask for exactly what they need. In a REST API the returned data is fixed and cannot be influenced by the client, even if the client does not need all the fields in the returned data; this is called over-fetching. When a client needs additional REST calls, based on the first call, to retrieve all the data it needs, this is called under-fetching. GraphQL makes it possible to return only the fields that are needed.
  • Make it easier to evolve APIs over time. GraphQL can add additional fields and capabilities to existing APIs based on the client request, without breaking the existing APIs, so they keep working for existing clients.

With a couple of annotations you can make your endpoint GraphQL-aware. For example, in the endpoint that retrieves the drivers from a race in the previous section, we can add the @GraphQLApi annotation to define a GraphQL endpoint. A @Query annotation on the method then makes the method queryable with GraphQL. This is shown in the following code snippet:

@GraphQLApi
@ApplicationScoped
@Path("/")
public class DriversPerRaceController {

   @Query
   @GET
   @Path("/drivers")
   @Produces("application/json")
   public List<Driver> drivers() {
	...
   }
}

The Driver class contains the fields givenName, familyName, dateOfBirth and nationality.
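As a sketch, such a Driver class could be a plain POJO with those four fields (the JavaBean-style getters and setters are my assumption; the actual class is in the GitHub repository):

```java
// Minimal sketch of the Driver POJO used by the GraphQL examples
public class Driver {

    private String givenName;
    private String familyName;
    private String dateOfBirth;
    private String nationality;

    public String getGivenName() { return givenName; }
    public void setGivenName(String givenName) { this.givenName = givenName; }

    public String getFamilyName() { return familyName; }
    public void setFamilyName(String familyName) { this.familyName = familyName; }

    public String getDateOfBirth() { return dateOfBirth; }
    public void setDateOfBirth(String dateOfBirth) { this.dateOfBirth = dateOfBirth; }

    public String getNationality() { return nationality; }
    public void setNationality(String nationality) { this.nationality = nationality; }
}
```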

On the client side, we can use the following query to retrieve only the name of the driver:

query drivers {
  drivers {
    givenName
    familyName
  }
}

Executing this query in the GraphQL UI at http://localhost:8080/graphql-ui/ returns only the names of the drivers. You can also use query variables in the query.

Besides querying data, it is also possible to mutate data with GraphQL using the @Mutation annotation, which makes it possible to add, update and delete data.

Given the following endpoint:

@Mutation
@POST
@Path("/addDriver")
public Driver addDriver(Driver driver) {
   // additional code to store the driver
   logger.info("driver added");
   return driver;
}

When the following statement is executed in the GraphQL UI, the driver is added to the database and the givenName and familyName are returned to the client:

mutation addDriver {
  addDriver(driver: {
      familyName: "Vaillant",
      givenName: "Michel",
      dateOfBirth: "1957-02-07",
      nationality: "French"
    }
  )
  {
    givenName
    familyName
  }
}

Implementations of the MicroProfile GraphQL specification are:

SmallRye GraphQL (used in Quarkus)

and in the MicroProfile implementations:

Open Liberty
WildFly
Helidon

Summary

This concludes this short introduction to the standalone MicroProfile specifications. These specifications are called standalone because they are not part of the core platform specification.

There are libraries available that implement these specifications and can be used separately from the core platform. Implementations of these standalone specifications can also be found in compliant MicroProfile implementations like Quarkus and Open Liberty.

The source code for the examples can be found at my GitHub repository.

The project is created using the MicroProfile Starter, which you can find at https://start.microprofile.io/, using Quarkus as the MicroProfile runtime.

Resources

Sample code

https://github.com/Misano9699/microprofile-standalone

Other resources

https://microprofile.io/
https://github.com/eclipse/microprofile-reactive-streams-operators
https://github.com/eclipse/microprofile-reactive-messaging
https://github.com/eclipse/microprofile-context-propagation
https://github.com/eclipse/microprofile-graphql
https://quarkus.io/
