Using CompletableFuture with RestTemplate in Spring Boot, Part 2
CompletableFuture with RestTemplate
In the previous article we learned how to use RestTemplate to call APIs in sequence, compose their results, and write the response to the client. We will now try to optimize this by making these two requests run in parallel and returning the combined response.
We will reuse the same UserProfileSupplier and UserAddressSupplier from the previous post and make some adjustments to the caller of these two API calls instead (a sketch of what one of these suppliers looks like follows the controller below). Here is how I have done it:
@RestController
@RequestMapping(value = "/compose")
@AllArgsConstructor
public class ApiCompositionController {
    private final RestTemplate restTemplate;

    @GetMapping(value = "/parallel")
    public CompletableFuture<Map<String, Object>> parallel() {
        // Kick off both downstream calls asynchronously; with no executor given,
        // supplyAsync runs them on ForkJoinPool.commonPool()
        CompletableFuture<Optional<UserProfileResponse>> upFuture = CompletableFuture
                .supplyAsync(new UserProfileSupplier(restTemplate));
        CompletableFuture<Optional<UserAddressResponse>> uaFuture = CompletableFuture
                .supplyAsync(new UserAddressSupplier(restTemplate));
        // Combine both results into a single response map once both calls have completed
        return upFuture.thenCombine(uaFuture, (userProfileResponse, userAddressResponse)
                -> Map.of("status", "parallel",
                        "profile", userProfileResponse.map(UserProfileResponse::getProfile).orElse(new UserProfile()),
                        "address", userAddressResponse.map(UserAddressResponse::getAddress).orElse(new UserAddress())
                ));
    }
}
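For readers who skipped part 1: each supplier is just a java.util.function.Supplier that wraps one blocking RestTemplate call and hands back an Optional. Below is a minimal sketch of what UserProfileSupplier might look like; the downstream URL and the error handling are assumptions for illustration, and UserProfileResponse is the DTO from the previous post. The real implementation lives in the linked repository.

import java.util.Optional;
import java.util.function.Supplier;

import org.springframework.web.client.RestClientException;
import org.springframework.web.client.RestTemplate;

// Hypothetical sketch, not the exact code from part 1
public class UserProfileSupplier implements Supplier<Optional<UserProfileResponse>> {

    private final RestTemplate restTemplate;

    public UserProfileSupplier(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @Override
    public Optional<UserProfileResponse> get() {
        try {
            // Blocking HTTP call; the URL below is assumed for illustration
            return Optional.ofNullable(
                    restTemplate.getForObject("http://localhost:8081/user/profile", UserProfileResponse.class));
        } catch (RestClientException e) {
            // In this sketch a failed call simply yields an empty Optional
            return Optional.empty();
        }
    }
}

UserAddressSupplier follows the same pattern with UserAddressResponse and its own downstream URL.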
Let's execute the ab tool again to see if we have a better result:
ashish@ashish:~$ ab -n 1000 -c 100 http://localhost:8080/compose/parallel

Server Software:
Server Hostname:        localhost
Server Port:            8080

Document Path:          /compose/parallel
Document Length:        160 bytes

Concurrency Level:      100
Time taken for tests:   183.510 seconds
Complete requests:      1000
Failed requests:        154
   (Connect: 0, Receive: 0, Length: 154, Exceptions: 0)
Non-2xx responses:      154
Total transferred:      534346 bytes
HTML transferred:       429346 bytes
Requests per second:    5.45 [#/sec] (mean)
Time per request:       18350.980 [ms] (mean)
Time per request:       183.510 [ms] (mean, across all concurrent requests)
Transfer rate:          2.84 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0     0    0.5      0      2
Processing:  1011 17103 9616.9  17028  31001
Waiting:     1011 17103 9617.0  17028  31001
Total:       1014 17103 9616.8  17028  31001

Percentage of the requests served within a certain time (ms)
  50%  17028
  66%  23035
  75%  26047
  80%  28054
  90%  30841
  95%  30907
  98%  31000
  99%  31000
 100%  31001 (longest request)
Terrible!
This is not what we were looking for. Note the 154 failed (non-2xx) requests and processing times that climb all the way to ~31 seconds. Let's take a closer look at what went wrong and check whether there is anything to improve.
The value of ForkJoinPool.commonPool().getPoolSize() stayed pinned at the same small number for the whole run. It turns out the common pool's parallelism defaults to Runtime.getRuntime().availableProcessors() - 1, so on this 12-core machine the tasks are running with at most 11 workers assigned to them, and each worker sits blocked for the entire duration of its RestTemplate call.
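A quick way to check these defaults on your own machine is a throwaway main (not part of the sample project):

import java.util.concurrent.ForkJoinPool;

public class CommonPoolCheck {
    public static void main(String[] args) {
        // Number of CPUs the JVM sees
        System.out.println("Available processors   : " + Runtime.getRuntime().availableProcessors());
        // Default parallelism of the common pool: availableProcessors() - 1
        System.out.println("Common pool parallelism: " + ForkJoinPool.commonPool().getParallelism());
    }
}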
There is also an overload of CompletableFuture.supplyAsync that takes an Executor as its second argument. Let's create our own thread pool executor, pass it in, and check whether we get a better result.
Executor code:
// A generously sized fixed pool of daemon threads, since every task blocks on a RestTemplate call
private final ExecutorService executorService = Executors.newFixedThreadPool(1024, r -> {
    final Thread thread = new Thread(r);
    thread.setDaemon(true);
    thread.setName("Exe-Thread: " + thread.getId() + " - " + thread.getThreadGroup());
    return thread;
});
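The pool is deliberately oversized because every task spends nearly all of its time blocked on a RestTemplate call; with 100 concurrent clients and two downstream calls per request, a couple of hundred threads can easily be busy at once. If you would rather let Spring manage the pool's lifecycle, one option (my own variation, not how the sample project is written) is to register it as a bean and hand it to the controller through the @AllArgsConstructor constructor:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ExecutorConfig {

    // destroyMethod = "shutdown" lets the container stop the pool when the application shuts down
    @Bean(destroyMethod = "shutdown")
    public ExecutorService executorService() {
        return Executors.newFixedThreadPool(1024, r -> {
            Thread thread = new Thread(r);
            thread.setDaemon(true);
            thread.setName("Exe-Thread: " + thread.getId() + " - " + thread.getThreadGroup());
            return thread;
        });
    }
}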
And the CompletableFuture calls will now look like this (the thenCombine step stays exactly the same):
CompletableFuture<Optional<UserProfileResponse>> upFuture = CompletableFuture
        .supplyAsync(new UserProfileSupplier(restTemplate), executorService);
CompletableFuture<Optional<UserAddressResponse>> uaFuture = CompletableFuture
        .supplyAsync(new UserAddressSupplier(restTemplate), executorService);
Let's run the ab tool again to test:
ab -n 1000 -c 100 http://localhost:8080/compose/parallel

Server Software:
Server Hostname:        localhost
Server Port:            8080

Document Path:          /compose/parallel
Document Length:        160 bytes

Concurrency Level:      100
Time taken for tests:   11.367 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      265000 bytes
HTML transferred:       160000 bytes
Requests per second:    87.97 [#/sec] (mean)
Time per request:       1136.692 [ms] (mean)
Time per request:       11.367 [ms] (mean, across all concurrent requests)
Transfer rate:          22.77 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0     0    1.0      0      5
Processing:  1001  1017   33.6   1003   1187
Waiting:     1001  1017   33.5   1003   1187
Total:       1001  1018   34.2   1003   1191

Percentage of the requests served within a certain time (ms)
  50%   1003
  66%   1007
  75%   1011
  80%   1016
  90%   1059
  95%   1114
  98%   1125
  99%   1161
 100%   1191 (longest request)
Let's compare this with the previous results. We now have:
- doubled the transfer rate
- ~88 requests per second
- each request completing in roughly 1 second

That last point is exactly what we were after: each downstream call takes about a second, and since the two calls now run in parallel the client waits for the slower of the two rather than for their sum.
Source code
https://gitlab.com/spring-boot-cloud-samples/completable-future-rest-template
In the next post, we will examine what is happening inside the application using debugging and monitoring tools like Grafana and VisualVM.
Have ☕ till then!