Friday, December 29, 2017

Upgrading a Dell M4700 from a 500 GB HDD to a 1 TB Samsung 850 EVO V-NAND SSD

Below is a list of things encountered when upgrading a Dell Precision M4700 laptop running Windows 7 from a 500 GB HDD to a 1 TB Samsung 850 EVO V-NAND SSD.

  • Used HDClone 7 Free Edition to make a copy of the current hard disk, copied onto a Samsung T5 external SSD. HDClone very nicely copies over everything, even from a live (running) Windows 7 machine. It also creates all partitions on the external SSD; these are visible as separate drives when reconnecting the USB drive. It can even be made bootable, but I didn't need that.

  • Swapped in the SSD as shown here: https://www.youtube.com/watch?v=D6Cn3bONxEo
    Note how the SSD has to "click" into the notches of the bracket (metal frame).

  • Put in the Dell Windows 7 SP1 DVD and installed Windows. For quite a while it showed "Windows is loading files...", but in the end it got through, after about 5-10 minutes.

  • After logging in, the wireless device was not detected, so no internet. Other drivers were also not installed yet or had failed: "Ethernet controller", "Network controller", "PCI Simple Communications Controller", "SM Bus Controller", "Universal Serial Bus (USB) Controller", "Unknown device".

  • Then tried to use the Dell 'Resource Media' DVD to install the drivers. But the program that starts from it is just plain impossible to understand. See for example the screenshot below:


    The touchpad is marked as installed (I think that's what the checkbox indicates). But when I then installed any other driver, no checkbox appeared at all on the left of any of them.
    Plus, in what order should the drivers be installed? Found this post, but that seems like a lot of manual work. Plus you have to know which devices you have in your machine to know which driver matches; e.g. this post shows what driver to look for for one specific error. You should be able to find that device list on the invoice from when you ordered your M4700.

    In the end I managed to get the ethernet driver installed (filter on the word 'ethernet' on the Dell drivers page; you should find "Intel I2xx/825xx Gigabit Ethernet Network Controller Drivers").

  • With the above ethernet driver installed, the internet connection worked via a wired cable. Then used Dell's Analyze Detect Drivers option to install the correct drivers in the right order.
    All but the 'Unknown device' errors were gone in the Device Manager. Didn't dare to update the BIOS, since that was working fine before.

  • After that, about 180 Windows updates to install, and then all worked fine. The machine's score (Windows Experience Index) is now 7.3 out of a maximum of 7.9 (no idea what it was before I upgraded):



    But I do notice the difference at startup, for example: a complete Windows 7 startup from the powered-off state takes about 10-15 seconds. Not bad :)

  • And then the tedious job of installing all non-OS software began...
Lessons learned for next time:
  • HDClone is very handy
  • Export all browsers' favorites before taking out the old disk. You can recover the favorites afterwards, but not in an easily importable format.
  • The Dell Resource Media DVD is impossible to understand.
  • Keep the service tag of your M4700 ready.

Thursday, December 28, 2017

Logback DBAppender sometimes gives error on AWS Aurora: IllegalStateException: DBAppender cannot function if the JDBC driver does not support getGeneratedKeys method *and* without a specific SQL dialect

Logback DBAppender IllegalStateException


Sometimes when starting a Spring Boot application with Logback DBAppender configured for PostgreSQL or AWS Aurora in logback-spring.xml, it gives this error:

java.lang.IllegalStateException: Logback configuration error detected: ERROR in ch.qos.logback.core.joran.spi.Interpreter@22:16 - RuntimeException in Action for tag [appender] java.lang.IllegalStateException: DBAppender cannot function if the JDBC driver does not support getGeneratedKeys method *and* without a specific SQL dialect

The error can be quite confusing. The documentation says that Logback should be able to detect the dialect from the driver class.

But apparently it doesn't, sometimes. After investigating, it turns out that this error is also given when the driver can't connect to the database at all. In that case it cannot retrieve the connection's metadata either, which is what it uses to detect the dialect, and thus you get this same error too!
A confusing error message indeed.

A suggestion in some post was to specify the <sqlDialect> tag, but that is no longer needed in recent Logback versions. Indeed, it now gives these errors when putting that tag in the logback-spring.xml file, either below <password> or below <connectionSource>:

ERROR in ch.qos.logback.core.joran.spi.Interpreter@25:87 - no applicable action for [sqlDialect], current ElementPath  is [[configuration][appender][connectionSource][dataSource][sqlDialect]]
or
ERROR in ch.qos.logback.core.joran.spi.Interpreter@27:79 - no applicable action for [sqlDialect], current ElementPath  is [[configuration][appender][sqlDialect]]
To get a better error message, it's better to set up the Logback DBAppender in code instead of in logback-spring.xml. See for examples here and here.
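
For illustration, here is a minimal hedged sketch of such a programmatic setup (assuming PostgreSQL and Logback's DriverManagerConnectionSource; the URL and credentials are placeholders). A connection problem then surfaces as its own clear exception instead of the misleading dialect error:

import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.db.DBAppender;
import ch.qos.logback.core.db.DriverManagerConnectionSource;
import org.slf4j.LoggerFactory;

public class DbAppenderSetup {

    public static void attachDbAppender() {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        DriverManagerConnectionSource connectionSource = new DriverManagerConnectionSource();
        connectionSource.setContext(context);
        connectionSource.setDriverClass("org.postgresql.Driver");
        connectionSource.setUrl("jdbc:postgresql://localhost:5432/logdb"); // placeholder
        connectionSource.setUser("logger");                                // placeholder
        connectionSource.setPassword("secret");                            // placeholder
        connectionSource.start(); // fails fast and clearly if the DB is unreachable

        DBAppender appender = new DBAppender();
        appender.setContext(context);
        appender.setConnectionSource(connectionSource);
        appender.start();

        Logger root = (Logger) LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME);
        root.addAppender(appender);
    }
}

Note that DBAppender still expects the standard Logback logging tables (logging_event, logging_event_property, logging_event_exception) to exist in the database.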




Thursday, November 2, 2017

What's not so good about my new Dell XPS 15 laptop (+ a bunch of good things)

Recently I got a new laptop, again a Dell. I decided to go for a thin "ultrabook", the XPS 15 with 16 GB RAM and a 512 GB SSD. You can find a good review here. I deliberately didn't take the 4K version, since some reviews say the machine struggles a bit with that resolution. Plus some software just can't handle it, like Remote Desktop, so you have to scale down anyway.



Here is an overview of the pros and cons I found while using it.

Pros

  • Sleek design

  • Thin

  • A lot lighter than the M4700

  • Smaller power-supply

  • Fast; no problem with 3-4 IntelliJ workspaces open, over 50 Chrome tabs, DBeaver, Firefox with about 10 tabs

Cons

  • Some backlight bleeding in the bottom right corner of the screen. Most noticeable when showing a black screen; not really noticeable during daylight. Here's an example of a really bad case.

  • Crappy keyboard; the page up/down, home and end keys can only be used by pressing the Fn key. If you are a coder that's really annoying, since you use those keys a lot. I hope next time they make separate keys again.

  • Corners get scratched easily if you put it in your bag without a protective sleeve.

  • Sometimes a flicker (screen turns completely black) on the externally connected screen. Not sure yet if it's the cable.

  • The screen can't fold back fully flat.

  • For some reason the default display scaling is set to 125% right after using it for the first time.

  • The connector for the HDMI cable is on the side! And it sometimes sits a bit in the way when using a mouse.

  • The professional version of the XPS 15, named Precision 5520, should give you better-quality components. But the warranty period has been reduced from 3 years to 1 year. Does that still make it worth buying the over-500-euros-more-expensive Precision 5520? Apparently they don't dare to give a longer warranty anymore on the better-quality components...


Just got a sleeve for the XPS 15, a CushCase, ordered via Amazon. Took about 3 weeks to arrive. Fits in a regular mailbox. Fits the XPS 15 nicely; no need to really push.

Wednesday, August 16, 2017

Lessons learned - Jackson, API design, Kafka

Introduction

This blogpost describes a bunch of lessons learned during a recent project I worked on.
They are just a bunch grouped together, too small to "deserve" their own separate post :)
Items discussed are: Jackson, JSON, API design, Kafka, Git.


Lessons learned

  • Pretty print (nicely format) JSON in a Linux shell prompt:

    cat file.json | jq .

    You might have to 'apt-get install jq' first.

  • OpenOffice/LibreOffice formulas:

    To increase the year and month by one from cell H1031: =DATE(YEAR(H1031)+1; MONTH(H1031)+1; DAY(H1031))

    Count how many times a range of cells A2:A4501 has the value 1: =COUNTIF(A2:A4501; 1)

  • For monitoring a system, a split between a health-status page and performance details is handy. The first one shows issues/metrics that require immediate action; performance is for informational purposes and usually does not require immediate action.

  • API design: even if you have a simple method that just returns a date (string) for example, always return JSON (and not just a bare string value). Useful for backwards compatibility: more fields can easily be added later.
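
    A minimal hedged sketch (class and field names are made up):

    // Wrap even a single value in a small JSON object, so fields can be
    // added later without breaking existing clients.
    public class DateResponse {
        private final String date;

        public DateResponse(String date) { this.date = date; }

        public String getDate() { return date; }
    }

    // new ObjectMapper().writeValueAsString(new DateResponse("2017-08-16"))
    // -> {"date":"2017-08-16"} instead of a bare "2017-08-16" string;
    // a later version can add fields without breaking existing callers.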

  • When upgrading GitLab, it had renamed a repository named 'users' to 'users0'. Turns out 'users' is a reserved repository name in GitLab since version 8.15.

    To change your local git settings to the new users0, perform these steps to update your remote origin:

    # check current setting
    $ git remote -v
    origin  https://gitlab.local/gitlab/backend/users (fetch)
    origin  https://gitlab.local/gitlab/backend/users (push)

    # change it to the new one
    $ git remote set-url origin https://gitlab.local/gitlab/backend/users0

    # see it got changed
    $ git remote -v
    origin  https://gitlab.local/gitlab/backend/users0 (fetch)
    origin  https://gitlab.local/gitlab/backend/users0 (push)

  • Jackson JSON generating (serializing): it's probably good practice not to use @JsonInclude(JsonInclude.Include.NON_EMPTY) or NON_NULL, since that means a key will simply be absent from the JSON when its value is empty or null. That can be confusing to the caller: sometimes the key is there, sometimes not. So just leave it in, so it will be set to null. Unless it is a field that only makes sense in combination with another one, like amount and currency: if amount is null, currency (maybe) doesn't make sense either, so then it could be left out.
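
    A small hedged sketch of the difference (the DTO is hypothetical):

    import java.math.BigDecimal;
    import com.fasterxml.jackson.annotation.JsonInclude;

    // With NON_NULL, keys with null values disappear from the JSON entirely
    @JsonInclude(JsonInclude.Include.NON_NULL)
    public class PaymentDto {
        public BigDecimal amount;        // left null in this example
        public String currency = "EUR";
    }

    // new ObjectMapper().writeValueAsString(new PaymentDto())
    // -> {"currency":"EUR"}               (the amount key is gone)
    // Without the annotation:
    // -> {"amount":null,"currency":"EUR"} (the caller always sees the key)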

  • Java:

    Comparator<UserByPhoneNumber> userComparator = (o1, o2) -> o1.getCreated().compareTo(o2.getCreated());

    can be replaced in Java 8 by:

    Comparator<UserByPhoneNumber> userComparator = Comparator.comparing(UserByPhoneNumber::getCreated);

  • Kafka partitioning tips: http://blog.rocana.com/kafkas-defaultpartitioner-and-byte-arrays

  • Kafka vs RabbitMQ:

    - Kafka is optimized for producers producing lots of data (batch-oriented producers) and consumers that are usually slower than the producers.
    - Performance: RabbitMQ does about 20K msgs/s; Kafka up to 150K msgs/s.
    - Unlike other messaging systems, Kafka brokers are stateless: the consumer has to keep track of how much it has consumed (its offsets).
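
    To illustrate that last point, a hedged sketch (Java client, 2017-era API; servers, group and topic names are placeholders) of a consumer tracking its own progress by committing offsets manually:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");  // placeholder
    props.put("group.id", "example-group");            // placeholder
    props.put("enable.auto.commit", "false");          // we commit ourselves
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList("example-topic"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100); // timeout in ms
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
        }
        consumer.commitSync(); // explicitly record how far this consumer got
    }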

Friday, August 11, 2017

Java: generate random Date between now minus X months plus Y months

Introduction

This blogpost shows a Java 8+ code example of how to generate a timestamp between two points in time, specified in months relative to today ("now").

The code

This example code creates a random java.util.Date between 12 months ago from today and 1 month ahead from today; the result will be available in the variable randomDate.

import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.Date;

// Bounds are stored in static fields so the helper method below can use them
private static long beginTimeInMilliseconds;
private static long endTimeInMilliseconds;

// Lower bound: 12 months ago, as epoch milliseconds
LocalDateTime nowMinusYear = LocalDateTime.now().minusMonths(12);
ZonedDateTime nowMinusYearZdt = nowMinusYear.atZone(ZoneId.of("Europe/Paris"));
beginTimeInMilliseconds = nowMinusYearZdt.toInstant().toEpochMilli();

// Upper bound: 1 month ahead, as epoch milliseconds
LocalDateTime nowPlusMonth = LocalDateTime.now().plusMonths(1);
ZonedDateTime nowPlusMonthZdt = nowPlusMonth.atZone(ZoneId.of("Europe/Paris"));
endTimeInMilliseconds = nowPlusMonthZdt.toInstant().toEpochMilli();

System.out.println("System.currentTimeMillis() = " + System.currentTimeMillis() + ", beginTimeInMilliseconds = " + beginTimeInMilliseconds + ", endTimeInMilliseconds = " + endTimeInMilliseconds);

Date randomDate = new Date(getRandomTimeInMillisBetweenTwoDates());
...

// Picks a uniformly distributed random millisecond between the two bounds (inclusive)
private static long getRandomTimeInMillisBetweenTwoDates() {
   long diff = endTimeInMilliseconds - beginTimeInMilliseconds + 1;
   return beginTimeInMilliseconds + (long) (Math.random() * diff);
}



How do Kubernetes and its pods behave regarding SIGTERM, SIGKILL and HTTP request routing

Introduction

During a recent project we saw that HTTP requests still arrive in pods (Spring Boot MVC controllers) even though Kubernetes' kubelet told the pod to exit by sending it a SIGTERM.
Not nice, because those HTTP requests that still get routed to the (shutting down) pod will most likely fail, since the Spring Boot Java process, for example, has already closed all its connection pools.

See this post (also shown below) for an overview of the Kubernetes architecture, e.g. regarding kubelets.


Analysis

The process for Kubernetes to terminate a pod is as follows:
  1. The kubelet always sends a SIGTERM before a SIGKILL.
  2. Only when a pod does not finish within the grace period (default 30 seconds) after the SIGTERM does the kubelet send a SIGKILL.
  3. Kubernetes keeps routing traffic to a pod until its readiness probe fails, even after the pod has received a SIGTERM.
So for a pod there is always an interval between receiving the SIGTERM and the next readiness probe request for that pod. In that period requests can (and most likely will) still be routed to that pod, and even (business) logic can still be executed in the terminated pod.

This means that after the SIGTERM is sent, the readiness probe must fail as soon as possible to prevent the SIGTERMed pod from receiving more HTTP requests. But still, there will be a (small) period of time in which requests can be routed to the pod.

A solution would be to gracefully terminate the webserver within the pod's process (in this case Spring Boot's embedded webserver) immediately after receiving a SIGTERM. This way any requests still routed to the pod before the readiness probe fails are refused, i.e. no more requests are accepted.
So you would still have some failing requests getting passed on to the pod, but at least no business logic will be executed anymore.
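
One way to let the readiness probe fail as fast as possible is a shutdown flag inside the application itself. Below is a minimal hedged sketch (not from the original project; assumes Spring 4.3+ and that the Kubernetes readinessProbe points at this /ready endpoint):

import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextClosedEvent;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ReadinessController implements ApplicationListener<ContextClosedEvent> {

    private final AtomicBoolean shuttingDown = new AtomicBoolean(false);

    // ContextClosedEvent is published when the SIGTERM starts the Spring shutdown
    @Override
    public void onApplicationEvent(ContextClosedEvent event) {
        shuttingDown.set(true);
    }

    // Point the Kubernetes readinessProbe at this endpoint
    @GetMapping("/ready")
    public ResponseEntity<String> ready() {
        return shuttingDown.get()
                ? ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE).body("shutting down")
                : ResponseEntity.ok("ready");
    }
}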

This and other options/considerations are discussed here.





Wednesday, August 9, 2017

Cassandra Performance Tips - Batch inserts, Materialized views, Fallback strategy

Introduction

During a recent project we ran into multiple issues with Cassandra's performance: queries being slow or timing out on only one specific environment (though all environments should have had the same setup), inconsistently stored results, and the question of how to optimize batch inserts when using Scala.

This blogpost describes how they were solved, or how we attempted to solve them.


Setup: Cassandra running in a cluster with three nodes.

Performance-related lessons learned

  1. On certain environments of the DTAP pipeline, slow queries (taking seconds) and weird consistency results appeared. Not all 4 environments were the same, though Acceptance and Production were kept as similar as possible.

    We found as causes:

    Slow queries and timeouts: the Cassandra driver was logging at both OS and driver level.
    Inconsistently stored results: the clocks of the different clients accessing C* were not in sync, some off by minutes. Since the default in v3 of the DataStax/Cassandra driver protocol is client-side generated timestamps, you can get in trouble of course, since the write with the most recent timestamp just always wins. But implementing server-side timestamps also won't be obvious, since different C* coordinators can give timestamps that differ by milliseconds.

  2. For Gatling performance tests written in Scala, we first needed to insert 50K records into a Cassandra database, simulating users already registered to the system. To make this perform, several options were tried:

    a- Plain string-concatenated or prepared statements were taking over 5 minutes in total.
    b- Inserting as a batch (APPLY BATCH) has a limit of 50 KiB in text size. That limit is too low for us: 50K records is almost 5 MB, and splitting up was too much of a hassle.
    c- Making the calls async, as done here: https://github.com/afiskon/scala-cassandra-example
    But we were getting:

    17:20:57.275 [ERROR] [pool-1-thread-13] TestSimulation - ERROR while inserting row nr 13007, exception =
    com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /172.20.25.101:9042 (com.datastax.driver.core.exceptions.BusyPoolException: [/172.20.25.101] Pool is busy (no available connection and the queue has reached its max size 256)))
    at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:211)
    at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:46)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:275)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onFailure(RequestHandler.java:336)
    at com.google.common.util.concurrent.Futures$4.run(Futures.java:1172)
    at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
    at com.google.common.util.concurrent.Futures$ImmediateFuture.addListener(Futures.java:102)
    at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1184)
    at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1120)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution.query(RequestHandler.java:295)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:272)
    at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:115)
    at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:95)
    at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:132)
    at UserDAO.insert(UserDao.scala:58)
    ...


    Turns out it is the driver's local acquisition queue that fills up. You can increase it via poolingOptions.setMaxQueueSize, see: http://docs.datastax.com/en/developer/java-driver/3.1/manual/pooling/#acquisition-queue
    We set it to 50000 so it would be safe to just queue all 50K records. For a production environment this might not be a good idea of course; you might need to tune it to your needs.
    And we set the threads to 20 in the ExecutionContext (used by the DAO from the GitHub example above). You can set it as in this example: http://stackoverflow.com/questions/15285284/how-to-configure-a-fine-tuned-thread-pool-for-futures
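
    A hedged sketch of that pooling configuration with the DataStax Java driver (the contact point is a placeholder):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.PoolingOptions;
    import com.datastax.driver.core.Session;

    // Allow up to 50000 queued requests instead of the default 256
    PoolingOptions poolingOptions = new PoolingOptions().setMaxQueueSize(50000);

    Cluster cluster = Cluster.builder()
            .addContactPoint("172.20.25.101")   // placeholder contact point
            .withPoolingOptions(poolingOptions)
            .build();
    Session session = cluster.connect();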

  3. Increasing the CPUs from 4 to 8 did seem to improve performance: less CPU saturation.

  4. Each materialized view you add decreases insert performance by about 10% (see here).

  5. For consistency and availability when one of the nodes might be gone or unreachable due to network problems, we set up Cassandra writes such that EACH_QUORUM is tried first and, if that fails, LOCAL_QUORUM is used as a fallback strategy.
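
    A hedged sketch of this fallback with the DataStax Java driver (statement and session are assumed to be created elsewhere):

    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.exceptions.UnavailableException;
    import com.datastax.driver.core.exceptions.WriteTimeoutException;

    // Try the strict level first, fall back when e.g. another DC is unreachable
    statement.setConsistencyLevel(ConsistencyLevel.EACH_QUORUM);
    try {
        session.execute(statement);
    } catch (UnavailableException | WriteTimeoutException e) {
        statement.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
        session.execute(statement);
    }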

The articles below helped to analyse the problems further:

Tuesday, August 8, 2017

Gatling Lessons learned

Introduction

This post describes a couple of best practices when using Gatling, the Scala-based performance and load-testing tool.


Also one or two Scala related tips will be shown.

Lessons learned

  1. You can't set a header-field in the header like this in a scenario:

    .header("request-tracing-id", UUID.randomUUID().toString())

    This is because the scenario is only created once; unless you use session variables, it is all static (a function).

    To solve this one can use a feeder, like this:

    val feeder = Iterator.continually(Map("traceHeader" -> UUID.randomUUID().toString))

    And then replace the UUID.randomUUID().toString() line with:

    .header("request-tracing-id", "${traceHeader}")

  2. A Scala 2.11 example of a ternary expression (maybe not the best solution Scala-wise, but readable :)

    .value(availableBalanceDecimal, if (dto.availableBalanceDecimal.isEmpty) null else dto.availableBalanceDecimal.get)

  3. Connecting to a service with a Gatling test in one environment (Nginx, ingress, Kubernetes) for some reason did not work, while the same test could connect to the service under test correctly in another environment. Apparently it had something to do with a proxy in between, because we had to add .proxy() and then it worked:

    val httpConf = http
    .baseURL("http://172.20.33.101:30666") // Here is the root for all relative URLs
    .header(HttpHeaderNames.ContentType, HttpHeaderValues.ApplicationJson)
    .header(HttpHeaderNames.Accept, HttpHeaderValues.ApplicationJson)
    .proxy(Proxy("performance.project.com", 30666))   // note it is the *same* machine as the baseURL, but specified by name...


  4. A .check() with a .saveAs() in it will *not* happen when the earlier expression evaluates to false, or when the conversion fails.
    Kind of makes sense when it evaluates to false, but you might miss this one. Or maybe you don't even want the .is() at all, because in the example below all it means is that isFinished will only be set when the status is FINISHED; otherwise it won't be set at all.

    someText is always found in the session, but the other one, isFinished, is not.

    .check(jsonPath("$[0].status").saveAs("someText"))
    .check(jsonPath("$[0].status").is("FINISHED").saveAs("isFinished"))
    ...

    .exec(session => {
       val isFinished = session.get("isFinished").asOption[String]
       logger.debug("Generated isFinished = {}", isFinished.getOrElse("Could not find expected isFinished..."))
       session
    })

    .doIf(session =>
       (!session.get("someText").as[String].equals("FINISHED") ||
       session.get("isFinished").as[Boolean].equals(false)
    ))(
    ...


    When running at DEBUG level, the above logs:
       Session(Map(someText -> TIMED_OUT, ...)  /// So the first .saveAs() occurred
       13:45:56.983 [DEBUG] SomeSimulation - Generated isFinished = Could not find expected isFinished...


    So the second .saveAs() did not occur for isFinished at all: it is set to neither true nor false; it is not set at all!

Wednesday, April 12, 2017

Lessons learned Docker microservices architecture with Spring Boot

Introduction

During my last project, consisting of a Docker microservices architecture built with Spring Boot and using RabbitMQ as the communication channel, I learned a bunch of lessons; here's a summary of them.

Architecture

Below is a high level overview of the architecture that was used.


Docker

  • Run one process/service/application per Docker container (or put stuff in init.d, but that's not the intended use of Docker).

  • Starting background processes in the CMD causes the container to exit. So either have a script waiting at the end (e.g. tail -f /dev/null) or keep the process (i.e. the one prefixed with CMD) running in the foreground. Other useful Dockerfile tips can be found here.

  • As far as I can tell, Docker checks if the Dockerfile has changed, and if so, creates a new image (diffs only?).

  • Basic example to start the RabbitMQ docker image, as used in the build tool:

    $ docker pull 172.18.19.20/project/rabbitmq:latest
    $ docker rm -f build-server-rabbitmq
    $ # Map the RabbitMQ regular and console ports
    $ docker run -d -p 5672:5672 -p 15672:15672 --name build-server-rabbitmq 172.18.19.20/project/rabbitmq:latest

  • If there's no docker0 interface (check by running the command ifconfig), then probably there are ^M characters in the config file at /etc/default/docker/docker.config. To fix it, run dos2unix on that file.

  • Check for errors at startup of docker in /var/log/upstart/docker.log

  • If docker push <image> asks for a login (and you don't expect that), or it returns some weird HTML like "</html>", then you're probably missing the registry host in front of the image name, e.g.: 172.18.19.20:6000/projectname/some-service:latest

  • Stuff like /var/log/messages is not visible in a Docker container, but it is on its host! So look there, for example, to find out why a process is not starting or gets killed at startup without any useful logging (like we had with clamd).

  • How to remove old dangling unused docker images: docker rmi $(docker images --filter "dangling=true" -q --no-trunc)

Spring Boot

  • Some Jackson 2 dependencies were missing from the generated Spring Initializr project, which we noticed when creating unit tests. These dependencies were additionally needed in scope test:

    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.5.0</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-annotations</artifactId>
      <version>2.5.0</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.5.0</version>
    </dependency>


    Not sure anymore why these didn't get <scope>test</scope>... Guess they were also needed in some regular code... :)

  • In the Spring Boot AMQP Quick Start, the last param during binding is named .with(queueName), but that's the topic key (which is related to the binding key used when sending), so not the queue name!

  • Spring Boot Actuator's /health will check all related dependencies! So if your pom.xml has a dependency on a project which uses spring-boot-starter-amqp, /health will now check for an AMQP queue being up! So add an exclusion (e.g. the management.health.rabbit.enabled=false property) for those if you don't want that.

  • Spring Boot's default ApplicationIT probably needs a @DirtiesContext for your tests; otherwise the tests might re-use or create more beans than you think (we saw that in our message receiver test helper class).

  • @Transactional in Spring rolls back by default only for unchecked exceptions!! It's documented, but still a thing to watch out for.
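
    A hedged sketch of widening the rollback rule (method and parameters are made up):

    import java.math.BigDecimal;
    import org.springframework.transaction.annotation.Transactional;

    // By default only RuntimeException/Error trigger a rollback;
    // rollbackFor makes checked exceptions roll back too.
    @Transactional(rollbackFor = Exception.class)
    public void transferMoney(long fromId, long toId, BigDecimal amount) throws Exception {
        // ... repository calls; a checked exception now also rolls back
    }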

  • And of course: Spring's @Transactional does not work on private methods (due to proxy stuff it creates)

  • To see the transaction logging in Spring Boot, put this in application.properties:

    logging.level.org.springframework.jdbc=TRACE
    logging.level.org.springframework.transaction=TRACE


    Note that by default @Transactional just rolls back, it does not log anything, so if you don't log your runtime exceptions, you won't see much in your logs.

  • MockMvc from Spring is not really invoking from the "outside": our Spring Security context filter (for which you can use @Secured(role)) was allowing calls even though no authentication was provided. RestTemplate does seem to work from "the outside".

  • Scan order can mess up a @ControllerAdvice error handler, it seems. We sometimes had to change the order:

    Setup:
    - Controller is in: com.company.request.web.
    - General error controller is in com.company.common package.

    Had to change
    @ComponentScan(value = {"com.company.security", "com.company.common", "com.company.cassandra", "com.company.module", "com.company.request"})

    to

    @ComponentScan(value = {"com.company.security", "com.company.cassandra", "com.company.module", "com.company.request", "com.company.common"})

    Note that the general error controller has now been put last.

  • Spring Boot's footprint seems relatively big, especially for microservices. At least 500 MB or so is needed per service, so we have quite big machines for about 20 services. Maybe plain Spring (instead of Spring Boot) might be more lightweight...

Bamboo build server

  • When Bamboo gets slow, the CPU seems quite busy, and memory availability on its server seems fine, increase -Xms and -Xmx (or the related JVM memory settings). Found this out because the Bamboo Java process was sometimes running out of heap; increasing the heap also fixed the performance.

  • To have Bamboo builds fail on quality gates not met in SonarQube, install the Build Breaker plugin in Sonar; see the plugin docs and the Update Center. This FAQ says so.

Stash

  • The Stash (now called Bitbucket) API: in /rest/git/1.0/projects/{projectKey}/repos/{repositorySlug}/tags a 'slug' is just a repository name. 

Microservices with event based architecture

  • When you do microservices, IMMEDIATELY take into account during coding + reviews that multiple instances can access the database concurrently.

    This affects your queries. The most likely correct implementation for a uniqueness check on inserts:
    1- add a unique constraint
    2- run the insert
    3- catch the uniqueness exception --> you know it already exists. A solution with SELECT NOT EXISTS is not guaranteed to be unique (there is a race window between the check and the insert).
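
    A hedged sketch of that pattern with Spring's JdbcTemplate (table and column names are made up):

    import org.springframework.dao.DuplicateKeyException;
    import org.springframework.jdbc.core.JdbcTemplate;

    // Assumes a unique constraint on users.username (step 1)
    public boolean insertUser(JdbcTemplate jdbcTemplate, String username) {
        try {
            jdbcTemplate.update("INSERT INTO users (username) VALUES (?)", username); // step 2
            return true;
        } catch (DuplicateKeyException e) {
            return false; // step 3: it already exists
        }
    }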

  • Also take deleting of data (e.g. a user deleting himself) into account from the start, especially when using events and/or eventual consistency in combination with an account balance or similar. Because what if one service in the whole chain of things to execute for a delete fails? Does the user still have some money left in his/her account then? In short: take care of all of CRUD.

  • Multiple services are sending the same event? That can indicate two services are doing the same thing --> probably not good.

  • Microservices advantages:

    - Forces you to think better about where to put stuff, compared to a monolith where you are more often tempted to "just do a quick fix".
    - Language independence for the service implementations: choose the best language for the job.

    Disadvantages:
    - more time needed for design
    - eventual consistency is quite tough to understand & work with, also conceptually
    - infrastructure is more complex including all communication between services

    More cons can be found here.

Tomcat

  • Limiting the maximum size of what can be posted to a servlet is not as easy as it seems for REST services:

    - maxPostSize in Tomcat is enforced only for a specific content type: Tomcat only enforces that limit if the content type is application/x-www-form-urlencoded.

    - And the three XML options below are for multipart requests only:

    <multipart-config>
      <!-- 52MB max -->
      <max-file-size>52428800</max-file-size>
      <max-request-size>52428800</max-request-size>
      <file-size-threshold>0</file-size-threshold>
    </multipart-config>


    So those won't work for uploading just a byte[]. The only solution is to check the limit you want to allow in the servlet itself (e.g. a Spring @Controller).
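
    A hedged sketch of such an in-controller check (path and limit are illustrative):

    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class UploadController {

        private static final int MAX_UPLOAD_BYTES = 52428800; // 50 MiB

        @PostMapping("/upload")
        public ResponseEntity<Void> upload(@RequestBody byte[] body) {
            if (body.length > MAX_UPLOAD_BYTES) {
                return ResponseEntity.status(HttpStatus.PAYLOAD_TOO_LARGE).build(); // 413
            }
            // ... process the upload
            return ResponseEntity.ok().build();
        }
    }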

  • maxThreads (worker threads) seems to be unlimited by default or something; 50 seemed to perform better.

Security

  • To securely generate a random number: SecureRandom randomGenerator = SecureRandom.getInstance("NativePRNG");

  • Good explanation of secure use of a salt to use for hashing can be found here

Cassandra

  • Unique constraints are not possible in Cassandra, so there you will even have to implement uniqueness in the business logic (and make it eventually consistent).

  • CassandraOperations query for one field:

    Select select = QueryBuilder.select(MultiplePaymentRequestRequesterEntityKey.ID).from(MultiplePaymentRequestRequesterEntity.TABLE_NAME);
    select.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
    select.where(QueryBuilder.eq(MultiplePaymentRequestRequesterEntityKey.REQUESTER, requester));
    return cassandraTemplate.queryForList(select, UUID.class);


    See also here.

  • Note: the two keys below don't seem to get picked up by Cassandra in Spring Data Cassandra version 1.1.4.RELEASE:

    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-cassandra</artifactId>

    @PrimaryKeyColumn(name = OTHER_USER_ID, ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    @CassandraType(type = DataType.Name.UUID)
    private UUID meUserId;

    @PrimaryKeyColumn(name = ME_USER_ID, ordinal = 1, type = PrimaryKeyType.CLUSTERED)
    @CassandraType(type = DataType.Name.UUID)
    private UUID meId;

    This *does* get picked up: put it into a separate class:

    @Data
    @AllArgsConstructor
    @PrimaryKeyClass
    public class HistoryKey implements Serializable {

      @PrimaryKeyColumn(name = HistoryEntity.ME_USER_ID, ordinal = 0, type = PrimaryKeyType.PARTITIONED)
      @CassandraType(type = DataType.Name.UUID)
      private UUID meUserId;

      @PrimaryKeyColumn(name = HistoryEntity.OTHER_USER_ID, ordinal = 1, type = PrimaryKeyType.PARTITIONED)
      @CassandraType(type = DataType.Name.UUID)
      private UUID otherUserId;

      @PrimaryKeyColumn(name = HistoryEntity.CREATED, ordinal = 2, type = PrimaryKeyType.CLUSTERED, ordering = Ordering.DESCENDING)
      private Date created;

    }

  • Don't use Cassandra for all types of use cases. An RDBMS still has its value, e.g. for ACID requirements; Cassandra is eventually consistent.

Miscellaneous

  • Use dig for DNS resolving problems

  • Use pgAdmin III for PostgreSQL GUI

  • To stop SonarQube complaining about unused private fields when using Lombok's @Data annotation: add @SuppressWarnings("PMD.UnusedPrivateField") to each of those classes.

  • Managed to avoid needing transactions or XA transactions for message publishing, message reading, db storing and message sending, by using the confirm + ack mechanism and by allowing a message to be read again; the DB then sees: oh, already stored (or just does an upsert).
    So, when processing a message from the queue:
    1- store in db
    2- send message on queue
    3- only then ack back to the queue that the read was successful
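
    A hedged sketch with Spring AMQP's manual ack (listener container wiring omitted; steps 1 and 2 are left as comments):

    import com.rabbitmq.client.Channel;
    import org.springframework.amqp.core.Message;
    // note: in older Spring AMQP versions this interface lives in
    // org.springframework.amqp.rabbit.core instead
    import org.springframework.amqp.rabbit.listener.api.ChannelAwareMessageListener;

    public class PaymentMessageListener implements ChannelAwareMessageListener {

        @Override
        public void onMessage(Message message, Channel channel) throws Exception {
            // 1- store in db (as an upsert, so a redelivered message is harmless)
            // 2- send the follow-up message on the queue (with publisher confirms)
            // 3- only then ack; if we crash before this line, the broker redelivers
            channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
        }
    }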

  • Performance: instantiate the Jackson 2 ObjectMapper once, as a static field, not in each call, so:
    private static final ObjectMapper mapper = new ObjectMapper();
  • JavaScript: when an exception occurs in a callback and it is not handled, processing just ends. Promises have better error handling.

  • clamd would not start correctly; it would try to start but then show 'Killed' when started via the command line. Turns out it ran out of memory when starting up. Though we had enough RAM (16 GB total, 3 GB free), it turns out clamd needs swap configured!

  • Linux bash shell script to loop through projects for tagging, including projects with spaces in their name:

    PROJECTS="
      project1
      project space2
    ";
    IFS=$'\n'
    for PROJECT in $PROJECTS
    do
      TRIM_LEADING_SPACE_PROJECT="$(echo -e "${PROJECT}" | sed -e 's/^[[:space:]]*//')"
      echo "Cloning '$TRIM_LEADING_SPACE_PROJECT'"
      git clone --depth=1 "http://$USER:$GITPASSWD@github.com/projects/$TRIM_LEADING_SPACE_PROJECT.git"
    done

  • OpenVPN on Windows 10: sometimes it hangs on "Connecting..." and doesn't show the popup to enter username/password. Go to View logs; when you see "Enter management password" in the logs, you have to kill the OpenVPN Daemon under the Processes tab (Windows Task Manager). The service is stopped when exiting the app, but that's not enough!

  • JavaScript/Node.js: log every call that comes in:

    app.use(function (req, res, next) {
      console.log('Incoming request = ' + new Date(), req.method, req.url);
      logger.debug('log at debug level');
      next()
    })

  • If your mouse suddenly stops working in your VirtualBox guest, kill the process in the guest machine mentioned in comment 5 here. After that the mouse works again in your vbox guest.

  • Pin Firefox to version 45 for Selenium driver tests:

    sudo apt-get install -y firefox=45.0.2+build1-0ubuntu1
    sudo apt-mark hold firefox

  • Setting the cookie attribute Secure (indicating the cookie should only be sent over HTTPS) can be observed by using curl to request the URL(s) that should send that cookie plus the new attribute, even when using plain HTTP. See also my previous post.

    But when using a browser over plain HTTP, you probably won't see the secure cookie appear in the cookie store. This is (probably) because the browser knows not to store it in that case, since HTTP is being used.

  • Idempotency within services is key for resilience and for being able to resend an event or perform an API call again.

Wednesday, January 18, 2017

OpenVPN how to route all IPV4 traffic through the OpenVPN tunnel

Introduction

Originally I was connected from a Windows 10 machine via OpenVPN to a network (segment?) for "our" project, and I could access all servers and websites related to it. But when switching to another project (using the same OpenVPN settings) I could only access the new project's servers when on the premises of that project. At home, or from any other place, I could not get to the servers, e.g. Jenkins. The error shown in Chrome was "This site can't be reached". See the screenshot below for the exact error:



But I could get to the microservices pods directly by IP address, e.g. 172.18.33.xyz (the xyz parts in the example IP addresses below are just obfuscation and not all the same). So quite strange.

The administrator of the OpenVPN server didn't know how to fix the problem either. The suggestion was to make sure "to route all IPv4 traffic through the VPN". That made me search the interwebs, and I found the solution below to work, without having to change any server settings. (I did not even have access to those server settings.)

Analyzing the problem

A) Trying the website with the hostname:
C:\Users\moi>tracert website.eu
Tracing route to website.eu [183.45.163.xyz] over a maximum of 30 hops:
  1     1 ms     1 ms     1 ms  MODEM [192.169.178.x]
  2    20 ms    19 ms    20 ms  d13.xs4all.com [195.109.5.xyz]
  3    22 ms    22 ms    22 ms  3d13.xs4all.com [195.109.7.xyz]
...


B) Trying the well-known google gateway:
C:\Users\moi>tracert 8.8.8.8
Tracing route to google-public-dns-a.google.com [8.8.8.8] over a maximum of 30 hops:
  1     1 ms     1 ms     1 ms  MODEM [192.169.178.x]
  2    21 ms    20 ms    21 ms  d12.xs4all.com [195.109.5.xyz]
...

Hmm, so the route goes via the same initial gateway for both the external IP and the hostname, so not via the VPN.

C) Trying with the IP that works (note not the IP for the hostname from above!):
C:\Users\moi>tracert 172.18.33.xyz
Tracing route to 172.18.33.xyz over a maximum of 30 hops
  1    97 ms    21 ms    20 ms  192.169.200.xyz
  2    45 ms    98 ms    29 ms  172.16.11.xyz
  3   130 ms    65 ms    68 ms  172.16.11.xyz
...

As you can see, the first entrypoint gateway here is a different one: this turns out to be the VPN gateway, so the gateway used in A) and B) was most likely the wrong one.

The solution

The solution was to add this to the .ovpn OpenVPN configuration file:

route-method exe
route-delay 2
redirect-gateway def1

For me, only the last line (redirect-gateway def1) was sufficient, but for others the other two lines had to be added too.

D) After adding the settings, you can see the IP of the gateway changed to what turns out to be the correct one:
C:\Users\moi>tracert website.eu
Tracing route to website.eu [183.45.163.yyy] over a maximum of 30 hops:
  1   143 ms    31 ms    21 ms  192.169.200.xyz
  2    21 ms    20 ms    21 ms  static.services.de [88.20.160.xyz]
  3    21 ms    21 ms    25 ms  10.31.17.xyz
  4    25 ms    21 ms    91 ms  10.31.17.xyz
...

References used:
- http://superuser.com/questions/120069/routing-all-traffic-through-openvpn-tunnel
- http://askubuntu.com/questions/665394/how-to-route-all-traffic-through-openvpn-using-network-manager