Most of us like to write and read well-written code: code that stays understandable even when we blur the names of methods and identifiers, leaving only the types. In my opinion, though, there is one more excellent thing: reasonably “typed prose”, or code that reads like a story.

Using the example of a simple counter that greets everyone who increments its value, I’ll show you how named parameters can bring us closer to that goal.

I chose Swift here because it is similar enough to other popular programming languages that it should not require additional explanation.

Our counter needs a method that takes two parameters: the number by which we want to increase the counter value and the name of the person to be greeted. Let’s forget for a moment about the single responsibility principle, in favour of simplicity and clarity of the examples. The first implementation that comes to my mind looks like this:
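A minimal sketch of such a counter in Swift (the class name Counter, the value property and the greeting text are illustrative assumptions):

```swift
class Counter {
    private(set) var value = 0

    // Increments the counter and greets the given person.
    func increment(amount: Int, name: String) {
        value += amount
        print("Hello, \(name)!")
    }
}
```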

Everything seems well named and legible; it’s hard to get lost in the implementation. Now let’s take a look at a usage example:
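A call site for that sketch might look as follows (the counter definition is repeated so the snippet is self-contained; all names are assumptions):

```swift
class Counter {
    private(set) var value = 0

    func increment(amount: Int, name: String) {
        value += amount
        print("Hello, \(name)!")
    }
}

let counter = Counter()
counter.increment(amount: 5, name: "Marcin")  // prints "Hello, Marcin!"
```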

Unfortunately, this code is far from the perfect example I mentioned earlier. The name of the method and its first argument are rather unambiguous, but the second parameter can be a puzzle for someone who is not familiar with the implementation.

Let’s change the names of the parameters and make our API read like a story:
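One way to get there, sketched under the same assumptions, is to use the words of the sentence themselves as the parameter names:

```swift
class Counter {
    private(set) var value = 0

    func increment(by: Int, andGreet: String) {
        value += by
        print("Hello, \(andGreet)!")
    }
}

let counter = Counter()
counter.increment(by: 5, andGreet: "Marcin")  // reads: "increment by 5 and greet Marcin"
```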

Now, at first glance, you can see the purpose of the second parameter, and the name of the first one has become more concise. Read from left to right, the code forms the sentence: “increment this counter by 5 and greet Marcin”.

An ideal solution? Not really. Let’s take a look at the method implementation:
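Inside the method, the same names are a poor fit. A sketch of the body, using the same illustrative Counter class:

```swift
class Counter {
    private(set) var value = 0

    func increment(by: Int, andGreet: String) {
        // "by" and "andGreet" read nicely at the call site,
        // but as local variables they are cryptic:
        value += by
        print("Hello, \(andGreet)!")
    }
}
```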

It’s not so good here: we have sacrificed the readability and comprehensibility of the implementation for a nice API.

In most programming languages we are forced to choose between the readability of the interface and the readability of the implementation. There is one more popular solution, a hybrid of both:
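One plausible shape of such a hybrid (written here in Swift with the labels suppressed via underscores, to mimic languages that have none) puts part of the prose into the method name and keeps descriptive local names:

```swift
class Counter {
    private(set) var value = 0

    // The method name carries the sentence; the locals stay readable.
    func incrementByAndGreet(_ amount: Int, _ name: String) {
        value += amount
        print("Hello, \(name)!")
    }
}

let counter = Counter()
counter.incrementByAndGreet(5, "Marcin")
```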

However, it’s still far from “concise prose”.

Now comes the moment when Swift (and Objective-C) ride in on a unicorn with their optional external parameter names (better known as argument labels).

What is this?

Each function parameter has both an argument label and a parameter name. The argument label is used when calling the function; each argument is written in the function call with its argument label before it. The parameter name is used in the implementation of the function. By default, parameters use their parameter name as their argument label. [source]

Let’s take a look at how this works in practice:
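With an external argument label in front of each local parameter name, the counter might look like this (names are illustrative):

```swift
class Counter {
    private(set) var value = 0

    // "by" and "andGreet" are the argument labels used at the call site;
    // "amount" and "name" are the parameter names used in the body.
    func increment(by amount: Int, andGreet name: String) {
        value += amount
        print("Hello, \(name)!")
    }
}

let counter = Counter()
counter.increment(by: 5, andGreet: "Marcin")  // reads like a sentence
```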

Quite good! We can have our cake and eat it too: by choosing labels that form a sentence, we don’t sacrifice the readability of the implementation.

Interestingly, because argument labels, if defined, are always required when calling the function, it is possible to overload methods that have the same parameter types:
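A sketch of such an overload: both methods take a single Int, and only the labels tell them apart.

```swift
class Counter {
    private(set) var value = 0

    func increment(by amount: Int) {
        value += amount
    }

    func increment(to newValue: Int) {
        value = newValue
    }
}

let counter = Counter()
counter.increment(by: 5)   // value is now 5
counter.increment(to: 10)  // value is now 10
```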

The version of the same code in Objective-C would look as follows:

Declaration of the interface (the header):
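A sketch of such a header (the class name and the NSInteger parameter type are my assumptions; the original may have used a plain int):

```objectivec
// Counter.h
#import <Foundation/Foundation.h>

@interface Counter : NSObject

// "incrementBy" doubles as the method name and the first argument's label.
- (void)incrementBy:(NSInteger)amount andGreet:(NSString *)name;

@end
```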

A quick syntax explanation:

- at the beginning of the method means that this is an instance method; + would mean a class method (Objective-C’s equivalent of a static method) here.

(void) is the return type, followed by the name of the method, which is also the label of the first parameter; then, after the colon, we give the type of the parameter in brackets (an integer type), and finally the local parameter name (here amount).

Then come repetitions of the three previous elements (e.g. argumentLabel:(Type)localName), declaring the remaining parameters.

Implementation:
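A sketch of the matching implementation file, under the same assumptions:

```objectivec
// Counter.m
#import "Counter.h"

@implementation Counter {
    NSInteger _value;  // backing storage for the counter
}

- (void)incrementBy:(NSInteger)amount andGreet:(NSString *)name {
    _value += amount;
    NSLog(@"Hello, %@!", name);
}

@end
```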

Here, the method body looks almost identical to the Swift example.

And a use case:
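Calling the method, assuming a Counter class with the incrementBy:andGreet: method (names are illustrative):

```objectivec
Counter *counter = [[Counter alloc] init];
[counter incrementBy:5 andGreet:@"Marcin"];  // logs "Hello, Marcin!"
```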

(In Objective-C, [object method:value anotherParameter:anotherValue] is the equivalent of object.method(value, anotherParameter = anotherValue) in other languages.)

Every programming language I have learned for fun and brainz has had at least one feature that I could not find in any other language. In the case of Objective-C it was not only the controversial syntax, but also the argument labels, which were later adopted by Swift.

If you’re not psyched about code that can be read as a narrative, you can still feel jealous of the possibility of better method overloading. If the creators of Scala had adopted this feature, the magnet pattern would probably never have been invented.

Bartosz Andrzejczak — Software Wizard

This month I learned

Let’s travel back in time. It’s 2016 again and you’re googling “container orchestration”. You stumble upon an article titled “Docker Swarm, Kubernetes or DC/OS. Which one to choose?”. What was the conclusion back then? Depending on the author it might have been biased one way or another, but the general takeaway was: “Choose whatever suits you best. There’s no clear winner in that race.”

Now we’re back in 2018 and we might google this question again. Other than some general articles about the concept, you’ll find no mention of DC/OS or Swarm. Kubernetes has conquered the market. Why is that, though?

This month I learned why Docker Swarm is no longer a valid competitor to Kubernetes.

Let’s imagine a small Swarm-based cluster on AWS: one manager, two workers, an extremely simple setup. There are a couple of services running, each with a single replica.

In fact, they have been running for at least a couple of days now.

In the metrics, we’ve observed that the traffic has been steadily rising and one of the services has become our bottleneck. Obviously, all the services are stateless and just waiting to be scaled up, so we do exactly that, using the docker service scale command. Something’s not right, though: the second replica cannot start for some reason.

A quick look at the output of docker service ps tells us that the image is no longer available. How come? We pushed the image into our private repository on AWS ECR and no one has touched it since. A repository listing confirms that the image is still there. Going through the logs on the instance where Swarm tried to place the container, we can spot the culprit:

msg="pulling image failed" error="pull access denied for somerepo/someimage, repository does not exist or may require 'docker login'"

Aha! So there’s no authentication on the instance where the container was placed. We’re using the AWS credential helper, but for the sake of it let’s call docker login manually too. The login is successful, but the container is still not starting. Maybe we should try the manual login on the manager instead? No, that isn’t helping either. The problem is that obtaining credentials from ~/.docker/config.json is done in the Docker client. The authentication information is then sent to the daemon, so it can download the image from a private repository.

Unfortunately, Docker Swarm doesn’t contact the client in any way when placing a container on a node. It goes straight to the daemon, which knows nothing about the credentials.

Fair enough. In Docker Swarm, when creating a service, authentication isn’t fully automatic either: we have to pass the --with-registry-auth parameter to docker service create, so that it obtains the authentication token on a manager node and passes it to the instance where the container finally lands. That parameter isn’t available on the docker service scale command, though. It just isn’t. Don’t worry just yet, as docker service scale is really just a shortcut for docker service update --replicas, which does accept the --with-registry-auth parameter. Works like a charm!
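The difference at the command line might look like this (the service name my-service is a placeholder):

```shell
# Scaling has no authentication flag, so a replica placed on a node
# without registry credentials gets stuck with "pulling image failed":
docker service scale my-service=2

# The long form of the same operation does accept the flag and sends
# a fresh registry token along to the node placing the container:
docker service update --replicas 2 --with-registry-auth my-service
```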

So what’s the big deal? One faulty command doesn’t render the whole tool unusable. Sadly, this little quirk has very far-reaching consequences. As the token for accessing a private repository is obtained during service creation or modification, it can and will expire. Probably when you’re expecting it the least, like during a big demo for a promising prospect.

What happens if an instance with a few of your containers dies after a few days of uptime? After all, these things happen all the time on Amazon. Happily, Docker Swarm will recognize that the number of running containers doesn’t match the number of replicas configured for the services, and it will place those containers on other instances. The mechanism is the same as for container scaling. Can you see the problem now?

It’s 3 a.m., everyone on your team is sound asleep, and the application dies, because two of your machines in different availability zones just went down and took a crucial service with them. A 503 status code won’t satisfy your users from other timezones. This is a nightmare come true. Docker Swarm will try to put those containers on all the remaining instances and fail every time, as the image is “not found”.

What can you do then? There’s an easy way and a hard way, depending on how you look at it. Neither of them is perfect (or, in my opinion, even good enough), though.

In terms of ease of setup, the easy way requires you to set up some kind of alert on Amazon that will wake you up at any time of day or night if one of your instances gets terminated. If that happens, you’ll need to get to a computer as fast as you can and make sure all the services are up and running. Pretty painful.

The hard way requires more preparation, but in theory it shouldn’t interrupt your sleep. You can write a script, running on every instance in your cluster, that reads the Docker Swarm logs; whenever a container cannot start because the image isn’t found, it sends a message to the Swarm manager, which in turn calls docker service update --with-registry-auth to refresh the authentication information on that particular service. It’s a dirty workaround, but it might just be enough. For now.
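A very rough sketch of what such a watchdog could look like (the log source, the matched message, the ssh transport and the service name are all assumptions, not a recipe):

```shell
#!/bin/sh
# Runs on every worker: watch the Docker daemon logs for failed pulls
# and ask the manager to refresh the service's registry credentials.
journalctl -u docker -f --no-pager |
grep --line-buffered 'pulling image failed' |
while read -r _line; do
  # How you reach the manager is up to you; plain ssh is shown as an assumption.
  ssh manager-node 'docker service update --with-registry-auth my-service'
done
```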

There’s actually a third way you could consider: using either a public repository, or a private but self-hosted one. Again, far from perfect and not something I would recommend.

This is a crucial feature. This is one of the main things that you want out of your container orchestration software. And it’s broken in Docker Swarm. Beware of that.

This isn’t anything new to the maintainers. It’s a known issue, created in early 2017, almost two years ago, and it doesn’t look like it will be resolved anytime soon, if ever. If you want to follow it on GitHub, here’s the link.

Marta Mielcarek — UX/UI designer

This month I learned

Recently, while working on one of our projects, we encountered a problem with new functionalities: they kept piling up, significantly extending the development process, as changing priorities introduced chaos into the application. Our solution to these emerging problems was a short, intense workshop that would help us implement the new functionalities and improve the current application structure.

The plan of our 4-hour workshop went as follows:

Analyzing the current state of the project (0.5h)

Giving the context to the elements being implemented (0.5h)

Writing out all ideas and determining the difficulty of their implementation (1.5h)

Customer engagement (1h)

Workshop summary (0.5h)

Analyzing the current state of the project (0.5h)

On a whiteboard, we put up printed screenshots of what we had managed to implement so far. We laid out paths showing the transitions between individual steps in the application. Thanks to this, we could easily find elements that mislead the user or were not planned in a logical manner. Finally, we commented on each element and set things up for further discussion.

Giving the context to the elements being implemented (0.5h)

We determined what new elements could be implemented in the application. We used “user stories”: short explanations describing how users use the application. Each story answers three questions: who am I, what do I want to do, and why do I want to do it. Ten user stories were enough to clearly define what the newly developed elements of the application are designed for.

Writing out all ideas and determining their difficulty (1.5h)

We wrote out all the ideas that came to our minds during the workshop on post-its and put them on the whiteboard, to discuss them and reject the less important ones. Some ideas required preliminary sketches to give a better view of the subject.

Customer engagement (1h)

Once we had an established vision for implementing the new elements of the application, we had to make sure that the chosen direction was right. We asked two people who were not involved in the project for a short interview. Not all suggestions could be implemented, as some required a lot of developers’ work, but the interviews made it possible to confirm that our assumptions were correct.

Workshop summary (0.5h)

The workshop summary allowed us to wrap up all the information. We prepared a report with an initial schedule for the implementation of the new functions.

The workshop helped us fight the information chaos which, at a certain moment, had appeared in our project. Discussing and sketching ideas, confronting them with the vision of a potential client, and setting a work schedule allowed us to regain control over the project.

October community snippets from us!

During “The most enthusiastic conference on Java”, Jacek pondered the possible directions in which the JDK’s Reactive Streams support may go in the future, while giving a talk on Reactive Streams in Java 9+.

Jasiek and Łukasz attended the second blockchain conference in Geneva where experts, developers, entrepreneurs and lawyers shared their way of thinking concerning blockchain technology development.

We also took the chance to invite you to Blockchain Fiesta in Cracow! Hope to see you there on the 16th of November!

Jasiek spoke at the seminar hosted by prof. Andrzej Blikle at the Warsaw University of Technology Business School. The invited guests talked about how teal companies manage processes and knowledge.

Doobie is a pure functional JDBC layer for Scala and Cats.

It is not an ORM, nor is it a relational algebra; it simply provides a principled way to construct programs (and higher-level libraries) that use JDBC.

During his presentation, Michał talked about what Doobie is and how it can help us communicate with an SQL database, especially if we already have some experience in FP. He used a lot of live coding, with examples in Scala.

We became a member of newly established Software Development Association Poland that promotes and supports the growth and integration of Polish software companies.

Ida spent a few days in Prague, attending GeeCON conference. GeeCON is a JVM event where everyone can find something interesting. She especially enjoyed two soft presentations:

Working Remotely: Dream vs Reality by Peter Van de Voorde. Peter described how he manages being a father and a husband while working from home at Atlassian.

Introduce girls to programming and reduce the gender gap by Ansgar Brauner. Ansgar explained how important it is to learn programming at a very early age, not only for girls but for every kid.

This time, the Tricity Java community talked about blockchain for developers, and Szimano presented his experience with Hyperledger Composer in a live coding session.

Adam reflected on how we use annotations in Java and offered some alternatives.

Voxxed Days Bristol is a conference for the South West software developer community. Michał reflected on a machine learning problem and how it can be processed on Spark, including data cleaning, normalization and the learning process.

A group for enthusiasts of Haskell and other purely functional programming languages has started its activity with an inaugural meetup in October. The main goal of the group is to show the possibilities of using the Haskell language in commercial projects.

Adam Szlachta gave a 101 presentation about functional programming and Kamil Figiela presented a practical introduction to Haskell. Ida thinks that the Cracow event was an excellent occasion to integrate the local functional community and she cannot wait for the next meetup!

Jacek explained what Reactive Streams are all about and how (not) to use the APIs available in JDK 9+. He also considered the direction in which support for this standard in the JDK could develop.