
Monday, January 30, 2017

Do not squash commits mindlessly!

I've always wanted to write about my frustration with projects and code reviewers that insist on mindlessly squashing all the commits of a pull request before merging, instead of accepting well-factored commits.

Because I've just come across a blog post from Matthew Garrett about the same topic, I'll cite him:

http://mjg59.dreamwidth.org/42759.html

(...) When you're crafting commits for merge, think about your commit history as a textbook. Start with the building blocks of your feature and make them one commit. Build your functionality on top of them in another. Tie that functionality into the core project and make another commit. Add client support. Add docs. Include your tests. Allow someone to follow the growth of your feature over time, with each commit being a chapter of that story. (...) 

It's terrible when we go look at the history of a certain line of code only to discover it was last changed together with 20,000 other lines, in a commit with a poor message, in a pull request with an empty description...

Does TDD work?! Baby steps?!

I believe one good quality of thinkers, engineers, programmers, you name it, is skepticism.
Not taking things for granted allows one to dig deeper, try to understand pros and cons, dive into a detailed level of understanding.

TDD is old enough that you can find a myriad of related material online (and even in physical books), both favoring it and bashing it. I'd say that questioning it is great, whether you are in favor of it, against it, or indifferent towards it.

I'd like to share some links I've (re)visited recently:

TDD


The first one is a question, "why does TDD work?", or perhaps it should've been "does TDD work?!?"

I sympathize with this comment (emphasis mine):

TDD is, in my opinion, mainly about ways to make sure you can check parts rather than a big 'all or nothing' at the end. But the adagium of TDD, 'build a test first' is not meant to be 'make a test before you think about what you want to accomplish'. Because thinking of a test IS part of designing. Specifying what you want that exact part to do, is designing. Before you ever start to type, you have already done some designing. (In that way I think the term 'test driven design' is misleadingly implying a oneway path, where it is really a feedback loop).


Good design can come either through picking tests intelligently, or by any other technique... how do you learn and discover good designs? You either learn from existing designs, or you try things out.
I think the former is more effective, and immensely prepares you for the latter.

I'm speaking now to the mathematician's mind, though I see programming as much more than maths, as programming projects are also related to arts, social science, communication, and more...
The way I learned many topics in maths (and physics, etc.) was this: I was taught some theory, and then I did hundreds, perhaps thousands, of exercises. Often an exercise builds on ideas you've learned in previous challenges. Often a problem is hard enough that you need to be presented with a solution... then you understand it, and it makes the problem seem much easier than it originally was. And then you feel empowered to solve similarly tough exercises, and to innovate when faced with the unseen.


And what if TDD doesn't work? Consider this quote from Peter Norvig:

“Well, the first thing you do is write a test that says I get the right answer at the end,” and then you run it and see that it fails, and then you say, “What do I need next?”—that doesn’t seem like the right way to design something to me. It seems like only if it was so simple that the solution was preordained would that make sense. I think you have to think about it first. You have to say, “What are the pieces? How can I write tests for pieces until I know what some of them are?” And then, once you’ve done that, then it is good discipline to have tests for each of those pieces and to understand well how they interact with each other and the boundary cases and so on. Those should all have tests. But I don’t think you drive the whole design by saying, “This test has failed.”


Indeed. Writing arbitrary tests and making them pass doesn't take you anywhere.

The second link I share is a good, commented example of how it goes when the order of tests is unfavorable and we get stuck. One needs to think. You need to build a baggage, a toolbox, and it takes time ;-)


At all times, let's keep in mind that the one who drives the show is YOU (by writing tests or anything else).

Like other techniques, TDD doesn't always work, for it depends on individual and collective "talent" to get things done at a certain quality level.

(...) how come it does not always work?

Because testing requires a VERY different mindset than building does. Not every one is able to switch back and from, in fact some people will not be able to build proper tests simply because they cannot set their mind to destroy their creation. This will yield projects with too few tests or tests just enough to reach a target metrics (code coverage comes to mind). They will happy path tests and exception tests but will forget about the corner cases and boundary conditions.

Others will just rely on tests forgoing design partly or altogether. (...)


One positive thing about practicing collectively in a coding dojo is the ability to criticize design decisions, and to share that "baggage" of things that work and things that don't, most often being able to show why they do or don't. And, looking from a different perspective, there's space to learn things one would not otherwise think of in isolation.

Baby steps


This is getting quite long already... but to close with the third link, here goes something about the "size" of baby steps, something that people being introduced to TDD and baby steps often ask about, when they start to think that one ought to write senseless code even when an obvious implementation is known. No, you do not need to write mindless code, nor are you supposed to.

If you know how to do something, if it is obvious to you and you are comfortable, go ahead and do it. Stay in the flow. Now, when your instincts fail, there is the "fake it until you make it" technique to keep you in the game. Faking endlessly and thoughtlessly will not magically solve the problem for you, but it gives you time to observe.
When overconfidence fails you, you can learn to recognize when to step back.
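
To make that concrete, here is a minimal sketch of what a "fake it" step can look like in Go, using a hypothetical Fib function from a Fibonacci kata (the names and values are just for illustration):

package kata

import "testing"

// Fib is the function under test. The first "fake it" step simply
// returns a hard-coded value that makes the current test pass.
func Fib(n int) int {
	return 1 // fake: good enough for Fib(1) and Fib(2)
}

func TestFib(t *testing.T) {
	if got := Fib(2); got != 1 {
		t.Errorf("Fib(2) = %d, want 1", got)
	}
	// The next test, say Fib(3) == 2, is what forces us to replace
	// the fake with a real implementation (triangulation).
}

The fake buys you a passing test and a moment to observe; the next test is what pushes the real implementation out of you.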

A whiteboard, a piece of paper, or an interactive interpreter might all be equally good tools for fostering thinking.



We practice TDD and Baby Steps in coding dojos, though those are not magical techniques that solve all problems. The breadth of domains we write programs for is so vast that no single technique could possibly be a silver bullet. Those two alone are certainly not sufficient in the toolbox of someone aspiring to be a good programmer.

What are other useful techniques for you?

Wednesday, January 25, 2017

Go workshop



Yesterday I facilitated an introductory workshop about the Go programming language. So far the feedback has been positive: during our retrospective at the end of the workshop, in corridor conversations later, and in comments on the event page.

There were 9 participants, some from Red Hat, some from Solar Winds, some students, some unemployed, and some from other companies which I do not recall. It was a very good team, and I attribute the success of the workshop to them.

They were familiar with C, Java, C#, Python and perhaps other languages. It was either their very first contact with Go, or they've had limited contact with it in some recent project.

We had only two hours to pack a good deal of content. There were, however, some key points that we made sure to include in those exciting hours:

  • Learning should be fun
  • The learning process can be improved when we cooperate
  • Testing is a serious matter, and Go's design brings in some peculiarities to writing tests that are not seen in other languages.

The official Go documentation is really good, and thus part of the workshop was about following A Tour of Go. The other part was about testing, and how to approach the coding exercises from the tour using Test-Driven Development.
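
For those who have never seen Go tests, here is roughly what that looks like; this is just a sketch built around the tour's "Loops and Functions" exercise, with the Sqrt implementation being a stand-in, inlined only to keep the example self-contained:

package newton

import (
	"math"
	"testing"
)

// Sqrt approximates the square root of x with a few iterations of
// Newton's method, as in the tour exercise.
func Sqrt(x float64) float64 {
	z := 1.0
	for i := 0; i < 10; i++ {
		z -= (z*z - x) / (2 * z)
	}
	return z
}

// Go's peculiarities: tests live in files ending in _test.go, each test
// is a function named TestXxx taking a *testing.T, and "go test" finds
// and runs them all, with no extra framework needed.
func TestSqrt(t *testing.T) {
	if got := Sqrt(2); math.Abs(got-math.Sqrt2) > 1e-6 {
		t.Errorf("Sqrt(2) = %v, want %v", got, math.Sqrt2)
	}
}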

Due to our time constraints, we covered all of the "Basics" topics from the tour, and we had about one hour and twenty minutes of effective hands-on coding, which I consider great.

During the retrospective, I remember people talking about function closures, error handling in Go, testing. It was satisfying to see it was useful and everybody had something they've learned to share with the group.

I've heard from one participant that he got so excited about Go that he kept playing with it until late in the morning...

The slides I've used are available here:


Notes from Golang-Brno #4: Refactoring, containers, plugins, ...

Yesterday, I facilitated an introductory Go workshop here in Brno, CZ. This post is not about the workshop though, but just some notes from the talks that followed in the evening meeting.

Update: I've written some notes about the Go workshop.

There were two talks. The first one was an excellent pick of interesting talks and stories from dotGo 2016, very well presented by Jan Klat. I was glad to meet him :-)

Refactoring: difficulty with refactoring types

One of the talks Jan shared with us was about refactorings. While I haven't watched the original talk, one thing that came to mind is the Type Alias proposal that is being tracked here:
https://github.com/golang/go/issues/18130
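
For context, the proposal is about declarations like the one below, which let two names refer to the same type during a gradual refactoring, for example when moving a type from one package to another. At the time of writing this is still a proposal, and the import path here is made up for illustration:

package oldpkg

import "example.com/project/newpkg" // hypothetical new home of the type

// Client is now just an alias for the type at its new location, so code
// that still refers to oldpkg.Client keeps compiling while callers are
// migrated one at a time.
type Client = newpkg.Client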

Self deploying Go: debuggability

Another talk was about deploying very small containers, holding just statically linked Go programs, to "the cloud". My remark was that this works really well for demos and proofs of concept, but has several limitations and drawbacks in practice.

Yeah, as part of my work on OpenShift I can say "been there, done that". The thing to keep in mind is that often you will want to spawn a remote shell in your container... but if all you have is a single binary, that means no shell, no common Linux utils... and very limited ways of debugging problems, unless you add all the things into your Go binary. No, don't do that. Save yourself.

Most of the time, it is a false economy and a misconceived objective to try to have 5 MB binaries in your containers in production. Once you deploy your image to something like OpenShift or Kubernetes, your nodes will already have a local copy of the image, so scaling up, that is, starting more instances of a container, does not require pulling more data over the network.

A more interesting thing is to layer your images properly to reuse the base layers across your container images. Say you have a base image for all of your Go microservices that is 500 MB, and you have 3 services, each implemented by a binary of roughly 10 MB. If they share that common base image, loading the first image for the very first time will transfer about 510 MB, but the next 2 images will need to transfer only about 10 MB each.

Plugins in Go

Jan mentioned some talk about plugins, some "novel" attempts by folks at Drone.io... but it all sounded to me more or less like what we already find in production in software from Hashicorp, like Packer for example.

The problem/solution is at least as old as April 2013:
https://github.com/mitchellh/packer/issues/1

More:
https://www.youtube.com/watch?v=SRvm3zQQc1Q (all the story behind plugins in Go, early attempts, current design)
https://github.com/hashicorp/go-plugin



The second talk was about Mall.cz and their rewriting of an existing system into Go components. Unfortunately there was very little about Go, too much about internal details, and we were left with the feeling that the new solution is buzzword-compliant, more complex, and does not address the problem with the original system...

That's honest feedback. Anyway, the two guys presenting were good, open to questions and explaining their thoughts and decisions, so I stayed until the end of the event and enjoyed it.

My notes:

Iris Web framework, fast?!

Yeah, so they seem to have chosen to use a web framework that I've never heard of, because... because it is fast? Hmm... they came up with a very suspicious graph, claiming not only that Iris is orders of magnitude faster than anything else, but also that most of the existing "Go web frameworks" are "faster" than net/http in the standard library... how come?
Most, if not all, of those frameworks/tools/libraries delegate the hard work to net/http, so there is no way on Earth or any other planet they could be faster than net/http itself.

What's more, looking at the source code on GitHub, it turns out that Iris has had code contributions from a single developer. All due respect to the Iris author, but in an Open Source world, that, plus the relative immaturity of the code, plus the speed claims, are really big warning signs.

People are free to choose whatever they want. Without going into much detail, my philosophy is to keep dependencies to a minimum. One must judge really well the ROI, the value a given dependency brings compared to the complexity it adds to your project. And keep transitive dependencies in mind as well.

Trash, Go vendoring tool

There is a myriad of tools out there to help you vendor your dependencies along with your code... and also a lot to be said and learned about this topic.

Now, it was really funny to hear that the solution to Go vendoring is... "put all of your code in the trash". Sometimes that's really good advice -- fear not to delete thy code.

By the way, if your dependencies are "trash", why do you even depend on them?!

I never used trash, but what I found weird during the talk is that the speakers were happy about the fact that it deletes all the "useless" files, including "useless" tests and the parts of the dependencies that you do not use... It does look nice at first sight, but it left me with questions, like what happens when you want to upgrade the versions of your dependencies, and so on.

Containers in development and/or in production?

They were using Docker containers for development, and automating deployment with Chef in production.

Using net/http == using concurrency and goroutines

One of the reasons they were excited about Go was the existence of goroutines and channels. In the few minutes Go was mentioned, this was a recurring topic. But they seemed too excited about writing concurrent Go, apparently ignoring that it is no trivial thing :-)

Throwing goroutines and channels into a code base will NOT magically improve performance. On the contrary, when done wrong it can harm performance, and introduce more subtle bugs.

However, one thing to note is that, simply by using Iris, and underneath it net/http, their programs already have goroutines and all the fanciness of concurrency in Go! Yeah, that's another important lesson to be learned... the API of net/http has no channels, no explicit goroutines, and you get to write your handlers as regular functions, all the underlying complexity well factored away from your eyes.
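
To illustrate (a minimal sketch, nothing Iris-specific): the handler below is an ordinary function, yet net/http serves each incoming connection in its own goroutine, so requests are already handled concurrently without a single channel or go statement in our code.

package main

import (
	"fmt"
	"log"
	"net/http"
)

// hello is a plain function; net/http calls it from a goroutine it spawns
// per connection, so concurrent requests "just work".
func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "hello, %s\n", r.URL.Path)
}

func main() {
	http.HandleFunc("/", hello)
	log.Fatal(http.ListenAndServe(":8080", nil))
}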

Tests: build tags

They've heard about "build tags", but I think they've misunderstood them. Their example of running tests was something like this:

go test -run Unit
go test -run Integration

That means they understood tags as simply naming every test function something like "TestUnitFoo". While that may work, it is a waste of time/characters/whatever to name your test functions in that fashion.

Build tags, or build constraints, are well documented here:


For tests, one common and reasonable pattern is to write your unit tests normally, in *_test.go files that go along with your packages. Then, for integration or other types of tests, put them in separate packages, also in *_test.go files, but add a build constraint that is only satisfied when you intend to run integration tests. In other words, add this to the top of your files:

// +build integration

package foo

import "testing"

// TestFooIntegration only builds, and therefore only runs, when the
// "integration" tag is passed to the go tool. (Illustrative placeholder.)
func TestFooIntegration(t *testing.T) {
	// ... exercise the real external dependencies here ...
}
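
With that constraint in place, a plain test run ignores these files, and you pass the tag explicitly when you do want the integration tests (the "integration" tag name above is just a convention):

go test ./...
go test -tags integration ./...

No special naming of test functions is required.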



That's all from the talks yesterday :-)

Wednesday, January 11, 2017

Expanding the Coding Dojo Brno!

I believe that learning computer programming is a continuous process that involves deliberate practice and gradually pushing yourself into mastering new skills.

There are also important social skills involved, like team work and communication.

In a coding dojo meeting, we work as a self-organized group to learn and share knowledge by following certain principles and practices. We practice:
  • collective problem solving 
  • Test-Driven Development 
  • pair programming 
  • baby steps 
  • code refactoring 
  • and others... 
We also keep everybody in the loop, making the environment inclusive and welcoming for people with all sorts of background. It is a collaborative community, not a competition.

All that happens while we solve computer programming exercises (called "katas") in a programming language of our choice.

A bit of history

Coding dojos started to appear around the globe throughout the boom of Agile development. The ParisDojo was founded back in December 2004, and then, four years later, in December 2008, with the help of an awesome community of developers, I founded the Coding Dojo Rio. The community is so amazing that it has grown non-stop and has been holding regular meetings every week for over 8 years now.

In January of last year, with two colleagues, I started the Coding Dojo Brno group here in Brno. We held over 40 meetings in the Red Hat office during 2016, and, though we've always been an open-to-all group, we've had limited participation from the external community. This year we're finally expanding, and we'll be holding meetings in a new venue, a room at Masaryk University's Faculty of Informatics, starting next week.

We're also trying to gain more visibility through a Facebook group: facebook.com/groups/CodingDojoBrno.

If you are interested in programming, knowledge sharing, agile, TDD, etc, get in touch and come to one of our next events!

Go workshop and coding dojo in Brno

Later this month the Golang-Brno community starts its activities in 2017 with an introductory workshop and coding dojo, followed by a regular meetup with talks:

More info:


I'll facilitate the workshop. The idea is to help people understand the basics of Go by following the official tour, and then put everyone to code in a randori coding dojo, where we'll use the testing package and the go tool to write automated tests and small programs.

Friday, January 6, 2017

Configuring keyboard repeat rate on Gnome 3

For some years now I've been using a rather fast keyboard repeat interval. With a well-tuned delay, it just seems to make a lot more sense to the programmer in me.

It allows me to focus on navigating the screen in more effective ways, like word-by-word instead of char-by-char, and also makes it quick to repeat keys when you intend to do so, like typing:

--------------------------------------------------------------

Can you type that without feeling like watching a slow motion movie?!

I don't really know when I started doing it, nor the original motivation. I think when I met Tim Ottinger at a coding dojo in Beijing, he mentioned how frustrated he got whenever he touched somebody else's slow keyboard at a workshop or some other event, and would immediately suggest and show how to change the key repeat rate settings.
So it might have been Tim, or it might have been someone else in that story, but the story is true :P

Last year, doing coding dojos in Brno, I've noticed the opposite feeling in people touching my "fast" keyboard... they often get so scared when they unintentionally seeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee

Phew!

If you think a fast key repeat rate is for you, here is how to set it in Gnome 3 (for reference, this works on Fedora 25).

First, inspect your current settings:

$ gsettings list-recursively org.gnome.desktop.peripherals.keyboard                                        
org.gnome.desktop.peripherals.keyboard repeat-interval uint32 0
org.gnome.desktop.peripherals.keyboard delay uint32 500
org.gnome.desktop.peripherals.keyboard repeat true

I've set the repeat-interval to the fastest value, 0. The delay is just the default, 500 ms. The repeat value controls whether keys repeat at all.

Set each key to your preferred value with the command:

$ gsettings set org.gnome.desktop.peripherals.keyboard KEY VALUE

Replace KEY and VALUE with the appropriate key and value.


Note: the same configuration is available in a graphical interface in Gnome Settings > Universal Access > Typing > Repeat Keys. Unfortunately, with that interface you only have dials, and no way to enter precise values or to inspect the current values.
If you want to do it with a graphical interface and have precise values, you can install and use the package dconf-editor.

Update: after a reboot I realized that the minimum usable value for repeat-interval is 1. It was apparently okay to set it to zero, but after a reboot I could not log in on a Wayland session (a crash caused by a division by zero...), while Xorg would let me log in, but with the keyboard settings silently reset to defaults (though gsettings and Universal Access would still display the values I had configured).
https://bugzilla.redhat.com/show_bug.cgi?id=1413305

Monday, January 2, 2017

Backing up entire disks and making .img files smaller

As part of migrating to a new laptop and backing up an entire disk partition, here I document what I did.

Creating full disk images

Work from a Live USB with Fedora 25 or any other modern distro, so that the partitions that will be backed up are not in use.
Use the Disks utility to create a disk image of some partition, e.g., the original /home.

This will create a .img file as big as the partition itself, regardless of how much data is actually stored in it. Next, we shrink the image to the minimal size that fits the data it contains, potentially saving some backup space.

Shrink disk image to its minimal size

If all you have is a single partition image, as created in the step above, you can use a loop device and the resize2fs utility to make the image smaller.

First, we'll find an available loop device with:

sudo losetup -f

Note down the name of the loop device, e.g., /dev/loop1, and use it in the next steps.

Attach the image to the loop device:

sudo losetup /dev/loop1 'Disk Image of ... .img'

And shrink the filesystem to its minimum size:

sudo resize2fs -Mp /dev/loop1

The -M flag shrinks the filesystem to the minimum size, while -p prints progress information.
Be patient, resize2fs may take some time depending on the partition size, disk speed, etc.

Note: if you get an error from the command above, it might be needed to run this before running resize2fs again (the error message will tell):

sudo e2fsck -f /dev/loop1

When resize2fs finishes, detach the loop device:

sudo losetup -d /dev/loop1

Finally, run resize2fs on the .img file to resize it:

resize2fs -Mp 'Disk Image of ... .img'

This should be relatively fast.
We're done! Check the new image size.

Sunday, January 1, 2017

Get a list of all RPM packages installed

New year, time to start with a fresh install of Fedora 25...
While dnf has worked really well for in-place upgrades, this time I went with a migration into a clean install on new hardware.

This is what I used to get a list of all the RPM packages installed, so that I can reinstall them on the new system:

sudo dnf --disableexcludes=all repoquery --qf '%{name}' --installed | sort