Links of the week

  1. git standup -d 1000 – show your commits from the last 1000 days
  2. Free SSL with Let’s Encrypt and nginx
  3. Still valid

Links of the week

  1. ECMAScript 6 -> compatibility of your JavaScript engine with ES6. I use Google Chrome Canary (currently 97% compatible).
  2. Picture polyfill -> delivers the appropriate image size based on client conditions (screen size, viewport, etc.).
  3. lsof -i :22 – see which process is using port 22
  4. SSL is not an option. How to achieve an A+ score
  5. Convert .crt to .pfx (placeholder file names): openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key -in certificate.crt

iTerm2 and Docker startup

If you open iTerm2 and try to run docker ps, you’ll probably receive the error below:

    Cannot connect to the Docker daemon. Is the docker daemon running on this host?

The solution for using iTerm2 instead of the Docker Quickstart Terminal is relatively simple:
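A minimal sketch, assuming Docker Toolbox created its default machine under the name default:

    docker-machine start default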

That will start the Docker virtual machine and print its startup progress.

You then need to run:
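Presumably the env command, so the shell learns how to reach that VM (again assuming the machine name default):

    docker-machine env default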

The output of it is:
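A set of export statements Docker needs; the values below are only illustrative and will differ on your machine:

    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.99.100:2376"
    export DOCKER_CERT_PATH="/Users/you/.docker/machine/machines/default"
    export DOCKER_MACHINE_NAME="default"
    # Run this command to configure your shell:
    # eval $(docker-machine env default)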

All you need to do now is to run:
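The eval wires those variables into the current shell, after which docker ps should work:

    eval "$(docker-machine env default)"
    docker ps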

That’s all 😉


Node.js Module Patterns

1. The simplest module
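A minimal sketch (file names are illustrative): a module is just a file, and requiring it runs it once.

    // greet.js – the simplest module: it just does something when loaded
    console.log('hello from greet.js');

    // app.js
    require('./greet');   // prints: hello from greet.js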

2. Module Patterns
2.1. Define a global
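A sketch of the pattern with illustrative names – the module attaches a function to the global object instead of exporting it:

    // greet.js
    global.greet = function () {
      console.log('hello');
    };

    // app.js
    require('./greet');
    greet();   // works, because greet was defined globally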

Note: Don’t pollute the global space

2.2. Export an anonymous function
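A sketch with illustrative names – whatever you assign to module.exports is what require() returns:

    // greet.js
    module.exports = function () {
      console.log('hello');
    };

    // app.js
    var greet = require('./greet');
    greet();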

2.3. Export a named function
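A sketch with illustrative names – the function is exported under an explicit name:

    // greet.js
    exports.greet = function () {
      console.log('hello');
    };

    // app.js
    var greeter = require('./greet');
    greeter.greet();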

2.4. Export an anonymous object
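A sketch with illustrative names:

    // user.js
    module.exports = {
      name: 'anonymous user',
      getName: function () {
        return this.name;
      }
    };

    // app.js
    var user = require('./user');
    console.log(user.getName());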

2.5. Export a named object
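A sketch with illustrative names – the object is exported under an explicit name:

    // user.js
    exports.user = {
      name: 'named user',
      getName: function () {
        return this.name;
      }
    };

    // app.js
    var user = require('./user').user;
    console.log(user.getName());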

2.6. Export an anonymous prototype
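A sketch with illustrative names – the constructor itself is the export, so the client chooses what to call it:

    // user.js
    function User(name) {
      this.name = name;
    }
    User.prototype.getName = function () {
      return this.name;
    };
    module.exports = User;

    // app.js
    var User = require('./user');
    console.log(new User('Bob').getName());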

2.7. Export a named prototype
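A sketch with illustrative names – the same constructor, exported under an explicit name:

    // user.js
    function User(name) {
      this.name = name;
    }
    User.prototype.getName = function () {
      return this.name;
    };
    exports.User = User;

    // app.js
    var User = require('./user').User;
    console.log(new User('Alice').getName());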


  • Named exports – allow you to export multiple ‘things’ from a single module
  • Anonymous exports – simpler client interface (compare the require calls in the sketch below)
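A quick side-by-side of the client code (module names are illustrative):

    // named export: you pick the pieces you need off the returned object
    var greeter = require('./greet');
    greeter.greet();

    // anonymous export: the require() result is the thing itself
    var greet = require('./greet');
    greet();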

Manage multiple versions of nodejs with nvm

I’m using Node.js as my primary development language and I usually deploy my projects in Docker containers. These days I realised my development Node.js version is not the latest one.

So I was looking for a way to update it really easily, with the option to choose between versions (it reminds me of picking the .NET Framework version per app pool, or of PHP versioning).

nvm seems to be the perfect tool for the job:
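A sketch of the commands, assuming nvm is already installed and 6.1 is the version I was after:

    nvm install 6.1
    nvm use 6.1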

    Now using node v6.1.0 (npm v3.8.6)
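To make it the default for new shells, presumably the alias was set next:

    nvm alias default 6.1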

    default -> 6.1 (-> v6.1.0)
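Then a quick check of which node the shell now resolves:

    node -v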

The output was v6.1.0

You can check the installed Node.js versions with:
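nvm’s list command shows every installed version and marks the active one:

    nvm ls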


Docker Engine and Docker Compose on OS X – configure iTerm2

After you install Docker Toolbox, just add the line below in your iTerm2 console and to your zsh profile to add Docker support for zsh:
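A minimal sketch, assuming the Docker Toolbox default machine is named default:

    # ~/.zshrc – point every new zsh/iTerm2 session at the Docker Toolbox VM
    eval "$(docker-machine env default)"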


MongoDB – Security

By default, security in MongoDB is turned off. You can enable it by starting mongod with --auth or by adding the corresponding key to the mongod config file.
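A minimal sketch of both options (the dbpath is illustrative):

    # command line
    mongod --auth --dbpath /data/db

    # mongod config file (YAML format, MongoDB 2.6+)
    security:
      authorization: enabled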

If you try any command from the mongo shell without authenticating, the result will be a “not authorized” error.

However, you still have access to the admin database.
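A sketch of authenticating against the admin database from inside the shell (credentials are illustrative):

    use admin
    db.auth("admin", "secret")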

Or you can start the shell from the command line, passing the credentials directly:
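Again with illustrative credentials:

    mongo admin -u admin -p secret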

Types of users (clients):

  • “admin” users:
    • can do administration
    • created in the “admin” database
    • can access all databases
  • regular users:
    • access a specific database
    • read/write or readOnly

To alter the roles:
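A sketch using the 2.6+ shell helper (user name and role are illustrative):

    use admin
    db.grantRolesToUser("admin", [ { role: "readWriteAnyDatabase", db: "admin" } ])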

Available roles:

  • read
  • readWrite
  • dbAdmin
  • userAdmin
  • clusterAdmin
  • readAnyDatabase
  • readWriteAnyDatabase
  • dbAdminAnyDatabase
  • userAdminAnyDatabase

To secure a MongoDB cluster you can enable authentication and authorization with the --keyFile flag. Note that when using --keyFile with a replica set, database contents are still sent over the network between mongod nodes unencrypted.

The keyfile must be present on all members of the replica set.
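A sketch of generating and using a keyfile (paths and replica set name are assumptions):

    # generate the keyfile and copy it to every member
    openssl rand -base64 741 > /etc/mongodb-keyfile
    chmod 600 /etc/mongodb-keyfile

    # start each mongod with it
    mongod --replSet rs0 --keyFile /etc/mongodb-keyfile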

Log in and test the replica set authentication:
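Connect to one member with the admin credentials and check the set status (host name and credentials are illustrative):

    # from a terminal
    mongo mongo1:27017/admin -u admin -p secret

    # then, inside the mongo shell
    rs.status()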

Starting with MongoDB 2.6, --auth is implied by --keyFile.

Additional resources:


Docker private registry

Docker has a public registry called Docker Hub to store Docker images. While Docker lets you upload your creations to Docker Hub for free, anything you upload is also public. This might not be the best option when you work on private projects or inside your organization.

So I wanted to spin up my own private Docker registry based on … Docker. Actually on docker-compose, as I need two components for my repo: the registry server itself and an nginx server acting as a proxy to handle authentication and SSL.

Below is the docker-compose.yml file I’m using to spin up the environment:
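A minimal sketch of such a file in the compose v1 format of the time (image tags, ports and paths are assumptions):

    nginx:
      image: nginx:1.9
      ports:
        - 443:443
      links:
        - registry:registry
      volumes:
        - ./nginx/:/etc/nginx/conf.d
    registry:
      image: registry:2
      environment:
        REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
      volumes:
        - ./data:/data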

On my host I created two directories, one to hold the image data and one for the nginx configuration.
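From the directory that will hold docker-compose.yml (location is an assumption):

    mkdir data nginx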

In the nginx directory I added the configuration file nginx.conf together with the SSL certificate.
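A sketch of the proxy configuration (server name and certificate paths are assumptions):

    upstream docker-registry {
      server registry:5000;
    }

    server {
      listen 443 ssl;
      server_name registry.example.com;

      ssl_certificate     /etc/nginx/conf.d/domain.crt;
      ssl_certificate_key /etc/nginx/conf.d/domain.key;

      # registry layers can be large
      client_max_body_size 0;

      location / {
        auth_basic           "Registry realm";
        auth_basic_user_file /etc/nginx/conf.d/registry.password;

        proxy_pass                         http://docker-registry;
        proxy_set_header Host              $http_host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout                 900;
      }
    }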

The registry.password file is used by nginx to keep users and passwords for our registry.

Initially created with:
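A sketch, using the htpasswd tool from apache2-utils and an illustrative user name:

    htpasswd -c nginx/registry.password exampleuser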

You can then drop -c to add additional users.

You can spin up the docker registry with:
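From the directory holding the docker-compose.yml:

    docker-compose up -d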

If everything is OK, you can check that the platform is up:
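For a v2 registry, hitting the /v2/ endpoint through the proxy should return an empty JSON object (domain and credentials are illustrative):

    curl -k https://exampleuser:password@registry.example.com/v2/
    # expected response: an empty JSON object: {}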

If you want to start docker-registry as a service you need to create the Upstart script:
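A sketch of /etc/init/docker-registry.conf (the project and docker-compose paths are assumptions):

    description "Docker registry"

    start on runlevel [2345]
    stop on runlevel [016]

    respawn

    chdir /home/user/docker-registry
    exec /usr/local/bin/docker-compose up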

Let’s test our new Upstart script by running:
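With Upstart, the service command controls the job:

    sudo service docker-registry start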

Feel free to check the log files:
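Upstart writes the job’s output under /var/log/upstart:

    tail -f /var/log/upstart/docker-registry.log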

Your private repository is now accessible over HTTPS at the domain you configured in nginx.

Using our private docker registry:

You can log in to your registry with:
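Domain and credentials are illustrative:

    docker login https://registry.example.com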

From your client machine, create a small image to push to our new registry. We will create a simple image based on the ubuntu image from Docker Hub:
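Pull and enter a throwaway ubuntu container:

    docker run -t -i ubuntu /bin/bash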

Inside the container create a dummy directory (e.g. my_application) and commit the change:
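A sketch (the local image name test-image is illustrative):

    # inside the container
    mkdir my_application
    exit

    # back on the host, commit the last container you ran to a local image
    docker commit $(docker ps -lq) test-image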

then tag the image:
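Again with an illustrative registry domain:

    docker tag test-image registry.example.com/test-image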

Note that you are using the local name of the image first, then the tag you want to add to it. The tag does not use https://, just the domain, port, and image name.

Now we can push that image to our registry. This time we’re using the tag name only:
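Sticking with the illustrative domain from above:

    docker push registry.example.com/test-image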

Similarly, you can pull images from your private registry:
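Same illustrative name:

    docker pull registry.example.com/test-image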


Happy Dockering!

Dealing with Technical Debt

The phrase “technical debt” has become commonplace in software development. It was introduced by Ward Cunningham to describe the cumulative consequences of corners being cut throughout a software project’s design and development.

Imagine you need to add a feature or piece of functionality to your software. You see two ways of doing it: the short, quick & dirty one that will make further changes harder, or the clean way that will take longer to add.

Technical debt is analogous to financial debt; it incurs interest payments, which come in the form of the extra effort you have to put in on future work because of the quick and dirty design choice. We can choose to keep paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design.

“Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.” (Ward Cunningham, 1992)


Technical debt tends to accumulate over time

The time you spend servicing that debt takes away from the time you are able to spend delivering new value to your customers.

Impacts of Technical Debt

  • Too much time maintaining existing systems and not enough time adding value – fixing defects or patching the existing code over and over.
  • Takes more effort (time & money) to add new functionality – you are not able to react to what’s happening on the market in terms of competitive products.
  • User dissatisfaction – usually the functionality you’re adding is not what they expect, or they struggle with the product due to the amount of defects.
  • Latent defects – defects the customer does not notice until they hit an unforeseen situation, even though the developer or the seller is already aware of them.

Business impact of Technical Debt

  • Higher TCO – total cost of ownership (purchase price of asset + cost of operation)
  • Longer TTM – time to market to deliver new value
  • Reduced agility – less ability to react to changes on the market


  • Innovator’s Dilemma – even once you are dominant in the market, your competition can develop and deliver new functionality faster than you
  • Studies reveal that for every $1 of competitive advantage gained by cutting quality or taking shortcuts, it costs about $4 to restore that quality to the product
  • The biggest consequence – it slows your ability to deliver future features, handing you an opportunity cost of lost revenue

The “source” of Technical Debt

Technical debt comes from work that is not really Done, or from steps skipped in order to get the product out of the door (schedule pressure).

A major source of “undone” work is not knowing what is required, so we add code that implements incorrect functionality. In this scenario we then try to tweak the functionality with workarounds and “code manipulation”.

Company culture, a.k.a. “that’s not my job”, can also sustain technical debt (QA vs Dev vs DevOps).

Forms of Technical Debt

  • Defects
  • Lack of automated build
  • High code complexity – hard to modify/test
  • Lack of automated deployment – too many manual, human steps
  • Lack of unit tests
  • Business Logic in wrong places
  • Too few acceptance tests
  • High cyclomatic complexity
  • Duplicated code or modules
  • Unreadable names / algorithms
  • Highly coupled code

All forms of technical debt are associated with changing code and testing it – “DONE” requires testing:

  • acceptance criteria
  • details of the design/solution
  • automate all tests
  • write unit tests and code
  • review documentation
  • test integrated software
  • regression testing of existing functionality
  • fix broken code

It is important to reserve sprint capacity and perform the above steps every single sprint.

Paying Off Technical Debt

  1. Stop creating new debt
  2. Make a small payment regularly (eg. each sprint)
  3. Repeat step 2 🙂

1. Stop creating new debt

  • Clearly define what “DONE” work is!
  • Know “what” the functionality is – clear acceptance criteria
  • Automated testing
    • repeatable, fast – do it every time the code changes!
    • regression tests – run at least every sprint, so you can go through all your regression tests and know that whatever used to work still works.
    • new functionality – test it automatically, so it is clear that it not only works when it is new but is still working as expected next sprint, when you are modifying things around it.
  • Refactor – not rewriting the application/product, but increasing the maintainability of the code without changing its behavior. It can mean changes that make the code readable/understandable (renaming variables/methods to match their true meaning) or extracting redundant code into a method. Fixing defects is different from refactoring. Refactoring is best done in very small steps, a little bit at a time, and it requires automated tests to make sure the refactoring has not changed the behavior. See Martin Fowler – Refactoring: Improving the Design of Existing Code.
  • Automated process – build & release -> eliminate human errors = reduce a source of technical debt. Let computers do the repetitive work where humans can easily introduce errors.

Note: Those are some examples and not an exhaustive list of things that can generate technical debt.

2. Pay off existing debt

  • Fix defects – preferably as soon as you find them. Get them to the top of the list, work them off (zero-bug policy), get them out of the code, refactor the code, improve the structure, make the names meaningful, reduce complexity, remove duplicated code, improve the test coverage.
  • Test coverage – it doesn’t mean only lines of code; it means testing through the logical use of the code. Test coverage is expensive, so it’s most important to test the product as it is actually being used – that is the coverage we care about most, because it is how customers use our software.

The goal: to pay the tech debt a little bit at a time every single sprint and avoid “technical bankruptcy”!

3. Prevent tech debt

  • cross-functional teams
  • agreed upon definition of done
  • frequent user feedback
  • discipline, transparency – not taking shortcuts

Additional links: