Vagrant on 123-reg Cloud server

To install Vagrant on your cloud server, you need to download and run the installer package. Before going further, be sure that you have dpkg and VirtualBox installed:
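On a Debian-based system, a quick check might look like this (package names can vary by release):

    # Verify dpkg is available and VirtualBox is installed
    dpkg --version
    dpkg -l | grep -i virtualbox
    # Install VirtualBox if it is missing
    sudo apt-get install virtualbox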

Go to the Vagrant downloads page, check for the latest release, then download and install it:
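Something along these lines, assuming a 64-bit .deb package; substitute the latest version number from the downloads page (1.8.1 is only an example):

    wget https://releases.hashicorp.com/vagrant/1.8.1/vagrant_1.8.1_x86_64.deb
    sudo dpkg -i vagrant_1.8.1_x86_64.deb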

Next, you’ll need to install the kernel headers:
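On Ubuntu, the headers matching the running kernel can be installed with:

    sudo apt-get install linux-headers-$(uname -r)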

Then, reconfigure the VirtualBox DKMS:
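The standard way on Debian/Ubuntu:

    sudo dpkg-reconfigure virtualbox-dkms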

If everything is OK, you should be able to run Vagrant from your shell:
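For example:

    vagrant --version
    # Optional smoke test with a public example box
    vagrant init hashicorp/precise64
    vagrant up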

Feel free to look at other articles for quick demos of Vagrant’s capabilities.



MongoDB – Profiling and performance

You can retrieve the status of your server by using:
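From the mongo shell:

    db.serverStatus()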

See the list of running operations from the current shell:
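A minimal sketch (the opid value is hypothetical):

    db.currentOp()
    // kill a long-running operation by its opid, with the cautions below
    db.killOp(12345)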


  • Exercise care killing operations on secondaries (replica set)
  • Exercise care killing compact operations
  • Don’t kill internal operations – data migration / sync


The mongostat utility provides a quick overview of the status of a currently running mongod or mongos instance.
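For example, sampling every five seconds:

    mongostat 5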


Mongotop tracks the amount of time a MongoDB instance spends reading and writing data.
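Again sampling every five seconds:

    mongotop 5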



The database profiler collects fine-grained data about MongoDB operations, cursors, and commands. Profiling can be enabled at the database level or per MongoDB instance.

Levels of profiling:

  • 0 = off, no profiling (default)
  • 1 = selective, log only slow operations ( > slowms) – see the example below
  • 2 = on, profile all operations
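For example, to enable selective profiling for operations slower than a hypothetical 100 ms threshold:

    db.setProfilingLevel(1, 100)
    db.getProfilingStatus()   // confirm the level and the slowms value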

Once profiling is enabled, a “new” collection, system.profile, becomes visible. As a note, the system.profile collection is capped.

The output:

Retrieve the last profile info:
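For example:

    db.system.profile.find().sort({ ts: -1 }).limit(1).pretty()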

Additional links:


CMS choice

I’m looking for a PHP-based CMS, easy to scale and flexible enough to be the starting point for the next generation of the e-commerce website of the hosting company I work for.

Why PHP? I started with scalability as the top requirement, and I felt Node.js was the answer. However, things are not so generous on the CMS side. There are a few names, but they look to be at an early stage of development or suffer from a lack of features: KeystoneJS, PencilBlue, Apostrophe, Ghost.

Note: Reaction Commerce is a really interesting project, as they are the only ones building something for the e-commerce industry. They use Meteor, Node.js (note: an interesting combination), MongoDB, and CoffeeScript, and it is launched as a Docker container.

I’m going to collect the strengths and weaknesses of a few of the available PHP options, with the mention that I’m going to write a separate post about Reaction Commerce.

We’ll discuss:

  • Expression Engine
  • Craft
  • ProcessWire

ExpressionEngine is built by EllisLab, a company that also created CodeIgniter, a popular PHP framework for building robust web applications. ExpressionEngine 2.x is built on top of CodeIgniter.

Craft is built by Pixel and Tonic, a company that, interestingly, got started creating third-party add-ons for ExpressionEngine. Their add-ons – Playa and Matrix – are well-built, renowned plugins within the ExpressionEngine community.

ProcessWire – it’s basically PHP with a really extensive jQuery-like API, so almost anything is possible.

Data modelling

A model is simply a type of content your site stores. You might have a “blog post”, “product”, or “staff member” model. ExpressionEngine calls these model types a channel while Craft calls them a section.

The flexibility of the model comes from custom fields.

Craft strengths:

  • Responsive control panel
  • Live preview
  • Entry draft/version functionality
  • Has several pricing options to fit your needs
  • Custom entry types (if you have several “types” of blog posts that differ in content/layout)


ExpressionEngine strengths:

  • More add-ons for things like e-commerce
  • Been around longer
  • Well known within large companies

Craft uses Twig as its template engine.

Additional links:


Query Index Performance


Indexes dramatically improve the speed and efficiency of read operations. Without indexes, MongoDB must scan every document in a collection to select those that match the query statement. This scan is highly inefficient and requires MongoDB to process a large volume of data.

Indexes store a small portion of the data set in an easy-to-traverse form. The index stores the value of a specific field or set of fields, ordered by the value of the field as specified in the index.

The createIndex() command is very similar to the sort() command (1 = ascending, -1 = descending). A compound index has multiple fields in its definition. You can index array contents, subdocuments, and subfields.
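A sketch with hypothetical collection and field names:

    // single-field index, ascending
    db.products.createIndex({ price: 1 })
    // compound index: ascending on category, descending on price
    db.products.createIndex({ category: 1, price: -1 })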

Optionally, you can specify the name of the index. If unspecified, MongoDB generates an index name by concatenating the names of the indexed fields and the sort order.
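For example (the index name is an arbitrary choice):

    db.products.createIndex({ category: 1 }, { name: "idx_category" })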

If the field does not exist in any of the documents in the collection, MongoDB will create the index without any warning. A MongoDB index can have keys of different types (e.g., ints, dates, strings).

Retrieve the list of indexes for a collection:
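For example:

    db.products.getIndexes()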

Delete an index:
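For example, by key pattern (dropping by name also works):

    db.products.dropIndex({ category: 1 })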

ObjectId – unique, auto-generated (unless specified), 12 bytes; it provides valuable information about the document, e.g. when it was created:
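In the shell, the embedded timestamp can be read back like this:

    // the first 4 bytes of an ObjectId encode the creation time
    db.products.findOne()._id.getTimestamp()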

Explain queries:

MongoDB allows you to examine queries, see which indexes are used, and figure out the performance of the query statement.
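For example:

    db.products.find({ category: "books" }).explain("executionStats")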


  • nReturned – indicates the number of matching documents returned
  • totalDocsExamined – indicates the total number of documents scanned
  • totalKeysExamined – indicates the total number of index entries scanned

We look for totalKeysExamined = 0 (no index yet), totalDocsExamined = 2, and nReturned = 2.

Another example, this time using 1,000,000 documents:
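A sketch of such a setup, with a hypothetical numbers collection:

    // populate a test collection with 1,000,000 documents, then explain a query on it
    for (var i = 0; i < 1000000; i++) {
        db.numbers.insert({ value: i });
    }
    db.numbers.find({ value: { $gt: 999995 } }).explain("executionStats")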

and output:

Collection scan (COLLSCAN) occurs when indexes are not involved:

explain also allows you to evaluate write operations. When applied, it does not perform the operation, but it estimates the work required to perform it and figures out which indexes would be used:
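For example, explaining a remove without deleting anything (hypothetical filter):

    db.products.explain("executionStats").remove({ category: "books" })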


queryPlanner vs executionStats vs allPlansExecution

The behavior of db.collection.explain() and the amount of information returned depend on the verbosity mode. The default mode for explain() is queryPlanner.

executionStats includes queryPlanner and additional information:

  • time to execute the query
  • number of documents returned
  • documents examined.

The output:

MongoDB runs the query optimizer to choose the winning plan, executes the winning plan to completion, and returns statistics describing the execution of the winning plan.

If we drop the index and rerun the statement, the output will change:
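A sketch, assuming an index on value had been created earlier:

    db.numbers.dropIndex({ value: 1 })
    db.numbers.find({ value: { $gt: 999995 } }).explain("executionStats")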

We see executionTimeMillis = 24 ms, totalKeysExamined, and totalDocsExamined, but also the executionStages.

allPlansExecution – like executionStats, but runs each available plan and returns the stats for each one.

The output will show all available execution plans with the stats:

If you look at allPlansExecution, the first plan returned nReturned = 0 documents and was stopped because it was overruled by the second plan, which had already finished.


Indexes are automatically used by MongoDB when performing queries (apart from sparse indexes). The $hint operator (hint() in the shell) forces the query optimizer to use the specified index to run a query. This is useful when you want to test the performance of a query with different indexes:
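For example:

    // force the optimizer to use the index on value
    db.numbers.find({ value: { $gt: 999995 } }).hint({ value: 1 }).explain("executionStats")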

Covered queries

As per the official MongoDB documentation, a covered query is a query in which:

  • all the fields in the query are part of an index and
  • all the fields returned in the query are in the same index

Since all the fields present in the query are part of an index, MongoDB matches the query conditions and returns the result using the same index, without actually looking inside the documents. Since indexes typically reside in RAM, fetching data from an index is much faster than fetching data by scanning documents.
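A minimal sketch of a covered query (hypothetical names); note that _id must be excluded from the projection, since it is not part of the index:

    db.products.createIndex({ category: 1, price: 1 })
    db.products.find({ category: "books" }, { _id: 0, category: 1, price: 1 })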

Selectivity is the primary factor that determines how efficiently an index can be used. Ideally, the index enables us to select only those records required to complete the result set, without the need to scan a substantially larger number of index keys (or documents) in order to complete the query. Selectivity determines how many records any subsequent operations must work with. Fewer records means less execution time.

MongoDB Index types

In MongoDB there are a few index types:

A sparse index is useful for fields that appear only in some documents. Sparse indexes only contain entries for documents that have the indexed field, even if the indexed field contains a null value. The index skips over any document that is missing the indexed field.
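For example, with a hypothetical optional field:

    // only documents that have twitter_handle get an index entry
    db.users.createIndex({ twitter_handle: 1 }, { sparse: true })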

TTL index – used by MongoDB to automatically remove documents from a collection after a certain amount of time.

No TTL-indexed document will be deleted before expireAfterSeconds has passed since the value of its indexed date field.
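A sketch with hypothetical names:

    // documents become eligible for deletion 3600 s after their createdAt value
    db.sessions.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })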

Geospatial index – supports 2d coordinates (e.g. location: [longitude, latitude]).

Text indexes – the field you want to index must contain string content.
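Sketches for both (hypothetical collections):

    // 2d geospatial index on [longitude, latitude] pairs
    db.places.createIndex({ location: "2d" })
    // text index on a string field
    db.articles.createIndex({ body: "text" })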


Additional links:


MongoDB – Internals & Performance

Storage Engine

Starting with version 3.0, MongoDB adopted a pluggable architecture that allows you to choose the storage engine. A storage engine is the part of a database responsible for managing how data is stored, both in memory and on disk.

Different engines perform better for specific workloads: one storage engine might offer better performance for read-heavy workloads, while another might support higher throughput for write operations.

MMAPv1 – the original MongoDB storage engine and the default storage engine for MongoDB versions before 3.2. It maps the data files directly into virtual memory, allowing the operating system to do most of the work of the storage engine.

WiredTiger – default storage engine starting in MongoDB 3.2. Details can be found here.

The storage engine determines:

  • data format – different storage engines can implement different types of compression and different ways of storing the BSON for MongoDB
  • format of indexes – indexes are controlled by the storage engine. MongoDB uses B-trees. With MongoDB 3.0, WiredTiger uses B+ trees, with other formats expected in later releases.

To retrieve the current storage engine, you can run the command below:
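From the mongo shell:

    db.serverStatus().storageEngine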


MMAPv1 uses the mmap Unix system call, which maps files on disk directly into virtual memory. This treats the data files as if they were already in memory.

MMAPv1 provides collection-level locking starting with MongoDB 3.0, compared to the database-level locking of v2.2 and v2.6. MongoDB implements multiple-reader / single-writer locks.

By using a journal (write-ahead log), MongoDB ensures consistency of the data. With a journal, you first write what you are about to do, then you do it. So if a disk failure occurs while performing an fsync() to the disk, the storage engine doesn’t apply the incomplete update.

See Journaling for more information about the journal in MongoDB.

By default, MongoDB uses Power of 2 Sized Allocations so that every document in MongoDB is stored in a record which contains the document itself and extra space, or padding. Padding allows the document to grow as the result of updates while minimizing the likelihood of reallocation.


WiredTiger is the first pluggable storage engine and brings a few new features to MongoDB:

  • Document-level locking – a good concurrency protocol: you can technically achieve lock-free behavior, and writes can scale with the number of threads (assuming no updates to the same document, or limiting the threads to the number of cores)
  • Compression
  • It lacks some of MMAPv1’s pitfalls
  • Big performance gains

To switch MongoDB to use WiredTiger, simply start mongod with:
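For example (adjust the dbpath to your setup):

    mongod --storageEngine wiredTiger --dbpath /data/db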

Please be aware that your existing MongoDB server should not contain any existing MMAPv1 databases in /data/db/.

WiredTiger stores data on disk in B-trees, similar to the B-trees MMAPv1 uses for indexes. New writes are initially kept separate, performed in unused regions of files, and incorporated later in the background.

During an update, WiredTiger writes a new version of the document rather than overwriting the existing data. So you don’t need to worry about document moves or the padding factor.

WiredTiger provides two caches:

  • WiredTiger Cache (WT Cache) –  half of your RAM (default)
  • File System Cache (FS Cache)

Checkpoints – act as recovery points and handle the “transfer” of data from the WT cache to the FS cache and then to disk. During a checkpoint, data goes from the WT cache to the FS cache and is then flushed to disk. A new checkpoint is initiated 60 s after the end of the last checkpoint. Each checkpoint is a consistent snapshot of your data. During the write of a new checkpoint, the previous checkpoint is still valid. As such, even if MongoDB terminates or encounters an error while writing a new checkpoint, upon restart, MongoDB can recover from the last valid checkpoint.

Compression – since WiredTiger has its own cache, and since the data in the WT cache doesn’t have to be in the same format as in the FS cache, WiredTiger allows three levels of compression (see the example after the list):

  • Snappy (default) – fast
  • zlib – more compression
  • none
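For example, a mongod started with zlib block compression for collection data:

    mongod --storageEngine wiredTiger --wiredTigerCollectionBlockCompressor zlib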

Additional links:




MongoDB – Sharding

Sharding is the process of storing data records across multiple nodes to meet the demands of data growth. MongoDB solves this problem with horizontal scaling, using its sharding mechanism.

Compared with a replica set, there are some changes from the client’s perspective. The client no longer talks to mongod instances directly. Instead, a new component, mongos, is added. Mongos knows where the data is stored after the shard (data partition) and routes the queries to the appropriate mongod processes. Usually mongos runs alongside the application client, or in a very light environment, as its only job is to route queries.

Mongos uses config servers (at least three should be deployed for reliability) to retrieve metadata about the sharded data. Each config server stores the same configuration.

MongoDB 3.2 deprecates the use of three mirrored mongod instances for config servers.

Starting in MongoDB 3.2, config servers for sharded clusters can be deployed as a replica set. The replica set config servers must run the WiredTiger storage engine.

A complete replica set is required for each shard to provide reliability and fail-over at the data level.

The config servers will ensure an even distribution of data across shards.

Shard key – a field (or compound field) in a document that will be used to partition the data.

Shard key selection:

  • the shard key is the common in queries for the collection
  • good “cardinality” / granularity
  • consider compound key shard keys
  • is the key monotonic increasing ? (eg. _id, timestamp)

I’m going to use screen under Xubuntu:
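A hypothetical layout for the first replica set, rs0, with one mongod per screen window (ports and paths are assumptions; 27017 is kept free for mongos):

    mkdir -p /data/rs0-0 /data/rs0-1 /data/rs0-2
    mongod --replSet rs0 --port 27018 --dbpath /data/rs0-0
    mongod --replSet rs0 --port 27019 --dbpath /data/rs0-1
    mongod --replSet rs0 --port 27020 --dbpath /data/rs0-2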

The output should look like below:

Let’s connect to the first node and configure replica set rs0:
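A sketch, matching the hypothetical ports above:

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "localhost:27018" },
        { _id: 1, host: "localhost:27019" },
        { _id: 2, host: "localhost:27020" }
      ]
    })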

The output for rs.status() should look like below:

We’re going to import data into rs0, primary node:
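For example, loading a hypothetical JSON dump:

    mongoimport --host localhost --port 27018 --db shard_test --collection logs --file data.json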

Let’s create the second replica set, rs1, by following the same steps:

The output so far:

Now let’s configure replica set rs1 by connecting to its primary node:

The output is similar to replica set rs0:

Now it’s time to create the config servers:
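A sketch (ports and paths are assumptions), using the pre-3.2 style of three mirrored config servers:

    mkdir -p /data/config-0 /data/config-1 /data/config-2
    mongod --configsvr --port 27119 --dbpath /data/config-0
    mongod --configsvr --port 27120 --dbpath /data/config-1
    mongod --configsvr --port 27121 --dbpath /data/config-2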

The output should be similar to below:

Now it’s time to start the mongos server. This will be used by all clients to query our sharded MongoDB platform.
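For example, pointing mongos at the three hypothetical config servers above:

    mongos --configdb localhost:27119,localhost:27120,localhost:27121 --port 27017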

Before sharding the collection, we create the index for the shard key:
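Via mongos, against the hypothetical shard_test.logs collection:

    db.logs.createIndex({ request_ip: 1, _id: 1 })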

We’re ready to shard our collection by request_ip and _id:
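A sketch, registering both replica sets as shards first (names and ports are assumptions):

    sh.addShard("rs0/localhost:27018")
    sh.addShard("rs1/localhost:27021")
    sh.enableSharding("shard_test")
    sh.shardCollection("shard_test.logs", { request_ip: 1, _id: 1 })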

The initial distribution of data:
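From the mongos shell:

    sh.status()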

At this stage rs0 holds all our data. Try generating some additional data to observe the behavior of the shard.

Best practices on sharding:

  • only shard the big collections
  • pick your shard keys carefully; they aren’t easily changeable (changing one requires creating a new collection and copying the data)
  • consider pre-splitting if doing bulk inserts
  • be aware of monotonically increasing shard keys (e.g. timestamp, _id)
  • adding shards is easy, but isn’t instantaneous
  • always connect to a mongos instance, and use mongod only for some DBA operations when you want to talk directly to a certain server
  • keep non-mongos processes off port 27017 to avoid mistakes
  • use logical server names for config servers instead of IPs and hostnames.




MongoDB – Replication

Replication is the process of synchronizing data across multiple servers.

Replica sets and scaling are used to achieve reliable, high-performance deployments. Replica sets ensure multiple copies of the data are available. They are built using multiple types of MongoDB nodes. Replica sets have odd numbers of members in order to allow the election of a primary node. Write operations go to the primary node; reads can be distributed to the other nodes.

Replication benefits:

  • High Availability – automatic failover
  • Data Safety – durability, extra copies
  • Disaster recovery
  • Scaling (some situations)

Node types:

  • Primary node – writes always go to it
  • Regular node – functions as a secondary node and can take over the role of primary in the event of a failure.
  • Arbiter node – doesn’t keep a copy of the data. It plays a role in the elections that select a primary if the current primary is unavailable.
  • Special purpose nodes – active backup

On regular nodes we can apply restrictions to fix the role of that node (read-only, so the node will never be promoted to primary).

Building a replica set requires installing MongoDB on additional hosts (I recommend an automated provisioning tool, e.g. Vagrant + Ansible) or using a cloud-based solution. Please be aware it’s highly recommended to keep your data folder out of any container, so mapping a host folder to the guest can be a good practice.

Replica Set

Create the replica set configuration document:
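A sketch for a three-node set (hosts and ports are assumptions):

    config = {
      _id: "rs0",
      members: [
        { _id: 0, host: "localhost:27017" },
        { _id: 1, host: "localhost:27018" },
        { _id: 2, host: "localhost:27019" }
      ]
    }
    rs.initiate(config)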

The configuration is shown below:

Verifying if failover works

First we’ll need to confirm that all nodes are running. The status of the replica set can confirm that. Then we’ll remove one server to simulate a failure, and we expect the replica set to elect a new primary and remain responsive.

We’ll connect to one of the running MongoDB instances (e.g. port 27018):
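For example:

    mongo --port 27018
    rs.status()   // shows which member is primary now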

The previously elected primary node shows as “(not reachable/healthy)”.

You can see the second replica node was elected as primary in a few seconds.

Read concern

After you finish the configuration of the replica set and everything is up and running, you can start using it. If you perform an insert (on the primary) via the shell and then try to read the data from one of the secondary nodes, you will probably get an error:

You need to accept eventually inconsistent reads:
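From the secondary’s shell:

    rs.slaveOk()   // allow reads from this secondary, accepting eventual consistency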

Sample exercise

ReadPreference modes – they affect consistency and speed by telling MongoDB how to route read requests among the nodes of a replica set:

  • primary – default; for high data-consistency requirements, all data are read from the primary.
  • primaryPreferred – use a secondary if the primary node is not responding.
  • secondary – go to the secondaries for reads, as the primary needs to be 100% available for writes.
  • secondaryPreferred – secondaries are at the top of the list for reads, but reads go to the primary if no secondary can be reached.
  • nearest – based on network latency (recommended for remote data centers)

You can see a Node.js implementation of ReadPreference by checking reference 4.
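From the mongo shell you can also set it per connection, e.g.:

    db.getMongo().setReadPref("secondaryPreferred")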

Level of Write Concern

The level of write concern instructs MongoDB how to respond to writes by describing the level of acknowledgement requested from MongoDB for write operations against standalone instances, replica sets, or sharded clusters.

The write concern is reflected in the consistency, redundancy, and responsiveness of the entire MongoDB deployment.

Write concern can tell MongoDB to treat persistence operations synchronously or asynchronously. If data-consistency requirements are high, then the write concern should be synchronous (MongoDB will wait until the data is replicated to the nodes).

  • Unacknowledged – async – send the write and immediately move on. It doesn’t even wait for the mongod process to confirm the request.
  • Acknowledged – the default level – the mongo client will wait for the mongod process to confirm the request. It doesn’t mean the mongod process has done anything with the request yet.
  • Journaled – instructs the mongod process to respond only after the write has been written to the journal on disk.
  • Replica acknowledged – instructs the mongod process to respond only after the write has been written to the primary and to one or more nodes of the replica set.

Values for w:

  • 0 – requests no acknowledgment of the write operation
  • 1 – requests acknowledgment from the primary
  • n > 1 – requests acknowledgment that copies of the data reached n nodes (see the sketch below)
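A sketch of a write that waits for two members and the journal, with a hypothetical 5 s timeout:

    db.logs.insert(
      { event: "login" },
      { writeConcern: { w: 2, j: true, wtimeout: 5000 } }
    )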

For large volumes of data, sharding is the preferred method of scaling, as a growing replica set only increases replication traffic.

A single replica set has a limit of 12 nodes (edit: raised to 50 nodes as of MongoDB 3.0).

You can retrieve the errors from each specific host by using: