This is the last part of our series about trackr. In the last two blog posts we’ve shown you the tools and frameworks we used to build the backend as well as the frontend. If you missed the previous posts, you might want to read them now to catch up.

In this post we will highlight the overall development process and give some insights into the tools we’ve used.

Development Process

We used Confluence to outline our requirements. It’s a great tool for extensive specifications and it adds collaboration capabilities, so everyone can see and comment on the requirements until everything is defined. It is also a good place for wireframes.

Based on the specification we created smaller user stories that were ready to be implemented. For the development process we of course wanted to follow an agile approach. Since we weren’t able to adhere to a fixed time frame, we decided to go with Kanban rather than Scrum. Within JIRA we set up an agile board to keep track of the progress.

Unsurprisingly, we used Git as our SCM. Our only central repository is on GitHub. Since our team is still pretty small, we have not yet adopted a very elaborate workflow. We follow the principle of having a master branch that always points to the latest production release, a development branch for ready features, and (possibly local) feature branches. We did not commit any of the frontend or backend dependencies to Git, although after the trouble we had with Bower, committing them might have been a good idea after all.

Other than that I don’t think there’s much to say about trackr and Git - once you’re a little proficient with it, it won’t get in your way.
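The branching model boils down to a handful of Git commands. Here is a minimal, self-contained sketch in a throwaway repository; branch names and commit messages are made up for illustration:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "initial commit"
git branch -M master                  # master always points to the latest production release
git checkout -q -b develop            # develop collects features that are ready
git checkout -q -b feature/demo       # (possibly local) feature branch; name is made up
git commit -q --allow-empty -m "feature work"
git checkout -q develop
git merge -q --no-ff -m "merge feature" feature/demo   # finished feature lands on develop
git checkout -q master
git merge -q develop                  # release: fast-forward master to the develop state
```

Since master never receives commits directly, the final merge is always a clean fast-forward.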

Continuous Integration

We wanted a continuous integration approach for trackr. Whenever a feature is merged into the development branch, the build server should check out the changes, run all tests, build the artifact and deploy it to our test system. We wanted to try something other than Jenkins and decided to use TeamCity by JetBrains.

After giving TeamCity access to the source code, it detected the build.gradle file and immediately proposed a Gradle build. We only had to set the tasks and TeamCity was good to go. Since we use the Gradle wrapper, we didn’t have to install Gradle on the build server. The frontend needs Grunt, Karma and Bower, but these can simply be put on the PATH within the build configuration. All other branches are only tested, not built.
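Conceptually, the build step comes down to something like the following. This is a sketch of the build configuration, not our actual setup; the Gradle task names and Grunt target are assumptions:

```shell
# TeamCity build step (sketch; task names are assumptions)
./gradlew clean test build         # wrapper: no Gradle installation needed on the agent
npm install -g bower grunt-cli     # frontend tools, put on the PATH in the build config
bower install                      # fetch frontend dependencies (not committed to Git)
grunt build                        # assemble the frontend artifacts
```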

With a neat little plugin for Karma, even the JavaScript tests can be reported to TeamCity.

The deployment was done via Gradle and worked seamlessly, too.

TeamCity also integrates nicely with IntelliJ IDEA. You can subscribe to builds and get notifications about them. If, for example, a build fails with an exception, it even sends the stack trace right to the IDE. Theoretically, remote debugging should work, but we couldn’t get it to run. Also, TeamCity and JIRA are connected to our GitHub repository, so we could always put the issue number in the commit message and both tools would pick it up and display additional information. All in all we didn’t push TeamCity to its limits, but it served us pretty well.
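The commit convention is nothing more than putting the JIRA issue key at the start of the message; TeamCity and JIRA then link the commit to the issue. A tiny demo in a throwaway repository, with a made-up issue key:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name dev
# TeamCity and JIRA scan commit messages for keys like TRACKR-42 (hypothetical key)
git commit -q --allow-empty -m "TRACKR-42 show overtime in the employee report"
git log -1 --pretty=%s
```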

For JavaScript we already have JSHint, which covers some code quality checking; for Java we decided to use SonarQube. Java 8 support only landed in the week of March 24, 2014, so we had to wait until then. Only one plugin, the general Java plugin, got Java 8 support; PMD is still not working. Most of the interesting Java checks are probably in the PMD plugin, so in the end we didn’t get very much out of Sonar.

Thanks to our continuous integration setup, our test system always reflects the most recent state of the application. Everyone can log in and experience the most recent features for themselves.

But to generate short, easy-to-share movies, I started creating small animated GIF files: QuickTime Player on OS X records a portion of the screen, ffmpeg converts the movie file to GIF, and ImageMagick’s convert optimizes it for size. With WebM (hopefully) around the corner, the following commands have been used.

We shared the videos either on imgur or embedded them in our own websites with the HTML5 video tag.

ffmpeg + convert
# convert the screen recording to a 15 fps GIF
ffmpeg -i movie.mov -r 15 movie.gif
# let ImageMagick optimize the GIF frames for size
convert movie.gif -layers Optimize movie_optimized.gif
# or webm, assuming the libvpx-720p preset is present
ffmpeg -i movie.mov -vpre libvpx-720p -b 2500k -an -f webm -y movie.webm

Database Migrations

We are using Flyway to manage our database changes in a consistent manner. With Flyway we can supply SQL scripts for all relevant schema changes and Flyway takes care of applying them, which helps keep different environments in sync. The scripts are numbered sequentially so Flyway can figure out the order in which they must be applied. It also keeps track of the most recently applied script so that only newer scripts are applied.
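The sequential numbering is driven purely by the file names: a version prefix, two underscores, then a description. A sketch of what such a migration directory could look like; the directory path follows Flyway's default convention, and the table definitions are invented for illustration:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
mkdir -p db/migration                       # Flyway's default migration location
cat > db/migration/V1__create_employee.sql <<'SQL'
CREATE TABLE employee (id BIGINT PRIMARY KEY, name VARCHAR(255) NOT NULL);
SQL
cat > db/migration/V2__add_vacation.sql <<'SQL'
CREATE TABLE vacation (id BIGINT PRIMARY KEY, employee_id BIGINT REFERENCES employee (id));
SQL
# Flyway applies V1 before V2 and records the latest applied version in its metadata table
ls db/migration
```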

We still do the actual deployment manually. While this works for us, it is prone to errors (e.g. someone forgets to run Flyway prior to deployment). As a consequence we are working on a tool to automate deployments to our environments. A first version already exists and we will open source it as soon as it has reached a stable state.

So stay tuned!