Back to Github

When I started the Digital Library project I gave the project management services provided by Github a chance, and they failed me. Mainly because I didn't know them and didn't have enough patience to pick up new knowledge; I wanted to move forward. In the following few weeks the project got up to speed, so my goal was accomplished, and there was room in my thoughts to integrate something new. During these few weeks I started to follow Microsoft's open source projects, especially aspnetcore, dotnet/runtime and entityframeworkcore. As I read their tickets and pull requests I slowly understood how they integrate Github and Azure DevOps builds. So I put together a test project and tried the Github–Azure DevOps integration in the most common scenarios I'm going to use. The result is that, finally, I started to use Github as I originally wanted.

One of the reasons I wanted to run my project fully on Github is self-PR. I don't consider being on Github essential to my career, since I'm an engineering manager and not a developer, but being on Github and being able to display some professionalism might still lead to something positive in the future. The other aspect is that making your professional work transparent is a responsibility I have to deal with.

The lessons… First, you have to understand when you are ready to absorb new knowledge; forcing it may result in struggle instead of the progress you seek. You are not always able to integrate something new into your structured knowledge. This ability can be crippled temporarily by other priorities. In my case the need to move forward was more important.

The other lesson is that you have to be able to define what is important for you. The way I wanted to manage my project was based on an earlier experience (14 developers, 4-6 features developed in parallel, feature branches and multiple supported versions). My problem space is way simpler, and I needed some time, and possibly a clear head too, to understand it. Again, the need to move forward clouded my judgment.

Anatomy of implementing a REST Api endpoint – Part Three

This writing is about how Azure DevOps can support software delivery.

So far there is a feature ticket about the Dimension Structures CRUD functionality, and the listing and adding functionalities are already implemented. Let's see how Azure DevOps implements transparency in software delivery.

The Dimension Structure CRUD ticket can display how many chunks (story tickets) this job was split into and the status of these tickets. This way it is possible to track the progress of the implementation without being distracted by lower level, development related information. Besides these you can use the deadline, risk and effort fields to track your progress, but since I'm the only one working on this project I don't use them.


A story, attached to a feature as a child, can show all the delivery related details you need to know to see what happened. In my case the following data is displayed in a story ticket:

  • The branch on which the implementation happened.
  • Build related information. There are two types of build links, one is a plain build link, the other one is “Integrated in”. You can attach multiple of both. I use the first type to mark the build where the full implementation was successfully built on the server. Test results are displayed on the build page. In the case of bigger projects it might be overkill; for me it works fine. I use the “Integrated in” links to mark the build where the particular implementation went into the master branch. Master is the release branch.
  • Tasks for tracking different development related activities, where you can track your time and remaining time. At this level it is also possible to mark a build where the job was completed/integrated. It is useful in the case of a bigger team where multiple developers work on a story. In my case it would be overkill; however, builds are automatically attached to the task.
  • The pull request, which contains all the info needed to review and merge the code change into master or other branches.
  • Test related data. I can't use this part of Azure DevOps because I don't use VS, so I don't know yet how to connect test cases in a dll to test cases in Azure DevOps. However, it is a really powerful feature and increases transparency.

Overall, when I put the delivery manager hat on my head and I want to know the status of the team's deliveries, a well managed board can help a lot, especially when the tool, in this case Azure DevOps, does the majority of the work for you.

The master branch looks like this. Clean and lean…


Feel free to dig into the tickets in my Digital Library project.

Anatomy of implementing a REST Api endpoint – Part Two

As I finished the listing functionality of the grid I started to implement the add method of the endpoint. From the pull request point of view, the listing implementation looks pretty good. The commits lead the reviewer through what happened and why. It is easy to review. But the implementation of the add feature went sideways. I was distracted by other things during development and couldn't pay enough attention to committing the code whenever it was logically right. The result is a few commits and a huge one at the end.

It might seem acceptable to have a pull request like this. In my opinion, it is not. Were I the reviewer, I would have a short discussion with the developer and try to understand why commit discipline wasn't applied.

Let’s take a look at the last commit. It contains implementation on both the client and server side. On the server side it contains changes in multiple places, such as MasterDataHttpClient, controllers, business logic (a kind of repository) and validation. Were I the one reviewing this, I would say a few WTFs, because the way we write code is not about making the computer understand it. The computer understands the code even if it is in a single line without spaces, which is basically impossible for a human being to understand. The way we change the code, the way changes are introduced by commits, the variable names and so on must serve the easy understanding of another human being. In many cases, the reviewer. And the developer who is going to make changes in the code a few months or years later.

Anatomy of implementing a REST Api endpoint – Part One

In my Digital Library project a new Api endpoint is needed. This job is tracked here, and the implementation is happening on this branch. In this post I'd like to dump my decision process during implementation. In recent months I was coding a lot and I recognised a few bad habits.


What is the big picture? Where will this function be used? Currently, it will feed a UI grid for users to manage Dimension Structure entities. Right now there is no plan to publish this endpoint for any other usage.

What is the purpose of this function? It returns a list of Dimension Structure entities. If there are no entities, an empty list is the result. If any error happens on the server side, it returns a Bad Request including the exception.
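A behaviour like this could be sketched as an ASP.NET Core controller action along these lines. This is only an illustrative sketch, not the project's actual code: the controller name, route, and the `IDimensionStructureLogic` service with its `GetAllAsync` method are my assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/dimensionstructures")]
public class DimensionStructuresController : ControllerBase
{
    // Hypothetical business logic service; the real project uses
    // a repository-like layer with different names.
    private readonly IDimensionStructureLogic _logic;

    public DimensionStructuresController(IDimensionStructureLogic logic)
    {
        _logic = logic;
    }

    [HttpGet]
    public async Task<ActionResult<List<DimensionStructure>>> GetAll()
    {
        try
        {
            // Returns an empty list when there are no entities,
            // which serializes to an empty JSON array.
            List<DimensionStructure> result = await _logic.GetAllAsync();
            return Ok(result);
        }
        catch (Exception e)
        {
            // Any server side error results in a 400 Bad Request
            // carrying the exception details, as described above.
            return BadRequest(e.Message);
        }
    }
}
```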

Drafting the Api code, meaning pulling together any necessary code, e.g. infrastructure, service basics, DI or whatever is needed, to have a kind of working implementation. It is required to be able to execute test cases. In this case I had to clean up the codebase a little bit, because there were two domain layers using the same entity (an obvious sign that a new entity type should be introduced) and I had to separate them. The first few commits contain this.

Creating integration tests, or in other words, finding the test level which is the most valuable in this implementation. This point is about finding the balance between fast paced development and quality. There will be another post about this later. In this case there will be only a single test case.

(Note: Since all of the above happened while I was in the best coffee shop in Budapest, I disabled continuous testing to reduce battery usage, meaning the code compiles when I press the compile button and not on every save action. As a result, I realized only a few commits later, when the server build failed after I pushed the code, that moving the test files to another directory caused a compilation error. I should have compiled the code before commit and push. I had to fix this. As a consequence, the compilation error hid a failing test, which also required a fix in order to get a green build. Fix.)

And this is the point where I realised I started the implementation with the wrong function. In order to create test data I need the Add function too. The Add function will be implemented without tests, because even if it does something unfit for the requirement (we don't know the requirement yet), the point is having the data in the database and listing it. Luckily, the Dimension Structure entity doesn't have anything complicated which might have an effect on the listing logic. So, this way of doing things is feasible enough.

Implementation. The implementation is done, and it contains changes on many levels of the codebase, such as interfaces, the business logic layer (a kind of repository layer), the http client, tests, and so on. You can review the changes between commit 3d1cd9ab and c82e3772. Even in a little codebase like this, introducing a new Api function causes a fair amount of impact in multiple places.

There are lessons from this phase. First, when I have continuous testing enabled in Rider, sometimes the dlls get cached and, as a consequence, issues remain hidden. Once a full build is done, these issues can be discovered. To prevent bugs going up to origin by push, I have to do a full “clean solution” ==> “rebuild” ==> “execute all tests” round to be sure the code is ok and the server build is not going to fail. Once I have enough experience with this caching-like phenomenon, I'll put together a solution for JetBrains and file a bug.

The other lesson also comes from continuous testing. I use the “Cover new and outdated tests” option, meaning that not all tests are executed on save. On one hand this is one of the fastest continuous testing modes; on the other hand it hides test parallelization issues. A “clean solution” ==> “rebuild” ==> “execute all tests” round discovers these.

A minimal grid implementation which can list Dimension Structure entities. Nothing special here. A little HTML with Blazor Server Side goodness and that is all.
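A minimal Blazor Server grid for this could look roughly like the sketch below. The page route, the property names on the entity, and the `GetDimensionStructuresAsync` method on the injected http client are all my assumptions for illustration; only `MasterDataHttpClient` and the `DimensionStructure` entity come from the posts above.

```razor
@* Hypothetical sketch of the listing grid; not the project's actual markup. *@
@page "/dimensionstructures"
@inject MasterDataHttpClient Client

<table>
    <thead>
        <tr><th>Name</th><th>Description</th></tr>
    </thead>
    <tbody>
        @foreach (var item in _items)
        {
            <tr><td>@item.Name</td><td>@item.Description</td></tr>
        }
    </tbody>
</table>

@code {
    private List<DimensionStructure> _items = new List<DimensionStructure>();

    protected override async Task OnInitializedAsync()
    {
        // Load the entities once when the component is initialized;
        // an empty result simply renders an empty table body.
        _items = await Client.GetDimensionStructuresAsync();
    }
}
```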

You can see all the changes on this page, where the commits and the tags created by successful builds are displayed in an easy to understand way. You take a look at it and can see what happened. No time is spent on parsing and collecting the information, because it has been done by the tool. You can focus on making decisions.

The other nice feature of Azure DevOps is that the ticket tracking this work has all the information, so it is easy to track what happened. As a manager responsible for delivery, I find the high level of traceability and transparency provided by this tool really valuable. But, again, this is my preference in doing software delivery, and it is not an MS sponsored post.

Why Azure DevOps and not GitHub?

The reason why I use Azure DevOps for the Digital Library project instead of Github, which is the kind of default place to host an open source project, is that I know Github less than Azure DevOps. I used the latter for more than 5 years at Dealogic. At first, I thought that publishing my project and running it on Github was a great opportunity to get to know Github better, but I faced a question I couldn't find an answer to, and I decided to move everything back to Azure DevOps. The case will be described later.

This is not a feature comparison.

I needed version management, build pipelines, some project management capabilities and a wiki where additional information can be published. Both systems can provide these. I already have an Azure DevOps account, and my VPS machine is used as a build server. I don't have any special setup requiring special build capabilities; I simply already had this machine for other reasons, and running out of free build minutes occurred once, so I started using it as a build server. It is provided by Contabo.

Azure DevOps builds can be easily connected to a Github repo. A few clicks only.

Every push triggered a build against the given branch. When a build was successful, the source code was tagged with the build number. This way of tagging resulted in a release on Github, which was something I didn't want. I asked on Stackoverflow how it can be disabled, but there is no answer so far. The documentation doesn't help at all: no conceptual explanation of Github processes and ways of doing things. Since deleting all releases/tags manually whenever there is a new build is not sustainable, I ditched Github.
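The behaviour described above roughly corresponds to a pipeline like the following hypothetical azure-pipelines.yml sketch. The solution file name and the explicit tagging script are my assumptions; in a classic (non-YAML) build definition the same tagging comes from the built-in “Tag sources: On success” option, which is why it is not obvious where to switch it off.

```yaml
# Hypothetical sketch; the real build may use the classic editor's
# "Tag sources: On success" option instead of an explicit script.
trigger:
  branches:
    include:
      - '*'        # every push triggers a build against its branch

steps:
  - script: dotnet build DigitalLibrary.sln
    displayName: Build solution

  - script: |
      # Tag the commit with the build number on success. On a Github
      # repo these tags show up on the Releases page, which is the
      # behaviour described above.
      git tag "$(Build.BuildNumber)"
      git push origin "$(Build.BuildNumber)"
    displayName: Tag sources with build number
    condition: succeeded()
```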

Why did I ditch Github instantly? Firstly, I already had a solution for the problem, so I wasn't forced to figure out how Github works in this case. On the other hand, I want to spend my time working on my project, not dealing with infrastructure related questions and, eventually, making a compromise I don't want.

Also, Azure DevOps can already have public projects, and the authorization is kind of ok.

Digital Library Project

I moved all the code of my Digital Library project to a public Azure DevOps project, and it will be the subject of a few future articles. I'm going to discuss topics around software delivery in these articles.

The Digital Library Project is an idea about how it is possible to manage a huge amount of documents and the relations between them. It will be a wiki on drugs. Once it is done.

The story of this idea is that somewhere in the late '90s I read The Pinball Effect by James Burke, and it ignited my thoughts at the time. But I couldn't do anything with those thoughts, as back then I didn't know anything about information science or programming. A few years later I studied library science with information technology, and I got some insight into how information can be structured, stored and managed in multiple ways. It was kind of a solution for my thoughts from a few years earlier. So, I went deeper into programming and other related topics, and I ended up in information technology as a tester and later a manager.

In later articles I'm going to go deeper into the details.

Why I like MacBook Pro’s Touch Bar?

There is quite a fuss around the MacBook Pro's Touch Bar. A lot of people hate it, or at least have a negative opinion of it.

I have been using MBPs for almost two years. It took a while to adjust my Windows and Linux based habits to OSX, and it didn't go well at first. After more than a year of using the MBP I went back to a Dell Precision 5530, which was a nightmare. But that is a story for another time.

You might need to know that I'm a hobby programmer who mainly uses JetBrains products and the command line. The only other things I use are a browser, Microsoft Office products and VLC. Occasionally, I edit small videos.

The Touch Bar. At first, I couldn't see any use for it. I understood that I can easily adjust the volume, screen brightness and things like that. Fine. It seemed a much better solution than dealing with Fn keys. A few months later I discovered that JetBrains products can utilise the Touch Bar very well.

Debugging. When I use Windows, the function keys (F1-F12) are utilized for different debug actions like step in, step over, etc. There is a common layout, but, honestly, I never remember it. Being in development mode is a different mindset. In my case, being focused on the code's behaviour makes it impossible to remember anything else. The last thing I need is to spend even a little brain capacity on remembering which function key does what. When I debug on the MBP using Rider, the icons are displayed on the Touch Bar, which means I don't have to remember which key does which function. Icons, graphical representations of functions, work better for me.

The other thing I really like is that the Touch Bar content changes depending on which modifier key is pressed: Shift, Control, Alt or Command. See the context section on the linked JetBrains page. You can bind 3-5 functions to every key context, which means 10-15 functions can be used easily, with only two buttons needing to be pressed. Pressing only two buttons is important for me; I personally don't like 3 button, or primary-secondary, keystrokes.

Colonel David H. Hackworth: About Face

Why did I start to read this book?

I heard about this book for the first time from Jocko in his second podcast. This is the book he has read many times. The question automatically comes up: why does Jocko open this book multiple times? What is in it?


Patience is a virtue. Hackworth always wanted to be an infantry leader, and he had to wait many years after his years in Korea to get such an assignment. He just did his best during those years as an AAA officer, and the units under his command became excellent.

If you don't know how, or just don't want, to play politics in your career, giving your best and the results that come with it will be the differentiators.

You have to have a good relationship with your people. You have to serve them as a leader. Serve them with excellent training and high expectations, and by making their life a bit better.

You always have to keep learning about leadership. There is no point where you know everything about it and you are simply good. There is always room for improvement.

Who is David H. Hackworth?

Wikipedia, but it is worth reading the book to get some insight into his thoughts and basic values.

Some sentences from the book displaying its huge value:

“I prayed all the time. But early on, I’d made a pact with myself: it was never Dear God, please look after me; it was always Dear God, please look after my men and make sure that no one gets killed.”