Code Metrics Viewer 2 CTP

I published a pre-release version of the Code Metrics Viewer 2 extension targeting Visual Studio 2012. It's the continuation of my first contribution, but it works completely differently. The first version was just a user interface that integrated the Code Metrics Power Tool 10.0 into the development environment; the new version brings its own calculation functionality – and instead of analyzing IL, it operates on the source code. The current release supports metric calculation for C# projects; support for Visual Basic is planned.

The extension depends on the Microsoft "Roslyn" CTP (September 2012, v1.2), which needs to be downloaded and installed separately. Because I wasn't sure whether I am allowed to distribute the Roslyn binaries, I decided to exclude them from the extension package and defined a dependency on the Roslyn components instead. The Extension Manager will show the following dialog if the required package dependency is missing.

[Screenshot: roslyn-alert – the Extension Manager dialog reporting the missing Roslyn dependency]

The download can be found at the Visual Studio Developer Center (http://bit.ly/YujBX8) or on NuGet (http://bit.ly/WheTeO). If the components are removed after the extension has been installed, metric results cannot be calculated and the following error message will be shown: "The calculation of metric results has failed. Couldn't find the Roslyn CTP components."

[Screenshot: missing-roslyn – the error message shown when the Roslyn components are missing]

How to export reports to Excel

Sometimes it can be helpful to export the result data to Excel – to work with the calculated data using common reporting tools, to prepare data for a presentation, or simply to show and explain metrics to people with a less technical background. The latest update (version 1.5.0) allows saving reports as Excel 2007/2010 compatible worksheets. Exported documents contain all the information available in the user interface. In addition, default column filters are applied, and cells with calculated values are colored depending on the metric, the scope, and the current value. Rows are grouped by module, namespace, and type, so the document lets you navigate through the hierarchy very quickly. There is no special "Export" button in the UI – just select the Excel file format from the filter list in the "Save file" dialog to save the report as an Excel document.

Is it possible to compare metric results?

Yes, it is! The latest version of the extension has basic comparison functionality. I thought this could be very helpful when using the extension during code reviews and/or refactoring. When I used the tool while refactoring, I found it hard to see whether values were getting better or worse. So I thought it would be nice to load a previously calculated report and compare its data against the latest results, in order to show some kind of "trend" within the grid. This is one of the features I wanted to postpone until the end of this summer because there is still some work to do… in other words, it's not perfect yet :-) In some situations the comparison might not work…

I calculated metrics for a very simple console application… Here you can see the Main() method of the static Program class, which has a maintainability index of 57. Of course, this is still okay, but it could be better…

So, I did some refactoring on the Program class… I extracted two new methods named ExtractPublicKey() and WriteKeyFile(), recompiled the assembly, and calculated the metrics again…
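
The actual code isn't shown in this post, so here is a hypothetical sketch of that extract-method refactoring. Only the method names ExtractPublicKey() and WriteKeyFile() come from my example – the signatures, types, and bodies are invented for illustration:

```csharp
using System.IO;
using System.Security.Cryptography;

static class Program
{
    // Before the refactoring, Main() did everything inline, which hurt its
    // maintainability index. After it, each concern lives in its own method.
    static void Main(string[] args)
    {
        byte[] publicKey = ExtractPublicKey(args[0]);
        WriteKeyFile(args[1], publicKey);
    }

    // Reads a key file and returns only the public key blob (illustrative).
    static byte[] ExtractPublicKey(string keyFilePath)
    {
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.ImportCspBlob(File.ReadAllBytes(keyFilePath));
            return rsa.ExportCspBlob(false); // false = export public part only
        }
    }

    // Persists the extracted public key to disk (illustrative).
    static void WriteKeyFile(string outputPath, byte[] publicKey)
    {
        File.WriteAllBytes(outputPath, publicKey);
    }
}
```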

Now I can use the compare feature to see whether I made "good" progress :-) As you can see in the screenshot, there is a new button named "Compare…" in the toolbar. If you press that button, a file dialog shows up where you can select the candidate report for the comparison. If you want to work with the compare feature, I recommend enabling the auto-save functionality in the options dialog – that way, a new report is stored in the solution folder each time you calculate code metrics.

In my example, I chose the report that was calculated before I did the refactoring – and I can see that the maintainability index increased by 7.

By the way… if you like the tool, I would be glad if you rated it with five stars in the Visual Studio Gallery :-)

How to interpret the metric results?

Okay, let's face it… calculating numbers is one thing. Getting a feeling for what the numbers are telling us is another. The power tool calculates five metrics: the maintainability index, cyclomatic complexity, depth of inheritance, class coupling, and lines of code. But what do the numbers really mean? Well, I would say there is no single answer to that question. There used to be a very good description of how to interpret the results at Vitaly's WebLog, but it seems that blog post is no longer available, so I'll try to shed some light myself…

Lines of Code

Let's start with one of the most controversial metrics (in my opinion): the (effective) lines of code metric. This metric is calculated at method level and depends on the IL code generated by the compiler. Sometimes you might just wonder about that number and have the feeling that the metric result is wrong. Indeed, the result might differ slightly from what you actually wrote in source code, or from what an individual developer would treat as an effective line of code (for instance, comments are not counted by this metric). Anyway, those slight differences don't matter… this metric can be used as an indicator for huge methods (those with more than 20 lines of code), because huge methods often fulfil multiple purposes or satisfy different concerns, which makes it hard to apply changes or to provide tests. Code Metrics Viewer rates this metric value the following way: 1-10 lines are good (green), 11-20 lines are still okay (yellow), and everything above 20 lines is critical (red) and should be reviewed and possibly refactored into smaller functions.
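
As a rough illustration of what counts – the exact counting rules aren't spelled out here, so treat the comments in this sketch of mine as an approximation:

```csharp
// Roughly how "effective" lines of code are counted: comments, braces,
// and blank lines do not count; executable statements do.
static int SumOfSquares(int[] values)
{
    // This comment is not counted.
    int sum = 0;                    // counted: declaration with initializer

    foreach (int value in values)   // counted: loop statement
    {
        sum += value * value;       // counted: assignment
    }

    return sum;                     // counted: return statement
}
// Roughly four effective lines - well inside the green range (1-10).
```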

Class Coupling

This metric can be used as an indicator of how evolvable a method, a class, or even a whole assembly actually is. It is calculated for each level and represents the number of distinct types (except built-in language types) being used by a method, class, and so on. Lower values are better. Code Metrics Viewer rates this metric value the following way: 0-9 dependencies are good (green), 10-30 dependencies (on member level) and 10-80 dependencies (on type level) are still okay (yellow), and more than 30 dependencies (on member level) or more than 80 dependencies (on type level) are critical (red) and should be reviewed and possibly refactored.
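
To make the counting rule tangible, here is a small sketch of my own (not taken from the tool's documentation) of how the distinct types used by a single method add up:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text;

class ReportWriter
{
    // Distinct coupled types used below: List<string>, StringBuilder,
    // FileStream, FileMode, StreamWriter. Built-in types like string
    // and int are excluded from the count.
    public void Write(List<string> lines, string path)
    {
        var builder = new StringBuilder();
        foreach (string line in lines)
            builder.AppendLine(line);

        using (var stream = new FileStream(path, FileMode.Create))
        using (var writer = new StreamWriter(stream))
            writer.Write(builder.ToString());
    }
}
// Roughly five coupled types - comfortably in the green range (0-9).
```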

Depth of Inheritance

The depth of inheritance metric indicates the number of types within the inheritance chain (the total number of base classes). Lower values are better, because the deeper the chain, the tougher it can be to follow the flow of the code while debugging or analyzing it. Code Metrics Viewer rates the metric value the following way: 1-2 base types are good (green), 3-4 base types are still okay (yellow), and everything above 4 is critical (red) and should be reviewed and possibly refactored.
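
A quick sketch of my own to illustrate how the chain grows (every C# class implicitly derives from System.Object, which I assume counts as the one base class of a top-level class):

```csharp
class Vehicle { }                 // depth 1 (green): Object -> Vehicle
class Car : Vehicle { }           // depth 2 (green)
class SportsCar : Car { }         // depth 3 (yellow)
class RaceCar : SportsCar { }     // depth 4 (yellow)
class FormulaOneCar : RaceCar { } // depth 5 (red): review recommended
```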

Cyclomatic Complexity

This metric is calculated at method level and indicates the total number of independent branches in the method's control-flow graph. The value increases with the number of logical expressions that can change the control flow (if, switch/case, for, and while statements). A method that does not contain any control-flow statements has a cyclomatic complexity of one, which means there is only a single branch. Code Metrics Viewer rates the metric value the following way: 1-10 branches are good (green), 11-20 branches are still okay (yellow), and more than 20 branches are critical (red) and should be reviewed and possibly refactored. The cyclomatic complexity metric is quite important because it can be seen as "the minimum number of tests required" to cover all branches… on the other hand, it can be used to unveil code that is hard (or impossible) to test.
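
To see the counting in action, here is a small sketch of mine – start at one for the single entry path and add one for each decision point:

```csharp
static string Classify(int value, bool strict)
{
    if (value < 0)                      // +1 -> 2
        return "negative";

    for (int i = 0; i < value; i++)     // +1 -> 3
    {
        if (strict && i == 0)           // +1 -> 4 (a short-circuit && may count as one more)
            return "strict";
    }

    switch (value % 2)                  // each case label adds a branch
    {
        case 0:                         // +1 -> 5
            return "even";
        default:
            return "odd";
    }
}
```

Roughly five branches (six, if the short-circuit operator is counted separately) – which also means at least five test cases are needed to cover every path.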

Maintainability Index

The maintainability index metric can be used as an overall quality indicator, even though not all of the other provided metrics are taken into account when calculating it. Actually, only the cyclomatic complexity and lines of code results are used directly – plus some other metric values that are not exposed by the Code Metrics Power Tool. Those "hidden" values are the Halstead complexity measures, of which only the Halstead volume is used for the calculation of the maintainability index (of course, the class coupling has an impact on the Halstead volume, as do the operators and operands that are used). Result values range from 0 to 100, where larger values indicate higher (better) maintainability. Code Metrics Viewer rates the metric value the following way: 100-20 is good (green), 19-10 is still okay (yellow), and 9-0 is critical (red) – but I usually review everything with a value lower than 50.
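
For reference, the formula commonly cited for Visual Studio's maintainability index (which I assume the power tool follows) rescales the classic maintainability index to the 0-100 range:

```latex
\mathit{MI} = \max\left(0,\ \frac{100}{171}\left(171 - 5.2\,\ln V - 0.23\,\mathit{CC} - 16.2\,\ln L\right)\right)
```

Here V is the Halstead volume, CC the cyclomatic complexity, and L the lines of code – which matches the statement above that only these values enter the calculation directly.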

How can I calculate code metrics?

After a solution has been loaded and successfully built, it can be analyzed by pressing the "Analyze Solution" button. The Code Metrics Viewer will utilize the power tool to create a code metrics report for each assembly in the solution. Depending on the solution size, this can take a while… The results will be shown in the grid.