Is it possible to compare metric results?

Yes, it is! The latest version of the extension has basic comparison functionality. I thought this could be very helpful when using the extension during code reviews and/or refactoring. When I used the tool during refactoring myself, I found it hard to see whether values were getting better or worse. So I thought it would be nice to load a previously calculated report and compare its data against the latest results, to be able to show some kind of “trend” within the grid. This is one of the features I actually wanted to postpone until the end of this summer, because there is still some work to do… in other words, it's not perfect yet :-) In some situations the comparison might not work…

I calculated metrics for a very simple console application… Here you can see the Main() method of the static Program class, which has a maintainability index of 57. Of course, this is still okay, but it could be better…

So, I made some refactorings to the Program class… I created two new methods named ExtractPublicKey() and WriteKeyFile(), recompiled the assembly and calculated the metrics again…
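The original sources are not part of this post, but the shape of the refactoring looked roughly like this (the method bodies are made up for illustration; only the structure matters here):

using System;

static class Program
{
    static void Main(string[] args)
    {
        // After the refactoring, Main() only coordinates the work,
        // which keeps its complexity and line count low.
        string publicKey = ExtractPublicKey(args[0]);
        WriteKeyFile(args[1], publicKey);
    }

    // Extracted method: isolates reading the key material.
    static string ExtractPublicKey(string assemblyPath)
    {
        byte[] key = System.Reflection.AssemblyName
            .GetAssemblyName(assemblyPath)
            .GetPublicKey();
        return Convert.ToBase64String(key);
    }

    // Extracted method: isolates writing the output file.
    static void WriteKeyFile(string path, string publicKey)
    {
        System.IO.File.WriteAllText(path, publicKey);
    }
}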

Now I can use the compare feature to see if I made some “good” progress :-) As you can see in the screenshot, there is a new button named “Compare…” in the toolbar. If you press that button, a file dialog will show up where you can select the candidate report for comparison. If you want to work with the compare feature, I recommend enabling the auto-save functionality in the options dialog – that way, a new report will be stored in the solution folder each time you calculate code metrics.

In my example I have chosen the report that was calculated before I did the refactorings – and I can see that the maintainability index increased by 7.

By the way… if you like the tool, I would be glad if you rated it with five stars in the Visual Studio Gallery :-)

Calculated results can be kept for further analysis

Like the test tools of Visual Studio Professional, the extension stores a compound report containing all calculated results of the solution in a folder named “MetricResults” below the solution directory. The output path can be changed on the “Reports” options page; it can be either a relative or an absolute path, but Visual Studio macros are not supported. The “Auto-Save” feature is enabled by default, so each time a report is calculated it will be stored in the configured output directory. The number of automatically saved reports can be limited so that the extension only keeps the latest results – the default value is 25. If you set the limit to zero, all reports will be kept; nothing will be deleted.

The /directory option is now supported

If you have missed support for the /directory option of the Power Tool, then the latest version (v1.3.7) might make you happy. Code Metrics Viewer now allows you to specify a directory location which the Power Tool can use to search for assembly dependencies.

The option can be enabled per project; all you need to do is add a custom property to the project file. You should already be familiar with the basics of MSBuild, because I will not explain them here… Open the project file in the editor (or Notepad), add the following property group to the contents of your project file(s) and customize the path.


<PropertyGroup>
  <CodeMetricsDependencyDirectory Condition="'$(CodeMetricsDependencyDirectory)' == ''">Your path goes here</CodeMetricsDependencyDirectory>
</PropertyGroup>

That's it. Code Metrics Viewer will look for that property when you analyze the solution.
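For example, assuming the referenced assemblies live in a folder named C:\ThirdPartyAssemblies (a made-up location), the customized property would look like this:

<PropertyGroup>
  <CodeMetricsDependencyDirectory Condition="'$(CodeMetricsDependencyDirectory)' == ''">C:\ThirdPartyAssemblies</CodeMetricsDependencyDirectory>
</PropertyGroup>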

How to interpret the metric results?

Okay, let's face it… calculating numbers is one thing; getting a feeling for what the numbers are telling us is another. The Power Tool calculates five metrics: the maintainability index, cyclomatic complexity, depth of inheritance, class coupling and lines of code. But what do the numbers really mean? Well, I would say there is no unique answer to that question, but there used to be a very good description of how to interpret the results at Vitaly's WebLog. It seems that blog post is no longer available, so I'll try to shed some light myself…

Lines of Code

Let's start with one of the most controversial metrics (in my opinion): the (effective) lines of code metric. This metric is calculated on method level and depends on the IL code that is generated by the compiler. Sometimes you might be puzzled by that number and have the feeling that the metric result is wrong. Indeed, the result might differ slightly from what you've actually written in source code, or from what an individual developer would treat as an effective line of code (for instance, comments are not counted by this metric). Anyway, those slight differences don't matter… this metric can be used as an indicator for huge methods (having more than 20 lines of code), because huge methods often fulfill multiple purposes or satisfy different concerns, which makes it hard to apply changes or provide tests. Code Metrics Viewer rates this metric value the following way: 1-10 lines is good (green), 11-20 lines is still okay (yellow), everything above 20 lines is critical (red) and should be reviewed and possibly refactored into smaller functions.
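To illustrate (a made-up example; the exact value depends on the IL the compiler generates):

static int SumOfSquares(int[] values)
{
    // Comments, blank lines and braces are not counted by the metric;
    // roughly, only the effective statements below contribute, so this
    // method scores far fewer "lines" than it spans in the editor.
    int sum = 0;

    foreach (int value in values)
    {
        sum += value * value;
    }

    return sum;
}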

Class Coupling

This metric can be used as an indicator of how evolvable a function, a class, or even a whole assembly actually is. It is calculated on each level and represents the number of types (except built-in language types) being used by a method, class, etc. Lower values are better. Code Metrics Viewer rates this metric value the following way: 0-9 dependencies is good (green), 10-30 dependencies (on member level) and 10-80 dependencies (on type level) are still okay (yellow), more than 30 dependencies (on member level) and more than 80 dependencies (on type level) are critical (red) and should be reviewed and possibly refactored.
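For instance (a made-up example; the tool counts distinct types, so the exact number may differ slightly from a manual count):

using System;
using System.Globalization;
using System.Text;

static class ReportFormatter
{
    // This method uses StringBuilder, DateTime and CultureInfo -
    // three distinct types besides built-in language types -
    // so its class coupling would be around 3 (green).
    public static string FormatHeader(string title)
    {
        var builder = new StringBuilder();
        builder.Append(title);
        builder.Append(" - ");
        builder.Append(DateTime.Now.ToString(CultureInfo.InvariantCulture));
        return builder.ToString();
    }
}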

Depth of Inheritance

The depth of inheritance metric indicates the number of types within the inheritance chain (the total number of base classes). Lower values are better, because the more base classes there are, the tougher it can be to follow the flow of the code when debugging or analyzing. Code Metrics Viewer rates the metric value the following way: 1-2 base types is good (green), 3-4 base types is still okay (yellow), everything above 4 is critical (red) and should be reviewed and possibly refactored.
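A quick made-up example (as far as I know, the chain is counted up to System.Object, so a class deriving directly from Object has a depth of 1):

class Vehicle { }           // depth of inheritance: 1 (derives from Object)
class Car : Vehicle { }     // depth of inheritance: 2
class SportsCar : Car { }   // depth of inheritance: 3 (already yellow)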

Cyclomatic Complexity

This metric is calculated on method level and indicates the total number of independent branches of the method's control-flow graph. The value increases with the number of logical expressions that can change the control flow (if, switch/case, for and while statements). A method that does not contain any control-flow statements has a cyclomatic complexity of one, which means there's only a single branch. Code Metrics Viewer rates the metric value the following way: 1-10 branches is good (green), 11-20 branches is still okay (yellow), more than 20 branches is critical (red) and should be reviewed and possibly refactored. The cyclomatic complexity metric is quite important, because it can be seen as “the minimum number of tests required” in order to cover all branches… on the other hand, it can be used to unveil code that is hard (or impossible) to test.
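A simplified example (the Power Tool's exact counting rules may differ slightly):

// Cyclomatic complexity of 4: one default path
// plus three decision points (two ifs and one loop).
static int CountPositives(int[] values)
{
    if (values == null)            // +1
        return 0;

    int count = 0;
    foreach (int value in values)  // +1
    {
        if (value > 0)             // +1
            count++;
    }
    return count;
}

So at least four test cases would be needed to cover all branches of this method.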

Maintainability Index

The maintainability index metric can be used as an overall quality indicator, even though not all of the other provided metrics are taken into account when calculating it. Actually, only the cyclomatic complexity and lines of code results are used directly – plus some other metric values that are not exposed by the Code Metrics Power Tool. Those “hidden” values are called Halstead complexity measures, whereby only the Halstead volume is used for the calculation of the maintainability index (of course, class coupling has an impact on the Halstead volume, as do the used operators and operands). Result values are between 0 and 100, where larger values indicate higher (better) maintainability. Code Metrics Viewer rates the metric value the following way: 100-20 is good (green), 19-10 is still okay (yellow), 9-0 is critical (red), but I usually review everything that has a value lower than 50.
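For reference, the formula that Microsoft has published for the maintainability index is, as far as I know:

Maintainability Index = MAX(0, (171 - 5.2 * ln(Halstead Volume) - 0.23 * Cyclomatic Complexity - 16.2 * ln(Lines of Code)) * 100 / 171)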

How to get results for code behind XAML files

In June, a user reported a problem where he did not see any results for code behind XAML files. I dove into the problem and figured out that code behind XAML files is handled by the Power Tool like generated code, which I had excluded by default using the /igc switch. The latest version of the tool allows you to take control of that switch; if you want to calculate code metrics for generated code, you have to make sure that the /igc option is disabled.
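Under the hood this simply decides whether the switch is passed on to the Power Tool; conceptually it makes this difference (the file names are just placeholders):

Metrics.exe /f:MyWpfApp.exe /o:MyWpfApp.metrics.xml /igc   <- generated code (e.g. code behind XAML) is ignored
Metrics.exe /f:MyWpfApp.exe /o:MyWpfApp.metrics.xml        <- generated code is included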

How can I calculate code metrics?

After a solution has been loaded and successfully built, it can be analyzed by pressing the “Analyze Solution” button. Code Metrics Viewer will utilize the Power Tool to create the code metrics report for each assembly in the solution. Depending on the solution size, this can take a while… The results will be shown in the grid.