
NDepend - Why you should try it

These days I had some time to play with NDepend, and I was impressed by the kind of information you can extract from your current project.
NDepend can extract a lot of metrics; when you analyze your project for the first time, you will not know which metrics to take into account – there are so many (and this is a good thing).

Code Quality and Code Metrics 
NDepend does not only calculate different metrics of your code, it is also able to give you feedback related to them. Different combinations of metrics will trigger alerts and warnings. This is very useful when you want to improve your code and go directly to the problem. Basically, NDepend has a predefined list of code quality rules that are run over your solution.
For example, NDepend is able to detect and display the methods that are too complex (cyclomatic complexity too high – over 40). In these cases a critical alert will be displayed. Another nice code quality rule that I like is related to the number of parameters of each method. If a method has over 8 parameters (this value is configurable), NDepend will automatically display a critical alert that notifies the user about this issue.
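Rules like these are written in NDepend's CQLinq query language. A minimal sketch of such a complexity rule might look like the following (the threshold of 40 matches the one above; treat the exact rule body as my own assumption, not NDepend's built-in rule verbatim):

```cs
// Hypothetical CQLinq rule: flag methods whose cyclomatic
// complexity goes over the threshold mentioned above.
warnif count > 0
from m in JustMyCode.Methods
where m.CyclomaticComplexity > 40
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity }
```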
On top of this, NDepend is able to run regression checks over code quality. For example, it can compare the complexity of a method over time and notify us when the complexity increases. Or it can notify us when we add methods to classes that already contain a lot of methods.
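Such a regression check can also be expressed in CQLinq by comparing the current analysis against a baseline. A sketch (the diff helper names follow NDepend's comparison API, but treat the exact rule as an assumption):

```cs
// Hypothetical CQLinq rule: warn when a changed method became
// more complex than it was in the baseline analysis result.
warnif count > 0
from m in JustMyCode.Methods
where m.IsPresentInBothBuilds() &&
      m.CodeWasChanged() &&
      m.CyclomaticComplexity > m.OlderVersion().CyclomaticComplexity
select m
```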
If you integrate NDepend with your source control and build machine, you will be surprised to discover that you have rules that can force developers to respect code quality. NDepend can detect when the code quality of a specific part of the code decreases (number of lines, number of methods, number of fields and so on) and can notify users when the quality is going down. You can even fail a build because of this.
Going further with code analysis, NDepend is able to retrieve information about the quality of our design and how the architecture is layered. It can notify us when we are using empty interfaces or have cycles during type initialization, and it is even able to calculate the size of instances and notify us when an instance is too big. You should know that there are cases where instances over 64 KB can affect the performance of our system. If we use types from the DAL or DB layer in the UI layer, NDepend is smart enough to detect this and notify us. Of course, things like high cohesion and low coupling are automatically checked as well.
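The layering check can be expressed the same way. A sketch of a rule forbidding the UI layer from using the DAL directly (the namespace names `MyApp.UI` and `MyApp.DAL` are made up for the example, and the exact helper names are assumptions on my side):

```cs
// Hypothetical CQLinq rule: the UI layer must not use the DAL directly.
warnif count > 0
from n in Application.Namespaces.WithNameLike("MyApp.UI")
where n.IsUsingAny(Application.Namespaces.WithNameLike("MyApp.DAL"))
select n
```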

API Changes
This is a pretty interesting feature of NDepend. It was the first tool I saw that is able to detect an API change and notify us. For example, when a namespace or a type is removed, we will automatically be notified that a change was made and that other components are affected by it. On top of this, it is able to detect fields that were used before and are not used anymore.

Dead Code and other features
Like ReSharper or similar tools, NDepend is able to detect code that is not used by our application. It can also recommend changing the visibility level of different methods, classes or properties based on their usage. In addition, we will be notified when a field can be made static, const and so on.
It has the capacity to analyze the naming format of our fields, properties, methods, classes and namespaces, and to give us recommendations based on a specific naming convention – for example, to alert us when a name is too long or when we don't respect our defined standard.

Yes, it is true that a part of these features can be found in other tools like ReSharper, but NDepend has some features that are unique.

Cool features
The first thing is Code Metrics. In comparison with other tools, they are very easy to configure and modify. I remember when I tried to configure some metrics with another tool in combination with Sonar. We had problems running the metrics over the code and we invested a lot of time fixing them. It was a nightmare. Using CQLinq you have the possibility to write your own metrics over the code or change the ones that already exist. This is one of the features that I really love.
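For illustration, a custom CQLinq query (the rule body and thresholds below are my own sketch, not one of NDepend's built-in rules) could flag types that grew too big:

```cs
// Hypothetical custom CQLinq query: types that grew too big,
// measured by lines of code and number of methods.
warnif count > 0
from t in JustMyCode.Types
where t.NbLinesOfCode > 500 || t.NbMethods > 30
orderby t.NbLinesOfCode descending
select new { t, t.NbLinesOfCode, t.NbMethods }
```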
Be aware of one thing: NDepend has a lot of metrics, and you should use only the ones that are useful for you and your team. Because of the number of metrics that are displayed by default, NDepend can intimidate a junior very easily.
Another important thing is the speed. The code is analyzed extremely fast. In comparison with other tools, the speed is higher by a factor of 8x or 10x. Yes, it's true – try it on a big project.
The third thing is the diagrams that it generates. Not only do we have a dependency graph (at all levels – class, namespace, component) and a dependency matrix with treemap, we also have the Abstractness vs. Instability chart, which can give us some useful information about how we wrote our code.

You should try it, and you will discover a new way to visualize code metrics and other information related to your code.
