
Code Level Runtime Analytics


Question

Hi,

 

I've been given a task to analyse and collect info on all code paths for a very large monolithic .NET web app that is used across various clients.

 

I'm thinking about adding some reflection-type code that collects data on the run-time code paths taken, depending on the parameters passed to various methods.
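
A rough sketch of what I mean (all names are placeholders, and the StackFrame trick is just one possible approach):

```csharp
using System.Collections.Concurrent;
using System.Diagnostics;

// Thread-safe hit counter keyed by "Namespace.Type.Method".
// Each instrumented method calls Recorder.Hit() on entry.
public static class Recorder
{
    private static readonly ConcurrentDictionary<string, long> Counts =
        new ConcurrentDictionary<string, long>();

    public static void Hit()
    {
        // Walk one stack frame up to identify the calling method via reflection.
        // Note: JIT inlining can skew StackFrame results in release builds.
        var caller = new StackFrame(1).GetMethod();
        string key = caller.DeclaringType.FullName + "." + caller.Name;
        Counts.AddOrUpdate(key, 1, (k, n) => n + 1);
    }
}

// Usage inside an existing method:
//   public void ProcessOrder(Order order)
//   {
//       Recorder.Hit(); // record that this path was taken
//       ...
//   }
```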

 

Once I have the data, I need to start removing/deprecating code that never gets used over a few months in the real world, along with refactoring and adding more test coverage on the major code paths to improve quality.

 

I am aware of the unit-testing approach, but this app is 13+ years old: unit tests can't just be added, and the code can't be easily refactored without breaking something, nor easily re-written.

 

Are there any tools or frameworks that can be integrated into the existing code that would allow me to collect such data?

 

I'm not looking for code samples, but if anyone has them, awesome. Any help, general guidance, or software/framework recommendations would be highly appreciated.

 

Something like Google Analytics but for the actual source code itself.

 

TA :)


8 answers to this question


  On 07/05/2019 at 01:31, wrack said:

Are there any tools or frameworks that can be integrated into the existing code that would allow me to collect such data?

Plain old code profiling/analysis has been a popular tool category since the dawn of .NET, but Roslyn has sparked a modern revolution in many .NET tools.

 

1. Unit testing has NOTHING to do with any aspect of this

 

2. nanoRant: Constant Continuous Code Refactoring was the real useful business "take-away" from Extreme Programming, not unit testing, which is mostly a sick joke in the currently common watered-down, weak descendants of Extreme Programming techniques.

 

3. You can use AOP (aspect-oriented programming) to instrument large bodies of existing code; see the sketch after the links below. https://www.postsharp.net/aop.net

 

4. .NET has the most advanced compiler on Planet Earth in the form of Roslyn, so any tool that uses Roslyn's code-understanding features should be given preference.

 

https://github.com/dotnet/roslyn

 

5. Here are a few starting points for you:

 

https://github.com/mre/awesome-static-analysis

 

https://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis

 

https://www.owasp.org/index.php/Source_Code_Analysis_Tools

 

https://visualstudiomagazine.com/articles/2018/05/01/vs-analysis-tools.aspx

 

https://www.sonarsource.com/products/codeanalyzers/sonarcsharp.html

https://github.com/SonarSource/sonar-dotnet

 

https://www.ndepend.com
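
On the AOP point above (item 3): a minimal sketch of the kind of usage-tracking aspect PostSharp enables, assuming a recent PostSharp package is referenced; UsageTrackingAspect and UsageLog are made-up names, not library types:

```csharp
using System.Collections.Concurrent;
using PostSharp.Aspects;
using PostSharp.Serialization;

// Shared, thread-safe hit log (hypothetical helper, not part of PostSharp).
public static class UsageLog
{
    private static readonly ConcurrentDictionary<string, long> Hits =
        new ConcurrentDictionary<string, long>();

    public static void Record(string method)
    {
        Hits.AddOrUpdate(method, 1, (k, n) => n + 1);
    }
}

// Counts every entry into a woven method; the aspect is injected at build
// time, so the existing method bodies are never edited by hand.
[PSerializable]
public sealed class UsageTrackingAspect : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        // args.Method identifies the intercepted method at run time.
        UsageLog.Record(args.Method.DeclaringType.FullName + "." + args.Method.Name);
    }
}

// Applied assembly-wide via multicasting, e.g. in AssemblyInfo.cs:
//   [assembly: UsageTrackingAspect(AttributeTargetTypes = "MyCompany.MyApp.*")]
```

The build-time weaving is what makes this attractive for a 13-year-old codebase: the instrumentation goes in without touching the method bodies.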

 

 


Thank you. You are spot on about many things, but the mess I have is inherited, so there is very little I can do about unit testing, continuous code refactoring, etc. The initiatives are also inherited, so I have no choice but to investigate the possibilities.

 

Let me tell you the most important outcome I am after. There are a lot of QA issues, and they only appear in UAT or, even worse, post-deployment :( What I am trying to achieve is to find the most-hit areas of the code for different (client) configurations and then get our QA teams to hit those areas as hard as they can to sort out the QA issues. In the long run the whole thing is going to be re-written, but that is a few years away and I need results in 3-4 months.

 

That said, I am actually looking for run-time recording and analysis of method calls and parameter values, so I can generate a heat map for each client configuration and then work with the QA team. Hopefully this gives a little more context.
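
Roughly what I picture, sketched on the assumption that a client identifier is available from our configuration (all names here are placeholders):

```csharp
using System.Collections.Concurrent;
using System.IO;
using System.Linq;

// Per-client hit counts that can be dumped to CSV for a heat map.
public static class HeatMapLog
{
    // Key: "clientId|Namespace.Type.Method" -> hit count.
    private static readonly ConcurrentDictionary<string, long> Hits =
        new ConcurrentDictionary<string, long>();

    public static void Record(string clientId, string method)
    {
        Hits.AddOrUpdate(clientId + "|" + method, 1, (k, n) => n + 1);
    }

    // Writes "client,method,count" rows, hottest first, ready for charting.
    public static void Dump(string csvPath)
    {
        var rows = Hits.OrderByDescending(kv => kv.Value)
                       .Select(kv => kv.Key.Replace("|", ",") + "," + kv.Value);
        File.WriteAllLines(csvPath, rows);
    }
}
```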

  On 08/05/2019 at 06:08, wrack said:

I am actually looking for run-time recording and analysis of method calls and parameter values, so I can generate a heat map for each client configuration.

Oh, that's simple then, just plain old Visual Studio 2019 stuff...

 

Dynamic tracing of executing code (IntelliTrace) is a standard, well-tested, and very useful feature of Visual Studio 2019:

 

https://azuredevopslabs.com/labs/devopsserver/intellitrace/

 

Also useful in your area:

 

https://azuredevopslabs.com/labs/devopsserver/codeanalysis/

 

https://azuredevopslabs.com/labs/devopsserver/intellitest/

 

https://azuredevopslabs.com/labs/devopsserver/liveunittesting/

 

https://azuredevopslabs.com/labs/devopsserver/livedependencyvalidation/

 

https://azuredevopslabs.com/labs/devopsserver/releasemanagement/

 

 

 

 


As a general observation, you seem to be describing a very large-scale retrofit of running code with instrumentation payloads, and a re-architecture best introduced in a green-field scenario.

 

Your system then becomes a high risk for Heisenbugs, which can lead to a nightmare of "ring around the rosies":

 

https://en.wikipedia.org/wiki/Heisenbug

 

  On 08/05/2019 at 07:16, DevTech said:

Oh, that's simple then, just plain old Visual Studio 2019 stuff...

We are still on VS2013. We have secured an enterprise agreement to get VS2019, and a few selected people, including myself, have it, but wide-scale deployment is a few months away.

 

Speaking to a senior architect, I learned we have used New Relic before with amazing success, but with this being a financial system, our company's stance on not using cloud tech (just yet) ruled out the use of New Relic. We are in the process of getting that ban reviewed.

  On 08/05/2019 at 07:29, DevTech said:

As a general observation, you seem to be describing a very large-scale retrofit of running code with instrumentation payloads, and a re-architecture best introduced in a green-field scenario.

Another reason I am collecting the data is to identify the heavy hitters and address QA issues with extensive test coverage (not unit tests, but end-to-end, scenario-based tests) using our automated regression system.

 

Thank you for your help and guidance on this. Much appreciate it.

  On 08/05/2019 at 23:44, wrack said:

Well, obviously my thoughts have been very generic, but they have the small useful attribute (maybe) of being an "outside viewpoint".

 

Each one of your replies drops an extra hint about a legacy system of very large size and complexity, for which you deserve extra credit in seeking all viewpoints and ideas in your evaluation process.

 

So in that spirit, and to be complete in keeping with the excellent standard of due diligence you are exhibiting, I will point out a HUGE sea change in the design, architecture, deployment and real-time delivery of modern enterprise (and anything large) applications to users, and that is the Kubernetes revolution. At this point EVERY enterprise player has signed on to this architecture; it has arrived and will be considered mandatory dial-tone infrastructure within a few years, if not right now.

 

I point this out in your case because any establishment using wonderful .NET technology may have missed some of the signals and messaging around this architecture, since at first glance it seems to be about things a bit distant from .NET platforms: "Cloud" and Linux. Even if it is not possible to shoehorn a legacy system into the new way of doing things, there may be opportunities to build in compatibility as you go along...

 

The standards around this architecture are run by the CNCF (Cloud Native Computing Foundation, part of the Linux Foundation), and it can easily be missed that it describes the future of Enterprise Computing BOTH for Cloud and On-Premise, and ALSO both for Linux and for Windows. Microsoft is a PRIMARY member of this foundation. There is no restriction on following a CNCF standard on local servers and with Windows technology. In fact, some of the tech is already baked right into the Windows API.

 

Skipping all the crap in between, the beautiful result of twisting an application architecture into many Docker Containers managed by Kubernetes is that the application becomes robust, scalable, hot-deployable and, most importantly for enterprise, Self-Healing with zero downtime. Kubernetes manages the life cycle and moves containers around as needed by resource requirements, best fit, and demand loading. All the infrastructure is free OSS, can run on local servers and dev machines (well, beefy ones...) and, once working, scales with little or no effort to larger local clusters or the Cloud, since it is a standard supported by every Cloud provider.

 

The downside is a bit of head-scratching to understand where to store state when the application containers are stateless (the only way to get self-healing), and how to talk to your application when Kubernetes might have moved it anywhere!

 

Windows 10 and the latest Windows Server have native code built into the Windows API to support both native Windows Containers and Linux Containers. The latest version of .NET Core thrives in this flexible, cross-platform, ubiquitous environment.

 

https://www.cncf.io

https://www.cncf.io/about/members/

https://landscape.cncf.io

https://www.docker.com/products/windows-containers

 

Windows Containers on Windows 10

https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/quick-start-windows-10

 

Linux Containers on Windows 10

https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/quick-start-windows-10-linux

 

 

[Image: CNCF Trail Map]


Hey, I'm sure that will work out well for you. Best wishes there...

 

If you want, feel free to throw out thoughts, inquiries, etc. as you go along - micro or macro...

 

A Note for other readers:

 

Something like a snapshot of a system or a database is very primitive compared to Container Self-Healing, which is a quantum-leap first step towards a Holy Grail of computing. It works. It has not beaten down the doors of anyone's attention, since it can be seen as "limited" in that it needs major changes to the architecture of things. It is a remarkable by-product of Docker Containers being stateless, where an entire application image becomes the (huge) equivalent of a stateless HTTP request.

 

Normal stuff you expect in your VM server world that is missing in zillions of amorphous clusters of Docker containers:

 

1. You need a Service Mesh to locate and talk to your App:

(your App moves around, changes IP address, adds copies of itself under demand load, etc.)

 

Examples of CNCF Solutions:

https://linkerd.io  https://github.com/linkerd/linkerd2

https://www.getambassador.io  https://github.com/datawire/ambassador

https://www.envoyproxy.io   https://github.com/envoyproxy/envoy

https://traefik.io  https://github.com/containous/traefik

 

2. You most likely will need a file system composed of specialized Containers:

(your App can be destroyed, moved, etc., so, like an HTTP request, nothing is retained locally)

 

Examples of CNCF Solutions:

https://rook.io   https://github.com/rook/rook

https://www.openebs.io  https://github.com/openebs/openebs

https://min.io  https://github.com/minio/minio

 

3. You will need State management

 

- can be as simple as using Container Native Storage (#2 above)

- or a DB (ideally a CNCF standards compliant DB)

- or a "Serverless" API

 

but #3 is a more complex subject for another day...

 

But also, for once, the complexity ends up yielding a very real simplicity, which is why Google, who invented Kubernetes, runs BILLIONS of containers every day!

 

https://kubernetes.io

https://azure.microsoft.com/en-ca/services/kubernetes-service/

https://en.wikipedia.org/wiki/Kubernetes

 

 

This topic is now closed to further replies.