
Code Level Runtime Analytics


Question

Hi,

 

I have been given a task to analyse and collect info on all code paths for a very large monolithic .NET web app that is used across various clients.

 

I am thinking about adding some reflection-type code that collects data on the run-time code paths taken, depending on the params passed to various methods.
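Roughly the kind of thing I have in mind, purely as a sketch (the class and method names below are made up for illustration, and the console output would really go to a log or database):

```csharp
// Rough sketch only: a tiny recorder each instrumented method calls on entry,
// capturing which method ran and with what parameter values.
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Runtime.CompilerServices;

public static class CodePathRecorder
{
    // method name -> number of times it was hit at run time
    private static readonly ConcurrentDictionary<string, long> Hits =
        new ConcurrentDictionary<string, long>();

    public static void Record(object[] args, [CallerMemberName] string method = "")
    {
        Hits.AddOrUpdate(method, 1, (key, count) => count + 1);
        // A real implementation would write to a log or database, not the console.
        Console.WriteLine("{0:o} {1}({2})", DateTime.UtcNow, method,
            string.Join(", ", args.Select(a => a ?? "null")));
    }
}

public class InvoiceService // made-up example class
{
    public void Approve(int invoiceId, string clientCode)
    {
        CodePathRecorder.Record(new object[] { invoiceId, clientCode });
        // ...existing logic unchanged...
    }
}
```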

 

Once I have the data, I need to start removing/deprecating code that never gets used in the real world over a few months, along with refactoring and adding more test coverage on the major code paths to improve quality.

 

I am aware of the unit-testing approach, but this app is 13+ years old: unit tests can't just be added, and the code can't easily be refactored without breaking something, nor can it easily be re-written.

 

Are there any tools or frameworks that can be integrated into the existing code that would allow me to collect such data?

 

I am not looking for code samples, but if anyone has some, awesome. Any help, general guidance, or software/framework recommendations would be highly appreciated.

 

Something like Google Analytics but for the actual source code itself.

 

TA :)


8 answers to this question



Plain old code profiling/analysis has been a popular tool category since the dawn of .NET, but to some extent Roslyn has sparked a modern revolution in many .NET tools.

 

1. Unit testing has NOTHING to do with any aspect of this

 

2. nanoRant: constant, continuous code refactoring was the real useful business take-away from Extreme Programming, not unit testing, which is mostly a sick joke in the currently common, watered-down descendant of Extreme Programming techniques.

 

3. You can use AOP to instrument large bodies of existing code: https://www.postsharp.net/aop.net (a rough sketch of what such an aspect looks like is at the end of this post).

 

4. .NET has the most advanced compiler on Planet Earth in the form of Roslyn, so any tool that uses Roslyn's code-understanding features should be given preference.

 

https://github.com/dotnet/roslyn

 

5. Here are a few starting points for you:

 

https://github.com/mre/awesome-static-analysis

 

https://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis

 

https://www.owasp.org/index.php/Source_Code_Analysis_Tools

 

https://visualstudiomagazine.com/articles/2018/05/01/vs-analysis-tools.aspx

 

https://www.sonarsource.com/products/codeanalyzers/sonarcsharp.html

https://github.com/SonarSource/sonar-dotnet

 

https://www.ndepend.com
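To make the AOP point (3) concrete, this is roughly the shape of a PostSharp OnMethodBoundaryAspect. Treat it as a sketch only, and check the PostSharp docs for the exact attributes and licensing of whatever version you adopt:

```csharp
// Sketch of an entry-logging aspect in the PostSharp style. PostSharp weaves the
// OnEntry call into the targeted methods at build time, so 13 years of code
// does not have to be edited by hand.
using System;
using PostSharp.Aspects;
using PostSharp.Serialization;

[PSerializable]
public class CodePathAspect : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        // args.Method identifies the woven method; args.Arguments holds the parameter values.
        Console.WriteLine("{0}.{1}({2})",
            args.Method.DeclaringType, args.Method.Name,
            string.Join(", ", args.Arguments.ToArray()));
    }
}

// Typically applied broadly via multicasting, e.g. in AssemblyInfo.cs
// (namespace pattern is illustrative):
// [assembly: CodePathAspect(AttributeTargetTypes = "MyCompany.WebApp.*")]
```

The multicast attribute is what makes this viable at scale: one line aims the aspect at whole namespaces instead of editing every method.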

 

 


Thank you. You are spot on about many things, and the mess I have is inherited, so there is very little I can do about unit tests, continuous code refactoring, etc. The initiatives are also inherited, so I have no choice but to investigate the possibilities.

 

Let me tell you the most important outcome I am after. There are a lot of QA issues, and they only appear in UAT or, even worse, post-deployment :( What I am trying to achieve is to find the most-hit areas of the code for different (client) configurations and then get our QA teams to hit those areas as hard as they can to sort out the QA issues. In the long run the whole thing is going to be re-written, but that is a few years away and I need results in 3-4 months.

 

So, that said, I am actually looking for run-time recording and then analysis of method calls and parameter values, so I can generate a heat map for each client configuration and then work with the QA team. Hopefully this gives a little more context.
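For the collection side, what I'm picturing is nothing fancier than counting hits per method per client configuration and dumping it out for a heat map, along these lines (rough sketch only; where the client-config value comes from is still to be worked out):

```csharp
// Rough sketch: aggregate run-time hits per (client configuration, method) so the
// counts can be exported and charted as a heat map for the QA team.
using System.Collections.Concurrent;
using System.IO;
using System.Linq;

public static class HeatMapCollector
{
    // key = "clientConfig|Namespace.Type.Method", value = hit count
    private static readonly ConcurrentDictionary<string, long> Hits =
        new ConcurrentDictionary<string, long>();

    public static void Record(string clientConfig, string method)
    {
        Hits.AddOrUpdate(clientConfig + "|" + method, 1, (key, count) => count + 1);
    }

    // Export as CSV (client, method, hits) sorted by hit count, ready for charting.
    public static void ExportCsv(string path)
    {
        File.WriteAllLines(path, Hits
            .OrderByDescending(kv => kv.Value)
            .Select(kv => kv.Key.Replace("|", ",") + "," + kv.Value));
    }
}
```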


Oh, that's simple then, just plain old Visual Studio 2019 stuff...

 

Dynamic tracing of executing code (IntelliTrace) is a standard, well-tested and very useful feature of Visual Studio 2019:

 

https://azuredevopslabs.com/labs/devopsserver/intellitrace/

 

Also useful in your situation:

 

https://azuredevopslabs.com/labs/devopsserver/codeanalysis/

 

https://azuredevopslabs.com/labs/devopsserver/intellitest/

 

https://azuredevopslabs.com/labs/devopsserver/liveunittesting/

 

https://azuredevopslabs.com/labs/devopsserver/livedependencyvalidation/

 

https://azuredevopslabs.com/labs/devopsserver/releasemanagement/

 

 

 

 


As a general observation, you seem to be describing a very large-scale retrofit of running code with instrumentation payloads, and a re-architecture of that kind is best introduced in a green-field scenario.

 

Your system then becomes a high risk for Heisenbugs, which can lead to a nightmare of "ring around the rosies":

 

https://en.wikipedia.org/wiki/Heisenbug

 

  On 08/05/2019 at 07:16, DevTech said:

Oh, that's simple then, just plain old Visual Studio 2019 stuff...

 


We are still on VS2013. We have secured an enterprise agreement to get VS2019, and a few selected people, including myself, already have it, but a wide-scale deployment is a few months away.

 

Speaking to a senior architect, I learned that we have used New Relic before with amazing success, but this being a financial system, and with our company's stance on not using cloud tech (just yet), New Relic was ruled out. We are in the process of getting that ban reviewed.

  On 08/05/2019 at 07:29, DevTech said:

As a general observation, you seem to be describing a very large-scale retrofit of running code with instrumentation payloads, and a re-architecture of that kind is best introduced in a green-field scenario.

 


Another reason I am collecting the data is to identify the heavy hitters and address QA issues with extensive test coverage (not unit tests, but end-to-end, scenario-based tests) using our automated regression system.

 

Thank you for your help and guidance on this. Much appreciated.


Well, obviously my thoughts have been very generic, but they have the small useful attribute (maybe) of being an "outside viewpoint".

 

Each of your replies drops another hint about a legacy system of very large size and complexity, and you deserve extra credit for seeking all viewpoints and ideas in your evaluation process.

 

So in that spirit, and to be complete in keeping with the excellent standard of due diligence you are exhibiting, I will point out a HUGE sea change in the design, architecture, deployment and real-time delivery of modern enterprise (and anything large) applications to users: the Kubernetes revolution. At this point EVERY enterprise player has signed on to this architecture; it has arrived and will be considered mandatory dial-tone infrastructure within a few years, if not right now.

 

I point this out in your case because any establishment using wonderful .NET technology may have missed some of the signals and messaging around this architecture, since at first glance it seems to be about things a bit distant from .NET platforms: "Cloud" and Linux. Even if it is not possible to shoehorn a legacy system into the new way of doing things, there may be opportunities to build in compatibilities as you go along...

 

The standards around this architecture are run by the CNCF (Cloud Native Computing Foundation, part of the Linux Foundation), and it can easily be missed that it describes the future of enterprise computing BOTH for cloud and on-premise, and ALSO for both Linux and Windows. Microsoft is a PRIMARY member of this foundation. There is no restriction on following a CNCF standard on local servers and with Windows technology. In fact, some of the tech is already baked right into the Windows API.

 

Skipping all the crap in between, the beautiful result of twisting application architecture into many Docker containers managed by Kubernetes is that the application becomes robust, scalable, hot-deployable and, most importantly for enterprise, self-healing with zero downtime. Kubernetes manages the life cycle and moves containers around as dictated by resource requirements, best fit, and demand loading. All the infrastructure is free OSS, it can run on local servers and dev machines (well, beefy ones...), and once working it scales with little or no effort to larger local clusters or the Cloud, since it is a standard supported by every Cloud provider.

 

The downside is a bit of head-scratching to work out where to store state when the application containers are stateless (the only way to get self-healing) and how to talk to your application when Kubernetes might have moved it anywhere!

 

Windows 10 and the latest Windows Server have native code built into the Windows API to support both native Windows containers and Linux containers. The latest version of .NET Core thrives in this flexible, cross-platform, ubiquitous environment.

 

https://www.cncf.io

https://www.cncf.io/about/members/

https://landscape.cncf.io

https://www.docker.com/products/windows-containers

 

Windows Containers on Windows 10

https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/quick-start-windows-10

 

Linux Containers on Windows 10

https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/quick-start-windows-10-linux

 

 

[Attached images: CNCF Trail Map]


Hey, I'm sure that will work out well for you. Best wishes there...

 

If you want, as you go along, feel free to throw out thoughts, inquiries, etc. - micro or macro...

 

A Note for other readers:

 

Something like a snapshot of a system or a database is very primitive compared to container self-healing, which is a kind of quantum-leap first step towards a Holy Grail of computing. It works. It has not beaten down the doors of anyone's attention because it can be seen as "limited", in that it needs major changes to the architecture of things. It is a remarkable by-product of Docker containers being stateless, where an entire application image becomes the (huge) equivalent of a stateless HTTP request.

 

Normal stuff you expect in your VM server world that is missing in zillions of amorphous clusters of Docker containers:

 

1. You need a Service Mesh to locate and talk to your App:

(your App moves around, changes IP address, adds copies of itself on demand load, etc)

 

Examples of CNCF Solutions:

https://linkerd.io  https://github.com/linkerd/linkerd2

https://www.getambassador.io  https://github.com/datawire/ambassador

https://www.envoyproxy.io   https://github.com/envoyproxy/envoy

https://traefik.io  https://github.com/containous/traefik

 

2. You most likely will need a file system composed of specialized Containers:

(your App can be destroyed, moved, etc., so, like an HTTP request, nothing is retained locally)

 

Examples of CNCF Solutions:

https://rook.io   https://github.com/rook/rook

https://www.openebs.io  https://github.com/openebs/openebs

https://min.io  https://github.com/minio/minio

 

3. You will need state management (a rough .NET-flavoured sketch follows this list):

 

- can be as simple as using Container Native Storage (#2 above)

- or a DB (ideally a CNCF standards compliant DB)

- or a "Serverless" API

 

but #3 is a more complex subject for another day...
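Just to give #3 a concrete .NET shape (the full topic really is for another day): the usual move is to push state out of the process into something the cluster provides, so any container can be killed or moved without losing data. A minimal sketch, assuming ASP.NET Core with the Microsoft.Extensions.Caching.StackExchangeRedis package and a Redis service at an illustrative in-cluster address:

```csharp
// Sketch: keep the web app itself stateless by externalizing state into a
// distributed cache, so Kubernetes can restart/move/scale containers freely.
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // "redis-service:6379" is an illustrative cluster DNS name, not a real endpoint.
        services.AddStackExchangeRedisCache(options =>
        {
            options.Configuration = "redis-service:6379";
        });
    }
}

// Application code reads/writes state through the cache instead of local memory or disk.
public class CartService
{
    private readonly IDistributedCache _cache;

    public CartService(IDistributedCache cache)
    {
        _cache = cache;
    }

    public Task SaveAsync(string userId, string cartJson)
    {
        return _cache.SetStringAsync("cart:" + userId, cartJson);
    }

    public Task<string> LoadAsync(string userId)
    {
        return _cache.GetStringAsync("cart:" + userId);
    }
}
```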

 

But also, for once, the complexity ends up yielding a very real simplicity, which is why Google, which invented Kubernetes, is running BILLIONS of containers every day!

 

https://kubernetes.io

https://azure.microsoft.com/en-ca/services/kubernetes-service/

https://en.wikipedia.org/wiki/Kubernetes

 

 

This topic is now closed to further replies.