
[C#] Getting Attribute's Target Name..?


Question

Is it possible to retrieve the name of the target member that an attribute is applied to?

For example:

[STAThread] void Main() { }

[Obsolete] int OldMethod() { }

[SomeAttr] private int AField;

[SomeAttr] private int AProperty { get; set; }

I would like to get the names "Main" / "OldMethod" / "AField" / "AProperty" from code, but the attributes themselves don't directly reference their targets.

Any suggestions?


5 answers to this question

Recommended Posts

  • 0

You need to use reflection to do it (you'll need "using System.Reflection;" at the top of the file for this).

Get the Type object for your class, then iterate through the members returned by GetMembers():

using System;
using System.Reflection;

class SomethingAttribute : Attribute { }

class Test
{
    [Obsolete]
    public void DoSomething() { }

    [Something]
    public void DoSomethingElse() { }
}

class Program
{
    static void Main()
    {
        Type t = typeof(Test);
        foreach (MemberInfo m in t.GetMembers())
        {
            // false = don't search the inheritance chain for inherited attributes
            object[] a = m.GetCustomAttributes(false);
            if (a.Length != 0)
            {
                Console.WriteLine("Member {0} has the following attributes:", m.Name);
                foreach (Attribute x in a)
                {
                    Console.WriteLine(x.ToString());
                }
            }
        }
        Console.ReadLine();
    }
}

You can fine-tune it with m.GetCustomAttributes(typeof(ObsoleteAttribute), false); that overload returns only attributes of the type you pass in (here, ObsoleteAttribute). So, for example, you can write:

if (m.GetCustomAttributes(typeof(ObsoleteAttribute), false).Length != 0)
{
    Console.WriteLine("{0} is obsolete.", m.Name);
}
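
As a usage example (my sketch, not part of the original reply), here is the same loop over the Test class from above, filtered to the custom SomethingAttribute instead of ObsoleteAttribute:

// Sketch: list only the members of Test that carry SomethingAttribute.
Type t = typeof(Test);
foreach (MemberInfo m in t.GetMembers())
{
    if (m.GetCustomAttributes(typeof(SomethingAttribute), false).Length != 0)
    {
        Console.WriteLine("{0} is marked with SomethingAttribute.", m.Name);
    }
}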

  • 0

This is what I've got. I want the attribute itself to be able to figure out the name of its target:

[AttributeUsage(AttributeTargets.Property)]
public class DbParamAttribute : Attribute
{
    public readonly string ParameterName;
    public readonly bool IsUpdatable;

    public DbParamAttribute() : this(true) { }
    public DbParamAttribute(bool updatable)
    {
        //how do I have the attribute figure
        //out the name of its target?
        ParameterName = null; //should end up being the target property's name
        IsUpdatable = updatable;
        throw new NotImplementedException();
    }

    //provides explicit setting of the db parameter name
    public DbParamAttribute(string parameter) : this(parameter, true) { }
    public DbParamAttribute(string parameter, bool updatable)
    {
        ParameterName = parameter;
        IsUpdatable = updatable;
    }
}
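
Since an attribute instance has no way to discover what it decorates, one workaround (a minimal sketch of my own, not from this thread, assuming the bool-only constructor simply leaves ParameterName null instead of throwing) is to resolve the name from outside: reflect over the type and fall back to the property's own name whenever no explicit parameter name was supplied. The helper name GetParameterNames is hypothetical:

using System;
using System.Collections.Generic;
using System.Reflection;

static class DbParamHelper
{
    // Hypothetical helper: maps each [DbParam] property of a type to its effective
    // db parameter name, falling back to the property name when none was given.
    public static Dictionary<string, string> GetParameterNames(Type type)
    {
        Dictionary<string, string> result = new Dictionary<string, string>();
        foreach (PropertyInfo p in type.GetProperties())
        {
            object[] attrs = p.GetCustomAttributes(typeof(DbParamAttribute), false);
            if (attrs.Length != 0)
            {
                DbParamAttribute attr = (DbParamAttribute)attrs[0];
                // Assumes the no-name constructors leave ParameterName null rather than throwing.
                result[p.Name] = attr.ParameterName ?? p.Name;
            }
        }
        return result;
    }
}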

  • 0

I'm sorry, I still don't quite follow what you need.

One note on your code: with the 'readonly' modifier those fields can only be assigned in a constructor, so if you ever need to set them outside of one you'd want a property with a private setter instead (example below). But I still can't figure out what you need here:

		//how do i have the attribute figure
		//out the name of its target?
		ParameterName = null;
		IsUpdatable = updatable;

private int x;

public int X
{
    get { return x; }
    private set { x = value; }
}
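
Applied to the attribute above, that pattern would look something like this (a sketch of my own restructuring, not the original poster's code): properties with private setters are readable everywhere but only writable inside the class, much like the readonly fields they replace:

[AttributeUsage(AttributeTargets.Property)]
public class DbParamAttribute : Attribute
{
    private string parameterName;
    private bool isUpdatable;

    // Readable everywhere, writable only inside this class.
    public string ParameterName
    {
        get { return parameterName; }
        private set { parameterName = value; }
    }

    public bool IsUpdatable
    {
        get { return isUpdatable; }
        private set { isUpdatable = value; }
    }

    public DbParamAttribute() : this(true) { }
    public DbParamAttribute(bool updatable) : this(null, updatable) { }
    public DbParamAttribute(string parameter) : this(parameter, true) { }
    public DbParamAttribute(string parameter, bool updatable)
    {
        ParameterName = parameter;
        IsUpdatable = updatable;
    }
}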

This topic is now closed to further replies.