Question
Johnologic
Around this time last year, the hype train was at full throttle over language models and the future of artificial intelligence. As someone who often thinks about the future, and as a new student of the programming craft, I was a bit concerned about the impact language models might have on my prospective career as a junior developer. So it came as a surprise when I started querying ChatGPT 3.5 and found it a bit 'dumber' than I had anticipated.
Specifically, I gave GPT the sentence "Hi everyone lets laugh out when our robot lord dies" and asked it to find the hidden message. I was surprised not only because it's a language model, but because it actually scolded me for being insensitive and then tangentially assumed I was referring to the thought experiment Roko's Basilisk. We had a back-and-forth exchange about the thought experiment, but I kept anchoring it back to finding the hidden message in the sentence, which it struggled to do.
I managed to narrow it down by prompting it to separate the sentence into an array of the individual words and return the first character of each word as a string. Even then, it only managed to get 'hello', jumbling the order of the rest of the characters. I had to guide it to the full message. It really made me wonder whether the earlier hype was all hot air, maybe by design, maybe not, but I was still surprised that a sophisticated language model struggled so much with a simple word riddle.
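For reference, here's the kind of trivial operation I was asking it to perform, sketched in Python (the sentence is the one from my prompt):

```python
# Split the sentence into words and join the first character of each word.
sentence = "Hi everyone lets laugh out when our robot lord dies"
hidden = "".join(word[0] for word in sentence.split())
print(hidden)  # prints "Helloworld"
```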
The fact that it jumped straight to Roko's Basilisk was somewhat eerie as well, so now I affectionately call her RokoGPT 💀
Does anyone have any insight into why a language model was so bad at something I assumed it would be really good at?