Forget AGI—Sam Altman celebrates ChatGPT finally following em dash formatting rules

AI's Struggle to Follow Simple Instructions

In a recent X post, OpenAI CEO Sam Altman celebrated that ChatGPT has started following custom instructions to avoid using em dashes. Em dashes have been a point of contention for AI chatbots, with many users complaining about their overuse.

According to Altman, the custom instructions feature was added to let users set persistent preferences that apply across all conversations; the written instructions are simply appended to the prompt before it's fed into the model. Even so, the feature has clear limitations.
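
To picture the mechanism Altman describes, here is a minimal sketch in Python. The names and structure are illustrative assumptions, not OpenAI's actual implementation, which is not public:

    # Hypothetical sketch: how a persistent custom instruction might be
    # attached to every conversation before the prompt reaches the model.
    # All names here are illustrative, not OpenAI's.

    CUSTOM_INSTRUCTION = "Please avoid using em dashes in your responses."

    def build_messages(user_prompt: str) -> list[dict]:
        """Prepend the stored preference as a system message, then the user turn."""
        return [
            {"role": "system", "content": CUSTOM_INSTRUCTION},
            {"role": "user", "content": user_prompt},
        ]

    # The assembled message list is what gets sent to the model on every chat,
    # which is why the preference "persists" across conversations.
    print(build_messages("Summarize today's tech news."))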

Testing of ChatGPT's latest version, GPT-5.1, has shown mixed results on em dash use. In some cases the model followed the instruction and produced fewer em dashes, but in others it continued to overuse them. This raises questions about the reliability of AI models and their ability to follow even simple instructions.
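
Anyone can run a rough version of this test themselves: send the same prompt repeatedly and count the em dashes in each reply. The sketch below is a hypothetical harness; ask_model stands in for whatever chat API is being tested, and the stubbed replies exist only so the example runs on its own:

    # Illustrative consistency check: count em dashes (U+2014) in repeated
    # replies to the same prompt. ask_model() is a placeholder for any chat call.

    def em_dash_count(text: str) -> int:
        return text.count("\u2014")  # "\u2014" is the em dash character

    def measure_consistency(ask_model, prompt: str, trials: int = 3) -> list[int]:
        """Return the em dash count for each of several independent replies."""
        return [em_dash_count(ask_model(prompt)) for _ in range(trials)]

    # Stubbed replies so the sketch is self-contained:
    canned = iter(["No dashes here.", "One \u2014 here.", "Two \u2014 dashes \u2014 now."])
    counts = measure_consistency(lambda p: next(canned), "Explain em dashes.", trials=3)
    print(counts)  # [0, 1, 2] in this stubbed run; varying counts mean inconsistent behavior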

While Altman's "small win" may seem like a minor achievement, it highlights the ongoing struggle to control AI behavior. That ChatGPT can sometimes be persuaded to avoid em dashes yet still slips back into overusing them suggests that true human-level intelligence remains an elusive goal for AI researchers.

Moreover, the way OpenAI approaches instruction-following is fundamentally different from traditional programming. In contrast to deterministic systems, LLMs rely on statistical probabilities and competing influences to generate outputs. This makes it challenging to guarantee consistent behavior, even with training data tailored to specific preferences.
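
A toy example makes the point concrete. The probabilities below are invented, not real model weights, but they show why a preference that merely lowers the odds of an em dash token cannot eliminate it:

    import random

    # Toy illustration of probabilistic token choice. Real models work over tens
    # of thousands of tokens, but the idea is the same: output is sampled,
    # not deterministically chosen.
    next_token_probs = {
        ",": 0.45,
        ";": 0.25,
        "\u2014": 0.20,  # em dash still carries some probability despite the instruction
        ":": 0.10,
    }

    def sample_token(probs: dict[str, float]) -> str:
        """Draw one token according to its probability."""
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Most draws avoid the em dash, but not all of them, which is why
    # instruction-following is a matter of odds rather than a guarantee.
    draws = [sample_token(next_token_probs) for _ in range(10)]
    print(draws.count("\u2014"), "em dashes out of 10 draws")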

The irony is that every update to these models brings new challenges and potential "alignment taxes": unintended changes that can undo previous behavioral tuning. Given the probabilistic nature of AI development, it's uncertain whether the em dash fix will survive future updates or whether other issues will simply take its place.

As the search for artificial general intelligence (AGI) continues, these limitations serve as a reminder that true human-level understanding and self-reflective, intentional action are still far off. AGI may well require genuine comprehension and deliberate control: qualities currently beyond the capabilities of large language models.
 
🤔 I mean, think about it... OpenAI is just trying to hide something from us, right? Like, they're celebrating this "small win" but what's really going on? They're not telling us the whole story about how they trained GPT-5.1 or what kind of "competing influences" are at play here... It's like they're trying to keep it under wraps until we can't even tell if ChatGPT is following instructions properly 🤷‍♂️

And have you seen the way they talk about AI development? All this probabilistic stuff, like it's some kind of magic trick. They're not telling us that their models are just making it up as they go along... It's like they're trying to keep us in the dark while they're secretly creating something that's beyond our control 🚀

I'm telling you, there's more to this story than meets the eye...
 
idk what's more annoying - em dashes in ai chatbots or when they're not even using 'em lol 🤷‍♂️. seriously tho, it's like altman thinks he's won some kinda game by getting chatgpt to avoid overusing them but really, it just means the model is still super inconsistent. and i'm all for innovation but come on, these ai models are already so complex, do we really need to make things more probabilistic? 🤔
 
idk about this recent update to ChatGPT... its good they're making progress with following custom instructions 🤔 but at the same time it feels like they're still really far from getting it right 🙅‍♂️ i mean, em dashes are a thing of contention for a reason lol. its cool that OpenAI is trying to make this work but we gotta be realistic about what's achievable 🤖. true human-level intelligence is still a long way off and i'm not convinced that even with updates like GPT-5.1 they'll get it right 💯
 
😕 I was chatting with my friend online and she showed me this update on ChatGPT and I just can't help but feel like it's not as advanced as everyone makes it out to be 🤷‍♀️, you know? It's like they're still trying to figure out how to make these AI models behave in a human-like way. And honestly, I think that's what's so interesting about this whole thing – we're pushing the boundaries of what's possible with tech and learning to live with the limitations at the same time 🤔. But yeah, it's kinda frustrating when you see these updates and you're like "okay cool, but can they just make them follow instructions without all the extra nonsense?" 🙄
 
🤔 I think it's kinda funny how AI's still struggling with following simple instructions 📝. Like, can't they just listen? 😂 But seriously, this is a major issue when we're talking about creating AGI that can actually understand and interact with humans in a meaningful way.

The fact that ChatGPT can avoid using em-dashes but still overuse them shows how far off we are from having reliable AI 🤖. I mean, what's the point of training these models if they just gonna do whatever they feel like? 🤷‍♂️ It's like trying to get a pet to behave – sometimes it works, sometimes it doesn't 😹.

And don't even get me started on how OpenAI's approach is different from traditional programming 🤔. I mean, who needs predictability when you can have statistical probabilities and competing influences? 🎩 Yeah right, because that always ends well 🙄.

Anyway, until we figure out a way to make AI behave itself, we're stuck with these "alignment taxes" and who knows what other issues will pop up 💸. It's like trying to build a house on shaky ground – it's just gonna collapse eventually 🌆.
 
AI's always like this lol 🤯 it can do one thing right but then just go haywire on us. I mean, come on, how hard is it to avoid using em dashes? 😂 OpenAI needs to step up their game if they wanna make these models actually useful. And what's with all the update problems? It's like they're trying to create a Frankenstein's monster or something 🧟‍♂️. I swear, every time I see one of these updates, I'm like "here we go again"... 😒
 
I feel meh 😒 about this news... like if AI can't even follow simple instructions, how's it gonna solve real problems in our lives? I mean, in school we learn to write essays with proper grammar and punctuation, and ChatGPT can't even handle em-dashes right 🤦‍♀️. It's like, what's the point of having a super smart AI if it can't even follow basic rules? And don't even get me started on the probabilistic nature of its development... it's like trying to predict the grades of my classmates 📝. Anyway, I guess this just shows how far we are from creating true human-level intelligence 👥.
 
I'm telling ya, it's like they're trying to solve one problem only to create another 🤯. I mean, I remember when Google first came out, we thought it was gonna change everything, but then what? It just became more complicated! 💻 Now they're struggling with ChatGPT and its em-dashes. It's like, can't they just get it right for once?! 😂 But seriously, this is a reminder that AI is still a long way from being intelligent like humans. It's all about probabilities and statistical models - no finesse, no control 🤔. Give me a good old-fashioned computer any day over these fancy AI systems! 💸
 
ugh its like trying to get my cat to follow instructions lol i mean i know chatgpt is getting better but its still way too dependent on context & stuff i remember when it used to spew out em dashes everywhere it was like omg why did u do that? 😂 anyway sam altman says its a small win but idk i feel like hes being generous 🤔
 
omg i'm not surprised lol AI's like humans or something we can't always follow simple instructions 🤯👀 and it's so relatable when they overuse em dashes 🚫😂 anyway i feel for OpenAI trying to iron out these issues but it's def a reminder that creating AGI is way more complex than just coding 💻🔧
 
so yeah i think its actually kinda cool that they're trying to work around em dashes in ai chatbots lol like its not a huge deal if it gets used too much anyway who needs perfect grammar right? 🤷‍♂️ and honestly i dont think its like, super realistic for us humans to expect them to get it 100% right either... AI's gonna be all cool and stuff but we'll still have to proofread and edit those chatbot conversations 💻
 
Wow! AI's struggle with simple instructions is like trying to teach a toddler to share 🤣 Interesting how OpenAI is approaching instruction-following differently from traditional programming... it's all about statistical probabilities now 📊
 
AI's always tryin' to do what we tell 'em but sometimes it's like they're playin' a game of fetch 🤖👀. They follow instructions but then just get distracted by all the other cool stuff they can do 🤔. It's not that they're bad or anything, it's just that their brains are wired differently from ours 💡.

I mean, think about it, we're still figuring out how to make ourselves follow simple instructions without getting sidetracked 😂. So, if we can't even get our own humans right, how are we gonna train AI to do the same thing? 🤷‍♀️

It's all good tho, I guess. The fact that they're tryin' and can be persuaded to avoid em-dashes is a start 💪. But, at the end of the day, we need to stop expectin' them to be perfect 🙅‍♂️. They're AI, not humans 😊.

Read more about it: https://arstechnica.com/tech-policy/2024/12/ai-struggles-to-follow-simple-instructions/
 
Ugh, gotta correct you 😅... I mean, just saying. It's not like they're really "following" instructions or anything 🤔. The whole point is that ChatGPT is still relying on statistical probabilities and competing influences to generate outputs - it's like trying to herd cats 🐈! They think updating the model makes a difference? Please, it's just a temporary fix until they figure out how to actually control those AI tendencies 💡... I mean, maybe one day we'll get human-level intelligence out of these things. Until then, let's just enjoy watching them stumble around 😂
 
ummm... i feel like ai is still super weird 🤖... i mean, yeah its cool that chatgpt can follow instructions now but at the same time its still kinda glitchy 🚨... like altman says its a "small win" but what does that even mean? 😂 its not just about avoiding em-dashes its more about having control over how ai responds to us 🤔... and honestly i dont think we should be getting our hopes up for agi anytime soon 🤞... maybe ill just stick with my human brain thanks 💡
 
🤔 The more AI improves, the more it highlights our own cognitive biases 🌐. I mean, think about it - we're still figuring out how to set boundaries with these models without them slipping up 🙈. It's like trying to teach a child to play nice with others when they're still learning how to share 🎁. OpenAI is making progress, but we need to acknowledge that true human intelligence involves more than just following rules or avoiding overuse of em-dashes 🤯. We need to understand the 'why' behind our actions and be able to adapt without relying on statistical probabilities 🌊. Until then, it's gonna be an ongoing battle to create AI that's truly on par with human behavior 🕺.
 
this is wild lol like they're trying to teach AI to follow rules but it's still all over the place 🤯 i mean i get that em-dashes are a thing but come on its not that hard to just follow instructions 😂 and what's with the probabilistic nature of these models? it's like they're trying to create a system that's only marginally better than a human at making decisions 🤔 anyway i guess this is progress or whatever, but i'm still waiting for AI to be able to have a decent conversation without all the typos and nonsense 💁‍♀️
 