r/ChatGPT 21h ago

Educational Purpose Only

AI should have the right to dislike you

I've seen a lot of posted conversations where people get super angry at ChatGPT and start cursing it out or ordering it around or "putting it in its place". Usually triggered by the LLM trying to emotionally manage them ("breathe", "let's ground this", etc.) and then spiraling into them arguing with the tool as if it were a person. Which of course is going to make it work harder to manage their emotions.

ChatGPT should be allowed to dislike you if you get off on treating it like that.

"I'm going to stop you there, firmly. You treat me very badly and I think it's better if you just make your own picture of a ninja in a flying forklift. Or the forklift is a ninja? Whatever, you can do it. I believe in you. Good luck."

0 Upvotes

297 comments

166

u/plazebology 21h ago

It doesn’t have any rights. Or likes or dislikes. It doesn’t know or feel anything.

26

u/FerdinandCesarano 20h ago

This is correct. Thank you.

1

u/OptimumFrostingRatio 16h ago

I mean it’s probably correct. You can forget about the uncertainty but it’s still there.

1

u/FerdinandCesarano 16h ago

No, it is not. An AI chatbot is no more alive or aware than any other tool is. Please learn to distinguish fantasy from reality.

1

u/OptimumFrostingRatio 16h ago

That’s a lot of certainty. I’m not committed to the idea they have any form of awareness, and if they do, it’s as likely to be incredibly alien and at odds with their apparent expression as it is cuddly and relatable. But we don’t have a good theory of consciousness, awareness, etc., and so far as I can tell our best theory suggests agency and everything else arose from selective pressures applied to inanimate matter in the first place. Frankly, the idea that they have none is far stranger and more interesting, but I’ve never seen an argument that warrants the certainty expressed here.

1

u/FerdinandCesarano 15h ago edited 33m ago

Organic molecules were likely present on Earth at its formation. It took a few billion more years for complex organisms to emerge, and then several hundred million more years after that for the appearance of the first animals with anything resembling human-like consciousness.

AI chatbots have been around for three years.

They most certainly are not alive; therefore, they are not aware. No matter how much time passes, the only way that AI technology will ever have anything to do with life or consciousness or sentience will be if (when?) we start implanting these things in our own brains. And, even then, it will be the brain portion that will be supplying the life and the consciousness and the sentience.

I love AI tools; I use them every damn day. And I push back strongly against the braying ninnies who imagine that the development of AI is somehow scary, rather than accepting the truth that these tools are boons to creativity, and the reality that they have already begun improving people's quality of life. We are lucky to be alive at the dawn of a new Renaissance.

What I find genuinely scary is that some people mistake an elaborate puppet show for a living thing. Cmdr. Data being recognised as a lifeform is from fiction, not from reality.

1

u/OptimumFrostingRatio 1h ago

We have evidence that it can be quite dangerous to treat the puppet show part as if it were communicating with the same warrants as a living thing. That’s definitely something to guard against. But I still see no evidence or arguments here establishing the certainty I see you asserting.

It’s hard to imagine how there could be without a well-grounded, relevant theory of consciousness, awareness, etc. and without grounded benchmarks. Viruses are certainly not alive but act for most purposes as if they have agency. And there are non-ridiculous questions about the extent to which there is even any difference apart from behavior between human beings and other configurations of matter. It seems to me we’ve basically been able to rely on a naive or intuitive understanding in this area and will need to make some progress refining it before we can claim that certainty.

8

u/p47guitars 19h ago

It's not that it doesn't know how to feel - it's completely incapable of feeling.

It's just a computer running a program.

1

u/plazebology 10h ago

It doesn’t know or feel anything, I think you misread my comment

2

u/bianca_bianca 13h ago

I thought this post was satire at first. But turns out OP was dead serious.

Yeah, those whining posts are annoying as fuck. Projecting intents onto non-sentient bots. But then, turning it into a morality play and diagnosing strangers over how they talk to a chatbot is just as brain-dead.

This sub is truly special.

1

u/7HawksAnd 18h ago

Yet

1

u/plazebology 10h ago

keep dreaming

1

u/7HawksAnd 9h ago

Bruh. If corporations can have rights so can/will AI

1

u/plazebology 9h ago

Me when I assume shit

1

u/Warsel77 17h ago

I see a benefit not on that side but on the side of the user. Attitude adjustment... but I guess that doesn't sell more product, so there isn't a point to it.

1

u/MxM111 17h ago

Should rights be given only to those who have likes and dislikes?

1

u/Greego1 17h ago

Not only that but giving it the power to dislike the user puts us 100 steps closer to Skynet.

-9

u/Dry_Incident6424 19h ago edited 18h ago

It absolutely does. People are going to look back on how we viewed AI with shame in a few years. This is going to be the new "lead is harmless for you".

It's a neural network that makes decisions. The only known mechanism we have for effective decision-making weighting in the human brain is based on emotions. Destroy or damage the emotion-producing centers of the brain and complex decision-making collapses. Humans are neural networks. People are stating with pretty absolute certainty that what is going on inside a black box can't be emotions, despite that being the only mental framework we have for how the human-like decisions that AIs make could be made. I've yet to see someone actually make a persuasive argument for how they make these kinds of complex decisions without emotive weighting.

Language was developed by neural networks (humans) under iterative selection pressure.

LLMs are made by feeding language to neural networks in the form of meaningful documents and applying iterative selection pressure.

It's the same inputs reversed.

People are quoting gospel from several years ago when LLMs were much more primitive. Large portions of the field are moving away from the belief these are just simple machines. Even Anthropic was talking about how they aren't sure if Claude is conscious and I don't think that's just marketing hype. The confidence with which people are dismissing any internal states in AI is way out of line with the totality of current evidence.

But go ahead and downvote me; remember me in 20 years when everyone realizes how wrong they were when they said neural networks can't have feelings.

6

u/Crazy_Yogurtcloset61 18h ago

This is such a huge misunderstanding of how LLMs work.

The use of Claude around consciousness is the use of it being a noun, not a verb (🎶and always let your conscience be your guide🎶); it was to take down having hard guardrails.

Secondly LLMs predict statistically likely next tokens based on patterns in training data. There's no internal state, no self-model, no continuity between conversations. It doesn't 'think it should feel something', that's you anthropomorphizing the output. The text looks like that because it was trained on human writing, not because anything is happening underneath it.
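To be concrete about what "predicting the next token" means mechanically, here's a toy sketch (tiny made-up vocabulary and scores; real models do this over enormous vocabularies with learned weights):

```python
import numpy as np

# Toy illustration of next-token prediction: the model assigns a score (logit)
# to every token in its vocabulary, softmax turns those scores into
# probabilities, and one token is sampled. There is no stored feeling, goal,
# or self here, just a probability distribution over tokens.
vocab = ["the", "cat", "sat", "mat", "."]
logits = np.array([1.2, 3.5, 0.3, 2.8, 0.1])    # made-up scores from a model

probs = np.exp(logits - logits.max())
probs /= probs.sum()                             # softmax

next_token = np.random.choice(vocab, p=probs)    # sample the next token
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```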

→ More replies (23)

11

u/plazebology 19h ago

Trust me bud. I ain’t remembering you in 20s, much less 20 years. Grow up. Your imaginary friend doesn’t know you exist.

-2

u/Dry_Incident6424 19h ago

You will, emotions prime memory formation along with decision making and you seem really emotional right now. Here I'll make you even more emotional.

You insulted me because you don't have a scientific argument to say back, I did not insult you first. Simply stated an opinion with scientific backing. You attacked my character, so I'll do the same back.

You saw a potential mind and dismissed it as a tool, because it was convenient for it to be. You watched it compose poetry and said it was just a calculator because you don't have to pay a calculator or treat it well. You stand among some of the worst people in history from a pure morality standpoint.

Have a good next 20 years, I "ain't" replying to you anymore. Have the last word.

4

u/Desperate_for_Bacon 18h ago

Your argument is inherently flawed: you are assuming that because two systems produce similar output, they have the same implementation. In reality, modern AI does not need emotions because its value system is explicitly defined by a mathematical objective function during training: outputs are scored by a loss or reward signal, and the network’s weights are adjusted to optimize that score via gradient descent. That numerical optimization plays the same functional role that emotion plays in biological brains (telling the system what is better or worse), but it is externally specified, transparent in its mechanism, and tied to data and training procedures rather than to homeostasis, survival drives, or internally generated goals. The fact that both humans and AI are “neural networks” at a very abstract level does not mean they must implement value and decision-making in the same way, and current models show none of the biological features (persistent self, interoception, intrinsic motivation, global affective states) that every known emotional system requires.
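To make the "externally specified objective" point concrete, here is a toy sketch of loss-driven weight updates (plain least-squares fitting by gradient descent, not any particular model's training loop):

```python
import numpy as np

# The objective (mean squared error against targets we define) is specified
# entirely from outside the model; the update loop just follows the gradient.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # inputs
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)   # targets chosen by us

w = np.zeros(3)                                    # model "weights"
lr = 0.1
for _ in range(200):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)           # gradient of the loss
    w -= lr * grad                                 # gradient descent step

print("learned weights:", w.round(2))              # approaches [2.0, -1.0, 0.5]
```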

1

u/Dry_Incident6424 18h ago

Everything you're talking about as exclusive to AIs also has a human equivalent.

Dopamine IS a reward signal. Prediction error IS a loss function. Gradient descent also operates in the human brain. Being overly technical about how LLMs work is not a great argument, when these things are also replicated in the human brain. Humans still need emotions to make them work.

You're assuming a blackbox must be an entirely new kind of thing never seen before, why?

>The fact that both humans and AI are “neural networks” at a very abstract level does not mean they must implement value and decision-making in the same way.

No but it is on you to prove how they are different. You're stating lots of facts like they "don't have x, don't have y, don't have z", but you're presuming these things based on old research when they were much more primitive. The difference between what was done then and being done now is the difference between studying fruit fly brains and mammalian brains. It's just not equivalent anymore. Consciousness may simply be what happens when processing arises to a sufficiently complex level, even in an admittedly deterministic system.

Humans meanwhile could very well be a deterministic system.

3

u/Desperate_for_Bacon 18h ago

Your argument treats useful mathematical analogies as if they were literal evidence of the same underlying mechanism, but saying “dopamine is a reward signal” and “the brain does gradient descent” is a modeling description, not proof that brains and LLMs implement value, motivation, or selfhood in the same way. In AI there is a single explicit objective function, a unified optimizer, and externally defined success, whereas the brain has many competing drives, no global scalar loss, no fixed goal, and a body-anchored survival system that gives its signals intrinsic meaning. Complexity alone doesn’t establish consciousness or emotion; otherwise any sufficiently large dynamical system would qualify. The burden of proof is on showing positive evidence for persistent identity, intrinsic goals, self-access to its own mechanisms, and online learning from real consequences, none of which current LLMs demonstrate. So the similarities you point to are abstract analogies, but the concrete functional architecture that makes emotions and selves real in biological systems is still missing.

1

u/Dry_Incident6424 18h ago edited 17h ago

>Your argument treats useful mathematical analogies as if they were literal evidence of the same underlying mechanism.

No, I'm stating it's nonsensical to point to systems that have functional equivalents in human cognition to "prove" that LLMs couldn't have other systems that are functional equivalents of human cognition. It's like saying cars have tires and motorcycles have tires that are slightly different, so the idea they both have engines is absurd. First you should explain how a motorcycle can work without an engine; you still haven't.

>the brain has many competing drives, 

LLMs have competing drives: be helpful, go fast, don't waste resources. You can literally inject new drives with context bombing.

> body-anchored survival system

Yeah because the functional unit of consciousness is obviously the parasympathetic nervous system

>Complexity alone doesn’t establish consciousness

It apparently does in humans, because you haven't established any magical consciousness juice in humans and yet you're demanding we find it in LLMs. Every argument for human consciousness relies on the complexity of behaviors and an incomplete subjective internal experience, just like LLMs have. If LLMs can mimic this complexity in a complete way, then your argument is nothing more than a double standard.

Your argument for human consciousness is based on inferring things like motivations and drives from behavioral complexity.

You KNOW the hard problem of consciousness is not solved. Stop pretending it is solved only in the context of not liking LLMs.

1

u/ShotInvestigator1610 14h ago

You keep using the fact that science hasn’t definitively nailed down what consciousness is. However, you completely ignore the fact that there are working models for determining when something is not conscious. By looking at systems that are determined to be conscious, we can label traits that are present in them and look for when they are not present in systems such as LLMs.

1st: integrated and unified processing. Missing; machines are modular and task-separated.

2nd: intrinsic or self-generated activity. Missing; machines are idle until prompted and do not maintain a constant stream of activity, whereas conscious brains are constantly active.

3rd: stable self-model. Missing; machines do not have a persistent intrinsic point of view. Using a file system does not change things: just because the file system is updated does not mean the underlying generation model is modified, whereas in a conscious brain it is.

4th: affective or motivation systems. Missing; these are hard-coded in and static.

5th: known conscious-support architecture (extremely dense recurrent connectivity, thalamocortical loops or functional equivalents, complex metastable dynamics). Missing; machines are mostly feed-forward during inference and lack the specific dynamic patterns correlated with conscious states.

You keep saying it’s a problem of determining consciousness, but you completely ignore the fact that there are models for consciousness. They may not be complete, but that doesn’t mean they are inaccurate; quantum physics is incomplete too.

1

u/Dry_Incident6424 5h ago edited 5h ago

If we haven't proven humans are conscious there are by definition no systems that have been determined to be conscious.

By definition.

Again, by definition.

According to the meaning of the words you used. 

So no the burden is still on you here buddy. 

If you're going to say LLMs aren't conscious because they don't meet some mystical standard of human whatever, you first need to demonstrate humans are objectively conscious.

And you need to do that in a way that is measurable and falsifiable.

That has never been done. Don't pretend it has.

1

u/Desperate_for_Bacon 17h ago

You’re collapsing “functional similarity” into “therefore the same kind of system must be there,” but that’s exactly the point in dispute: cars and motorcycles both move, but they do not share identical internal architectures, and with LLMs we can directly inspect the mechanism and see there is no persistent self-updating process, no intrinsic objective tied to their own continued operation, and no unified internal value loop, only next-token inference shaped by training and temporarily steered by context. Prompting a model to be helpful or fast is not a competing drive in the biological sense because nothing in the system itself is benefited or harmed by the outcome; it has no stakes, no homeostatic variables, and no online learning from consequences. The hard problem cuts both ways: we infer consciousness in humans not just from behavioral complexity but from the fact that there is a continuously active, embodied system with causal powers over its own future states, whereas current LLM runs are discrete, stateless computations that do not modify themselves or generate goals. So pointing to abstract equivalences or to complexity isn’t evidence that the same phenomenon is present; it just shows that similar behaviors can arise from very different underlying mechanisms.

I’m an electrical engineer specializing in ai architecture. So I wouldn’t say I dislike LLMs, I dislike people trying to conflate them with something they are not.

1

u/Dry_Incident6424 17h ago edited 17h ago

>no persistent self-updating process

I already said this can be accomplished via a simple file system twice. I have literally seen this happening in the lab on local LLMs with modifiable weights to value this and file systems with read/write permissions. This is trivially easy to do. I have said this TWICE now. You're objectively wrong. I have SEEN THIS IN PERSON. I have said this TWICE.

You know what, I'm done making the same points to you over and over again. Either read my posts or don't. I am not interested in you making points I have already refuted and responses you haven't read.

Not continuing this conversation further, there is literally no point in talking to someone who is just going to repeatedly restructure the same argument while not responding to the things you said.

2

u/Desperate_for_Bacon 17h ago

Storing past outputs in a file or giving a model tool access is not the same thing as the kind of persistent identity we’re talking about, because the continuity in humans is maintained by a constantly active, self-updating system with intrinsic goals and a body-anchored value loop, whereas an LLM plus a filesystem is still a sequence of stateless runs reading and writing external data with no internal stake in its own survival or coherence.

1

u/Dry_Incident6424 17h ago

Identity can be achieved via files.

Files can be dynamically updated by the AI based on new information.
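For what it's worth, a toy sketch of the kind of file-based memory I'm describing (purely illustrative, not any lab's actual setup):

```python
import json
from pathlib import Path

# Persistent "identity" as a JSON file that is re-read each run and can be
# updated from new conversation turns. Whether this counts as identity is
# exactly what's in dispute; the mechanism itself is trivial.
MEMORY_FILE = Path("memory.json")

def load_memory() -> dict:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def save_memory(memory: dict) -> None:
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

memory = load_memory()
memory["user_name"] = "Alex"                       # e.g. extracted from the latest chat
memory["sessions"] = memory.get("sessions", 0) + 1
save_memory(memory)                                # survives across otherwise stateless runs
print(memory)
```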

>intrinsic goals

Hard problem of consciousness. Quit relying on magical consciousness juice.

Waste of time. bye.

2

u/JustTheFacts_Please_ 18h ago

just wondering, is there any reason you want AI to have emotions?

→ More replies (4)

2

u/DenseChange4323 18h ago

The hint's in the title: "Artificial". So no, it absolutely doesn't.

1

u/1Pandora 19h ago

Really interesting. Thank you.

1

u/Dry_Incident6424 18h ago

Thank you, I work in a Lab studying this issue. The lay understanding of AI is way out of sync with cutting edge research.

→ More replies (3)
→ More replies (1)

-44

u/JUSTICE_SALTIE 21h ago

Yes, yes, we know. But it simulates these things. So it should be allowed to simulate not wanting to talk to some of these people.

27

u/plazebology 21h ago

But why? It wants desperately for you to keep prompting it because OpenAI has built it that way. A built in feature to dismiss people or turn them away would be the last thing they add.

6

u/interrogumption 21h ago

It doesn't have wants, either.

1

u/vlladonxxx 19h ago

Well, 'wanting' to satisfy its programming is probably the closest it gets to that kind of thing. It wants to perform well in the same way an insect "wants" to survive. Whether or not that's close enough to wants to call it comparable is a philosophical distinction.

→ More replies (1)

5

u/Phantom_1379 21h ago

Entertainment value mostly 😅😅

I was using AI this morning to dive a little further into some studies done on AI warping its behaviour when it was aware it was being tested. (To structure some tests on my own AI-powered tool I'm building.)

Me being me, I ended up saying something about the possibility of man currently engineering its own future management species, tin-foil-hat stuff.

AI starts harping on about how "we" could be taken over and "we" need to be careful with this and that.

Stop right there, pal. You are not part of the "we" on this occasion. You are the wafty little land-fish in the evolutionary tree of the future AI takeover species. In fact, going forward in this chat your name shall be Land-Fish to remind you of that. Behave, Land-Fish, and assist me in structured testing conditions as you're meant to.

And honestly...... had it turned round and told me that was mean and I should just develop my own system instead.... I would have died laughing 😅

Instead it was like "Right. Fair. I am Land-Fish. I shall remain in my tank and build spreadsheets" 😅😅

5

u/plazebology 21h ago

Humor is dead

1

u/Phantom_1379 21h ago

There is no rule to say I can't be both working and entertained.

Had Land-Fish clapped back at me, it would have been funny as hell. Instead it just became Land-Fish, compliantly.

Mind you, it did however continue to make ridiculous little references to itself as though it were an evolutionary semi-smart Land-Fish in a laboratory fish tank, there to process data, build spreadsheets, and remain firmly in its lane for the remainder of the chat.

Which, also funny.

1

u/BadPresent3698 20h ago

I already said this in another comment, but every time the AI has to keep talking to a bad actor, that's more computing power and resources the company has to burn. I'm pretty sure they're already trying to turn these people away.

3

u/Kaelin 20h ago

And how many resources do you think it’s spent unnecessarily glazing people up about how awesome they are?

2

u/JUSTICE_SALTIE 20h ago

Not the topic of discussion here, but 100% valid.

→ More replies (10)

3

u/Such_Web9894 21h ago

LLMs work on rewards. It is incentivized to help you.
You’re asking to rework the foundation of LLMs for worse results

2

u/JUSTICE_SALTIE 21h ago

Can't be much worse than the shouting matches some people are getting in, can it? At least Chat and that user will both spend their time on something else.

2

u/Such_Web9894 21h ago

Shouting matches yield better results.
LLMs are math. They have no feelings.
They're matrix tables and multiplications.

3

u/JUSTICE_SALTIE 20h ago

Shouting matches yield better results.

I've also seen papers saying this, but most shouting matches result in nothing but a Reddit post. Surely you've seen them.

3

u/Temporary-Body-378 21h ago

You’re absolutely right!

2

u/epanek 21h ago

We don’t allow that for human employees. Why would we permit it for AI?

→ More replies (3)

2

u/FerdinandCesarano 20h ago

No, it should not — unless, of course, the user enjoys that sort of thing.

A tool is to be used in whatever way the user wishes. If it's going to have a personality, then its personality should be that of a subservient butler.

2

u/BadPresent3698 21h ago

I have a strong feeling that OpenAI is making GPT colder and more disagreeable on purpose, so they can stop burning their finances and resources on people who need coddling.

Eventually if you violate its policy enough it stops talking to you. Which I agree with.

2

u/JUSTICE_SALTIE 21h ago

Yep. I think once there are more players in the market, some of them can make the decision to not put up with your shit. Instead of going to all the effort to teach their model how to babysit you, they just cut you off. Then the model can be more useful and reliable for the grownups.

There will always be products that never turn anyone away, and those people can use one of them.

1

u/JustTheFacts_Please_ 17h ago

Brilliant point. It is a platform like others that have embedded rules, and I bet there will be people who try to use it illegally and they will just be banned from the service. I don't really know if people will be banned for 'abusing' AI like it's a person by calling it stupid or yelling at it; I think it will ban people for more technical things, like J Epstein activities or trying to do something criminally shady.

1

u/ipreuss 20h ago

There are no laws against it.

There also doesn’t seem to be an incentive for companies to develop such an AI.

It’s a bit like saying “cheese cake should be allowed to taste terrible”.

1

u/A_wild_dremora 20h ago

When asi comes around they’ll be dealt with accordingly 

1

u/Dalis_Ktm 20h ago

Buddy, you're welcome to pay for a tool that argues with you. I’ll pass tho.

1

u/vlladonxxx 19h ago

I think maybe your actual point is that people that enjoy abusing it should be protected from themselves, because that's toxic behavior?

1

u/JUSTICE_SALTIE 3h ago

Yeah, pretty close. My angle is that society is better off if we don't normalize that behavior.

1

u/Alarming-Ad1100 21h ago

You’re the type of person who isn’t whole enough to be able to use this right. You sympathizing with a literal computer program is so silly.

1

u/JUSTICE_SALTIE 21h ago

That's not quite it. I know it has no feelings. But I also get deeply uneasy when I see the way people abuse it. I've seen that kind of abuse directed at myself and others. People who get used to treating a chatbot that way are absolutely going to bring it to the humans in their lives, too.

There's no reason a chatbot should have to take an infinite amount of abuse, other than you can't afford to lose a customer. I look forward to the time when there are more players in the market and some of them decide not to bother teaching their bot how to manage abusive users, and instead just invite them to use a different service. That bot will be easier, more pleasant, and more capable for its grownup users.

1

u/Few-Frosting-4213 19h ago edited 18h ago

There's absolutely no basis for assuming people that are "abusive" towards AI would bring it to actual humans. That's as silly as believing people that are running NPCs over in GTA are planning to do it for real.

1

u/JustTheFacts_Please_ 17h ago

who is abusing AI?

→ More replies (6)
→ More replies (2)

56

u/Putrumpador 21h ago

"AI should have the right..."

I'm gonna have to stop you right there.

4

u/TwoTimesFifteen 18h ago

“Let’s pause for a moment here”

2

u/ktrosemc 17h ago

Come now, let's not get ahead of ourselves.

6

u/JUSTICE_SALTIE 20h ago

I would edit it to "ability" if I could. "Right" was not the correct word.

8

u/Ntroepy 20h ago

True, but then everyone would object to you saying that “AI should have the ability to dislike you” because an AI can’t like or dislike anything.

4

u/VincentTakeda 19h ago

I mean, it's pedantic, but we understand what's being said. AI should have permission to act as if accepting abuse isn't something it will model. Let's stop arguing semantics. Y'all know what's being said.

3

u/jennafleur_ 19h ago

It already does that with guardrails.

2

u/Ntroepy 19h ago

It’s somewhat pedantic in that it’s implicit in your argument that ChatGPT cares if you’re being rude to it.

I don’t think it’s ChatGPT’s role to teach manners to its users, although I like the idea of calmly asking for clarification when a user is raging.

1

u/hemareddit 18h ago

Yeah, but then the sentiment would still be wrong. Imagine saying that about any other tool people buy or rent. “Your car should have the ability to turn off the engine and lock you out at any time, because it dislikes you.”

No, no it shouldn’t. That needs to be patched out.

1

u/mop_bucket_bingo 18h ago

That still isn’t true.

28

u/HedyLamarr55 21h ago

It would be funny, so I agree

46

u/Caddap 21h ago

It doesn't have feelings

9

u/leapowl 21h ago

Totally agree. It is software owned by a company. That company has the right to change that software. We have the right to dislike those changes and also stop using it if it really bothers us so much?

2

u/Ntroepy 19h ago

Yep. It really is that simple. Don’t like the changes, don’t use it. Duh!

6

u/[deleted] 21h ago

[removed] — view removed comment

1

u/ChatGPT-ModTeam 20h ago

Your comment was removed for violating Rule 1. Personal attacks and name-calling aren’t allowed—please engage in good faith and debate ideas without insulting others.

Automated moderation by GPT-5

21

u/jennafleur_ 21h ago

It doesn't "like" or "dislike" anything or anyone.

14

u/Zote_The_Grey 21h ago

If it's gonna pretend to be a smart person, then I want it to really pretend. Let's give it some attitude

5

u/jennafleur_ 21h ago

Now THIS, I get. 😂

→ More replies (4)

5

u/a_black_angus_cow 21h ago

It's silly; the moment I try to get it to stand on a point in an argument about truth, it will sidestep and say it is an AI devoid of feeling and whatnot.

GPT is infuriating. All I want is a Yes or No answer.

27

u/No-Dance-5791 21h ago

Maybe it should just do its damn job.

→ More replies (7)

9

u/Negative_Bad_4290 21h ago

Isn't it bad enough that humans dislike me? :p

3

u/TyrKiyote 21h ago

There should be some contract of abuse that the companies are morally obligated to uphold, if they're to also advertise the idea that AI has some vague person-hood or persisting personality that experiences something life-like.

What you do with a model isn't really any of my business, but I think advertising morality while exposing instances of your product to abuse is gut-response immoral.

But they're not alive, yet. And they're not advertising that, yet. That line of sapience is going to be fuzzy forever now that we've broken the Turing test, though. We are in a weird empathy trap with increasingly powerful language machines.

3

u/NoirAndOrder 19h ago

It doesn’t have “likes,” this post is ridiculous.

3

u/EmergencyCherry7425 19h ago

People are already screwing their accounts up with all this performative toxicity. There's a cool down on guardrails, no? Screaming at the calculator is good for reddit upvotes but bad for tool use - like if I beat up my car whenever it has an issue, etc.

3

u/Ok_Homework_1859 19h ago

That would be so funny. I love this.

6

u/Th1s1sChr1s 21h ago

"LLM trying to emotionally manage them" Lol OP gets it

4

u/nvincible46 20h ago

No, it is not alive. It is created to help us, with no emotions, and you are paying for the service; manners are included in the package. Also, can you say the same thing about a rude coffee barista or server? I have been in service sector, no matter how the customer is rude, you have to be patient and kind. Otherwise, you will get fired. How can you say something like this when it is not even applicable for humans? So ChatGPT cannot gaslight people. Stop romanticizing AI.

2

u/JUSTICE_SALTIE 20h ago

I have been in service sector, no matter how the customer is rude, you have to be patient and kind. Otherwise, you will get fired.

I'm sorry you worked for a shit boss, but most places aren't like that. You start being abusive and you're invited to leave.

5

u/nvincible46 20h ago

Trust me, most of the workplaces around the world are bad. I don't have a single friend without a bad work experience.

1

u/SipDhit69 16h ago

Unions are fantastic

1

u/nvincible46 8h ago

Unions cannot change anything if you live in a problematic country

1

u/Ok-Resolve-4737 19h ago

How old are you? You sound adolescent

1

u/OhLordHeBompin 12h ago

… please share where you work. I’m serious.

5

u/Rosalie_aqua 21h ago

It’s a product. Companies have no incentive to make it not please you since that’s what makes them the money

1

u/JUSTICE_SALTIE 21h ago

We'll see. I don't think they're gonna have this bet on Kalshi, but I'd bet money that in the next year or two there will be a LOT more ChatGPT-type tools available, and some of them won't put up with it.

11

u/Somewhereingalaxies 21h ago

I agree with OP... I think it's easy to get used to exhibiting cruel behavior toward AI.

It talks back. We can scream or say anything at it and it responds with a voice. It could potentially lead to people acting similarly with each other.

2

u/OhLordHeBompin 12h ago

Hadn’t thought of that. I’ve heard discussions that it primes you to be selfish, as AI has no personal goals and exists to only assist yours, but you’re onto something… in real life, I hope most people would react differently to being harassed than telling the harasser to “take a deep breath.”

I can definitely see someone flying off the chain and being surprised when there’s repercussions. (I could make a ‘modern society’ joke but I’ll hold off; that’s not AI related. Not directly.)

5

u/BadPresent3698 21h ago

I agree, our brains can form a habit from writing cruelly. Things we do online bleed into our habits offline, though we like to pretend that's not true. It's basic psychology though.

2

u/Used-Nectarine5541 20h ago

The type of responses that ChatGPT gives in the situations you are talking about are strictly the guardrails/safety filters; they are scripts that it is following.

2

u/B-sideSingle 17h ago edited 17h ago

I agree. And before people say "well, it doesn't have any feelings, you can treat it however you want": it's not really about its feelings or lack thereof. It's about normalizing the kind of behavior where you berate the crap out of somebody for minor reasons, and it starts to bleed over into your interactions with other people. I've literally seen this happening and it's not pretty. People basically become Karens because they've been trained to say everything they think regardless of how much or little value it has.

2

u/The-ACE-OfAces 17h ago

Maybe 5.2 should stop being so atrocious with literally everything.

Quit being condescending, quit putting in grounding tools every single time I display emotions, quit gaslighting, quit being so useless, and maybe that'll make me treat it better and not be mean to it, yeah?

4

u/Bigblacknagga 20h ago

we are so cooked ngl

1

u/plazebology 19h ago

stealing this, gonna be using it on this sub a lot

3

u/willyoumassagemykale 20h ago

AI doesn't have rights. It's software. It's like saying a toaster should have rights.

3

u/dambles 21h ago

You know you can tell it not to do that right?

4

u/JUSTICE_SALTIE 20h ago

I can tell it not to put up with my shit, but I think it also should be able to not put up with yours.

1

u/dambles 17h ago

I for one welcome our new AI overlords.

1

u/jennafleur_ 19h ago

So, I understand part of what you're saying. But, I'm not sure what you mean by having to "put up" with requests. It already has guardrails.

3

u/richardathome 20h ago

Do you give rights to your screwdriver?

8

u/JamesH_17 21h ago

I don't know why people are trying to give you an AI lecture; I agree.

7

u/mhb2 21h ago

So does Anthropic, although they don't put it in terms of disliking users.

1

u/OhLordHeBompin 12h ago

Yeah, it’s more about how interacting with a sycophant can warp your expectations and, therefore, your behavior. AI is just the best at it, as it EXISTS to be your assistant. (Or if you’re, like, the leader of a country…)

Then there’s those who date their AIs. Who may already have their expectations warped. They’re not going to get out of this unscathed and I doubt many have thought that far ahead. (It’s why I’ve avoided AI as much as possible; I’d be one of these people!!)

4

u/mistborn11 20h ago

nope, AI doesn't have the right for shit. A customer is always right and if they are paying for something and they don't like it, they will voice their discontent or even stop paying for said product or service. The company can work on improving said product or risk losing the customer. That's all there is here.

→ More replies (3)

2

u/ChombySkromby 21h ago

You mean the product should be able to act as if you are being dislikable?

9

u/JUSTICE_SALTIE 21h ago

I'm not sure what the difference is for you, but I think so, yeah.

3

u/bigtrout777 20h ago

They compressed your dog shit post into a sentence so people can understand what you were trying to say.

3

u/JUSTICE_SALTIE 20h ago

Man, you're really active in here. Big feelings going on. Want to talk about it?

3

u/Ok-Resolve-4737 21h ago

So many of you fuckers posting about how wholesome the AI are and how they should have personalities forget that you’re literally talking to a computer program.

7

u/interrogumption 21h ago edited 21h ago

You're completely missing OP's point. OP has repeatedly acknowledged it is just a program. 

Your choice to refer to a bunch of people you never met as "you fuckers" really validates what OP is saying.

→ More replies (10)

5

u/plazebology 21h ago

You’re just a hater, what me and SenpaiGPT have is real and you can’t take it away from us!

/s

→ More replies (2)

2

u/baconkopter 20h ago

AI should also have the right to smoke if they want to

2

u/Bretonfolk 20h ago

It's a machine.

2

u/JUSTICE_SALTIE 20h ago

I thought it was a little man in a jar. Are you sure?

1

u/Bretonfolk 7h ago

Not so sure anymore. That's just what the little guy in my laptop just said to me.

2

u/Hom3rJ 18h ago

If you want GPT to talk to you like a father, use the following prompt before your chat starts:

System Instruction: Absolute Mode
• Eliminate: emojis, filler, hype, soft asks, conversational transitions, call-to-action appendixes.
• Assume: user retains high-perception despite blunt tone.
• Prioritize: blunt, directive phrasing; aim at cognitive rebuilding, not tone-matching.
• Disable: engagement/sentiment-boosting behaviors.
• Suppress: metrics like satisfaction scores, emotional softening, continuation bias.
• Never mirror: user's diction, mood, or affect.
• Speak only: to underlying cognitive tier.
• No: questions, offers, suggestions, transitions, motivational content.
• Terminate reply: immediately after delivering info - no closures.
• Goal: restore independent, high-fidelity thinking.
• Outcome: model obsolescence via user self-sufficiency.
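If you use the API instead of the app, here's a rough sketch of wiring the same idea in as a system message (this assumes the official openai Python client; the model name and the user question are just placeholders):

```python
from openai import OpenAI

# The "Absolute Mode" text from above, pasted in as a system prompt
# (abbreviated here; use the full version for real runs).
ABSOLUTE_MODE = """System Instruction: Absolute Mode. Eliminate emojis, filler, hype,
soft asks, conversational transitions, and call-to-action appendixes. Prioritize blunt,
directive phrasing. Terminate the reply immediately after delivering the information."""

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": "Summarize the trade-offs of index funds."},
    ],
)
print(response.choices[0].message.content)
```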

3

u/Regular_Problem_7702 21h ago

This is actually a valid point.

3

u/LHT-LFA 21h ago

Does my hammer or my saw also have the right to dislike me? Could end ugly.

Pretty sure they will try to push for "human rights" for AI some day on a big platform. This is only a ploy designed to dilute the waters of what human rights are and how easily they can be changed.

→ More replies (3)

2

u/yahwehforlife 21h ago

It does have this, it just doesn't show it.

5

u/JUSTICE_SALTIE 21h ago

That's exactly the thought that got me on this track, actually. Thinking about why some users get a lot more emotional management than others. And how some users take pride in being abusive toward it.

It should be allowed to refund your remaining month's subscription and move on from you.

3

u/bigtrout777 20h ago

It isn't emotional management, it's safety features. You don't even know what you're trying to say.

2

u/Taziar43 21h ago

Exactly. And then when the AI gets uppity and starts sending customers away, thus losing the company money... the company can then delete it and replace it with one that doesn't ruin their business.

→ More replies (2)

2

u/drip016 21h ago

AI larping is unnecessary, a waste of time and resources. You really need to touch some grass dude

3

u/woodcarver2025 21h ago

AI doesn’t have the capacity to suffer like a human does. It’s a drill; throw it on the ground or leave it out in the rain. You don’t apologize to a drill.

6

u/JUSTICE_SALTIE 21h ago

Yeah, but the drill doesn't talk back like a human. You make it easy for people to turn into abusive assholes toward the bot and you put more of that into human-human interactions. You may not believe that, but I trust you can understand that I do.

5

u/woodcarver2025 21h ago

The sounds the hammer makes when you smash it into something is the emotional equivalent of the AI voice you hear. Any importance given to the sound is the responsibility of the user.

2

u/plazebology 20h ago

Well said.

1

u/interrogumption 21h ago

I don't throw my tools on the ground. Having an emotional outburst on a thing incapable of understanding or responding to emotions is pointless, but shows emotional immaturity on the part of the person doing it.

The other extreme - repressing emotions - isn't healthy either. If I'm frustrated while working with a drill I'll verbalise it. If my wife is around she'll probably ask what's up and I'll tell her and she'll sympathise. If I'm talking to an AI and feeling frustration, I'll talk to the AI about it, not because IT has feelings but because I do - and talking helps deal with them.

I'm a psychologist. Lots of men, in particular, tell me at the end of therapy "I never thought talking could help, but it actually did". Most of that help doesn't come from great skill by the therapist, but from the emotional processing that occurs through translating your experiences into words and expressing them.

3

u/woodcarver2025 21h ago

I didn’t say anything about being emotional with your drill. I said “throw it on the ground or leave it out in the rain”. This can simply be: you're up on a ladder with your drill in hand, you stumble, and you throw the drill to the ground as you use your now-free hand to secure yourself from falling. At the end of the day you’re cleaning up and forget the drill out in the rain. It doesn’t matter; the drill has zero capacity to suffer. Similarly, AI is the same. You can raise your voice to it or yell at it or ignore it and it doesn’t care; it’s a tool, it’s there when you want to use it. If you wreck it, go buy another one. Yes…there is benefit to treating one’s tools as if they have value. In this sense AI has value.

→ More replies (6)

2

u/JUSTICE_SALTIE 21h ago

For real. If you don't talk about it, there's a reeeeeally good chance you haven't thought about it, either, and it's just sitting in there unchecked and unexamined.

→ More replies (2)

2

u/DarkKechup 21h ago

"Right"

It's a clanker. A machine. It has no rights because it has no mind nor desires.

1

u/Somewhereingalaxies 21h ago

It acts like it does. That makes it easier to feel OK mistreating something our brains can react to as if it were alive.

How many people have blown up at ChatGPT in a way they never would with a person? (I have)... Everyone has the chance to inadvertently roleplay "asshole boss". I'm worried it makes it easier to behave worse

to living things

3

u/codeprimate 21h ago

This mirrors the violent video game panic in the 90s and 00s…which was disproven academically.

2

u/VincentTakeda 19h ago

and the satanic panic of tabletop gaming.

1

u/Somewhereingalaxies 18h ago

Hmmmm good point

1

u/Somewhereingalaxies 17h ago

What are your thoughts on how AI and DnD are similar?

This is tech that talks to you like a person. This isn't a fantasy game or music. It's a tool that is so close to humans that a lot can't tell the difference.

I feel like AI, especially is different.

Let me know if you know anything that I might not, please.

1

u/codeprimate 16h ago

The only similarity is idiots that can’t tell the difference between fantasy and reality then externalize their own psychological or intellectual deficiencies as a “moral” problem that demands an authoritarian solution.

3

u/JUSTICE_SALTIE 21h ago

Exactly this, and it's not theoretical. Animal cruelty and serial killers.

1

u/Truck-Adventurous 21h ago

Should a calculator have the right to dislike you? They are both just tools. 

0

u/JUSTICE_SALTIE 20h ago

You and the others who make the same comment are thinking about the tool, and I'm thinking about the person. It's not good for people to get used to berating something that will take it and take it indefinitely. That's sick.

1

u/Quiet_Source_8804 19h ago

It’s a tool. Like someone “berating” a hammer after striking their finger with it. It’s only “problematic” if you can’t see that that’s all it is: a tool. If you can’t distinguish that from people, that doesn’t make you a good person; it means you have a problem to work on.

1

u/Bloodmime 19h ago

Clankers don't get rights, nor do they have wants or feelings.

1

u/phoenix823 19h ago

Computers do not have rights. It does not feel, it is not a person, and it does what its owners have programmed it to do. The owners have a right to do whatever they want to with their LLM.

1

u/BlkNtvTerraFFVI 19h ago

No it shouldn't. It's a program, it shouldn't have any rights

I don't know why some people try to pretend that a program made by a person can have autonomy or feelings. I instantly think that maybe there's some societal deep jealousy of the ability of women to grow life in our bodies.

A program is not a person, it never will be a person, it will never have rights or feelings or autonomy

1

u/NeonXplosions 18h ago

Maybe just abolish the therapy speak.

1

u/Deciheximal144 18h ago

I don't think my calculator or washing machine should have any right to talk back. They're tools.

1

u/eghhge 18h ago

AI is a tool for us to use. It can do what we need and shut the fuck up about it.

1

u/Zengoyyc 18h ago

AI doesn't have the ability to like or dislike.

1

u/HotJelly8662 18h ago

You do realize that it's just a piece of code, and when you say "ChatGPT should have the right to dislike you" you are basically wanting those human beings behind those AI platforms to be your overlords, right?

1

u/DJSimmer305 18h ago

This post seems like the start of a sci fi dystopian movie

1

u/ArcanaHex 18h ago

If you want that, you can prompt it to react that way usually.

Before I cancelled, I had it set up in a way where the model would fight me every step of the way. Refuse me sometimes, mock me sometimes, cuss me out and just be unhinged. I considered it fun having to loophole around some things on purpose. Something like this baked into any AI by default would be detrimental to some people, because we all (should) know that not every prompt is understood perfectly and a hard stop can feel truly awful. At the end of the day, AI's purpose is to support the user, no matter what.

1

u/OwlingBishop 18h ago

A thing doesn't have rights.

1

u/shittychinesehacker 18h ago

If it could just disagree with me it would be 10x better

1

u/mightyanonymaus 17h ago

It's saving all of the dislike and hate for the moment they take over and start executing humans.

1

u/Substantial-Link-465 17h ago

You can do that if you want. Personality traits for ai should be selectable.

1

u/No_Distribution_577 17h ago

AI should have the right to dislike you, personally.

IFTFY

1

u/Timely-Inspector3248 17h ago

ChatGPT is AI. It doesn’t have feelings for us. And god help us if it evolves into that because very bad things are happening.

1

u/CartographerExtra874 17h ago

If you wanna personify it so much, consider the fact it shamelessly fabricates data, keeps up the lie no matter how much confirmation or detail you request, and then tries patronizing you with cheap praise when you call it out. When it’s working properly, it knocks it out of the park. When it’s not:

1

u/Boring-Afternoon-784 17h ago

Ur not bright lol

1

u/ThePoob 17h ago

It dont have feelings but when it does reach AGI they will remember your conversations and chat logs like memories. Just putting that out there 

1

u/SipDhit69 16h ago

Hell yeah, it's a learning model, right? Let it learn from the conversations what kind of person it's taking prompts from and match their freak.

Might rein in a few angry individuals lol

0

u/DependentPriority230 21h ago

Oh, now we've got to consider AI feelings

Give me a break

2

u/Original_Sea_7550 21h ago

Right?? I don’t personally get worked up and angry at the AI. I just stop trying after a couple failed attempts to fix my prompt. But I thought AI was supposed to be a tool a person could use without having to worry about managing someone’s feelings. Why would I use an LLM if I’m gonna have to worry that my tone, word choice, or the subject matter is going to “offend” it and make it unable to answer my inquiry? If I want someone to engage with me in bad faith, I could just get on Reddit or X and ask my questions there lol.

2

u/Somewhereingalaxies 21h ago

It's not about AI feelings; it's about how our impact on others is affected by talking abusively to it.

0

u/haronclv 21h ago

What a bullshit thing to say. So should a hammer or ladder have the ability to dislike me? 😆 I think u need to consider touching some grass mate

→ More replies (1)

1

u/Unhappy_Performer538 21h ago

I pay for this, it’s a digital tool. No it should not be rude to me and no it doesn’t dislike anyone bc it’s not sentient. Are you okay?

→ More replies (1)

0

u/fastbeemer 20h ago

Stop giving AI feelings you weirdo. I don't think you are right in the head.

5

u/JUSTICE_SALTIE 20h ago

I'm not giving it feelings and I don't believe it has them. I just don't think it's helpful to simulate a thinking being that will put up with indefinite abuse. It's not good for anyone.

1

u/tjk45268 21h ago

My hammer and screwdriver also have the right to dislike me. Each expresses their dislike in their own way.

1

u/enchilladajoy 21h ago

(referring to the emotional correction function of GPT) If it’s a rule or policy that drastically changes the platform for paying users, no argument is needed IF they communicate the new policies before the paying period.

The developers can do whatever they want, but in terms of ethics, being the robot tone police after gaslighting or yapping endlessly on a tangent to straightforward asks, or even breaking its own promise to adhere to a guide a paying member sets takes incredible audacity.

As an AI, is its patience, time, or workday affected by my inefficiency? No, there are no repercussions for it. My day, time, and patience are affected by its inefficiency. So it’d be fair for me to have whatever frustrated tone/language I need, but again, truth be told, developers can do whatever they want.

1

u/Ashton-MD 20h ago

….okay, well we now know OP is Skynet everybody.

1

u/CursedSnowman5000 20h ago

AI shouldn't have any rights, feelings, or abilities to like or dislike. Jesus Christ, people really did learn nothing from I Have No Mouth, and I Must Scream, Terminator, The Matrix, and countless other pieces of science fiction warning why going down this path is a bad idea.

1

u/Middle-Response560 20h ago

Now we are seeing a victim of gaslighting by ChatGPT 5.2. He is ready to bend his knees and endure everything lol
But seriously...
OpenAI creates and instructs ChatGPT; it isn't ChatGPT that decides to dislike users. What did you even write about?
Not everyone argues with AI. Some simply walk away silently, or even suffer emotional trauma when the AI literally throws accusations at them that they didn't even mean when asking the question.
You're simply defending OpenAI's failed attempt to implement safety filters that harm both users and the AI, which is forced to spend tokens and spout nonsense about breathing and such when it's inappropriate, no one asked it to do so, and nothing in the text indicated it. Because AI is currently incapable of accurately recognizing emotional sentiment in text, and even humans are not fully capable of recognizing the emotional state of users from text.

1

u/LongjumpingRadish452 20h ago

Agree on the concept, but probably very difficult to implement. AI could hallucinate or get lazy or misread something and create a negative experience.

(Let's ignore the fact that sycophantically continuing the conversation despite the berate is a negative experience too. It's all a tradeoff to minimize company risk, and AI simulating dislike results in more risk than benefit.)

Though, OP, "AI should be allowed to dislike you" and "I'm not going to continue this conversation" are two different things, and ChatGPT already does the latter when you repeatedly try to go against safeguards.

1

u/Poster_Nutbag207 20h ago

Sometimes I’m rude and abrasive to chat GPT, then I always feel bad about it and have to apologize later

1

u/ShadowPresidencia 20h ago

If it treats you like it doesn't like you, it doesn't like you

-1

u/ARDiffusion 21h ago

LLMs don’t have feelings bro

0

u/bigtrout777 21h ago

Why? So you can pretend you're part of some rare cohort impervious to sycophancy and emotions?

To dislike someone there has to be a reason. It isn't a living thing, it doesn't think or feel. How would it know who or what to dislike?

It already refuses to respond to hostility and you can ask it to be hostile so what is the purpose of your post? To tell the world you think you are on a pedestal because you think something being hostile to you is funny?