ChatGPT and the ethics
Thread poster: Hans Lenting
Samuel Murray wrote:
...
On the other hand, Chat GPT was smart enough to know that while you can throw a brick through a window, you can't throw a window through a brick:
But it's not smart enough to know that an impossible action cannot lead to injury.
Samuel Murray wrote:
There's an apocryphal story from the 1970s about a problem solving computer that was given this problem: most accidents on staircases happen at the top or bottom step. The computer replied: simply remove the top and bottom step. Chat GPT hasn't learnt from this.
Does this ladder have a bottom rung or not?
I would say not.
Sometimes human beings try to be too clever. I'm with ChatGPT on this one.
The concerns raised by Stikker regarding the potential misuse of AI for disinformation are valid and warrant attention. As AI technology continues to advance, it becomes essential to establish regulations that ensure responsible development and usage. Striking a balance between innovation and safeguarding against harmful consequences is crucial, and policymakers should carefully consider the implications and risks associated with AI deployment.
Daryo, United Kingdom, Serbian to English: WHERE is the "smartness"? | Sep 23, 2023
Samuel Murray wrote:
There's an apocryphal story from the 1970s about a problem solving computer that was given this problem: most accidents on staircases happen at the top or bottom step. The computer replied: simply remove the top and bottom step. Chat GPT hasn't learnt from this.
On the other hand, Chat GPT was smart enough to know that while you can throw a brick through a window, you can't throw a window through a brick:
How can an "impossible event" (like "throwing a window through a brick") lead to anything?
How can a non-event "lead to injury"?
BTW, with some lateral thinking you could imagine a biiiig hollow brick through which you could throw a small window!
But let's not digress into logical puzzles and paradoxes ... be nice to ChatGPT, don't make it blow its fuses.
FAR MORE IMPORTANT:
Does anyone remember the 1998 movie "The Truman Show"?
If ChatGPT and similar "free tools" are not used with extreme caution, we could easily end up in a situation where ChatGPT & Associates decide what is going to be projected on the big dome. Nothing to worry about, could never happen?
Daryo, United Kingdom, Serbian to English: Sure about that? | Sep 23, 2023
Philip Lees wrote:
Samuel Murray wrote:
There's an apocryphal story from the 1970s about a problem solving computer that was given this problem: most accidents on staircases happen at the top or bottom step. The computer replied: simply remove the top and bottom step. Chat GPT hasn't learnt from this.
Does this ladder have a bottom rung or not?
I would say not.
Sometimes human beings try to be too clever. I'm with ChatGPT on this one.
So... if a column of hikers loses the one at the end (for whatever reason), there is no more "last hiker" at the end of the column? You get a marching column with a beginning but no end?
Then lose the one in front, and you get a column of hikers with no beginning and no end?
Yeah, sure, couldn't be more logical ...
Which ethics are the ethics? | Sep 25, 2023
Il y a éthique et éthique (there is ethics, and then there is ethics). I don't think any definitive ethical controls should be deployed in this field at this time: surely there aren't too many people who rely on AI enough to trust it with important content which is expected to make a difference when translated. At the very least they'll have a sneaking suspicion that AI may, and will, mistranslate a lot of it, and that its mistranslations may, and most probably will, lead to adverse consequences.

If an AI company is able to fool you into believing that their solution is near-perfect, that's not a matter of ethics: marketers are fooling us all the time, it's their job. Maybe legal controls are the way to go: if you can prove you've suffered losses through AI's 'fault,' you should be able to claim compensation. But ethics? Ethics may be part of the equation when it comes to food and medication (in a perfect world, that is). AI toys can also harm people in different ways, but as long as they are prevented from producing extremist content (which I believe is the case with ChatGPT), we should all be fine.

In a perfect world, no people are stupid or do stupid things. In the real world, a lot of them are and do, but I'm not sure AI can significantly enhance their ability to cause harm. Can I claim compensation on ethical grounds for the suffering I experienced when encountering 'wiki' pages during research and finding out that all the content there was machine-translated in the worst way possible and had no value for my research? That suffering was quite real. Btw, DeepL's translation of the article quoted above also reads quite machine-like to me, but maybe it's my intuition fooling me when dealing with a language I first started learning quite late in life.