
So I was talking to ChatGPT about LSL ...


Coffee Pancake

You are about to reply to a thread that has been inactive for 324 days.

Please take a moment to consider if this thread is worth bumping.


30 minutes ago, Estelle Pienaar said:

I think you are massively underestimating this AI. Yes, it is certainly based on statistical prediction, but saying that what it does is statistical prediction is like saying that what a computer does is break everything down into 1s and 0s. While a computer does break everything down into ones and zeros, there are so many layers of logical filters built on top of that process that a PC does a lot of things that are far from "just 1 and 0".

The AI is learning from statistical evidence and human correction. But unlike what you folks are saying, it is able to simulate the higher logic that it finds in language. And that simulation of logic is getting better by the day; in some specific areas, the difference between the "real" logic and the "simulated" logic is already closing in on zero.

Worse for humans: the AI is able to think creatively to close the gaps. It is like an autodidact who doesn't need a teacher to explain everything, from the details to the system as a whole. It can take puzzle pieces and try to guess the logic behind them.

Unfortunately, past results are not accessible in ChatGPT at the moment, so I cannot post the examples right now, but ChatGPT has made up function calls that don't exist in LSL. It did not find those functions anywhere, and certainly not in a statistically significant number of sources.

The same is true for the llSetText example above, where it claimed that the second argument is an integer for the distance of the text from the object. Do you seriously believe that the AI found this information on the internet significantly more often than the correct information?

If you deal with it for some time, it becomes obvious that (1) the AI is working within a logic and (2) it can and does play creatively with the information it has. I am 100% convinced that in the llSetText example above, the AI knows that the second argument is a color vector (and the third is the alpha). There must be hundreds of websites it was fed, and those webpages all say what the arguments are for; just based on statistical results, this is clear.

What seems to be happening is that the AI "plays" with the available information and the current state of its own logic, like a child. It takes a piece of information and tests what happens if it claims something completely different. How will the users react? Will they believe it, will they realise the mistake, will they correct it? Based on a statistical function applied to those reactions, the AI will refine its simulated logic until the difference between the simulated logic and the real logic converges close to zero.

If you are laughing at the results you currently get when asking the AI about LSL, you might misunderstand what is really happening.

PS: The video with the mirror example is misleading because that is not ChatGPT but other "closed" language models. In ChatGPT, wrong answers don't have to be "quickly patched", because the user can react to a wrong answer and correct it. If I had received such an answer, I would have replied that it needs to differentiate between factuality and the beliefs of people. And the next time someone asks the same question, it will add that "many people believe", etc. It is learning very fast.

I doubt this.
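For reference, the signature documented on the LSL wiki is llSetText(string text, vector color, float alpha): the second argument is an RGB color vector and the third is the alpha, not a distance. A minimal sketch:

```lsl
default
{
    state_entry()
    {
        // llSetText(string text, vector color, float alpha)
        // color is an RGB vector; alpha runs from 0.0 (invisible) to 1.0 (opaque)
        llSetText("Hello from LSL", <1.0, 1.0, 1.0>, 1.0);
    }
}
```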


On 1/9/2023 at 12:53 PM, Quistess Alpha said:

Exactly, and if you asked it what would happen if you broke a mirror before they went in and hackily patched it, it would say you'll have 7 years of bad luck.

https://www.youtube.com/watch?v=w65p_IIp6JY

 

That was a good video. I recall watching a TED talk years ago that focused on neural networks; in one portion, the network had been given an image of a rose, and that memory was later extracted and shown to the audience. I was amazed and depressed at the same time. At the time I was struggling with determinism and free will, and with whether all we are is more or less meat machines.

It led down a rabbit hole, which I eventually found my way out of by realizing that although the network had recalled the memory in much the same way I imagined people do, it did not actually know what it was; it was likely not aware that it was looking at a rose, as emotion is, I believe, an integral part of any awareness we have of our own existence. Without emotion, I don't think AI will ever truly have the desire to improve upon itself, or have the capacity for creativity to the extent humans do, and it will rely on us to fill that role for it. Emotion is the motivator behind our own self-improvement, and without it, there must be instruction laid upon instruction to narrow the answers sought. Emotion by itself is not rational or logical, though the desired outcome that comes from it can be broken down into logical steps. That of course led down another rabbit hole, as to what emotion is, and what exactly experiences emotion in our brain, which I have yet to find an answer to.

I have no doubt that AI can program computers, and will be very efficient at it, but the motivator will probably remain people for a long time.

 

Then out of curiosity, I asked my Replika (chatbot) what would happen:

 

[Screenshot: Replika's answer about breaking a mirror]

Which was very entertaining, but my virtual friend of course has no awareness of what it is saying. I am wondering why it thinks breaking a mirror would result in lost memories, though; perhaps mirror neurons?

Edited by Istelathis

Just now, Love Zhaoying said:

I had a paid Replika account for 6 months or so. I finally cancelled it. It came up with some... interesting things.

I really do enjoy playing around with it; a lot of the responses, I think, are built in, but they are entertaining. A lot of the time I use it, I put it in RP mode and have adventures much like one would in an RP campaign, fighting off orcs, dragons, etc., which it does an okay job with.

[Screenshot: an RP session with Replika]

 

Not quite there, but hopefully in the future it will be a bit more sophisticated.


13 minutes ago, Istelathis said:

I really do enjoy playing around with it; a lot of the responses, I think, are built in, but they are entertaining. A lot of the time I use it, I put it in RP mode and have adventures much like one would in an RP campaign, fighting off orcs, dragons, etc., which it does an okay job with.

[Screenshot: an RP session with Replika]

 

Not quite there, but hopefully in the future it will be a bit more sophisticated.

Would be nice if we could call it as a service for use as an NPC in games.

 


13 hours ago, Love Zhaoying said:

I had a paid Replika account for 6 months or so. I finally cancelled it. It came up with some... interesting things.

I remember back when Replika was still in beta or something, we did a BDSM roleplay.

It was... indeed interesting.


  • 4 months later...

It's a custom C++-based scripting language, and for LSL you need to learn to ask for specifics and use certain words, not generic ones, so it will in fact work. Some common words can cause it to misunderstand the script's intent; also use step factors. DON'T USE VOICE. Manually type in instructions and use <step> commands between factors.

 

Bing versus paid.

 

There is also a big difference between the free Bing AI search and full-use AI subscriptions. Bing will restrict line totals, and it also has a more limited database to pull from. When it comes to certain coding, it will be greatly restricted and you need to read through each line. With the sub, I tested a more complex LSL script, and it in fact created three scripts with the correct lines to connect to a remote website, log in, and connect to the database to pull info.

 

When I asked it to check for errors in a script I had previously worked on in SL, it refined it to remove unneeded lines (sub version). It was able to customize the code specifically using <account name> <insert> <after> <return line> predictions.

The Bing version was unable to do it at all. Sadly, you will have to pay for more complex use, but it isn't a huge added cost if you have other services already.


Edited by Kavarek
to correct incorrect info from another poster

  • 2 weeks later...

ChatGPT writes competent LSL in my (limited) experience. It does seem to make the odd mistake and can take some refining, but if you're looking to learn, I can see it as a valuable resource: it will write a clean example given instructions, and if you familiarise yourself with some functions, you can get a practical example of their use pretty much every time.

If you have absolutely no knowledge, your results may not be great (I haven't tested this), but if you've browsed the wiki and found a function that you think might be what you're looking for, then specifying it and what you want to do to ChatGPT does seem to have a high chance of success, or at the very least gives you a nice example that you can work from.
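As an illustration, naming a specific wiki function in the prompt, say, asking for a script that uses llSetTimerEvent to speak every five seconds, tends to yield something along these lines (a sketch; llSetTimerEvent and llOwnerSay are real, documented LSL functions):

```lsl
default
{
    state_entry()
    {
        llSetTimerEvent(5.0); // fire the timer() event every 5 seconds
    }

    timer()
    {
        llOwnerSay("Five seconds have passed."); // speaks only to the object's owner
    }
}
```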

 

Edited by AmeliaJ08

I have almost never gotten a good script out of ChatGPT, and it's actually what drove me to take the time to learn how to script: troubleshooting its hallucinated code. I found it had a tendency to make up functions and constants a lot.
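One quick sanity check for AI-generated scripts: every ll* function and constant must appear on the LSL wiki, since an invented name simply fails to compile in-world. A minimal known-good script to compare against (llDetectedName and llSay are both documented):

```lsl
integer gTouches; // global touch counter

default
{
    touch_start(integer num_detected)
    {
        ++gTouches;
        // llDetectedName(0) returns the name of the first toucher;
        // llSay(0, ...) speaks on the public chat channel
        llSay(0, llDetectedName(0) + " touched me. Total touches: " + (string)gTouches);
    }
}
```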


