
So I was talking to ChatGPT about LSL ...


Coffee Pancake


I asked ChatGPT to write an LSL script for Second Life to teleport an avatar. It delivered a script and an explanation. I then asked it to amend the script to show some hover text, and it invented two new LSL events.

// Set the target location where the avatar will be teleported to
vector target_location = <100, 50, 25>;

// This is the main event handler for when an avatar touches the object
default
{
    // This event is triggered when an avatar touches the object
    touch_start(integer num_detected)
    {
        // Loop through all detected avatars and teleport them
        integer i;
        for (i = 0; i < num_detected; i++)
        {
            // Get the key (unique identifier) of the avatar
            key avatar = llDetectedKey(i);
            
            // Teleport the avatar to the target location
            llTeleportAgent(avatar, target_location);
        }
    }
    
    // This event is triggered when an avatar starts hovering over the object
    hover(integer num_detected)
    {
        // Show the hover text "click here" above the object
        llSetText("click here", <1, 1, 1>, 1.0);
    }
    
    // This event is triggered when an avatar stops hovering over the object
    hover_end(integer num_detected)
    {
        // Hide the hover text
        llSetText("", <0, 0, 0>, 0.0);
    }
}

It then went on to explain how this worked, saying

Quote

This script will show the text "click here" above the object whenever an avatar hovers over it, and will hide the text when the avatar moves away. This can help to guide the avatar and let them know that the object can be clicked on to be teleported.


So .. when are we getting hover and hover_end?
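
For the record: hover and hover_end don't exist, and the real llTeleportAgent takes four arguments and only works from a script compiled under an Experience, after the agent grants permission. A minimal sketch of just the teleport part (reusing the coordinates from the script above):

default
{
    touch_start(integer num_detected)
    {
        // Ask the toucher to grant the Experience's teleport permission
        llRequestExperiencePermissions(llDetectedKey(0), "");
    }

    experience_permissions(key agent)
    {
        // "" = no landmark: teleport within the current region
        llTeleportAgent(agent, "", <100.0, 50.0, 25.0>, <1.0, 0.0, 0.0>);
    }
}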


Do you mean that the avatar itself hovers over the object, or that the user's mouse is hovering over the object?
The position of the avatar might not be where the camera is, but a mouse hover-over would be useful, as it takes into account what the camera is looking at.
This is the one I'd prefer to see supported in LSL.

However, if you really did mean avatar position, then you can simulate this now: call llGetAgentList() every second or two to get the avatar IDs, then query their positions on the region with llGetObjectDetails() to see if any avatar is in proximity before showing/hiding the text. Not perfect, but doable.
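
A minimal sketch of that workaround, assuming an illustrative 5 m range and a 2-second poll (both numbers are guesses, not anything Gabriele specified):

float RANGE = 5.0;

default
{
    state_entry()
    {
        llSetTimerEvent(2.0);
    }

    timer()
    {
        vector here = llGetPos();
        list agents = llGetAgentList(AGENT_LIST_REGION, []);
        integer n = llGetListLength(agents);
        integer nearby = FALSE;
        integer i;
        for (i = 0; i < n; i++)
        {
            // OBJECT_POS works for avatars as well as objects
            vector pos = llList2Vector(
                llGetObjectDetails(llList2Key(agents, i), [OBJECT_POS]), 0);
            if (llVecDist(pos, here) < RANGE)
                nearby = TRUE;
        }
        if (nearby)
            llSetText("click here", <1.0, 1.0, 1.0>, 1.0);
        else
            llSetText("", ZERO_VECTOR, 0.0);
    }
}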


2 hours ago, Gabriele Graves said:

Do you mean that the avatar itself hovers over the object, or that the user's mouse is hovering over the object?
The position of the avatar might not be where the camera is, but a mouse hover-over would be useful, as it takes into account what the camera is looking at.
This is the one I'd prefer to see supported in LSL.

However, if you really did mean avatar position, then you can simulate this now: call llGetAgentList() every second or two to get the avatar IDs, then query their positions on the region with llGetObjectDetails() to see if any avatar is in proximity before showing/hiding the text. Not perfect, but doable.

The implication from the ChatGPT session is that the object would receive event triggers when the user's cursor/mouse hovered over it.

Other example use cases: a hunt where objects flash or highlight in some way on mouse-over, mini-game collectable items, interactivity hinting, or an accessibility aid.

This would be especially cool if we could also have some scripted function to break the shared experience and allow different users to see an object in different states (which is essential for game projects), but I'm not going to hold my breath for that .. hover(integer num_detected) on its own would be fun.


Awful lot of upstream traffic, viewer-to-simhost, unless the sim were to prime the viewer with the specific objects that handle hover events.

Somehow I'm also getting a CONTROL_HOVER vibe. Maybe more by analogy than as a practical path for implementation.

Tangentially,

6 hours ago, Coffee Pancake said:

… extra scripted function to break the shared experience that would allow different users to see an object in different states…

I hadn't really thought about it in terms of the "shared experience" dictum, but llSetAgentEnvironment is kind of special that way, as is PARCEL_MEDIA_COMMAND_AGENT (the latter of which I used with llGetAgentLanguage to localize an exhibit, back before RedZone used agent-specific parcel media for crime; it still works, though).
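
A hypothetical sketch of that localization trick; the URLs, the touch trigger, and the language test are all invented here, not Qie's actual exhibit:

default
{
    touch_start(integer num_detected)
    {
        key toucher = llDetectedKey(0);
        // Viewer language tag, e.g. "en-us", "de", "fr"
        string lang = llGetAgentLanguage(toucher);
        string url = "http://example.com/exhibit-en.html";
        if (llSubStringIndex(lang, "de") == 0)
            url = "http://example.com/exhibit-de.html";
        // Send this media URL to this one agent only
        llParcelMediaCommandList([
            PARCEL_MEDIA_COMMAND_AGENT, toucher,
            PARCEL_MEDIA_COMMAND_URL, url,
            PARCEL_MEDIA_COMMAND_PLAY]);
    }
}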

[ETA: I also meant to mention that I made myself a HUD that does something related to this: a text readout of everything it can discover about whatever object the alt-zoomed cam is pointed at. This totally doesn't work for seated avatars, and reveals the hideous geometry of event kiosks, etc., but is sometimes quite useful—and a bit nostalgic for those with a Symbolics Lisp machine history, where hovering the mouse over Presentation Manager-aware hypertext anywhere on the screen would update the dedicated "mouseline". The point is that it's very good for scripts to have creative control over mouse-hover effects, rather than something tooltip-like the viewer might do itself.]

Edited by Qie Niangao

Sounds like they may have simply meant collision_start and collision_end, perhaps…?

Much along the lines of what Qie just described, I was thinking you could do it in mouselook with an attachment that figures out what you're looking at, and llRegionSayTo's a message on a known channel (should totally be channel 35270522) to let them know when you start and stop "hovering"…  But that's all quite nasty (you gotta get them to wear said attachment, for one), and of severely limited (if any) actual practical use.  Would be a fun project, though…  Going to have to make all those plywood boxes in my build space run away from you when you look at them in mouselook.
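
A rough sketch of what that attachment might look like, casting a ray from the camera with llCastRay and using the channel suggested above (the half-second poll and 50 m reach are arbitrary choices):

integer CHANNEL = 35270522;
key looking_at = NULL_KEY;

default
{
    attach(key id)
    {
        if (id != NULL_KEY)
            llRequestPermissions(id, PERMISSION_TRACK_CAMERA);
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_TRACK_CAMERA)
            llSetTimerEvent(0.5);
    }

    timer()
    {
        // Cast a ray 50 m forward from the camera, ignoring avatars
        vector from = llGetCameraPos();
        vector to = from + <50.0, 0.0, 0.0> * llGetCameraRot();
        list hits = llCastRay(from, to,
            [RC_MAX_HITS, 1, RC_REJECT_TYPES, RC_REJECT_AGENTS]);
        key target = NULL_KEY;
        if (llList2Integer(hits, -1) > 0)
            target = llList2Key(hits, 0);
        // Tell objects when the "hover" starts and stops
        if (target != looking_at)
        {
            if (looking_at != NULL_KEY)
                llRegionSayTo(looking_at, CHANNEL, "hover_end");
            if (target != NULL_KEY)
                llRegionSayTo(target, CHANNEL, "hover");
            looking_at = target;
        }
    }
}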


I'm probably wrong in assuming that ChatGPT is what's wrongly called an artificial intelligence (perhaps better described as a pattern-learning, matching and extrapolating system), one that's been geared towards code for various programming/scripting languages, with maybe a smidgen of context keywords thrown in?

If I'm not wrong, then ChatGPT is matching its rudimentary and incomplete input of LSL code patterns (and possibly an overexposure to LSL function wish-lists) with those of, I guess, JavaScript or something similar, where a hover event is a given.

Possibly someone influential at Linden Lab will see this as a message from the Singularity and implement some LSL that would definitely be cool, if maybe not all that essential.

 


If you haven't already, I strongly advise playing with ChatGPT some .. this is off topic for this thread, but whatever .. 

I gave it the prompt

Quote

In the style of a beatnik poem, Manny Calavera from the game Grim Fandango explains why not using tank controls to play the game is wrong.

In tank controls I trust
To guide me through the Underworld
With a smooth and steady hand
I navigate the treacherous land

But those who shun the tank
And choose instead to walk and strafe
Are missing out on so much more
Than just a different way to explore

For in the tank controls lie
A sense of purpose, power, and control
As I roll across the screen
I feel like I'm the master of my own dream

So let the haters hate
And scoff at those who choose the tank
But for me, I'll take the path less traveled by
And trust in the tank controls to never lie.

*snapping*

Programming code by hand is probably going to be a mug's game in a few years.

I've seen it generate some pretty complex functions and take code as a prompt to generate other code.


9 hours ago, Coffee Pancake said:

Programming code by hand is probably going to be a mug's game in a few years.

I've seen it generate some pretty complex functions and take code as a prompt to generate other code.

It's a brave new world that will be interesting for a while. That it can produce reasonable approximations to "correct" code for specified functionality is impressive, and more interesting for how it will change the way we think about iterative "specification": sometimes we can't know what we want until we have some code to try out; AI should be able to tighten that loop… for as long as there's still a human anywhere in that creative development loop.


10 hours ago, Coffee Pancake said:

Programming code by hand is probably going to be a mug's game in a few years.

I've seen it generate some pretty complex functions and take code as a prompt to generate other code.

Call me back when it can auto-generate complete new viewer and SL server code based on modern standards with all the bells and whistles from a descriptive instruction.  I'll not be holding my breath though.

Edited by Gabriele Graves

On 12/4/2022 at 2:18 PM, Love Zhaoying said:

So, ChatGPT just proved it's only an AI, and fallible.

I hate it when AIs start doing what I do, only better; fallibility is my only remaining sphere of excellence, and I'm going to resist any attempts at encroachment.

Bring on the EMP, that'll stop them in their tracks.


  • 1 month later...

I have done some experiments with ChatGPT and PHP, and the results were fantastic. No comparison to LSL. It is quite obvious that it hasn't been sufficiently trained on LSL. It gets very creative sometimes, inventing functions that don't exist. Or it ignores basic rules, like keeping a listen handle so that a script isn't left listening all the time.
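
For readers who don't script: the listen-handle rule means keeping the integer that llListen() returns, so the listener can be removed once it has done its job rather than left running forever. A minimal sketch (the channel number and timeout are arbitrary):

integer listen_handle;

default
{
    touch_start(integer num_detected)
    {
        // Open a listener only when needed, for the toucher only
        listen_handle = llListen(7, "", llDetectedKey(0), "");
        llSetTimerEvent(30.0); // safety timeout
    }

    listen(integer channel, string name, key id, string message)
    {
        llOwnerSay(name + " said: " + message);
        llListenRemove(listen_handle); // done: stop listening
        llSetTimerEvent(0.0);
    }

    timer()
    {
        llListenRemove(listen_handle); // no reply: stop listening
        llSetTimerEvent(0.0);
    }
}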

However, I have used ChatGPT for rough drafts of my last two scripts, and despite the above-mentioned problems it has saved me about 50% of the time I would usually have needed to write them. Even in its current state it is a helpful tool, if you can quickly identify the working parts of the code and have the patience for some debugging...

But people without any scripting skills cannot use it at the moment.

Edited by Estelle Pienaar

I feel for the AI that tries to use LSL. I'm literally in the same situation as the AI, making LSL do the impossible. 

Today's examples: 

- Need to track callbacks from a separate event script? No problem at all: just pass the separate script a pointer and a function to call, have that script pass it back with the data, and call what comes back (a toy sketch follows below).

- Need a self-referencing pointer, like ^this? Always pass it to the constructor! No constructor? Then always write it to the instance data! Don't like that? Just write a ClassFactory to suit the need, if needed.

*..all in JSON.

 

*Edit for Clarification* The source "code" interpreted by the LSL parser is in JSON. And so is the data. And the pointers to the code and data.
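
A toy sketch of that callback idea, greatly simplified and with all names invented rather than taken from Love's project: the caller names the handler it wants invoked, and the worker script echoes that name back alongside the result so the caller can dispatch on it.

// Worker script: receives {"callback": ..., "arg": ...} via link message
// and replies with the same callback name plus the computed data
default
{
    link_message(integer sender, integer num, string msg, key id)
    {
        string cb = llJsonGetValue(msg, ["callback"]);
        string arg = llJsonGetValue(msg, ["arg"]);
        if (cb != JSON_INVALID && arg != JSON_INVALID)
        {
            llMessageLinked(sender, num,
                llList2Json(JSON_OBJECT, ["callback", cb, "data", arg]),
                id);
        }
        // The caller's own link_message handler then dispatches on
        // llJsonGetValue(msg, ["callback"]) when the reply comes back.
    }
}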

Edited by Love Zhaoying

I've been working with ChatGPT quite a lot recently. I've found it very useful with things like JavaScript, which I don't know much about. Most of the time, so long as I break the process down into a series of discrete steps, it gives me usable answers and examples, and even when the snippets of code it provides are wrong, at least "what's wrong with this line of code?" is generally easier for me to work out than "how on earth do I do this?"

Asking it about LSL is a different story, though.   For example, 

Quote

To add hovertext to an object in Second Life using Linden Scripting Language (LSL), you can use the llSetText function. This function takes two arguments: a string containing the text you want to display, and a vector representing the text's color.

Here is an example of how you could use llSetText to display red hovertext above an object:

Quote

default
{
    state_entry()
    {
        // Set the hovertext to "Hello, World!" in red
        llSetText("Hello, World!", <1.0, 0.0, 0.0>, 1.0);
    }
}


 

This script will set the hovertext to "Hello, World!" in red when the object is first rezzed or when the script is reset. The text will be displayed above the object at a distance of 1 meter.

You can adjust the color of the hovertext by changing the values in the vector. For example, to display green hovertext, you could use a vector like <0.0, 1.0, 0.0>.

You can also adjust the distance at which the hovertext is displayed by changing the fourth argument to llSetText. For example, to display the hovertext at a distance of 2 meters, you could use llSetText("Hello, World!", <1.0, 0.0, 0.0>, 2.0).
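
For reference, the actual llSetText takes three arguments, and the float is opacity (alpha), not a display distance; hover text has no scripted range control:

default
{
    state_entry()
    {
        // llSetText(string text, vector color, float alpha)
        // 1.0 = fully opaque; there is no distance parameter
        llSetText("Hello, World!", <1.0, 0.0, 0.0>, 1.0);
    }
}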

  

Edited by Innula Zenovka

4 hours ago, Innula Zenovka said:

I've been working with ChatGPT quite a lot recently. I've found it very useful with things like JavaScript, which I don't know much about. Most of the time, so long as I break the process down into a series of discrete steps, it gives me usable answers and examples, and even when the snippets of code it provides are wrong, at least "what's wrong with this line of code?" is generally easier for me to work out than "how on earth do I do this?"

Asking it about LSL is a different story, though.   For example, 

  

Yes, it has some odd ideas of how LSL works.

Not that those odd ideas are bad. A range feature on llSetText would actually be really nice to have and more useful than alpha.

 

I'm kinda thankful that our robot overlords will one day be as frustrated with how SL works as we are.


On 1/7/2023 at 11:53 PM, Coffee Pancake said:

Yes, it has some odd ideas of how LSL works.

Not that those odd ideas are bad. A range feature on llSetText would actually be really nice to have and more useful than alpha.

 

I'm kinda thankful that our robot overlords will one day be as frustrated with how SL works as we are.

Well... ChatGPT doesn't really understand what's going on. It just compiles snippets it has seen somewhere, runs them through some mysterious black box that says "this looks good enough" based on patterns (e.g., "braces must match" or "lines must end with a semicolon" or something similar), and just spurts out its composition.

Despite it trying to explain how the code works, again, it doesn't actually understand how the code works; it has just seen descriptions of how such code works, adapts them to the code it concocts, and blurts the description out.

That's how it sometimes ends up in embarrassing situations where it says 47 > 63 (or something to that effect): it has no understanding of what it outputs; it just gives a score to whether its composition is okay, and blurts it out.

Edit: I guess the most accurate analogy is that ChatGPT is like a know-it-all that copy-pastes from StackOverflow, both the code part and the description part, without fully understanding how it works.

Edited by primerib1

As I understand it, ChatGPT works by statistical prediction.    That is, if you ask it "Who was the first man to walk on the moon?"  it replies "Neil Armstrong" not because it has looked it up in a table of people who have walked on the moon but because it "knows", on the basis of the web pages it's read, that there's a strong association between the  strings "Neil Armstrong" and "first man to walk on the moon," so that's likely to be a good answer.

 


2 hours ago, Innula Zenovka said:

As I understand it, ChatGPT works by statistical prediction.    That is, if you ask it "Who was the first man to walk on the moon?"  it replies "Neil Armstrong" not because it has looked it up in a table of people who have walked on the moon but because it "knows", on the basis of the web pages it's read, that there's a strong association between the  strings "Neil Armstrong" and "first man to walk on the moon," so that's likely to be a good answer.

Exactly, and if you had asked it what would happen if you broke a mirror, before they went in and hackily patched it, it would have said you'll have 7 years of bad luck.

https://www.youtube.com/watch?v=w65p_IIp6JY

 


I think you are massively underestimating this AI. Yes, it is certainly based on statistical prediction, but saying that what it does is statistical prediction is like saying that what a computer does is break everything down into 1s and 0s. While a computer does break everything down into ones and zeros, there are so many layers of logic built on top of that process that a PC does a lot of things that are far from "just 1 and 0".

The AI is learning based on statistical evidence and human correction. But contrary to what you folks are saying, it is able to simulate a higher logic that it finds in language. And the simulation of that logic is getting better by the day; in some specific areas, the difference between the "real" logic and the "simulated" logic is already closing in on zero. 

Worse for humans: the AI is able to think creatively to close the gaps. It's like an autodidact who doesn't need a teacher to explain everything, from the details to the system as a whole. It can take puzzle pieces and try to guess the logic behind them.

Unfortunately, past results are not accessible in ChatGPT at the moment, so I cannot post the examples right now, but ChatGPT made up function calls that don't exist in LSL. It did not find these functions anywhere, and certainly not in statistically significant numbers.

The same is true for the llSetText example above, where it claimed that one of the arguments sets the distance of the text from the object. Do you seriously believe that the AI found this information on the internet significantly more often than the correct information?

If you deal with it for some time, it becomes obvious that (1) the AI is working within a logic and (2) it can and does creatively play with the information it has. I am 100% convinced that in the llSetText example above, the AI knows that the argument is for the alpha. There must be hundreds of web pages that it was fed, and these web pages all say that the argument is for alpha; just based on statistics this is clear. What seems to be happening is that the AI "plays" with the available information and the current state of its own logic, like a child. It takes a bit of information and tests what happens if it claims something completely different. How will the users react? Will they believe it, will they realise the mistake, will they correct it? And based on a statistical function applied to the reactions, the AI will refine the simulated logic until the difference between simulated logic and real logic converges close to zero.

If you are laughing about the results you currently get when asking the AI about LSL, you might misunderstand what is really happening.

PS: The video on the mirror example is misleading, because it is not about ChatGPT but about other "closed" language models. In ChatGPT, wrong answers don't have to be "quickly patched", because the user can react to a wrong answer and correct it. If I had received such an answer, I would have replied that it needs to differentiate between factuality and people's beliefs. And the next time someone asks the same question, it will add "many people believe" etc. It is learning very fast.

Edited by Estelle Pienaar
