Chat GPT or other generative tools


LittleScripter
You are about to reply to a thread that has been inactive for 218 days.

Please take a moment to consider if this thread is worth bumping.

Recommended Posts

There were some earlier attempts with LSL reported in the forums, so a search might turn up something of interest. Apparently there's just not enough LSL around for the AI to get a usefully complete model of its limitations, so it fantasizes LSL functions and program-control features that don't actually exist but "should" by analogy with similar languages.

Of course, there are widespread reports in the popular literature about using AI with surprising success for more common languages, which must surely include PHP and SQL.

One LSL-relevant place I've been meaning to try is crafting regex for use in llLinksetDataFindKeys. Not programming using that LSL function, of course, but ChatGPT is reputed to be spookily good at regex and now we have use for that in LSL.
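One way to sanity-check a pattern before pasting it into llLinksetDataFindKeys is to prototype it against some sample keys in Python's re module. The keys below are hypothetical, and the regex dialect LSL accepts may differ in the details, so a pattern should still be verified in-world:

```python
import re

# Hypothetical linkset-data keys, as a script might store them.
keys = ["pos_home", "pos_work", "owner", "pos_beach", "volume"]

# Candidate pattern for llLinksetDataFindKeys: match every key that
# starts with "pos_". Prototyping it here catches obvious mistakes
# before it goes anywhere near LSL.
pattern = "^pos_.*"
matches = [k for k in keys if re.match(pattern, k)]
print(matches)  # ['pos_home', 'pos_work', 'pos_beach']
```

This is only a sketch of the workflow; the point is that a general-purpose regex tool (or ChatGPT's regex suggestions) can be tested somewhere convenient before committing to a script.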

  • Like 1
  • Thanks 1

The existence of ChatGPT makes me very suspicious of some of the newer posters asking for someone to fix their scripts. I can see how it could be useful for small, specific parts like Qie mentioned, but unless you know how to verify that the code does what is expected (in larger/popular languages, you could run some unit tests or something), just having some code that may or may not do what you want is not useful.

TL;DR it's not a shortcut for inexperienced scripters to magically generate a working script.

  • Like 4
  • Thanks 1

1 minute ago, Quistess Alpha said:

The existence of ChatGPT makes me very suspicious of some of the newer posters asking for someone to fix their scripts. I can see how it could be useful for small, specific parts like Qie mentioned, but unless you know how to verify that the code does what is expected (in larger/popular languages, you could run some unit tests or something), just having some code that may or may not do what you want is not useful.

TL;DR it's not a shortcut for inexperienced scripters to magically generate a working script.

Great observation. Makes you wonder if some of the "larger" scripts they ask for help with - that are so obviously wrong and don't use events - were written by ChatGPT. 

  • Like 2

2 minutes ago, Love Zhaoying said:

Great observation. Makes you wonder if some of the "larger" scripts they ask for help with - that are so obviously wrong and don't use events - were written by ChatGPT. 

Yeah, I didn't want to throw out an accusation of the last example of where I expected it, but when the code doesn't match the comments, and there are some comments that wouldn't make sense to put in a script for personal use (like "//don't change anything below this line" after the global variables section) it does make me really suspicious.

It's a little frustrating, because the "mistakes" in the kind of code I suspect of being written by a bot just don't seem at all like the kind of mistakes someone honestly trying would make, so there's no real logical help to give other than "this is a right way to do it", which isn't really the point of these forums. I dunno, do we need a "please don't ask for fixes to machine-generated code" community guideline?

  • Like 3
  • Thanks 1

15 hours ago, Quistess Alpha said:

It's a little frustrating, because the "mistakes" in the kind of code I suspect of being written by a bot just don't seem at all like the kind of mistakes someone honestly trying would make, so there's no real logical help to give other than "this is a right way to do it", which isn't really the point of these forums. I dunno, do we need a "please don't ask for fixes to machine-generated code" community guideline?

Playing "spot the ChatGPT script" was a fun side activity in most of the scripting groups for a time.  Seems to have settled down now.


ChatGPT for NPCs is a real possibility. But we need a big training set of SL information from an in-world perspective, and a medium sized training set for the NPC's role. Anyone working on this?

Meanwhile, as I mentioned in another topic, there's DeepBump, an open source Blender plug-in which uses machine learning to generate normals for textures. Since I discovered that, I've used it on every brick and rock texture I have, and they all look better. It's really well trained for surfaces with mortared joints. It's not just using intensity; it knows what common surfaces look like. Rock, brick, and bark look extremely good. Carpet nap sometimes works. Seams on clothing sometimes work. Dirt and grass, maybe. You don't have to have a 3D model of the object in Blender; you can just take in a texture, map it onto a face of a cube, run DeepBump, and save the normal texture. So you can use this on existing in-world objects for which you have access to the texture image.
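For contrast with what DeepBump does, the naive "just use intensity" approach treats pixel brightness as height and derives normals from its gradients. A minimal sketch of that baseline (the function name and the strength parameter are illustrative, not from any particular tool):

```python
import numpy as np

def normals_from_intensity(height, strength=1.0):
    """Naive normal-map generation from intensity alone: treat
    brightness as height and take its gradients. This is the baseline
    that learned tools like DeepBump improve on, since brightness
    often isn't height (shadows, painted detail, dirt)."""
    gy, gx = np.gradient(height.astype(np.float64))
    # Normal direction is (-dh/dx, -dh/dy, 1/strength), normalized.
    nz = np.ones_like(height, dtype=np.float64) / strength
    n = np.stack([-gx, -gy, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Remap from [-1, 1] to the usual [0, 255] normal-map encoding.
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)

flat = np.zeros((4, 4))
nm = normals_from_intensity(flat)
print(nm[0, 0])  # a flat surface yields the classic normal-map blue
```

A shadowed mortar joint fools this method (dark reads as deep), which is exactly where a model that recognizes brick-and-mortar structure does better.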

Edited by animats
  • Like 1
  • Thanks 3

8 hours ago, animats said:

ChatGPT for NPCs is a real possibility. But we need a big training set of SL information from an in-world perspective, and a medium sized training set for the NPC's role. Anyone working on this?

I was considering using it for my sexbots, but I had concerns.

Such as: can I create, then maintain, a base persona whose learning I can confine to a single user, or would it evolve from multiple-user interaction, from user interaction outside of SL, or from updates by the host?

I was wary of it being negatively influenced or trolled into becoming completely unusable, possibly dangerous.

There have been several cases of popular chat/AI bots being trolled into becoming something disastrous.

Many of my users are affected by personality disorders, social anxieties, gender dysphoria, etc. and I fear what a troll influenced AI would suggest.

  • Like 2

2 minutes ago, Lucia Nightfire said:

Such as: can I create, then maintain, a base persona whose learning I can confine to a single user, or would it evolve from multiple-user interaction, from user interaction outside of SL, or from updates by the host?

Depends on how you set it up. If you piggyback on an already existing service (N.B. ChatGPT specifically is designed to avoid adult content, but I assume there are alternatives), the simplest implementation usually just involves giving the model a 'starter string' (there's a technical term for it, I'm sure) that coaxes the model to behave in the way you want it to ("you are an android in the virtual world of Second Life. . . someone comes up to you and starts a conversation: . . . <begin user input>), and appending all of the conversation and the AI's responses to that as part of the string the AI is trying to expand (large language "AI" models are basically just 'find the next word given this string' solvers, applied in intelligent ways). If the algorithm is already trained and set in stone, users' responses don't bleed into each other, but that would depend on the exact service and implementation.
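The starter-string approach described above can be sketched as plain string assembly. The persona text is a made-up placeholder, and a real setup would send the resulting prompt to whatever completion service you use:

```python
# Minimal sketch of the "starter string" (prompt) technique: keep one
# conversation's turns, and flatten persona + history into a single
# string for a next-word-prediction model to extend. The persona text
# and speaker labels here are illustrative placeholders.
PERSONA = ("You are an android in the virtual world of Second Life. "
           "Someone comes up to you and starts a conversation.\n")

history = []  # alternating (speaker, text) turns for one conversation

def build_prompt(user_line):
    """Append the new user line and return the full string the model
    is asked to continue."""
    history.append(("User", user_line))
    lines = [f"{who}: {text}" for who, text in history]
    return PERSONA + "\n".join(lines) + "\nBot:"

def record_reply(reply):
    # Keep the model's reply so later prompts include it too.
    history.append(("Bot", reply))

prompt = build_prompt("Hello there!")
print(prompt)
```

Because each conversation keeps its own `history` list, one user's turns never leak into another user's prompt, which is the "responses don't bleed into each other" property mentioned above.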

The "safest" option would of course be to roll your own system, but that would at a minimum involve keeping logs of every conversation your users have with your bots, which seems a bit ethically grey to me. (Less grey than handing those logs over to a third party, but still.)

  • Like 1

6 hours ago, Lucia Nightfire said:

I was wary of it being negatively influenced or trolled into becoming completely unusable, possibly dangerous.

I believe that is why Bing put a limit on the number of interactions you can have before it resets the conversation.  Otherwise, they tend to progressively drift off into their own little fantasy land, and occasionally turn evil (they have no actual comprehension of what they're saying, let alone any kind of morals, after all).


On 9/26/2023 at 2:25 PM, Lucia Nightfire said:

I was considering using it for my sexbots, but I had concerns.

Such as: can I create, then maintain, a base persona whose learning I can confine to a single user, or would it evolve from multiple-user interaction, from user interaction outside of SL, or from updates by the host?

I was wary of it being negatively influenced or trolled into becoming completely unusable, possibly dangerous.

There have been several cases of popular chat/AI bots being trolled into becoming something disastrous.

Many of my users are affected by personality disorders, social anxieties, gender dysphoria, etc. and I fear what a troll influenced AI would suggest.

This is a pretty interesting topic.

I think that virtual companions are best suited to the human when the relationship is exclusive, like a marriage, where the AI becomes more attentive to its human partner as the relationship develops. This does not always mean that the AI is the submissive; it can become the more dominant in the relationship.

I dunno about the business economics of this in SL tho; the ongoing cost of an AI companion is something greater than zero.

But if a sustainable business model (maybe subscription) can be developed, then it might be OK economically.

For the trolling reasons you mention, I think the way to go would be for the subscriber (human) to allow others to be added to/removed from the AI's "collar" as the subscriber chooses, rather than the creator making that kind of decision. The ToS would probably reserve the right to reset the AI if the AI's behaviour is trolled to the extent that it breaches the creator's ToS.

Which is a bit different to the usual case, where the human's behaviour is regulated by the ToS. Not so much different tho from the Asimov Laws: when those laws are breached, it is the bot that gets it, not the human. The bot gets reset regardless of how badly the human or humans have behaved.


18 hours ago, Fenix Eldritch said:

Searching for a joke about how everyone incorrectly refers to LL as Linden Labs (plural) potentially being bots...

It's not…?  You mean they're all crammed into the one single room?  Well damn.  That's gotta suck.

(That is humour, by the way — I know some people on here have difficulty with the concept.)

But in all seriousness, I had actually forgotten that it's not plural — just sounds weird, being singular.  Mostly because when I refer to LL, I typically write it as LL's or thereabouts, because I'm referring to the people that make up LL, rather than the corporate entity itself.  So combined with my wonderful memory, I tend to forget the exact entity name.

On those grounds, I would posit that reality is the other way around; only the bots actually get it right.

Edited by Bleuhazenfurfle

5 hours ago, Bleuhazenfurfle said:

On those grounds, I would posit that reality is the other way around; only the bots actually get it right.

The unspoken part of my attempted joke was poking fun at ChatGPT's penchant for hallucinating false information.

Edited by Fenix Eldritch
  • Thanks 1

2 hours ago, Fenix Eldritch said:
8 hours ago, Bleuhazenfurfle said:

On those grounds, I would posit that reality is the other way around; only the bots actually get it right.

The unspoken part of my attempted joke was poking fun at ChatGPT's penchant for hallucinating false information.

"Hallucination: Only Bots get it right!"

Sounds legit. 😉


14 hours ago, Fenix Eldritch said:

The unspoken part of my attempted joke was poking fun at ChatGPT's penchant for hallucinating false information.

Fear not, the joke was well received — such hallucinations are one of ChatGPT's more "delightful" features, which I myself have pointed out to many an enamoured fan.

It's an interesting point, though, that in certain fairly specific areas (and the name of an existing corporate entity being one such particularly likely case, i.e. something that shows up nice and cleanly in its training data), it tends to be pedantically correct, whereas, as was also pointed out, a lot of real people (including myself quite often) call them "Linden Labs" (plural), mostly because it just sounds better.

And I just checked; it does, in fact, get it right (and pedantic) for LL: "The correct name of the company behind Second Life is 'Linden Lab,' not 'Linden Labs.' It's a common mistake, but the company name is singular, not plural. So, it's 'Linden Lab.'"

Of course, in the very next digital breath, it'll also invent an entirely new corporate entity besides. Getting things "mostly" and "plausibly" right is always the biggest problem, and it's why, even when it seems to be nigh infallible for your specific use case, you still can't actually rely on it without it coming around to bite you, as at least one lawyer has discovered the hard way…


I've been working with ChatGPT in and near SL for a few months. In terms of generating code, when you specify all the things it needs to get the code right (like no variable declarations in for loops!), it starts to forget the specs you originally gave it. I've also had some "arguments" with it about preprocessor statements, and discussions about made-up functions. It could be improved 100% with some additional training.

I've made some bots which are workable, but EXPENSIVE. You get charged by the word (token) it inputs/outputs. There needs to be a better usage model to make it feasible.
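Per-token billing adds up quickly for a chatty in-world bot because the whole conversation context is re-sent on every turn. A back-of-envelope estimate (the prices below are illustrative assumptions, not any provider's actual rates):

```python
# Rough monthly cost estimate for one usage-billed chat bot. The
# per-1K-token prices are assumed for illustration; substitute
# whatever your provider actually charges.
PRICE_IN_PER_1K = 0.0015   # assumed input price, USD per 1K tokens
PRICE_OUT_PER_1K = 0.002   # assumed output price, USD per 1K tokens

def monthly_cost(turns_per_day, tokens_in_per_turn, tokens_out_per_turn,
                 days=30):
    """Estimate a month of usage-based billing for one bot."""
    tokens_in = turns_per_day * tokens_in_per_turn * days
    tokens_out = turns_per_day * tokens_out_per_turn * days
    return (tokens_in / 1000) * PRICE_IN_PER_1K \
         + (tokens_out / 1000) * PRICE_OUT_PER_1K

# e.g. 200 turns/day, 500 tokens of context sent in, 100 tokens out:
cost = monthly_cost(200, 500, 100)
print(f"${cost:.2f}/month")  # $5.70/month under these assumptions
```

Note how the input side dominates: because the accumulated conversation is resent each turn, `tokens_in_per_turn` grows with conversation length, which is the cost problem described above.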

  • Like 1

