
Color Picker HUD - Moving a Reference Arrow to Last Touched Location


Chimera Firecaster

You are about to reply to a thread that has been inactive for 3633 days.

Please take a moment to consider if this thread is worth bumping.


Hello everyone.  I’m working on a script for a color picker, and I need some help.

Just briefly, I might mention that there’s been some good work done with color picking.  However, after searching for the better part of a month, I’ve found that there’s no complete solution available in the public domain – particularly one that works in a HUD situation.  I’m certainly not an expert in scripting – far from it – but if I manage to put something together that’s workable, I’ll post it and make it available to the Second Life community for anyone’s use in their own projects.

I’m trying to create a color picker that looks and acts similar to Second Life’s.  I want something that’s familiar and comfortable to SL users.  Of course, it’s not possible to duplicate all of the functions in SL’s version, but I would like something that’s a close match.

Actually determining the color chosen by the user has pretty much been worked out by others:  Aaron Edelweiss, Fox Stirling, Bjorn Nordlicht (and forgive me if I’ve left out someone).

In the color picker that I’m developing, I have the color picking part working fine.  My problem is with two reference pointers.  Let me explain . . .

After someone picks a color, I would like to have a “dot” remain on the color palette.  (In the Second Life color picker it’s a “plus,” but I’m not sure I can duplicate that.)  I’d also like to have a pointer arrow remain on the luminosity slider after a luminosity setting is picked.

When one is trying to get just the right color, it’s really helpful to have those two reference points – to know where your last color appears on the palette and your last location on the luminosity slider.

I have an image of what it looks like below.  (To keep things simple, in the image I’m leaving out past saved colors and the “OK” and “Cancel” buttons.)

ColorPickerForum.jpg

My problem is moving the “dot” and “arrow” to the user’s last touched locations.  I have named each of the various prims making up the color picker, and I can tell which prim is being touched and I can move the “dot” and “pointer” by the use of llSetLinkPrimitiveParamsFast. 

I’ve been able to successfully move the dot and pointer to the last touched location when the color picker is a prim and I’m dealing with regional coordinates.  But when it’s a HUD, and I’m dealing with local (or is it screen?) coordinates, I’m totally lost.

I’ve tried Rolig Loon’s solution (found here:  http://208.74.205.111/t5/LSL-Scripting/Moving-prim-A-to-position-touched-on-prim-B-in-a-linkset/td-p/2324443).  Rolig did a very nice job of explaining the use of local coordinates in this manner.  But I’m not having any luck with that solution – and I suspect it has to do with HUDs.

I wonder if anyone could suggest a way to move the dot and pointer to the last touched location on a HUD?  Or is it even possible?

Thanks in advance for your help!

 


Well, how about ....

default
{
    touch_start(integer total_number)
    {
        vector Size = llGetScale();
        float X = -0.5 * Size.x;
        vector Here = llDetectedTouchST(0);
        llSay(0, (string)Here);
        float Y = (0.5 - Here.x) * Size.y;
        float Z = (Here.y - 0.5) * Size.z;
        llSetLinkPrimitiveParams(2, [PRIM_POS_LOCAL, <X, Y, Z>]);
    }
}

 


Thank you so much Rolig for replying.  I tried your code, and I'm getting some weird movements of the pointer arrow.  Here is what's happening . . .

The first image shows the starting location of the pointer arrow:

ColorPickerArrow.jpg

When I click at the very top of the luminosity slider, the pointer arrow ends up as shown in the next image:

ColorPickerArrow1.jpg

Here's another example.  If I click at the very bottom of the luminosity slider, the pointer arrow ends up as shown below:

ColorPickerArrow2.jpg

 

Any suggestions on what might be going wrong?  Thank you again for taking the time to reply.


Well, I don't know exactly what your HUD looks like.  As a result, the best I could do was write a simple script that assumes you have only one texture and that it fills the entire face (and that it's obviously face #4, which is the -X face).  That's why I used llDetectedTouchST and why I scaled positions to the dimensions of the root prim.  If you have your two textures on separate child prims, then you'll have to treat each of them as a separate clickable entity.  So, use llDetectedLinkNumber to find out which child prim got clicked and then use that prim's dimensions to calculate the position to place a marker on it.  So, all in all you'd have a root prim (the background of your HUD) plus a child for the color picker and another for the tone picker, plus one for each of the two pointers = 5 prims.

To be sure that you see how my scriptlet works, make a demo HUD the way I made mine.  Rez a cube and color face 4 so you know which one it is.  Then rez a sphere (make it tiny) and link it to the cube as a child.  Drop the script in and attach it as a HUD.    When you look at the script, notice that llDetectedTouchST gives you a vector that looks like <X,Y,0> but as far as your HUD is concerned, left/right is Y and up/down is Z, so you have to switch positions so the vector becomes <-0.5*root_prim's_X,  -X, Y>. Don't forget to offset the horizontal and vertical distances by 1/2 the prim's dimensions so that all positions are figured from the lower left corner of the prim (local <0,0,0> ) instead of the center of the prim face.
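Rolig's axis-swapping recipe can be sketched as a Python stand-in for the LSL math (the helper name is made up; the axis conventions are taken from the post above):

```python
def st_to_local(st_x, st_y, size_x, size_y, size_z):
    """Map llDetectedTouchST-style coordinates (0..1, origin at the
    lower left of the face) to an offset from the prim's center,
    placed on the -X face. On a HUD, screen left/right is the prim's
    Y axis and up/down is its Z axis."""
    x = -0.5 * size_x           # sit on the -X face
    y = (0.5 - st_x) * size_y   # ST x runs opposite to the prim's Y axis here
    z = (st_y - 0.5) * size_z   # ST y maps onto the prim's Z axis
    return (x, y, z)

# A touch dead-center on the face lands at the center of that face:
print(st_to_local(0.5, 0.5, 0.1, 0.5, 0.5))   # -> (-0.05, 0.0, 0.0)
```

A touch at ST (0, 0) (lower left) comes out at (-0.05, 0.25, -0.25), i.e. the lower left corner of a 0.5 x 0.5 face.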

I hope that makes sense.  If so, you should be able to translate it into the code you need for your two picker prims.  It's a nice little analytical geometry puzzle. :smileyhappy:


Rolig's script snippet is not precise for your particular prims.  You'll need to adjust some of the constants in there.  You'll notice that your vertical position is almost correct, but the horizontal is offset by about half the width of the prim.  You need to put in some adjustment factors to move the final coordinates to fit your particular prims and layout.

 

You may also need to make some adjustments to the input coordinates, depending on just how you are doing your picker.  If you are using a single prim, the input coordinates have to be filtered to determine WHICH part you are clicking on.  If you are using individual prims for each section, each one will have its own scaling factors and offsets.

 


You know, this is such a commonly used dialog in many things, you'd think LL would just go ahead and make a script function llColorPickerDialog(key avatar, string message, list options, integer channel) that would pop up an actual script dialog with a color/luminance/etc. picker on it, returning a color vector on the channel, just like a regular dialog would.

 


Yeah, but as someone commented in a different LSL forum this morning, LL has its hands full dealing with requests for llGetPony(AMAZING_COLORS | RAINBOW).  We can request something like this, but it ain't gonna happen in our lifetimes.  :smileytongue:


Well, I'm handy enough with C++ to look into the client side of this... the server side is all LL, though.  The server side of this code should be pretty simple, though.  It just sends the message to the client to display the dialog.  When the client actually clicks on the dialog and hits 'OK', it sends the vector to the server, which then forwards it to the appropriate channel.

 

Since there is already a picker inside the client, displaying it on a script dialog shouldn't be THAT hard.  And as I said, the server side is pretty simple.  And while I know a lot of people are waiting on llGetPony(AMAZING_COLORS|RAINBOW), I think that would involve a lot more.  I've tried to push a pony through Cat5 cables, and it wasn't pretty....... :matte-motes-sick:


Oh, true.  A TPV could implement something like this with little effort.  It's still not likely to make it into the "official" viewer, though.  Too much like a pony.

And I know what you mean about pushing a pony through a cable.  All you get is pony waste products.


I'm still struggling with this . . . 

My color picker is several linked prims consisting of a base prim, color palette prim, luminosity slider prim, and the pointers.  The parent is the base.  The luminosity (child) prim is a tall rectangle with the following dimensions: x=0.01, y=0.10, z=0.58.

I have a script in the luminosity prim.  I know the link number of the pointer arrow.  The color picker object is attached as a HUD.

I've set up the script in the luminosity prim to execute the following upon touch:

llSetLinkPrimitiveParams(gPointerNo,[PRIM_POS_LOCAL,<0,0,0>]).  (gPointerNo is the link number of the pointer prim.) 

Shouldn't the pointer prim move to the lower left hand corner of the luminosity prim? 

It doesn't.  In order to move the pointer prim to the lower left hand corner of the luminosity prim, I have to use the following (approximate) vector:  <0,-.39,-.2>.

I must be interpreting something wrong, because <0,0,0> is not the lower left hand corner of the luminosity prim.  Can you set me straight?

 

 

 


If I were doing it, I wouldn't put any script in the luminosity prim or the color picker prim.  I'd use one single script in the root prim.  Then I'd do some setup work in the state_entry event.

vector gSizeColor;
vector gSizeLuminosity;
vector gLocalPosColor;
vector gLocalPosLuminosity;
integer gColor;
integer gLuminosity;

default
{
    state_entry()
    {
        integer i;
        while (i < llGetNumberOfPrims())
        {
            ++i;
            if ("Color" == llGetLinkName(i))
            {
                gColor = i;
            }
            else if ("Luminosity" == llGetLinkName(i))
            {
                gLuminosity = i;
            }
        }
        list CParms = llGetLinkPrimitiveParams(gColor, [PRIM_POS_LOCAL, PRIM_SIZE]);
        gLocalPosColor = llList2Vector(CParms, 0);
        gSizeColor = llList2Vector(CParms, 1);
        list LParms = llGetLinkPrimitiveParams(gLuminosity, [PRIM_POS_LOCAL, PRIM_SIZE]);
        gLocalPosLuminosity = llList2Vector(LParms, 0);
        gSizeLuminosity = llList2Vector(LParms, 1);
    }
}

 With that background, I'd be able to define all pointer positions relative to the local positions of the Color or Luminosity prims, offset with reference to their own prim sizes.  From there, all I'd have to do each time the owner clicks the HUD is ask for llDetectedLinkNumber and compare it to gColor or gLuminosity to be sure that I was about to move the right pointer, and then go ahead and use the appropriate local positions and sizes to calculate where the pointer should be.

 


Again, it's hard to say exactly without having your HUD in front of me, but the problem stems from having multiple frames of reference.  "Local" gets hard to figure out when each prim in the linkset has a different zero point, so it's easy to make movements relative to the wrong one.  That's one good reason to use one single script in the root prim instead of separate scripts in the Color and Luminosity prims.  When you do that, all prims are using the same "zero" -- the root prim's center -- as the reference point for their local position calculations.  So, for example, the center of the Color prim has an offset = its local position relative to the root. Its lower left corner on the -X face has an additional offset that's equal to 0.5*<-gSizeColor.x, -gSizeColor.y, gSizeColor.z>.  And your pointer has its offset relative to that lower left corner.  All referenced back easily to the center of the root prim, in one long daisy chain.
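That daisy chain can be written out in Python as a stand-in for the LSL (the function name and the sample child position are made-up assumptions; sizes are the luminosity prim's from earlier in the thread):

```python
def pointer_local_pos(child_local_pos, child_size, st):
    """Chain of offsets, all relative to the root prim's center,
    which is what PRIM_POS_LOCAL uses for children.
    child_local_pos: the child's center relative to the root center.
    child_size:      the child prim's dimensions.
    st:              llDetectedTouchST-style (u, v), 0..1, origin lower left.
    Axis convention as in the earlier script: screen left/right is the
    prim's Y axis, up/down is Z, marker sits on the -X face."""
    cx, cy, cz = child_local_pos
    sx, sy, sz = child_size
    u, v = st
    return (cx - 0.5 * sx,            # onto the child's -X face
            cy + (0.5 - u) * sy,      # horizontal within the child
            cz + (v - 0.5) * sz)      # vertical within the child

# A click at the very top center of the slider (made-up child position):
print(pointer_local_pos((0.0, -0.34, 0.09), (0.01, 0.10, 0.58), (0.5, 1.0)))
# -> roughly (-0.005, -0.34, 0.38)
```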

The other good reason for using a single script, of course, is that it reduces the number of scripts in your HUD and makes it easier to see how all the pieces fit together in one logical structure.


I have discovered something - and I thought I would add just a bit more information for others who may run into a similar situation.

I'll continue to use the color picker as an example, but there's a general principle that applies to moving a prim in a linked set.

To simplify, let's just consider three parts:  the base prim (the parent), luminosity slider (child prim), and the pointer (child prim).

The script is located in the luminosity slider, and it determines the link number of the pointer.  It then detects the location where it's being touched on the luminosity prim by use of llDetectedTouchST.  That all works to plan.

Here's where the surprise comes - a surprise at least to me.  When llSetLinkPrimitiveParams is used to move the pointer to the detected location on the luminosity prim, it does NOT move to a location on the luminosity prim.  It moves to a location on the base (the PARENT prim).  Even though the script is running in the child prim, movement is in relation to the parent.

When I determined the size of the luminosity prim and offset the horizontal and vertical distances by 1/2 the prim's dimensions, that was all irrelevant.   It was the parent prim that I needed to be concerned with.

And, that's also why using the code below and trying to move the pointer to <0,0,0> (lower left hand corner) of the luminosity prim wouldn't work:

llSetLinkPrimitiveParams(gPointerNo,[PRIM_POS_LOCAL,<0,0,0>]).  (gPointerNo is the link number of the pointer prim.)

Where did the pointer move to when I used the above code?  It moved to the center of the base (the parent prim).

It was one of those aha moments.

So, indeed, Rolig's suggestion of working from the perspective of the parent is sage advice.
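The rule from that "aha" moment can be captured in a few lines of Python (a stand-in for the LSL math; the child's local position here is a made-up example):

```python
def to_root_relative(target_in_child, child_local_pos):
    """PRIM_POS_LOCAL is always measured from the ROOT prim's center,
    even when the script runs in a child prim. So a position expressed
    relative to a child's center must be shifted by that child's own
    local position before being handed to llSetLinkPrimitiveParams."""
    return tuple(t + c for t, c in zip(target_in_child, child_local_pos))

# Passing <0,0,0> unshifted targets the root's center; shifted, it
# targets the center of the child (here at a made-up local position):
print(to_root_relative((0.0, 0.0, 0.0), (0.0, -0.34, 0.09)))
# -> (0.0, -0.34, 0.09)
```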

 

 

 


Well, that was interesting. I wasn't quite aware of this HUD problem.

I'd have chosen a different approach for this kind of HUD, though.
I'd place prims in front of the color and luminosity fields, then make a transparent texture with a marker/crosshair/whatever on it.
To position a marker I only need to change the texture offsets. That's easy, and no moving parts required.


Rez a cube, drop this script in and click on a side:

string texture_uuid = "464fe4ba-b35f-e07a-5ac9-79123be45186";

default
{
    touch_start(integer total_number)
    {
        vector v = llDetectedTouchST(0);
        v.x -= 0.5;
        v.y -= 0.5;
        llSetLinkPrimitiveParamsFast(LINK_THIS, [PRIM_TEXTURE, llDetectedTouchFace(0), texture_uuid, <1,1,0>, v, PI]);
    }
}

llDetectedTouchST delivers coordinates in the range 0 .. 1, which is perfect for what you need. Bottom left is 0.0, but that's a matter of the rotation, of course. :)

For the luminosity field you set v.x = 0 to get a centered marker.

Add some complexity by using a single script in the root. You need to detect the touched prim and face and use those numbers in the SLPPF call.

The texture is a transparent plane with the marker in the center. If you make a white marker you can set the color by coloring the prim face.


Awesome, Nova!  What an elegant way to approach it.  That was very kind of you to even provide a transparent texture.  Your solution is quicker and smoother than moving a prim.

I'll need to re-work my color picking routine since the current approach relies on actually touching the color palette, but there should be a way to work from a touch on the transparent prim.

I do have a question concerning the rotation of the transparent texture.  I tried picking up the rotation of the transparent texture with llGetLinkPrimitiveParams and using that instead of PI in the SLPPF call.  But it doesn't work.  PI seems to be key in your solution, but I don't quite understand why.  Could I trouble you to help me understand why PI is needed to make this work?

 


There are 2 different coordinate systems.

llDetectedTouchST delivers the normalized position of the touch point. x and y are in the range 0 .. 1 and origin is bottom left.

In llOffsetTexture or the SLPPF equivalent:  x and y are in the range -1 .. 1

If you think about it:

center touch: 0.5 / 0.5 offset 0 / 0
bottom left touch: 0 / 0 offset 0.5 / 0.5
top right touch: 1 / 1 offset -0.5 / -0.5

So you need to translate from one system to the other. Two ways to do that come to mind:

(1) moving the origin into the center and rotate

v.x -= 0.5;
v.y -= 0.5;
llSetLinkPrimitiveParamsFast(LINK_THIS,[PRIM_TEXTURE, llDetectedTouchFace(0), texture_uuid, <1,1,0>, v, PI]);

(2) just move the origin at the right place

v.x = 0.5 - v.x;
v.y = 0.5 - v.y;
llSetLinkPrimitiveParamsFast(LINK_THIS,[PRIM_TEXTURE, llDetectedTouchFace(0), texture_uuid, <1,1,0>, v, 0]);

So that works with and without rotation. Guess I like to make things overcomplicated :D
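The two translations can be checked against each other in a quick Python sketch (it only models the fact that rotating a texture 180 degrees negates its effective offset, which is why version (1) needs the PI and version (2) doesn't):

```python
def method1(u, v):
    # Offset passed along with a PI (180 degree) texture rotation.
    return (u - 0.5, v - 0.5)

def method2(u, v):
    # Offset passed with no rotation.
    return (0.5 - u, 0.5 - v)

# For any touch point, negating method (1)'s offset (the effect of the
# PI rotation) yields exactly method (2)'s offset:
for u, v in [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0), (0.25, 0.75)]:
    ox1, oy1 = method1(u, v)
    assert (-ox1, -oy1) == method2(u, v)
print("both translations agree")
```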

 


Maybe because it's on face #4, so you're trying to correct for flipping left/right?  Again, without seeing the geometry it's hard to guess.  You should be able to puzzle it out if you need to by doing a few experiments.

Edit:  Or see Nova's explanation.  :smileywink:

