
Bubblesort Triskaidekaphobia

Everything posted by Bubblesort Triskaidekaphobia

  1. I keep seeing people saying they are doing PBR stuff like this, but I know it's not rolled out yet. Are people all using PBR on the beta grid? PBR has to be active on the sim, as well as the viewer, right? Or can you PBR with just a beta viewer?
  2. I believe there used to be a way to hide the nametag, as well as the whole avatar, but the green dot on the map would still show up. I used to have an attack HUD that had that feature on it. The attack HUD was open to the small community of users, so users of it used to figure out easter eggs and LSL tricks from it all the time. I think I figured out how the name tag was hidden once, but I've long since forgotten that bit of trivia. Might have been a bug that has since been patched. I never heard of anybody getting in trouble for simply being invisible. Maybe for doing other things while invisible... you shouldn't orbit people while invisible! But I think being invisible, in and of itself, is not a violation of TOS.
  3. Probably a bot with a follow script. Might be a malfunctioning bot. I would block and de-render them, and forget they're there, unless I plan to do a lot of talking in main or something. Having de-rendered and blocked people around when you are talking in main can cause some confusion.
  4. I've been playing with OBS to record video lately. It's really not that hard. Open Firestorm, then OBS, then pick Firestorm as an input. Set up an output by hitting the Settings button (bottom right corner of OBS) > Output tab on the left side, and give it a recording path. Then hit Apply, then OK, and you are set to click Record, and you're rolling! I'm pretty sure not recording the cursor is the default, but if not, it's simple to change; they didn't hide the cursor setting or anything. Just hit the cog in the sources list, then uncheck Capture Cursor. It's not a complicated program for the average SL user, IMHO, unless you are trying to livestream, but I am pretty sure SL is banned on all major livestreaming platforms, like Twitch. Might livestream to OnlyFans or link your video to Lovense, or something, IDK. It can be complicated if you do that stuff. Shooting simple video is simple, though.

I can't shoot without setting up cam smoothing in my viewer. My shots are just too jerky and jumpy. I also use some cam smoothing just to navigate SL, ever since I started tinkering with it. Cam smoothing is in World > Photo and Video > Cameratools > Cam smoothing. It defaults to 1.000. Try changing it to 2.000. Doesn't that just feel nicer? It does to me. That's what I use as my walking-around cam smoothing setting.

When I'm doing machinima, I play with it a lot, bumping it up to 5-10, sometimes 20. Bumping it up to 100 can give you a really cool crane shot or drone cam type effect, if you know what you're doing. Tip: I find it easier to cam out than cam in with smoothing at ridiculous numbers like 100. If I'm doing that, I like to set the smoothing, then cam close on the subject, pull back from the subject, and back out the window until I get the whole building in the shot. Then, in post (using Premiere), I reverse the video for an establishing shot of the building that flies in through the window to land on the subject perfectly.
Of course, you can't change cam smoothing on the fly easily, as far as I know, so you have to cut after that (unless you want to shoot your whole scene in reverse), so it's not a way to run a massive one shot that will impress film school professors, but it's a way to crane a good establishing shot into the subject smoothly. Takes some practice to hit that window and building, though. Cam smoothing doesn't work on pan, at all, which is annoying, at first... but you can turn that bug into a feature! Try this: Crank your cam smoothing up to 20 and cam around a bit. It's kind of hard to control, but not impossible, right? You are just drifting a lot. When you are drifting and want to stop, hold shift and pan just a teeny tiny bit. The drift instantly stops. It's like a brake on a car. You can stop the cam smoothed orbiting motion on a dime if you do it right, but it takes practice. If you are doing this with extreme cam smoothing at something like 100, sometimes the cam snaps to where it's going too fast, and you end up with a jump cut that you probably didn't intend to make. It's too bad there doesn't seem to be a way to smooth a pan, because that would be really nice to have. I've played with the camera lag setting and zoom speed and other things, but panning seems un-modifiable. Pan smoothing might make a good JIRA feature request, if JIRA still existed. I don't even know where to make feature requests now that it's gone. Someday I'll look it up.
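Out of curiosity, I tried modeling why big smoothing values drift so much. I don't know Firestorm's actual internals, so this is just a toy exponential-lag sketch in Python (the function names and the exact formula are my own guesses, not viewer code): each frame the camera closes a fixed fraction of the gap to its target, and higher smoothing values shrink that fraction.

```python
def smooth_step(cam_pos, target_pos, smoothing):
    """One frame of toy camera smoothing: close a fraction of the gap.

    Higher `smoothing` means a smaller fraction per frame, so more
    drift/lag before the camera settles on its target.
    """
    alpha = 1.0 / (1.0 + smoothing)  # fraction of the gap closed this frame
    return cam_pos + alpha * (target_pos - cam_pos)


def frames_to_settle(smoothing, threshold=0.01):
    """Frames until the camera has closed 99% of the gap to its target."""
    pos, target, frames = 0.0, 1.0, 0
    while abs(target - pos) > threshold:
        pos = smooth_step(pos, target, smoothing)
        frames += 1
    return frames
```

Under this model, smoothing 1 settles in a handful of frames, smoothing 20 takes roughly a hundred, and smoothing 100 takes hundreds — which matches that floaty crane-shot drift, and why a tiny shift-pan "brake" input that snaps the target is the only way to stop it quickly.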
  5. It's not April 1, yet. The leap year means today is March 32. There's still time to get your pranks ready for tomorrow, though!
  6. I'm barbecuing some chicken in my back yard today. This is a new look for me. In January, I decided Jake is too dated to keep investing in, so I started getting new bodies and setting up wardrobes for them. I got Kario and Anatomy, and spent way too much money and time outfitting them. Then I noticed this Jonathan body, and got it just for fun. It's dirt cheap at 700L, so why not? Turns out, it has a lot of vendor support! The creator hasn't been seen in years, and there's no BoM, but there's more support for it than my Anatomy body. So I started making a wardrobe for Jonathan, and after a while it kinda grew on me. It's nice to not be just a Ken for once. People who talk to me now are generally more playful, and interesting, and less interested in poseball hopping. So after spending weeks making custom alpha layers in Photoshop, and outfitting the avatars to look like impossibly perfect underwear models, I ended up with this.
  7. I think the real question is... If you had a neural implant, to use SL hands free... what would you do with your hands? (PG answers only, please!)
  8. All of my problems with SL could be fixed with free ponies for everybody.
  9. If that were true, Open Sim would have a LOT more content! Do you think SL is free of piracy? LOL, you need to get out more. That's a bit of medium snobbery, but I can't disagree. Rosedale and Notch aren't citing Max Headroom when they talk about what inspired them. Actually, now that I think about it more... of the serious, professional game developers I've known, the ones big enough to give keynotes at big conventions and things like that... the one thing they have in common is that they were all theater tech nerds. Building theater sets for plays and changing them out seamlessly, to create an experience IRL, is good training for building them with computers. Or maybe it was all the sneaking around in the dark while wearing all black that made them the super ninjas they are today, LOL
  10. Let's go back and review who actually came up with this idea of a metaverse. Start with Gibson. He envisioned it as a place where kids can enter their video games, bodily. He was so computer illiterate, he wrote Neuromancer on a typewriter. He had an inkling of the idea, but I don't think he developed it fully. Stephenson's Metaverse was kind of a parody of Gibson's, but it was a bit more developed. He himself says that no virtual worlds today are like his metaverse (Stephenson is a jerk, anyway, but that's a rant for another time). Cline's OASIS in the book isn't horrible. I can see somebody like Zuckerberg trying to become James Halliday, but I can't see a reason why the people of the world would want Zuckerberg to become James Halliday. I don't think any platform is currently either open enough to become a metaverse, or appealing enough to the masses to become the metaverse as a walled garden, like Google and Apple created with their app stores. So let's look beyond these sources... What about 80s movies? We all know Tron is a metaverse, right? I would argue that Stay Tuned (1992) was also a metaverse. Stay Tuned is basically Honey, I Shrunk the Kids, except instead of shrinking, John Ritter is sucked into a new satellite TV system, to bounce around thousands of cable channels to rescue his family, while playing Running Man style life-or-death games (it's a hilarious, very underrated film). People got sucked into TVs constantly in the 80s. Remember Max Headroom? Pleasantville? Last Action Hero? Lawnmower Man? Videodrome, kinda? Almost every kids cartoon had characters get sucked into a TV at least once. Back in the 70s, they had Mike Teavee in Willy Wonka and the Chocolate Factory. OK, the TV shrank him, but he was still bodily taken in by the medium, and changed by it. So this idea of being sucked into our favorite medium is a pretty old idea, from TV and film.
It's probably a more ancient idea, going back to the concepts of metamorphosis and apotheosis. Everybody wants to be part of their favorite stories and songs, right? We all want other people to remember us in the stories after we are gone. I'll leave it to the theologians to figure out why we want this, and just take it as a given that we do. So maybe, instead of arguing over which novel inspires the metaverse... maybe we should be asking what movie or TV show inspires it? Also, maybe it would be useful to look back at the first online virtual world. My first virtual world was Active Worlds, but before that there was GopherVR. It was written by Mark McCahill (some of you may remember them as Pixeleen Mistral, from the Alphaville Herald). The wiki sources now say it came out in 1995, but I clearly remember being surprised at sources that once said it came out in 1992, when the world wide web was brand new. Either way, the world wide web and the first virtual world came about at roughly the same time, within a year or two of each other. Mark was very into depicting relationships between objects and things, and he was especially into non-Euclidean space. Non-Euclidean basically means portals, like in the video game Portal. I don't think GopherVR had portals, but his Croquet project definitely did. I know from speaking to Mark that he was influenced by Gibson. He would casually quote Gibson chapter and verse. So maybe Gibson's Neuromancer is more influential than we thought. We don't have to be influenced by Gibson. I'm just saying that Gibson is what inspired the creation of the first virtual world. If Mark had access to modern computing power, he may have been more inspired by Snow Crash, or he may have been more inspired by Tron. There's no way to know. If I were making a virtual world today, I think I would try to create something like Stay Tuned.
I would want everybody to be able to make as many grids as they want, like they can make as many TV shows as they want, and just make a minimum viable product type protocol to navigate between them. Then, of course, I'd have to hire Jeffrey Jones to sell it door to door, as my demonic salesman, just like in the movie! LOL
  11. That doesn't totally surprise me. To be honest, I don't know much about furry culture. They were all extremely nice to me when I was there, though! Neos just felt like too much work for me to log in more than a few times. I mean, I love a good sandbox, but Neos was too technical for me. When I'm on my PC, I can do a lot of work, and make lots of stuff, but when I'm in a headset I don't feel like working. The headset makes me feel like relaxing, which brings me to your second point... IDK why you think that gimmick is dead. Big Screen is just fun! Watching movies with friends just hits that sweet spot of relaxing with a headset, and socializing. No streaming service does it as well as Big Screen, although they do try (Peacock, Prime, Netflix, and others all have their VR apps). Just give it a try, bounce around a few open rooms. It's hard to deny the fun, once you experience it. Try it in SL, too! A lot of groups have movie nights. Ask around, find a movie night to drop in on. LOL, yeah, 'metaverse' is just a term that some marketer decided to lift off Ernest Cline, without thinking too hard about it. I can't find a link right now, but I remember an article about Zuckerberg meeting with a bunch of obscenely expensive consultants from Boston Consulting. They pow-wowed all weekend about the metaverse and came up at the end with a proclamation that, "There is one metaverse... it is THE metaverse, not A metaverse." Which is the most worthless circle jerk BS I can imagine. I hope Zuckerberg got a refund, LOL. We all know what we are talking about here, though. I talk about VR as a bunch of different grids. There is the SL grid, the SL beta grid, the OS grid, VR Chat has a bunch of grids, etc. If the land is connected, it's a grid. If the metaverse exists, that might be a grid. All the grids are using TCP/IP, last I checked, so maybe if TCP/IP is a grid, then there is a metaverse?
I don't think that's what journalists mean when they talk about a metaverse, though. So I doubt the metaverse exists. If it does exist, it doesn't matter, so I don't care about it. I don't think I want my SL avatar to play Dark Souls, anyway. That would be lame.
  12. SL has the best creation tools, and the biggest community. Roblox and Minecraft have bigger communities, but they are more gaming focused, and too kid friendly for me. I don't want to have to worry about everything I make being PG. I have tried many other pre-Oculus grids, and none of them come close in terms of content, tools, and population. Some have some super cool features (see: portals in Croquet/Open Cobalt), and some have tiny, yet loyal fan bases, but they never had the pull to get critical mass. Just a reminder: OpenSim is still out there, and it's not as primitive as many of you might think. Just look at their megaregions, or OSSL. SL scripters drool over the things OS scripters can do. There's not much content, or people there, and no real economy. I like to run it locally sometimes, just to doodle around with. In 2024, most competitors to SL are worlds that run in head mounted displays, like the Oculus... If you like head mounted displays, VR Chat seems like a big name in the sector, but all it really does is give VR platforms a bad name. VR Chat is a bunch of walled gardens. Imagine SL, but every land owner has their own full grid, so they are never next to anybody. Also, every grid is invite only. The public spaces are full of griefers and kids being obnoxious. If you can survive the noob areas long enough, I hear you can get invited to some super cool communities, where a lot of cool stuff is happening, but I've never been able to stay interested that long. I have friends who are obsessed with staying immersed in VR Chat, making rooms and places to hang out for their little clique, which is neat, I guess... but I personally don't see anything but griefers. Worst community in VR, IMHO. I also score VR Chat as worst content in VR, just because the noob areas are so tiny. Maybe they have more content, but if I can't see it, then I don't care about it. Zuckerberg's Horizons actually has a really nice in world content creation interface.
LL could learn a thing or two from that. Last time I was there, they actually staffed their welcome areas, too, and the staff was friendly and helpful. The world is super PG, like Roblox, and the setup is very gamey, and Roblox inspired. I kinda liked it, but at the same time, I will never spend more than 15 minutes making anything there. For me, it's a question of trust. I know how Zuckerberg runs Facebook, so I know what to expect from his Horizons virtual world. If I spend months, or years, making things in Zuckerberg's Horizons, then at any moment, his AI can decide that that shoe or castle I built looks too much like a nipple or *****. Then they nuke all my work, ban me for life, and isolate me from my family and friends who are on their platform. Plus, ever since they reneged on their promise not to require a FB login for Oculus, if they cut off my Facebook account, they effectively shut down my Oculus hardware, so I can't do any VR with it, at all, and I lose all my creations on other VR platforms. So I'd be out $400, lots of time, and social connections, all over a rogue AI who doesn't know the difference between a hat and a nipple. That's not worth the risk. Or, alternatively, maybe they will just not let people see what I make, because it upsets advertisers or political allies, so they end up wasting all my time and energy, while I make things that nobody will ever see, and I don't know that until I publish, and am quietly shadow banned by some politburo in Silly Con Valley. So I see doing any work for Zuckerberg as a losing game. I don't create for money, I do it for fun, and to show friends my stuff, but on Horizons, I would want money to do it. If Zuckerberg isn't paying me enough for my creations, to make up for the risk of creating for his AI moderators, and paying me up front for it, then I'm going to work on somebody else's grid. Somebody who won't trash my creations for no reason. 
Rec Room is like Zuckerberg's Horizons, but it's even more kid focused. It's basically a virtual world that's completely dedicated to Nerf gun shootouts (Hasbro is missing a golden sponsorship opportunity here). It has a much bigger population than Horizons, but the population is less mature. They're nice, but you know... you're hanging out with middle schoolers in a room full of Nerf guns and dodge balls, so you aren't going to be debating the finer points of Aristotle in this world. Content creation is also a bit more clunky than Horizons, but it's not bad. I met some German school teachers there, who seemed to be having fun using it for history education. I think the platform Rec Room is closest to is Roblox. If I wanted what Rec Room is selling, I'd just play Roblox. Fun world, but in the end, it's just a Roblox clone in a headset. Neos is a really cool virtual world. Small community, but a very creative, highly technically literate community. Most of them are furries. They will help you tweak settings and import mesh, and build things. There are a LOT of settings in Neos, too! Neos is like the Linux of head mounted VR. You can tweak all your hardware and software settings, nothing is abstracted away. The place is just a busy sandbox. You have to make the mesh and textures in something like Blender, of course, and then you import it and make stuff. If you want to create for a head mounted VR platform, hang out in Neos. They don't have as much content as other grids, because it is a small community, but IME, nobody beats a Neos user in per-capita output of VR content. Big Screen is the best hang out spot on the Oculus. It's got one gimmick, but it's a really, really good gimmick. You log in, and go to a movie theater, and watch movies with strangers in a public room, or with friends in your own private room. People can talk, throw popcorn, and draw graffiti in the air during the movies. Some rooms will kick you for acting up, some rooms are a party atmosphere.
Some rooms have a DJ instead of a movie. The rooms generally have a limit of 15 people, so they don't get too crazy, but you can still have a lot of fun with 15 people. The 15 person limit makes it difficult to get into some popular rooms, which is annoying. I wasn't able to get into the 3D Godzilla marathon today. If you wait around enough, slots open up sometimes. If you want to set up your own room, you can rent movies from Big Screen directly (2 days at a time, IIRC), or stream off your home PC's desktop screen, or stream off a web host. I think most people with public rooms stream their movies off cheap web hosts. They also have integration with a bunch of streaming services, so if you want to watch, for example, Prime Video with your friends: You set up a room, you give your friends the link, you all log in with your Prime subscriptions, and watch it together. This is a good reason not to share Prime subscriptions, because for it to work, everybody needs their own Prime subscription. You can stream just about anything. You can stream Twitch, which means you can stream your own web cam. Big Screen is not really a sandbox where the sky is the limit, but it is the best head mounted virtual world, IMHO. Financially speaking, I think they are doing a lot better than other virtual worlds, because they recently came out with their own headset, called the Big Screen Beyond. If I wanted to spend the money for a new VR headset right now, I would get it, for its small form factor. So, IMHO... I think Big Screen is the closest competitor to SL, in terms of where I'd like to be on any given Saturday night. They both do different things, though. SL has a bigger population, better avatars, more content to explore, and better music. Big Screen is better for watching movies and video with friends. I like to watch video with friends in SL, too. My video setup is an AVEN TV, streaming off my Google Drive.
I don't think Big Screen has easy Google Drive streaming like AVEN does, but they might. I haven't looked deeply into setting up Big Screen streaming in a few years, so who knows?
  13. LOL, I'm sorry! I didn't realize she meant she wants to try the plugin, not try Stable Diffusion. Oh well, maybe my little overview will be useful to somebody else who stumbles on this thread, someday.
  14. LL has been working on replacing Vivox with WebRTC, and I have some questions about it. If you want to get caught up on the subject, here are some links: SL Wiki Inara Pey Daniel Voyager TL;DR - WebRTC is a technology that lets web browsers do peer to peer video and audio conferencing, without needing to install plugins. It can be used for other things. I know some sites used to use it to play regular video, and there's a web based bit torrent client that uses it. It's mainly known for teleconferencing, though. SL wants to upgrade from the obsolete Vivox system to WebRTC, and they are testing some implementations for doing that right now. We don't have all the details yet, but we know these things, so far: Regions that run WebRTC won't run Vivox, and vice-versa. They say the sample rate will be better, it will have built in gain control, and noise cancellation, and there will be better security, because LL will be running the streams through their own firewalls, to hide people's IP addresses. I think I saw on a Twitter thread that it's limited to 50 avatars in a region, which is a lot in SL, right now. They will probably increase that, eventually. One really nice change is that there will be no separate voice.exe program to run your voice, which means one less thing to white list on your antivirus software. My first reaction to this was... isn't WebRTC a security liability? I have a plugin for my browser that blocks WebRTC, unless I want to run it, because it can expose your IP address, even through a VPN tunnel (similar to how Shoutcast exposes it). This is not a bug that WebRTC intends to patch. They say it is foundational to the technology. When I read that LL is working on preventing that kind of leakage with a firewall, though, I was relieved. I have no problem running their WebRTC, if the security concerns are addressed. My second thought is: Can we use this for live music? What's the sample rate like? 
I mean, I know it's better than Vivox, but Vivox's sample rate is somewhere between 4 kHz and 48 kHz, so that's a very low bar. If WebRTC's audio quality beats Icecast and similar Shoutcast-style stream providers, then we can ditch Shoutcast and just use voice to run concerts, in real time, without the annoying 30 second lag, and without sharing my IP address with everybody in the club! I wonder, if the latency is low enough, maybe this would allow us to do sing-alongs, in real time? Imagine a crowded SL club singing Do Wah Diddy Diddy together. That would just be fun. Or imagine Rocky Horror, with real time, live callbacks! Just to set the bar for this: Icecast streams are usually around 256 kbps (that's bitrate, not sample rate, to be precise). It may be possible to perform at lower bitrates, like 128 kbps, I don't know, I haven't tried. One other concern: WebRTC has built-in noise cancellation, and that feature may be an issue for this as a performance medium. No musician wants human-voice-tuned noise cancellation to muffle their guitar, or piano, when their instrument makes sounds outside the normal human vocal range. That's why I use a dedicated dynamic mic like the SM57. It doesn't do any processing of its own, so all distortion in my performance is deliberate distortion that I put there, using my software and my audio interface.
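Since kHz and kbps are easy to mix up, here's the unit sanity check as a quick Python sketch (this is just standard PCM arithmetic, nothing specific to LL's implementation): sample rate measures samples per second of audio, while stream bitrate is what codecs like Opus (WebRTC's audio codec) or MP3/AAC (typical for Icecast) actually put on the wire.

```python
def pcm_bitrate_kbps(sample_rate_hz, bit_depth_bits, channels):
    """Raw, uncompressed PCM bitrate in kilobits per second."""
    return sample_rate_hz * bit_depth_bits * channels / 1000.0

# CD-quality stereo: 44.1 kHz sample rate, 16 bits per sample, 2 channels.
raw = pcm_bitrate_kbps(44100, 16, 2)
print(raw)  # 1411.2 kbps of raw audio before any compression

# A codec then compresses that down to a stream bitrate. A 256 kbps
# Icecast stream and a ~64-128 kbps Opus voice stream can both carry
# 44.1/48 kHz audio; the kHz and kbps numbers measure different things.
```

So "48 kHz voice" and "256 kbps streaming" aren't directly comparable; what matters for live music is the codec, the bitrate budget, and the latency.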
  15. We used to do this on sculpties with llSetPrimitiveParams. It was like this: llSetPrimitiveParams([PRIM_TYPE, PRIM_TYPE_SCULPT, "uuid", PRIM_TYPE_SCULPT_SPHERE]); "uuid" was the UUID (or inventory name) of the sculpt map you want to change it to, and you can change PRIM_TYPE_SCULPT_SPHERE to PLANE, or CYLINDER, or TORUS. You can use this to animate sculpties. I don't see a similar function for mesh, though. That's really weird. I always assumed something like this would be there, but I never had occasion to change a mesh model with a script like that, since mesh can animate. There must be a way to do it! I've seen it done! If you look at PrimPossible's products, they are all mesh, and they change shape, based on a menu. They do it with their 1 Prim Unlimited Decor product, for example, but a lot of their products change shape like this (off the top of my head, see: their bed, and kitchen, and mesh genie). I know from an interview I read a few years back with the owner of PrimPossible, that it basically works like this: You click the thing, it gives you a menu, you select the options to change, say... a hat rack into a book shelf. The script asks some kind of web server for the book shelf shape and texture, and loads that onto the object, which morphs into a book shelf when it's done loading. At least, that's how I remember the interview going. It's been a few years. Maybe it is rezzing a new object with a new shape, then deleting itself? I'll say that's probably not the case, because when you change object shape, sometimes if you are lagging, the new shape doesn't load, so you end up with an invisible object, with no shape, waiting for the shape to load. Looking around the LSL wiki, though, I'm completely stumped. I don't see any functions to do this, or anything in the library for it. Qie is probably right. You probably can't do it. If you can't, use sculpties instead of mesh, and animate them like I described above.
  16. This is a much bigger problem than SL's inventory window. Generation Z has problems conceptualizing file and folder hierarchies. According to this Verge article (from 2022), it started around 2017. My sister is an educator, and she says the problem is getting worse as years go by. Just another example of the education system failing to educate.
  17. Let me help you out with a quick overview of how this works. This is my perspective, as a Windows 10 user: You want at least 8 GB of VRAM on your video board to run this. If you want to buy a card for this, the 12 GB RTX 3060 is pretty standard for Stable Diffusion users right now (the 8 GB version is not uncommon, but a lot of people, myself included, got the 12 GB version when it came out). Basic terms: Stable Diffusion is the model. It was developed by the CompVis group (originally out of Heidelberg University), with backing from a firm named Stability AI. There are many different flavors of Stable Diffusion. You can use Stable Diffusion 1.0, 1.5, 2.0, SDXL, SDXL Turbo... you see where this is going. Each has benefits and drawbacks. You can read and watch videos, but it's best to learn about them by tinkering with them. You can modify these models with tunings called LoRAs, LyCORIS, checkpoints, embeddings... most of the time when you use a tuning, you are using a LoRA, though. You can find LoRAs on civitai.com, and on huggingface.co. So when you use Stable Diffusion, the Stable Diffusion model is on the back end. What kind of front end do you want? It doesn't come with a front end, so unless you want to spend a lot of time at the command line, you want to install a front end. There are options out there, but personally, I prefer Automatic1111. It also seems to be the most popular front end, right now. When you want to run Stable Diffusion, you will run a .bat file, which starts up your Stable Diffusion back end, then runs your front end in your browser, like a web page. If you want to look at alternatives, ComfyUI is the biggest competitor. I've also heard of a Unity project named Seth's Tools, that some people seem to like. For tutorials on how to install and use Stable Diffusion and Automatic1111, look up Sebastian Kamph on YouTube. There are a lot of AI educators on YouTube, but I have found that Sebastian's videos are the most helpful to get started with. Here is his video on installing Automatic1111.
Here's his video on how to use Automatic 1111. If you want to find the Stable Diffusion community, check out this Discord channel. They are extremely helpful. If you want to keep up with the latest news, use the StableDiffusion subreddit. All the latest gossip and news is there. The owner of Stability AI hangs out there, and will sometimes just jump into conversations randomly (I think he may be a mod? IDK). Good luck!
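The .bat file I mentioned is webui-user.bat in the Automatic1111 install folder. From memory (so double-check it against your own copy), the stock file looks roughly like this, and the COMMANDLINE_ARGS line is where people with 8 GB cards usually add the --medvram flag (--medvram and --lowvram are real Automatic1111 options; the rest of this is a sketch of the default file):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem --medvram trades some speed for lower VRAM use; try --lowvram on very small cards
set COMMANDLINE_ARGS=--medvram

call webui.bat
```

Double-clicking it starts the Python back end and then serves the front end in your browser, at http://127.0.0.1:7860 by default.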
  18. You do you... I'm not going to try to convince you to change how you do anything. I like that there's always traditionalists out there. At the same time, I think you misunderstand how things work with AI. You don't just send a prompt to an AI in the cloud any more. Well, sure... 90% of users who use AI are doing that now, but it gets boring quick, and those users don't really generate much. When the hype cycle fades, they won't be using Midjourney or ChatGPT any more. They may continue to use Adobe Firefly, but the average user isn't going to pay Adobe's prices for it. I don't know anybody who kept using their Midjourney or OpenAI subscriptions past the first 1-2 months, when the novelty wore off. I know a LOT of people who use Stable Diffusion, though, and Stable Diffusion is a totally different ball game. When you see cloud AI, think Apple. Everything is done for you. Wozniak decided on all the tradeoffs before you even knew what AI was. You send it a prompt, and get a thing back, then send another prompt if you don't like it, and get another thing back. It's one model, and you don't tune it or tweak it. It's a magic black box. Stable Diffusion is like the Linux of AI. It runs locally, not in the cloud, so we don't need permission to do anything we want with it. You can be as ethically flexible as you want, if that's what you're into. When we use Stable Diffusion, we work with dozens of different models, and tunings, and techniques, to get exactly what we want. It's not a simple word game with one prompt engine. It takes time and energy. You have to generate an image, then maybe inpaint it or outpaint it, or use a plugin to change the pose of the person. Maybe change the style of the output from sketch to photograph. There are tunings and plugins that run hand and face fixers, for example, so you don't end up with polydactyl cyclopses. We have special tunings to clear the grid pattern that sometimes appears when AI draws stuff.
Sometimes, a tuning or a model we want does not exist, so we have to train our own tunings and models. I made my own model to face swap myself onto other people, for fun. The point is that Stable Diffusion users are not using one AI, like the OpenAI folks. We are herding maybe 1-2 dozen AIs to do our bidding, and it's like herding cats. What process results from all this? Let's say I want to make a knock off Bob Ross. Judge me all you want, but if you walk into any fine arts educational institution, you will find hundreds of students all sitting around in circles, ripping off somebody, in every medium imaginable. Bob learned to paint the same way, by knocking off a guy on TV named Bill Alexander. Anyway... if I really wanted to do a good Bob Ross knockoff, I would train my own tuning, with some Bob Ross photos that are freely available online. It would take me maybe 2-3 hours to get an AI to generate something of similar quality to what Bob Ross painted in a 30 minute PBS show (I run a Ryzen 7 2700X, an NVIDIA GeForce RTX 3060 12 GB, and 32 GB of RAM). If I don't make my own tuning, it would probably take me an hour, but with worse results. So I don't think AI is necessarily faster at things. It's just different. Note: If you want a locally running text chatbot, look up LLaMA. I am personally not into the voice AI stuff, but I hear Tortoise TTS is the thing to use for locally generated AI audio, for voice cloning and things like that. I hear that it's to the point where you can clone a voice really well from a tiny 10 second long sample.
  19. Neuralink? The company that was taken over by Elon Musk? ROFLMAO... I'd sooner let the crack head on the corner implant random things in my body. At least I know what motivates that guy. Musk is a junkie, but he's so rich, I have no idea why he does anything he does. His insanity is too unpredictable for me. I may someday consider neural cybernetics, when the science is settled and reliable, and the product isn't coming from a lunatic.
  20. I experimented with a lot of TVs in SL, and IMHO, the AVEN SX Smart TV is the best. I use it to watch movies from my google drive with friends in SL. I have to change them to .mp4, because google drive doesn't seem to like variable frame rates, but it's an easy conversion in Premiere.
  21. Humbug! If you aren't painting on cave walls with woolly mammoth blood and a stick, you aren't really a TRUE craftsman!
  22. Use the Avastar tutorials on Avalab, like this one, for rigging fitted mesh. They are still updating some of the stuff for Avastar 3, but what's there is current enough that most of it is usable. If you hit any snags, use the Avalab discord. It's linked in the top nav bar of the avalab web site. The support on the discord is incredible. Slow, sometimes, if you hit them at the wrong time of day, but IME, it's usually fast, and they are extremely patient. The support makes the annual price of Avastar worth it.
  23. The Sidekick was a precursor to the iPhone. It was popular in the early 2000s. All the cool kids on the Nickelodeon shows seemed to have them. You might not recognize the name, but you probably recognize the phone. It had a slide out keyboard for your thumbs. Here's a CNET article about the history of Sidekicks.
  24. I treat credit card info on file as 75% age verification. I don't think there are regulations on kids buying prepaid credit cards, or receiving them as gifts, or using them, or stealing their parents' cards. There's always a way around any system. If somebody doesn't have payment info, though, I mentally give them a 75% chance of being a minor, and treat them like a 16 year old. Keep in mind: 18 years ago was 2006. I know, it makes me feel old too. Think about what happened in 2006. YouTube was brand new, no YouTube memes were really hitting virality yet. If you talk about your TiVo, or hunting Easter eggs in Homestar Runner, or say you used your Sidekick to talk to your friends about gaming on Newgrounds and watching Numa Numa... and they act like you are speaking a foreign language... then they're almost certainly a minor.