
Da5id Weatherwax

Resident
  • Content Count: 233
  • Joined
  • Last visited

Community Reputation: 191 Excellent

About Da5id Weatherwax
  • Rank: Advanced Member


  1. @Coffee Jaworower - Followup. I did some research on your model of mixer to be sure I had given the right advice, and the Behringer 1024 has a feature not common on small mixers that will work to your advantage: a compressor built into each input channel. Adjust this for your vocals and your guitar independently while monitoring only that channel and adjusting its gain as you go. I don't know the block diagram of that mixer, so I can't say whether its PFL path takes the signal before or after the compressor, but it would make much more sense for it to be after, so I'm assuming you can sit in PFL with the mixer's meters showing that channel pre-fade and both see and hear the effects of the compression. Now, it's a relatively simple compressor: you have only one control for its strength, you can't adjust the hardness or softness of the "knee" in its profile, and you don't have separate control over a limiter stage to explicitly damp peaks and prevent clipping. So you're going to have to fall back on the sound engineer's best tool, the Mk. I ear, to find the limit of how much compression you can apply to each channel without it sounding like crap. Once you've found that limit, apply about 75% of it as a starting point and adjust the gain accordingly. You should find that you can be levelled to avoid clipping but still have a bit more "loudness" headroom on each channel - by the time you balance the mix, it should be louder overall for the same peak level on the meters at each step of the process I described above. If at any stage of listening to the main mix you start thinking you might have too much compression on either channel, do not just back it off a little and try to compensate at that later stage. Start over and re-level each step. Changing the compression will mean your input gain is "off" by a little (or a lot, depending on how much compression you had dialed in and how much you changed it), and each departure from ideal is magnified by the next stage, so even a small goof in input gain is almost guaranteed to give you a messed-up mix by the time it feeds into the stream. I apologize if I'm insulting your experience and going too far towards "sound engineering 101", but I don't know how much production experience you have, and erring on the "too basic" side at least includes all the information you actually do need, whereas erring the other way wouldn't.
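A minimal arithmetic sketch of the re-levelling point above - the threshold and ratio values are invented for illustration and are not how the mixer's one-knob compressor is actually calibrated:

```python
# A hard-knee downward compressor's static gain curve (illustrative only).
def compressed_level_db(input_db, threshold_db=-18.0, ratio=3.0):
    """Output level in dB for a given input level in dB."""
    if input_db <= threshold_db:
        return input_db                       # below threshold: passed through
    # above threshold, only 1/ratio of each extra dB gets through
    return threshold_db + (input_db - threshold_db) / ratio

for peak_db in (-20.0, -12.0, -6.0, 0.0):
    print(f"{peak_db:6.1f} dB in -> {compressed_level_db(peak_db):6.1f} dB out")

# With ratio=3 a 0 dB peak comes out at -12 dB; ease the ratio back to 2 and
# the same peak comes out at -9 dB, i.e. 3 dB hotter.  That is why backing
# off the compression without re-levelling leaves the input gain "off".
```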
  2. You're welcome. Remember that on a stream you are at the mercy of the provider's encoder and the viewer's decoder, but at least that's a mostly level field. As a performer who is also their own "sound guy", all you can do is make sure you're feeding the best - not necessarily the loudest - quality audio to the stream, so that it truly reflects the dynamics and sound of the performance, and that when you're performing you are getting a true and accurate monitor so that you can use your performance skills effectively - you can't do that if you are hearing anything different to what is being sent to the stream in realtime. Accomplish both of those and you'll sound great. Your listeners will hear exactly what you are playing and singing, exactly how you are playing and singing it. A lot of venues are aware of the compressed-and-boosted nature of a radio stream too, so do not be surprised if at the end of your set the venue's host announces "Switching to radio now, watch your volume!" or something similar.
  3. I use a similar setup, with my guitars and mic going into an Allen&Heath ZED10fx. One thing I have noticed in SL is that pretty much ALL live performers have a lower overall level than "radio streams" - DJs are closer, but still tend to be lower overall than packaged "radio." This is mostly because a packaged stream has had its dynamic range compressed and its gain boosted, so the whole track ends up at an overall higher level. This is common broadcasting practice but is sadly overused - you've experienced it on your TV when the ads shout at you compared to the level of the show they are interrupting! In part it's a legacy of analog radio, where higher overall amplitudes were needed to lift the signal out of the interference; the "this is how you do it for broadcast" habit persists even though the actual need for it has largely gone away.
For my part, I set up as I would for a live session IRL - all the usual stuff you would do for setting up a mixer, levelling each step in the signal path in order. PFL each channel, setting the gain to use as much of the dynamic range of the mixer as possible, according to its built-in meter. Next, monitor the main mix - with and without fx - and balance the mix with the individual channels' faders. Check the Windows settings on your sound device: make sure it's not peaking in the "meter" displayed in audio settings but is using about 80% of that meter's range, since it doesn't have a dB scale. Below that, you can turn up the sensitivity of your input in the device settings, but don't set that slider above 75%-80% (higher than that and you'll start getting clipping and noise artefacts from Windows itself) - if you still need more level to use the device's full dynamic range, turn up the main mix on the mixer instead. If you have to set your Windows device lower than 50% to avoid peaking, turn down the main mix on your mixer, as you are probably overdriving the input device. You want the meter showing a decent range with the input device sensitivity set in the 50%-80% range.
Only now fire up B.U.T.T. - it has meters and a master fader too, and at this point that is the only control you should touch. Adjust it so that your performance uses all of its range without clipping. If you need more than a few dB of cut or boost in B.U.T.T., a previous step is probably a little off; use your judgement and your audio engineering experience to decide whether it's acceptable or whether you need to go back and rework that step. Make sure your levels are acceptable and you're not clipping by playing a quieter and a louder track through the stream while capturing the stream on another machine and listening back to the capture.
Now, if you MUST have a high-level compressed sound going to your stream, you will need your mixer to feed a compressor which then goes into your computer's audio interface. With carefully chosen compressor parameters and by adjusting the gain on that interface you will be able to keep almost your entire performance in the upper half of the meter without losing so much dynamic range that you sound "thin" or "flat" - but be warned that it is easy to detract from the "live feel" of your stream that way. You want it to sound like a live gig rather than a radio stream, because that's what it is!
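To make the "more than a few dB in the wrong place" point concrete, here is a rough Python sketch of how gains stack along the chain (mixer main mix -> Windows input device -> B.U.T.T. master). The per-stage gain figures are made up purely to show how a mix that leaves the mixer with a few dB of headroom can still clip by the last stage:

```python
import math

def to_db(linear):
    """Linear full-scale amplitude (0..1] -> dB relative to full scale."""
    return 20.0 * math.log10(linear)

def apply_gain(linear, gain_db):
    """Apply a gain in dB to a linear amplitude."""
    return linear * 10.0 ** (gain_db / 20.0)

# Made-up chain: mixer main mix -> Windows input device -> B.U.T.T. master.
peak = 0.5                                    # peak leaving the mixer, about -6 dBFS
for stage, gain_db in (("Windows input", +4.0), ("B.U.T.T. master", +3.0)):
    peak = apply_gain(peak, gain_db)
    flag = "  <-- clipping" if peak >= 1.0 else ""
    print(f"after {stage:15s}: {to_db(min(peak, 1.0)):6.1f} dBFS{flag}")

# Two small boosts (+4 and +3 dB) push a -6 dBFS peak over full scale, which
# is why needing a big correction at the last stage means an earlier stage is
# probably mis-set.
```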
  4. I took a walk around the sim earlier today - a nice build, with its "spaces" overlapping and giving a kind of "Blade Runner" feel to the place. I briefly met Tinydollz on my arrival and while we only had time for a short chat it was a good one. There are several DJ stations around the place, but of course as a live performer I'd be more likely to be leaning on a wall with my guitar around my neck and a cig either dangling outta my muzzle or clipped between the strings on the headstock of the guitar. I'm still thinking about whether or not I'd play there, but DJs and performers looking to add a venue to their rotation could certainly do worse. For that reason, although I'm in no way associated with the venue's owners, it seemed only fair to pass on my impressions for the benefit of other performers. The "proof of the pudding," as they say, will be in whether it gets the people in to listen - and, we hope, potentially tip. As it's a new location we have no idea on that aspect yet, of course, but it sure can't hurt to give the place a try if you've got a way your performance might fit in with the ambience of the place.
  5. There's a pretty decent multipage menu example in the script library on these very forums.
  6. I got a clean download and good install with Firefox and a security setup that's probably about as paranoid as yours is. Apart from the obligatory "I don't want to run this because it's a new file and I don't know the publisher" from Windows, and the usual digital dopeslap from my keyboard to make it do as I darn well tell it to, no flags or warnings.
  7. It has occasionally taken longer, but for me the usual average is 3-4 business days. On one occasion they surprised me by accomplishing it in two, which I particularly remember because I needed those funds in a hurry to replace a piece of kit before the next time I was booked to use it - if the stars were going to align at all, that was definitely the time for it to happen.
  8. I use it when it's appropriate to - i.e. when the folks around me are using it. It's almost always somebody else's decision to fire up voice, but it never bothers me any if/when they do. My most usual vox chat channel is Discord, though, even with other folks around me in SL, because we're normally talking amongst ourselves, not "publicly." More people have heard my voice inworld on my performance stream than via chat. As a performer I have, of course, had to beat any self-consciousness about how I might sound into oblivion. Having been a RL performer and - at times - a radio actor for a long time, that was a task already accomplished before SL went live, let alone had a voice option. I've never tried to be anyone in SL who isn't an aspect of my RL self and am pretty open about who I am IRL, so my voice isn't going to reveal anything that isn't already out there - there's no need to bother myself over using a voice channel if others want to, but mostly I just don't. I know folks fret about characters refusing to voice, but it's kinda overblown to me. In the past with the radio theater company we did a fun skit where the female characters were all voiced by guys and vice versa, and with practice it's actually possible to get good at that - I even use a part of those techniques in my performances! The folk genre has a lot of songs that are a dialog between a male and a female viewpoint, and while I don't "disguise" my voice singing the female verses, I soften my intonation a bit to differentiate between them. The difference is minimal but it's enough to tell the ear that "somebody else has taken over the story." If I intended that to be a "disguise" I've enough range to sing those verses an octave higher and potentially be mistaken for a lass with a contralto range, but I don't do that. The one thing that's almost impossible to manage without "technical assistance" or outstanding talent is for an adult to successfully voice a child unless that adult is female, but given kids "aren't supposed to be on SL at all" that's hardly an issue.
  9. When I was actively selling stuff as my primary means of L$ income, I regarded maintaining group(s) as a necessary evil and subscribos and their ilk as the very spawn of hell. I hated that the only in-world way to contact my customers or announce updates was via one of these, being an old-school geek with all the hatred of spam that this implies. So I rigorously kept the volume low, and the thought of imposing further inconvenience on my customers by making them pay to hear about updates or bugfixes, or requiring them to be unpaid advertisers for my products by showing my group in their profile, would have been the worst kind of anathema - let alone the inconvenience to me of actually trying to police and enforce it! I totally sympathize with the OP and would run, not walk, away from any business that tried to behave like that. In my current line of SL "work" I'm well out of the whole mess. Now, as a performer, I explicitly run neither. If I actually owned a venue it would have to be otherwise, but the venues advertise my gigs already, and anyone in a position to "click the button and join my group!" is already at one of those venues and probably a member of one of their groups, so any notifications I might send out would be duplicates for most recipients and, to my mind, nothing but spam. Instead I make available a link to a public gcal which shows all my bookings, including SLURLs for every event. If they are regulars at a venue they'll know I'll be there, and if they are not but still want to come listen to me they can always find out where I'm playing that day - and I don't have to spam them about it.
  10. I don't know how everyone else deals with it, but what I do is make the highest detail model first - and this is an "insane LOD" that never gets uploaded - it's the source for baking normals. I unwrap that, trying to make sure that I'm sensibly placing seams along edges which will persist in lower LODs. I then start with a duplicate of that model, complete with its UV map, and remove detail that I will be relying on normal mapping to show in the uploaded model - making sure to not delete any edges which are seams. That gives me the high LOD. I repeat the process to make the medium, low and lowest LODs, making each from the next-higher model and always keeping the seams of my UV islands. All my LOD levels have the same UV map and while the fitting of the texture to the model will degrade slightly with lower LODs you never get tearing from having faces in a lower LOD not contained within a single UV island.
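For anyone scripting that "remove detail but never touch a seam" step, a rough Blender-Python (bmesh) sketch; it assumes the duplicated LOD mesh is the active object in Edit Mode and that the detail edges you want to remove are selected:

```python
import bpy
import bmesh

# Dissolve the selected detail edges on a duplicated LOD mesh, but refuse to
# touch anything marked as a UV seam so the shared UV layout survives.
obj = bpy.context.edit_object
bm = bmesh.from_edit_mesh(obj.data)

doomed = [e for e in bm.edges if e.select and not e.seam]   # never seams
bmesh.ops.dissolve_edges(bm, edges=doomed, use_verts=True)

bmesh.update_edit_mesh(obj.data)
```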
  11. Let's face up to one unpleasant fact: those of us that successfully sell "products" in SL are pretty good at peddling snake oil. We have to be, in order to make a go of flogging the intangible for what is equivalent to real money, for all the legal nicety that the L$ is an "in-game token." I say this not as a criticism, but from the honesty born of having been moderately successful at doing it in the past. Most of us have encountered situations where we can achieve the effect we want but our skills and knowledge are only up to doing it in a way that isn't "kind to the machine" - and we are salesmen as much as we are artists, so if we can talk that up into a feature rather than a detriment, we tend to do so. I know I was guilty of that on a couple of occasions. My knowledge and skills have improved since then and I'd hope I would not fall into that trap again, but if we're all being honest I think most would admit to at least a few instances of it in their history. And because we are good at peddling snake oil, folks buy into those sales pitches. You see it in software too: horrible kludges being marketed as a feature until even developers that know better have to hold their noses and implement it to stay competitive. You will never convince folks that have bought in that way that it's bad, so consumer choice will never fix this. That's why LL have to, if it's ever going to change. They just don't think the pain it would cause the userbase is worth it, so they don't do it. They don't even incentivize merchants and creators to make improvements in this regard for new products, let alone update older ones. Without both the stick and the carrot, way too many of those merchants and creators just ain't gonna change what they are doing.
  12. All this back and forth about the subjective "quality" of optimized content versus builds/avatars where optimization has not been done misses a fundamental point: a straight visual evaluation is subjective, and it's also misleading. Folks are decrying optimization as "looking worse" - when the "poorer appearance" is just as likely to be a dislike of the creator's chosen aesthetic as it is a consequence of poor or overzealous optimization. It's perfectly OK to prefer one creator's style over another's - that variety is something that makes SL what it is. But both could be equally optimized - you'd still prefer the one over the other, but it would perform better!
Textures: At the closest normal range to view an avatar - say a fairly close-up portrait - or viewing a non-avatar build from a similar distance, any texture resolution where you've got multiple texture pixels in a single screen pixel is too high. It can be reduced without changing what appears on your screen at all; the appearance will not change in the tiniest amount from making this optimization. Your screen cannot display any higher resolution, so feeding it a texture that has that higher resolution is a waste of VRAM and a cause of lag. This is not even a "best efficiency" optimization - that would be to set the cutoff where 1 texture pixel = 1 screen pixel at the most common distance the avatar or object is viewed from, which would result in textures that were smaller still, and unlike the "closest normal distance" optimization you would notice it in close-up shots. Nonetheless, it's still more optimization than most SL creators do. For most SL avatars and objects you'd have to have your camera pressed right up against the clipping plane to get 1 texture pixel to 1 screen pixel, if indeed you ever could. But when an optimization can be proven to make no difference to the displayed image - simply because of the display resolution compared to the resolution of the displayed texture - you can't claim it detracts from the quality at all. If you think it does, you're simply wrong.
Polycount: Particularly for avatars, this one is more awkward. It is undeniable that where an avatar's skin must deform, around joints for example, a higher vertex density and therefore a higher polycount makes that deformation smoother and more natural looking. But when you look at an arm and see that there's no discernible difference in vertex density between the areas around the shoulders, elbows and wrists and the areas in the middle of the upper arm and forearm, then that's a badly unoptimized model. Period. Nobody is saying "You have to make your avatar look like crap as soon as it gets animated" - that isn't what optimization is about. It's about not putting in complexity that you don't need and will never see any visual improvement from. A higher vertex density around the joints is (mostly) fine and does give better, more natural-looking movement with the same animation. Way too many creators in SL just increase the vertex density everywhere to achieve this - it's easy to just add a subdivision factor to the model, after all - and then get all bent out of shape (ironically) when they get told their av is a lagmonster, or they pitch a snit when somebody points out their creation looks solid in wireframe view, often claiming that it's that way because it's "high quality."
This last is particularly pernicious, because a user that has spent time and effort on their appearance will frequently pick this up and actually go shopping for these horrible things in the mistaken belief that it "looks better." And because they've told themselves that it does, the Emperor's new doublet looks just fine.
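A back-of-envelope way to check the texel-per-screen-pixel argument above; the screen height, field of view, subject size and viewing distance are example values, not anything SL-specific:

```python
import math

def texels_per_screen_pixel(tex_res, subject_height_m, distance_m,
                            screen_height_px=1080, vfov_deg=60.0):
    """Roughly how many texture pixels land in one screen pixel."""
    # Perspective projection of the subject's height onto the screen.
    on_screen_px = (screen_height_px * (subject_height_m / distance_m)
                    / (2.0 * math.tan(math.radians(vfov_deg) / 2.0)))
    return tex_res / max(on_screen_px, 1.0)

# A 2 m avatar viewed from 3 m away, with a 1024 texture covering the body:
print(round(texels_per_screen_pixel(1024, 2.0, 3.0), 1))   # ~1.6 texels per pixel
# The same shot with the texture reduced to 512:
print(round(texels_per_screen_pixel(512, 2.0, 3.0), 1))    # ~0.8 texels per pixel
```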
  13. The principled shader works pretty well, so long as you're not playing clever games with the alpha channels in your materials. HOWEVER... the place where you're going to have a seriously hard time getting "SL-like" display of your materials in Blender is specularity. The principled shader is PBR-based whereas SL shading is not. The biggest "gap" between them is between the PBR parameters for specularity in the principled shader and the modified Phong used by SL.
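One way to get a feel for that gap is the rule-of-thumb mapping between microfacet roughness and a Blinn-Phong specular exponent that floats around the real-time rendering literature. This is only an approximation for intuition, not what SL's shaders or Blender's Principled BSDF actually compute:

```python
# Rule-of-thumb conversion: Blinn-Phong exponent n ~= 2/alpha^2 - 2, with
# alpha taken as the square of a "perceptual" roughness slider value.
def phong_exponent_from_roughness(roughness):
    alpha = max(roughness, 1e-3) ** 2
    return 2.0 / (alpha * alpha) - 2.0

for r in (0.1, 0.3, 0.5, 0.8):
    print(f"roughness {r:.1f} -> Phong exponent ~{phong_exponent_from_roughness(r):,.0f}")

# The exponent explodes at low roughness, which is one reason a visually
# matched specular response is hard to carry across the two shading models.
```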
  14. ^^this. For this reason alone it's often a good idea to start by making your highest LOD model - or even an "insane LOD" model with details you'll later bake as a normal map - first, and then unwrap it before you start removing detail to make the LOD models you upload. Get rid of the details in Blender by "dissolving" them rather than "deleting" them to make a lower LOD model - that will not destroy your UV layout so long as you make sure not to remove any UV island seams. Zap a seam and your UV mapping goes to hell in a handbasket, because you'll then have a "face" made up of vertices from different islands, which will not conform to your layout. Adding detail after unwrapping will usually result in a UV map that is not compatible with the lower-detail one - to the extent that Blender will barf if you try to bake normals from one to the other, for example. To maintain compatibility, the unwrap has to be done at the highest level of detail, and then you keep that same UV map while making the lower-detail models out of the more detailed ones.
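A rough Blender-Python sketch of that "bake from the insane-LOD source onto the upload model" step, using Cycles' selected-to-active bake. The object names and cage distance are placeholders for your own scene, and the low-poly model needs an image texture node selected in its material to receive the baked map:

```python
import bpy

# Selected-to-active normal bake in Cycles.
bpy.context.scene.render.engine = 'CYCLES'

hi = bpy.data.objects["insane_lod_source"]    # detailed source, never uploaded
lo = bpy.data.objects["high_lod_upload"]      # shares the source's UV map

for ob in bpy.data.objects:
    ob.select_set(False)
hi.select_set(True)
lo.select_set(True)
bpy.context.view_layer.objects.active = lo    # the bake lands on the active object

bpy.ops.object.bake(type='NORMAL', use_selected_to_active=True,
                    cage_extrusion=0.05)
```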
  15. Think of the surface of your model laid out like a sewing pattern or a Pepakura project: all of the surface unwrapped and laid flat. Then you solve the puzzle of fitting the pieces as efficiently as possible into the frame of the texture image - the parts of the image inside the pattern pieces are what appears on your model. Making that pattern layout is what is meant by "UV unwrapping" the model.
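In Blender terms, the two steps of that analogy (cutting the pattern and laying it flat) are marking seams and unwrapping. A tiny sketch, assuming you are in Edit Mode with the seam edges selected:

```python
import bpy

# Mark the selected edges as the "cut lines" of the pattern, then let the
# unwrapper flatten the pieces into the 0..1 texture square.
bpy.ops.mesh.mark_seam(clear=False)
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.02)
```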