
Morgaine Dinova


Everything posted by Morgaine Dinova

  1. Yoz, you wrote that you wish to reduce QA costs. As a developer, I wish to increase viewer reliability through automated regression testing, and that would reduce your QA costs at the same time. By far the best way to automate viewer regression testing is to ride on the back of a project aimed at delivering a viewer extensions / plugins system, because that would kill several birds with one stone:
       • Reduce viewer complexity by refactoring much of the slower code into client-side scripts;
       • Reduce UI complexity by loading only those plugins needed for UI features at any given time;
       • Raise reliability because client-side scripts would be written in a safe scripting language;
       • Raise reliability because shared-state multiprogramming would be replaced by processes;
       • Harness multicore easily and safely because the hard work is done by the operating system;
       • Increase flexibility because small scripts are much easier to write than monolithic C++ code;
       • Satisfy the community much more because users could tailor the UI to their requirements;
       • Improve viewer structure by defining an internal API to which the scripting calls are bound;
       • Greatly enhance your QA because client-side scripts can perform unit and functional tests.
     Since this direction limits viewer complexity as the number of options grows (the very growth that worries Q), and directly assists in automating viewer QA, which is very likely to reduce your QA costs, don't you think it would be an excellent project to commence immediately, even without considering its many other benefits? A sketch of what a test-capable plugin interface might look like follows below.
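     To make the QA point concrete, here is a minimal sketch of a plugin interface with a built-in regression-test hook. All the names (ViewerAPI, Plugin, and so on) are hypothetical illustrations, not actual viewer code; a real design would bind these calls to the internal viewer API mentioned above.

        #include <functional>
        #include <iostream>
        #include <map>
        #include <string>

        // Hypothetical slice of an internal viewer API exposed to extensions.
        struct ViewerAPI {
            std::function<bool(const std::string&)> open_floater;  // open a UI panel by name
            std::function<bool(const std::string&)> is_visible;    // query UI element visibility
        };

        // A plugin registers named regression tests against that API.
        class Plugin {
        public:
            using Test = std::function<bool(ViewerAPI&)>;

            void add_test(const std::string& name, Test test) {
                tests_[name] = std::move(test);
            }

            // Run every registered test; return true only if all pass.
            bool run_tests(ViewerAPI& api) const {
                bool all_ok = true;
                for (const auto& [name, test] : tests_) {
                    bool ok = test(api);
                    std::cout << (ok ? "PASS " : "FAIL ") << name << '\n';
                    all_ok = all_ok && ok;
                }
                return all_ok;
            }

        private:
            std::map<std::string, Test> tests_;
        };

        int main() {
            // Stub API for demonstration: a "viewer" with one floater.
            bool floater_open = false;
            ViewerAPI api;
            api.open_floater = [&](const std::string& name) {
                floater_open = (name == "preferences");
                return floater_open;
            };
            api.is_visible = [&](const std::string&) { return floater_open; };

            Plugin ui_tests;
            ui_tests.add_test("preferences_opens", [](ViewerAPI& v) {
                return v.open_floater("preferences") && v.is_visible("preferences");
            });

            return ui_tests.run_tests(api) ? 0 : 1;
        }

     Each test is a small, isolated unit; in a finished extensions system such tests would live in a safe scripting language, out of process, rather than in C++.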
  2. Q: I'm not going to add to the comments above about the relationship with people who are trying to give you feedback. The many very well-written posts have expressed the situation perfectly. I will, however, comment on the technical matter of UI options, because I believe that you are almost entirely wrong about this. The issue splits into two parts, presentation and implementation.

     PRESENTATION

     For a user, options only introduce complexity and confusion if they are presented randomly or illogically, and without any means of self-help. Even a small number of options can be badly confusing if presented randomly, yet nobody would suggest that a supermarket is confusing just because hundreds of thousands of items are on offer. Humans are extremely good at scanning large numbers of items, especially if you give them cues or hints. It is the job of a good UI designer to ensure that all features are presented as clearly as possible; that is not easy, but it is a discipline with many years of study behind it. To suggest that we're reaching a point where more options would go beyond what can be presented cleanly seems to be false by simple inspection, and you have not justified such an arbitrary position in any way at all.

     What's more, the UI designer has a number of powerful weapons at his or her disposal which have not yet been deployed, or have been used poorly so far. In addition to classification or partitioning into submenus, subpanels and tabs, GROUPING and HIGHLIGHTING within a given panel is a very powerful technique which is almost unused in the viewer, yet it can convey a large amount of semantic information and helps trigger our highly evolved pattern-matching abilities. (And it looks pretty too!) We should use that a lot more, instead of making Preferences tabs all look visually similar. To a degree, grouping and highlighting can be applied to menus as well -- Debug Settings certainly needs that.

     Another very powerful weapon at our disposal is VISIBILITY. If you care to look at the Jira I filed a day or two ago -- http://jira.secondlife.com/browse/VWR-22781 -- you'll notice that *visibility* of each UI element is one of the UI attributes held in the configuration data stored under each preset. This is important, because there is no better technique for simplifying a UI than making parts of it disappear! Visibility control should be hierarchical, so that entire sections of the UI can be made to appear or disappear for a given preset (a sketch of this follows below). Couple this with a user being able to drag any desired visual elements to user-defined panels or edge frames, and you have a formula for making UIs as tiny and as simple as the user wants at any given time. What's more, the latter offers "user-definable UI design" -- you don't have to get it right (an impossible task anyway), because layout and visibility are in the hands of the user, and you only need to provide the UI infrastructure. It also makes your 50/50 vs 90/10 argument entirely moot, because it has the potential to satisfy everybody independently. I can't stress enough that there is no single UI layout that can satisfy everybody *simultaneously*, at all points of their day, so looking for a single median or a single subset is a severe mistake.

     Finally, we have not even begun to use UI assistive technologies, such as searching for UI entries by name, or providing multiple views into the UI classification space, nor adaptive technologies such as contextual selection and UI learning.
     Nor have we even begun to explore what pluggable extensions can bring to the table as a means of improving the UI for users. These are very early days, and suggesting that we can't handle many more options without trouble just makes no sense at all.

     IMPLEMENTATION

     On the development front, I'm puzzled by the objection to more options. Nobody codes options handling as a huge chain of if-then-elses anymore (well, nobody worth employing, anyway). Instead, options handling in non-trivial applications is typically callback-driven in the UI, and complex optional runtime functionality would typically get refactored into a semi-formal control mechanism like a state machine to prevent it from getting out of control. Callback dispatch and state control tables are inherently extensible without a complexity explosion greater than linear, as long as you pay some attention to state isolation, which is a normal feature of OOP -- the first sketch at the end of this post shows the idea. So I'm puzzled by the expressed worry. (The state explosion you described in your renderer code sounds like you're using too much shared code and not enough specialization.)

     One area that does suffer enormously from complexity explosions is concurrent programming, and if you continue to use shared-memory multiprogramming in the viewer then you will pay dearly, but that is your choice. The right way to do it is to let the operating system handle your concurrency and provide hardware-guaranteed state isolation in processes -- the second sketch at the end of this post shows the shape of it. We have discussed this approach many times over the last 2-3 years, and more recently the same topic was examined in the opensource-dev discussion about client-side scripting in February and March. Since you have used this technique for media handling already, I am sure that you understand the benefits, so I would expect you to refactor more parts of the viewer into separate processes to reduce monolithic complexity and eliminate shared-memory multiprogramming. This kind of refactoring during program evolution occurs in all non-trivial software, and if having more options means that you have to tackle it earlier rather than later, then this is a very good thing, since less effort will be wasted in a non-sustainable direction.

     Very helpfully, your "alternative a)" tells me that there is light at the end of the tunnel: you recognize what needs to be done, but you say it's a long way off. That's a reasonable statement when work has actually started but the timeline is long; otherwise it's simply a brush-off, even if unintentional. The road to coding hell is paved with good intentions, and the longer you wait, the more likely you are to end up in that hell. We need to start on a viewer extensions / plugins system right now, not later, because as you rightly suggest, that's a known path to the desired goal. Now that we have Snowstorm and a framework for open design, planning and development, I'm hoping that we can start on this work immediately.

     Before I end, there is a cross-cutting issue that needs to be highlighted. As I wrote in the Jira, virtual worlds inevitably evolve and become ever more complex, and UIs need to evolve and grow similarly in order to interface with all the features of those worlds. This cuts across presentation and implementation alike. Growth in features, and hence options, cannot be avoided -- we just have to deal with it.

     PS. I suggest that the "Options are bad" argument be left to the mists of time. It's more useful to replace it with "Let's work on multiple ways to refactor this monolith before it gets (even more) out of hand." That addresses both presentation and implementation problems simultaneously.

     Morgaine.
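     To make the dispatch-table point concrete, here is the first of the two promised sketches. The option names and handlers are hypothetical illustrations, not actual viewer settings: each new option costs one table entry, so growth stays linear, and each handler's state is isolated in its own object.

        #include <functional>
        #include <iostream>
        #include <map>
        #include <string>

        // Each option owns its isolated state and reacts via a callback.
        class ShadowOption {
        public:
            void set(bool on) { enabled_ = on; std::cout << "shadows: " << on << '\n'; }
        private:
            bool enabled_ = false;  // state isolated per option, not shared
        };

        class DrawDistanceOption {
        public:
            void set(int metres) { metres_ = metres; std::cout << "draw: " << metres << "m\n"; }
        private:
            int metres_ = 128;
        };

        int main() {
            ShadowOption shadows;
            DrawDistanceOption draw;

            // The dispatch table: adding an option is one new entry,
            // not another branch in a growing if-then-else chain.
            std::map<std::string, std::function<void(const std::string&)>> dispatch{
                {"RenderShadows", [&](const std::string& v) { shadows.set(v == "true"); }},
                {"DrawDistance",  [&](const std::string& v) { draw.set(std::stoi(v)); }},
            };

            // Simulated UI events: look up and invoke, no conditional chain.
            std::map<std::string, std::string> events{
                {"RenderShadows", "true"}, {"DrawDistance", "256"}};
            for (const auto& [key, value] : events) {
                auto it = dispatch.find(key);
                if (it != dispatch.end()) it->second(value);
            }
        }

     A full state machine follows the same pattern, with the table keyed by (state, event) pairs instead of option names.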
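     And the second sketch, for the process-isolation argument: a minimal out-of-process worker using POSIX fork and pipes. The message protocol and names are invented for illustration and are not how the viewer's media plugin host actually works; the point is that the operating system, not locks in shared memory, guarantees the worker's state isolation.

        #include <cstdio>
        #include <cstring>
        #include <string>
        #include <sys/wait.h>
        #include <unistd.h>

        // Run a subsystem in its own process and talk to it over pipes.
        // A crash or corruption in the worker cannot touch viewer memory.
        int main() {
            int to_worker[2], from_worker[2];
            if (pipe(to_worker) != 0 || pipe(from_worker) != 0) return 1;

            pid_t pid = fork();
            if (pid == 0) {
                // Worker process: e.g. a media decoder or a script engine.
                close(to_worker[1]);
                close(from_worker[0]);
                char buf[256];
                ssize_t n = read(to_worker[0], buf, sizeof(buf) - 1);
                if (n > 0) {
                    buf[n] = '\0';
                    std::string reply = std::string("decoded:") + buf;
                    (void)write(from_worker[1], reply.c_str(), reply.size());
                }
                _exit(0);
            }

            // Viewer process: send a request, read the reply, reap the child.
            close(to_worker[0]);
            close(from_worker[1]);
            const char* request = "stream42";
            (void)write(to_worker[1], request, strlen(request));
            close(to_worker[1]);

            char reply[256];
            ssize_t n = read(from_worker[0], reply, sizeof(reply) - 1);
            if (n > 0) {
                reply[n] = '\0';
                printf("viewer received: %s\n", reply);
            }
            waitpid(pid, nullptr, 0);
            return 0;
        }

     The same shape scales to any subsystem: define a small message protocol, move the code across the process boundary, and the shared-memory hazards move out with it.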