The following summary is taken from the TPV Developer meeting held on Friday, November 1st. A video, courtesy of North, can be found at the end of this report. The numbers in square brackets after each heading (where given) denote the time stamp at which the topic can be listened to in the video.

General Viewer News

As noted in part 2 of this week’s report, there are currently two release candidates in the LL viewer release channel: the GPU Table RC, which contains updates to the viewer’s GPU table but no functional changes, and another Maintenance RC, which includes finer access control for estate / parcel owners, a CHUI update allowing Conversations to be toggled between expanded and collapsed by clicking the icon, and more.

It is expected the Google Breakpad RC will be returning to the RC channel in week 45 (see below).

Several of the remaining anticipated viewer RCs / project viewers are, again as previously reported, held up as a result of issues uncovered in QA and / or bugs being reintroduced into them. These include:

The Group Ban List viewer: work here, which involves server and viewer changes, is held up as a result of QA testing revealing some issues which Baker Linden is addressing (as per part 2 of this report)

The interest list viewer, which recently saw a previously fixed issue (objects failing to render without a relog) reappear in the code, and which still has one or two other issues to be fixed, although Oz Linden feels those working on it are homing in on solutions

The HTTP viewer updates, which were for a time awaiting QA resources (see below for a further update).

AIS v3

[02:29-07:07]

The Lab is keen to start progressing this work towards a release. As with Server-side Appearance, they’re looking to TPVs to help with various aspects of testing. To this end, a request has been passed to TPVs that they indicate to the Lab when they have merged the code into experimental versions of their viewers so that a pile-on test can be arranged in order to put the updates through their paces.

There is no specific date for when this will take place, and commenting on the project in general, Nyx Linden said:

Now is a good time to start your merges; I’ve just pushed an update to Sunshine external, so you guys should have our latest and greatest … But again, this is not formally QA’d; we’ve been testing things as we’ve been going on, but it is not ready for release yet. But now is a good time to start doing test merges and getting side branches up-to-date with that.

The latest code includes a fix to viewer-side behaviour. On logging in to Second Life, the server sends a list of the things it believes an avatar is wearing; however, the message only has room for one wearable of each type (e.g. undershirt, shirt, jacket, etc.), and so it may or may not be up-to-date with the Current Outfit folder.

While the current release versions of the viewer ignore the contents of the message, they do still wait on the message for timing (thus slowing down avatar processing). With the new code, the timing pause is being done away with, so that the viewer should be able to start resolving the avatar from the Current Outfit Folder whether or not the message has been received. There is a slight side-issue with this change that may affect some avatars under limited circumstances, but a fix for this issue is due to be made available to TPVs before the code even reaches any experimental versions of their viewers.
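The timing change can be sketched as follows. This is a toy illustration only; the function and parameter names are invented for the purpose and do not reflect actual viewer code:

```python
# Toy sketch of the behaviour change described above. All names here are
# hypothetical, not taken from the viewer source.

def resolve_avatar_old(initial_wearables_message, current_outfit_folder):
    """Old behaviour: block on the server's initial-wearables message for
    timing, even though its contents are ignored, stalling avatar processing."""
    if initial_wearables_message is None:
        return None  # still waiting on the message; nothing resolved yet
    return sorted(current_outfit_folder)

def resolve_avatar_new(initial_wearables_message, current_outfit_folder):
    """New behaviour: resolve straight from the Current Outfit folder,
    whether or not the message has arrived."""
    return sorted(current_outfit_folder)

cof = {"shirt", "pants", "jacket"}
print(resolve_avatar_old(None, cof))  # None: processing is stalled
print(resolve_avatar_new(None, cof))  # proceeds immediately from the COF
```

The point of the change is visible in the second call: the Current Outfit folder is treated as authoritative, so the arrival (or not) of the server message no longer gates avatar processing.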

Viewer Crash Reporting

[09:00-14:50 and 26:03-31:15]

There is an issue with viewer crash reporting which means that a lot of crashes are being incorrectly reported as viewer “freezes”. This is something the Lab is aware of and is working to address. The problem is that a number of the mechanisms used to identify various types of crashes are not working, with the result that the associated crashes are being misreported as the viewer freezing.

As well as addressing this issue, the Lab has also been working in other areas related to Google Breakpad and crash reporting, including:

Simplifying and cleaning-up the creation and interpretation of the marker files used to generate crash rate numbers

Relocating these files much earlier in the viewer initialisation and log-out processes so that crashes which occur during the viewer’s initialisation or termination can also be captured

Addressing those crash reports which are generated but lack associated stack dumps or mini-dumps, and ensuring that in the future they do have the required information, thus allowing the Lab to fill in more of the blanks and gather even more meaningful data as a result of crashes.
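The marker-file approach underpinning the crash rate numbers can be sketched in a few lines. This is a generic illustration of the technique, not the Lab’s implementation; the file name and function names are invented:

```python
import os
import tempfile

# Generic sketch of marker-file crash detection: a marker is written as early
# as possible at start-up and removed as late as possible on clean exit, so a
# stale marker found at the next launch indicates the previous session ended
# abnormally. The path and names below are illustrative only.
MARKER = os.path.join(tempfile.gettempdir(), "viewer_session.marker")

if os.path.exists(MARKER):
    os.remove(MARKER)  # start the demo from a clean slate

def on_viewer_start():
    """Return True if the previous session appears to have crashed."""
    crashed_last_time = os.path.exists(MARKER)
    with open(MARKER, "w") as f:
        f.write("running")
    return crashed_last_time

def on_viewer_clean_exit():
    os.remove(MARKER)

# Clean run: marker is removed, so the next start reports no crash.
on_viewer_start()
on_viewer_clean_exit()
print(on_viewer_start())  # False
on_viewer_clean_exit()

# Simulated crash: the marker is never removed...
on_viewer_start()
# ...so the next launch finds it and counts a crash.
print(on_viewer_start())  # True
on_viewer_clean_exit()
```

Creating the marker earlier in initialisation (and handling it later in log-out) widens the window in which a crash leaves the tell-tale stale file behind, which is the motivation for the relocation work described above.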

It will be a while before this work is ready for inclusion in viewers; one reason for this is that the improvements to Google Breakpad require continual rounds of user testing as changes are made (hence the Google Breakpad RC appearing, vanishing and reappearing in the viewer release channel). However, once the code is ready for release, it should provide for more accurate crash reporting across all viewers. As the work comes to fruition, it should allow for more accurate identification of a range of crash situations and assist with the work of trying to eliminate them.

The Lab also intends to add additional stats based on individual OS crash rates within a given channel (so that, for example, stats on the Firestorm 32-bit release channel can be broken down by Windows, Linux and Mac).

64-bit TPVs: Feedback and 64-bit Windows OS Viewer Stability

[14:50-26:00]

Both Singularity and Firestorm have released 64-bit Windows alpha versions of their viewers. While it is early days for the latter, the feedback on both seems to indicate that they are a lot more stable than their 32-bit counterparts. This has apparently been particularly noticeable in crowded places where avatars are invariably wearing a lot of attachments, and where the 64-bit viewers tended to be “rock solid” while the 32-bit versions were known to fail.

That said, there are wide-ranging views on overall performance, with some reporting the 64-bit versions to be much faster, others reporting them to be much slower, and some seeing little difference at all in swapping between them (which has so far been my experience with Firestorm 64).

In terms of memory use, there are some indications that the 64-bit viewer does use more memory, although the increase doesn’t appear to be significant when compared to a 32-bit version built as Large Address Aware (LAA). However, it has been noticed that even with LAA, the 32-bit version of the viewer can experience issues on reaching around 2GB of memory use, and can crash, whereas the 64-bit version does not have this issue.
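The 2GB ceiling follows from the address space arithmetic: a 32-bit Windows process gets 2 GiB of user address space by default, the LAA flag can raise that to 4 GiB when running on 64-bit Windows, and a 64-bit process has vastly more. A quick worked comparison:

```python
GiB = 1024 ** 3

# Default 32-bit Windows process: 2 GiB of user-mode address space.
default_32bit = 2 ** 31

# 32-bit process with the Large Address Aware flag, on 64-bit Windows:
# up to 4 GiB.
laa_32bit = 2 ** 32

# 64-bit process: current x86-64 hardware exposes a 2^48-byte virtual
# address space, far beyond anything a viewer will allocate.
addr_64bit = 2 ** 48

print(default_32bit // GiB)  # 2
print(laa_32bit // GiB)      # 4
print(addr_64bit // GiB)     # 262144
```

This is why the 32-bit viewer can run into trouble around the 2GB mark even with LAA (fragmentation and other allocations eat into the headroom before the hard 4 GiB wall), while the 64-bit build simply never approaches its limit.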

Passing a general comment, Oz revealed that the aggregate viewer statistics tend to show that even 32-bit flavours of viewers are a lot more stable on 64-bit flavours of Windows than on 32-bit flavours. The Lab hasn’t tracked down why this should be, only that there has been a noticeable difference between the two flavours of the OS, which Oz went on to describe as “not huge, but significant.”

That 64-bit flavours of the viewer are now becoming more widely available has not been lost on the Lab, and it would appear they are watching how things progress closely. This will doubtless play a part in determining the direction they may take with 64-bit versions of the viewer themselves in the future, and in the provisioning of things like 64-bit Havok support.

HTTP 1.1 and Pipelining

[31:48-51:49]

Monty Linden reports that the viewer-side HTTP mesh updates are now with LL’s QA, who are “aggressively” trying to find problems, but have so far only uncovered very slow performance on Mac systems in general, which Monty describes as “not a regression, it’s always been that way.” It’s anticipated that the code will be in QA for around another two weeks, possibly a little longer, after which it is expected to appear in a project viewer.

In the meantime, Monty has started on the next phase of the work, HTTP pipelining, which will again see both server-side and viewer-side updates and changes. As a part of this work, he’s been going through the third-party libraries and their repositories which are used in the viewer builds and updating them. While he’s unsure if all the libraries will get a refresh, those he is looking at include openSSL, c-ares, zlib, libcurl, libpng, libxml, APR, SDL, and llqtwebkit (which may not be touched, as Monty describes it as “very confused”).

Many of these libraries have not been rebuilt in over 18 months, so Monty sees this work as beneficial in ensuring everything is cleaned-up, brought up-to-date and rationalised. The work will include adding checklists to the libraries, documenting them in terms of what to do and what not to do when using them, etc. One aspect of the work he’s not touching as yet is to rebuild the libraries in 64-bit, as he’d prefer to have the libraries cleaned-up and building consistently before tackling anything else.

On the server side of things, Monty indicated he may be looking even more aggressively at limiting connections to the capabilities services. This doesn’t mean that he is trying to limit people’s network performance in general. Rather, this work is aimed at improving the reliability and efficiency of connections between the viewer and the SL services to which it connects, by removing the need to set up and tear down lots of short-lived simultaneous connections in order to handle requests to the various SL services, replacing them with fewer, much longer-lived connections over which many requests can be sent, thus improving things for everyone.

However, the benefits of this work can be very severely impacted by people who still crank the number of concurrent connections (such as through a debug setting) up to several hundred. In doing so, they severely impact other people’s ability to connect to the same services, both in terms of performance and reliability, as well as potentially adversely impacting their own connection (and quite possibly their router). So setting limits on the number of connections a viewer can make to a given service or server is intended to prevent those viewers which do use high numbers of concurrent connections from adversely impacting anyone else attempting to use the same services.
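The principle of fewer, longer-lived connections can be demonstrated with standard HTTP/1.1 keep-alive, using only Python’s standard library against a throwaway local server. This is a generic sketch, not viewer code, and keep-alive reuse is the simpler cousin of full pipelining (which goes further by sending requests back-to-back without waiting for each response):

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 enables persistent connections

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        # An explicit Content-Length lets the connection stay open for reuse.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Throwaway local server on an ephemeral port, purely for illustration.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Open one connection and reuse it for every request: no repeated TCP
# set-up and tear-down per request.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
replies = []
for _ in range(5):
    conn.request("GET", "/")
    replies.append(conn.getresponse().read())
conn.close()
server.shutdown()

print(replies)  # [b'ok', b'ok', b'ok', b'ok', b'ok']
```

Five requests travel over a single TCP connection; a client opening five hundred simultaneous connections for the same work is the behaviour the planned server-side limits are intended to discourage.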

A further benefit of Monty’s work is that he has been adding instrumentation at many points within the services which will allow the Lab to monitor the various services more effectively, fine tune them and improve their ability to diagnose issues as they arise.

As this is a complex subject, Monty is also producing documentation on his work, including a blog post on the work completed to date and what it means for users; it is anticipated that this post will go out when the HTTP project viewer makes an appearance.

SLShare

Problems have been noted with the capability to upload images to Facebook through the new SLShare service. Images are capped at 1024×1024 and are highly compressed, which can lead to pixelisation. Images also automatically include a SLurl and around 90 characters of space for Google Analytics. As a result, questions have been asked as to whether the Lab can do anything about the image size cap / pixelisation, and whether the SLurl / Google Analytics additions could be removed.

Responding to the issues of image size and compression / pixelisation, Oz Linden indicated that both are being worked on, and that in the case of the compression level / pixelisation, it is apparently not simply a matter of changing the compression level, due to the way images are handled between the Lab’s services and Facebook. However, he is hopeful there will be a fix out soon. Feedback on the automatic inclusion of SLurls and Google Analytics will hopefully be provided in the future.

With thanks to North for the video.