
Open grids and distributed asset systems

April 24, 2008

Posted by justincc in opensim, opensim-dev, opensim-grid, opinion, Uncategorized.

One of the things I really love about OpenSim is that it gives us a platform for all kinds of experimentation. One of the experiments going on right now is the establishment of “open” grids using the Second Life protocol. An “open” grid, such as OSGrid, is one where anyone (subject to the grid owner’s agreement), can host their own region server and reserve a place for it in the grid map. Once the server is online, it appears in that grid’s world and can be flown/teleported into and out of just like any other region. This is in contrast to “closed” grids, such as the OpenLife and the Linden grids, where all region servers are hosted by a central entity (though it looks like the Linden grid is soon to open up in a limited fashion by allowing IBM to host region servers of its own).

By terming these “open” and “closed” grids I don’t mean that “closed” grids are evil in any way – I think there are very good arguments for running a grid as a “closed” grid. One of these is quality control – if you’re running an “open” grid, large chunks of your world may suddenly disappear if region servers you have no control over unexpectedly go offline. More insidiously, if the software running a region isn’t working as it should then the experience of users may be degraded in various ways. For example, all inventory requests in OpenSim are currently routed through the region server. This gives the region server provider an opportunity to maliciously or accidentally return an empty or garbled response to an inventory request, possibly rendering your inventory invalid until you clear your cache and relog.

Nonetheless, there are a number of exciting things about an “open” grid that I think make it an idea very much worth pursuing. Most of these stem from the fact that it leaves you in control of your own server. Don’t like the performance on your region? Then get more powerful hosting on the open market. Want to allow web users to spectate on events in your sim? Then add in an OpenSim module which streams flash video from a fake avatar acting as a camera (this is just a random idea – I’m pretty sure it doesn’t exist yet for OpenSim!).

So what, then, are the current challenges for an “open” grid? One of the chief problems is that, until recently, it looks like the Second Life protocol was designed with a “closed” grid in mind. This is manifest in the Linden viewer in various ways. For instance, to go back to the earlier inventory example, once bad inventory information has been passed to the viewer by a rogue region, I believe it can’t be replaced with new information until a relog (and possibly a cache clear). Actually, I’d love to know if this isn’t true, but it’s rather difficult to determine since we can’t look at the viewer source (for reasons which have been exhaustively covered elsewhere) and Linden Lab hasn’t published all that much information on the protocol.

Also, to be fair, some of the newer protocol changes may make it easier to host an “open” grid. For instance, the inventory problem above certainly existed with the old method of sending inventory information directly over the client’s UDP session (which is connected to the region). However, with the new CAPS method of retrieving inventory (see here for an explanation of CAPS), it might be possible to provide a capability which routes inventory requests directly to a grid server rather than via the region server.
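To make the idea concrete, here is a rough sketch of a region handing out a capability map on agent entry, with the inventory capability pointing straight at a grid-level inventory service instead of back at the region. The capability names, hostnames and URLs below are illustrative inventions, not the actual Second Life CAPS vocabulary.

```python
# Sketch: capability granting where one capability bypasses the region.
# A capability is just an unguessable URL; the viewer neither knows nor
# cares which server actually sits behind it, which is what makes this
# rerouting possible.
import uuid

GRID_INVENTORY_URL = "https://inventory.example-grid.org"   # hypothetical
REGION_URL = "https://region042.example-host.net:9000"      # hypothetical

def grant_seed_caps(agent_id):
    """Build the capability map handed to a viewer on region entry."""
    def cap(base):
        # Each grant gets a fresh, unguessable path component.
        return f"{base}/CAPS/{uuid.uuid4()}"

    return {
        # Served by the region itself, as usual.
        "UpdateScript": cap(REGION_URL),
        "ParcelUpdate": cap(REGION_URL),
        # Routed directly to the grid's inventory service, bypassing
        # the (possibly untrusted) region host entirely.
        "FetchInventory": cap(GRID_INVENTORY_URL),
    }

caps = grant_seed_caps("some-agent-id")
print(caps["FetchInventory"])
```

Since the viewer treats every capability URL as opaque, nothing in this scheme requires viewer changes – only that the grid operator, not the region, is the one minting the inventory capability.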

Although agent inventories are a challenge for an “open” grid, today I really want to talk about the asset issue. The architecture of OpenSim currently has a very centralized model where all asset data is hosted by an asset service run by the grid operator. For a small scale grid, this could be a single server (for example, the reference asset server provided by the OpenSim distribution), hooked up to a backend database.

As the grid grows and more regions come online, the load on the asset server increases. The chief increase is in texture data – additional regions have neat terrain and objects built on them with lots of interesting textures, and the extra regions attract more people who want lots of individual pieces of clothing.

Here are two possible responses to this problem. The first is to scale up your central asset service – throw a few more servers in and tune up your database. This is relatively nice and simple, but it may mean you have to start charging for uploads to reflect your storage and maintenance costs. This might be a perfectly acceptable business model (an “open” grid isn’t necessarily a non-commercial grid).

The second is to distribute the asset data. Instead of hosting everything yourself, allow asset uploaders to provide an alternative location from which to retrieve textures, scripts, sounds, etc. This could be as simple as placing them on a webserver. This reduces the load on a central asset server and may allow an “open” grid to scale to a much bigger size without needing a commercial or formal donations system.

Naturally, there are problems with such a scheme. Suppose you buy an item from a vendor hosting their own texture data. What happens if their asset service goes offline? If you want to allow other people to come and build on your region, how do you make sure their asset uploads are routed to your asset service (assuming that’s what you want) rather than their own? I don’t know currently whether such problems are surmountable.

Is there any way that a distributed asset system could be implemented in OpenSim today? Using the current Second Life protocol (which is the only one in town right now) this might be possible. All asset requests (whether through the client’s UDP session or via CAPS) are routed through a central point, which means we would have to store a pointer in the asset database in order to tell the client where it can find the data for a particular asset. Under CAPS, I believe this http location could be provided directly to the client, resulting in an interaction as illustrated below.
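A minimal sketch of that indirection might look like the following: the central asset service keeps only metadata, and where an asset is externally hosted it answers with a pointer (here an HTTP redirect status and location) instead of the data itself. All identifiers, URLs and the database shape are made up for illustration – this is not the actual OpenSim asset schema.

```python
# Sketch: a central asset service that stores pointers for externally
# hosted assets and only serves bytes for assets it holds locally.

ASSET_DB = {
    # asset id -> metadata; 'location' is None for locally stored assets
    "texture-1234": {"type": "texture",
                     "location": "https://assets.vendor.example/texture-1234"},
    "script-5678": {"type": "script", "location": None},
}

LOCAL_STORE = {"script-5678": b"default { state_entry() { } }"}

def handle_asset_request(asset_id):
    """Return (status, payload) the way a minimal HTTP handler might."""
    meta = ASSET_DB.get(asset_id)
    if meta is None:
        return 404, None
    if meta["location"]:
        # Externally hosted: hand the client a pointer and let it fetch
        # the data from the vendor's own server.
        return 302, meta["location"]
    # Locally hosted: serve the bytes ourselves, as today.
    return 200, LOCAL_STORE[asset_id]

print(handle_asset_request("texture-1234"))  # a pointer, not the data
print(handle_asset_request("script-5678"))   # data served centrally
```

The metadata lookup still hits the central service on every request, but the bulk texture traffic is pushed out to the external hosts.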

We still need to contact the central asset service for each request, but at least all the asset data doesn’t have to be stored at, and flow through, this single point. Ideally, one might want to embed the asset’s location directly in the asset metadata so that the request can be made to the external data source directly.

However, I don’t believe this is currently possible within the Second Life protocol.

So far this is just an idea, but it might be one worth exploring in more detail as “open” grids continue to grow in size. I’ve taken a high level view of the distributed asset approach – almost certainly there will be issues once you get down to the nitty gritty of trying to make such a thing work. Constructive feedback is welcome – particularly in light of the fact that I’m not overly familiar with the plans for the evolution of the Second Life protocol (mainly because of a chronic shortage of time!).

Comments»

1. Olish Newman - April 24, 2008

Using web servers to store assets/textures is a very interesting way to decrease load on the central database. Maybe two or more webservers could host these as mirrors; that way, if one goes offline, the central grid asset server knows another online location?

I recently saw the Open UGAI hosted on web servers. Maybe it could be used to implement such a distributed system?

Or maybe use Squid on each server hosting regions, configured to look up assets and textures at other Squid parent caches?

2. Olish Newman - April 24, 2008

There’s also a problem with data security for grids wanting to protect residents’ creations from theft. Distributed asset servers may not be maintainable by external people in this case.

Anyway, I like this idea of a distributed system.

3. Christian Scholz - April 25, 2008

Have you had a look at what the Architecture Working Group in Second Life is doing? It was initiated by Linden Lab but is a community effort which aims to define a protocol for decentralized servers.

The hope is that OpenSim is part of that in the end.

Trust is of course a big issue here, widely discussed, and there are some ideas about a web of trust floating around.

Linden Lab also seems committed to porting their existing grid step by step towards this.

For more information see here:

http://wiki.secondlife.com/wiki/Architecture_Working_Group

I myself want to start soon on implementing an agent domain; if I were then able to connect via this to an opensim region, that would be cool 🙂

4. justincc - April 26, 2008

Hi Olish. Thanks for the comments. Yes, one way to distribute load would be to take an implementation of our existing Asset service (which is REST based) and host it elsewhere. As for Squid, I suspect that’s something one would use internally or as a local cache for a group of regions rather than as a way to distribute data. Asset security in an “open” grid is also an issue. However, having a “closed” grid doesn’t alter the situation all that much, as the asset data still has to go to a person’s client.

Thanks for the pointer Christian. I am aware of the AWG though I simply haven’t had time to follow it recently. I know other developers from OpenSim have been involved with it, though I think it’s always hard when the only time you have to spend on a project is your spare time! 🙂

5. Why I love OSGrid « justincc’s opensim blog - May 23, 2008

[…] my classification, OSGrid is one of the “open” grids I wrote about a while ago.  To recap, it’s one where anyone (subject to OSGrid’s usually very […]

6. Diigo Update (weekly) « Web2.0 in PBL High School - July 20, 2008

[…] Open grids and distributed asset systems « justincc’s opensim blog […]

7. The Parallel Selves Message Bridge « justincc’s opensim blog - February 4, 2009

[…] the grid operators completely or you run the grid yourself).  On a commercial closed grid or an open grid where third parties are operating region servers the trust issue is probably […]

