Oct 20, 2016 - BulletSim Notes

A while back I sent an email describing some of the features of BulletSim, the physics engine I wrote for OpenSimulator. This is useful information, so it should be on the web somewhere:

The C# part of BulletSim can live in addin-modules – it doesn’t need to be ‘in core’, but it does need to be built with core so it can be an add-in module.

There is a separate OpenSimulator source tree, opensim-libs, at “git://opensimulator.org/git/opensim-libs”, that holds a bunch of the non-core parts of OpenSimulator (the HTTP server, old attempts at physics engines, …). The C++ portion of BulletSim is in ‘opensim-libs/trunk/unmanaged/BulletSim’, along with instructions for fetching the Bullet sources, patching them, and building them with the interface to the C# code. The C++ wrapper mostly deals with passing structures back and forth between the C# and C++ code (pinned memory for the position updates and collisions, copying meshes as arrays of floats, …).

The BulletSim design is built around making a simulation step cost only one transition between C# and C++. So, under normal running conditions, there is only one transition per simulation step, and the data (position updates and collisions) are passed in pinned memory so there is no copy. 98% of the C# code deals with driving Bullet and adapting it to what OpenSimulator requires (link sets (ugh!), …). The C# -> C++ interface for BulletSim is rather large; physics engines seem to have lots of calls for all their features. Bullet, for instance, has what seems like zillions of methods for changing constraint parameters, and I made all of those appear in the interface to C#. If I had it to do over again, I’d probably go with a more functional design, where there is a single “call a named function with a parameter blob” entry point, so the C#/C++ interface would be smaller and new functions could be added without changing the binding to the DLL. Some fancy reflection could then build the binding on both sides.
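To make the “named function with a parameter blob” idea concrete, here is a minimal Python sketch of that dispatch style. All names here are hypothetical illustrations, not BulletSim’s actual API: the point is that one generic entry point replaces zillions of per-feature calls, and new functions are added by extending the table, not the binding.

```python
# Hypothetical sketch of the "named function + parameter blob" design.
# The blob is a flat list of floats, mirroring the 32-bit-safe style
# the post describes for the real C#/C++ interface.

def set_gravity(params):
    # params: [x, y, z]
    return {"gravity": tuple(params)}

def set_constraint_param(params):
    # params: [constraint_id, param_id, value]
    return {"constraint": int(params[0]), "param": int(params[1]), "value": params[2]}

# One dispatch table: adding a function here never changes the binding itself.
DISPATCH = {
    "SetGravity": set_gravity,
    "SetConstraintParam": set_constraint_param,
}

def call_physics(name, param_blob):
    """The single generic entry point across the language boundary."""
    return DISPATCH[name](param_blob)
```

In a real C#/C++ binding, `call_physics` would be the one exported native function, with the name and blob marshaled across; reflection on each side could then generate the per-function wrappers.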

The .NET C#/C++ binding is pretty good, except that ints and booleans can change size between 32- and 64-bit platforms. If you look at the BulletSim interface you’ll see I use floats and arrays of floats everywhere, because they are always 32 bits.
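The size concern is easy to demonstrate with Python’s `struct` module, which exposes the same native C sizes a binding has to cope with:

```python
import struct

# A native C long varies by platform/compiler (4 or 8 bytes), which is
# exactly what makes integer types fragile across a language boundary.
native_long = struct.calcsize('l')

# A float is 4 bytes everywhere, which is why the BulletSim interface
# leans on floats and arrays of floats.
float_size = struct.calcsize('f')
```

Run on a 64-bit Linux box `native_long` is 8; on 32-bit platforms (and 64-bit Windows) it is 4 — but `float_size` is 4 in every case.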

I recently played with building “BulletThrift”: a version of BulletSim that uses Thrift to call a remote-process physics engine (an experiment in distributed physics). It didn’t get finished, mainly because the existing interface to the C++ module is so large. BulletSim actually has a HAL for accessing the physics engine, and there are two physics engines: the C++ Bullet and a C# port of Bullet. The latter was last used by Nebadon to run OpenSimulator on a Raspberry Pi. But this also means it is easy to add a link to a remote Bullet. That’s where I was going to add BulletThrift, which would call across the network to a remote Bullet server. My main reason for doing this was to be able to run Bullet in a pure C++ environment where debugging wouldn’t be complicated by the managed/unmanaged environment.
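The HAL arrangement described above can be sketched as an abstract interface with swappable backends. This is a hypothetical Python illustration of the shape of the design, not BulletSim’s actual class names:

```python
from abc import ABC, abstractmethod

class PhysicsEngine(ABC):
    """The simulator codes against this interface, so the backend can be
    the native C++ Bullet, the pure-C# port, or a remote server."""

    @abstractmethod
    def step(self, dt):
        """Advance the simulation by dt seconds.
        Returns (position_updates, collisions)."""

class LocalBullet(PhysicsEngine):
    """Stand-in for the in-process engine (native or C# port)."""
    def step(self, dt):
        return [], []   # stub: no bodies, no collisions

class RemoteBullet(PhysicsEngine):
    """Stand-in for a BulletThrift-style backend: every call is
    forwarded over the network to a remote Bullet server."""
    def __init__(self, client):
        self.client = client   # e.g. a Thrift client (assumption)
    def step(self, dt):
        return self.client.step(dt)
```

Because both backends satisfy the same interface, the simulator’s heartbeat loop doesn’t change when the physics engine moves out of process.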

If you distributed the physics engine, I’d expect you’d see, operationally, some of the things that happen when running BulletSim on its own thread, like jitter caused by a ‘beat’ between the physics simulation time and the simulator heartbeat. Running BulletSim on its own thread means the physics engine steps on that thread, while the passing back of collisions and position updates happens when the simulator heartbeat thread calls into the physics engine.
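The ‘beat’ comes from the heartbeat period not being an exact multiple of the physics step. A common way to model this (a sketch of the general fixed-timestep technique, not BulletSim’s actual code) is an accumulator whose leftover fraction drifts each heartbeat:

```python
def take_physics_steps(heartbeat_dt, physics_dt, accumulator):
    """Accumulate heartbeat time and take whole physics steps.
    The leftover fraction returned in the accumulator is the 'beat'
    between the two rates; as it drifts, some heartbeats get one more
    physics step than others, which shows up as jitter."""
    accumulator += heartbeat_dt
    steps = 0
    while accumulator >= physics_dt:
        accumulator -= physics_dt
        steps += 1
    return steps, accumulator
```

For example, an 11 Hz heartbeat (~0.0909 s) over a 60 Hz physics step (~0.0166 s) takes 5 steps and carries roughly 8 ms forward, so the step count per heartbeat alternates over time.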


Oct 19, 2016 - Gathering Prim Sources

Today’s work has been gathering sources for prim construction. The base library is PrimMesher. Then there is MeshmerizerR, which is part of libopenmetaverse. (By the way, the “R” in “MeshmerizerR” is my initial.) MeshmerizerR differs from the Meshmerizer that is in OpenSimulator in that it builds “faceted meshes” – meshes that are renderable, with all the prim faces separated so textures, etc. can be applied.

I should explain that, in the beginning, SecondLife defined all objects in their virtual world with procedural shapes. These are the ‘prim’s of which I speak. A ‘prim’ is a geometric shape (circle, square, …) projected along a path and then twisted, cut, and otherwise modified by parameters. The parameters for the construction of the displayable mesh are a ‘prim description’. The SecondLife(r) viewer would receive prim descriptions from the server and construct meshes for display. This design made sense when bandwidth was very limited (back in the modem days).
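The projection idea can be shown with a deliberately tiny sketch. This is a hypothetical, much-simplified illustration of prim construction (a real PrimMesher path also applies twist, taper, cut, hollow, and so on):

```python
def extrude_profile(profile, path_steps, height):
    """Project a 2D profile (list of (x, y) points) along a straight
    vertical path, producing one ring of vertices per path step."""
    verts = []
    for i in range(path_steps + 1):
        z = height * i / path_steps
        for (x, y) in profile:
            verts.append((x, y, z))
    return verts

# A square profile extruded in one step gives the 8 corners of a box.
square = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]
```

A handful of parameters (profile shape, path, height, twist, …) stands in for what would otherwise be kilobytes of mesh data — which is exactly why the design suited modem-era bandwidth.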

PrimMesher was independently developed code that implements the conversion of prim description to mesh.

libopenmetaverse is an independently developed SecondLife(r) protocol client. It has many functions for scripting a SecondLife(r) or OpenSimulator virtual world, but it also includes functions for calling PrimMesher and creating meshes.

The GitHub copies of PrimMesher and MeshmerizerR haven’t changed in a long time (and, in one case, the developer has sadly passed away), but they have been forked and copied into other viewer projects. This means that, if improvements were made, they live in other source repositories. Thus the job is to find those improvements and collect them.

SecondLife(r) has added other formats, and now there are sculpties as well as meshes. The mesh-reading code has been added to OpenSimulator, so that code needs to be incorporated into mine. Luckily, all of this code uses the BSD license, so the merged code will be distributable.

I might end up creating a pull request or patch to update libopenmetaverse.

I also spent some time today installing and playing with High Fidelity’s virtual world Sandbox and Interface. I will want to look into their asset storage system, but the user interface and experience are still pretty rough. I’m not sure where they are going with their system, but they have developed a lot of very cool avatar and infrastructure technology.


Oct 18, 2016 - Focusing on Demo

I’ve been suffering from analysis paralysis. What I envision for Basil and ultimately the Herbal3D system is a huge project with many components. These days, with the Internet and all the collaborations and projects happening, there are innumerable technologies to choose from. What languages to use? What IDE to use? What messaging library? And on and on and on and on and on. Argh! There are so many to choose from!!

Previous blog posts (Pesto to Python, Cassandra and Docker, Looking for a message bus, Thrift vs ProtoBuff) have all been about analysing various software libraries and packages. The net effect is that nothing has gotten done. Well, some little experiments and some documentation, but really no useful code or results.

The next step is to do something. The best thing to do is the prim-baking code and the comparison of display frame rates between a browser-based viewer and an Unreal Engine viewer. This experiment will create some of the required basic functionality and verify some of my conjectures about the basic Basil architecture. It will also be visual, and will hopefully spark interest and thus build a community of developers.

So, rather than worry about transport and APIs, over the next few days I’ll work on converting OpenSimulator prims and objects into “baked” mesh form, in various mesh file formats. Since an OAR file contains all the asset descriptions as well as the region placement, and since it is just a compressed TAR file, I will burst an OAR file and write routines to do the conversions and create new files. For display, I can just put that file structure under an HTTP server.


Aug 15, 2016 - Pesto to Python

After spending more time than I wanted on a NodeJS version of Pesto, I came to the conclusion that NodeJS was not the language of choice for that service. The giveaway was when I found the multi-threaded Thrift server classes for Python.


Of course! JavaScript is not a multi-threaded language. Pesto, though, is supposed to be the responsive messaging center of the whole viewer framework. That kinda requires multi-threading.

I originally chose JavaScript/NodeJS because I wanted to build a fancy, interactive, and responsive web interface to Pesto. I guess I will have to do that with some Python libraries instead. The Python 2 vs Python 3 fork is concerning when thinking of the long term, but I’ll have to see how that plays out.


Aug 14, 2016 - Cassandra and Docker

In my continuing effort to learn all the new technologies, I wanted to use one of the NoSQL databases. Since I want to store geographical data as well as metadata for virtual world objects, I steered away from document-oriented ones like ElasticSearch. That leaves ones like MongoDB or CouchDB or pure Hadoop.

The fickle finger of databases then led to Cassandra. It was used in the Sirikata virtual world project, it is scalable and clusterable, and it continues to be used in a lot of places. It is also available on AWS and other infrastructures, so it’s a good candidate.

The two target usages for Basil development are standalone, single-computer installations and larger, production installations. So, how to easily run Cassandra on a single desktop?

Another concern of mine is how to stay up-to-date with the latest versions and sources of any package. I want an answer for ‘what will building and maintaining look like a year from now’.

I pulled the Cassandra sources and looked into building them. Cassandra requires a specific version of Java and several other packages. This looked like it was going to be a nightmare to build on a general-use Linux system, and the dependencies would make upgrading difficult. And, like I said above, I want the latest and greatest, not the older versions that will be in the Ubuntu package repositories.

But wait. Looking around, I found a Docker version of Cassandra. Docker containers would provide isolation of the specific versions of all the libraries that go into Cassandra, and it would get me tangled up, again, in other interesting Internet technologies.

Docker also provides a solution for creating simple standalone systems as well as complex, production, scalable systems. A simple Docker setup would allow someone to just run a local Cassandra or they could deploy all of the services into the cloud and have a production environment.
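For the standalone case, the setup might be as small as the following docker-compose sketch. This uses the official `cassandra` image from Docker Hub; the published port is Cassandra’s standard CQL port, and the volume path is an assumption to adjust:

```yaml
# docker-compose.yml — single-node Cassandra for local development
version: "3"
services:
  cassandra:
    image: cassandra:latest
    ports:
      - "9042:9042"                            # CQL native transport
    volumes:
      - ./cassandra-data:/var/lib/cassandra    # persist data across restarts
```

The same service definition can later be scaled out or redeployed to cloud orchestration for the production case, which is the appeal.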

So, off down the rabbit hole of learning Docker and learning all of its setup and configuration options.