Posts Tagged ‘amazon ec2’

Unity Networking looks like it is getting closer!

April 15, 2015

Exciting!  It looks like the first iteration of Unity Networking might land in Unity 5.1.  I'm quite enthused by this, as it is the key development I've been waiting on before investing heavily in building the next version of my multiplayer networking project.  Well, not necessarily the only option: I recently acquired a license for the Bolt engine, which looks like a reasonable alternative contender for this sort of functionality in Unity.  So I may actually proceed with that for the time being, and potentially also for the foreseeable future.

In other news, a month or two back I was working quite hard to deploy a custom build of BrowserQuest on Google App Engine.  I'll spare you, the reader, the details, but basically the main obstacle is documented in this Stack Overflow question, which led its author to raise this ticket.  So it doesn't look like this sort of thing is possible at the moment – at least on Google App Engine.  On Amazon Elastic Compute Cloud, though, it should certainly be quite doable.  Perhaps one could even write an AWS Lambda function to handle the Node.js server requests – now that would be an interesting learning experience.


Experiences in deployment of my Unity application to Amazon EC2

June 22, 2014

In this post I will document my experiences in attempting to deploy my Unity application to the cloud.

First of all, I decided to use RightScale and Amazon EC2, as I had succeeded with this combination in the past when deploying SmartFoxServer 2X / Unity prototypes.  Of course, in this instance there was a new dimension: interaction with the database.

In the end, I adapted the SmartFoxServer template by adding a couple of my own RightScripts.  In particular, in addition to installing the SmartFoxServer 2X instance itself, I was interested in doing the following things on boot of the server (a sketch of such a script follows these lists):

  • download extension jar files from Dropbox
  • download a shell script from Dropbox
  • download a MySQL dump of my database from Dropbox
  • install MySQL Server 5.5 and start the server
  • load the database dump into the appropriate tables
  • edit the log4j.properties file for SmartFoxServer to enable trace logging
  • edit the XML document for the appropriate room to reflect the desired configuration options for the extension jar file (optional, see below)

And finally:

  • an operational RightScript to re-download the database dump and the extension jar file to the correct locations
  • an operational RightScript to copy the server logs to my Dropbox (via wput)
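To make this concrete, here is a minimal sketch of what the boot portion of such a RightScript might look like.  All URLs, paths, and file names below are placeholders rather than my actual ones, and the exact log4j logger / appender names will depend on the configuration that ships with the SmartFoxServer install:

#!/bin/bash -e
# Boot-time RightScript sketch (placeholder URLs and paths throughout).
SFS_HOME=/opt/SFS2X

# Items 1-3: pull the extension jar, helper shell script, and database dump
wget -q -O "$SFS_HOME/extensions/MyExtension/MyExtension.jar" "https://dl.dropboxusercontent.com/u/XXXX/MyExtension.jar"
wget -q -O /root/setup.sh "https://dl.dropboxusercontent.com/u/XXXX/setup.sh"
wget -q -O /root/world.sql "https://dl.dropboxusercontent.com/u/XXXX/world.sql"
chmod +x /root/setup.sh

# Item 6: raise the root logger to trace level in log4j.properties
sed -i 's/^log4j.rootLogger=.*/log4j.rootLogger=TRACE, stdout, fileAppender/' "$SFS_HOME/config/log4j.properties"

Items 4 and 5 – installing MySQL and loading the dump – are covered in the aside further down.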

Admittedly, the first three are not terribly secure from a data-security standpoint, but as I am primarily interested in just getting things to work, and relatively cheaply, this seemed like a good compromise – particularly since for a t1.micro instance (which I eventually plan to use in lieu of an m1.small) the only way to configure the thing is on boot.  I eventually opted for a RightImage based on Ubuntu 12.04.

Additional things I needed to do included:

  • configuration of the SmartFoxServer admin tool to enable the extension jar file
  • management of the appropriate security groups to open the ports for MySQL (see the sketch below)
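Incidentally, I managed the security groups through the RightScale dashboard, but for reference the equivalent with the AWS command line tools would look something like the following – the group name is hypothetical, 3306 is MySQL's default port, and 9933 is SmartFoxServer's:

# open the MySQL and SmartFoxServer ports on a hypothetical security group
aws ec2 authorize-security-group-ingress --group-name sfs-server --protocol tcp --port 3306 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name sfs-server --protocol tcp --port 9933 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name sfs-server --protocol udp --port 9933 --cidr 0.0.0.0/0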

Eventually I managed to get all of this to work (after many abortive attempts at jumping straight past MySQL 5.5 to MySQL 5.6, which I did not need)*, but I was still not able to authenticate / log into the application.  This led me to investigate using log4j within my extension jars to capture / trace behaviour at that end – though it turned out that even this was not necessary, since the server should have been logging what was going on with the program anyway.

*As a minor aside, MySQL is quite easy to install – indeed, I found that the following was sufficient for my purposes:

# suppress the interactive prompts (e.g. the root password dialog)
export DEBIAN_FRONTEND=noninteractive
apt-get -q -y install mysql-server
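With the noninteractive install the MySQL root password is left empty, so loading the dump afterwards (item 5 from the list above) reduces to a couple of lines along these lines – the database name is a placeholder:

mysql -u root -e "CREATE DATABASE IF NOT EXISTS world"
mysql -u root world < /root/world.sql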

At this point, somewhat discouraged by the apparent lack of progress, I decided to focus on other things for a number of months.  I finally returned to the problem of cloud deployment of this project relatively recently, and finished addressing the remaining roadblocks.  Essentially, the key things still left unfinished were:

  • The SmartFoxServer version was out of date.  I fixed this by altering the RightScale scripts so that, instead of performing a wget on a file in Amazon S3, they fetched a zip file from my Dropbox, which I then instructed the server to unpack on boot.
  • UDP configuration in SFS2X/config/server.xml.  It was necessary to give the server access not only to TCP but also to UDP.  I had previously opened up the ports with an appropriate security group, but for some reason I hadn't arranged for the server.xml file itself to be configured accordingly (a sketch of the change follows this list).
  • BasicExamples.zone.xml.  Running locally, I was able to configure this without hazard from the admin page, but due to permissions on the virtual machine this didn't work properly in the cloud.  I managed to get around it by editing the file manually from the terminal (sudo pico).  In particular, what needed to be modified was:
    • setting the custom login flag to true
    • adding a reference to the appropriate extension jar class
  • Capitalisation in one of the scripts for creating a room, which prevented the server from recognising the room extension.  I think this was either an operating system difference (the lookup being less brittle on Mac than on Ubuntu), or perhaps something made less fragile in SFS2X 2.8 (the version I can run locally) that needed more care in 2.7.  Regardless, changing the name of the configuration parameter from lower to upper case fixed the issue.
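For reference, the server.xml change amounted to adding a UDP socket entry alongside the existing TCP one.  A rough sketch of scripting that edit is below – I am quoting the element from memory, and the address attribute may differ on your install, so check it against the actual file first:

# duplicate the TCP socket entry as a UDP one in SFS2X/config/server.xml,
# so that the socketAddresses section contains both of:
#   <socket address="0.0.0.0" port="9933" type="TCP"/>
#   <socket address="0.0.0.0" port="9933" type="UDP"/>
sudo sed -i 's|<socket address="0.0.0.0" port="9933" type="TCP"/>|&\n        <socket address="0.0.0.0" port="9933" type="UDP"/>|' /opt/SFS2X/config/server.xml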

Fixing all these items meant that, for one thing, I was finally able to see the print / debug statements from my jar files in SFS2X/logs/smartfox.log while the server was running.

More importantly, the end result of all this was that I was able to log in to the world and obtain the same location persistence as I did here.  In other words, I could move in the world, log out, and then log back in at the same spot – but rather than everything running locally, my client was connecting to a server and database running on Amazon EC2.  Neat.

__________

So that's my report on my experiences with Amazon EC2.  It took a few months, but I got there in the end.

In terms of next steps:

I've been giving some thought to how to keep track of the movements of creatures (eg, mobs and NPCs), as well as persistent world events (such as weather and the day / night cycle), in the application.  This is an issue because, although I have managed to obtain persistence of player characteristics (eg, potentially level, stats, equipment, friends list, and of course what I have implemented so far, namely position and rotation), if two players log in and each run all the creatures / weather / etc on their own clients, they will have different views of the environment from each other.

To get around this, one would need what might aptly be called an instance of the game running on the server – with, say, a vector of transforms / spawn points representing creatures (as opposed to a single transform and spawn point per player), timers for the day / night cycle, and so on, all managed by the server program, which would in turn synchronise with the database and with any other connected player clients.  There is really no getting around the need for a separate client running on the server, since creature movements would be too complex to model properly if merely updated in, say, a plain Java program.

In terms of a more concrete roadmap and action plan, however, I am fairly interested in refactoring this project in preparation for Unity 5, as then I might be able to take advantage of UNet – perhaps a more sensible networking framework than SmartFox if I am going to be developing in Unity.  It is also important that I find a good way to simplify my project layout: although I finally have the thing under version control, the project structure is a mess, containing a great many assets that I don't need, am not using, or that are not arranged in a particularly sensible way.  One thing I'd like, for instance, which is sadly not the case now, is a clear separation between source assets and project files – in particular, scripts.

Furthermore, I have been thinking a bit more about the general roadmap going forward.  I've decided that aiming to empower players to create dungeons and the like, in a way that makes it easy to put something together rapidly, might be a good way to go.  I know that Neverwinter's Foundry already does this sort of thing, but the angle I had in mind was more of a 3D version of Roll20 – where players would have tools / templates empowering them to create worlds and run Dungeons and Dragons sessions in such mockups.  The emphasis / focus would then be on providing tools for a good Dungeons and Dragons experience, rather than on building a complicated combat, levelling, and itemisation system.  In this way, I would seek a slightly different path from the one trodden by the more formulaic hack'n'slash MMORPGs that are all over the place these days: less action, more emphasis on old-school role playing.

Anyway, I guess it is something to think about.

Latest developments in the PaaS universe

June 28, 2013

Hi folks,

I've been considering the feasibility of setting up a small PaaS on RightScale using Amazon EC2.  There are a few bits and pieces that might facilitate this.  Most recently, I came across this interesting post, which pointed me to Dokku – a project that is only a few days old, but which looks quite promising and very interesting.

An alternative strategy for PaaS cloud hosting might be to write a RightScript or three and get Sandbox installed after Docker on a virtual box on EC2, via this tutorial (a key difference being that, if using RightScale, RightScripts would likely take the place of Vagrant in setting up the system; a rough sketch follows below).  However, Docker itself only supports Python as yet – of course, the project is still young! – and does not yet (it would seem) support databases, as required by Django apps.  Dokku, I believe, supports Django and a number of other stacks that are not even Python-based – and it has some quite clever chaps, from MIT I believe, driving it.
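Were one to go down that route, the RightScript standing in for Vagrant might look roughly like the following.  I am quoting the Dokku bootstrap URL from memory of the project README, so treat it – and the package list – as an assumption to verify rather than gospel:

#!/bin/bash -e
# Rough stand-in for the Vagrant provisioning steps: prerequisites first,
# then Dokku's bootstrap script (URL from memory; verify before use).
apt-get update
apt-get -q -y install git curl
curl -fsSL https://raw.github.com/progrium/dokku/master/bootstrap.sh | bash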

Dokku builds not only on Docker, but also makes use of a couple of other useful GitHub projects in order to work its magic.

Essentially, it should allow an admin to deploy multiple data-driven websites / cloud services / applications on the same server (eg, a data-monitoring / data-feed service that notifies relevant subscribed users by email / SMS, say via Google Cloud Messaging).  According to the project description, however, it does not yet support multitenancy – which I assume means multiple admins / admin user logon (controlled by a master admin) – nor does it yet support multiple hosts (ie, scaling to multiple virtual boxes, or failover between them).
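Part of the appeal is the Heroku-style workflow: once Dokku is installed, deploying an app should just be a matter of adding the Dokku host as a git remote and pushing to it – the host and app names below are hypothetical:

git remote add dokku dokku@my-ec2-host:myapp
git push dokku master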

More here.