Archive for March, 2013

Google App Engine now supports Django

March 24, 2013

It looks like GAE is no longer NoSQL-only; see here.

In other news, I have a strategy for extracting the MySQL tables from this WordPress blog for manual backup.  Essentially, it amounts to taking a copy of ~/data/wp-content via this method here.  (I can check that this is the correct location by running

dotcloud run -A <name of application> <name of database service>
ls

in a terminal window.) Evidently I will need to test that this works.  My plan is to create a new temporary application and see if I can “restore” the backup data to that application.  If I can, I won’t have to mess around too much with automating the process (I suppose automation is especially useful if one is setting up something like this for a client, and doesn’t necessarily want to trust them with command-line access to the code on the server itself).

It is worth noting that Google App Engine now also supports backups (although the feature is still experimental); see here.  It looks very workable!

A few things to investigate

March 21, 2013

I’ve been reading up on a few things recently, and I’ve come across the following interesting services.

Twilio (dotcloud tutorial here) is a service that provides text-to-speech (and presumably speech-to-text), as well as SMS messaging and a range of other handy telephony features.  One pays on a per-call / per-text basis, with rates depending on the country and carrier.

Tokbox is a service that allows multiple people to interact concurrently on a website (hosted, say, on dotcloud).  The first 25000 minutes of video streaming per month for three or more people at once is free, but beyond that they start to charge.  That does seem like a lot, but if my site had 100 users, that would be 250 minutes of video per month per user, or an average of a bit over 4 hours of use each before the limit was hit.

Regardless, the service itself is easily (at least, presumably easily) customisable to set up online conference rooms, virtual whiteboards, collaboration spaces, etc.  In fact, setting up a collaboration centre where people can work on a file together and then save it to disk seems like a nice idea.  I would probably prefer not to keep any information permanently on the server, but if people could upload something – as with Google Docs – and then be empowered to use various tools to modify it while discussing the project, that would be really brilliant.

In particular, if people could collaborate on extremely complex projects such as Blender projects or artwork, that would be wonderful (though potentially a real headache to implement – unless the software were somehow streamed to the browser from the server, and I don’t know how you would do that).  Possibly too hard.  But the idea of artists being able to collaborate on building assets for things (such as, but not limited to, Unity games) on a service I provide has definite appeal, and is worth working towards, at least in terms of finding out why such an app would be hard to write.

Zypr (see also here) is a Siri-like voice-activated service that aggregates a great many third-party APIs to simplify things for the developer.  Apparently the profit model here is advertising revenue, with a split going to the developer.  I’m not quite sure how this is done, but it doesn’t seem too bad, and it bears further investigation.

As for my to-do list, I’d like to fool around with Twilio a bit first, then move on to Tokbox and see if there are any easy wins to be had there.

Structs in C++, Unity game, and other musings

March 18, 2013

Hello again,

Yes, as per my previous discussion, it is a bit of a shame that I’ve since lost the posts I had here earlier.  Alas, the perils of not fully testing deployment / upgrade options on a remote service!  Still, the benefits of having this service far outweigh the minor inconvenience caused by the loss of some data.  And I can still remember most of what I posted.

I’ve looked into taking backups with dotcloud and have made some progress in this regard, but as far as the overall backup / recovery approach goes, I think I’d much rather keep local copies on my own computer; so that is what I am now doing.  Certainly using S3 as a backup mechanism gives me the heebie jeebies, since the cost of something going wrong could be a problem.  There are apparently two other approaches one can take – FTP and SSH.  I may see if SSH is doable at some stage.

In terms of the Unity game I’ve been constructing, the links to the videos in question are here: first video, second video.

In the first video, I demonstrate logging into the SmartFoxServer 2X FPS example deployed to an Amazon EC2 instance using the RightScale overlay.  I view this as quite an achievement and feel rightly proud of being able to build the project as it stands in this video, even though the code is not my own.  The cost to me of this project was $2 / day, since I deployed on a small EC2 instance (which I have since decommissioned, once I had established the utility of the concept).  RightScale itself costs nothing for up to 5 such servers, since they consume 300 of what are known as “RightScale Compute Units” (RCUs); one only starts to get charged in excess of 2000 RCUs.  Of course, the Amazon instances – or Rackspace, or whatever other service is configured with your RightScale account – still have to be paid for.

I kept the default configuration in this example (4 players per game, and a fairly coarse update rate).  Boosting the update rate on the server could lead to a more playable experience (i.e. less apparent latency / “lag” – in addition to the real latency, of course!).  As for the server itself, it can support up to 100 connections at a time – more than this requires a licence from SFS (their profit model).  Of course, much more than that might well overload a small Amazon EC2 instance regardless.

The second video demonstrates my progress in pulling the technologies used in the first into a project I have been building, following the BergZergArcade RPG tutorials (quite a resource!).  It shows two screens logged into a local SFS 2X server running on my machine.  In the video I move one of the characters, and, lo and behold, the transform of the character updates on the other screen.  I was very happy with that achievement.

The way an SFS-Unity game essentially works is that one runs the client (which may sit in the web folder of the server and be downloaded to a player’s machine if run in the browser, or run as a standalone executable), and the client speaks to the extension JAR sitting on the server (which is written in Java).  The extension JAR handles messaging between player clients.  Quite intricate and hard to debug, but very interesting.  Certainly, if there is a more practical way of managing client / server code and source control, I’d be very interested in learning of it.

Regardless, I’ve made a bit of progress narrowing down the issue I’m having with animations not syncing (which is apparent in the second video).  It turns out to be an issue with the controller script; I am going to have to rewrite the third person controller script that ships with the standard asset package from JavaScript to C#.  Apparently it is bad practice to mix the two languages in a single project, and I need to modify the script slightly anyway so that it can call functions from auxiliary C# classes.  After that, I think I’d like to knuckle through the problem of targeting another player.  Then I’d like to make the mobs one player sees the same as the mobs another player sees.  Following that, I’d like to start working on the basic combat mechanics: damage, PK, mob despawn, respawn, etc.

On a slight tangent, I’ve also started looking at structs in C++.  I’ve found that adapting the neural net struct is useful for dealing with higher-dimensional tensors.  However, this poses problems for anything practical – if one wants a mesh of 1000 x 1000 x 1000 with a 3 x 3 tensor at each point, for instance, that is nine billion doubles (on the order of 70 GB), which is not practical to compute.  Nonetheless, here is the basic idea:

#include <vector>

using std::vector;

// A "vector" of NumInputs components (adapted from the neural net's neuron struct).
struct coord_t
{
    int            m_NumInputs;
    vector<double> t;

    // ctor: initialise every component to zero
    coord_t(int NumInputs) : m_NumInputs(NumInputs)
    {
        for (int i = 0; i < NumInputs; ++i)
            t.push_back(0.0);
    }
};

// A collection of coord_t components (adapted from the neural net's layer struct).
struct coord_z
{
    int             m_NumNeurons;
    vector<coord_t> z;

    coord_z(int Num1_Neurons, int Num0_Neurons) : m_NumNeurons(Num1_Neurons)
    {
        for (int i = 0; i < Num1_Neurons; ++i)
            z.push_back(coord_t(Num0_Neurons));
    }
};

Then, in main, one simply declares coord_t p(4), or coord_z q(4,4).  To set a value one writes

q.z[2].t[3] = value;
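
For completeness, here is a minimal usage sketch – assuming the struct definitions above (with <vector> included) sit in the same file:

#include <iostream>

int main()
{
    coord_t p(4);           // a 4-component "vector" of doubles, all zero
    coord_z q(4, 4);        // a 4 x 4 collection of such vectors

    double value = 1.5;
    q.z[2].t[3] = value;    // set a single entry

    std::cout << q.z[2].t[3] << std::endl;   // prints 1.5
    return 0;
}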

However, to reiterate, this approach turns out to be quite inefficient and impractical.  The program will hang at runtime if the arrays are too large – i.e. there are tractability issues.  So my next idea is to take a page from the existing book: instead of assigning a metric to each point of a fine mesh so that I can compute, say, geodesics via the non-uniform geodesic equation, I would take a big base array (spatial only, say 250 x 250 x 250), and then take as the final array a coarser subarray (say 100 x 100 x 100 on average), so that the “twisting” of the metric can be arranged by allowing cells in the x, y and z directions to be identified over intervals of anywhere from 1 up to 10 base cells.

In other words, I would have a uniform base array, and then on top of that array pick out a subset of points to define a non-uniform array containing the metric information.  The metric I had in mind to use as a prototype would be the random matrix Rand(0,1) = ((r, r, r), (r, r, r), (r, r, r)), where each r = rand(0,1) denotes a random value between 0 and 1 (drawn independently for each entry – otherwise the matrix would always be degenerate), subject to the proviso that the resulting matrix is non-degenerate.  This is to be computed (effectively) point-wise, with the convention that the smaller the value, the larger the distance between points, and that “infinity”, or “1/0”, is capped at 10 units.
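
To make that convention concrete, here is a minimal C++ sketch.  The names (randomMetric, entryToDistance) and the particular 1/r mapping with a cap at 10 units are my own choices – just one way of realising “smaller value means larger distance”:

#include <array>
#include <cmath>
#include <cstdlib>

// A point-wise 3 x 3 "metric" candidate with entries drawn from rand(0,1).
typedef std::array<std::array<double, 3>, 3> Metric;

double rand01()
{
    return std::rand() / (double)RAND_MAX;       // value in [0, 1]
}

// Determinant of a 3 x 3 matrix, used to reject degenerate draws.
double det3(const Metric& m)
{
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

// Keep drawing until the matrix is (numerically) non-degenerate.
Metric randomMetric()
{
    Metric m;
    do
    {
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                m[i][j] = rand01();
    } while (std::fabs(det3(m)) < 1e-6);
    return m;
}

// Convention: smaller entries mean larger separations,
// with "1/0" (infinity) capped at 10 grid units.
double entryToDistance(double r)
{
    return (r < 0.1) ? 10.0 : 1.0 / r;
}

int main()
{
    Metric g = randomMetric();
    double d = entryToDistance(g[0][0]);   // e.g. the "distance" in the x-direction
    (void)d;
    return 0;
}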

Anyway, something to look into.  Evidently I still need to figure out how to represent the metric information in terms of the construction of non-uniform sub-arrays, but an approximation algorithm certainly seems achievable, and should be computationally tractable.

Following on from this, the idea would be to apply a finite difference scheme to the total array, bearing in mind that the subarray provides the points at which the output is to be read off.
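
As a rough illustration of the “compute on the full base array, read off on the sub-array” pattern, here is a sketch.  The names (Grid3D, sweep) are my own, and the stencil is just a simple six-neighbour average rather than the geodesic equation itself – the point is only the flat storage and the read-off step:

#include <cstddef>
#include <vector>

// A flat, uniform N x N x N grid of doubles.
struct Grid3D
{
    std::size_t N;
    std::vector<double> v;                        // flat storage, size N*N*N

    Grid3D(std::size_t n) : N(n), v(n * n * n, 0.0) {}

    double& at(std::size_t i, std::size_t j, std::size_t k)
    {
        return v[(i * N + j) * N + k];
    }
};

// One Jacobi-style sweep: each interior cell becomes the average of its six neighbours.
void sweep(Grid3D& in, Grid3D& out)
{
    for (std::size_t i = 1; i + 1 < in.N; ++i)
        for (std::size_t j = 1; j + 1 < in.N; ++j)
            for (std::size_t k = 1; k + 1 < in.N; ++k)
                out.at(i, j, k) = (in.at(i - 1, j, k) + in.at(i + 1, j, k) +
                                   in.at(i, j - 1, k) + in.at(i, j + 1, k) +
                                   in.at(i, j, k - 1) + in.at(i, j, k + 1)) / 6.0;
}

int main()
{
    Grid3D a(64), b(64);        // much smaller than 250^3, purely for illustration
    // ... set initial / boundary values on a here ...
    sweep(a, b);

    // Read the output off only at the coarser sub-array points
    // (here, simply every second cell in each direction).
    double sample = b.at(2, 4, 6);
    (void)sample;
    return 0;
}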

Scientific Programming with Python

March 14, 2013

Hi folks,

As a brief note to those looking for a quick fix: if you have Mac OS X 10.8 (Mountain Lion) and are looking for a distribution of Python 2.7 that has numpy, scipy and matplotlib all kitted up and ready to go, I would heartily endorse the Enthought Python Distribution.  I’ve found that this does precisely what I need – which is to allow me to have an attempt at finite difference methods for PDEs.  In particular, I am keen to implement a solver for the geodesic equation on a Riemannian manifold using these techniques.
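
For reference, the equation I have in mind is the geodesic equation in local coordinates,

d^2 x^mu / ds^2 + Gamma^mu_{alpha beta} (dx^alpha / ds)(dx^beta / ds) = 0,

where the Gamma^mu_{alpha beta} are the Christoffel symbols of the metric; the finite difference approach would discretise the derivatives with respect to the curve parameter s.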

Note that this works with Eclipse Juno 4.2 and PyDev.  Naturally the interpreter needs to be configured.  Python can be found at /Library/Frameworks/Python.framework/Versions/7.3/bin/python2.7.  Select the default libraries and everything should work.

Alternatively, one can open a terminal and simply key in “Idle”, and import numpy, import scipy, import matplotlib – all of these will work without error.

Finally, note that the installer will edit .bash_profile in the user’s home directory (.bash_profile allows one to customise the PATH in Mac OS X 10.8).

.bash_profile:

# Setting PATH for EPD_free-7.3-2
# The orginal version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/Current/bin:${PATH}"
export PATH

That said, I have been thinking that C++ might be a better tool for approaching tensor calculus.  I’m actually quite keen to start looking into the genetic algorithm and neural network code I discovered.  These snippets can be found at this site, “ai junkie”.

Hello world!

March 14, 2013

Hello,

Welcome to my blog.  Unfortunately I’ve lost a bit of the history of this site, since I was a bit too trusting of the instructions for installing plugins to WordPress with the dotcloud-on-wordpress app.  It turns out that it broke with a particular change from PHP 5.3 to 5.4, so with a new machine I lost a few posts.  Regardless, I will attempt from memory to recap the content of what came before, and, if nothing else, I will be very mindful to learn how to back up MySQL databases on this architecture, as per this listing.

Oh, well.  At least I can remember roughly what I wrote (and at least it was only a few posts).  I will attempt to repopulate based on memory in this first post.

Basically, I’ve played with a few compilers and languages over the last few years.  First I looked into C++, then Java, Java3D and cross-over / bridging techs, then libraries for Java (in addition to Java3D – LWJGL & JOAL), Eclipse, Python, PyDev…  Then Spring and a few other things.

One of my first objectives was to learn Java3D and build something useful with it, or at least something that worked.  I ultimately found its utility was limited by the fact that it required a separate kit to be installed in order to run, even for a standalone executable; this has since changed with its incorporation into JavaFX in Java Runtime 7.

Then I looked into Flash and a few other techs motivated by enhancing PDF files.

Consequent to this (in 2011 – 2012), I started looking into Django.  When I finally got to the point of being able to build working data-driven websites, I became interested in deployment.  Google App Engine was my first choice, but it turned out, at least at the time, to be clumsy and unwieldy, mainly because it relied on a fork of Django that did not have a large community backing it (since App Engine ran on a NoSQL datastore, which is incompatible with standard Django).

Ultimately I found that dotcloud was better and easier to use for what I had in mind.  I deployed a few sites to it: a simple photo album, the original version of this blog, a personal webpage, a Spring site or two, and some JavaFX examples.

But my interests continued to develop, and I found myself drawn to experimenting with game engines.  I ultimately chose Unity, as it is relatively indie-friendly: it has an asset store and is comparatively easy to get started with.  In terms of deployment, dotcloud was inadequate for hosting SmartFoxServer 2X instances for Unity3D games, so I ended up gravitating to the RightScale cloud management platform, together with Amazon Web Services (AWS).

I have since made some progress.  I currently have plans to develop my current Unity3D game (which has as its base the code from a series of tutorials by BergZergArcade) into a freemium multiplayer game on RightScale, although I have for now decommissioned the original website on RightScale where I tested the architecture, and am testing things locally.

I am also currently looking into genetic algorithms and neural networks written in C++ (as one does), as well as scientific programming in Python.  RStudio is another tool I’d like to get up to speed with, together with the R programming language.

By the by, for those who did not follow earlier, this installation is based on dotcloud, a PaaS that I have found useful for experimentation.