Recent findings
I discovered this Haxe integration project for Godot recently: https://github.com/dazKind/hxgodot-cpp. This is of interest because the city generator code I want to build on is also written in Haxe: https://github.com/watabou/TownGeneratorOS.
… and asdf has a plugin for Haxe here: https://github.com/asdf-community/asdf-haxe.
Of course one can bind other things to Godot, like Haskell: https://github.com/SimulaVR/godot-haskell
Not that I’d want to choose Python as a runtime language, and the code dates from 2016, but this tutorial on city generators is nevertheless very instructive (documentation, code).
Buildify looks pretty cool.
Soap Site
Turns out that the soap site wasn’t migrated, as Cloudflare was my domain registrar! So just as well I didn’t delete the old VM yet. I fixed the SSL cert for the new VM, but there is some data synchronisation I’ll need to action before I can establish reasonable confidence in the new machine.
CloudFront would be a good idea to investigate eventually too so that I can use an AWS-native CDN instead of Cloudflare, but it is not a burning concern at present; maybe that can wait another year. For this year’s calendar campaign, running on a t4g.medium + using Argo should be good enough.
Payment app planning
My goals for this sprint in drafting the new payment app:
- Have a basic app that can interact with Stripe if one makes certain API calls to it
- Configure Stripe in test mode to be able to create a $5 USD per month subscription or a $50 USD per year subscription
- POC an API call to create a monthly subscription and an annual subscription using the app + Stripe API keys configured in env vars
- Figure out in principle how to integrate it with the rest of the system
I will be basing things loosely off this toptal tutorial from 2019, and this more recent webcrunch tutorial from 2022 (code, youtube). But evidently I am dispensing with the UI, since the client is the UI – and I’m not intending on hooking that up yet (via comms).
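The POC call could look something like the sketch below, using the stripe gem. The env var names and the monthly/annual price IDs are my assumptions — the real price IDs come from the test-mode product configured in the Stripe dashboard, so this won’t run without live test-mode keys.

```ruby
# Rough sketch of the subscription-creation POC (names of env vars are assumptions).
require 'stripe'

Stripe.api_key = ENV.fetch('STRIPE_SECRET_KEY')

PRICE_IDS = {
  'monthly' => ENV.fetch('STRIPE_MONTHLY_PRICE_ID'), # the $5 USD/month price
  'annual'  => ENV.fetch('STRIPE_ANNUAL_PRICE_ID')   # the $50 USD/year price
}.freeze

# Create a customer, then subscribe them at the requested interval.
def create_subscription(email:, interval:)
  customer = Stripe::Customer.create(email: email)
  Stripe::Subscription.create(
    customer: customer.id,
    items: [{ price: PRICE_IDS.fetch(interval) }]
  )
end
```

The app’s API endpoints for the two subscription types would then just call `create_subscription` with `interval: 'monthly'` or `interval: 'annual'`.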
Payment app stream of consciousness
First things first: Stripe. An API key I will need.
- Created an organisation, and a premium product with 5 USD/month and 50 USD/year subscription price points.
- Added the Stripe gem.
- Installed Stripe CLI
brew install stripe/stripe-cli/stripe
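With the CLI installed, test-mode webhook events can be forwarded to the local app during development. The `/webhooks/stripe` path is my assumption — nothing is wired up to receive these yet.

```shell
stripe login                                               # authenticate the CLI against the Stripe account
stripe listen --forward-to localhost:3000/webhooks/stripe  # forward test-mode webhook events to the local app
stripe trigger customer.subscription.created               # fire a sample event to exercise the handler
```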
class CreateSubscriptions < ActiveRecord::Migration[7.0]
  def change
    create_table :subscriptions do |t|
      t.string :plan
      t.string :customer_id
      t.string :subscription_id
      t.references :organisation, null: false, foreign_key: true
      t.string :status
      t.string :interval
      t.datetime :current_period_end
      t.datetime :current_period_start

      t.timestamps
    end
  end
end
Actually, looking at this, there is a question in my mind as to whether it makes sense to decouple Subscriptions for an Organisation from the Organisation service itself. i.e., do I need a separate Payments app? If I am decoupling it, instead of t.references I should use t.string :organisation_id, and pass the organisation_id from the client via comms when making a create subscription request.
I guess that works??
This comes down to a choice as to how to architect to manage potential future complexity, and questions of premature optimisation.
It also depends upon what I want each app to actually do. For the instance service, that is easy – I want it to be the workhorse powering the instance experience for users.
For the User service, I want this to be the service powering how users log in and configure their settings. Ideally I’d like a user to be able to be a member of more than one organisation. One way of managing this is GraphQL, admittedly – another way is to have a separate service.
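Whichever service ends up owning it, a user belonging to more than one organisation implies a join table along these lines (table and column names are my assumptions):

```ruby
# Hypothetical join table letting a user belong to multiple organisations.
# If Users and Organisations live in separate services, the references
# would become plain ID columns rather than foreign keys.
class CreateMemberships < ActiveRecord::Migration[7.0]
  def change
    create_table :memberships do |t|
      t.references :user, null: false, foreign_key: true
      t.references :organisation, null: false, foreign_key: true
      t.string :role # e.g. owner, member

      t.timestamps
    end
  end
end
```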
A User service also would be useful for storing user-specific digital in-system assets, eg marketplace purchases (if and when there is a marketplace).
For the Organisation service, the purpose of this is to track for an organisation which users are in it, what plan it is on, and how it is currently configured.
One argument for decoupling Payments is that although it will be used for tracking Organisation subscriptions, it would also be intended to track User-specific marketplace purchases too, as well as reimbursing content creators who build for the Sandbox project (contributing procgen assets or tokens, for instance, to some form of shared marketplace). So basically Payments would cater to at least three personas: individual Users, Organisation (owners), and Asset Builders.
Therefore I guess it does make sense to have Payments decoupled. But then we still have another pressing concern. Performance.
If I am concerned about Performance, for intra-subsystem communication I can’t use Kafka like I have for a few proof of concept calls. That will need to be replaced with something better. I think ZeroMQ is the best option for this. There are patterns coupling ZeroMQ with the Outbox pattern that allow for reliable communication between microservices.
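To make the ZeroMQ + Outbox idea concrete, here is a minimal in-memory sketch. All names are hypothetical: a real implementation would append to an outbox table inside the same DB transaction as the business write, and the relay would publish over a ZeroMQ socket instead of collecting into an array.

```ruby
require 'json'

# Toy Outbox: an in-memory stand-in for an outbox DB table plus its relay.
class Outbox
  def initialize
    @rows = [] # stands in for the outbox table
    @next_id = 0
  end

  # In a real app this append happens inside the same DB transaction as the
  # business write, so a message is never recorded without its side effects.
  def enqueue(topic, payload)
    @next_id += 1
    @rows << { id: @next_id, topic: topic, payload: payload.to_json, sent: false }
  end

  # A relay process polls unsent rows, publishes them (e.g. over a ZeroMQ
  # PUSH socket), and marks them sent only after publishing succeeds.
  def relay
    delivered = []
    @rows.reject { |r| r[:sent] }.each do |row|
      delivered << [row[:topic], row[:payload]] # stand-in for socket.send
      row[:sent] = true
    end
    delivered
  end
end

outbox = Outbox.new
outbox.enqueue('subscriptions', { organisation_id: 42, plan: 'premium' })
messages = outbox.relay
```

The key property is that delivery is retryable: if the relay crashes before marking a row sent, the next poll picks it up again, giving at-least-once delivery over the fast transport.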
(For inter-subsystem communication I think Kafka is probably okay for the time being, but I will periodically revisit this aspect of the system architecture.)
But what about Performance?
ZeroMQ is fast: for a send process and receive process running on the same machine, if the send process sends 10,000 messages, then the time delta between first sent and last received facilitated by ZeroMQ is a staggeringly small 15 milliseconds (ref) – roughly 650,000 messages per second. I think that is fast enough, and certainly blows RabbitMQ out of the water, being almost 100 times faster.
For guaranteed performance, of course, one would need each individual microservice to run in cloud infrastructure sufficiently “close” to each other. Running on the same machine would be a blunt approach that is probably “good enough” as a first iteration, but evidently this doesn’t scale.
A proper model would have containers of each microservice running in a shared Kubernetes computing environment.
The third aspect of network latency to consider of course is the packet trip from user to the cloud services and back again. That is where Edge Computing comes to the fore, which can be loosely seen as a generalisation of the idea of a CDN (content delivery network) from content to microservices running “close to the edge”. AWS seems to offer a few services of this nature … I guess it remains to be seen how critical this aspect will be.
In terms of how Edge Computing works, presumably if a cloud (eg AWS) offers a service of this nature, one deploys an app to AWS, and then depending on where the traffic is coming from ephemeral copies of the app with a replicated database are generated closer to the user – and, depending on demand, these are spun down or more copies are provisioned. If there is a conflict between two database transactions then the system facilitating edge computing would have a process to reconcile these, eg by giving precedence for one over the other as the system periodically moves to pull itself into sync.
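A toy version of that precedence rule, assuming last-write-wins on an `updated_at` timestamp (entirely my assumption about how such a system would break ties):

```ruby
require 'time'

# Merge rows from a primary region and an edge replica, keyed by :id,
# keeping whichever copy of each row was updated most recently.
def reconcile(primary_rows, edge_rows)
  (primary_rows + edge_rows)
    .group_by { |row| row[:id] }
    .values
    .map { |copies| copies.max_by { |row| row[:updated_at] } }
end

primary = [{ id: 1, plan: 'premium', updated_at: Time.parse('2022-10-01T10:00:00Z') }]
edge    = [{ id: 1, plan: 'basic',   updated_at: Time.parse('2022-10-01T10:05:00Z') }]
merged  = reconcile(primary, edge)
```

Real systems layer retries, vector clocks or CRDTs on top of this, but the basic shape – deterministic precedence applied during a periodic sync – is the same.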
So, as to that migration …
class CreateOrganisationSubscriptions < ActiveRecord::Migration[7.0]
  def change
    create_table :organisation_subscriptions do |t|
      t.string :plan
      t.string :customer_id
      t.string :subscription_id
      t.integer :organisation_id
      t.string :status
      t.string :interval
      t.datetime :current_period_end
      t.datetime :current_period_start

      t.timestamps
    end
  end
end
I think that this should suit my purposes.