Notes from a machine learning unconference

Here are a few dot points I took from a machine learning unconference I went to recently.

  • The Stan probabilistic programming language.  You declare parameters without assigning them values, use them in the model, and inference determines their values probabilistically (as posterior distributions).
  • Andrew Gelman at Columbia apparently knows things about multilevel models (beyond Bayesianism).
  • GANs (generative adversarial networks/models) are powerful.  There is a tension between minimising one objective and maximising another.  Made me think of Akaike information for the ‘max’ player and Fisher information for the ‘min’ player.  Might be possible to stabilise training that way.
  • KiCad, an open-source electronics CAD tool for circuit and PCB design
  • Ionic transfer desalination
  • Super-resolution is amazing (more GAN applications)
  • InfoGAN
  • LIME, GAN super-resolution, local-to-global explanations (a model explaining a model).  See .  Might be the next step for this sort of thing (testing machine learning models)
  • Electronic Frontiers Australia; Citizenfour
  • Simulation to generate training data: UnrealCV and Unity3D used to train machines
  • Parsey McParseface and Inception, Google's pretrained models
  • ImageNet
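The Stan bullet above describes the core idea of probabilistic programming: declare a quantity without fixing its value, relate it to data, and let inference determine it. This is not Stan itself, just a minimal Python sketch of the same idea using crude rejection sampling for a coin's unknown bias (the function name and toy data are my own, for illustration):

```python
import random

def coin_posterior_mean(heads, tails, keep=200, seed=0):
    """Declare an unknown bias p (a draw from a Uniform(0,1) prior),
    simulate the observed flips with it, and keep only the draws whose
    simulation reproduces the data exactly.  The surviving draws are
    samples from the posterior over p."""
    rng = random.Random(seed)
    n = heads + tails
    kept = []
    while len(kept) < keep:
        p = rng.random()  # 'declared' parameter: no fixed value assigned
        sim_heads = sum(rng.random() < p for _ in range(n))
        if sim_heads == heads:  # retain only data-consistent draws
            kept.append(p)
    return sum(kept) / len(kept)

# 7 heads in 10 flips; the analytic posterior mean is (7+1)/(10+2) ≈ 0.67
print(coin_posterior_mean(7, 3))
```

Stan replaces this brute-force loop with Hamiltonian Monte Carlo, but the programming model — state the unknowns and the data-generating process, get distributions back — is the same.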
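The min/max tension in the GAN bullet is literal: both players share one value function, V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))], which the discriminator pushes up and the generator pushes down. A toy numeric sketch with a one-parameter logistic discriminator and hand-picked scalar samples (all names and numbers here are illustrative, not any library's API):

```python
import math

def gan_value(disc_scale, real, fake):
    """V(D,G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))] for a toy
    discriminator D(x) = sigmoid(disc_scale * x) on scalar samples."""
    sig = lambda t: 1.0 / (1.0 + math.exp(-t))
    d_real = sum(math.log(sig(disc_scale * x)) for x in real) / len(real)
    d_fake = sum(math.log(1.0 - sig(disc_scale * x)) for x in fake) / len(fake)
    return d_real + d_fake

real = [1.0, 1.2, 0.8]     # toy samples from the data distribution
fake = [-1.0, -0.9, -1.1]  # toy generator output, clearly separated

# Discriminator's move: a sharper D (larger scale) raises V on separated data.
assert gan_value(2.0, real, fake) > gan_value(0.5, real, fake)

# Generator's move: fakes that match the real data pull V back down.
assert gan_value(2.0, real, fake) > gan_value(2.0, real, real)
```

Training alternates these two moves on the same V, which is exactly why GANs can oscillate rather than converge; the bullet's information-criterion idea would be one way to regularise each player separately.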
