Thursday, February 21, 2019

The Fastest Path to Object Detection on Tensorflow Lite

Ever thought it would be cool to make an Android app that fuses Augmented Reality and Artificial Intelligence to draw 3D objects on-screen that interact with particular recognized physical objects viewed on-camera?  Well, here's something to help you get started with just that!

Making conference talks can be a chicken-and-egg problem.  Do you hope the projects you've already worked on are interesting enough to draw an audience, or do you go out on a limb, pitch a wild idea, and hope you can develop it between the close of the call for papers and the conference?  Well, in this case, the work I did for DevFests in Chicago and Dallas yielded a template for talks formulated by either approach.

The most impressive part is that you can recreate for yourself the foundation I've laid out on GitHub by cloning the Tensorflow Git project, adding Sceneform, and editing (mostly removing) code.  However, it wasn't such a walk in the park to produce.  Here are the steps, edited down as best I can from the stream-of-consciousness note-taking this blog post is derived from.  It has been distilled even further in slides on SlideShare, but this post might give you some insight into the paths I took that didn't work (yet might in the future).

  • Upgrade Android Studio (I have version 3.3).
  • Upgrade Gradle (4.10.1).
  • Install the latest Android API platform (SDK version 28), tools (28.0.3), and NDK (19).
  • Download Bazel just as Google tells you to.  However, you don't need MSYS2 if you already have other things like Git Shell -- or maybe I already have MinGW somewhere, or who knows.

Nota Bene: ANYTHING LESS THAN THE SPECIFIED VERSIONS will cause a multitude of problems which you will spend a while trying to chase down.  Future versions may enable more compatibility with different versions of external dependencies.

Clone the Tensorflow Github repo.

A Fork In the Road

Make sure you look for the correct Tensorflow Android example buried within the Tensorflow repo!  The first one is located at path/to/git/repo/tensorflow/tensorflow/examples/android .  While valid, it's not the best one for this demo.  Instead, note the subtle difference -- addition of lite -- in the correct path, path/to/git/repo/tensorflow/tensorflow/lite/examples/android .  

You should be able to build this code in Android Studio using Gradle with little to no modifications.  It should be able to download assets and model files appropriately so that the app will work as expected (except for the object tracking library -- we'll talk about that later).  If it doesn't, here are some things you can try to get around it:

  • Try the Bazel build (as explained below) in order to download the dependencies.
  • Build the other repo at path/to/git/repo/tensorflow/tensorflow/examples/android and then copy the downloaded dependencies into the places where they would be placed.

However, by poking around the directory structure, you will notice several BUILD files (not build.gradle) that are important to the Bazel build.  It is tempting (but incorrect) to build against the one in the tensorflow/lite/examples/android folder itself; also, don't bother copying this directory out into its own new folder.  You can in fact build it that way, if you trim the stem of directories mentioned in the BUILD file so that the callout of each dependency begins with //app/src/main.  By doing this, you will still be able to download the necessary machine learning models, but you will be disappointed that it will never build the object tracking library.  For it to work all the way, you must run the Bazel build from the higher-up path/to/git/repo/tensorflow folder and reference the build target all the way down in tensorflow/lite/examples/android .

For your reference, the full Bazel build command looks like this, from (the correct higher-up path) path/to/git/repo/tensorflow :
bazel build //tensorflow/lite/examples/android:tflite_demo

Now, if you didn't move your Android code into its own folder, don't run that Bazel build command yet.  There's still a lot more work you need to do.

Otherwise, if you build with Gradle, or if you did in fact change the paths in the BUILD file and copied the code from deep within the Tensorflow repo somewhere closer to the root, you'll probably see a Toast message about object tracking not being enabled when you run the app; this is because we haven't built the required library yet.  We'll do this later with Bazel.

Now, let's try implementing the augmented reality part.

But First, a Short Diatribe On Other Models & tflite_convert

There's a neat Python utility called tflite_convert (evidently also shipped as a Windows binary, but that one always broke for me by failing to load dependencies or other such nonsense unbecoming of something dubbed an EXE) that will convert regular Tensorflow models into TFLite format.  Before converting, it's a good first step to import the model into Tensorboard to make sure it's being read in correctly and to understand some of its parameters.  Models from the Tensorflow Model Zoo imported into Tensorboard correctly, but I didn't end up converting them to TFLite, probably due to the difficulties explained in the next paragraph.  However, models from TFLite Models wouldn't read in Tensorboard at all.  Granted, those may not need conversion, but it seems unfortunate that Tensorboard is incompatible with them.

Specifically, tflite_convert changes .pb files, or models in a SavedModel directory, into .tflite-format models.  The first problem with tflite_convert on Windows was finding just exactly where Pip installs the EXE file.  Once you've located it, the EXE has a bug: it references a different Python import structure than what exists now.  Building from source has the same trouble; TF 1.12 from Pip doesn't have the import structure the tool expects.  The easiest thing to do is just download the Docker image (on a recent Sandy Lake or better system -- which means that even my newest desktop with an RX580 installed can't handle it) and use tflite_convert in there.
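For the record, a typical invocation looks something like this (the paths are placeholders, and the flags are per the 1.x-era tflite_convert usage):

```shell
# From a SavedModel directory:
tflite_convert --output_file=/tmp/detect.tflite --saved_model_dir=/tmp/saved_model

# From a frozen .pb, where you must name the graph's input and output tensors
# (the array names below are placeholders -- read the real ones off Tensorboard):
tflite_convert --output_file=/tmp/detect.tflite --graph_def_file=/tmp/frozen.pb \
  --input_arrays=input_tensor --output_arrays=output_tensor
```

This is one more reason importing the model into Tensorboard first pays off: it's the easiest way to learn the input and output array names.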

Looking Into Augmented Reality

Find the Intro to Sceneform codelab.  Go through the steps.  I got about halfway through it before taking a pause in order to switch out quite a lot of code.  The code I switched mostly revolved around swapping the original CameraActivity for an ArFragment and piping the camera input into the ArFragment into the Tensorflow Lite model as well.  More on the specifics can be seen in the recording of my presentation in Chicago (and in full clarity since I painstakingly recorded these code snippets in sync with how they were shown on the projector).

To build Sceneform with Bazel, first I must say it's probably not possible at this time.  But if you want to try (at least on Windows), make sure you have the WORKSPACE file from Github or else a lot of definitions for external repos (@this_is_an_external_repo) will be missing, and you'll see error messages such as:

error loading package 'tensorflow/tools/pip_package': Unable to load package for '@local_config_syslibs//:build_defs.bzl': The repository could not be resolved

After adding the Sceneform dependency into Bazel, I also faced problems loading its dependencies.  There were weird issues connecting to the repository of AAR & JAR files over HTTPS (even though the Tensorflow Lite assets downloaded fine).  Worse, Bazel would only tell me about missing dependencies one library at a time, so I was stuck downloading about 26 files one by one, with each library depending on about 3 others itself.  Or not... so I wrote a script to automate all this.

The following script, alas, did not ultimately solve the problem: once you do all this, Bazel claims it's missing a dependency that you literally can't find on the Internet anywhere.  This leads me to believe it's currently impossible to build Sceneform with Bazel at all.  Nevertheless, here it is, if you have something more mainstream you're looking to build:

import ast
import re
import urllib.request

allUrls = []
allDeps = []

def depCrawl(item):
    # Skip anything we've already crawled
    if item['urls'][0] in allUrls:
        return
    allUrls.append(item['urls'][0])
    # Crawl this library's own dependencies first
    for dep in item['deps']:
        depCrawl(aar[dep])
    # Build a java_import/aar_import stanza to paste into the BUILD file
    depStr = "\n%s_import(" % item['type']
    depStr += "\n  name = '%s'," % item['name']
    filepath = ":%s" % item['urls'][0].split("/")[-1]
    if item['type'] == "java":
        depStr += "\n  jars = ['%s']," % filepath
    else:
        depStr += "\n  aar = '%s'," % filepath
    if len(item['deps']) > 0:
        depStr += "\n  deps = ['%s']," % "','".join(item['deps'])
    depStr += "\n)\n"
    if depStr not in allDeps:
        allDeps.append(depStr)

with open('git\\tensorflow\\tensorflow\\lite\\examples\\ai\\gmaven.bzl') as x:
    f = x.read()

m = re.findall(r'import_external\(.*?\)', f, flags=re.DOTALL)

aar = {}

for item in m:
    name = re.search(r"name = '(.*?)'", item).group(1)
    aarUrls = re.search(r"(aar|jar)_urls = (\[.*?\])", item)
    type = "java" if aarUrls.group(1) == "jar" else "aar"
    urls = ast.literal_eval(aarUrls.group(2))
    aarDeps = re.search(r"deps = (\[.*?\])", item, flags=re.DOTALL)
    deps = ast.literal_eval(aarDeps.group(1)) if aarDeps else []
    deps = [dep[1:-5] for dep in deps]  # '@some_lib//aar' -> 'some_lib'
    aar[name] = {"urls": urls, "deps": deps, "type": type, "name": name, "depStr": item}
    if len(urls) > 1:
        print("%s has >1 URL" % name)

# Crawl from the library you're trying to load (substitute its gmaven name here)
depCrawl(aar['name_of_the_library_you_want'])

# Emit the collected import stanzas
print("".join(allDeps))

for url in allUrls:
    print("Downloading %s" % url)
    urllib.request.urlretrieve(url, 'git\\tensorflow\\tensorflow\\lite\\examples\\ai\\%s' % url.split("/")[-1])

The important part of this script is toward the bottom, where it runs the depCrawl() function.  You provide an argument naming the library you're trying to load.  The script then seeks out everything listed as a dependency for that library in the gmaven.bzl file (a file originally fetched from the Internet), and then saves each artifact to a local directory (note the Windows-style paths here).
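To make the parsing concrete, here is the same extraction logic run on a toy import_external stanza (the stanza, names, and URL are made up for illustration; real gmaven.bzl entries have the same shape):

```python
import ast
import re

# A made-up stanza in the shape of gmaven.bzl's import_external() entries
sample = """
import_external(
  name = 'com_android_support_support_v4_28_0_0',
  aar_urls = ['https://maven.google.com/com/android/support/support-v4/28.0.0/support-v4-28.0.0.aar'],
  deps = ['@com_android_support_support_compat_28_0_0//aar'],
)
"""

block = re.findall(r'import_external\(.*?\)', sample, flags=re.DOTALL)[0]
name = re.search(r"name = '(.*?)'", block).group(1)
kind, url_list = re.search(r"(aar|jar)_urls = (\[.*?\])", block, flags=re.DOTALL).groups()
urls = ast.literal_eval(url_list)
deps = ast.literal_eval(re.search(r"deps = (\[.*?\])", block, flags=re.DOTALL).group(1))
deps = [d[1:-5] for d in deps]  # strip the leading '@' and trailing '//aar'

print(name)                    # com_android_support_support_v4_28_0_0
print(kind)                    # aar
print(urls[0].split('/')[-1])  # support-v4-28.0.0.aar -> the local filename
print(deps)                    # ['com_android_support_support_compat_28_0_0']
```

The last line shows why the crawler can recurse: each stripped dep name is itself a key into the dictionary of parsed stanzas.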

Giving Up On Bazel For the End-To-End Build

Nevertheless, for the reasons just described above, forget about building the whole app from end to end with Bazel for the moment.  Let's just build the object tracking library and move on.  For this, we'll queue up the original command as expected:

bazel build //tensorflow/lite/examples/android:tflite_demo

But before running it, we need to go into the WORKSPACE file in /tensorflow and add the paths to our SDK and NDK -- but without referencing a specific SDK version or build tools version, because when those were included, the build seemed to get messed up.
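The two additions are the standard Bazel Android repository rules, sketched here with placeholder paths (and, per the above, no api_level or build_tools_version attributes):

```starlark
# In path/to/git/repo/tensorflow/WORKSPACE
android_sdk_repository(
    name = "androidsdk",
    path = "C:/Users/someuser/AppData/Local/Android/Sdk",  # placeholder path
)

android_ndk_repository(
    name = "androidndk",
    path = "C:/Users/someuser/AppData/Local/Android/Sdk/ndk-bundle",  # placeholder path
)
```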


  • Install the Java 10 JDK and set your JAVA_HOME environment variable accordingly.
  • Find a copy of visualcppbuildtools_full.exe, and install the following:
    • Windows 10 SDK 10.0.10240
    • .NET Framework SDK
  • Look at the Windows Kits\ directory and move files from older versions of the SDK into the latest version
  • Make sure your Windows username doesn't contain spaces (might also affect Linux & Mac users)
  • Run the Bazel build from an Administrator command prompt instance
  • Pray hard!
Confused by any of this?  Read my rationale below.

Eventually the Bazel script will look for javac, the Java compiler.  For this, I started out installing Java 8, as it was not immediately clear which Java Bazel was expecting to use, and according to Android documentation, it supports "JDK 7 and some JDK 8 syntax."  Upon setting up my JAVA_HOME and adding Java's bin/ to my PATH, it got a little bit further but soon complained about an "unrecognized VM option 'compactstrings'".  Some research showed similar errors are caused by the wrong version of the JDK being installed, so I set off to install JDK 10.  However, Oracle has deprecated JDK 10, so its site redirected me to JDK 11.  Then I hit another issue: some particular class file "has wrong version 55.0, should be 53.0".  Once again, this is due to a JDK incompatibility.  I tried a little harder to find JDK 10, and eventually did, but had to log in to Oracle to download it (bugmenot is a perfect application to avoid divulging personal information to Oracle).

Once I installed JDK 10, I came across an error that Bazel can't find cl.exe, relating to the Microsoft Visual C++ 2015 compiler & toolchain required to build the C++ code on Windows.  However, downloading the recommended vc_redist_x64.exe file didn't help, since the installer claims the program is already installed (I must have installed Visual C++ a long time ago), yet the required binaries are still nowhere to be found in the expected locations.  I ended up finding an alternate source, a file called "visualcppbuildtools_full.exe".  Unfortunately, this installs several GB of stuff onto your computer.  I first selected just the .NET Framework SDK to hasten the process, save hard disk space, and avoid installing unnecessary cruft, but then it couldn't find particular system libraries, so I had to select Windows 10 SDK 10.0.10240 and install that as well.

Trying again with the build, now it can't find Windows.h.  What?!?  I should have just installed the libraries & include files with this SDK!  Well, it turns out they did install correctly, but according to the outputs of SET INCLUDE from the Bazel script, it was looking in the wrong directory: C:\Program Files (x86)\Windows Kits\10\Include\10.0.15063.0 rather than C:\Program Files (x86)\Windows Kits\10\Include\10.0.10240.0.  To make my life easier, I just copied all the directories from 10240 into 15063, renaming the original directories in 15063 first.  I later had to do the same thing with the Lib directory, in addition to Include.

Upon setting this up, I made it to probably just about the completion of the build:

bazel-out/x64_windows-opt/bin/external/bazel_tools/tools/android/resource_extractor.exe bazel-out/x64_windows-opt/bin/tensorflow/lite/examples/android/tflite_demo_deploy.jar bazel-out/x64_windows-opt/bin/tensorflow/lite/examples/android/_dx/tflite_demo/extracted_tflite_demo_deploy.jar
Execution platform: @bazel_tools//platforms:host_platform
C:/Program Files/Python35/python.exe: can't open file 'C:\users\my': [Errno 2] No such file or directory

Aww, crash and burn!  It can't deal with the space in my username.  To work around this, make an alternate user account (without a space in its name) if you don't already have one.  One thing you may notice is that the new account doesn't get permission to access files from the original user account, even if you define it as an Administrator.  Running Windows "cmd" as Administrator will finally allow your Bazel build to succeed.


Look closely; this is the image of success.

Now, you're not out of the woods yet.

Tying It All Together

Now, you need to actually incorporate the object tracking library in your Android code.

  • Find the file built by Bazel.  It's probably been stashed somewhere in your user home directory, no matter your operating system.
  • Copy this file into your Android project.  Remember where you normally stash Java files?  Well this will go into a similar spot called src/main/jniLibs/<CPU architecture> , where <CPU architecture> is most likely going to be armeabi-v7a (unless you're not reading this in 2019).
  • To support this change, you'll also need to add a configuration to your build.gradle file so that it will only build the app for ARMv7; otherwise if you have an ARMv8 (or otherwise different) device, it won't load the shared library and you won't get the benefit of object tracking.  This is described in the YouTube presentation linked above.
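If it helps, the Gradle side of that restriction is just an ABI filter; a minimal sketch (exact placement in the demo's build.gradle may differ):

```groovy
android {
    defaultConfig {
        ndk {
            // Package (and build) only the ABI the Bazel-built
            // object tracking library was compiled for
            abiFilters 'armeabi-v7a'
        }
    }
}
```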
The final thing to do to get this all working is to add in the rest of the Sceneform stuff.  At this point, if you've followed the coding instructions in the YouTube video linked above that mentions what to change, then all you should need to do is build the Sceneform-compatible *.sfb model.

But hold tight!  Did you see where the Codelab had you install Sceneform 1.4.0 through Gradle, but the Sceneform plugin offered through Android Studio is now at least 1.6.0?  Well, if you proceed in building the model, you won't notice any difficulty until the first time your app successfully performs an object detection and tries to draw the model... only to reveal that the SFB file generated by the plugin isn't forward-compatible with the Sceneform 1.4.0 you included in your app.  The worst part is that if you try to upgrade Sceneform to 1.6.0 in Gradle, the Sceneform plugin in Android Studio will refuse to work properly ever again.

Your two solutions to this problem:
  • Rectify the Sceneform versions (plugin & library) prior to building anything, or at least making your first SFB file
  • Just use Gradle to build your SFB file, as shown in the YouTube video
Turns out you don't need the Sceneform plugin in Android Studio at all, and after a while relying on it will probably seem like a noob move, especially if you have a lot of assets to convert for your project or you're frequently changing things -- you'll want conversion automated as part of your build stage.
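For what it's worth, the Gradle-driven conversion is a one-liner with the Sceneform plugin's asset() call; the model paths and output name below are placeholders:

```groovy
// In app/build.gradle, with 'com.google.ar.sceneform.plugin' applied
sceneform.asset('sampledata/models/andy.obj', // source asset
                'default',                    // material
                'sampledata/models/andy.sfa', // generated .sfa description
                'src/main/assets/andy')       // .sfb emitted into assets/
```

With this in place, the .sfb regenerates on each build whenever the source asset changes, with no plugin UI involved.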

The big payoff is now you should be able to perform a Gradle build that builds and installs an Android app that:
  • Doesn't pop a Toast message about missing the object tracking library
  • Performs object detection on the default classes included with the basic MobileNet model
  • Draws Andy the Android onto the detected object

Any questions?

This is a lot of stuff to go through!  And I wonder how much of it will change (hopefully be made easier) before too long.  Meanwhile, have fun fusing AI & AR into the app of your dreams and let me know what you build!

As for me, I'm detecting old computers (but not drawing anything onto them at this point):

* Not responsible for the labels on the detected objects ;) Obviously the model isn't trained to detect vintage computers!

And for the sake of demonstrating the whole thing, here's object detection and augmented reality object placement onto a scene at the hotel right before I presented this to a group:

Thursday, January 3, 2019

Tipping Over the Full Stack

Are there good full-stack developers?

For the past several months, I have been posing as a traditional full-stack developer when I come from a background of software testing, interpreted scripting, and winning hackathons with proof of concept and prototype work.  I suppose "full stack" could also mean twiddling bits of firmware to interface with a BLE stack and then writing software to allow a PC or phone to control an embedded device (which is my kind of full-stack), but here I'm referring to traditional back-end and front-end application development.

It used to look easy back when I entered this world roughly four years ago.  You can bang out some pieces in Node, others in Angular, and some deep plumbing in Java, interface with your database of choice, and then call it a day.  I was still usually involved with testing, though, and spent time flipping back and forth between API and UI testing (specifically API contract (such as Pact) type of testing, and regression testing).  Sometimes I would make contributions to the application code, which seemed to really impress folks that I could play on both sides, just as long as I didn't test what I had written for myself.

However, in a land of deep Java Spring and deep Angular -- which has evolved substantially since I last looked at it (I usually prefer Polymer for personal projects) -- it just seems like my head is spinning.  Not because the material is necessarily difficult (though it has nothing to do with machine learning or IoT, which is really the stuff that's up my alley), but because there are so many levels and layers of things to keep track of.  Orchestration layers and the other APIs they call upon comprise the back-end, and then intertwined repositories of Web components and Angular code make up the front-end.  It can become very time-consuming for someone who is OCD and can get fixated on details to build up a feature involving front-end and back-end changes, plus testing as well, if the team is too small for dedicated testers.

It's necessary to be a full-stack developer if you are an independent contractor -- but even then, wouldn't you just bill for enough money that you can hire someone else to do the parts you don't want to?

If there are good full-stack developers, do they really like it?

Another thing I've observed is that those in my organization calling themselves full-stack developers often tend to focus, specialize, or gravitate toward either front-end or back-end work, but not an even split of both.  This way, they can home in on and become an expert at their passion, yet still be fluent enough in the other side of the application to keep pushing forward when someone else is on vacation or there's way more of one type of work from sprint to sprint.

But beyond the time sunk into making sure each of your repositories pins the correct version of the others, and checking all your Git commits for errors (let alone the time required to make POJOs and very boilerplate unit tests... there should be tools for this), it is mentally draining and does not leave much energy left over for learning about what is coming around the corner.  Despite my recently-attained qualifications (such as becoming a Google Developer Expert in Tensorflow), I've been less inclined to play with emerging technologies after work.  It seems that most GDEs actually work in their field of interest, so I anticipate swiftly moving to make the same true for myself.

Anyway, this business of full-stack development is not for me, unless I'm doing it for myself at my own pace.  It is fruitless to try to keep up with two different sides of the same coin; there are so many details to shore up that you can't better yourself in the process.  And despite the call to become an expert in a field (which a lot of people aspire to), frankly I wonder if it would not be more interesting to simply try and fail at a lot of things, and then come away with a bunch of interesting stories and insights from those experiences.  That said, I have been deemed by Google an "expert" at machine learning (not full-stack app development), which is all well and good, but it doesn't mean anything until you can either apply it to a problem or teach it.  Naturally, I have my own set of ideas (not likely to pay the bills, though).  However, in the spirit of trying a lot of things, failing fast, and learning for myself, I'm interested in situations where the tech is not at the forefront.  There's some other goal, like winning an election, valuing a real estate loan portfolio, or wading through monotonous legal documents to find precedent or stake a claim; tech is instead solving a problem to free up human capital for more meaningful things.

And yes, that's a nice way of saying I want to usher in the robot era and replace tons of jobs!  But it's my belief that we can aspire to be a society that prevents people from doing things that trained monkeys can do.  We should have the freedom and flexibility to be creative and expressive, and to be able to spend the majority of our days in pursuits that play well to our strengths and unique abilities as a species.  And while some people may find that full-stack development fulfills them, I'd rather stay in a specific niche than try to do all of that.

Thursday, November 8, 2018

Getting Started with a Sparkfun ESP32 Thing, ESP-IDF, and Visual Studio Code

I've had a Sparkfun ESP32 Thing lying around on my desk since back in May, when I met the fellow from Iron Transfer at a pinball convention and we got to talking about IoT and his devices to remotely administer pinball machines.  However, I spent tons of time this year planning for exhibitions, and didn't really get to do anything with it -- until now.

Before You Begin

There are a few choices you need to make up-front about which IDE you wish to use for development, plus which development framework.  I have chosen Microsoft's Visual Studio Code since it is cross-platform, feature-rich without hogging resources, free to download, and based on an open-source product similar to how Google Chrome is derived from Chromium.  It comes with extensions for language support and IntelliSense, which are must-haves when authoring code.  You are free to use an IDE of your choice; it won't really hamper your ability to enjoy the rest of this article.

The other decision lies in your development framework.  I investigated two choices: the Arduino framework and ESP-IDF.  In my research, mostly back in the summer, I found several posts where people reported running into problems with support for various pieces of the Bluetooth stack.  Plus, I wanted to be closer to the bare metal, with less abstracted away for me, so I went with ESP-IDF.  If you don't go along with this choice, you may not find much value in the rest of this article, but you're welcome to weigh these two alternatives for writing your own code.

Anyway, if you choose VS Code with ESP-IDF, then you can follow along with this Instructable that details how to set up the entire toolchain.  Despite one commenter complaining loudly about the poor quality of the Instructable, I didn't find it too troublesome to follow along with.  You might just need to look up what any missing pieces are called nowadays, as some things have changed, but I can assure you I got through setup just this past weekend with mostly just that article.

Strange Things You May Encounter

After getting a good way through the setup steps, it was time to compile the code for the first time.  Unfortunately, some odd errors appeared indicating that some constants and functions were undefined.  I Googled the missing constant name, eventually found it on GitHub, and saw that it was missing from my local copy of the header file (which leads me to believe the framework has been updated since the Instructable and its example code were written).  Fortunately, it is easy to update your development library in case you end up with an old version that is missing such code.

From the VS Code Command Palette (Ctrl+Shift+P or Cmd+Shift+P), just look for:
PlatformIO: Update All (platforms, packages, libraries)

Or, from the command line/terminal, write:
platformio update

Once I updated the library code, I was able to compile the code successfully.  However, when using the toolbar at the bottom left of the VS Code window, it is not always apparent which terminal window the command you selected from the toolbar is actually running in.  Whenever you click on the toolbar, it seems to generate a new Terminal instance that's selectable from a dropdown at the top right of that window pane.  Usually, once the command is done running, you can close that terminal instance by pressing a key, but you are always left with one PowerShell instance from within VS Code where you can run your own commands.

After uploading the binary to the Sparkfun Thing for the first time, it displayed nothing but garbage on the serial terminal and didn't show up in the list on my Android's copy of the Nordic nRF Connect BLE scanner app.  This compelled me to reinstall the PlatformIO Core and the platforms/packages/libraries again, especially since after the first time, it apparently failed to remove a file due to a "Permission denied" error.  After seeing an error about something basic being missing once more when rebuilding, I did what you do with any Microsoft product to fix it -- you restart it.  A quick restart of VS Code fixed the problem, and now I was able to rebuild the binary once again without problems.

There was another problem when building the binary: all of my directory paths involved with this project have spaces in them, and the tool did not end up putting quotation marks around some of these.  As such, I would see errors such as:

esptool write_flash: error: argument <address> <filename>: [Errno 2] No such file or directory: 'd:\\Programming\\BriteBlox'

Fortunately, a bit more Googling turned up an invocation that reveals the full command causing this error:

pio run -t upload -v

Here is the command:

"c:\users\user\.platformio\penv\scripts\python.exe" "C:\\Users\\user\\.platformio\\packages\\tool-esptoolpy\\" --chip esp32 --port "COM6" --baud 921600 --before default_reset --after hard_reset write_flash -z --flash_mode dio --flash_freq 40m --flash_size detect 0x1000 D:\\Programming\\BriteBlox Wearable ESP-IDF\\.pioenvs\\esp32thing\\bootloader.bin 0x8000 D:\\Programming\\BriteBlox Wearable ESP-IDF\\.pioenvs\\esp32thing\\partitions.bin 0x10000 .pioenvs\\esp32thing\\firmware.bin

Now to make the modifications, change the full path to Python at the beginning to just python (it seems to freak out with a fully-qualified path), then add quotes as needed.  Run the command in your own terminal, and the code will deploy as desired.  Meanwhile, this Github issue seems to indicate a fix might be imminent.
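The by-hand quoting can also be reproduced programmatically; this sketch uses Python's subprocess.list2cmdline (the same Windows quoting rules subprocess applies) on a trimmed-down version of the esptool arguments:

```python
import subprocess

# A trimmed-down version of the esptool invocation; note the space
# in the project path, which is what broke the original command.
args = [
    "python", "esptool.py",
    "--chip", "esp32",
    "--port", "COM6",
    "write_flash", "-z",
    "0x10000",
    r"D:\Programming\BriteBlox Wearable ESP-IDF\.pioenvs\esp32thing\firmware.bin",
]

# list2cmdline wraps any argument containing whitespace in double
# quotes -- exactly the fix made by hand above.
cmdline = subprocess.list2cmdline(args)
print(cmdline)
```

Passing the argument list straight to subprocess.run(args) sidesteps the quoting problem altogether, which is presumably the shape of the upstream fix.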

Alas, rebuilding the binary did not solve the problem of garbage coming through my serial monitor.  I Googled around some more and found out that the Sparkfun Thing was running at a different frequency than expected: it runs by default on a 26MHz crystal, but the development platform expects 40MHz.  As such, by taking the configured baud rate of 115200 and multiplying by 26/40, I was able to find the true baud rate: 74880.  By opening up RealTerm on COM6 and entering 74880 as the baud rate, I finally saw the expected serial output from the Sparkfun Thing instead of garbage:

BLE Advertise, flag_send_avail: 1, cmd_sent: 4

Now, to solve the mismatch in expected vs. actual frequency, you could either change the crystal to the correct frequency or adjust the development framework to work with the crystal on the board as it is.  In this case, I chose the latter approach.  Many people write about running make menuconfig in the root directory of your project in order to adjust the board settings, but that only seems feasible in Linux.  For Windows users, go into the sdkconfig.h file in your project, and alter the following lines:

/* before (shown in red in the original post) */
#define CONFIG_ESP32_XTAL_FREQ_40 1
/* after (shown in green) */
#define CONFIG_ESP32_XTAL_FREQ_40 0

(Nota Bene: I was compelled many times to write 24 in places rather than 26, which led to lots of confusion.  The baud rate if you write 24 will be 124,800.)
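The crystal-mismatch arithmetic above is worth a quick sanity check; nothing here is board-specific, just the ratio of actual to expected crystal frequency:

```python
def apparent_baud(configured_baud, actual_xtal_mhz, expected_xtal_mhz):
    # The UART divisors are computed assuming the expected crystal, so a
    # different crystal scales the on-the-wire rate by actual/expected.
    return configured_baud * actual_xtal_mhz // expected_xtal_mhz

# 26 MHz crystal on a framework expecting 40 MHz: 115200 baud appears as 74880
print(apparent_baud(115200, 26, 40))  # 74880

# Mistyping 24 for 26 yields the confusing 124,800 figure from the note above
print(apparent_baud(115200, 26, 24))  # 124800
```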

By changing the two lines from what is in red (on top) to what is in green (on bottom), you will be able to read serial output at the expected 115200 rate, and see the device appear in the nRF Connect app as "ESP-BLE-HELLO" if you copied the Instructable tutorial code.

Happy coding for BLE!

Thursday, November 1, 2018

The OpenBrite Turbo Controller for Vectrex

At long last, I debuted my custom turbo Vectrex controller at the Houston Arcade Expo on October 19 & 20.  This will be a milestone for Vectrex fans and collectors, as it brings about more ergonomic controls and a rate-adjustable Turbo (auto-fire) mode that can toggle per button.

Vectrex Controller Prototype, as seen in Houston last month

Why, you ask?

I acquired a Vectrex in late 2015 from a very generous individual who had several to spare.  However, it did not come with a controller, so it lay dormant until I got around to building the giant 100x NES controller.  As the guts of a cheap knock-off NES controller from Amazon went into my behemoth NES controller, I used its shell and buttons to enclose a crude perf-board controller, and cut up a cheap Sega Genesis extension cable from Amazon to make all the connections from my hand-soldered board into the Vectrex.  It is well-documented how to fashion a Sega Genesis controller into a Vectrex controller, but I didn't really feel like harvesting the QuickShot knockoff because its cable was going bad.

Original homebrew Vectrex controller using a knockoff NES shell

Anyway, both things (the giant NES controller and my Vectrex) made their debut at Let's Play Gaming Expo in 2016.  I even had MuffinBros Graphics whip up a decal for me to go over the generic Nintendo-esque aesthetic and make it look more Vectrex-y.

Assembled controller with decal designed by MuffinBros Graphics (prior to reworking the screen/vector board).  Note how Select & Start have been repurposed into game buttons 1 & 2.

Ultimately, this controller didn't quite suffice because it could be flaky at times, and as an originalist, I really wanted an analog joystick -- one which neither this NES controller knockoff nor a Genesis controller would provide.  After a while, the generous donor came forward with an original Vectrex controller, and so I could study its design and try to replicate it.

However, the original Vectrex controller is not without its flaws.  It takes up an inordinate amount of space for what it is -- four buttons and a joystick.  The four buttons are in a straight line and spread far apart, forcing even someone with large hands to spread their fingers out and curl some fingers more than others to touch all the buttons.  The joystick has a tall, skinny grip, meaning you must grasp it between your thumb and forefinger rather than just mash it with your thumb.  The controller is designed to be set flat on a table to be used, not held in both hands like pretty much every other controller made.  Despite all these flaws, original Vectrex controllers still fetch well over $100, with homebrew controllers appearing only sparsely.  Given all this, I set out to rectify the ergonomic problems of the controller and modernize its interface, all while keeping such a cheap bill of materials that I could easily squash the market for the original controllers and still (hopefully) make some money.

Lastly, debuting it in Houston was essential because among conventions in Texas, the Houston Arcade Expo tends to have the biggest contingent of Vectrex fans coming to the convention (sometimes Vectrexes are more numerous than any other type of console).  At any other show, it would be far less noticed.

The Design Process

The main impetus for this was to have a homebrew controller that actually featured an analog joystick, since there were few if any guides elaborating how to fashion one from an existing controller.  I acquired a couple Parallax 2-axis joysticks with breadboard mounting capability to do the trick.

The Vectrex comes with a game in its ROM -- Mine Storm, an Asteroids-style shooter -- so you can play without needing a cartridge.  However, with the traditional controller, this requires lots of button-mashing since it has no auto-fire feature.  Using a 555 timer, a potentiometer, and clever values within an RC circuit, I have given it the ability to auto-fire.  Not only that, but once you graduate from Mine Storm, there may be other games where holding down a button to toggle something repeatedly might not be a good idea.  Thus was born the idea to have toggle switches mimicking the positions of the buttons.  These make each button either complete the circuit (by sending GND) just once upon the button press, or send the GND pulse from the 555 timer for as long as the button is held, depending on the position of that button's Turbo toggle switch.  The potentiometer adjusts the rate of auto-fire, from less than once a second to around 7 times a second given the current values of the RC circuit.
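For the curious, the auto-fire rate falls out of the standard 555 astable frequency formula, f = 1.44 / ((R1 + 2*R2) * C).  The component values below are hypothetical stand-ins (my actual RC values aren't listed here), chosen only to show how sweeping a potentiometer covers a range like the one described above:

```typescript
// Standard 555 astable-mode frequency: f = 1.44 / ((R1 + 2*R2) * C).
// R2 plays the role of the rate-adjusting potentiometer.
function astableHz(r1Ohms: number, r2Ohms: number, cFarads: number): number {
  return 1.44 / ((r1Ohms + 2 * r2Ohms) * cFarads);
}

// Hypothetical values: 10k fixed resistor, 100k-to-1M pot, 1 uF cap
console.log(astableHz(10_000, 100_000, 1e-6));   // ~6.9 Hz with the pot near minimum
console.log(astableHz(10_000, 1_000_000, 1e-6)); // ~0.72 Hz with the pot near maximum
```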

The buttons are arranged in a diamond, just like on Sony PlayStation or Microsoft Xbox controllers.  This allows for better agility, as all buttons can now be reached with one finger, and this type of arrangement comes naturally, having been ingrained into our brains since at least the Super Nintendo.  Playing "chords" of buttons happens rarely, if at all, in Vectrex games, so we might as well standardize on a button arrangement that is more familiar, even if it generally accommodates only one finger at a time.

Finally, the screws needed to disassemble an original controller are located under the decal.  As such, you have to peel up the original artwork and risk doing severe damage to it just to reach the screws underneath the sticker on the top side of the controller.


The initial breadboard was built in June 2017.  At first, I was using a jumper wire to complete the circuit for each button press, so it was very inconvenient to play the game at all, much less play it and take a picture simultaneously.  After showing these pictures of the breadboard with auto-fire capability to people at Houston Arcade Expo in 2017, I vowed to get it produced by the 2018 show.  And I just barely made it under the wire with the prototype.

First working breadboard version of the Turbo Vectrex Controller

As you might know, the body of the Vectrex has a clasp that can hold one controller.  Originally, I wanted to split up the controller in two so that this one spot could hold controllers for both players.  However, a controller this small would likely be unwieldy to hold, especially in larger hands.  Furthermore, this would greatly reduce the space on the breadboard available for all the desired components.  As such, I elected to model my enclosure after the original Vectrex controller enclosure.

It was quite painstaking to get the details correct on the controller.  I first attempted to trace the side profile of the controller with graph paper and then approximate it into the computer with Adobe Illustrator.  This proved tedious and left too much uncertainty as to the error, so I got clever and held the controller on its side above a flatbed scanner.  Then I had not only the side profile, but also the profile of the little groove piece where a nub on the system case guides a controller being inserted, locking it into place during transport or storage.  Furthermore, I could extrude the edge and make the object as long as needed to match the original controller width -- 199mm.

If you consider the side profile of the Vectrex controller to be a blob, I "hollowed out" the blob -- leaving a 2mm-thick ring along the inside -- in order to form the regular outer shell of the controller.  In Illustrator, I also sliced it in half so that I could lay both sides flat -- bottom and top of the controller -- in the CAD program, making it easier to model all the details that need to go on those respective pieces.

In particular, the bottom piece was adorned with little grooves in which the breadboard would fit.  As time was running out and component selection was not finalized, I elected not to go with a custom PCB for this implementation, but to simply use my original breadboard.  The bottom side also incorporated screw holes with recesses so that a pan-head screw would not protrude from the case.  While the diameter of these holes was set to accommodate a #6 screw shaft, 3D printing is an inexact science, and as filament deposits thickly in some areas, these ended up as holes that nicely accommodate #4 screws instead.  (Note that these holes are on the bottom, unlike on the original.)

The top side of the case not only features further screw holes aligned exactly over the bottom ones, but also must have holes and slots for all the buttons and joysticks protruding from the breadboard.  The top side is also angled down roughly 10.7 degrees relative to the bottom side (if the bottom is held parallel to the horizontal), so there were a few times I had to flip things around exactly right in order to verify their correctness.

Speaking of modeling, the CAD program I used for all this was TinkerCAD, with its convenient, simple, yet flexible interface.  With at least a few groups of positive and negative shapes, I can model the entire controller.

Besides the joystick, I also went out to source some interesting buttons.  I already had some clicky buttons and some mushy rubber-dome or membrane buttons, but I wanted something in between the feels of these buttons -- light to the touch, yet clicky.  I managed to find an ideal, nice-feeling button at the local electronics surplus store.  I also acquired several potentiometers and knobs, and was trying to figure out the best way to 3D print something for these too when I found a breadboard-compatible potentiometer lying around in the house.  And thank goodness I found that, because otherwise the knob would have had a very flimsy connection to the potentiometer!

The 3D printing aspect of this was tedious, as Stacy was out of town while I was trying to do all the printing.  She had the licenses to the Simplify3D slicer on machines I did not have access to, so I had to do a lot of back-and-forth of STL files and binary files with her before she finally gave me credentials into these systems.  As the bottom plate took somewhere between 6 to 8 hours to print, I decided to try to make the larger top plate print faster by cutting holes out of it in the design, especially so I could at least test the alignment and hopefully make a quick adjustment if anything was wrong.  Fortunately, the holes were indeed aligned correctly, but sadly, the 3D printer stopped working (formed a clog, apparently) and would not let me print any more items after this case and the buttons.  This was a problem because now my nice-fitting top cover was a bit less structurally sound, and it also looked super-weird.  I managed to rectify this during the show by printing out some informational blurbs as a top decal and taping it onto the top cover.  The prototype top cover also didn't have the screw holes in place, so I ended up having to hold the case together with rubber bands.

Ultimately, the slide switches never got promoted/extended up through the top cover because it was questionable as to how to stabilize such a large moving piece through the top cover.  Several times, I have sheared off the nubs of slide switches by applying too much pressure too close to the top of the switch.  You could reach in with a skinny screwdriver and change the switches, but I doubt anyone bothered or even gave it much attention.


Turbo Vectrex Controller in use -- with "traditional grip" :-P

This labor of love seems not to have gotten a whole lot of attention except from the other guy who brought a Vectrex machine, who really liked it (and was willing to 3D-print a proper top cover for me during the show).  It seems like consoles that predate the NES tend to be a little bit too obscure for people these days.  Even the Atari 2600, which was immensely popular in its time, is now generally popular only with people old enough to remember its heyday.  Of course, the Vectrex is a really obscure machine in its own right.  It came out shortly before the Video Game Crash of 1983, and it is nearly impossible to find replacement vector monitors if that part should go out.  This makes for quite an expensive collector's item nowadays, which means the owners are scattered around the country in small numbers.

I think for it to get much traction, I will have to do more than a couple tweets about it and show it at a regional show -- it probably needs to be posted on AtariAge and brought to the Portland Retro Gaming Expo in order to make a big splash.  However, before I take it to that point, it would be nice to make a proper PCB that mounts to the top cover, make the appropriate edits to the CAD model to facilitate that, finalize the bill of materials, and then make sure the whole thing fits into the slot designed for it in the system case.  After the Houston show, I thought this would take no time, but the more I think about it, the more I foresee still quite a bit more math and CAD time ahead.

Thursday, September 27, 2018

Angular Noob: An Observable On An Observable Observing a Promise

With the reusability and extensibility of modern Web components, I do not look back on the days of jQuery with much fondness.  However, I haven't paid much attention to Angular since Angular 1.  Since its syntax didn't really appeal to me, I opted to learn Polymer instead.  Well now, given a new opportunity, I am diving into a much more modern Angular and TypeScript.  Unfortunately, I am finding that a lot of articles people write on Angular, when you're diving into a well-established code base, are about as dense as reading toward the end of a book on quantum mechanics.  It's English alright, but the jargon is applied thickly.  (And this is coming from someone who has even impressed some of Google's Tensorflow engineers with their machine learning skillz.)

The problem at hand is fairly straightforward.  We want to notify something in the UI upon the outcome of a RESTful request we make to an external resource so that it can display useful information to the user.  We call http.get(), which returns an Observable of type Response (Observable<Response>).  Upon the outcome of the Observable (basically the one event this particular instance fires), we will run either onResponse() or onError().

To describe this in code, imagine the following:

Main App TypeScript File:

ngOnInit() {
  // handle routing, and whatever else you can imagine happening here
  this.dataService.loadFromAPIOrOtherSite();  // assuming an injected dataService
}

Data Service TypeScript file:

loadFromAPIOrOtherSite() {
  // assuming an injected dataLoaderService that provides loadData()
  this.dataLoaderService.loadData().subscribe(
    user => this.onResponse(user),
    error => this.onError(error)
  );
}

Data Loader Service TypeScript file:

loadData() {
  return this.http.get(url)
    .map(response => this.transposeData(response.json()))
    .catch(error => Observable.throw(error));
}

The way this works is that once the page loads, the data will be fetched.  The obvious problem here is that the main page never gets informed as to the status of the data fetch; as such, the user is not notified when the server fails to respond properly.  Now, theoretically, you could inform the data service about the UI you are looking to manipulate, but I think it makes more sense for the page to deal with its own UI issues than to hand them off to a data service.

It becomes apparent that what I need to do is get the loadFromAPIOrOtherSite() function to in fact be an Observable itself.  The loadFromAPIOrOtherSite() function utilizes an Observable, so of course the loadData() function returns an Observable that resolves into either the successful answer or an error message.  Unfortunately, a lot of the pedagogy on this topic informs you to use some of the chaining or aggregation functions found in the RxJs library, such as map(), which is overkill for a single GET request.  I don't have a whole array of things to process, nor do I care to append the output of one Observable directly to another Observable.  And, even if there was an array of things to process, it's unclear to me how I could allow the side processes to complete while still returning the request and its status to the main page controller.  I also don't want either of the data services manipulating the DOM directly in order to show the user an error message -- I want the main page controller to handle this.

After enough searching around on Stack Overflow, I finally came across this answer that shows how to nest Observables in a plain fashion, without anything fancy.  It nests an Observable in an Observable by observing the Subscription coming out of the subscribe() function.
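To see the mechanics without pulling in RxJS, here is a dependency-free sketch.  MiniObservable is a toy stand-in for Observable.create, and loadData() / loadFromAPIOrOtherSite() are hypothetical versions of the services in this post; the outer observable simply subscribes to the inner one and re-emits what it sees:

```typescript
type MiniObserver<T> = { next: (value: T) => void; complete?: () => void };

// Toy stand-in for RxJS Observable.create: runs the producer on subscribe().
class MiniObservable<T> {
  constructor(private producer: (observer: MiniObserver<T>) => void) {}
  subscribe(next: (value: T) => void, complete?: () => void): void {
    this.producer({ next, complete });
  }
}

// Inner observable, standing in for the HTTP-backed loadData()
function loadData(): MiniObservable<number> {
  return new MiniObservable(observer => {
    observer.next(42);        // pretend this is the parsed response
    observer.complete?.();
  });
}

// Outer observable: observes the inner subscription and re-emits its result
function loadFromAPIOrOtherSite(): MiniObservable<string> {
  return new MiniObservable(observer => {
    loadData().subscribe(data => {
      observer.next(`processed ${data}`);  // stand-in for onResponse()
      observer.complete?.();
    });
  });
}

const results: string[] = [];
loadFromAPIOrOtherSite().subscribe(value => results.push(value));
console.log(results); // ["processed 42"]
```

The real RxJS version that follows works the same way; it just carries error channels and typed Observers along for the ride.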

Applying This To the Code

There's a little bit of extra logic in here to deal with what happens when the loadFromAPIOrOtherSite() call finishes before or after ngAfterViewInit().  On one hand, you might try to manipulate DOM elements that aren't rendered yet, leading to an undefined mess.  On the other hand, the view might finish rendering before the data load has finished.

Main App TypeScript File:

// You'll want this to deal with timing of the completion of your Observable

import { AfterViewInit } from '@angular/core';

ngOnInit() {
  this.dataService.loadFromAPIOrOtherSite().subscribe(
    data => {        // happy path
      this.done = true;
      this.doSomethingOnUI();
    },
    error => {       // unhappy path
      this.done = true;
      this.doSomethingOnUI();
    }
  );
}

ngAfterViewInit() {
  this.elem = document.querySelector('#elem');
  this.doSomethingOnUI();
}

doSomethingOnUI() {
  if (this.elem && this.done) {
    // do something with this.elem
  }
}

Data Service TypeScript file:

import { Observable } from 'rxjs/Observable';
import { Observer } from 'rxjs/Observer';

loadFromAPIOrOtherSite() {
  return Observable.create((observer: Observer<any>) => {
    this.dataLoaderService.loadData().subscribe(
      data => { observer.next(this.onResponse(data)); observer.complete(); },
      error => { observer.next(this.onError(error)); observer.complete(); }
    );
  });
}

Now, it's helpful when this.onResponse() and this.onError() return something (even something as simple as a string or integer), because observer.next() propagates that return value as an "observation" to the subscriber of loadFromAPIOrOtherSite().  And, with observer.complete(), it will be the last thing that subscription will ever receive.

Nesting This Even Further: Moar Nesting!

It's possible that the previous example doesn't go as far as you need.  What if you want to do something else, like check for incomplete data inside this.onResponse() and augment it with additional data, or show an error to the user if it can't be augmented in the necessary way?  And on top of that, how about that this extra data collection function returns a Promise rather than an Observable?  Let's build upon the previous idea and make even more wrappers.
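As an aside, it can help to see the same check-then-augment control flow written with plain Promises and async/await.  Everything here -- loadData(), promiseReturner(), and the validation helpers -- is a hypothetical stand-in for the services described in this post, not the actual implementation:

```typescript
// Hypothetical stand-ins for the services described above
async function loadData(): Promise<{ id: number; name?: string }> {
  return { id: 7 };                       // pretend HTTP fetch
}
async function promiseReturner(): Promise<{ name: string }> {
  return { name: "augmented" };           // pretend secondary data source
}
const isTotallyUnsuitable = (d: { id: number }) => d.id < 0;
const cantAugment = (d: { name?: string }) => !d.name;

// check -> augment -> error flow, mirroring the nested Observables below
async function loadAndAugment() {
  const data = await loadData();
  if (isTotallyUnsuitable(data)) throw new Error("Failure to get good data at all");
  const extra = await promiseReturner();
  if (cantAugment(extra)) throw new Error("Failure to augment the data");
  return { ...data, ...extra };
}

loadAndAugment().then(result => console.log(result));
```

The Observable version adds one capability this sketch lacks: it can push multiple observations to a subscriber over time rather than resolving exactly once.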

Note that the Data Service TypeScript file now has a subscription to onResponse() as well, not just loadData():

loadFromAPIOrOtherSite() {
  return Observable.create((observer: Observer<any>) => {
    this.dataLoaderService.loadData().subscribe(
      data => {
        this.onResponse(data).subscribe(
          augmentedData => { observer.next(augmentedData); observer.complete(); }
          // etc. -- handle errors much as before
        );
      }
    );
  });
}

We must also modify onResponse() to return an Observable itself, and not just a basic literal or some JSON object.  You'll notice this follows a similar pattern to before, along with handling a lot of possible unhappy paths:

onResponse(data) {
  // used to just "return 42;" or something simple like that
  return Observable.create((observer: Observer<any>) => {
    if (!isTotallyUnsuitable(data)) {
      let moarData = Observable.fromPromise(this.promiseService.promiseReturner());
      moarData.subscribe((data) => {
        if (cantAugment(data)) {
          observer.error("Failure to augment the data");
          return;
        }
        // augment the data here (happy path), then:
        observer.next(data);
        observer.complete();
      });
    } else {
      observer.error("Failure to get good data at all");
    }
  });
}


Now, if you know how to do such complex Observable nesting with map(), concatMap(), or forkJoin(), you're welcome to let the world know in the comments below!  And be sure to upvote the Stack Overflow post linked above if you liked this article!