Thursday, December 7, 2017

Journey to a Fully Custom Pinball Machine - Part 2

Walking the show floor at Texas Pinball Fest 2016, I couldn't help but get the vibe that something novel and big was in store for TPF 2017 -- something beyond the big but typical and expected releases of commercial games such as The Big Lebowski and Ghostbusters (more on those later): the ushering in of a new era of totally home-brew, open-source pinball.  The re-themed games became more impressive from 2015 to 2016, and with easy access to learning about hardware, fabrication techniques for developing new things and restoring or improving old ones, and a rejuvenated fascination with pinball in general, it did not surprise me in the least that someone would totally knock it out of the park, as Scott Danesi did at TPF 2017 with Total Nuclear Annihilation.

However, just in case Scott wasn't there with his amazing game (for which I placed one of the pre-orders slated to ship sometime in 2018), I wanted to produce some work as well in order to show what could be done in this realm by just two people working hard together over a short period of time.  Unfortunately, while this article recounting my activities around the Wylie 1-Flip custom pinball machine is long overdue and probably should have been published way back in May, something big transpired that made me put it off for a long time.  The basis for the electronics in Wylie 1-Flip was the Intel Edison development kit, since it was a convenient mix of an x86-based chip running Linux and an interface supporting Arduino sketches, without the long boot times of a Raspberry Pi.  However, as you may know, Intel decided in 2017 to discontinue much of its hobbyist IoT line, leaving me lamenting the significant time invested in learning a dead platform, plus lots of memory-hogging research tabs left open in Chrome.  (Well, I'm not really lamenting the time; after all, I did study Latin, a famously dead language, and continue to tinker with retro-computers that haven't been manufactured or supported in decades.  However, using a discontinued platform doesn't exactly usher the art of pinball into the cutting edge.)


Where We Left Off


In case you missed Part 1 of this series, there was another goal besides making an awesome custom game to go along with the trend I predicted for TPF 2017: impressing my coworkers and continuing to produce mind-blowing projects to show off alongside their top creative talent at various internal and external events.  You got a slight peek at the CAD design process of the game and the frustration of installing the various mechanisms that go above and below the playfield, and then learned at a high level about the enhancements and innovations that went into it.  Here is where I start describing those innovations at a lower level.

So I can finally close those Chrome tabs...


Even though the Intel Edison is no longer a thing, I still wanted to describe the stumbling blocks in working with the Edison platform that cost me so much time and trouble.  Granted, there's always a learning curve with anything, but here I was biting off a whole lot at once: hand-routing basically all the electronics for the game and writing its controlling logic on a platform whose hardware capabilities I hadn't explored too deeply before, all in the two weeks or so I had left between finishing the cabinet and actually taking the game to shows.  Yes, it was pretty insane, given that a "Makers Gonna Make" event was to be held on 3/2, followed quickly by TPF 2017 starting on 3/24.  However, Stacy had decided to take a buyout package from her employer at the time and took a couple months off work, and believe it or not, she spent a great deal of her time off dealing with artwork and 3D modeling the various parts for this machine.

As the Edison supported a couple different modes of development (one involving the Arduino IDE, and another in standard C++ with gnu/gcc through the MRAA library), I had to choose which one would suit me best.  At first, the Arduino approach looked simple because it was a familiar programming style and way less verbose than the C++ constructs of MRAA.  My first approach was to use interrupts to watch for changes in state on any of the sensors, but if I recall correctly, it was really only feasible to set up a whole bunch of rising-edge and falling-edge interrupts using gnu C++.  I experimented for a long time just trying to get a pin read to work, because it is confusing how the pin numbers are laid out between the GPIO numbering scheme on the board, the Arduino IDE's view of the 20 standard I/O pins, and what the GPIO "files" are named on the file system.

// Arduino | Edison | MRAA
//       0 | 26     | 130
//       1 | 35     | 131
//       2 | 13     | 128
//       3 | 20     | 12
//       4 | 25     | 129
//       5 | 14     | 13
//       6 | 0      | 182
//       7 | 33     | 48
//       8 | 47     | 49
//       9 | ???    | ???
//      10 | 51     | 41
//      11 | 38     | 43
//      12 | 50     | 42
//      13 | 37     | 40
//      14 | 31     | 44
//      15 | 45     | 45
//      16 | 32     | 46
//      17 | 46     | 47
//      18 | 36     | 14
//      19 | 15     | 165
Sheer quackery.

The next big annoyance was that the event loop didn't even work properly when rising- or falling-edge interrupts were triggered.  The basic premise here is simple: when an interrupt fires, raise a flag.  Then, when the event loop checks and finds the flag set, run the desired action (e.g. score points, flash an animation, increment the ball counter...) and clear the flag.  By using rising- and falling-edge interrupts, I can monitor for the side of the button press I really care about -- the actuation, rather than the release.  However, with such interrupts on the Edison, for some reason only the very first pin being monitored -- the left lane rollover switch -- would ever get picked up.  At the time, I was only trying to wire up the three rollover lanes on top, and coded the reads from those switches in this manner, but I obviously couldn't proceed like that for the rest of the switches, since it would mean explicitly naming a separate callback function for each edge on each specific I/O pin.  Instead, I resorted to pin change interrupts, monitoring all the I/O pins for any change whatsoever.  At least this way, the pin that changed arrives as a function argument that can index directly into an array, saving me from explicitly naming each pin.  The downside was that I had to get serious about my debouncing code, since interrupts were now triggered on both the actuation and the release of the switch, and if you know anything about switches, it's possible there were 2 or 3 such toggle cycles registered by the I/O pin before the ball moved away from the area.
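To make that flag-and-loop premise concrete, here is a minimal sketch in generic Arduino-style C++.  The pin number and handler names are mine for illustration, not the actual game code:

volatile bool leftLaneTriggered = false;  // set by the ISR, cleared in the event loop

void leftLaneIsr() {
  leftLaneTriggered = true;               // keep the ISR tiny: just raise the flag
}

void setup() {
  pinMode(2, INPUT);                      // hypothetical rollover switch pin
  attachInterrupt(digitalPinToInterrupt(2), leftLaneIsr, RISING);
}

void loop() {
  if (leftLaneTriggered) {
    leftLaneTriggered = false;            // clear the flag before acting
    // ...score points, flash an animation, increment the ball counter...
  }
}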

I figured there was no point in using pin change interrupts; I might as well just read all the switches at once during the event loop, setting all the flags before they each get analyzed one at a time (acting accordingly for whichever is pressed).  It's not quite as pretty as using interrupts (see the sketch after this list), but:

  • My early understanding of the disassembled code for Gottlieb's Gold Wings (1986) indicates they only use interrupts for countdown & event timers, and that they read pin statuses at some point in the event loop like this anyway
  • MRAA interrupt frequency is only about 100 Hz anyway, due to the complexity of what's involved in checking for interrupts on the Edison, so if my event loop runs faster than 100 times per second, I can react faster than the interrupts would
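Here is roughly what that polling loop looks like in spirit -- a sketch with a made-up switch count, a hypothetical handleSwitch() routine, and a simple time-based debounce, not the game's actual code:

const int NUM_SWITCHES = 16;              // made-up count for illustration
const unsigned long DEBOUNCE_MS = 20;     // ignore re-triggers inside this window
int pins[NUM_SWITCHES];                   // would be filled with the real I/O pin numbers
bool lastState[NUM_SWITCHES];
unsigned long lastChange[NUM_SWITCHES];

void handleSwitch(int i);                 // hypothetical dispatch into the game logic

void setup() {
  for (int i = 0; i < NUM_SWITCHES; i++) {
    pinMode(pins[i], INPUT);
    lastState[i] = LOW;
    lastChange[i] = 0;
  }
}

void loop() {
  unsigned long now = millis();
  for (int i = 0; i < NUM_SWITCHES; i++) {
    bool state = digitalRead(pins[i]);
    if (state != lastState[i] && (now - lastChange[i]) > DEBOUNCE_MS) {
      lastChange[i] = now;
      lastState[i] = state;
      if (state == HIGH) {                // react only to the actuation edge
        handleSwitch(i);                  // score points, flash an animation, etc.
      }
    }
  }
}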

In the table above, you might have noticed those Edison pin numbers, and especially the MRAA pin numbers, get pretty high.  This is because there are a whole bunch of other GPIO pins available on the system to be configured.  I spent a great deal of time, energy, and effort trying to figure out how to tap into all these extra pins, but was ultimately disappointed to learn that they only exist to feed various multiplexers that change the purpose of the 20 standard Arduino I/O pins.  Because the processor inside the Edison wasn't engineered with exactly the same types of I/O registers as, say, the ATmega328, functionality such as serial UART, PWM, SPI, and even pull-up or pull-down resistors in front of the I/O pins has to be switched in externally.  The ATmega chips handle all of this internally, but the Intel processor externalized it into a ton of extra GPIO pins that I thought I could hack to read more sensors -- alas, not without compromising functionality I needed to keep the rest of the system behaving as expected.  To see what all the extra GPIO pins control and where the table above is codified, read this code, this article, and this thorough writeup.

In short, given that:
  • It's infeasible to access the GPIO pins outside of the Arduino realm for your own uses
  • The gnu C++ coding style requires many more variables to be created, casts to be performed, and just longer lines of code than the Arduino C++ style
  • Despite the documentation here and even from Intel's own site, my attempts at making an input pin also use an internal pull-up resistor through MRAA code never seemed to work (and possibly the initial line states were wrong too, if I recall, meaning solenoids might randomly fire upon starting the system), leading me to solder my own bank of resistors onto the board by hand and possibly compromise the electrical reliability of the system; the API shape I was attempting is sketched below
  • Evidently whatever I was trying to do with timer interrupts, or just plain waiting around for some amount of time, didn't work in gnu C++ either, whereas in Arduino I could use the very simple delay() function
I ended up porting my pinball code back to Arduino C++ after doing all this work in gnu C++.
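For reference, here is the sort of MRAA C++ call sequence the documentation suggests should enable a pull-up (and that never behaved for me) -- take it as an illustration of the API shape rather than as verified working Edison code:

#include "mraa.hpp"

int main() {
  mraa::Gpio rollover(45);             // an MRAA pin number (per the table above), not the Arduino one
  rollover.dir(mraa::DIR_IN);          // configure the pin as an input
  rollover.mode(mraa::MODE_PULLUP);    // supposedly enables the internal pull-up
  int state = rollover.read();         // with a working pull-up, this should idle at 1
  return state;
}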

Then came the next pitfall: Edison Arduino C++ code can't send serial data, despite the best advice from here and here.  As I was using BriteBlox LED displays as my DMD of choice (also not a great idea for quality purposes, as they tend to flake out at times, probably due to voltage fluctuations in the presence of unstable power -- largely but not 100% helped by attaching a huge capacitor between power & ground), they must be driven by serial signals in order to show anything meaningful.  I already had lots of experience writing Arduino serial routines to deal with BriteBlox, as that's their native environment, but the Arduino implementation of Serial.write() on Edison just wasn't having anything to do with me.  This meant I had to go back to gnu C++ once again (just for the graphics & serial routines), write a routine there to parse the .BMP graphics files I used for DMD artwork, and then promptly send the result over serial.  I ended up finding a way from within Arduino C++ to execute binaries with arguments, so each time I needed something put on the DMD (whether graphics or just a simple score change), I'd use something akin to this, explained here:

// Build the command line for the external "dmd" binary, then run it in the
// background (note the trailing " &") so the game loop isn't blocked.
String hi = "/home/root/dmd score ";
hi.concat(ballInPlay);
hi.concat(" ");
hi.concat(score[player]);
hi.concat(" &");
system(hi.c_str());  // String's internal buffer isn't public; c_str() exposes it
Updating the score on Wylie 1-Flip.

Where do we go from here?


The next endeavor would likely have been to launch the Wylie 1-Flip game software upon powering up the Edison.  (Right now, you have to reflash the Arduino side of the processor with the program in order for it to start.)  However, considering that:
  • Intel Edison is discontinued
  • There are still electrical gremlins in the system causing random switches to appear toggled when nothing in the game is happening, meaning the pop bumper constantly goes off, the score and flippers change at will, and the ball-in-play counter moves up on its own until your game is terminated
I'm keen on switching this project to the Android Things framework and hope that it'll bring about a less buggy, more electrically isolated hardware platform where I can write all my code in one place without so many confusing or deceiving constructs.

Nevertheless, here's what I have so far:


Unfortunately, based on the few times I've gotten to play the game thus far, it doesn't really seem all that fun anyway.  There are still some issues with the ball getting stuck and the shooter lane not working well that really hamper it (not to mention the by-far-most-annoying electrical issues mentioned earlier), but maybe once I solve those, it would actually be something I would play.  As you can see, the legs are built in a special way so that the machine can really be expertly nudged; while play-testing in Visual Pinball, the game was much more fun if you pushed on the cabinet.


That's already a lot of hand-cut wiring, and there was probably still a ways to go, judging by how the leg plates hadn't been put on yet!

I don't anticipate you'll be seeing a Part 3 of this series anytime soon -- maybe after TPF 2018 in March at the earliest, if I manage to switch successfully to Android Things and happen to solve problems in a noteworthy fashion.

Epilogue - And what of Ghostbusters or The Big Lebowski?


As for those two pins mentioned at the top of this article, neither has fared well: Dutch Pinball has been facing many difficulties shipping TBL to those who pre-ordered it, despite the passage of many years since the initial hype, and the value of Ghostbusters and many other games designed by John Trudeau has taken a hit (if only temporarily) since he was arrested for possessing child pornography outside Chicago in August, just as Hurricane Harvey was rolling into the Texas coast.  Meanwhile, if anyone needs to drop their Ghostbusters LE quickly, you know how to get a hold of me... ;) Sorry, I managed to find a Pro edition for cheap, and it's holding me over just fine.

Thursday, September 7, 2017

A "Baby Tornado" to aid in Python server development

Why?


Since my last post, I've been highly focused on Tensorflow projects at home and at work.  In the process of running Tensorflow behind an API, I've needed to make code changes to the "secret sauce" (business logic) that stands before Tensorflow and actually provides it with its data.  These changes could be in a pipeline of multiple Tensorflow models chained together, in image manipulation, in working with the data the model outputs, or anywhere else.  Unfortunately, it is slow and wastes a bunch of time to constantly restart the whole server (including reinitializing Tensorflow for 20 or 30 seconds), especially when you simply made a typo or used the wrong variable name.

Besides the Tensorflow work, I've been involved in many blog-worthy pursuits since my last post but simply haven't had time to write about them.  (In fact, I meant to write this last week, but forgot.)  Anyway, at the end of June, right before my previous post, I began running biweekly meetups called "Tensorflow Tuesday Office Hours."  Here, interested people get together in various locations around town to talk about Tensorflow and get their questions addressed, whether about installation, scaling it up, the underlying math, or picking a model.  In the process of helping people install, I decided it'd be worthwhile to try the mainline Tensorflow image that includes the Jupyter notebook, rather than the "devel" version that has command-line access only but includes more of the Tensorflow GitHub repo in its image.  It had been many years since I used Jupyter, and I had forgotten its benefits; for such a long time, I fought the Python shell to enter long functions and make tweaks to specific lines in them.  Of course, with Jupyter, you just click on what you want to tweak, then rerun that code snippet.  (2013 called me and congratulated me on this rediscovery. :-P)

It didn't take me long to realize I could utilize a Jupyter notebook to run a Python server where I could change the route functions that a Python server calls when a request is made to a particular endpoint on the server.  This would allow me to make small tweaks to the business logic for the sake of testing the accuracy, performance, or simply fixing typos, without having to wait on Tensorslow [sic] to restart.

How?


The original application I was going to test this with was written using a Flask server.  Flask is a popular choice for quick proofs of concept written in Python, but it has many downsides that make it unsuitable for production.  And as much as I tried to change the underlying route function that Flask would call, the Flask process simply takes over the entire Jupyter notebook, and no other code snippets can be run once you start a Flask server.  Maybe further research would uncover why, or how to get around it, but since the app was being ported to Tornado anyway, I put the Flask research to bed and attempted this with Tornado.  To make a long story short, I got it working, and I can now change the functions Tornado runs whenever it serves a route.
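The gist (see the repo below for the real notebook) is that a Tornado handler can delegate to a plain module-level function; since Python resolves that name at call time, re-running the cell that redefines the function swaps in new logic without restarting the server.  A rough sketch meant for a notebook cell, with the handler and function names invented for illustration:

import tornado.web

def business_logic():                  # redefine and re-run this cell to hot-swap
    return {"result": "hello"}

class ApiHandler(tornado.web.RequestHandler):
    def get(self):
        # business_logic is looked up at request time, so this always
        # calls the latest definition from the most recently run cell
        self.write(business_logic())

app = tornado.web.Application([(r"/api", ApiHandler)])
app.listen(8888)                       # Jupyter's own Tornado IOLoop keeps it served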

Where does the code live?


Check out my Jupyter notebook on GitHub here: https://github.com/mrcity/baby-tornado

In this notebook, simply run In [1], In [2], and In [3].  Each time you want to change what a particular API endpoint and request type does, just edit the code in In [2] and run In [2] again.  Call your endpoint again and observe the change!

As far as Tensorflow is concerned, you could initialize it in the notebook in stage 1, load the model in stage 3, and then not have to worry about those steps ever again -- just change your business logic in stage 2.  Enjoy!

Thursday, June 22, 2017

My Tensorflow Project Isn't Saving the World

Among all the hype around the latest and greatest technologies, there is so much publicity devoted toward how they are being used in grand schemes to cure cancer, reduce energy waste, conserve water, solve poverty, and so forth.  While all these things are wonderful to humanity, there has to be someone left in the background who helps all the do-gooders unwind when it's time to take a break!


The TL/DR Version: Get To the Point!


Use clever arguments when loading up your Docker container so you don't have to shut it down and restart it when you want to mount external directories from the host filesystem or expose the port for the Tensorboard server.  There is also nvidia-docker available if you want to use your CUDA cores.

sudo nvidia-docker run -it -p 6006:6006 -v ~/Pictures/video-game-training/:/video-game-training gcr.io/tensorflow/tensorflow:latest-devel-gpu bash

Use the --output_user_root option in your Bazel builds so the build lands in that external directory on the host you provided earlier.  This way, when you have to shut down your Docker instance, your Bazel build will still be there (though you will have to recreate some symlinks in the Bazel project directory).

bazel --output_user_root=/video-game-training/bazel-build build tensorflow/examples/image_retraining:retrain

Don't forget to store your image category directories within a "training image root" directory at the same level as the bazel-build directory, or else Bazel might try to train on its own model files.

Also, don't forget, if you export the trained model somewhere outside /tmp and then iterate on it, to pass the location of the correct model to the classification step.  Otherwise, you might classify with the wrong model, which could lead to confusion and frustration.

Use my fork of the Imker repo (maybe someday I'll make a pull request to put it in the mainstream code) if you want to download only a portion of the images in a particular category from any Wiki site such as Wikimedia Commons.  This could be built upon so you can segregate training and test data.

Just Use the Devel Docker Image; CUDA Optional


Setting aside my original plans for TensorFlow, it struck me one night to build a classifier that could recognize different game cartridges for the Nintendo Entertainment System (NES).  I had a lot of pre-work to embark on because it had been a long time since my system had been updated with the latest supporting packages.  However, it all ended up being for naught; I found the "virtualenv" approach to installing Tensorflow so fraught with tedium that I went for the simple Docker approach instead.  This is the Tensorflow installation approach I've been recommending since November, and it still seems worth sticking to.

I have a pretty old nVidia graphics card (a GeForce 650 Ti) in my (mostly even older) desktop running Linux (and Windows at times, mostly during tax season).  It still supports nVidia Compute Capability 3.0, which is just barely enough to run the capabilities I need to perform machine learning, play with the Blockchain, and so forth.  To make Tensorflow performant inside Docker, there is a special add-on called nvidia-docker that allows access to your CUDA cores from inside your Docker container, so I can still get blazing fast performance from my own hardware without needing to install everything in my primary environment (which is evidently too jacked up to support the Tensorflow installation).  Docker is great for providing a uniform, trouble-free experience when running apps anyway, because it provides an isolated environment not subject to your system's specific configuration.  However, the version of Docker originally on my system was so old that the required libraries for nvidia-docker were not present; luckily, the upgrade path was simple thanks to their clear instructions.

In fact, thanks in part to my earlier pre-work, and lots of good Internet guides on this topic already, getting Tensorflow working on my desktop in this manner went smoothly, apart from some early trial and error, and of course the usual long waits for compilations to finish.  As I've often said, just use Docker.

Once you have Docker and nvidia-docker installed, here is the best way to run the Tensorflow image.  Note that if you don't have the image already, Docker will automatically download it:

sudo nvidia-docker run -it -p 6006:6006 -v ~/Pictures/video-game-training/:/video-game-training gcr.io/tensorflow/tensorflow:latest-devel-gpu bash

Let's break this down:

  • There's a way to avoid running docker with sudo, but it hides any semblance of auditability or traceability for when users go beyond their expected behaviors and start to get mischievous.
  • nvidia-docker is the binary that supports Docker instances accessing CUDA cores.
  • run tells Docker to launch the specified image in its own isolated environment, with its own filesystem and process tree.
  • -it (or -i -t) specifies first to run the container in interactive mode, leaving stdin (standard input) open even if nothing is attached.  Second, a pseudo-TTY is allocated so the user can actually send input to the container.
  • -p 6006:6006 exposes the Tensorboard port inside the container to the host.  When you start the server, you can access it through localhost:6006 on a browser on your host machine.  Tensorboard is a great way to visualize what is going on inside your training algorithm from the model construction and details standpoint, plus illustrate simple representations of how the data exists in the classification space (as simple as you can make it in as few dimensions as we humans can easily perceive).
  • The -v option lets you mount a directory (not an entire filesystem; there's a different way to do that) from your native filesystem into your Docker container as it runs.  In this case, I wanted to expose the video-game-training directory from my user account's Pictures folder to my Docker instance as /video-game-training so that the algorithm would have access to all my training data.
  • gcr.io/tensorflow/tensorflow:latest-devel-gpu is the Docker image name.
  • bash is the command to run on the Docker image once it starts.  You can run any executable you want, but it is easiest to run a terminal instance.

First Crack At Building a Classifier: Aligning Pictures And Commands


For object classifiers, good training data comes from as many images as you can get of the subject material.  To support this, I took videos of various NES game cartridges while moving the camera around so as to film them from various angles.  Depending on the lighting, the sun or lights would also reflect back into the camera and cause slight imperfections on the label.  I labored for quite a while in the hot Texas sun, taking videos of these games with different backgrounds behind the cartridges so that the classifier would learn to focus on what is important.

Once my environment was all set up and ready to go, I ran this Tensorflow example pretty much verbatim.  It took approximately 24 minutes to run the first step which sets up the Bazel build to run the training task.  However, as my Docker instance did not have any training data loaded into it, I had to exit out of it in order to add the file mount as described above.  Unfortunately, upon logging back into my Docker container, all this pre-work had been wiped out as a result of it all being built in some temporary .cache directory under the root home.  And, to add insult to injury, running that Bazel setup command the second time took more than twice as long -- clocking in at just short of 50 minutes!

Lesson Learned


One easy way to avoid losing your entire Bazel build when Docker decides to refresh the file system from scratch is to specify the --output_user_root option to Bazel before building to be the same as the external file system or directory from the host that you mounted inside Docker.  In my case, this meant specifying the following setting for my build:

bazel --output_user_root=/video-game-training/bazel-build build tensorflow/examples/image_retraining:retrain


Continuing With Trying To Break Bazel And My Docker Instance


Now, this meant I had to put my training examples one level deeper in this directory, or else the next step might try to train on whatever output is in the Bazel build directory itself.  After running the Bazel build, I exited my Docker instance to see what would happen.  When I reopened it, I found that the symlinks in the /tensorflow folder had been changed to point to /root/.cache/bazel, which did not exist (and never existed, because I made the build in another folder).  It took just a bit of manual tedium to point the symlinks back to the right place, but upon doing so, the bazel-bin "retrain" command specified in the Google example to actually perform training worked without a hitch.  With everything in place, this command took less than 15 minutes to perform 4,000 training steps using my approximately 800 pictures of each of the MegaMan and MegaMan 2 cartridges.  The exact syntax looks like this:

bazel-bin/tensorflow/examples/image_retraining/retrain --image_dir /video-game-training/pictures

The output of this step produces two files in the /tmp/ directory: output_graph.pb and output_labels.txt (also /tmp/retrain_logs/ is important if you want to look at your TensorBoard at any point).  I moved these files into a model/ directory inside the directory exposed to Docker from my host system.

As for classification, I utilized the same strategy, using the --output_user_root option on the bazel build "label_image" step (obviously ignoring the conjoined bazel-bin step for the time being, thus stopping short of image classification).  This Bazel build took about 20 minutes:

bazel --output_user_root=/video-game-training/bazel-build build tensorflow/examples/label_image:label_image

Once this step was complete, I exited and re-entered Docker once again, and my symlinks had been similarly screwed up.  Upon restoring them (like last time), I found a picture of the MegaMan 2 cartridge from out on the Internet, and ran it through the classifier in this manner:

bazel-bin/tensorflow/examples/label_image/label_image \
--graph=/video-game-training/model/output_graph.pb \
--labels=/video-game-training/model/output_labels.txt \
--output_layer=final_result \
--image=/video-game-training/megaman2-ex-01.jpg \
--input_layer=Mul


And voilà, a reproducible classification each time, without having to leave my Docker instance open, simply by reconstructing those symlinks!  (That part could easily be scripted in a shell script, in fact.)
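If you do want to script the repair, something along these lines would do it.  The link names and especially the right-hand target paths here are placeholders -- copy the real targets from the output of ls -l bazel-* taken before the container restart, since they vary by Bazel and Tensorflow version:

cd /tensorflow
# Re-point each stale bazel-* symlink into the build tree preserved on the host mount
ln -sfn /video-game-training/bazel-build/<execroot-path>/bin bazel-bin
ln -sfn /video-game-training/bazel-build/<execroot-path>/genfiles bazel-genfiles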

Note: Without that last line in the classification command, you will probably stumble into an error saying "Running model failed: Not found: FeedInputs: unable to find feed output input".   As it turns out, Google's example command is a little bit deficient, but fortunately some forum posts succinctly clarified the issue and offered the solution.

Because the Whole World Isn't Video Game Artwork


My training data consisted of only pictures of the label up-close, and mostly ignored the rest of the cartridge.  However, my first classification picture was in fact of the entire cartridge.  I was astounded at the results, because even considering this difference, the algorithm was 96% certain that my picture of the MegaMan 2 cartridge was in fact MegaMan 2; the 4% remainder was its (very weak) confidence that it was the original MegaMan cartridge.  Now, having spent most of my professional career up until now as a tester, I immediately wanted to see how it would perform on junk input.  I fed it an old picture of one of my pinball machines (Gold Wings, of no relation whatsoever to MegaMan), but the algorithm was 86% confident that what I just showed it was in fact MegaMan, and only 14% confident that it was MegaMan 2.  This was amusing to me, because I suppose in the algorithm's limited worldview of only having been trained on examples of MegaMan or MegaMan 2, it was in no position to say with any authority that anything was in fact neither!

Wikimedia Commons appealed to me as a good location to get quality public-domain photos to use as "negative" training examples (though I suppose I could have used private images with rights held by the authors, and since their data is buried deep within a machine learning model, you would never be the wiser!).  The only downside is their site offers only 200 photos at a time for a given category, and it would be a huge waste to sit there, expand each one, and manually click Save.  Fortunately, Wikimedia Commons supports API calls that will allow you to download all the media for a given category.  Better yet, there is already a Java program called Imker that offers a CLI and GUI wrapper around the API calls.

The only problem with Imker is that its current UI only offers the ability to download every single file within a given category, not to break it up into a fraction of randomly selected images.  Nevertheless, Imker is open-sourced, so I forked the Git repo and began hacking away at the Java code so that I could download just 10,000 of the 272,812 images currently in the "PD-user" category on Wikimedia Commons.  After sorting out a lingering issue, and waiting a few hours (thanks in large part to my crude rate limiter), I have 10,000 images from A-Z, not to mention A-Z in other languages, consisting of roughly 75% JPEGs, 18% PNGs, 5% SVGs, 1% GIFs (even animated), and some TIFFs thrown in for good measure.  Not only that, but the images consist of things like maps, diagrams of all sorts of things in many different languages, road signs, cars, street scenes, landmarks, molecular diagrams, and all sorts of other random stuff only a small percentage of the population could possibly care about. :-P

The beautiful part about using the pre-trained, robust Inception model is that you don't have to worry about scaling your input data to a particular size.  I was able to use these images exactly as they came, and I only had trouble with two images that apparently contained bad data and failed to download properly (had Imker not stopped due to some exceptions regarding unhealthy API responses, this might have been avoided).  Apparently, it dealt with all these file formats adeptly too.

Important Note: One thing that stumped me -- my model showing only "megaman1" and "megaman2" even after I had trained "not-games" -- turned out to be that I was pointing my classification argument at an old copy of the model.  Make sure you set the correct path to your model!

In any event, the Tensorflow model retrained to distinguish between Mega Man 1, Mega Man 2, and "Not a game" performed successfully in my two trials thus far.



                 Trained on MM1 or MM2              Trained on MM1, MM2, or Not a game
Confidence       Mega Man 2    Pinball machine      Mega Man 2    Pinball machine
Mega Man 1       3.9%          86%                  4.0%          9.3%
Mega Man 2       96.1%         14%                  56.8%         1.6%
Not a Game       N/A           N/A                  39.2%         89.1%

Thursday, June 1, 2017

Pre-Google I/O Entertainment: Old Electronics Stores and Computer Resellers!

The opportunity Google gave me to attend Google I/O, their annual conference, two weeks ago required me to travel to the Bay Area in California to attend in person.  Also known as Silicon Valley, it is an area steeped in computer history, featuring (of course) the Computer History Museum, not to mention large offices or global headquarters for many current and long-gone tech behemoths, plus all the tiny startups making millions off various Internet and mobile technologies.  As someone who has been using computers my entire life (well over 25 years now), I am enthusiastic about the way forward but do not want to forget the winding, bumpy way that has gotten us to this point.

As I seek to bolster my collection of retro-tech, it is fascinating to ponder what all these devices would have cost brand new.  There's no way my family could have afforded more than one or two of these things back in the day, but as technology marches on and leaves so much of itself in the dust, follow along with me as I walk through some of the few remaining stores and shops dedicated to the Hardware Era of Silicon Valley.


Definitely not where Google I/O was.

However, pretty much right across the street from this Yahoo! building was the first stop on my tour after picking up my Google I/O badge: WeirdStuff Warehouse.  Never having been in such a computer surplus/resale store, I was filled with just about as much wonderment as when I walked into my first neighborhood computer store back in 2000 (let's just say it was much better than most neighborhood computer stores, and certainly a different experience from the big box retailers).  Upon walking in, you are greeted with about four aisles of tested, working stuff of all kinds, including computers & parts, video equipment, and other assorted electronics.  There are several counters and associates waiting to offer help in this area.  That might not sound like much, but wait; it gets more interesting.



Behind this "Open to the public" sign (actually right where I'm standing when I took this picture), at the far corner of the first room from the entrance, is a whole plethora of aisles in their "As-is" section devoted to old software, I/O cards of all kinds, computer peripherals, cables, test equipment, server racks, typewriters, old telephony equipment, hard drives, floppy drives, CD/DVD drives, tape drives of all types, and even the obscure media that goes with these tape drives.




Some of the aisles in the "As-is" section of WeirdStuff Warehouse.

It is difficult to convey through pictures just how much there is to look at here, because from the camera's perspective it all disappears into the vanishing point so quickly, and so many of the bins are very small.  But after about three or four hours perusing WeirdStuff trying to pick up as many SCSI components as I could muster, I had one of their associates search for some interesting stuff in the back (namely more SCSI drives).  As it turns out, they don't necessarily have everything out on display or listed on their website; generally, the stuff listed on their website isn't out in the aisles available to be browsed in person.  Also, one of the guys from the Vintage Computer Forums says he's got a standing order with WeirdStuff where they'll let him know if they get anything on his wish list.  What a neat service that could be, but I'd hate (love?) to see how much stuff he's ended up with over the years!  Anyway, once I was through, I hailed another Lyft, which whisked me on to Anchor Electronics.

Because when I think Anchor, or Electronics, I think "wire-frame dirigible..." ?!?

Anchor is in a small building right across the street from the southeast corner of the NVIDIA offices.  Walking into Anchor for the first time, it felt like more of a typical electronic component store (i.e. more like a well-stocked Radio Shack) than WeirdStuff.  Everything in Anchor was some sort of component or tool neatly organized on the shelves.  I didn't really have a lot of time to browse, having only about 25 minutes there before they closed, but I wasn't really in need of components either.  They do happen to have various protoboards for everything from ISA cards to Arduino shields, and a small smattering of Atari 8-bit parts of interest to vintage computer folks, but I'm really more interested in Atari ST (Sixteen/Thirty-two) systems, and they don't carry those types of parts.  I did manage to get into a discussion with another fellow in the store and their main technical support guy, Orville, helping brainstorm solutions for a short-distance presence detection application, and Orville was interested to hear that his store was one of my primary sights to see in the Valley.  However, it was closing time, and I needed to leave.



Scenes from inside Anchor Electronics, a relatively small but dense store, including the one telling me it's time to go!

Now I was a bit split.  Do I take another Lyft down into San Jose to Excess Electronics and contend with tons of Silicon Valley traffic, or do I just take what's convenient?  Ultimately, Excess will just have to wait until next time, as Orville ended up driving me to my next destination just a few minutes after close; this would be HSC Electronics.  Along the short 1.5-mile journey there, Orville pointed out all sorts of buildings along the street that house famous names now but held other large names 10, 20, or 30 years ago that have since gone extinct -- most notably, the Qualcomm building on Kifer Road real close to HSC that formerly housed 3Com.

HSC is a really big electronic component store that, as much as I love Tanner's in Dallas, makes Tanner look like an itty-bitty Radio Shack next to a great big Fry's store.  The fellows at HSC were also very cordial, and Orville was buddies with them too (and possibly performing a bit of reconnaissance on the competition... you never know!).  Once I expressed my interest in old computers such as the Amiga and Atari, they brought out some oddities for me to see: the KIM-1 (1976's version of a Raspberry Pi) and another old-time single-board computer I can't remember now, but which was based on the Motorola 6800 series processors if I recall correctly.  And then they showed me a real claim to fame for their store:


Gee, some no-name hack from San Francisco bought an oscilloscope from them.  Who gives? :-P

No, seriously, look closely at that picture above.  If you're not impressed that somewhere, someone was keeping records at HSC for years and years and remembered that kid when he got famous, then I don't know what to tell you.  However, they said the same thing to me (more or less "remember us when you're famous"), though they don't have my name scribbled in a nice large "Name" field like they would have if I had walked into that store back in the '60s too.  Chances are they might have it on some far less interesting credit card log somewhere, but who's to know.

Nevertheless, I spent quite a while browsing this store too, firstly in sheer awe that the arrangement resembles Tanner's so much (but with aisles twice as high, and many more of them) and secondly trying to jog my memory for stuff I could possibly need.


The test bench section and the Self-Serve wire area.  They only ask a couple reasonable things of the test area: let them know ahead of time if you're testing anything with vacuum tubes, and don't leave hot leads lying around.



Aisles upon aisles of stuff, including one barely wider than my shoulders.  Also, mind you, I was homeless most of the day, having checked out of my hotel early that morning and not able to check into the rental house until that evening; as such, I was having to carry all my bags, toiletries, clothes, and my purchases with me at all times up and down the aisles.



Putting All This In Perspective


First off, without the assistance of Raymond, a local buddy of mine who runs arcadecomponents.com and travels out to these stores relatively often (and who also contributed to this relevant thread on the Vintage Computer Forums), I probably would have ended up in some lame stores that pay no such homage to retro technology and only look to resell last year's Cisco servers, or else some general vintage store that once upon a time had a computer section but now mostly sells hipster clothes and only gets someone's old laptop once in a great while.  Nevertheless, here is how all these things fit together physically:

It should be noted that Raymond highly recommended St. John's Bar & Grill as a good place to have a burger, especially if you need to ship some of your larger hauls via the FedEx in the same complex.



Also Intriguing To the Music Nerd


For those of you who happen to be band nerds too, it should be noted that the Santa Clara Vanguard, an extremely competitive and highly-ranked drum and bugle corps and an original member of DCI (Drum Corps International), has their headquarters little more than half a mile north of Anchor Electronics, just on the other side of the NVIDIA offices.  If I were ever good enough to make that corps when I was in school, it would have been awfully intriguing for me to explore the tech scene whenever (if ever?) I had a break from practice, though in actuality I can't imagine the members really spending much time in that office compared to out on the rehearsal field or traveling anyway.

Thursday, May 18, 2017

Moments Inside Google I/O 2017

Google I/O 2017 is the highly-anticipated and much-improved follow-on to Google I/O 2016.  It's evident throughout all aspects of the conference that they took feedback and lessons learned from last year's maiden foray into the Shoreline Amphitheatre to make for a much more awesome experience.  Now, I sit here writing this to you from inside the Amphitheatre itself, comfortable in just a thin long-sleeve shirt and jeans, no wind at all (unlike last year), awash in sound from LCD Soundsystem.  However, I almost didn't write you this story.


A Nearly Missed Opportunity


The registration window for I/O 17 came upon us.  Stacy reminded me it was time, but I balked at the price -- up $150 from last year!  Nevertheless, I still planned to register... until I didn't.  I didn't realize I had forgotten until a day after registration closed.  My heart fell through the floor, and Stacy rolled her eyes... Oh Stevo... Oh well... I thought all hope was lost until she told me of an opportunity to try for another shot at a ticket through the Women Techmakers raffle.  I applied for the raffle dutifully -- it's never fun to fill out the long Google I/O registration form, but knowing this was my only shot, it needed to get done.  Fast forward just a couple days to April 18, tax day, when I was about to spend about $12,000 between car repairs and the tax bill... well, I got debited another $750 when I won the raffle.  That one actually felt good, because it was $300 off the regular ticket price anyway!


What to Do In the Bay Area


Prior to the event, besides bumming around San Francisco while Stacy participated in various Women Techmakers events (and all the touristy stuff that entails, except for the time I wandered through the Tenderloin district once accidentally and a couple times on purpose), I spent a significant part of "Day Zero" wandering around the Sunnyvale area hitting up old-time electronics stores left over from the days when hardware companies ruled Silicon Valley and software was something you crammed onto a tiny PROM chip if you were lucky.  (But more on that possibly next week... I have to make sure I'm clear to post some of those pictures, plus Google I/O is timely this week!)

Anyway, there are some "Zero-Day" parties sponsored by companies participating in Google I/O.  I scored invitations and tickets to the Netflix party and, for the second year, the Intel IoT party.  The night started with the Netflix party for me, which I arrived at after dropping off my haul from all the old electronics and computer stores.  It was at the Computer History Museum in Mountain View, very close to the Googleplex.  It is a cool museum, but unfortunately most of the people there seemed to already be in their little cliques and didn't seem too terribly interested in talking to a guy with a cool homemade LED badge.  It's kind of sad how insular a lot of tech folks can be.

After that party, I headed a bit northwest to the Intel Day Zero IoT party, where like last year, they had a variety of demos and you could earn tokens for swag by listening to these demos, and even more tokens by talking shop with the Intel representatives at the booths.  This one is set up more like a dance party with loud music and flowing beer, and the people at the Intel party seem to be more in a social mood; rather than just standing around on their phones talking amongst their own coworkers, people are flowing around between so many demos that there are plenty of opportunities to meet cool people.  Now, the Netflix party had demos too, but maybe there were not enough or they weren't so compelling?

In any event, even from the experience of getting tickets for I/O on Tuesday before the event, I could already tell things were going to be better.  They had actual organization for the Uber & Lyft rides, and so I knew they had taken into account at least some of the lessons from last year.  In fact, here are my takes on how it's better:

  • Break-out sessions are overall in much larger rooms.  Last year, only one or two of the rooms were really big; the rest of the sessions were in small geodesic domes.  This year, those geodesic domes are reserved for demo rooms (which were mostly outdoors last year, with the exception of a few like Firebase and BigQuery), and all the conference rooms are large.  In fact, I think a couple rooms are even bigger than the biggest rooms last year.
  • The food actually tastes like it has flavor.  The downside to this is I was really looking forward to getting a "food cleansing" like I got last year, but with the improvement in quality, it doesn't feel like that's going to happen.
  • We're back to getting a lot of nice swag.
  • The transportation situation has improved a bunch, as they had well-planned routes for Uber and Lyft drivers to take.
  • There seems to be quite a bit more seating around the venue.
From what I can tell, the cost of these improvements is that some of the demo areas are a bit more squishy than they were last year.  However, previously the Office Hours and Design Reviews took place in half-open tents; this year, they are in enclosed rooms.  There are still a bunch of demos to be seen; plenty on Firebase of course, but also Android Auto, Google Assistant, Android Things, and all the ways in which these platforms can be connected.

The key takeaway I have from all this talk though, including everything from the keynote to the breakout sessions, is:

Don't bother specializing in any one particular area.

As a developer who has always had a wide variety of interests in the field and written in a number of programming languages, I've seen how Google is making possible things that were previously unfathomably difficult -- basically impractical for any corporation to invest in, and something academics could only dream of.  First, we saw Sundar Pichai talk about Google training machine learning models to come up with... other machine learning models.  Computers will now test in parallel what it takes data scientists months or possibly years to come up with in series, and only after lots of tedious model building and testing.  The other big announcement is that Kotlin is being added to Android Studio alongside Java.  And while Google promises that Java will still be a first-class, heavily supported language, it will soon become evident that those who know Kotlin are much more efficient and effective at implementing Android code than those still thrashing through standard Java.

Other notable events:


* Ellie Powers, Product Manager for Google Play developers, introduces Google Baby at Speechless.  As a result of my live-tweeting all of Speechless, she now follows me on Twitter.  Cool!

* Stacy was the headliner of a Women Android Developers panel in San Francisco Thursday night.

* "Make Your Android O-Face" will be the next big social media trend.

* And there's still a whole 'nother day of the conference left, so we'll see what transpires!

Thursday, May 11, 2017

Journey to a Fully Custom Pinball Machine - Part 1

The "maker bone" has bitten me pretty hard since 2011, when I began working on LEDgoes/BriteBlox and won 3rd place in the Apps for Metro Chicago hackathon with Owtsee/headonout.  And while I've worked on a mixture of side projects for profit and fun since, I can't help but note the urge to do more ridiculous things for myself just to say I did them -- not necessarily to make any sort of profit (in fact, most of my projects soak up an enormous amount of time over several years before true completion), but for the notoriety.

The Actual Beginning Of This Post


There... Now that the part Google Plus will post as the headline is out of the way, it seems to me that my coworkers generally love me, and I love them too.  I am seen as quite the maker type inside my area, and have been invited to participate in events even in various groups I don't necessarily belong to because they like to "claim" me.  Often, these are external recruiting events, and I love exuding the ideals of our culture while getting a bit of exposure for my stuff, and it's even more fun to see the people I meet at these events working inside my office within the ensuing weeks or months.

It should be noted that the ROM hack of Tapper was done for the first such event.  I gave them a choice between a clone of Microsoft's "Rodent's Revenge" (rethemed with elements from our business) or retheming the Tapper arcade game to show the logo of where we were rather than the Budweiser logo.  They picked the latter, and thus it was born.  It even turned into a class I give periodically.

For the next event, a bit over four months later, I sent along several ideas for things I could show.  They picked BriteBlox, despite the fact that I had pretty much shown that already along with Tapper.  I was a little less enthusiastic about simply showing another LED light show, so I brainstormed what else I could build around one.

Eureka...!

I was inspired, with only about a month to prepare, to combine this endeavor with something else on my agenda: making a custom pinball machine.  I already had in mind several things I wanted to experiment with on my very own game, and with not a lot of time left (read: not enough time to write tons of game logic for different modes, let alone wire in all kinds of switches, drop targets, and toys), I opted to neglect the Bally Ms. Pac-Man cabinet I bought solely for the purpose of making my own full custom game (I even sanded all the paint off it a while back) and go with a custom cabinet design that's only about 60% the size of a full commercial game.  This way, it wouldn't look so empty when I didn't populate the playfield with a whole lot of game elements, and it's also "easily" portable (you can pick it up by yourself without needing to tie it up, strap it to a dolly, take the legs off, etc.).

T minus One Month


After talking to someone particularly enthusiastic about my project, I knew I had to act fast.  I dabbled around with a Windows program called Visual Pinball to build and test my layout on the computer.  Version 10 has a fair number of idiosyncrasies, and although it's an open-source project, it requires a fancy version of Visual Studio that has some development libraries the free version doesn't come with.  So much for trying to make it the way I like it... Nevertheless, with enough perseverance, I managed to build myself a nice layout that seemed to play well, especially when I learned how to play "two-dimensional piano" in order to nudge the game (in the simulator) every which way to make the ball really go where I wanted it to.  (Is it wrong that I designed a game that requires nudging in order to be fun?  I feel like it adds to the skill of a well-rounded pinball player...)

Of course, any time I'm dealing with CAD software (even if it's a pinball simulator), I spend a whole lot of time with geometry.  Most of this was fairly simple measuring, but still tedious as I often liked to write down all the measurements to the ten-thousandth of an inch.  This is far more precision than I really needed for the CNC router, but I wanted to make every effort to ensure it would play in real life just like it did in the simulator.  This meant collecting precise alignments of the lanes, the round area in the back where the ball curves over onto the playfield after the initial shot, the pop bumper, the slingshot holes, rollover targets, general illumination... you get the idea.  Once it was all measured, I duplicated everything into VCarve (our program of choice for the CNC router), which took yet more time just from the sheer number of elements required for even a fairly simple playfield.  And this didn't even account for any "stencils" as to where mounting hardware for all the playfield accessories would go; I ended up attaching them by eye, hand, and feel later on.

What's new with this machine?


To summarize some of the enhancements I presented to the crowd at this recruiting event, and at the Texas Pinball Festival a couple weeks later:

  • High-end servo motor instead of complicated traditional flipper mechanisms
  • 3D-printed brackets as mounting hardware for many rollover switches & general illumination
  • Rollover targets consisting of inexpensive yet highly durable computer keyboard key switches rather than expensive specialty pinball switches that require constant cleaning and/or calibration
  • A DMD consisting of a 24-panel BriteBlox LED matrix
As development went on, I also eliminated the need for a switch matrix by using an I2C port expander chip; more on that later.
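As a preview of that port-expander idea: this post doesn't name the chip, but with something like the common MCP23017, the Arduino side can read a whole bank of switches in one I2C transaction instead of scanning a matrix.  A hedged sketch (the chip choice and address are my assumptions; register addresses are from the MCP23017 datasheet):

#include <Wire.h>

const uint8_t EXPANDER_ADDR = 0x20;  // MCP23017 with its address pins grounded (assumed)

void writeRegister(uint8_t reg, uint8_t value) {
  Wire.beginTransmission(EXPANDER_ADDR);
  Wire.write(reg);
  Wire.write(value);
  Wire.endTransmission();
}

uint8_t readRegister(uint8_t reg) {
  Wire.beginTransmission(EXPANDER_ADDR);
  Wire.write(reg);
  Wire.endTransmission();
  Wire.requestFrom(EXPANDER_ADDR, (uint8_t)1);
  return Wire.read();
}

void setup() {
  Wire.begin();
  writeRegister(0x00, 0xFF);  // IODIRA: all of port A as inputs
  writeRegister(0x0C, 0xFF);  // GPPUA: enable internal pull-ups on port A
}

void loop() {
  uint8_t switches = readRegister(0x12);  // GPIOA: eight switches in one read
  // With pull-ups, a closed-to-ground switch reads as a 0 bit.
}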

Also, to increase our Intel street cred, I used the Intel Edison development board with the Arduino Breakout Kit, and fabricated my own driver board consisting of MOSFETs to drive the solenoids and pull-up resistors for every single switch.  No expensive P-ROC boards for me!  I'm doing this fully my way.  I know enough about single-board computers, development kits such as the Edison, and embedded C/C++ programming to just go about this myself... right?!

Stay tuned...


Usually I detail a project in a single blog post, thus many of my posts are really long.  Instead (because I'm crunched for time tonight anyway, and because this can easily be talked about in stages), I am going to write several posts on this project over the next few weeks, detailing the ins and outs of my journey.  And honestly, it has only been about three months since I started, thus the requisite number of years before I'm truly finished with something have nowhere near elapsed yet.  As such, there are still quite a few quirks with it (especially electrical gremlins) that I need to fix, and all that will be detailed here too.

I especially want to talk about my fights with MRAA and the Arduino Breakout Kit, and some of the "fun" involved with sensing multiple switches rapidly, from a technical standpoint anyway, and then of course get into softer things such as artwork and the game's logic and theme, and possibly a bit on fabricating the cabinet (which seemed to take a great deal of time).  Be sure to check in on Thursdays around this same time for future posts!