Thursday, November 8, 2018

Getting Started with a Sparkfun ESP32 Thing, ESP-IDF, and Visual Studio Code

I've had a Sparkfun ESP32 Thing lying around on my desk since back in May, when I met the fellow from Iron Transfer at a pinball convention and we got to talking about IoT and his devices to remotely administer pinball machines.  However, I spent tons of time this year planning for exhibitions, and didn't really get to do anything with it -- until now.


Before You Begin


There are a few choices you need to make up-front about which IDE you wish to use for development, plus which development framework.  I have chosen Microsoft's Visual Studio Code since it is cross-platform, feature-rich without hogging resources, free to download, and based on an open-source product similar to how Google Chrome is derived from Chromium.  It comes with extensions for language support and IntelliSense, which are must-haves when authoring code.  You are free to use an IDE of your choice; it won't really hamper your ability to enjoy the rest of this article.

The other decision lies in your development framework.  I investigated two choices -- the Arduino framework and ESP-IDF.  When doing my research, mostly back in the summer, I found several posts where people indicated running into problems with support for various pieces of the Bluetooth stack.  Plus, I wanted to be closer to the bare metal, with less abstracted away from me, so I went with ESP-IDF.  Now, if you don't go along with this choice, you may not find much value in the rest of this article, but you're welcome to weigh these two alternatives for yourself before deciding what to write your code in.

Anyway, if you choose VS Code with ESP-IDF, you can follow along with this Instructable that details how to set up the entire toolchain: https://www.instructables.com/id/Develop-ESP32-With-PlatformIO-IDE/  Despite one commenter who complained loudly about the poor quality of the Instructable, I didn't find it too troublesome to follow.  You might just need to look up what any missing pieces are called nowadays, as some things have changed, but I can assure you I got through setup just this past weekend with mostly just that article.


Strange Things You May Encounter


After getting a good way through the setup steps, it was time to compile the code for the first time.  Unfortunately, some odd errors appeared indicating I hadn't defined some constants and functions.  I Googled for the missing constant name, eventually found it on GitHub, and saw that it was missing from my local copy of the header file (which leads me to believe the Instructable's accompanying example code is still being updated against newer libraries).  Fortunately, it is easy to update your development libraries if you end up with an old version that is missing such code.

From the VS Code Command Palette (Ctrl+Shift+P or Cmd+Shift+P), just look for:
PlatformIO: Update All (platforms, packages, libraries)

Or, from the command line/terminal, write:
platformio update

Once I updated the library code, I was able to compile the code successfully.  However, when using the toolbar at the bottom left of the VS Code window, it is not always apparent which terminal window the command you selected from the toolbar is actually running in.  Whenever you click on the toolbar, it seems to generate a new Terminal instance that's selectable from a dropdown at the top right of that window pane.  Usually, once the command is done running, you can close that terminal instance by pressing a key, but you are always left with one PowerShell instance from within VS Code where you can run your own commands.

After I uploaded the binary to the Sparkfun Thing for the first time, it displayed nothing but garbage on the serial terminal and didn't show up in the list on my Android's copy of the Nordic nRF Connect BLE scanner app.  This compelled me to reinstall the PlatformIO Core and the platforms/packages/libraries again, especially since the first reinstall attempt apparently failed to remove a file due to a "Permission denied" error.  After seeing an error about something basic being missing once more when rebuilding, I did what you do with any Microsoft product to fix it -- you restart it.  A quick restart of VS Code fixed the problem, and I was able to rebuild the binary once again without issue.

There was another problem when building the binary: all of my directory paths involved with this project have spaces in them, and the tool did not end up putting quotation marks around some of these.  As such, I would see errors such as:

esptool write_flash: error: argument <address> <filename>: [Errno 2] No such file or directory: 'd:\\Programming\\BriteBlox'

Fortunately, a bit more Googling turned up a verbose flag that reveals the underlying command that caused this error:

pio run -t upload -v

Here is the command it revealed:

"c:\users\user\.platformio\penv\scripts\python.exe" "C:\\Users\\user\\.platformio\\packages\\tool-esptoolpy\\esptool.py" --chip esp32 --port "COM6" --baud 921600 --before default_reset --after hard_reset write_flash -z --flash_mode dio --flash_freq 40m --flash_size detect 0x1000 D:\\Programming\\BriteBlox Wearable ESP-IDF\\.pioenvs\\esp32thing\\bootloader.bin 0x8000 D:\\Programming\\BriteBlox Wearable ESP-IDF\\.pioenvs\\esp32thing\\partitions.bin 0x10000 .pioenvs\\esp32thing\\firmware.bin

Now to make the modifications: change the fully-qualified path to Python at the beginning to just python (it seems to freak out with the full path), then add quotes as needed around the paths containing spaces.  Run the command in your own terminal, and the code will deploy as desired.  Meanwhile, this GitHub issue seems to indicate a fix might be imminent.
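
For reference, here is roughly what that command looks like after those edits (the quoting shown is my own; adjust to your own paths):

python "C:\Users\user\.platformio\packages\tool-esptoolpy\esptool.py" --chip esp32 --port COM6 --baud 921600 --before default_reset --after hard_reset write_flash -z --flash_mode dio --flash_freq 40m --flash_size detect 0x1000 "D:\Programming\BriteBlox Wearable ESP-IDF\.pioenvs\esp32thing\bootloader.bin" 0x8000 "D:\Programming\BriteBlox Wearable ESP-IDF\.pioenvs\esp32thing\partitions.bin" 0x10000 ".pioenvs\esp32thing\firmware.bin"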

Alas, rebuilding the binary did not solve the problem of garbage coming through my serial monitor.  I Googled around for this some more, and found out that the Sparkfun Thing was running at a different crystal frequency than expected.  It runs by default at 26MHz, but the development platform assumes the device is running at 40MHz.  As such, by taking the configured baud rate of 115200 and multiplying by 26/40, I found the true baud rate: 74880.  By opening up RealTerm on COM6 and entering 74880 as the baud rate, I was finally able to see the expected serial output from the Sparkfun Thing instead of garbage:

BLE Advertise, flag_send_avail: 1, cmd_sent: 4
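
As a rule of thumb (my own back-of-the-envelope reasoning, not something from the ESP-IDF docs), the serial output scales with the crystal mismatch:

observed baud = configured baud × (actual crystal MHz / assumed crystal MHz)
115200 × (26 / 40) = 74880

The same arithmetic explains the Nota Bene further down: configuring the framework for a 24MHz crystal on a 26MHz board gives 115200 × (26 / 24) = 124,800.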

Now, to solve the mismatch between expected and actual frequency, you could either change the crystal to the expected frequency or adjust the development framework to work with the crystal on the board as it is.  In this case, I chose the latter approach.  Many people write about running make menuconfig in the root directory of your project in order to adjust the board settings, but that only seems feasible on Linux.  For Windows users: go into the sdkconfig.h file in your project, and alter the following lines:


Before (what the framework generated):

#define CONFIG_ESP32_XTAL_FREQ 40
#define CONFIG_ESP32_XTAL_FREQ_40 1

After (matching the 26MHz crystal):

#define CONFIG_ESP32_XTAL_FREQ 26
#define CONFIG_ESP32_XTAL_FREQ_40 0

(Nota Bene: I kept accidentally writing 24 in places rather than 26, which led to lots of confusion.  If you write 24, the resulting baud rate will be 124,800.)

By changing those two lines from the "Before" values to the "After" values, you will be able to read serial output at the expected 115200 rate, and see the device appear in the nRF Connect app as "ESP-BLE-HELLO" if you copied the Instructable tutorial code.

Happy coding for BLE!

Thursday, November 1, 2018

The OpenBrite Turbo Controller for Vectrex

At long last, I debuted my custom turbo Vectrex controller at the Houston Arcade Expo on October 19 & 20.  This will be a milestone for Vectrex fans and collectors, as it brings more ergonomic controls and a rate-adjustable Turbo (auto-fire) mode that can be toggled per button.

Vectrex Controller Prototype, as seen in Houston last month

Why, you ask?


I acquired a Vectrex in late 2015 from a very generous individual who had several to spare.  However, it did not come with a controller, so it lay dormant until I got around to building the giant 100x NES controller.  As the guts of a cheap knock-off NES controller from Amazon went into my behemoth NES controller, I used its shell and buttons to enclose a crude perf-board controller, and cut up a cheap Sega Genesis extension cable from Amazon to make all the connections from my hand-soldered board into the Vectrex.  It is well documented how to fashion a Sega Genesis controller into a Vectrex controller, but I didn't really feel like harvesting the QuickShot knockoff because its cable was going bad.

Original homebrew Vectrex controller using a knockoff NES shell


Anyway, both things (the giant NES controller and my Vectrex) made their debut at Let's Play Gaming Expo in 2016.  I even had MuffinBros Graphics whip up a decal for me to go over the generic Nintendo-esque aesthetic and make it look more Vectrex-y.

Assembled controller with decal designed by MuffinBros Graphics (prior to reworking the screen/vector board).  Note how Select & Start have been repurposed into game buttons 1 & 2.

Ultimately, this controller didn't quite suffice because it could be flaky at times, and as an originalist, I really wanted an analog joystick -- one which neither this NES controller knockoff nor a Genesis controller would provide.  After a while, the generous donor came forward with an original Vectrex controller, and so I could study its design and try to replicate it.

However, the original Vectrex controller is not without its flaws.  It takes up an inordinate amount of space for what it is -- four buttons and a joystick.  The four buttons are in a straight line and spread far apart, forcing even someone with large hands to spread their fingers out and curl some fingers more than others to touch all the buttons.  The joystick has a tall, skinny grip, meaning you must grasp it between your thumb and forefinger rather than just mash it with your thumb.  The controller is designed to be set flat on a table for use, not held in both hands like pretty much every other controller ever made.  Despite all these flaws, original Vectrex controllers still fetch well over $100, and homebrew controllers appear only sparsely.  Given all this, I set out to rectify the ergonomic problems of the controller and modernize its interface, all while keeping the bill of materials so cheap that I could easily squash the market for original controllers and still (hopefully) make some money.

Lastly, debuting it in Houston was essential because, among conventions in Texas, the Houston Arcade Expo tends to draw the biggest contingent of Vectrex fans (sometimes Vectrexes are more numerous there than any other type of console).  At any other show, it would be far less noticed.

The Design Process


The main impetus for this was to have a homebrew controller that actually featured an analog joystick, since there were few if any guides explaining how to fashion one from an existing controller.  I acquired a couple of Parallax 2-axis joysticks with breadboard mounting capability to do the trick.

The Vectrex comes with a game built into its ROM -- Mine Storm, an Asteroids clone -- so you can play without needing a cartridge.  However, with the traditional controller, this requires lots of button-mashing since it has no auto-fire feature.  Using a 555 timer, a potentiometer, and clever values within an RC circuit, I have given it the ability to auto-fire.  Not only that, but once you graduate from Mine Storm, there may be other games where toggling something repeatedly by holding down a button is not a good idea.  Thus was born the idea to have toggle switches mimicking the positions of the buttons.  Depending on the position of the Turbo toggle switch for a given button, the button either completes the circuit (by sending GND) just once upon the press, or sends the GND pulse train from the 555 timer for as long as the button is held.  The potentiometer adjusts the rate of auto-fire, from less than once a second to around 7 times a second given the current values of the RC circuit.
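
For the curious: in the standard 555 astable configuration, the fire rate follows the classic formula below.  The component values here are purely hypothetical (mine aren't documented in this post), just to show the stated range is plausible:

f ≈ 1.44 / ((R1 + 2·R2) × C)

For example, with R1 = 22 kΩ, a 100 kΩ potentiometer as R2, and C = 10 µF, the rate sweeps from about 1.44 / (222 kΩ × 10 µF) ≈ 0.65 Hz with the pot maxed out to about 1.44 / (22 kΩ × 10 µF) ≈ 6.5 Hz with it at minimum -- right around the "less than once a second to around 7 times a second" range.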

The buttons are arranged in a diamond shape, just like on Sony PlayStation or Microsoft Xbox controllers.  This allows for better agility, as now all buttons can be reached with one finger, and this type of arrangement comes naturally since it has been ingrained into our brains since at least the Super Nintendo.  Playing "chords" of buttons happens rarely, if at all, in Vectrex games, so we might as well standardize the button arrangement to something more familiar, even if it generally accommodates only one finger at a time.

Finally, the screws to disassemble an original controller are located on its top side, underneath the decal.  As such, you have to peel up the original artwork and risk doing severe damage to it just to get to the screws.

Assembly


The initial breadboard was built in June 2017.  At first, I was using a jumper wire to complete the circuit for each button press, so it was very inconvenient to play the game at all, much less play it and take a picture simultaneously.  After showing pictures of the breadboard with auto-fire capability to people at the Houston Arcade Expo in 2017, I vowed to get it produced by the 2018 show.  And I just barely made it under the wire with the prototype.

First working breadboard version of the Turbo Vectrex Controller


As you might know, the body of the Vectrex has a clasp that can hold one controller.  Originally, I wanted to split the controller in two so that this one spot could hold controllers for both players.  However, a controller that small would likely be unwieldy to hold, especially in larger hands.  Furthermore, this would greatly reduce the breadboard space available for all the desired components.  As such, I elected to model my enclosure after the original Vectrex controller enclosure.

It was quite painstaking to get the details correct on the controller.  I first attempted to trace the side profile of the controller onto graph paper and then approximate it in the computer with Adobe Illustrator.  This proved tedious and left too much uncertainty as to the error, so I got clever and held the controller on its side above a flatbed scanner.  Then, not only did I have the side profile, but also the profile of the little groove where a nub on the system case guides a controller being inserted, locking it into place during transport or storage.  Furthermore, I could extrude the edge and make the object as long as needed to match the original controller width -- 199mm.

If you consider the side profile of the Vectrex controller to be a blob, I "hollowed out" the blob -- leaving a 2mm-thick ring along the inside -- in order to form the outer shell of the controller.  In Illustrator, I also sliced it in half so that I could lay both halves flat -- the bottom and top of the controller -- in the CAD program, making it easier to model all the details that go on those respective pieces.

In particular, the bottom piece was adorned with little grooves in which the breadboard would fit.  As time was running out and component selection was not finalized, I elected not to go with a custom PCB for this implementation, but simply to use my original breadboard.  The bottom side also incorporated screw holes with recesses so that a pan-head screw would not protrude from the case.  While the diameter of these holes was sized for a #6 screw shaft, 3D printing is an inexact science, and as filament deposits thickly in some areas, the holes ended up nicely accommodating #4 screws instead.  (Note that these holes are on the bottom, unlike on the original.)

The top side of the case not only features screw holes aligned exactly over the bottom ones, but also needs holes and slots for all the buttons and joysticks protruding from the breadboard.  The top side is also angled down roughly 10.7 degrees relative to the bottom side (if the bottom is aligned parallel to the horizontal), so there were a few times I had to flip things around just right in order to verify their correctness.

Speaking of modeling, the CAD program I used for all this was TinkerCAD, with its convenient, simple, yet flexible interface.  With just a few groups of positive and negative shapes, I could model the entire controller.

Besides the joystick, I also went out to source some interesting buttons.  I already had some clicky buttons and some mushy rubber-dome or membrane buttons, but I wanted something in between -- light to the touch, yet clicky.  I managed to find an ideal, nice-feeling button at the local electronics surplus store.  I also acquired several potentiometers and knobs, and was trying to figure out the best way to 3D print something for these too when I found a breadboard-compatible potentiometer lying around the house.  And thank goodness I found it, because otherwise the knob would have had a very flimsy connection to the potentiometer!

The 3D printing aspect of this was tedious, as Stacy was out of town while I was trying to do all the printing.  She had the licenses to the Simplify3D slicer on machines I did not have access to, so I had to do a lot of back-and-forth with STL files and binaries before she finally gave me credentials to those systems.  As the bottom plate took somewhere between 6 and 8 hours to print, I decided to make the larger top plate print faster by cutting holes out of it in the design, so I could at least test the alignment and hopefully make a quick adjustment if anything was wrong.  Fortunately, the holes were indeed aligned correctly, but sadly, the 3D printer stopped working (it formed a clog, apparently) and would not print any more items after this case and the buttons.  This was a problem because my nice-fitting top cover was now a bit less structurally sound, and it also looked super-weird.  I managed to rectify this during the show by printing out some informational blurbs as a top decal and taping them onto the top cover.  The prototype top cover also didn't have its screw holes in place, so I ended up having to hold the case together with rubber bands.

Ultimately, the slide switches never got extended up through the top cover, because it was unclear how to stabilize such a large moving piece passing through it.  Several times, I have sheared the nubs off slide switches by applying too much pressure too close to the top of the switch.  You could reach in with a skinny screwdriver and change the switches, but I doubt anyone bothered or even gave them much attention.

Outcome


Turbo Vectrex Controller in use -- with "traditional grip" :-P

This labor of love seems not to have gotten a whole lot of attention, except from the other guy who brought a Vectrex machine, who really liked it (and was willing to 3D-print a proper top cover for me during the show).  It seems that consoles predating the NES tend to be a little too obscure for people these days.  Even the Atari 2600, which was immensely popular in its time, is now generally popular only with people old enough to remember its heyday.  Of course, the Vectrex is a really obscure machine in its own right.  It came out shortly before the Video Game Crash of 1983, and it is nearly impossible to find replacement vector monitors should that part go out.  This makes for quite an expensive collector's item nowadays, which means the owners are scattered around the country in small numbers.

I think for it to get much traction, I will have to do more than post a couple of tweets and exhibit it at a regional show -- it probably needs to be posted on AtariAge and brought to the Portland Retro Gaming Expo in order to make a big splash.  However, before I take it to that point, it would be nice to make a proper PCB that mounts to the top cover, make the appropriate edits to the CAD model to facilitate that, finalize the bill of materials, and then make sure the whole thing fits into the slot designed for it in the system case.  Right after the Houston show, I thought this would take no time, but the more I think about it, the more math and CAD time I foresee ahead.

Thursday, September 27, 2018

Angular Noob: An Observable On An Observable Observing a Promise

With the reusability and extensibility of modern Web components, I do not look back on the days of jQuery with much fondness.  However, I haven't paid much attention to Angular since Angular 1.  Since its syntax didn't really appeal to me, I opted to learn Polymer instead.  Well, now, given a new opportunity, I am diving into a much more modern Angular and TypeScript.  Unfortunately, I am finding that a lot of articles on Angular, when you're diving into a well-established code base, are about as dense as the final chapters of a book on quantum mechanics.  It's English, alright, but the jargon is applied thickly.  (And this is coming from someone who has impressed some of Google's TensorFlow engineers with his machine learning skillz.)

The problem at hand is fairly straightforward.  We want to notify something in the UI upon the outcome of a RESTful request we make to an external resource so that it can display useful information to the user.  We call http.get(), which returns an Observable of type Response (Observable<Response>).  Upon the outcome of the Observable (basically the one event this particular instance fires), we will run either onResponse() or onError().

To describe this in code, imagine the following:

Main App TypeScript File:

ngOnInit() {
  this.dataService.loadFromAPIOrOtherSite();
  // handle routing, and whatever else you can imagine happening here
}


Data Service TypeScript file:


loadFromAPIOrOtherSite() {
  this.dataLoader.loadData().subscribe(
    user => this.onResponse(user),
    error => this.onError(error)
  );
}

Data Loader Service TypeScript file:

loadData() {
  // assumes RxJS 5 patch-style imports: 'rxjs/add/operator/map',
  // 'rxjs/add/operator/catch', and 'rxjs/add/observable/throw'
  return this.http.get(url)
    .map(response => this.transposeData(response.json()))
    .catch(error => Observable.throw(error));
}

The way this works is that once the page loads, the data gets fetched.  The obvious problem here is that the main page never gets informed of the status of the data fetch; as such, the user is not notified when the server fails to respond properly.  Now, theoretically, you could inform the data service about the UI you are looking to manipulate, but I think it makes more sense for the page to deal with its own issues rather than delegating that to the services.

It becomes apparent that what I need is for the loadFromAPIOrOtherSite() function to be an Observable itself.  It already utilizes an Observable, since loadData() returns an Observable that resolves into either the successful answer or an error message.  Unfortunately, a lot of the pedagogy on this topic tells you to use the chaining or aggregation functions found in the RxJS library, such as map(), which is overkill for a single GET request.  I don't have a whole array of things to process, nor do I care to append the output of one Observable directly to another.  And even if there were an array of things to process, it's unclear to me how I could let the side processes complete while still returning the request and its status to the main page controller.  I also don't want either of the data services manipulating the DOM directly in order to show the user an error message -- I want the main page controller to handle this.

After enough searching around on Stack Overflow, I finally came across this answer that shows how to nest Observables in a plain fashion, without anything fancy: it wraps the inner subscribe() call inside the producer function of an outer Observable.


Applying This To the Code


There's a little bit of extra logic in here to deal with what happens when the loadFromAPIOrOtherSite() call finishes before or after ngAfterViewInit().  On one hand, you might try to manipulate DOM elements that aren't rendered yet, leading to an undefined mess.  On the other hand, the view might finish rendering before the data load has finished.

Main App TypeScript File:

// You'll want this to deal with timing of the completion of your Observable

import { AfterViewInit } from '@angular/core';

ngOnInit() {
  this.dataService.loadFromAPIOrOtherSite().subscribe(
    data => {
      // happy path
      this.done = true;
      this.doSomethingOnUI();
    },
    error => {
      // unhappy path
      this.done = true;
      this.doSomethingOnUI();
    }
  );
}

ngAfterViewInit() {
  this.elem = document.querySelector('#elem');
  this.doSomethingOnUI();
}

doSomethingOnUI() {
  if (this.elem && this.done) {
    // do something with this.elem
  }
}

Data Service TypeScript file:


import { Observable } from 'rxjs/Observable';
import { Observer } from 'rxjs/Observer';

loadFromAPIOrOtherSite() {
  return Observable.create((observer: Observer<any>) => {
    this.dataLoader.loadData().subscribe(
      data => {
        observer.next(this.onResponse(data));
        observer.complete();
      },
      error => {
        observer.next(this.onError(error));
        observer.complete();
      }
    );
  });
}

Now, it's helpful when this.onResponse() and this.onError() return something (even something as simple as a string or integer), because observer.next() propagates that return value as an "observation" to the subscriber of loadFromAPIOrOtherSite().  And, with observer.complete(), it will be the last thing that subscription ever receives.
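
For instance, here's a minimal sketch (the return values are hypothetical -- they can be anything your subscriber finds useful):

onResponse(response) {
  // this value is what observer.next() relays to the outer subscriber
  return 'data loaded just fine';
}

onError(error) {
  return 'something went wrong: ' + error;
}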

Nesting This Even Further: Moar Nesting!


It's possible that the previous example doesn't go as far as you need.  What if you want to do something else, like check for incomplete data inside this.onResponse() and augment it with additional data, or show an error to the user if it can't be augmented in the necessary way?  And what if, on top of that, this extra data-collection function returns a Promise rather than an Observable?  Let's build upon the previous idea and make even more wrappers.

Note that the Data Service TypeScript file now has a subscription to onResponse() as well, not just loadData():

loadFromAPIOrOtherSite() {
  return Observable.create((observer: Observer<any>) => {
    this.dataLoader.loadData().subscribe(
      data => {
        this.onResponse(data).subscribe(
          augmentedData => {
            observer.next(augmentedData);
            observer.complete();
          }
       // etc...

We must also modify onResponse() to return an Observable itself, and not just a basic literal or some JSON object.  You'll notice this follows a similar pattern to before, along with handling a lot of possible unhappy paths:

onResponse(data) {
  // used to just "return 42;" or something simple like that
  return Observable.create((observer: Observer<any>) => {
    if (!isTotallyUnsuitable(data)) {
      // fromPromise wraps the Promise so we can subscribe to it like any Observable
      // (RxJS 5 patch import: 'rxjs/add/observable/fromPromise')
      let moarData = Observable.fromPromise(this.promiseService.promiseReturner());
      moarData.subscribe((augmented) => {
        if (cantAugment(augmented)) {
          observer.next("Failure to augment the data");
          observer.complete();
          return;  // don't fall through to the happy path
        }
        // augment the data here (happy path)
        observer.next(augmented);
        observer.complete();
      });
    } else {
      observer.next("Failure to get good data at all");
      observer.complete();
    }
  });
}

Epilogue


Now, if you know how to do such complex Observable nesting with map(), concatMap(), or forkJoin(), you're welcome to let the world know in the comments below!  And be sure to upvote the Stack Overflow post below if you liked this article!

Sources:



  • https://stackoverflow.com/questions/49630371/return-a-observable-from-a-subscription-with-rxjs/49631711#49631711
  • https://alligator.io/rxjs/simple-error-handling/

Thursday, August 30, 2018

Talking about Digital Fight Club

Earlier this month, I attended the third installment of the Digital Fight Club, put on by Digital Dallas.  Digital Fight Club is now going to be held at various events across the country!  I'd love to make it out to every one of them.  Those of you in the cities where it will be held are in for a treat, and should not miss it.  Read more at http://www.digitalfightclub.co/.


You Talk About Digital Fight Club


The format involves two thought leaders sparring in a short debate on a particular topic centered around emerging technology, design, or organizational behavior.  Each round of debate, consisting of both debaters' opening arguments and rebuttals, lasts under five minutes.  Then, another five minutes or less goes to at least two of the five judges, each asking a question addressed to one of the debaters, with the other debater free to rebut the answer as well.  Finally, the judges and audience decide a winner for each round.

Despite the brevity, it is a power-packed punch of information, opinion, and emotion from the debaters, and some have really hit home on topics where you might not have seen two sides to the issue.  I have finally nursed my sore fingers back to health from live-tweeting the event.


Why Me? Why Today? Why Now?


I've been going to their events since "Digital Dumbo / Digital Dallas" launch party back in 2013, and they always coordinate an engaging time and draw interesting folks from mostly the creative and product side, but also the tech side.  (Although I didn't run into a whole lot of other engineers at Digital Fight Club this time; maybe they are all hanging around Plano and Frisco these days rather than on Lower Greenville.)  In fact, one of their events in 2014 landed me a job.  As my company and Digital Dallas were still partners in 2016, I scored free tickets to Digital Fight Club and attended with some of my favorite innovation-hungry coworkers.  It was a blast, and I definitely couldn't wait until the next one.

By the time the second Digital Fight Club was announced, with none other than Mark Cuban as one of the judges, one of my cousins had booked a family getaway down in Galveston starting the same day and running through the weekend.  And, unfortunately, since my employer had decided to scale back substantially on sponsoring outside events, I could not get tickets through work.  Considering my family trip, I did not want to double-book myself and deprive someone else of a ticket.

Unfortunately, Hurricane Harvey blew into town that weekend.  The resorts apparently started closing and evacuating that Wednesday, so I was now cut off from both things I wanted to do -- no longer going to Galveston, and extremely remiss that I was not going to Digital Fight Club either.  (Note to self: just double-book yourself anyway, in case something falls through like this.)

Given this, I knew that come hell or high water, I was not missing Digital Fight Club 3.


Let the Fights Begin


There were five fights this year, in the realms of:

  • Retail: Physical vs. Digital
  • Voice Marketing and Control
  • Design: Speed, Technology, and Process
  • Blockchain: Security & Trust vs. Promise
  • Smart Cameras / Smart Images

You can read about the participants at the official Digital Fight Club 2018 Dallas page.  What I would like to relate to you are the viewpoints of the debaters, and how I felt about these views.

Retail

The fight in retail seemed to center on the behavior of millennials versus the new generation (which is back on track for having to suffer through an uncreative name such as Generation Z).  It's still unclear to me what a millennial is; some say 1980-2000, but I would probably peg it as more like 1985-1995.  People younger than this have not grown up in the same world millennials did; they don't remember a world pre-9/11 or pre-broadband Internet, and have pretty much had computers and social media around constantly.  And people older than this at least got jobs right out of college, and could generally afford to start life right away like all the previous generations in post-war America.

But to get back to the arguments: Elie presented the viewpoint of people looking for "social, memorable, material transactions" with physical retail -- they remember the ambiance and experience of buying something and getting to interact with it.  He claimed that Gen Z prefers to get physical and interactive with their shopping experiences, noting that it has been hard for e-commerce to pull this off.  On the other hand, Daniel claimed that the United States is over-retailed; over 25 sq. ft. of retail shopping space exists per person, as opposed to just 4 sq. ft. in Great Britain.  (Unless they actually reported their number in meters instead of feet, in which case that would actually be 43 sq. ft. :-P)  Claiming the millennial viewpoint, he believed the existing Fortune 500 brick-and-mortar stores would die off and yield retail space to online retailers looking to establish a brick-and-mortar presence.

Stacy wrote on Twitter that millennials care about experiences.  We don't like products so much, as that's just materialistic crap that fills your house and you can't take it to the grave.  In fact, we would rather go ax-throwing, indoor skydiving, or travel to a foreign continent than fill up our houses with junk that collects dust.  And we can easily share experiences digitally, just as how I live-tweeted Digital Fight Club.  As such, this leaves the vast majority of what we buy to groceries and household items, which are usually sold in sad stores with ugly fluorescent lighting, long lines, difficult parking, and staffed by people just trying to get by or still in school.  That's not the type of "memorable transaction" we care to relive, so why not just order groceries and toilet paper from Amazon?  Who needs to get social about that stuff anyway?  Trash bags are mundane, rarely change, and quite frankly reordering them is why the Amazon Dash button exists.  On the opposite side of mundane, lots of cool stuff can be found on Etsy, giving products a much wider exposure than anything in any boutique could get.

As you can imagine, Stacy sided with Daniel, but I sided with Elie.  I'm not a realistic case when it comes to shopping; since Stacy handles all the mundane stuff (and even things I know I need but that I don't want to put energy into getting for myself), this means I am only ever shopping for things that really strike my fancy, and that will always give me a memorable, emotionally positive experience from buying.  Consider all the suit shopping I've done at department stores, and all the interesting conversations I've had with the owners and other customers at Tanner Electronics.  And that once I got a summer job "working for my dealer" -- a computer store in Colleyville that I frequently bought parts from.  Plus, there's something really gratifying about having it in my hand as quickly as possible, even though many times I don't even use it for a while.  However, even though I like the personal touch, a lot of deals I find originate through private forums or other such electronic means anyway.

As it turns out, Elie came out the winner among the judges and audience voting.

Voice

With more and more buzz about digital assistants, there has yet to be a killer app defined just for voice.  To Michelle's point, it is just another "customer touch point" or method of interaction with a system.  I can almost as easily type something into the Google Assistant chat box, and ultimately the system is converting speech into text in order to comprehend the action anyway.  Michelle went on to make sharper criticisms, such as AIs exhibiting "racist" tendencies -- only doing a good job of understanding native speakers of American or British English, and not understanding those with foreign accents.  And this is just the start; imagine over time, as AIs learn actions from those they understand best, they may become biased to perform those actions, and then do the wrong thing for other cultures once the speech training improves.  Finally, she hit on the privacy aspect of advertisers now wanting to listen in on your voice interactions, or possibly the conversation in the room while the agent is not actively interacting with you.  Being marketed to when you're trying to relax or right before sleep is annoying, not to mention that seeing things the next day in your email or on your screen that you only talked about in passing is downright creepy.

Unfortunately, Chris didn't have many points that stuck with me, but he did offer that voice is innate to humans, all the way back to our days in our mother's womb.  He believed that 30% of web surfing sessions will be screenless in the next 2 years.  I'm maybe an oddball once again in this scenario; I love a keyboard (even a modern MacBook keyboard) rather than tapping out text on my phone, and while I do love speech-to-text on the phone, I don't often use it, just because others might hear my thoughts and take them out of context.  I don't need them knowing my business or what I'm doing with my device.  And honestly, having just gone back and forth over the design of a "concierge" for something at work, where voice is a ridiculous way to sift through dozens of results, I was not bullish on voice that night anyway.

That being said, I dictated a paper for one of my master's classes sometime in 2010-2011 using speech-to-text on my first Android phone.  It worked great, but as Stacy was taking me to Walgreens -- where you can guarantee running into old people, especially in the northern Chicago suburbs -- a lot of them seemed very confused as to what I was doing.  Usually they had their adult children with them, who explained what I was doing, but who were probably still impressed themselves that I was dictating into my phone with such ease.

Nevertheless, I go on the Web to seek information, which is easiest to consume quickly when read.  No one recites GitHub readmes aloud before a group and expects anyone to remember everything.  I cringe at the thought of when I'll be too blind to use a computer monitor, and will have to fumble through some other way of sensing information, likely in a format presented not nearly as densely.  It's something I've discussed with accessibility engineers at Google I/O (and they had impressive answers for my concerns), but nevertheless hope it is still far, far off for me.  I voted for Michelle, and she handily won that round.

Running a Design Organization

Now I will say that one of the debaters in this round was James Helms, who was a judge of my product LEDgoes / BriteBlox back when it was on the Expose UX web series in 2014 (back when I had hair).  During the taping, I didn't think it came across so well, so it was hard to bring myself to watch the episode.  Nevertheless, I went to the Expose UX launch party, and met some absolutely wonderful, inspiring, enthusiastic, and life-changing people there, and we hung out talking in the lobby until security kicked us out.  (Then one of them became my coworker for a whole year!)  Now, I would need corroboration on this recollection, but it's possible I was one of just a couple people with a product on the show who actually bothered to show up to the party.  It was so nerve-racking sitting through the four products at the debut and seeing if I was going to be next!  Fortunately I wasn't, but soon after, I watched the episode for myself.  And I must say, the editors did a fantastic job putting the episode together.  I really liked the way it turned out, even though I really wish I'd have gotten the part about it being 1,000% funded on Kickstarter at least shared with the judges, if not into the final cut.

Ok, enough backstory on that.  Jeriad (a last-minute fill-in for someone) and James took on the fundamental principles of operating a design shop.  As an engineer, not an expert in the daily life of a designer, I didn't have a lot of context for these arguments.  What I can summarize for you is that James made a rather inarticulate point about how designing fast is sexy, but you need to take it slowly.  He picked things up a bit by touching on asking what people actually need rather than what they say they want (the old adage of the car versus the faster horse).  James' opponent, Jeriad, had a much better stage presence that night, with the viewpoint that the C-suite gets excited by things that happen fast.  Designers are at the forefront of disruption, so don't let your workflow stay the same -- particularly, don't let it stop you from seeing something innovative.  And, of course, fail fast -- you know quickly when something sucks, so focus on service design.

Because of the strength of Jeriad's delivery, I voted for him.  Jeriad won, but still lots of people sided with James; the 57%-43% outcome was the closest margin of any of the debates.

Blockchain

Now we go from the closest outcome right into the most lopsided one, and back into familiar territory for me.  Most people can surely form opinions about voice and retail quickly, given their lifelong experiences and feelings on convenience versus creepy factor.  Most of the people in the audience had probably lived through a design sprint and had certain statements from the previous round resonate with them in different ways.  As for me, I've been dabbling in blockchain since 2013; I started developing on Chain Core & Colored Coins in 2015, tried to get into Ethereum in 2016, and lately have been exploring Hyperledger projects.  I've given three distinct presentations on blockchain at company-wide internal events in the past five months, so now I'm definitely back in my comfort zone.

Stacy & I spent a very long time talking to Mark, the first debater in this round, at the after-party, and he shared a lot of interesting stories with me, the most memorable ones relating to Launch DFW's early days.  Mark recalled to me that he had an opening argument prepared but, from what I saw, decided at the last second to try to explain blockchain instead -- and that alone took up the roughly 90 seconds allotted for his opening, allowing him to get about half a sentence out for his main point.  While it was extremely impressive that he could describe all of blockchain in the space of an elevator pitch, I imagine it probably still went over the heads of most people, since he had to speak very quickly.

Nevertheless, Mark eventually made a point that resonated with me: enterprises are successful at making the blockchain more bureaucratic and expensive (just like with most things).  When you look at the architectural complexity of Hyperledger Fabric compared to Bitcoin, which runs the same computations on a bunch of distributed nodes, or consider that Hyperledger Sawtooth requires specific instructions only implemented on the latest Intel chips -- which cloud service providers either don't, or in fact no longer, provide -- you go, "yep, that's an enterprise's work."  However, these systems do have massive advantages, not just for enterprises, which are historically risk-averse, but for cheapskates like me: there is no mining, so you can freely exchange information without worrying about the price of some underlying unit of currency that has to be paid as commission for posting data onto the ledger.  And I would say this is important.  No one wants to be exposed to the circus that has played out in Ethereum-land, such as "The DAO" hack resulting in a hard fork, and millions more dollars lost through other misanthropic deeds or simply brainless bugs.

This was a good point, but Jaime, Mark's opponent, hit on something even more important.  Even with a public blockchain, someone could easily misrepresent something of value encoded on the blockchain that is more than simply a unit of the underlying currency (say, a smart contract representing movie tickets in a theater or units of commodities to be delivered through a commodity exchange), whether through intentional fraud, or just by a bug or fat-fingering something.  At least in a public blockchain, there might be more third parties trying to audit the data.  But when enterprises are playing in these "walled gardens"[1] where they are doing who knows what with your data and your money, and humans (who make mistakes) are in charge of writing the "chaincode", then what's at stake?  Jaime's point was essentially that enterprise blockchains are nothing more than centralized distributed databases that allow enterprises to cheat.  And, to me, it makes more sense to just run a common instance of Cassandra DB or Kafka message queuing service if that's all you're looking for.

Ultimately, Jaime took 90% share of the votes, including mine.

[1] Walled gardens being "private, permissioned" networks.  I think of "private" and "permissioned" as two different things on two different dimensions, not as two related things next to each other on a continuum, as many people often recite "public -> permissionless -> private".

Smart Images

With this one as well, my bias toward one of the debaters at the outset only led me to disappointment.  As Skip runs Spacee and one of the AR/VR meetup groups around town, I'd run into him a few times in the community.  However, he took on a viewpoint that not many folks agreed with.  He may have been trying to be sardonic by flippantly dismissing legitimate privacy concerns, making comments such as "Robots, kill me last."  His points started by encouraging users to give up a little bit of privacy for the sake of convenience -- to, say, order from a place before you even think of it.

Thierry, his opponent, pointed out that KFC in China is already ordering for people based on facial recognition.  (Are we this habitual?  Maybe at KFC, sure, but when given this many choices in life, why be predictable?  How would this help me try new things?)  Thierry had a mic-drop kind of moment with the audience when he proclaimed that every technology has promptly invited some kind of abuse, and asked where your business will be when the rules are broken.  Skip, on the other hand, actually did drop the mic after making the plea to the robots that I mentioned earlier.  He finished by proclaiming the daunting notion that "you think you control things at home but you don't."  Why not let an AI watch over you sleeping while you're sick?  Would this be beneficial?  Helpful to a doctor?  Or would it just go to some ad marketer to get you to buy medicine?

Finally, Thierry encouraged the audience to continue innovating and "pushing tech to the edge," but to do so in a transparent way, particularly by being up-front about data collection practices.  (Well, you kind of have to be these days, in light of the GDPR.)  He went on to claim that people have grown so accustomed to change that we are devolving rather than evolving, just letting whatever happen around us and not taking action.  Well, people do take action, but we have so little bandwidth these days compared to the energy it takes to fight all these battles, and there's not enough time left to acquire enough unbiased insight into what is truly going on.  We are collectively leaving each other in the dust, possibly to cannibalize society, even as we empower each other to learn and do more than ever thought possible.  Thierry's fear of Big Brother tendencies won over me and most of the rest of the audience.

Epilogue

All told, with as much energy and excitement as there was, and with such efficiently-run fights, the meat of the content was over in about 45 minutes.  I think plenty of people would agree that adding a minute to the discussions, or adding another round altogether, would be welcome.  Stacy and I were next to last to leave the afterparty, at about 11:30 PM, well after the outdoor bar had closed.

This is a really fun, innovative way to consume content.  It's not necessarily for learning about something so much as thinking about higher-level concepts, but it definitely busts the boring old "give a talk or presentation about something" paradigm that usually exists when sharing information.