Thursday, April 27, 2017

My Journey With Implementing JPA

There comes a time in pretty much every programming project when you need to communicate with a database of some sort.  When doing this in Java, one choice is to manipulate DriverManagers or DataSources directly.  However, if you want to take advantage of POJOs to represent your data from the database, manipulating data as objects, you might be staring down the task of writing all sorts of ugly reflection code or a really long constructor to make this happen.  There is an alternative: the Java Persistence API.

Old news, buddy!


JPA is old-school, clocking in at over 10 years old!  (Jeez, I might as well bust out my IBM 5150.)  However, I did not feel like writing reflection like what was done on the last project, so I searched for an alternative.  JPA provides a mechanism to annotate your classes with object-relational mapping (ORM) metadata, and then run all sorts of CRUD operations with minimal code through its convenience functions.  The entity objects you create hold the data in memory, mirroring the rows in your database.  The Java Persistence Query Language (JPQL) is offered to help translate operations on your objects into SQL, or you can use regular SQL.

JPA in and of itself is simply a specification, and there are many different providers of the advertised functionality and benefits.  Some of these choices include Hibernate, EclipseLink, and OpenJPA.  There’s also Spring’s JPA support, but since Spring is not part of my current project (much to my chagrin), it might not be so easy to just go and get it.  Also, OpenJPA doesn’t seem to be available in my available Maven repository, and I’m a Maven snob now; if I can’t get a dependency through a dependency manager, then forget it.  This pretty much leaves me the choice between Hibernate and EclipseLink.

Now, some people have given themselves fits with JPQL and how it is implemented by the various persistence providers they chose.  If you don’t want to use JPQL and/or are really good at SQL, then you can always write straight SQL to manipulate your data.  However, you’ll have to weigh regular SQL against the possible drawback of not getting to use your nice POJOs.  Due to that article I just linked to, I ended up choosing EclipseLink and performing all my operations with SQL, simply because I’m only doing SELECT operations.

The Gist Of a JPA Implementation


I use IntelliJ by JetBrains as my Java IDE.  In here, it is really easy to create a Java Maven project.  I am going to eschew detailing the basic steps of setting up a project and get straight into the code.

For starters, you need to get the correct Maven dependencies.  Then, build what is more or less a POJO to describe the contents of the table.  Open up the table description in your favorite SQL editor and copy it in, or start writing it into your new class.  Normally, you should name the class the same as the table you wish to access.  Use the IDE to generate the getters and setters automatically once you have written the instance variables.  Import the persistence library into your POJO class file in order to describe certain properties via annotations, such as the table name, which instance variable is the ID/primary key, and so on.

Build a persistence.xml file describing the means by which you will connect to the database, and which Java class will define which persistence unit.

Finally, make your EntityManagerFactory and EntityManager.  The EntityManager manages the persisted data across all the instances of the POJOs you create, and its functions are what you use to perform CRUD operations on your database.

The Structure Of Everything


Here are the Maven dependencies you will need to import (if you wish to use EclipseLink, like I did):

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>6.0.6</version>
</dependency>

<dependency>
    <groupId>javax.persistence</groupId>
    <artifactId>persistence-api</artifactId>
    <version>1.0.2</version>
</dependency>

<dependency>
    <groupId>org.eclipse.persistence</groupId>
    <artifactId>eclipselink</artifactId>
    <version>2.5.2</version>
</dependency>

You will also want to add the maven-compiler-plugin as a <plugin> in your POM file, with <source> and <target> configured to your JDK version, if you’re having trouble using newer language syntax.
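For reference, a minimal sketch of that plugin block (the plugin version and JDK level here are assumptions; match them to your own toolchain):

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.6.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
    </plugins>
</build>
```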

Here is a summary of my class:

package com.myproject.tests;

import javax.persistence.*;
import java.util.Date;

@Entity(name = "the_table_name_in_the_DB")
public class MyPojo {
    @Id
    private Long id;
    private String someOtherString;
    @Temporal(TemporalType.TIMESTAMP)
    private Date lastUpdated;

    // getters and setters omitted for brevity
}

The @Entity annotation's name attribute is used when your table name is different than your class name.  (Strictly speaking, it sets the entity name, which also serves as the default table name when no @Table(name = "...") annotation is present; @Table is the more explicit way to map to a differently named table.)  It’s usually better to name them the same unless your table name is awful.  It happens…

Notice the @Id and @Temporal annotations too.  These are important; you always need a primary key (even if the underlying table doesn’t have one), and @Temporal is required for any Date or Calendar objects.

The next thing is the persistence.xml file, belonging at resources/META-INF/persistence.xml (either src/ or test/ is OK, depending on your context):

<?xml version="1.0"?>
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
    <persistence-unit name="testJPA" transaction-type="RESOURCE_LOCAL">
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <class>com.myproject.tests.MyPojo</class>
        <properties>
            <property name="javax.persistence.jdbc.url"
                      value="jdbc:mysql://mysqlOnRDS.rds.amazonaws.com/my_db?zeroDateTimeBehavior=convertToNull"/>
            <property name="javax.persistence.jdbc.driver"
                      value="com.mysql.cj.jdbc.Driver"/>
        </properties>
    </persistence-unit>
</persistence>

In here, you specify the persistence-unit name (the one passed to Persistence.createEntityManagerFactory()), the provider class you are using (every provider ships a class that implements the javax.persistence.spi.PersistenceProvider interface), and your POJO class.  Then, specify other properties such as your JDBC URL, the driver class for the database of your choice (here, the MySQL driver), and possibly a username and password if you don’t care about keeping those in plaintext.

Note that the <property> names may change based on the particular persistence provider you choose.  In this case, the name javax.persistence.jdbc.url works for EclipseLink, but in OpenJPA, it would be openjpa.ConnectionURL .

Finally, I set the remaining important parameters and instantiate my entityManager with the following snippet.  The entityManager manages all the instances of my POJOs (objects) representing the (relational) data in my database table, and gives us the ability to run queries.  Note that below, I'm running this as a JUnit test, hence the use of before() rather than main().

import org.junit.Before;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.persistence.Query;
import java.util.HashMap;
import java.util.Map;

public class StepDefinitions {
    public EntityManager entityManager;

    @Before
    public void before() {
        Map<String, Object> configOverrides = new HashMap<>();
        configOverrides.put("javax.persistence.jdbc.user",
              System.getProperty("db.user"));
        configOverrides.put("javax.persistence.jdbc.password",
              System.getProperty("db.password"));
        EntityManagerFactory emfactory = Persistence.createEntityManagerFactory(
              "testJPA",
              configOverrides);
        entityManager = emfactory.createEntityManager();
        Query query = entityManager.createNativeQuery(
              "SELECT * FROM the_table_name_in_the_DB;",
              MyPojo.class);
        MyPojo firstRow = (MyPojo) query.getSingleResult();
    }
}

With System.getProperty(string), I can add the -D arguments (such as -Ddb.user=user) to build the application with the desired parameters on the command line, saving me from saving them in plaintext and/or checking them in anywhere.

You should also look at the documentation for EntityManager to find out all the ways you can construct queries.  The one above takes a standard SQL string as an argument, but you will find others that work with JPQL.  The find(Class<T> entityClass, Object primaryKey) call is the simplest of them all, allowing you to look up an object just by providing your POJO class and its primary key.

Battles I Fought


The persistence.xml file belongs in a specific place.  Since I am not running this application as a WAR (Web application) at all, and in fact this is simply a project that runs JUnit tests on existing servers, I learned it needs to go in src/test/resources/META-INF/persistence.xml .

The first time I tried to run my JPA code, it complained about

PersistenceException: No resource files named META-INF/services/javax.persistence.spi.PersistenceProvider were found. Please make sure that the persistence provider jar file is in your classpath.

Ultimately, I realized that while I had included the Persistence APIs themselves (javax.persistence.*), I had omitted an actual persistence provider.  I resolved this by doing some quick research to pick between Hibernate, EclipseLink, and OpenJPA (though I would have used Spring’s offering if I were making a Spring project).  Ultimately I added EclipseLink to my POM file.  It turns out I also needed to add the <provider> tag to my persistence.xml file with the name of the provider class: org.eclipse.persistence.jpa.PersistenceProvider.  This one pertains to EclipseLink in particular; whichever JPA implementation you use, its provider class implements the javax.persistence.spi.PersistenceProvider interface (the concrete class name varies: Hibernate's, for instance, is org.hibernate.jpa.HibernatePersistenceProvider).

In IntelliJ, I also had to make liberal use of the “Reimport All Maven Projects” button.  It lives in the “Maven Projects” panel, folded up on the far right.  Unfold this panel, then look for the button that resembles a Refresh icon.

Once I had this in place, Java gave me the following error:

javax.persistence.PersistenceException: No Persistence provider for EntityManager named testJPA:  The following providers:
org.eclipse.persistence.jpa.PersistenceProvider
Returned null to createEntityManagerFactory.

Since I was developing in IntelliJ with Maven, it would have surprised me if the usual suggestion of fixing your CLASSPATH actually helped.  In fact, digging further, I found a suggestion that the root tag in persistence.xml (<persistence> itself) needed some attributes set.  To fix the error, I used this (as noted above):

<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">

Now, I received a similar error upon adding these attributes in persistence.xml, but noticed I was making progress when closely scrutinizing the error message.  I saw that my class “does not specify a temporal type”, which “must be specified for persistent fields or properties of type java.util.Date and java.util.Calendar”.  Most tables will have timestamps of some sort in them as one or more fields, so for this, you will want to use the @Temporal annotation on any of your Date or Calendar objects in your POJO.  (Just be mindful to consider such things as daylight saving time when utilizing objects that do not specify a time zone.)

My next problem:

java.sql.SQLException: Zero date value prohibited

There are a couple ways to fix the problem where you have a timestamp consisting of all zeros.  Assuming you don’t have the ability (or care) to modify the data to NULL the all-zero timestamps inside the table itself, just add this to the end of your JDBC string:

jdbc:service://host:port/dbName?zeroDateTimeBehavior=convertToNull

After solving this problem, I ran across one more issue:

Exception Description: Entity class has no primary key specified. It should define either an @Id, @EmbeddedId or an @IdClass. If you have defined PK using any of these annotations then make sure that you do not have mixed access-type (both fields and properties annotated) in your entity class hierarchy.

In my case, the database table I inherited had no primary key.  It is not my concern to go in there and define a primary key at this time, so I was hoping to get away without defining it in the POJO.  If your database table does not have a primary key defined, it’s OK.  JPA does not care if you specify a primary key in your POJO that isn’t truly a primary key in your table, so just pick an instance variable representing a column that always has a unique value in it and add the @Id annotation to it.

Epilogue


After this was all said and done, I realized I would need more than just what one single table would give me in order to run my tests.  Data will have to come from joined tables, and that will be explored another day…

Sources


Override Configurations Loaded from XML (i.e. DON’T STORE PASSWORDS IN PLAINTEXT) - http://stackoverflow.com/questions/8836834/read-environment-variables-in-persistence-xml-file

Thursday, March 9, 2017

Appreciating Huge Light Marquees On Game Shows Of the Past

Recently, I dug up an old email where I was musing on the big marquee on an old television game show called The Magnificent Marble Machine.  It was a very fancy prop at the time, as most television shows of the era relied on art cards with big, bold letters that would be flipped, slid, or otherwise revealed by stagehands upon the emcee's verbal cue.

Slide it, Earl! (Match Game) - Survey Says?! (Family Feud) -
Sources: YouTube, Game Show Utopia


This is about as fancy as it got for electronic graphics on game show props in the 1970s (Tic Tac Dough).  However, they never upgraded this board during the 1980s...
Source: Dailymotion

And a couple shots of the Magnificent Marble Machine game board in action.  Looks like a plain ol' marquee, but really an astonishing feat of engineering given normally chintzy television props, especially for its era...
Source: Gameshows Wikia

Cashing In On a Fad


Now, the Magnificent Marble Machine was a show produced in 1975 and 1976 by Heatter-Quigley Productions.  This "gem" only lasted about nine months, but you may know Heatter-Quigley for bringing us much more durable and enjoyable shows such as The Hollywood Squares.  The idea of the show was to capture and adapt for television the essence of a phenomenon that had been becoming very popular at the time: pinball.  You had The Who and Elton John singing Pinball Wizard before millions of fans and avid players starting in 1969, and in 1975, unless you ran across a Pong console, generally your only other option for cheap entertainment out on location was pinball.  And games from 1974 and 1975 such as Sky Jump, Top Score, Wizard!, and Captain Fantastic were stealing quarters from players of all ages, and they are still fondly remembered to this day, judging by their ratings at the Internet Pinball Database (I can vouch for Top Score personally as well).  However, the pinball machine on MMM (brought out for the bonus round) featured a fun factor more in line with watching someone play Atari's Hercules game, only even bigger and more sluggish.

Generally, pinball machines of this era were all electromechanical in operation, involving liberal use of solenoid coils not just to actuate active elements on the playfield but also to operate physical latches for memory and game status.  Lots of springs and mechanical linkages were involved, and complicated beasts called "score motors" would advance the score as required, pulsing the score reels to rotate in 36-degree steps through each digit, 0 through 9.  You would not find so much as a resistor in these games in the way of electronic components; typically the only one was a transformer to step the 120V from the wall down to 24V for the game's components.

However, 1975 was a big year for microprocessors, as the MOS 6502 was first released to the public (this chip was used in all sorts of devices, including pinball machines, for decades to come).  Pinball was soon to come of age in the solid-state realm as well, with both Bally (using Intel's 4004 chip) and Allied Leisure producing pinball machines in 1975 with such new and improved electronics.  However, Atari also began work in 1975 on its VCS home console, and we all know what home consoles did to pinball and arcades in general...

Making a Light Marquee The Old Way


I hope you'll take a moment with me to appreciate the level of engineering and expense that went into creating the scrolling clue board seen during the game on MMM.  Nowadays, this could be easily accomplished with a matrix of high-intensity LEDs and very simple microcontrollers to display the letters; you should know I successfully Kickstarted such a thing back in 2013.  But in 1975, LEDs were so small and low in intensity that they were used on little more than expensive electronic test equipment as status indicators, and on cutting-edge calculators, whose digits were so small that lenses were placed above each one to enhance readability.  Clearly these early LEDs were of no use to far-away viewers like the TV cameras filming the show (which still used camera tubes instead of modern CCD chips) or people viewing a marquee outside a theater, and so they were not an option for people building scrolling marquees.  With LEDs out of the question, people still used regular incandescent light bulbs.

For the digital logic that controls the lights, there were very few programmable microprocessors at the time.  Intel released the first ever CPU for consumer purchase, the 4004, in 1971.  The MOS 6502 was not on sale by the time MMM began to air, and Motorola's 6800 chip, which came out the year prior, was still selling for $175 in 1975 dollars.  Of course, the now-standard x86 line of chips found in PCs did not exist, nor did PCs themselves (unless you want to call the Altair 8800 hobbyist kit a "PC"), and none of these primitive processors were designed to run at speeds above 2 MHz.  Chances are that only electrical engineers working for large computing firms at the time (like HP, Digital, and Texas Instruments) were using these brand-new CPUs in any applications; they were vastly more complex than anything people dealt with in the past, and so most electrical engineers throughout the 1970s still built digital logic out of discrete components such as the 7400-series integrated circuits, or by actually using transistors themselves to build the logic from scratch.  (Heck, pretty much all the major pinball manufacturers were still using these original microprocessors along with 74xx logic for extra features 20 years later!)  To make a scrolling light board out of 7400-series ICs would require lots of different chips (such as registers, multiplexers, and simple Boolean operators) and lots of wires.  But above all, each incandescent light would require its own special relay so it could be driven by the digital logic.

While most basic logic circuits such as the 7400-series ICs operate at 5 volts of direct current, light bulbs typically operate at a much higher voltage -- up to 120 volts, and then it's alternating current, not direct.  Unlike nowadays (since we have good LEDs that run at 5 volts DC), the voltage generated by the digital logic circuit when a light should be on could not be used to directly power that light.  So, the output from the digital logic circuit would be fed into relays that would provide the correct power for each light at the appropriate time.   There must be one relay for each light.  Once again, I re-emphasize the beastly amount of wiring that must have gone into that sucker.

And what might all this have cost?  Well besides the engineer's time, the parts would have been quite pricey, and certainly far more expensive than they are now, even after inflation.  Based on my analysis, you'd need at least one 74LS373 chip per column of lights on the board just to keep the lights on if you didn't buy latching relays instead (after you've already used a bunch more chips just to discover who's supposed to be turned on), and did you see how many columns there are?  While I don't have any data for 1975 prices, prices for the mil-spec version of various 7400-series ICs were $18 to $29 per chip in 1965.  In that year, the average family income was only $6900; prices like that would keep these chips out of most hobbyists' hands, but by the 1970s, they were appearing in many microcomputing kits for hobbyists--these kits were still as expensive as, say, a nice stereo at the time.  (Nowadays, you're stupid if you're an electrical engineer and can't figure out where to get some of these for free.)  And while I also don't have any historical price data on relays, they're still priced at between $1 and $100 apiece nowadays, depending on the complexity, the materials, and how much voltage and current they can switch.  Probably the least expensive components would have been the lights themselves. :-P  So from an engineering standpoint, it's sad that board didn't get used on a show that ran for a very long time, but hopefully it found a good home somewhere else...

...Maybe on Family Feud?  (Feud premiered about 3 months after the last new MMM episode aired...)
Source: Game Show Garbage

Epilogue


Apologies for all the "original research" in here... If I could dig up my Internet history from 10 years ago or so, I would cite the sources for the figures I quoted, such as the prices of the electronic components.  However, you have access to Google if you are reading this, and some of my original sources have gone offline and have been replaced with other sources in the meantime I'm sure, so you might learn something about a particular niche you're interested in if you try to corroborate my figures.

Thursday, February 23, 2017

Inverting And Combining An Open-Drain Signal

Lately, I have been working with the LTC4151-1 chip by Linear Technology.  It is used to measure voltage and current with high resolution typically in telecommunications equipment.  In the process of validating my design, I need to test it with the simplest circuit possible in order to simplify fabrication and eliminate variables introduced by other intermediate devices (namely, the required optoisolators).

The LTC4151-1 chip communicates over the I2C bus.  Only problem: its setup breaks out the SDA (serial data) signal into SDAI (data in from the microcontroller) and /SDAO (data out to the microcontroller, inverted).  It is broken out like this (with SDAO inverted) so that people can conveniently wire up optoisolators to reconcile different ground potentials that exist between the MCU and the unit whose voltage is being measured.  (Many times, the V- out of the battery will not be the same as the GND used by your microcontroller logic, computer, and so on, especially if you are measuring individual batteries within strings of batteries.)  In order to combine the lines, I could have tied the /SDAO line to a 7404 inverter and added a pull-up resistor so the line is pulled high when the /SDAO line goes high-impedance before it reaches the inverter.  Normally this would be OK because I2C signals are never driven high; they are only driven low and to high-impedance.  However, in the event that SDAI was being pulled low at the same time /SDAO was also being pulled low, this would cause the output of the inverter to go high, likely resulting in a nasty short circuit and/or bus contention.  Let's try to avoid this situation, even if it is unlikely to happen unless there is an error.

Basically, what I need to build is a NOR gate with one inverted input.  As SDAI goes low or /SDAO goes high, the output signal is supposed to go low as well.

In this case, I will define the master device as my microcontroller and the slave device as my LTC4151-1 chip.

Here's how you do it.  You will need one NPN transistor and jumper wires.  I used a common 2N3904 which you should have if you collect parts.

  • Connect SCL from the slave device to the master device, then to a pull-up resistor.  (Since this is the standard arrangement, it is not shown in the schematic below.)
  • Connect SDAI from the slave device to the collector of an NPN transistor, then to a pullup resistor.
  • Connect /SDAO from the slave device to the base of an NPN transistor, then to a pullup resistor.
  • Connect the emitter of the NPN transistor to ground.
  • Connect SDA from the master device to the collector of the transistor.
Here's what I just described with the bullets, in picture form.  Note that GND in the schematic must be shared across all your devices, as described in the next paragraph.  Also, the resistor values are not guaranteed to work in every situation, but worked for me with my LTC chip.  Ohm's Law is your friend, especially if your devices need a particular amount of current for them to be responsive.


Most importantly, since the LTC4151-1 can measure voltage across its ground pin and a pin called VSENSE+, its ground pin must be tied to the same ground as the microcontroller and the transistor or else the circuit will not work.  When I have this configuration hooked up to my laptop for serial verification of data, I use the USB ground pin as the ground to my microcontroller, the transistor, and the USB chip, and I hook up this ground to the negative terminal of the cheap drugstore 9V battery I use for testing the LTC chip.

This works because when the base is driven (i.e. /SDAO has gone high-impedance and is being pulled high by the pull-up resistor while data is being transmitted), the transistor saturates and conducts, and since the emitter is grounded, this pulls the whole SDA signal low.  When the base is turned off, the transistor becomes an open switch, so the SDA signal is pulled up by the pull-up resistor.  And, of course, when the SDA pin on the master device is being pulled low, this still grounds out the SDAI signal on the slave device, because the collector is already low and has essentially no voltage difference from ground even if the transistor's base gets turned on.


Having trouble finding the address of your I2C device?


Have a look at this brilliant blog post "I2C Scanner" by Nick Gammon on his forum.  It is Arduino code you can use to iterate through all possible I2C devices and see which one(s) return a response.  www.gammon.com.au/forum/?id=10896&reply=6#reply6  This code proved very helpful to me as I tried to find out where and how to reach my LTC chip.

Thursday, December 22, 2016

Passing "this" Around In Polymer

Lately, I have decided to write the front ends for my personal Web projects in Polymer.  It gives me the ability to construct the UI, and even tie in AJAX actions and database calls, by simply including Web Components as elements in the DOM, just like plain HTML.  To me, it seems less bloated and denser than even Angular 1 (sorry, haven't played with Angular 2 yet), not to mention plain JavaScript or JQuery where you still need to write out most of the interactions between the model, view, controller, and external APIs yourself.  The Web Components aspect was most appealing to me because now I could leverage previous work, standing on the shoulders of giants, rather than reinventing the wheel for the needed interactions.  Better yet, if I use Google's implementations of Web components, I can even get Material styling on my DOM elements with no extra work.

However, Polymer suffers from some of the problems I've had with Perl, too.  You really need to be in the right frame of mind when writing in Polymer, because the syntax is deceptively short and you need to be intimately aware of what the framework is handling for you.  This can make it tougher to explain what's going on in the code if you haven't looked at it in a while.  "Oh yes, you can build this in Polymer, and it looks just like HTML!" I once touted to a crowd, but then upon closer inspection -- no -- there were a lot of other idiosyncrasies specific to Polymer they would need to know before they could go off on their own and replicate my work!  Finally, while Perl is generally easy to find help for, it can be frustrating in the world of Polymer because many answers pertain to older versions of the framework.

Despite these pitfalls, many people who love front-end website design and are adept with XML-based layouts have been flocking to use Polymer for building fully-functional Web apps.  In fact, now you don't even need to write JavaScript to do 3D scenery in a browser, so if you enjoy using A-Frame for building 3D scenery and you need to build an entire Web app, then you would probably enjoy Polymer as well.  For me, it's about doing more with less, and Polymer seems to minimize the amount of code I need to write, not to mention the amount of technology I need to get involved -- the days of having to build an API on your server to talk to the remote API might be coming to an end!

One of the biggest things I've busted my head on in Polymer thus far was working with the this variable.  You use this to access the functions and properties defined within the scope of Polymer.  This is OK when you have a diverse set of interactions taking place in your application, but what about those times when you have something fairly repetitive?  The principles of DRY dictate you should move such repetitive code into a function.  That's easy to do if you want to keep everything within the scope of Polymer:

Polymer({
  is: 'my-element',
  properties: {},

  scopeFxn1: function() {
    console.log(this);  // just as you would expect in the Polymer scope
  },

  ready: function() {
    this.scopeFxn1();
  }
});

However, imagine if you have a bunch of different interactions you need to define as functions.  This could start to make your Polymer element and its JavaScript properties a bit long and messy-looking.  The first thought is to move these functions outside the scope of Polymer and possibly into a separate file containing your JavaScript logic.  However, note the caveat to that:


this doesn't propagate into your function.  Watch out!


If you define a function outside the scope of Polymer, and try to call it from within a function that is in the scope of Polymer, the browser will not know what this is anymore and that could prove very frustrating for you if you want to take advantage of any of the benefits this brings you, such as manipulating variables bound to DOM elements or updating properties of your data objects.

function outsideFxn1() {
  console.log(this);  // nope, does not compute
}

Polymer({
  is: 'my-element',
  properties: {},

  ready: function() {
    outsideFxn1();
  }
});

One potential workaround: simply pass this as an argument from within the scope of Polymer to your external functions.

function outsideFxn1(element) {
  console.log(element);  // same as this, but outside Polymer scope now!
}

Polymer({
  is: 'my-element',
  properties: {},

  ready: function() {
    outsideFxn1(this);
  }
});

This way, you can use element (or whatever you end up calling it) to update the data bindings on your UI and run the set() function on your objects in order to properly broadcast updates to their properties.
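Since the Polymer({...}) snippets above only run in a browser, here is a self-contained plain-JavaScript sketch of the same pitfall and workaround; fakeElement and brokenOutsideFxn1 are hypothetical stand-ins invented just for this illustration:

```javascript
// A helper defined outside the element's scope.  Receiving the element as an
// argument works; relying on `this` in a bare call does not.
function outsideFxn1(element) {
  return element && element.is;  // `element` plays the role of `this`
}

function brokenOutsideFxn1() {
  // In a bare function call, `this` is the global object (or undefined in
  // strict mode), not the element -- so there is no `.is` property to read.
  return this && this.is ? this.is : undefined;
}

const fakeElement = {
  is: 'my-element',
  ready() {
    return outsideFxn1(this);   // pass `this` explicitly: works
  },
  brokenReady() {
    return brokenOutsideFxn1(); // `this` is lost
  }
};

console.log(fakeElement.ready());       // 'my-element'
console.log(fakeElement.brokenReady()); // undefined
```

The same forwarding can also be done without changing the call sites, via outsideFxn1.call(this) or a one-time Function.prototype.bind.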

To be honest, I had this article waiting in the wings for a long time but posted it now simply because my upcoming features on ATmega fuses and retrocomputing reminiscence aren't quite ready yet, and I wanted to get something out for this week.  But since originally getting the idea for this article, I learned about Polymer behaviors.


Behave Yourself, You Big Group Of Functions!


Polymer behaviors are like the mix-ins found in other languages; they fill the need that arises in a language like Java, where there is no multiple inheritance but people still need to incorporate functions belonging to different classes of objects.  It's the best of both worlds: you can have all the functions you need in order to do all the operations you want on your DOM and your data, but without the weird side effects and useless baggage that multiple inheritance in C++ can bring.  And because Polymer behaviors are defined in the scope of Polymer in your application, the functions in these behaviors (whether you wrote them or not) understand the true meaning of this once they are initialized.  I'm not going to provide an example here because there are tonnes of other places where you can find them, including the link to the official Google doc above (as long as 1.0 is still relevant).

As such, you could probably pass this around between functions inside and outside the scope of Polymer, as described initially, but you know what they say about "when in Rome..."  I would argue it's wiser and more sustainable to stick to the conventions given to you by the framework.  If you're looking to attain DRY nirvana in your code and want to push functions out of the scope of any individual Polymer element you write, just implement the code in a Polymer behavior, save your this, and call it a day.
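To make the recommendation above concrete, here's a minimal sketch of the behavior approach; the behavior and function names are hypothetical, and the wiring into an element is shown as a comment so the sketch stands on its own:

```javascript
// A behavior is just a plain object of functions (and optionally properties,
// observers, lifecycle callbacks, etc.) that Polymer mixes into any element
// listing it in its `behaviors` array.
var GreetingBehavior = {
  greet: function (name) {
    // Once mixed into an element, `this` inside these functions is the
    // element itself, so this.set(), this.$, etc. all behave as expected.
    return 'Hello, ' + name + '!';
  }
};

// An element opts in via the behaviors array, e.g.:
//
// Polymer({
//   is: 'my-element',
//   behaviors: [GreetingBehavior],
//   ready: function() {
//     this.greet('world');  // no need to pass `this` around manually
//   }
// });
```

Any number of elements can list the same behavior, which is what makes this the DRY-friendly alternative to passing this between external functions.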

Thursday, December 15, 2016

Last-Minute Stocking Contest Entry Takes the Cake!

The place I work at during the day tries to foster a whole lot of creativity as we tackle tough problems from many different angles, starting with carefully designed user experiences and including the best performance and functionality in order to delight our customers.  We have incredibly talented people working as leaders and contributors on our tech and design teams, and it takes lots of incentives in order to keep a team such as this together without suffering from attrition.  Many of my coworkers came from the startup world and, for all we know, are only trying to pay off some bills while chasing their great big hit on the side.

One particular perk I've had a big hand in lately hasn't really been mentioned on the Internet yet, as far as I can tell, other than being hinted at within various event invitations you could probably dig up if you tried hard, or maybe within social media interactions from the attendees, so I'll refrain from talking about that particular perk in great detail just yet.  However, I'll say that out of this particular resource, we held a Christmas stocking decorating contest.

This contest kicked off on Nov. 30 during one of the monthly parties in our workspace, and I saw many people from the design team crank out very creative stocking ideas within just a short period of time.  While I was standing there awed by this raw talent being flaunted, two directors came up to me and said "I expect that yours will have LEDs on it."  The challenge was on!


Early entries in the contest.

I still didn't know what exactly the theme should be, and as someone who has spent more than the last three years writing incredibly geeky posts on this blog regarding various aspects of game shows, options, software, and hardware, I know that my right brain has a very hard time actually executing on its vision, if it can even come up with something that looks good in the first place.  I think I used up all my artistic energy being the Vice President of Public Relations for the Residential College Board at Northwestern University from 2006-08, so it would require a lot of meditation, hopes, and prayers to see it through.

Enter Stacy, who can certainly execute well on the good ideas that come out of her brain.  She definitely has her own style, and while it might not be quite as polished as that of people who do this for a living, it's amazing that someone can have such a strong left brain and right brain.  She supplies that artistic side I once thought I had for myself, so now that I have her, I guess I won out in the end anyway.  (We keep trying to figure out who got the better end of the deal in our relationship. :-P)

Anyway, my initial thoughts on the matter revolved around using my secret "charlieplex110" project (well, mostly kept secret from the Internet; folks at work have seen it, and maybe you've seen it on Google+ if you were watching closely) on the stocking in order to display messages, but I figured that wouldn't be very Christmas-y and wouldn't integrate easily into a Christmas theme.  Using a BriteBlox display would be silly because the dot pitch on those things would be too large to display a coherent Christmas-related message without spanning several stockings.

The weekend before the contest, I intended to bring a blank stocking home and meditate on it in order to come up with a vision for what I wanted -- except I forgot it somewhere and ended up getting distracted with other things over the weekend.  Then I had a holiday party to attend on Monday night, which left me with strictly Tuesday in order to finish the stocking for the Wednesday afternoon contest.  I was hoping to bounce ideas off Stacy that night, but she spent most of the night fixing her mother's computer and didn't get home until very late.  As such, in my soporific and possibly somewhat depressed state of mind, I had a flash of inspiration to make Santa's sleigh being pulled by reindeer.  Also, the reindeer needed to have moving legs.  But doing all that would be really tedious, so I settled on just one reindeer with legs affixed to the servo motor.  I finally managed to churn out some code that would light a string of WS2812 LEDs (which I'd just had a lot of first-hand experience with) in alternating red & green colors, and also spin a servo motor 90 degrees in order to move the legs, all using the Sparkfun Pro Micro development board.  Most of this code was easily leveraged from other projects, but it still took me more time to finish than I'm willing to admit.

Some of the hurdles I faced included:

  • Not being able to find felt anywhere -- I wanted to use felt in the construction of Rudolph on the stocking
  • One of my servos didn't want to work with the Pro Micro, and the other servo had a smaller Grove-style interface that couldn't hook up directly with the headers on the Pro Micro
  • I wanted to include a PIR sensor to make these effects motion-activated, but didn't know where it was off-hand and wasted too much time locating other things
Once I got the LED strip and the servo working in harmony, it was time to draw the artwork.  We have a large collection of colored Sharpie markers at home, and despite the markers being normal-sized, the lines actually came out quite thin on the stocking due to the type of material it was made of.  I'll just say again that people who have the ability to draw fascinate me.  I used to think I had a knack for drawing too, but now I've pretty much been reduced to tracing patterns onto the paper and then coloring on top.  That's fine, though my stocking came out looking way more hand-drawn than many of the other offerings that took advantage of the physical accent pieces available to add.

My goal was to keep it hidden until adding the final feature to it right before the contest; that would be a blinking nose for Rudolph.  It took me way longer than it should have just to blink a stupid LED, thanks in part to having to improvise on some of the cabling, said cabling being difficult to put together (and having to cope with long leads liable to touch each other), and then a goofy firmware glitch I caused when I tried to write the servo data and toggle the LED from the same output pin.  Unfortunately this problem was compounded when Windows stopped acknowledging my Pro Micro on the USB port.  Of course, when Windows screws up, you have to reboot, and it's not quite as smart as Mac OS is about re-spawning all your applications.  However, even a reboot didn't fix it right away; it took somehow even more monkeying with the Arduino IDE (sigh) before I could re-upload the firmware with the bug fix.  By this time, the party had already started, but not a lot of folks had voted yet.

Electioneering Efforts Near the Polling Place


The vice-president over me, who is mostly in charge of the Tech side of things (i.e. not Design), quizzed my teammates on who made a stocking.  It became apparent that I was probably the only person in Tech who attempted it, but was out of pocket trying to get that blasted LED for Rudolph's nose working.  When I went back to my desk, I affirmed that I did in fact have an entry, and I was told to show it to the others in Tech and have them vote for it just to support their own kind, since basically the designers came up with every last other stocking.

Getting it hung up on the judging board was met with a bit of resistance, as the admins were jokingly jealous of the level of awesomeness of the final product.  "No, I'm not putting it up..." "Ok, I'll put it up, but I'm putting my name by it!"  Then, as one of our executives began explaining the rules and procedure of voting, I realized there'd be a problem.  Votes were to be cast by stuffing the stockings with jingle bells that would be counted later, but as mine was stuffed with "electronic crap," there was very little room to put jingle bells inside it, and it could even pose a safety hazard (shorting a lithium ion battery) if anyone attempted it.  The crowd gasped and laughed once I basically revealed which one my stocking was.  Some jokingly said "Too bad!" since it was pretty clear (or pretty well-illuminated, perhaps) who would win, and others feared it would turn into a popularity contest for me versus the others.  Luckily, someone offered a good suggestion to hang a blank stocking next to mine, so people could put their jingle bells inside that one and they would all fit without messing up my electronics.  Nevertheless, while I was awkwardly standing near someone we are all extremely fortunate to have with us in our group and in life in general, I happened to see the counting committee dump out approximately four jingle bells from inside my actual decorated stocking.  Sigh... At least no one broke anything during either the jingle bell insertion or removal phase!

As the time came, it was, for some reason, about as nerve-racking as awaiting the results of the Super Match on Match Game.  They started by revealing third place, then moved on to second place, and finally showed the first-place winner.  The stakes were high, since the winners would take home Amazon gift cards of up to $100 in value.  Pretty good just for some stocking-making!  Anyway, I can't even remember the other two winners due to the sheer excitement and anticipation (I've had that feeling before), but I probably wouldn't have written up a nearly 2,000-word article if I hadn't personally taken that $100 prize home.


Afterward, one of the product managers told me to watch my back, as next year, he would be bringing some serious hardware & IoT know-how to the table to make yet another technologically awesome stocking.  Fine with me; after my director told me that mine really exuded the values of our particular workspace and our maker culture, let's hope even the designers take some time to put in hardware features next year!

My lil' ol' contest entry.  Unfortunately, by this time, Rudolph's nose LED wiring had fallen apart somehow (surprise, surprise), and the chenille pipe cleaners being rotated by the servo motor (for "Spider Legs Rudolph") had also fallen off.  The good news is that this was already two nights after the contest, so none of my coworkers saw it in such disarray.