Thursday, July 7, 2016

It's a Huge 100x Faithful & Functional NES Controller Coffee Table!

A great deal of my time from Memorial Day weekend up through now was spent producing this gigantic piece of functional furniture, and I am pleased to present an Instructable on how to make one for yourself!  Here is a link to the Instructable.

Hint: It is helpful to have a Makerspace handy.

I actually started this project back in January in an effort to get certified on the CNC table router at Dallas Makerspace and start cutting pinball playfields.  I made a post previously about my experiences and learnings from that particular endeavor, but as you can tell by the edits, I didn't quite get it right the first time.  It took a bit of time before my schedule could clear up in order to try 3D parts again, and lest I forget the procedures involved with using the table router, I thought it'd be prudent to try building an even bigger NES controller than what I did to get certified.  My proficiency test involved making a 16x-sized controller, but here I would be attempting a 100x-sized controller that would take up an entire 2'x4' sheet of MDF (medium-density fiberboard).

I measured everything incredibly accurately and made 3D models of various parts in Blender prior to even using the CNC mill, and this took me maybe 20 hours' worth of work for both the 16x and the 100x version.  I even brought my laptop into the woodshop and spent about 5 hours cutting and tweaking G-code to make the perfect controller face and route out channels for the wiring and a pocket for the controller PCB to live so it would actually work.  Over Memorial Day Weekend, I was out at the Makerspace cutting the controller face from about 9:30 PM until 3:00 AM one night.  Fortunately, I came home with a wonderful-looking piece of MDF.

However, the fun had only just begun...

Milling is easy... Sanding and painting take forever!!!


Stacy & I worked feverishly to sand down the controller face and buttons for their initial coat of primer, then sanded everything down again ahead of putting on paint, and then sanded everything yet again between each coat of paint.  The buttons were finished with copious amounts of lacquer so their finish would hold up.  The controller face itself, however, was another story.  With the multiple colors of paint and lettering required, it took quite a bit of waiting around, placing tape-n'-drape, rearranging the tape-n'-drape, and using a stenc...oh wait... I need to make a stencil for the letters!

This led to another several hours in the CAD tool and with image processing tools.  I needed high-quality vectors of the letters, and couldn't trust automatic vector tracers to work from muddled pictures of a real controller.  Luckily, I found artwork representing the shapes of the various letters on the controller face (namely A, B, SELECT, START, and Nintendo).  I spent a while in Photoshop massaging these to lay on top of a blown-up image of the controller, then converted the edges to vectors before re-importing them into VCarve so I could render the stencil design.

Then there was the dilemma of what material to use for the stencil.  I planned to use hardboard, but was concerned that the way it cuts leaves tons of little fibers that could affect the quality of the stencil's edges.  My other choices were laser cutting acrylic or just using a vinyl cutter and making these things all decals.  Well, those two wouldn't work because:
  • I'm not trained on the laser cutter.
  • I didn't have a piece of acrylic handy that would be big enough for the stencil.
  • Vinyl decals don't give the same texture as the paint, can't be reused, and would look like garbage if you apply them wrong or get so much as one crinkle or air bubble in them.
I wish I could have gotten a lasered stencil, since the edges wouldn't have needed any finishing, but I went with what I knew anyway.  This led to many, many more hours sanding the insides of the letters, then painting the letters, then sanding again with finer sandpaper in order to get rid of all the leftover "fuzz" from the hardboard.

Painting the letters was still not easy with the stencil, and they didn't really look good afterward.  Fortunately, Stacy has mad skills with knives and detailing brushes, and was able to scrape away excess paint or just paint over it.  The fun wasn't over yet, though; the sides of the controller still had to be put on, not to mention the wiring!

Wiring it up for the first time


I spent probably four hours carefully measuring 26AWG wire, splicing it, threading it through the holes of the switch contacts, and soldering it down to make a good connection.  Once all was said and done, I wanted to try it out on our NES.  The Retron 5 was conveniently located upstairs with Kirby all ready to boot, so all I had to do was plug it in and play.  However, I didn't bother putting the buttons on the switches since they weren't necessary to verify the electrical function.  It's really difficult to play this controller while sitting on the floor and trying to hit the exact switches.  Once I was through with the electronic testing, it was time to put the sides on.

Oh wait, I still need to do something about that PCB.  Originally, I had cut all the wires so that the PCB would hang outside of the NES controller.  I really wanted it to be pocketed inside the controller, so I had to go and shorten all the wires I had already spent such a long time working on.  After about another 4 hours of untangling wires that had commingled with each other and dealing with weak traces on the PCB that had broken off as a result, I was ready to actually mill out a PCB pocket and glue the hardboard down.  This would permanently enclose the PCB in the hardboard, as we would liberally apply epoxy to everything to ensure it would be durable and wouldn't budge.  We prayed that no damage would come to the electronics inside the controller after this stage!

Edging and More Painting


Once this was done, I needed to install the edges of the controller in order to give it some depth.  I excitedly drilled out a hole that the controller wire would be threaded through, but forgot that the side I drilled into would be the side that faces inward, not outward.  So when I threaded the wire through this hole, it ended up being totally the wrong way -- my detailing (beveled edge) was pointed inward, the side piece wouldn't line up flush with the edge of the controller face, and the side piece naturally wanted to sit above the controller face rather than below it.  Wrong, Wrong, WRONG!  I ended up having to cut a channel out of the side piece to liberate the wire, and then drill another hole, which was also about 1" off from where it needed to be, so I drilled a third hole, which was finally right.  Once this was done, I cut yet another channel out of the side piece and allowed it (and the other three side pieces) to set overnight.

This process was nerve-racking because the controller face was already beautifully painted, yet here we were planting it face-down on a dirty work table, getting who knows what kind of scratches on it while we tried to put on the sides.  Meanwhile, the side with the wire coming out of it loved to fall down, and I was just hoping that no damage was happening to the electronics as a result.  I also had the controller face-down to mill out the PCB pocket, which I hadn't done originally because I didn't have the exact dimensions of the PCB at the time I was cutting the controller face; when I went to go do it, the table router was down (along with other big machines) for general electrical repairs to our 3-phase power system.  Thus, one of the other members guided me in using a hand-operated mill to cut out the PCB pocket.  That was actually kind of fun, as you just go to town on the piece -- no need for much planning, CAD work, or exact measurements on a computer screen.

Final Touches


After the edges were in place, it was time to make them look nice and smooth with the controller face.  Despite our best efforts, the edges weren't exactly flush with the controller face, meaning we needed to sand down the spots that were too high.  We also used Bondo to fill in the seams, because even after sanding, it would have been obvious this was several boards fused together.  With the Bondo, it's way less obvious how we fabricated it, and the controller face looks great (and seamless).  This involved carefully applying Bondo to the seams without getting any on the controller face; once it cured, we had to go outside to paint over the Bondo with gray.  This was a bit nerve-racking too, with the controller face already being so nicely painted, so we made sure to totally cover it with tape-n'-drape so as not to ruin any of our hard stenciling work.

Oh, and did I mention that by now, it's the night before Let's Play Gaming Expo?  We're supposed to load this piece in tomorrow for the exhibition starting the following day.  As the first round of gray paint dries, I go to work creating some paper stencils to fill in the "letter holes" -- the middles of characters like A, B, and O.  My first technique for dealing with these hadn't worked at all (I had tried to keep the middles of the milled-out characters, sand them down carefully, glue them to skewers, and then hope I was holding them down hard and evenly enough while the spray paint was applied).  Once the paper stencils were done, I taped them over the letters and Stacy went at them with a paintbrush.

As she was doing this, I decided to do one final electrical test.  I brought our real Nintendo downstairs with Mario Bros/Duck Hunt, and tried to go to town.

Oops...

The A & B buttons, and Start, don't work.  Sh*8*@%!!!!

Luckily, Stacy is a smart one, and figured maybe it was a problem with a ground wire becoming disconnected from one of the buttons.  I would have been beside myself with fury and rage if this thing had somehow broken and it was one of the actual button signal wires that got messed up, since that would have been nearly impossible to fix that close to showtime.


Here we are burning the candle at both ends at the 11th hour.  It is almost midnight Friday, 6/17, the day of load-in for LPGE.  Look closely at the picture and you'll see a blue alligator wire linking the ground line on SELECT to START.  Fortunately, this solved our problem!  All the buttons worked fine with this one modification, so I made a wire for this and installed it the next morning (so we wouldn't have the ugly blue wire exposed and messing with gameplay).

Finally, our touch-up work was done, and our gray paint was dry, so we applied a coat of lacquer to dry overnight.  Then we had another coat of lacquer dry during the time we were at work.  Unfortunately, by this time, the weather was starting to become very hot and humid like it does here every year, so our lacquer has some nasty streaks and imperfections in it.  Fortunately, we should be able to hit it with a heat gun and smooth it out.

What a time!


We brought it to the show and it was a big hit.  The guys running the free play home console room were ecstatic to have it because apparently one of their other ideas for a big show piece fell through.  They plugged it into the old-school console TV lent by the National Videogame Museum in Frisco, and that made it extra-cool and really a sight to behold.  It drew all sorts of folks, from little kids who don't know how to do anything but mash buttons all the way to experienced Super Mario Bros. speed runners.  Some folks played it solo, others paired up to share the D-pad and A/B buttons.  Anyway, I wrote more about the Expo in general and some of my thoughts in the previous post, but didn't mention much about this controller because I knew I'd be writing this much more detailed article next.  After all, I put so much effort into this piece this whole year, especially in the past month; it deserves its own article.

And the fun didn't stop there... After the show, I spent a while getting together pictures, documentation, and recollections, while modifying various CAD files to reflect a more sane and logical set of steps (as well as to expunge any trademark violations therein).  Including this post, it's taken me about 5 nights' worth of work just for all this documentation and housekeeping in order to share the mechanics of the project with you!

And here's a little treat for making it to the end: a link to our GitHub page where you can find all the downloadables you would need to make this NES controller for yourself.

And I couldn't have done it without three other folks from the Makerspace: Patrick, Rodney, and Mike, who assisted me with various tools such as the table saw, wood clamps/nail gun, and the hand router.

Thursday, June 23, 2016

More things lost on today's youth

I'm a collector.  My wife doesn't help me with my problem.  It's something I've done since I was young, and growing up as an only child, it brought me joy to share these things with others.  On the other hand, she comes from a big family but has also come to appreciate stuff.  There are things we collect together, such as retro video games, which led us to the Let's Play Gaming Expo which was in Plano, TX on 6/18 and 6/19.  We brought eight of our home console systems with at least one game per system from our personal stash, and two pinball machines.  (Not to mention the gigantic 100x NES controller.)

We have seen a staggering evolution in technology and aesthetics in our just-shy-of-30 years of existence, and what's more interesting to me about this evolution is the things we don't do anymore rather than the new things that have emerged.  For instance, print media has declined drastically over the past 15 years, pay-phone booths are virtually non-existent, and hardly anyone keeps a Mapsco in their car to get around town anymore.  Some of us hardly remember how to hold a pencil, and are much faster and neater with Swype or even speech-to-text technology.

And time marches on.  The youngest kids old enough to truly enjoy Bob Barker's hosting style on The Price Is Right are now in college.  Soon, going out shopping or going to the post office (yes, some of us have businesses that ship goods, and some of you might still have landlords who are stuck in the '70s) will be a thing of the past.  In the far future, gas stations will be a thing of the past, and hopefully we will no longer be driving cars, thus buying us all sorts of time to enjoy the scenery or absorb ourselves in the latest gossip or games.

There are people in this world, young and old alike, who have missed out on these cultural phenomena.  And there are others who cherish them and to whom it all brings back fond memories.  At the expo, we shared our toys with kids of all ages whose experiences with these things varied greatly.  It was gratifying to watch a pinball wizard set a high score on one of my machines, which I will spend months trying to beat back at home.  It also felt good to teach a youngster how to use an Atari 7800 controller.  But when I wasn't around to guide people on the best ways to enjoy these things, I observed some awfully odd behavior.


You're One Person, But You Started Four Games


Pinball machines tend to have modes where a maximum of four people can take turns playing one ball at a time in a 3- or 5-ball game.  In the arcade, people would pay money for a "credit" which allowed one player to play.  Thus, four credits meant one player could play four games, four players could compete in a single game, or anything in between.  I've somewhat questioned the utility of this mode, since it doesn't really speed things up; if you're waiting in line to play a game, though, it can facilitate bonding with other pinheads and at least get more people in the queue playing rather than waiting.

Nevertheless, I wish I had made my games require people to pay for a credit, in order to avert a really bizarre behavior enabled by free play.  Many people (including practically everyone I walked by on Sunday) thought that the "Start" button would actually launch the ball and begin the game.  Pinball players with any experience know that the Start button only puts a ball in the shooter lane; you then need to pull back the plunger in order to try to make the skill shot.  So often, I saw a single person who had started four games all for themselves before figuring out how they had messed up.  I don't usually play even four games in a row on my own machines, so as you can imagine, most people got bored and walked away, leaving games in progress and no good path for the next player to earn a high score.  No one bothers resetting the machines for obvious reasons, but hopefully at least a couple more people in this world now know what to do if they decide to play pinball.

I even found four quarters in my World Cup Soccer '94 game at the end of the expo!  Were these people tipping me, or did they think they really needed to put in money to start a game?  Imagine if I hadn't checked that before putting the game up on a dolly; the quarters would have fallen back and possibly shorted something.  Nevertheless, it's proof that it still pays to own old games. ;)

Long Times To Boot


Among the consoles, I brought systems that were generally modded to boot faster than normal.  Raymond Jett sold me a special BIOS chip for the ColecoVision that eliminates the ungodly long time you normally have to wait for it to start up.  Same deal with our Sega Master System.  However, the poor old Amiga 500 had no such love; we still had to put in our Workbench 1.3 floppy disk and wait about 5 minutes for it to boot each time it needed to be rebooted (and so many people messed with it that reboots were frequently necessary).  People would often bang on the keyboard during the lengthy boot cycle to see if the system would do something.  Then, they were surprised to see that such an old system actually had a desktop-style GUI.  Evidently, no one remembers waiting around!  My Nexus 6P seems to take an ungodly long time to cold boot, so I guess most folks are either never letting their batteries die or never installing critical system updates.

Someday, I will see about getting a ROM with various versions of Workbench programmed onto it.  This'll save me the hassle of floppy disks and buy me the convenience of loading whatever Workbench version works nicest with whatever I want to do.  Similarly, I put a 1GB SCSI hard drive in my Mac Plus from 1986 to help with loading System 6 and large programs such as Photoshop.  While everyone should go through at least once what people back in the day had to suffer through, even those before us had limits and would eventually spend $1,000 on a device (usually a 40MB hard drive) that saved them from swapping the Photoshop floppy with the System floppy several times just to draw a gradient, not to mention the many swaps required just to load the program in the first place!

Miscellania


There seems to be an age at which one actually understands what's going on rather than just mashing the controller or watching the game basically play itself.  I was trying to teach one poor youngster how to play Joust on the Atari 7800, but he was notably bad at pressing the Fire button on the side in order to get his character to fly.  Another kid was getting a lesson in Food Fight from me, but he had a hard time aiming to sling deadly food at the attacking chefs.  Others would hold the joystick sideways or upside-down, causing their characters to move in unexpected ways.  How much user research was done on these things back in the day to find out how intuitive they were?  Did kids fumble with controllers as much back then?  It would make me sad to see my own kids fumble with everything in my collection, and I'd hope I could coach them enough that eventually they're adept at even the things I fumble with (basically anything from the SNES controller onward).

I was probably the only person working on systems live in-person in the free play room.  I still hadn't reassembled my ColecoVision from attempting the composite mod (so it would have composite video out rather than only RF out, which looks terrible), so I had to spend time taking care of that.  I was frequently removing the top cover of the Amiga to point out the Indivision ECS scan doubler / flicker fixer I installed into it so it could output VGA rather than to some weird unobtainium 23-pin RGB connector or to the black-and-white composite port.  It was extremely lucky that I got that card in from FedEx just hours before the show started, and I had lots of fun showing it off.

The other thing is that my Amiga case is actually broken -- it doesn't really snap shut anymore, and one of the standoffs that you're supposed to screw the disk drive into is cracked.  Thus, it was hard for people to eject disks from it, and some folks even assumed that since they couldn't press the Eject button, the disk drive was empty.  I walked up on it once and found two disks in the disk drive!  Argh.  Sad, but I will definitely have to rethink taking the Amiga with me to any public shows unattended.  That, and it developed a habit of screaming "Guru Meditation: Software Failure" at me, which is usually more a sign of hardware failure than software failure.  Hopefully I can whack that gremlin out of the system before long...

Not sure what other truly home-brew hardware people whipped up for the Expo (probably none), but besides building my 100-times-scale NES controller (which will be described in more detail here soon), I used the rest of the off-brand controller I harvested to make a Vectrex controller.  Controllers for the Vectrex are getting outrageously expensive, and I didn't feel like modifying one of our name-brand Sega controllers, so I decided to spend roughly double that value worth of my own time building one myself. Here's the guts:

The "component side" of this piece of prototyping board used in the Vectrex controller.  It's nothing more than hand-cut wires and several resistors, and on the other side, there are some short wires exposed that get shorted together each time you press down one of the buttons.

A few astute people noticed I was walking around with a Nintendo controller that had a Sega plug on the end of it!  It was nice that people were paying such close attention, but soon I solved that problem (and the other problem of "Why's this knock-off NES controller hooked up to this Vectrex?") by commissioning this decal from the folks at Muffin Bros. Graphics, basically taking the Vectrex controller graphics and massaging it onto their NES template:

Ooh, shiny!  And nothing like reproduction multi-carts for the Vectrex.

Thursday, May 19, 2016

Moments Inside Google I/O 2016

I was invited to attend Google I/O 2016 at pretty much the last second -- only about 10 days before the start of the event.  This is my first time here, though my wife DoesItPew has attended twice before.  We got to attend the convention together for once, and now, at dusk, I am dodging the cold wind blowing in over the hill here at the Shoreline Amphitheater in Mountain View, CA.

Prior to the event were many different Day Zero parties, including a really large one thrown by Intel.  As an Intel Innovator (though I rarely speak about that because I suck at publicizing myself and/or I mostly use their products on projects covered by NDAs), I got to show off some new hardware prototypes made recently and impressed several people who are fond of homemade hardware and/or LED products.  Unfortunately, my favorite prototype died on Wednesday after the Google I/O keynote; it probably got shorted when one of the power wires poking through my shirt decided to come loose and touch the other power wire.  Now all it does is burn my finger really badly whenever I try to touch the microcontroller after plugging it in.  Oh well, I can just replace it when I get home.  Meanwhile, no extra attention for me... :-(  I also got to meet some fellow employees of my company from different sites I don't usually interact with (and tour their two offices in the city), and met a couple people from a relatively recent acquisition we made in San Francisco.

As you may have heard, the lines to attend many of the sessions were absolutely ridiculous and caused people to miss out on things they wanted to see.  That, combined with the nearly 90-degree weather here in the Bay Area, led to a lot of unhappy, hot, and sunburned folks.  The folks I met from our acquisition were so disappointed with the lines that they bailed after the first hour and vowed to watch I/O only via the live streams!  They would not even bother showing up in person on Thursday or Friday.  But, as an amelioration for those of us who suffered through that and those who were not selected for the conference, you can (for probably the first time) experience the sessions of this I/O on the Google Developers YouTube Channel.  Thus, there's not really a reason to attend a talk unless you want to see something demonstrated live.

The real reason one would spend $900 on a conference pass now is to come out and schmooze with Googlers and other enthusiastic developers deemed worthy, and discuss lofty ideas one could develop on the backbone of all the stuff Google has come out with now and will release in the near future.  Plenty of them are around and enthusiastic to talk to you, showing off intriguing demos on anything from mobile app "point-and-click" testing to a music box made of a revolving wooden disc and sticky notes labeled "Do", "Re", "Mi", and so on, analyzed in real time by a camera and translated into text that feeds into a synthesizer.  A Google Maps API guy clarified the Terms of Service for me, and stated my idea for "Maker's Markers" wouldn't violate it.  Goodbye, OpenStreetMaps.  We also previewed Android Auto in various scenarios, where DoesItPew tore apart the UI with various Googlers for about 10 minutes while they took diligent notes.  (It's good to have someone so opinionated, but can you tell who wears the pants at our house? :-P)  I have spoken with people about Firebase, Project Tango augmented reality, various machine learning ideas, things I want to do relating to what I've seen here the last two days (which you can join on the new Google Spaces app), and of course progressive Web design and improving the user experience for my company's mobile offerings.


NFC "tattoos" representing my three ideas on the Big Idea Wall.  One may or may not be to rename it the Big Idea Board... doesn't that have a nicer ring to it? :-P

To see for yourself the ideas I added to the Big Idea Wall, check out these links and please join the conversation:

Don't Die Watching Android TV -- Emergency Notification Overlays
Order Fast Food on Android Auto
Optimize Meetups By Travel Time

As if I/O 2015 wasn't inspirational enough just from watching the keynote, now I will have a whole 'nother year of projects to keep myself busy after work, not to mention plenty of stuff to share with my coworkers and help build new and innovative aspects of the financial industry as it pertains to auto finance and home loans.

Tomorrow is the final day of I/O 2016, and we have yet to see what that will bring.  After that on my agenda is to hit the Bay Area Maker Faire in San Mateo and the Computer History Museum, not to mention see some family and friends who live out here on the "Left Coast."  And if that's not enough to do, once I get back late Sunday night, there's yet another conference for software engineers the following Tuesday at my workplace and then I'm playing a gig with the corporate band to celebrate the recent expansion of my workspace -- the awesome innovation lab known as The Garage.

Meanwhile, it's so friggin' cold out here...

Thursday, April 21, 2016

Run Integration Tests Separately Within Your Maven Build

There are several ways to configure Maven to run designated tests separately in a Java project.  Usually, people want to distinguish between unit tests and other types of automated tests during a build.  Unit tests are fast because you mock all the external services that the code under test relies upon.  They're also typically smaller than functional tests, since they are (supposed to be ;) testing a unit of code rather than an entire feature.

However, functional tests are also critical to the success of your project.  You or your managers are probably interested in seeing automated end-to-end usage of your application running constantly without errors, but how is this possible without annoying the developers as they wait for all the tests to finish?

The Maven Failsafe plugin is most helpful in separating unit tests from functional tests.  By default, it picks up tests whose filenames follow one of these patterns:

**/IT*.java
**/*IT.java
**/*ITCase.java

Of course, you can add (or even exclude) files of particular naming patterns by modifying your POM file as described in the documentation.
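To get a feel for which files land where, here's a quick shell sketch (the filenames are made up for illustration) that applies the same default globs Failsafe uses:

```shell
# Hypothetical filenames run against Failsafe's default naming patterns.
# Anything matching IT*.java, *IT.java, or *ITCase.java goes to Failsafe;
# everything else is left for Surefire's unit-test run.
for f in ITLogin.java CheckoutIT.java CartITCase.java CartTest.java; do
  case "$f" in
    IT*.java|*IT.java|*ITCase.java) echo "$f -> Failsafe" ;;
    *)                              echo "$f -> Surefire" ;;
  esac
done
```

Running this prints the first three filenames routed to Failsafe and `CartTest.java -> Surefire` for the last one.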

The Circle of Life(cycles): It Builds Us All


A very simple way to get started with Failsafe is simply to add the following to your POM file:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.19.1</version>
    <executions>
        <execution>
            <id>integration-test</id>
            <goals>
                <goal>integration-test</goal>
            </goals>
        </execution>
        <execution>
            <id>verify</id>
            <goals>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>

This tells Maven to bind Failsafe's integration-test goal to the integration-test phase of your build, and likewise for the verify goal.  This means that to run your functional tests, all you need to do is run Maven with a lifecycle phase of “integration-test” or later, including the popular “mvn install”.  To skip your functional tests, simply pick a phase prior to “integration-test”, such as “mvn package”.
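Concretely, assuming the plugin binding above, the lifecycle phase you pick on the command line is what toggles the functional tests (these commands require an actual Maven project to run, of course):

```shell
# Stops after the package phase -- unit tests run, functional tests do not
mvn package

# Runs through the verify phase, so Failsafe's integration-test and verify goals execute
mvn verify

# install comes after verify in the default lifecycle, so functional tests run here too
mvn install
```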

Of course, this leaves you with the disadvantage that you won’t be able to deploy the application to any environments until all your tests finish, and it probably won’t deploy at all until all your tests pass.  If you want to use “mvn install” to deploy your application to your test environment without waiting on the functional tests to complete, consider using Maven profiles.

Separation Via Profiles


In Maven, you can construct different profiles to specify different ways you want a build to work, such as running different plugins: Surefire (for unit tests) versus Failsafe (for functional tests).  Here is an example of what you would put in your POM to run Failsafe when the Maven profile with-functional-tests is specified:

<profiles>
    <profile>
        <id>with-functional-tests</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-failsafe-plugin</artifactId>
                    <version>2.19.1</version>
                    <executions>
                        <execution>
                            <id>integration-test</id>
                            <goals>
                                <goal>integration-test</goal>
                            </goals>
                        </execution>
                        <execution>
                            <id>verify</id>
                            <goals>
                                <goal>verify</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

Notice that everything within <plugin></plugin> is exactly the same as in the first example; the only difference is that Failsafe now runs only when this profile is specified (on the command line with -Pwith-functional-tests).  This also gives you the benefit of limiting which environments actually run the integration & regression tests: developers won’t want to run every single functional test just to make sure the build succeeds before they can push changes to the code repository, and with this setup the tests won’t run unless the profile is specified explicitly (unless you put this under the “default” profile, and then they’ll just hate you :-P).

Annotation As a Solution


Yet another approach suggests creating an empty interface simply for marking purposes and then using that interface as a @Category to distinguish between your test types.

You might define a file such as IntegrationTest.java:

package com.test.annotation.type;
public interface IntegrationTest {}

And then use it in a real test as such:

import org.junit.experimental.categories.Category;
import com.test.annotation.type.IntegrationTest;

@Category(IntegrationTest.class)
public class RealTest {
    // etc...
}

You then need to set up the POM so that the Surefire plugin (for unit tests) explicitly ignores your IntegrationTest type:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.19.1</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.maven.surefire</groupId>
      <artifactId>surefire-junit47</artifactId>
      <version>2.19.1</version>
    </dependency>
  </dependencies>
  <configuration>
    <includes>
      <include>**/*.class</include>
    </includes>
    <excludedGroups>com.test.annotation.type.IntegrationTest</excludedGroups>
  </configuration>
</plugin>

Also note the choice of surefire-junit47 as the Surefire provider artifact; this JUnit 4.7+ provider is the one that correctly detects categories assigned with @Category.

Finally, you need to set up the POM so that the Failsafe plugin will actually run your IntegrationTest type (and only that type) during the integration-test build stage:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.19.1</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.maven.surefire</groupId>
            <artifactId>surefire-junit47</artifactId>
            <version>2.19.1</version>
        </dependency>
    </dependencies>
    <configuration>
        <groups>com.test.annotation.type.IntegrationTest</groups>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
            </goals>
            <configuration>
                <includes>
                    <include>**/*.class</include>
                </includes>
            </configuration>
        </execution>
    </executions>
</plugin>

The downside to this approach is that you have to write that IntegrationTest.java interface file in each module in which you plan to use integration tests.  In a multi-module Maven project, this violates the principle of DRY.  Plus, it involves more XML (or at least more complex XML) than the previous methods, along with dependencies on surefire-junit47 and org.junit.experimental.categories.Category that you wouldn’t otherwise need.

Saturday, April 2, 2016

Arduino Day Special: Make an EPROM Tester with an Arduino Mega and Octal Latch

I could have just asked around to see if anyone had an EPROM validator, but why ask when you can spend several hours doing it yourself, and then several more hours writing in pedantic detail about it?  Of course, I must have the DIY bone...

Who still uses EPROMs, anyway?


While working on old solid-state pinball machines from the 1980s and late '70s, you might run into a situation where a dead ROM chip needs to be replaced.  Certain types of machines (I'm looking at you, all you Gottlieb System 80's) suffer a problem where coils can get locked on due to bad grounding design throughout the system, and then cause transistors and all sorts of other things on the driver board and even possibly the main board to fry themselves.  In other cases, battery corrosion might leech into the ROM chip and possibly compromise it.  No matter what the case is, you might find yourself in need of new ROMs at some point.

Now I could easily go and find new ROMs for my game, order them, and call it a day -- oh wait, I did mention System 80, didn't I?  Well it turns out Gottlieb (or the remnants thereof) is very picky about their licensing and who can sell related products, and the one legitimate source of the game ROM wants $50 for it.  I'm sorry, but I'm not paying that much.  I'll just get my own ROM chips and try to find a way to get the source code.

Now there are two things you need to do before plugging a new EPROM into a device:
  • Make sure it is erased
  • Program it with your new program
In both steps, you probably want to make sure the job was done correctly, no?  It would not be great to discover the program either didn't burn correctly, or couldn't burn correctly because there were already some zeros living on the EPROM that don't happen to line up with the zeros in your program.  Now, again, I'll pose the question I asked you at the top. ;)

Sanity Check


Before going down the rathole of doing this myself and having to do both the hardware setup and software programming (let's face it, wiring by itself takes enough time), I wanted to see if anyone had attacked this problem before.  I found, besides various forum posts that don't offer a complete solution, someone's GitHub code where they had utilized three different I/O registers on the chip to make this happen.  That's all fine and dandy, and was in fact the solution I was about to implement for myself... until I looked a little bit closer at the choice of I/O registers used and what the names of some of the pins were.

The ATmega2560 chip featured on the Arduino Mega happens to have outputs for /RD, /WR, and ALE.  I also noticed one register whose pins were labeled AD[7:0] and then another one whose pins were simply labeled A[15:8].  This evoked memories of my 8051 Microcontroller class in college (no, I swear I'm not that old yet!), and I realized this implies the chip can somehow multiplex its output of the first 8 address bits with the input (i.e. the data line) coming from the EPROM itself.  So, yes, it is in fact possible to use only two I/O registers on the Arduino Mega in order to read/write to an external chunk of memory. 

However, before you get started, note this approach requires access to a 74x373 or 74x573 octal (8-bit) transparent latch chip whose timing specifications comply with the requirements mentioned on page 28 of the ATmega2560 datasheet.  The only difference between the 373 & 573 is the pinout, so use whichever you think will be more convenient for your end result (most people find the 573's pinout more convenient and pick that one).

Don't Forget To Register For This Service


I turned to the ATmega2560 datasheet and found the simple steps on how to do this.  In order to let the chip take total control of the PORTC (A[15:8]) and PORTA (AD[7:0]) registers plus the /RD, /WR, and ALE signals so you don't have to worry about driving them yourself or changing input states, you need to be concerned with the two registers XMCRA and XMCRB.  These control the behavior of the XMEM (eXternal MEMory) interface on various AVR chips including the ATmega2560.

Paraphrased from the ATmega2560 datasheet starting on page 36:

XMCRA has the following settings:
  • SRE (Bit 7): Set to enable the XMEM interface.  If you want to do anything described in this post at all, you must set this bit to 1.
  • SRL[2:0] (Bits 6:4): The Wait-State Sector Limit.  If you are worried about valid data not being ready from your EPROM quickly enough given the clock speed of your AVR, you can add wait states, and even split the external address space into two sectors with different wait-state settings.  For my case, I left all of the external memory as one single sector governed by SRW11/SRW10, so I set SRL[2:0] to 000b.
  • SRW11, SRW10 (Bits 3:2): Wait State Select 1.  Since I am paranoid, I set these bits to 11b so it would enforce the maximum wait.
  • SRW01, SRW00 (Bits 1:0): Wait State Select 0.  Since I left the whole space as one sector, these bits (which would govern the lower sector) don't matter.
XMCRB has the following settings:
  • XMBK (Bit 7): External Memory Bus-keeper Enable.  When this bit is set, the chip will retain the most recent value seen on the bus, even when another device would have set the lines to high Z.  This means the address hangs around on PORTA after ALE goes low (normally the address would be wiped out as the bus goes high Z for just a bit before the data is driven onto the port).  Also, the data from the EPROM hangs around on PORTA after /RD goes high (normally it would get wiped out as the bus goes to high Z before the AVR writes the next address).  Basically it acts like a smart latch that you don't have to toggle yourself, and in fact, you can activate this feature on PORTA without necessarily using the rest of the XMEM interface simply by setting this bit.
  • Reserved (Bits 6:3): Leave these alone.
  • XMM2, XMM1, XMM0 (Bits 2:0): External Memory High Mask.  These bits determine how much of PORTC is given back to you for regular GPIO use.  If you have a device smaller than 64K words, then obviously you won't need (and it probably doesn't even have inputs for) all 16 address lines.  For example, my 2764 chip (8K words * 8 bits/word = 64 Kbits = 8 KB EPROM) only uses 13 address lines, so I can set these XMM[2:0] bits to 011b so that I can regain the regular use of PORTC[7:5] if desired to do my usual reads from sensors, driving robot controllers or LEDs, or other general shenanigans.
You can see how I finally chose to set these registers in the code example down at the bottom.  Later on, I will also describe the instructions you have to send to the chip in order to get it to read memory, including exactly how to send a memory address to the EPROM through A[15:0].

Another important caveat mentioned in the datasheet discusses exactly how memory addressing works.  Since the ATmega's own internal memory (registers, I/O, and SRAM) occupies addresses 0 through 0x21FF, you must use the principle of aliasing to access the beginning of your EPROM.  Without aliasing, these bytes would be masked by the ~8KB of internal SRAM plus other MMIO/PMIO on the AVR.  Thus, to read the first 8,704 (0x2200) bytes of your EPROM, you need to actually start by reading memory address 0x8000.  Also, if you have a ROM whose size is >32K words (e.g. the 64512 EPROM chip), there are other special considerations you need to make as well.  This is explained in more detail on pages 31 & 32 of the datasheet.

Making Connections


Next up is actually wiring everything up on the breadboard to the Arduino Mega.  (You do remember I'm still using an Arduino despite talking about all the mumbo-jumbo from the ATmega datasheet, yes?)  The wiring diagram to use is shown in that datasheet on page 28, Figure 9-2.  Note that the 2764 datasheet (at least the one I was using) mentions that its /G line should be hooked up to the /RD line of the memory controller (thus saving me from trying it on something else and being disappointed).  Also, when the ATmega2560 datasheet mentions that the latch should be transparent to the EPROM and/or AVR when G is high, that means ALE on the AVR should be hooked up to G (the latch enable) on the latch, not /E (the output enable), since you don't want the latch to ever output high-Z.  The latch should either be propagating D (the latch input) through Q (the latch output) while ALE is high (which is what they mean by "transparent"), or holding on Q whatever D was at the moment ALE went low for as long as ALE remains low.

Besides Figure 9-2, which you can open up for yourself, here's a table of the same connections:

MCU        Latch      EPROM
/RD        -          /G
AD7:0      D7:0       -
AD7:0      -          D7:0
-          Q7:0       A7:0
A12:8      -          A12:8
ALE        G          -

And here's a picture of my final setup:




Assembled In the USA


Yes, a mark of quality indeed... Anyway, if you've gone this far, why not write a little bit of assembly code just to put your effort over the edge into ridiculousness?  Because I am lazy and I use Windows mostly for AVR development, I still use the plain ol' Arduino IDE and blend assembly with C code (also I think it's fun to fly in the face of all the haters of basic Arduino stuff).

The macro for running assembly code inside C is called asm(), and each line of assembly can go into a double-quoted string; adjacent strings are concatenated by the compiler, so they can be chained back-to-back without commas (though multi-line asm() calls are a bit outside the scope of this post).  When you add the keyword volatile, that tells the compiler the statement has side effects it can't see, so it must not delete the asm, move it, or assume the variables feeding it never change.  Without the volatile keyword, you might run a loop from 0 to 32767 with the intent to access the ith element of the EPROM, but only ever access the 0th element because the compiler "optimized" the assembly on the assumption that the address argument doesn't change.  Whoops!

I started with the instruction lds (Load Direct from data Space) to fetch external memory.  It takes two arguments: a register (any one register from r0 to r31 will do) and a constant address.  This constant must be hard-coded into your assembly statement and cannot be provided by a variable.  Unfortunately, this doesn't really facilitate testing unless you want to write a really long unrolled loop!

Fortunately, there are instructions in assembly that allow you to store the memory address into a register, read the memory address indicated by the register, and then post-increment or pre-decrement that number for you so you don't even have to worry about updating the index.  Specifically, registers R26 through R31 handle this.  The odd-numbered registers store the high byte of the 16-bit memory address, and the even-numbered registers store the low byte.  For a diagram, check Figure 7.5.1 on page 14 of the ATmega datasheet.  These six registers represent three 16-bit special registers called X, Y, and Z.  In my code, I use Y (r28 & r29) because it worked most reliably out of the three.

At Last... The Code!


Note: Be sure you have selected the "Arduino/Genuino Mega or Mega 2560" board in the Arduino IDE, or else it will not load the appropriate header files and will complain that XMCRA and friends are undefined.

/*   Note: If you want to test the boundary conditions, 
 *    the last address of internal SRAM is 0x21FF and the 
 *    first address of external memory is 0x2200, which also
 *    actually corresponds to address 0x2200 on the external
 *    device.  To hit the very first address of the device
 *    (0x0), you must take advantage of aliasing by reading
 *    from 0x8000 to 0xA1FF.
 *    
 *    The following code demonstrates writing to internal
 *    SRAM (0x21FF) and will fail to write to the first 
 *    available address of an EPROM (0x2200), since an EPROM
 *    cannot be written over the bus:

  asm volatile("ldi r16, 0xFF");
  asm volatile("sts 0x21FF, r16");
  asm volatile("sts 0x2200, r16");

 */
uint32_t i;
// Globals rather than locals so the inline asm below can reference
// their symbols by name; volatile so the compiler re-reads them.
volatile unsigned int c, d;

void setup() {
  XMCRA = 0b10001100;  // SRE=1 (XMEM on), SRL[2:0]=000, SRW1[1:0]=11 (max wait)
  XMCRB = 0b10000011;  // XMBK=1 (bus-keeper on), XMM[2:0]=011 (PC7:5 freed)
  Serial.begin(115200);
}

void loop() {
  delay(1000);  // this helps avoid garbage at the beginning
  /*
  // This part proves the auto-increment feature is working
  // and that the first 10 bytes are indeed being read correctly
  asm volatile("ldi r28, 0x00");  // YL
  asm volatile("ldi r29, 0x80");  // YH

  for (i = 0x8000; i < 0x800A; i++) {
    asm volatile("sts (d), r28");
    asm volatile("sts (d + 1), r29");
    Serial.print("Contents of address ");
    Serial.print(d);

    asm volatile("ld r0, Y+");
    asm volatile("sts (c), r0");
    Serial.print(": ");
    Serial.println(c, HEX);
  }
  */

  asm volatile("ldi r28, 0x00");  // YL
  asm volatile("ldi r29, 0x80");  // YH

  for (i = 0x8000; i < 0xA000; i++) {  // for an 8KB EPROM
    asm volatile("ld r0, Y+");
    asm volatile("sts (c), r0");
    // The following prints out hex in the format
    // FF FF FF FF  FF FF FF FF  FF FF FF FF  FF FF FF FF
    if (c < 16)
      Serial.print(0);
    Serial.print(c, HEX);
    Serial.print(" ");
    if (i % 16 == 3 || i % 16 == 7 || i % 16 == 11)
      Serial.print(" ");
    if (i % 16 == 15)
      Serial.println();
  }

  while (true) {
    // done -- halt here so the dump prints only once
  }
}


Reference Materials


This article would not be possible without the help of the following:

ATmega2560 Datasheet
AVR Instruction Set Manual
Introduction to AVR assembler programming for beginners
GCC inline assembler cookbook