Thursday, June 23, 2016

More things lost on today's youth

I'm a collector.  My wife doesn't help me with my problem.  It's something I've done since I was young, and growing up as an only child, it brought me joy to share these things with others.  On the other hand, she comes from a big family but has also come to appreciate stuff.  There are things we collect together, such as retro video games, which led us to the Let's Play Gaming Expo which was in Plano, TX on 6/18 and 6/19.  We brought eight of our home console systems with at least one game per system from our personal stash, and two pinball machines.  (Not to mention the gigantic 100x NES controller.)

We have seen a staggering evolution in technology and aesthetics in our just shy of 30 years of existence, and what's more interesting to me about this evolution is the things we don't do anymore rather than the new things that have replaced them.  For instance, print media has declined drastically over the past 15 years, pay-phone booths are virtually non-existent, and hardly anyone keeps a Mapsco in their car to get around town anymore.  Some of us hardly remember how to hold a pencil, and are much faster and neater with Swype or even speech-to-text technology.

And time marches on.  The youngest kids old enough to truly enjoy Bob Barker's hosting style on The Price Is Right are now in college.  Soon, going out shopping or going to the post office (yes, some of us have businesses that ship goods, and some of you might still have landlords who are stuck in the '70s) will be a thing of the past.  Further out, gas stations will disappear too, and hopefully we will no longer be driving cars, thus buying us all sorts of time to enjoy the scenery or immerse ourselves in the latest gossip or games.

There are people in this world, young and old alike, who have missed out on these cultural phenomena.  And there are others who cherish them and to whom it all brings back fond memories.  At the expo, we shared our toys with kids of all ages whose experiences with these things varied greatly.  It was gratifying to watch a pinball wizard set a high score on one of my machines, which I will spend months trying to beat back at home.  It also felt good to teach a youngster how to use an Atari 7800 controller.  But when I wasn't around to guide people on the best ways to enjoy these things, I observed some awfully odd behavior.


You're One Person, But You Started Four Games


Pinball machines tend to have modes where a maximum of four people can take turns playing one ball at a time in a 3- or 5-ball game.  In the arcade, people would pay money for a "credit" which allowed one player to play.  Thus, four credits meant one player could play four games, four players could compete in a single game, or anything in between.  I've somewhat questioned the utility of this mode, since it doesn't really speed things up; if you're waiting in line to play a game, though, it can facilitate bonding with other pinheads and at least get more people in the queue playing rather than waiting.

Nevertheless, I wish I had made my games require people to pay for a credit, if only to avert a really bizarre behavior enabled by free play.  Many people (including practically everyone I walked by on Sunday) thought that the "Start" button would actually launch the ball and begin the game.  Pinball players with any experience know that the Start button only puts a ball in the shooter lane for you; you then need to pull back the plunger in order to try to make the skill shot.  So often, I saw a single person who had started four games all for themselves before figuring out what they had done.  I don't usually play even four games in a row on my own machines, so as you can imagine, most people got bored and walked away, leaving the next passerby in the middle of a game in progress with no good path to a high score.  No one bothered resetting the machines, for obvious reasons, but hopefully at least a couple more people in this world now know what to do if they decide to play pinball.

I even found four quarters in my World Cup Soccer '94 game at the end of the expo!  Were these people tipping me, or did they think they really needed to put in money to start a game?  Imagine if I hadn't checked before tipping the game up on a dolly; the quarters would have fallen back and possibly shorted something.  Still, it's proof that it pays to own old games. ;)

Long Times To Boot


Among the consoles, I brought systems that were generally modded to boot faster than normal.  Raymond Jett sold me a special BIOS chip for the ColecoVision that eliminates the ungodly long time you have to wait for it to start up normally.  Same deal with our Sega Master System.  However, the poor old Amiga 500 had no such love; we still had to put in our Workbench 1.3 floppy disk and wait about 5 minutes for it to boot each time it needed to be rebooted (and so many people messed with it that reboots were frequently necessary).  During the lengthy boot cycle, people would often bang on the keyboard to see if the system would do anything.  Then, they were surprised to see that such an old system actually had a desktop-style GUI.  Evidently, no one remembers waiting around!  My Nexus 6P seems to take a painfully long time to cold boot, so I guess most folks are either never letting their batteries die or never installing critical system updates.

Someday, I will see about getting a ROM with various versions of Workbench programmed onto it.  This'll save me the hassle of floppy disks and buy me the convenience of loading whatever Workbench version works nicest with whatever I want to do.  Similarly, I put a 1GB SCSI hard drive onto my Mac Plus from 1986 in order to help with loading System 6 and large programs such as Photoshop.  While everyone should go through at least once what people back in the day had to suffer through, even those before us had limits and would eventually spend $1,000 on a device (usually a 40MB hard drive) that saved them from having to swap the Photoshop floppy with the System floppy several times just to draw a gradient -- never mind all the swaps required just to load the program in the first place!

Miscellanea


There seems to be an age before which kids don't actually understand what's going on, and instead just mash the controller or watch the game basically play itself.  I was trying to teach one poor youngster how to play Joust on the Atari 7800, but he was notably bad at pressing the Fire button on the side in order to get his character to fly.  Another kid was getting a lesson in Food Fight from me, but he had a hard time aiming to sling deadly food at the attacking chefs.  Others would hold the joystick sideways or upside-down, causing their character to move in unexpected ways.  How much user research was done on these things back in the day to find out how intuitive they were?  Did kids fumble with controllers as much back then?  It would make me sad to see my own kids fumble with everything in my collection, and I would hope to coach them enough that eventually they're adept at even the things I fumble with (basically anything like a SNES controller or newer).

I was probably the only person working on systems live in person in the free play room.  I still hadn't reassembled my ColecoVision from attempting the composite mod (so it would have composite video out rather than only RF out, which looks terrible), so I had to spend time taking care of that.  I was also frequently removing the top cover of the Amiga to point out the Indivision ECS scan doubler / flicker fixer I installed so it could output VGA rather than relying on some weird unobtainium 23-pin RGB connector or the black-and-white composite port.  It was extremely lucky that the card arrived via FedEx just hours before the show started, and I had lots of fun showing it off.

The other thing is that my Amiga case is actually broken -- it doesn't really snap shut anymore, and one of the standoffs that you're supposed to screw the disk drive into is cracked.  Thus, it was hard for people to eject disks from it, and some folks even assumed that since they couldn't press the Eject button, the disk drive was empty.  I walked up on it once to find two disks in the disk drive!  Argh.  Sad, but I will definitely have to rethink leaving the Amiga unattended at any future public shows.  That, and it developed a habit of screaming "Guru Meditation: Software Failure" at me, which is usually more a sign of hardware failure than software failure.  Hopefully I can whack that gremlin out of the system before long...

Not sure what other truly home-brew hardware people whipped up for the Expo (probably none), but besides building my 100-times-scale NES controller (which will be described in more detail here soon), I used the rest of the off-brand controller I harvested to make a Vectrex controller.  Controllers for the Vectrex are getting outrageously expensive, and I didn't feel like modifying one of our name-brand Sega controllers, so I decided to spend roughly double that value worth of my own time building one myself. Here's the guts:

The "component side" of this piece of prototyping board used in the Vectrex controller.  It's nothing more than hand-cut wires and several resistors, and on the other side, there are some short wires exposed that get shorted together each time you press down one of the buttons.

A few astute people noticed I was walking around with a Nintendo controller that had a Sega plug on the end of it!  It was nice that people were paying such close attention, but soon I solved that problem (and the other problem of "Why's this knock-off NES controller hooked up to this Vectrex?") by commissioning this decal from the folks at Muffin Bros. Graphics, basically taking the Vectrex controller graphics and massaging them onto their NES template:

Ooh, shiny!  And nothing like reproduction multi-carts for the Vectrex.

Thursday, May 19, 2016

Moments Inside Google I/O 2016

I was invited to attend Google I/O 2016 at pretty much the last second -- only about 10 days before the start of the event.  This is my first time here, though my wife DoesItPew has been here twice before already.  We got to attend the convention together for once, and now I am dodging the cold wind blowing in over the hill here at dusk from the Shoreline Amphitheater in Mountain View, CA.

Prior to the event were many different Day Zero parties, including a really large one thrown by Intel.  As an Intel Innovator (though I rarely speak about that because I suck at publicizing myself and/or I mostly use their products on projects covered by NDAs), I got to show off some new hardware prototypes made recently and impressed several people who are fond of homemade hardware and/or LED products.  Unfortunately, my favorite prototype died on Wednesday after the Google I/O keynote; it probably got shorted when one of the power wires poking through my shirt decided to come loose and touch the other power wire.  Now all it does after being plugged in is burn my finger really badly whenever I try to touch the microcontroller.  Oh well, I can just replace it when I get home.  Meanwhile, no extra attention for me... :-(  I also got to meet some fellow employees of my company from different sites I don't usually interact with (and tour their two offices in the city), and met a couple people from a relatively recent acquisition we made in San Francisco.

As you may have heard, the lines to attend many of the sessions were absolutely ridiculous and caused people to miss out on things they wanted to see.  That, combined with the nearly 90-degree weather here in the Bay Area, led to a lot of unhappy, hot, and sunburned folks.  The folks I met from our acquisition were so disappointed with the lines that they bailed after the first hour and vowed to watch I/O only via the live streams!  They would not even bother showing up in person on Thursday or Friday.  But as an amelioration for those of us who suffered through that, and for those who were not selected for the conference, you can (probably for the first time) experience the sessions of this I/O on the Google Developers YouTube Channel.  Thus, there's not really a reason to attend a talk unless you want to see something demonstrated live.

The real reason one would spend $900 on a conference pass now is to come out and schmooze with Googlers and other enthusiastic developers deemed worthy, and to discuss lofty ideas one could develop on the backbone of all the stuff Google has come out with now and will release in the near future.  Plenty of them are around and enthusiastic to talk to you, showing off intriguing demos on anything from mobile app "point-and-click" testing to a music box made out of a revolving wooden disc and sticky notes labeled "Do", "Re", "Mi", and so on, analyzed in real time by a camera and translated into text that feeds into a synthesizer.

A Google Maps API guy clarified the Terms of Service for me, and stated my idea for "Maker's Markers" wouldn't violate it.  Goodbye, OpenStreetMaps.  We also got previews of Android Auto in various scenarios, and DoesItPew tore apart the UI with various Googlers for about 10 minutes while they took diligent notes.  (It's good to have someone so opinionated, but can you tell who wears the pants at our house? :-P)  I have spoken with people about Firebase, Project Tango augmented reality, various machine learning ideas, things I want to do relating to what I've seen here the last two days (which you can join on the new Google Spaces app), and of course progressive Web design and improving the user experience for my company's mobile offerings.


NFC "tattoos" representing my three ideas on the Big Idea Wall.  One may or may not be to rename it the Big Idea Board... doesn't that have a nicer ring to it? :-P

To see for yourself the ideas I added to the Big Idea Wall, check out these links and please join the conversation:

Don't Die Watching Android TV -- Emergency Notification Overlays
Order Fast Food on Android Auto
Optimize Meetups By Travel Time

As if I/O 2015 wasn't inspirational enough just from watching the keynote, now I will have a whole 'nother year of projects to keep myself busy after work, not to mention plenty of stuff to share with my coworkers and help build new and innovative aspects of the financial industry as it pertains to auto finance and home loans.

Tomorrow is the final day of I/O 2016, and we have yet to see what that will bring.  After that on my agenda is to hit the Bay Area Maker Faire in San Mateo and the Computer History Museum, not to mention see some family and friends who live out here on the "Left Coast."  And if that's not enough to do, once I get back late Sunday night, there's yet another conference for software engineers the following Tuesday at my workplace and then I'm playing a gig with the corporate band to celebrate the recent expansion of my workspace -- the awesome innovation lab known as The Garage.

Meanwhile, it's so friggin' cold out here...

Thursday, April 21, 2016

Run Integration Tests Separately Within Your Maven Build

There are several ways to configure Maven to run designated tests separately in a Java project.  Usually, people want to distinguish between unit tests and other types of automated tests during a build.  Unit tests are fast because you are mocking all the external services that the particular code under test is relying upon.  They’re also typically smaller than functional tests, since they are (supposed to be ;) testing a unit of code rather than an entire feature.

However, functional tests are also critical to the success of your project.  You or your managers are probably interested in seeing automated end-to-end usage of your application running constantly without errors, but how is this possible without annoying the developers as they wait for all the tests to finish?

The Maven Failsafe plugin is most helpful in separating unit tests from functional tests.  By default, it picks up tests whose filenames follow one of these patterns:

**/IT*.java
**/*IT.java
**/*ITCase.java

Of course, you can add (or even exclude) files of particular naming patterns by modifying your POM file as described in the documentation.
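For example, here is a sketch of such a configuration (the *FunctionalTest and FlakyIT patterns are made-up names for illustration; also note that supplying your own <includes> replaces the default patterns above):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.19.1</version>
    <configuration>
        <includes>
            <include>**/*IT.java</include>
            <include>**/*FunctionalTest.java</include>
        </includes>
        <excludes>
            <exclude>**/FlakyIT.java</exclude>
        </excludes>
    </configuration>
</plugin>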

The Circle of Life(cycles): It Builds Us All


A simple way to get started with Failsafe is to add the following to your POM file:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.19.1</version>
    <executions>
        <execution>
            <id>integration-test</id>
            <goals>
                <goal>integration-test</goal>
            </goals>
        </execution>
        <execution>
            <id>verify</id>
            <goals>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>

This tells Maven to bind Failsafe's integration-test goal to the integration-test phase of your build, and likewise the verify goal to the verify phase.  This means that to run your functional tests, all you need to do is run Maven with a lifecycle phase of "integration-test" or later, including the popular "mvn install".  To skip your functional tests, simply pick a phase prior to "integration-test", such as "mvn package".
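Concretely, with the configuration above:

mvn package    # stops before the integration-test phase; unit tests only
mvn verify     # runs the integration-test phase, then verify
mvn install    # everything verify does, plus installing the artifact locally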

Of course, this leaves you with the disadvantage that you won't be able to deploy the application to any environment until all your tests finish, and it probably won't deploy at all until all your tests are passing.  If you want to use "mvn install" to deploy your application to your test environment without waiting on the functional tests to complete, consider using Maven profiles.

Separation Via Profiles


In Maven, you can construct different profiles in order to specify different ways you want a build to work, such as running Surefire (for unit tests) versus Failsafe (for functional tests).  Here is an example of what you would put in your POM to run Failsafe when the Maven profile with-functional-tests is specified:

<profiles>
    <profile>
        <id>with-functional-tests</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-failsafe-plugin</artifactId>
                    <version>2.19.1</version>
                    <executions>
                        <execution>
                            <id>integration-test</id>
                            <goals>
                                <goal>integration-test</goal>
                            </goals>
                        </execution>
                        <execution>
                            <id>verify</id>
                            <goals>
                                <goal>verify</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

Notice that everything within <plugin></plugin> is exactly the same as in the first example; the only difference is that the Failsafe plugin only runs when this profile is specified (on the command line with -Pwith-functional-tests).  This also lets you limit which environments the integration & regression tests actually run on: developers won't want to run every single functional test just to make sure the build succeeds before they can push changes to the code repository, and now those tests won't run unless the profile is specified explicitly (unless you put this under the "default" profile, and then they'll just hate you :-P).

Annotation As a Solution


Yet another approach is to create an empty interface purely for marking purposes and then use that interface as a @Category to distinguish between your test types.

You might define a file such as IntegrationTest.java:

package com.test.annotation.type;

public interface IntegrationTest {}

And then use it in a real test as such:

import org.junit.experimental.categories.Category;
import com.test.annotation.type.IntegrationTest;

@Category(IntegrationTest.class)
public class RealTest {
    // etc...
}

You then need to set up the POM so that the Surefire plugin (for unit tests) explicitly ignores your IntegrationTest type:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.19.1</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.maven.surefire</groupId>
      <artifactId>surefire-junit47</artifactId>
      <version>2.19.1</version>
    </dependency>
  </dependencies>
  <configuration>
    <includes>
      <include>**/*.class</include>
    </includes>
    <excludedGroups>com.test.annotation.type.IntegrationTest</excludedGroups>
  </configuration>
</plugin>

Also note the choice of surefire-junit47 for the artifact ID, since this particular test provider correctly detects categories assigned with @Category.

Finally, you need to set up the POM so that the Failsafe plugin will actually run your IntegrationTest type (and only that type) during the integration-test build stage:

<plugin>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.19.1</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.maven.surefire</groupId>
            <artifactId>surefire-junit47</artifactId>
            <version>2.19.1</version>
        </dependency>
    </dependencies>
    <configuration>
        <groups>com.test.annotation.type.IntegrationTest</groups>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
            </goals>
            <configuration>
                <includes>
                    <include>**/*.class</include>
                </includes>
            </configuration>
        </execution>
    </executions>
</plugin>

The downside to this approach is that you have to write that IntegrationTest.java interface file into each module you plan to use integration tests in.  If you have a multi-module Maven project, it violates the principle of DRY.  Plus, it seems to involve more XML (or at least more complex XML) than the previous methods, and dependencies on surefire-junit47 and org.junit.experimental.categories.Category that you wouldn’t otherwise need.

Saturday, April 2, 2016

Arduino Day Special: Make an EPROM Tester with an Arduino Mega and Octal Latch

I could have just asked around to see if anyone had an EPROM validator, but why ask when you can spend several hours doing it yourself, and then several more hours writing in pedantic detail about it?  Of course, I must have the DIY bone...

Who still uses EPROMs, anyway?


While working on old solid-state pinball machines from the 1980s and late '70s, you might run into a situation where a dead ROM chip needs to be replaced.  Certain types of machines (I'm looking at you, all you Gottlieb System 80's) suffer a problem where coils can get locked on due to bad grounding design throughout the system, which then causes transistors and all sorts of other things on the driver board (and even possibly the main board) to fry themselves.  In other cases, battery corrosion might leach into the ROM chip and compromise it.  Whatever the case, you might find yourself in need of new ROMs at some point.

Now I could easily go and find new ROMs for my game, order them, and call it a day -- oh wait, I did mention System 80, didn't I?  Well it turns out Gottlieb (or the remnants thereof) is very picky about their licensing and who can sell related products, and the one legitimate source of the game ROM wants $50 for it.  I'm sorry, but I'm not paying that much.  I'll just get my own ROM chips and try to find a way to get the source code.

Now there are two things you need to do before plugging a new EPROM into a device:
  • Make sure it is erased
  • Program it with your new program
In both steps, you probably want to make sure the job was done correctly, no?  It would not be great to discover the program either didn't burn correctly, or couldn't burn correctly because there were already some zeros living on the EPROM that don't happen to line up with the zeros in your program.  Now again, I'll pose the question I asked you at the top. ;)

Sanity Check


Before going down the rathole of doing this myself and having to do both the hardware setup and software programming (let's face it, wiring by itself takes enough time), I wanted to see if anyone had attacked this problem before.  I found, besides various forum posts that don't offer a complete solution, someone's GitHub code where they had utilized three different I/O registers on the chip to make this happen.  That's all fine and dandy, and was in fact the solution I was about to implement for myself... until I looked a little bit closer at the choice of I/O registers used and what the names of some of the pins were.

The ATmega2560 chip featured on the Arduino Mega happens to have outputs for /RD, /WR, and ALE.  I also noticed one register whose pins were labeled AD[7:0] and then another one whose pins were simply labeled A[15:8].  This evoked memories of my 8051 Microcontroller class in college (no, I swear I'm not that old yet!), and I realized this implies the chip can somehow multiplex its output of the first 8 address bits with the input (i.e. the data line) coming from the EPROM itself.  So, yes, it is in fact possible to use only two I/O registers on the Arduino Mega in order to read/write to an external chunk of memory. 

However, before you get started, note this approach requires access to a 74x373 or 74x573 octal latch chip whose timing specifications comply with the requirements mentioned on page 28 of the ATmega2560 datasheet.  The only difference between the 373 & 573 is the pinout, so use whichever you think will be more convenient for your end result (most people pick the 573 for that reason).

Don't Forget To Register For This Service


I turned to the ATmega2560 datasheet and found the simple steps on how to do this.  In order to let the chip take total control of the PORTC (A[15:8]) and PORTA (AD[7:0]) registers plus the /RD, /WR, and ALE signals, so you don't have to worry about driving them yourself or changing input states, you need to be concerned with the two registers XMCRA and XMCRB.  These control the behavior of the XMEM (eXternal MEMory) functionality on various AVR chips, including the ATmega2560.

Paraphrased from the ATmega2560 datasheet starting on page 36:

XMCRA has the following settings:
  • SRE (Bit 7): Set to enable the XMEM interface.  If you want to do anything described in this post at all, you must set this bit to 1.
  • SRL[2:0] (Bits 6:4): The Wait-State Sector Limit.  If you are worried about valid data not being ready from your EPROM quickly enough given the clock speed of your AVR, you can add wait states, and even specify to a degree which address ranges get which wait states.  For my case, I dictated that all of the external memory shall live in one sector governed by a single wait-state setting, so I set SRL[2:0] to 000b.
  • SRW11, SRW10 (Bits 3:2): Wait State Select 1.  Since I am paranoid, I set these bits to 11b so it would enforce the maximum wait.
  • SRW01, SRW00 (Bits 1:0): Wait State Select 0.  Since I lumped all of external memory into one sector governed by Wait State Select 1, the values of these bits don't matter.
XMCRB has the following settings:
  • XMBK (Bit 7): External Memory Bus-keeper Enable.  When this bit is set, the chip will retain the most recent value seen on the bus, even when another device would have set the lines to high Z.  This means the address hangs around on PORTA after ALE goes low (normally the address would be wiped out as the bus goes high Z for just a bit before the data is driven onto the port).  Also, the data from the EPROM hangs around on PORTA after /RD goes high (normally it would get wiped out as the bus goes to high Z before the AVR writes the next address).  Basically it acts like a smart latch that you don't have to toggle yourself, and in fact, you can activate this feature on PORTA without necessarily using the rest of the XMEM interface simply by setting this bit.
  • Reserved (Bits 6:3): Leave these alone.
  • XMM2, XMM1, XMM0: External Memory High Mask.  These bits determine how much of PORTC is given back to you for regular GPIO use.  If you have a device smaller than 64K words, then obviously you won't need (and it probably doesn't even have inputs for) all 16 address lines.  For example, my 2764 chip (an 8 KB EPROM: 8K words * 8 bits/word = 64 Kbits = 8 KB) only uses 13 address lines, so I can set these XMM[2:0] bits to 011b and regain the regular use of PORTC[7:5] if desired for my usual reads from sensors, driving robot controllers or LEDs, or other general shenanigans.
You can see how I finally chose to set these registers in the code example down at the bottom.  Later on, I will also describe the instructions you have to send to the chip in order to get it to read memory, including exactly how to send a memory address to the EPROM through A[15:0].
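As a preview, here are those two assignments pulled out of the full listing, with the bit fields spelled out (these values reflect my choices above, not the only valid ones):

  XMCRA = 0b10001100;  // SRE=1 (XMEM on), SRL[2:0]=000 (one sector),
                       // SRW11:10=11 (max wait), SRW01:00=00 (don't care)
  XMCRB = 0b10000011;  // XMBK=1 (bus-keeper on), reserved bits 0,
                       // XMM[2:0]=011 (PORTC[7:5] freed for GPIO)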

Another important caveat mentioned in the datasheet discusses exactly how memory addressing works.  Since the ATmega's own memory occupies addresses 0 through 0x21FF, you can use the principle of aliasing to access the beginning of your EPROM.  Without aliasing, these bytes would be masked by the ~8KB of internal SRAM plus other MMIO/PMIO on the AVR.  Thus, to read the first 8,704 bytes of your EPROM, you actually need to start reading at memory address 0x8000 (which aliases to EPROM address 0x0000).  Also, if you have a ROM whose size is >32K words (e.g. a 27512 EPROM chip), there are other special considerations you need to make as well.  This is explained in more detail on pages 31 & 32 of the datasheet.

Making Connections


Next up is actually wiring up everything on the breadboard to the Arduino Mega.  (You do remember I'm still using an Arduino despite talking about all the mumbo-jumbo from the ATmega datasheet, yes?)  The wiring diagram to use is shown in that datasheet on page 28, Figure 9-2.  Also note that the 2764 datasheet (at least the one I was using) mentions that its /G line should be hooked up to the /RD line of the memory controller (thus saving me from trying it on something else and being disappointed).  Also, when the ATmega2560 datasheet mentions that the latch should be transparent to the EPROM and/or AVR when G is high, that means ALE on the AVR should be hooked up to G on the latch, not /E, since you don't want the latch to ever output high-Z.  The latch should either be propagating D (the latch input) through Q (the latch output) while ALE is high (i.e. what they mean by "transparent"), or else holding on Q whatever D was at the moment ALE went low, for as long as ALE remains low.

Besides Figure 9-2, which you can open up for yourself, here's a table of the same information:

MCU        Latch      EPROM
/RD                   /G
AD7:0      D7:0
AD7:0                 D7:0
           Q7:0       A7:0
A12:8                 A12:8
ALE        G

And here's a picture of my final setup:




Assembled In the USA


Yes, a mark of quality indeed... Anyway, if you've gone this far, why not write a little bit of assembly code just to put your effort over the edge into ridiculousness?  Because I am lazy and I use Windows mostly for AVR development, I still use the plain ol' Arduino IDE and blend assembly with C code (also I think it's fun to fly in the face of all the haters of basic Arduino stuff).

The macro for running assembly code inside C is called asm(), and each line of assembly can go into a double-quoted string that can be chained back-to-back without commas (but doing multi-line asm() calls is a bit outside the scope of this post).  Adding the volatile keyword tells the compiler that the statement has effects it can't see, so it must not be optimized away or hoisted out of a loop; it needs to be rerun with whatever new values might have been loaded into the variables representing its arguments.  Without the volatile keyword, you might run a loop from 0 to 32767 with the intent of accessing the ith element of the EPROM, but only ever access the 0th element, because the compiler "optimized" the assembly on the assumption that the address argument never changes.  Whoops!

I started with the instruction lds (Load Direct from SRAM) to fetch external memory.  It takes two arguments: a register (any one register from r0 to r31 will do) and a constant.  This constant must be hard-coded into your assembly statement and cannot be supplied by a variable.  Unfortunately, this doesn't really facilitate testing unless you want to write a really long unrolled loop!
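For instance, a one-off spot check of the EPROM's first byte might look like this (0x8000 aliases to EPROM address 0x0000, as described above):

  asm volatile("lds r16, 0x8000");  // the 0x8000 must be a literal, not a variable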

Fortunately, there are instructions in assembly that allow you to store the memory address into a register, read the memory address indicated by the register, and then post-increment or pre-decrement that number for you so you don't even have to worry about updating the index.  Specifically, registers R26 through R31 handle this.  The odd-numbered registers store the high byte of the 16-bit memory address, and the even-numbered registers store the low byte.  For a diagram, check Figure 7.5.1 on page 14 of the ATmega datasheet.  These six registers represent three 16-bit special registers called X, Y, and Z.  In my code, I use Y (r28 & r29) because it worked most reliably out of the three.

At Last... The Code!


Note: Be sure you have selected the "Arduino/Genuino Mega or Mega 2560" board in the Arduino IDE, or else it will not load the appropriate header files and will complain that XMCRA and friends are undefined.

/*   Note: If you want to test the boundary conditions, 
 *    the last address of internal SRAM is 0x21FF and the 
 *    first address of external SRAM is 0x2200, which also
 *    actually corresponds to address 0x2200 on the SRAM.
 *    To hit the very first address of the SRAM (0x0), 
 *    you must take advantage of aliasing by reading from
 *    0x8000 to 0xA1FF.
 *    
 *    The following code demonstrates writing to an
 *    internal register and will fail to write to the first 
 *    available address of an EPROM:

  asm volatile("ldi r16, 0xFF");
  asm volatile("sts 0x21FF, r16");
  asm volatile("sts 0x2200, r16");

 */
uint32_t i;
volatile unsigned int c, d;

void setup() {
  XMCRA = 0b10001100;
  XMCRB = 0b10000011;
  Serial.begin(115200);
}

void loop() {
  delay(1000);  // this helps avoid garbage at the beginning
  /*
  // This part proves the auto-increment feature is working
  // and that the first 10 bytes are indeed being read correctly
  asm volatile("ldi r28, 0x00");  // YL
  asm volatile("ldi r29, 0x80");  // YH

  for (i = 0x8000; i < 0x800A; i++) {
    asm volatile("sts (d), r28");
    asm volatile("sts (d + 1), r29");
    Serial.print("Contents of address ");
    Serial.print(d);

    asm volatile("ld r0, Y+");
    asm volatile("sts (c), r0");
    Serial.print(": ");
    Serial.println(c, HEX);
  }
  */

  asm volatile("ldi r28, 0x00");  // YL
  asm volatile("ldi r29, 0x80");  // YH

  for (i = 0x8000; i < 0xA000; i++) {  // for an 8KB EPROM
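    // "ld r0, Y+" reads the byte at the external address held in Y into r0
    // and post-increments Y; "sts (c), r0" then copies r0 into the low byte
    // of the global c so the C code below can print it.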
    asm volatile("ld r0, Y+");
    asm volatile("sts (c), r0");
    // The following prints out hex in the format
    // FF FF FF FF  FF FF FF FF  FF FF FF FF  FF FF FF FF
    if (c < 16)
      Serial.print(0);
    Serial.print(c, HEX);
    Serial.print(" ");
    if (i % 16 == 3 || i % 16 == 7 || i % 16 == 11)
      Serial.print(" ");
    if (i % 16 == 15)
      Serial.println();
  }

  while (true) {
    // spin lock
  }
}


Reference Materials


This article would not be possible without the help of the following:

ATmega2560 Datasheet
AVR Instruction Set Manual
Introduction to AVR assembler programming for beginners
GCC inline assembler cookbook

Thursday, March 31, 2016

More Reviving Old Computers

Since the last time I wrote, I have been extremely busy preparing for Texas Pinball Fest 2016 -- tried to get four machines ready, then three, and then... oh well... only the two I had working to begin with were actually working by showtime.  Meanwhile, I had started to investigate a couple other projects, but never got anything going nicely enough to warrant a blog post.

Now that Texas Pinball Fest is over, and I swear my games are acting like rabbits (if you know what I mean ;), I'm trying to step back and work on some of the projects I had going before this massive push to restore a bunch of pinball machines happened.  However, I have another great big push for this weekend to get prepared for the North Dallas Area Retrocomputing meetup.  I've had vintage computers hanging around me since they were new, but have acquired some "new" stuff since around Thanksgiving (especially from Fred's "Warehouse of Wonder"), and need to get all the new acquisitions nice & shiny & displayable.  And if you read thoroughly, I'll treat you to some shareable work I did to make this happen.

The first system on the docket is an IBM PS/2 Model 25 286.  It had a crushed PS/2 port in the back.  Fortunately, there is a store very close to my house that sells all kinds of modern and surplus electronic parts, and it had plenty of compatible PS/2 ports in stock.  With a couple hours' worth of work, I replaced the sad old jack and was able to boot up the computer for the first time in who knows how long.  The only problem is that I'm striking out on all the floppy disks & drives from Fred's so far, and this was no exception; the computer complains of a floppy disk or controller error.  I will need to plug in my HxC2001 floppy drive emulator to diagnose the problem further.  Nevertheless, it runs Cassette BASIC like a champ...

My IBM PS2 Model 25 286 not doing much
Here's its first sign of life in a while.

By the way, if anyone has a source for faceplates to cover up the two holes there in the front, or has an STL file where I could print some new ones out, that'd be appreciated.

After that, I worked backward and attempted to power on the IBM 5162 (XT 286) for the first time since receiving it.  All the Nichicon & Rubycon capacitors in the power supply looked to be in excellent condition, so I didn't bother replacing anything despite having acquired all the right components a couple months ago.  At first, it fired up but only showed the memory count.  After rearranging the expansion cards a bit, it beeped differently at POST and started showing helpful diagnostic messages as well.  After trying several combinations of OS disks and drives, I finally reverted to my trusty HxC2001 and it loaded DOS 3.3 like a charm.  To my surprise, it actually remembered the set date & time for a few minutes after a power cycle, but ultimately I do need to put in a replacement BIOS battery (which I also sourced locally, but just lack a cradle for at this time).

The 5162 comes with more flexible setup options than the 5150 (e.g. it actually has a battery because it needs to remember more settings than you can cram onto two 8-bit DIP switches), but unfortunately, you need a setup disk to reconfigure the BIOS.  They didn't make it easy; there's no setup program in ROM that comes up when you hit F2 or DEL.  You must have the bootable setup disk, or if it won't boot, load an OS first and then run the setup program manually later.  It turned out to be quite a hassle to get the setup disk going as an HFE image for the HxC, but eventually my perseverance paid off.

On the off chance you have an HxC floppy disk emulator and want to get started quickly with an OS or the setup disk in 5.25" formats, which they don't seem to offer on their site, save yourself some hours and make use of the ones I already spent creating these:

IBMDOS11.HFE - This is IBM DOS 1.1 as a 180K (40-track single-sided 5.25" disk) image.  Experience the earliest days of Microsoft's famous operating system.

MSDOS33.HFE - This is MS-DOS 3.3 as a 360K (40-track double-sided 5.25" disk) image.  There are many more features already than with DOS 1.1.

MDA_GAMES1.HFE - This is a curated collection of games (as a 360K 5.25" disk) that have been tested to work on an IBM 5150 with MDA graphics adapter and 5151 monochrome (green-screen) monitor.  Some games are executables, others have to be loaded from BASIC.  As it's a 360K disk image, it will not work with DOS 1.1.

ATADSETUP.HFE - This is an image of the "Diagnostics for the IBM Personal Computer AT" floppy (80-track double-sided 5.25" disk).  The original source of the disk image is minuszerodegrees.net (a great site for early IBM PC system info), but it was not straightforward to convert the IMG into an HFE file.  Also, this disk image wouldn't boot (maybe it needs to be set to 96 tracks instead of 80?), but after loading DOS 3.3, I was able to switch to this disk image and run SETUP.COM (or whatever the executable is named to kick off the setup & diagnostics program).

Sorry, I forget what the original sources of the other materials were; in many cases, I had to do digging & tweaking periodically over many weeks to get all these things to work.

The final story is that of the Mac Plus.  I hadn't forgotten about it while rummaging through Fred's looking for parts: when I stumbled across a SCSI enclosure containing an 8x CD-ROM drive, I thought that enclosure might make a perfect candidate for a hard drive enclosure for the Mac Plus.  And sure enough, Fred's was a treasure trove of old SCSI equipment; I was able to find tons of SCSI-2 and SCSI-3 drives, cables, and terminators free for the taking.  (I only took five drives. :-P)  Combine that with some system disks & utilities purchased from rescuemyclassicmac.com, and now I have a 1GB hard drive (unfathomable in 1986, in 3.5" form factor no less) on my 1986 Macintosh.  That'll make a great scratch disk for Photoshop 1.0. >-D

Anyway, the procedure for using non-Apple hard drives with the Mac Plus is pretty well documented; the hardest parts for me were getting a hold of the Utilities disk (luckily the site I mentioned above sells disks with the already-patched version of the disk formatter utility) and actually swapping the CD-ROM drive for the hard drive in the enclosure.  Other than that, with the factory settings on the Seagate ST51080N drive along with the IBM-branded external SCSI enclosure, the Mac picked up on the SCSI hard drive right away.

1,052,733 K of available disk space
Ohmigawd, that's like ALL TEH K's EVER MADE.  They're right here.  This would have made one helluva scratch disk for Photoshop 1.0 back in the day. :-P

Now the kicker is I have compatible 2GB and 4GB SCSI drives (the other two I picked have a 68-pin SCSI-3 interface and are not compatible with the enclosure).  So yes, I could have EVEN MOAR K's.  Maybe one of my other drives could be allocated toward an A/UX installation (if such a thing actually exists)... but wouldn't you need an order of magnitude more RAM than what the Mac Plus supports to get it working well, not to mention some kind of domain controller?  Who knows...

Unfortunately, I have not been able to install System 6 on it just yet because one of the System 6 installation floppies seems to be corrupt and won't let me proceed with the installation.  Nevertheless, hopefully I can still set this HDD as my scratch disk for Photoshop 1.0 so I can actually run a "Who can make the best art?" contest at the retrocomputing meetup this Saturday.  Besides, without the HDD running the system and programs, I can amuse people with how many times you have to swap the damn floppy disks just to load a program, or sometimes even to load menu options within a program.

Thursday, January 28, 2016

Making 3D Parts in VCarve, a 2D Editor

In my quest to get certified to use the MultiCam CNC router table at the local Makerspace, I need to create some kind of part that requires use of at least a couple different types of end mills, plus do various cuts such as pockets, profiles, V-carves, engraving, and so on.

First, a bit about the Makerspace movement, in case you haven’t heard: Makerspaces (or Hackerspaces) are places set up for community good that allow people to share knowledge, resources, and tools to help further DIY activities such as programming, electronics, 3D printing, machining, or other activities where people make something.  Makerspaces come in various flavors: some are set up as startup incubators, others as for-profit centers where paid employees build things for people, and still others where members mostly come to work on something completely different from work.  The tools one would find at a makerspace were traditionally owned by companies or by individuals who have spent a long time honing their craft; as such, they are left for very few people to use either in someone’s garage or during working hours.  Through the power of crowdsourcing money from makerspace members through dues and/or fundraising drives, makerspaces can afford tools to share with the community (that is, anyone willing to become a member) and offer training around getting the most out of these tools, not to mention proper use and safety.  People who live in small apartments or are otherwise constrained for space or don’t have thousands of dollars for tools now have access to all sorts of tools that would be impractical for them to own outright.  This attracts talent and people with ideas who often form groups that can do much more than any one individual can on their own, though there are still lots of individual projects happening at makerspaces as well.

Tabling VCarve for the moment


Our MultiCam 3000 CNC router table, like most other CNC milling machines, 3D printers, and similar devices, requires information sent to it in the form of G-code.  This flavor of code specifies things like feed rate, spindle speed, and tool position so the machine will mill (or extrude plastic, etc.) in the location you want at the time it needs to be there, hopefully without breaking the tool from too much stress or damaging the machine.  The toolchain we use at the Makerspace to produce the required G-code for the table mill involves a program called VCarve.  It is a nice program that allows you to design your part and produce the G-code to run on the machine to make it.

VCarve is great for designing fairly simple projects.  This can take you a long way, because what is “simple” in the world of milling can often yield astonishingly detailed and fantastic results, usually by use of the actual operation called “V Carve” (which of course the program VCarve can help you do).  Even a pinball playfield could count as a simple part using this metric.  However, the part I want to make for my test is essentially a replica of the classic Nintendo game controller for the NES, which involves several contoured buttons.  Look closely at the controller, and you will see that the A and B buttons are slightly concave so as to cradle your fingertip nicely.  The D-pad has directional arrows imprinted into the plastic and a hemisphere carved out of the center, not to mention each direction bends slightly upward out of the center to give you nice tactile feedback for exactly where to press to move in that direction.  After trying hard to make these types of 3D cuts (which VCarve doesn’t support inherently) using ramps and other effects, I temporarily gave up on VCarve.

Ok, so what else is there to make the 3D shape?


Versions of VCarve prior to 8.0 don't support any 3D objects whatsoever.  Luckily, my Makerspace has VCarve Pro V8 available for us to use.  With its license, I am able to import one STL file or bitmap image for it to translate into G-code.  I created the contoured buttons using Blender in three simple steps:
  • Use negative Boolean operations to subtract a large sphere from a small cylinder to create the slight contour of the A & B buttons (then import this button into VCarve and add two instances of it)
  • Use transformations to slightly elevate desired edges of basic cubes to make the contoured D-pad shape
  • Use transformations on cubes to create arrows, and then negative Boolean operations to subtract the arrows from the D-pad shape
While on the topic of Blender, here are two other quick hints about it:
  • When doing a negative Boolean operation, the negative shape does not immediately disappear (unlike other 3D rendering environments I’ve worked with).  You have to move the negative shape out of the way in order to see the imprint it made into the positive shape.  Otherwise, you’ll think the negative Boolean operation is not working, attempt to upgrade Blender to the latest version, and find that doesn’t even help either.
  • When exporting to STL format, you need to have all the desired elements of your object selected while in Object Mode.  Otherwise, Blender won't export anything in your STL file and VCarve will complain that your STL file is invalid because it contains nothing.

Bringing It All Back In


[Edited 6/3/16]

Once you have made the STL files with Blender, import them into VCarve by going to “Model” -> “Import Component / 3D Model”.  Remember that with certain license grades, you might only be able to import one STL file or bitmap per part.  It's not hard to get around this:

  • If all you need are objects imported from Blender (or whatever 3D program you use), then just merge all the objects into a single STL file.
  • Otherwise, make n files to describe the same part, but only import one STL or bitmap into each file.  Place the imported 3D object in the desired location on the part by itself.  Generate the needed G-code outputs for each file, and then either merge the G-code files by hand or just have the machine run all the G-code files sequentially, possibly without even changing the tool.

Anyway, once you import an STL file, VCarve will give you several options regarding how to treat it.  The most interesting ones to me are:
  • Model Size: Allows you to scale your model if you used the wrong units in Blender or want to make a last-minute adjustment.
  • Zero Plane Position in Model: This specifies where the origin is placed with respect to your 3D model.  You can put the zero plane at the top of your model so that all drill depths are negative with respect to the zero plane (just make sure to un-check the checkbox to discard anything below the zero plane).  You can also put it at the bottom of your model, or anywhere in between.

Once you confirm your selections by clicking OK, it should switch you right away to the Modeling tab.  Double-click on the item you just imported in the Component Tree.  Now, pay close attention to two options:
  • Shape Height: This gives you another opportunity to scale the Z-height of your 3D model in case you think it needs to be altered.  Experimenting with this before your "final cut" could help you optimize the "look and feel" of your 3D part.
  • Base Height: This sets how far from the base of the material the part will be milled.  This is where your Zero Plane position comes into play.  If your zero plane position is at the top of the model, you can leave the Base Height at zero and the top of the model will be flush with the top of the material.  If your zero plane position is at the bottom of the model, then the Base Height must be set to (Material Height - Model Height) in order for the top of the model to be flush with the top of the material.  Otherwise, the model will be cut too low or too high, causing the mill to do extra work or to not cut everything you expected it to.

For instance, I imported the NES buttons with the zero plane at the very bottom of the model.  With a material height of 19.7 mm and a model height of 12 mm, I had to specify the base height as 7.7 mm in order to make the edge of the buttons exactly flush with the top of the material, so that no extra material would be cut away and none of the details would be chopped off by being above the top of the material.  Now if you want shallower buttons (or a shorter part in general), feel free to set the base height to something less.  In retrospect, though, I probably could have imported them with the zero plane at the top, un-checked the checkbox, and then not worried about any math other than not setting the shape height to more than 19.7 mm.

With your 3D object in the right place with its desired specifications, you can now treat it like any other vector object.  Since I merged a lot of things together into one STL file, I first made vectors describing the outlines of where the 3D parts should go in order to line up the 3D object exactly as I wanted it.  You can either use these same vectors in your pockets and profiles if you need to cut out/around the 3D parts, or you can delete these and create new vectors from the actual 3D model if needed.  This way, you are guaranteed that the vectors will truly follow the 3D part.  How to do this is mentioned below.

IMPORTANT SAFETY NOTE: Before making a 3D cut, slow down the feed rate on the machine's control panel, because the machine will plunge to cut the deepest part first and will not do any pocketing beforehand.  The first time I attempted to cut a 3D part, the tool plunged so deep and moved around so forcefully that, despite the table vacuum being run, the part still moved around the table quite a bit.  It seemed dangerous and could have resulted in a broken tool.

It is important to adjust the feed rate on the machine itself if you can, and not in the G-code.  If you set the feed rate too low in the G-code, it might seem tolerable while you're watching the deep cuts get made, but watching the machine cut the shallower sections of the part will be very tedious and boring.

Two things you might want to do with your imported 3D object:
  • To actually mill the shape you imported, you need to select the 3D object in the drawing tab and then select the “3D Finishing Toolpath” operation in the Toolbox at right.  Remember to mind your feed rate as discussed above so that the tool doesn't just shake your material around and mess up the cut, or (worse yet) break itself.
  • You might want to profile this shape (i.e. cut material around it so it exists by itself) as well.  If so, create a vector of its outline by going to the “Modeling” tab at left, selecting the desired instance of your 3D model, and clicking the “Create vector boundary from selected components” button under “3D Model Tools”.  Then, with that vector selected, select the “Profile” operation in the Toolbox at right.  Finally, you can describe all the desired parameters of your Profile operation as usual.  Remember to do the Profile last; otherwise, you would be cutting into a piece that is mostly disconnected from the rest of the material, and it could come loose.

Notice: These instructions are based on my observations and are not a substitute for proper education on the VCarve program or on using a milling machine.  I cannot be held liable for any damage to materials or tools, or for personal injury.  It is always up to you to properly verify that the instructions you give to the machine will produce the expected output before you begin milling.

What kind of 3D models did you import into VCarve and then have milled?  Take a moment to show them off here!