Thursday, May 19, 2016

Moments Inside Google I/O 2016

I was invited to attend Google I/O 2016 at pretty much the last second -- only about 10 days before the start of the event.  This is my first time here, though my wife DoesItPew has been here twice before.  We got to attend the convention together for once, and now, at dusk, I'm dodging the cold wind blowing in over the hill at the Shoreline Amphitheater in Mountain View, CA.

Prior to the event were many different Day Zero parties, including a really large one thrown by Intel.  As an Intel Innovator (though I rarely talk about that because I suck at publicizing myself, and/or I mostly use their products on projects covered by NDAs), I got to show off some new hardware prototypes I'd made recently, and impressed several people who are fond of homemade hardware and/or LED products.  Unfortunately, my favorite prototype died on Wednesday after the Google I/O keynote; it probably got shorted when one of the power wires poking through my shirt decided to come loose and touch the other power wire.  Now all it does is burn my finger really badly whenever I touch the microcontroller after plugging it in.  Oh well, I can just replace it when I get home.  Meanwhile, no extra attention for me... :-(  I also got to meet some fellow employees of my company from different sites I don't usually interact with (and tour their two offices in the city), and met a couple people from a relatively recent acquisition we made in San Francisco.

As you may have heard, the lines to attend many of the sessions were absolutely ridiculous and caused people to miss out on things they wanted to see.  That, combined with the nearly 90-degree weather here in the Bay Area, led to a lot of unhappy, hot, and sunburned folks.  The folks I met from our acquisition were so disappointed with the lines that they bailed after the first hour and vowed to watch I/O only via the live streams!  They wouldn't even bother showing up in person on Thursday or Friday.  But as a consolation for those of us who suffered through that, and for those who weren't selected for the conference, you can (probably for the first time) experience the sessions of this I/O on the Google Developers YouTube Channel.  Thus, there's not really a reason to attend a talk unless you want to see something demonstrated live.  The real reason to spend $900 on a conference pass now is to come out and schmooze with Googlers and other enthusiastic developers deemed worthy, and to discuss lofty ideas one could develop on the backbone of all the stuff Google has come out with now and will release in the near future.  Plenty of them are around and eager to talk to you, showing off intriguing demos on anything from mobile app "point-and-click" testing to a music box made out of a revolving wooden disc and sticky notes labeled "Do", "Re", "Mi", etc., analyzed in real time by a camera and translated into text that feeds into a synthesizer.  A Google Maps API guy clarified the Terms of Service for me and stated my idea for "Maker's Markers" wouldn't violate it.  Goodbye, OpenStreetMap.  We got previews of Android Auto in various scenarios, where DoesItPew tore apart the UI with various Googlers for about 10 minutes while they took diligent notes.  (It's good to have someone so opinionated, but can you tell who wears the pants at our house? :-P)  I have spoken with people about Firebase, Project Tango augmented reality, various machine learning ideas, things I want to do relating to what I've seen here the last two days (which you can join on the new Google Spaces app), and of course progressive Web design and improving the user experience for my company's mobile offerings.


NFC "tattoos" representing my three ideas on the Big Idea Wall.  One may or may not be to rename it the Big Idea Board... doesn't that have a nicer ring to it? :-P

To see for yourself the ideas I added to the Big Idea Wall, check out these links and please join the conversation:

Don't Die Watching Android TV -- Emergency Notification Overlays
Order Fast Food on Android Auto
Optimize Meetups By Travel Time

As if I/O 2015 wasn't inspirational enough just from watching the keynote, now I will have a whole 'nother year of projects to keep myself busy after work, not to mention plenty of stuff to share with my coworkers and help build new and innovative aspects of the financial industry as it pertains to auto finance and home loans.

Tomorrow is the final day of I/O 2016, and we have yet to see what that will bring.  After that on my agenda is to hit the Bay Area Maker Faire in San Mateo and the Computer History Museum, not to mention see some family and friends who live out here on the "Left Coast."  And if that's not enough to do, once I get back late Sunday night, there's yet another conference for software engineers the following Tuesday at my workplace and then I'm playing a gig with the corporate band to celebrate the recent expansion of my workspace -- the awesome innovation lab known as The Garage.

Meanwhile, it's so friggin' cold out here...

Thursday, April 21, 2016

Run Integration Tests Separately Within Your Maven Build

There are several ways to configure Maven to run designated tests separately in a Java project.  Usually, people want to distinguish between unit tests and other types of automated tests during a build.  Unit tests are fast because you are mocking all the external services that the particular code under test is relying upon.  They’re also typically smaller than functional tests, since they are (supposed to be ;) testing a unit of code rather than an entire feature.

However, functional tests are also critical to the success of your project.  You or your managers are probably interested in seeing automated end-to-end usage of your application running constantly without errors, but how is this possible without annoying the developers as they wait for all the tests to finish?

The Maven Failsafe plugin is most helpful in separating unit tests from functional tests.  By default, it picks up tests whose filenames follow one of these patterns:

**/IT*.java
**/*IT.java
**/*ITCase.java
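
For instance, here's a minimal sketch of a functional test Failsafe would pick up by default (the class name and its contents are hypothetical placeholders):

// LoginFlowIT.java -- matches the **/*IT.java pattern above, so Failsafe
// runs it during integration-test, while Surefire's default patterns skip it.
import static org.junit.Assert.assertNotNull;

import org.junit.Test;

public class LoginFlowIT {
    @Test
    public void userCanLogIn() {
        // exercise the real, running application end-to-end here
        String session = "stub";  // stand-in for a real login call
        assertNotNull(session);
    }
}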

Of course, you can add (or even exclude) files of particular naming patterns by modifying your POM file as described in the documentation.

The Circle of Life(cycles): It Builds Us All


A very simple way to get started with Failsafe is to add the following to your POM file:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.19.1</version>
    <executions>
        <execution>
            <id>integration-test</id>
            <goals>
                <goal>integration-test</goal>
            </goals>
        </execution>
        <execution>
            <id>verify</id>
            <goals>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>

This tells Maven to bind Failsafe's integration-test goal to the integration-test phase of your build, and likewise the verify goal to the verify phase.  This means that to run your functional tests, all you need to do is run Maven with a lifecycle phase of "integration-test" or later, including the popular "mvn install".  To skip your functional tests, simply pick a phase prior to "integration-test", such as "mvn package".
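
For example, here is how the usual phase choices play out on the command line (a quick sketch, assuming the plugin configuration above):

mvn package   # stops before the integration-test phase, so Failsafe never runs
mvn verify    # runs integration-test and then verify, so functional test failures break the build
mvn install   # same as verify, then installs the artifact to your local repository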

Of course, this leaves you with the disadvantage that you can't install or deploy the application until all your tests finish, and the build won't get that far unless all your tests pass.  If you want to use "mvn install" to deploy your application to your test environment without waiting on the functional tests to complete, consider using Maven profiles.

Separation Via Profiles


In Maven, you can construct different profiles to specify different ways you want a build to work, such as running Surefire (for unit tests) versus Failsafe (for functional tests).  Here is an example of what you would put in your POM to run Failsafe only when the Maven profile with-functional-tests is specified:

<profiles>
    <profile>
        <id>with-functional-tests</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-failsafe-plugin</artifactId>
                    <version>2.19.1</version>
                    <executions>
                        <execution>
                            <id>integration-test</id>
                            <goals>
                                <goal>integration-test</goal>
                            </goals>
                        </execution>
                        <execution>
                            <id>verify</id>
                            <goals>
                                <goal>verify</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

Notice that everything within <plugin></plugin> is exactly the same as in the first example; the only difference is that Failsafe runs only when this profile is specified (on the command line with -Pwith-functional-tests).  This also gives you the benefit of limiting which environments the integration & regression tests actually run on: developers won't want to run every single functional test just to make sure the build succeeds before they can push changes to the code repository, and now the tests won't run unless the profile is specified explicitly (unless you put this under the "default" profile, and then they'll just hate you :-P).
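
To make that concrete, here's the same hedged sketch in command-line form, assuming the profile above:

mvn install                           # profile inactive; Failsafe not bound, functional tests skipped
mvn install -Pwith-functional-tests   # profile active; Failsafe runs during integration-test and verify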

Annotation As a Solution


Yet another approach suggests creating an empty interface simply for marking purposes and then using that interface as a @Category to distinguish between your test types.

You might define a file such as IntegrationTest.java:

package com.test.annotation.type;

public interface IntegrationTest {}

And then use it in a real test as such:

import com.test.annotation.type.IntegrationTest;
import org.junit.experimental.categories.Category;

@Category(IntegrationTest.class)
public class RealTest {
    // etc...
}

You then need to set up the POM so that the Surefire plugin (for unit tests) explicitly ignores your IntegrationTest type:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.19.1</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.maven.surefire</groupId>
            <artifactId>surefire-junit47</artifactId>
            <version>2.19.1</version>
        </dependency>
    </dependencies>
    <configuration>
        <includes>
            <include>**/*.class</include>
        </includes>
        <excludedGroups>com.test.annotation.type.IntegrationTest</excludedGroups>
    </configuration>
</plugin>

Also note the choice of surefire-junit47 as the test provider, since this particular provider correctly detects categories assigned with @Category.

Finally, you need to set up the POM so that the Failsafe plugin will actually run your IntegrationTest type (and only that type) during the integration-test build stage:

<plugin>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.19.1</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.maven.surefire</groupId>
            <artifactId>surefire-junit47</artifactId>
            <version>2.19.1</version>
        </dependency>
    </dependencies>
    <configuration>
        <groups>com.test.annotation.type.IntegrationTest</groups>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
            </goals>
            <configuration>
                <includes>
                    <include>**/*.class</include>
                </includes>
            </configuration>
        </execution>
    </executions>
</plugin>

The downside to this approach is that you have to write that IntegrationTest.java interface into each module you plan to use integration tests in.  If you have a multi-module Maven project, that violates the DRY principle.  Plus, it involves more XML (or at least more complex XML) than the previous methods, as well as dependencies on surefire-junit47 and org.junit.experimental.categories.Category that you wouldn't otherwise need.


Saturday, April 2, 2016

Arduino Day Special: Make an EPROM Tester with an Arduino Mega and Octal Latch

I could have just asked around to see if anyone had an EPROM validator, but why ask when you can spend several hours doing it yourself, and then several more hours writing in pedantic detail about it?  Of course, I must have the DIY bone...

Who still uses EPROMs, anyway?


While working on old solid-state pinball machines from the 1980s and late '70s, you might run into a situation where a dead ROM chip needs to be replaced.  Certain types of machines (I'm looking at you, all you Gottlieb System 80s) suffer from a problem where coils can get locked on due to bad grounding design throughout the system, frying transistors and all sorts of other things on the driver board and possibly even the main board.  In other cases, battery corrosion might leach into the ROM chip and compromise it.  Whatever the case, you might find yourself in need of new ROMs at some point.

Now I could easily go and find new ROMs for my game, order them, and call it a day -- oh wait, I did mention System 80, didn't I?  Well it turns out Gottlieb (or the remnants thereof) is very picky about their licensing and who can sell related products, and the one legitimate source of the game ROM wants $50 for it.  I'm sorry, but I'm not paying that much.  I'll just get my own ROM chips and try to find a way to get the source code.

Now there are two things you need to do before plugging a new EPROM into a device:
  • Make sure it is erased
  • Program it with your new program
In both steps, you probably want to make sure the job was done correctly, no?  It would not be great to discover the program either didn't burn correctly, or couldn't burn correctly because there were already some zeros living on the EPROM that don't happen to line up with the zeros in your program.  Now I'll pose again the question I asked at the top. ;)

Sanity Check


Before going down the rathole of doing this myself and having to do both the hardware setup and software programming (let's face it, wiring by itself takes enough time), I wanted to see if anyone had attacked this problem before.  I found, besides various forum posts that don't offer a complete solution, someone's GitHub code where they had utilized three different I/O registers on the chip to make this happen.  That's all fine and dandy, and was in fact the solution I was about to implement for myself... until I looked a little bit closer at the choice of I/O registers used and what the names of some of the pins were.

The ATmega2560 chip featured on the Arduino Mega happens to have outputs for /RD, /WR, and ALE.  I also noticed one register whose pins were labeled AD[7:0] and then another one whose pins were simply labeled A[15:8].  This evoked memories of my 8051 Microcontroller class in college (no, I swear I'm not that old yet!), and I realized this implies the chip can somehow multiplex its output of the first 8 address bits with the input (i.e. the data line) coming from the EPROM itself.  So, yes, it is in fact possible to use only two I/O registers on the Arduino Mega in order to read/write to an external chunk of memory. 

However, before you get started, note that this approach requires a 74x373 or 74x573 octal (8-bit) latch whose timing specifications comply with the requirements mentioned on page 28 of the ATmega2560 datasheet.  The only difference between the 373 and the 573 is the pinout, so use whichever you think will be more convenient for your end result (most people pick the 573 for its flow-through pinout, with inputs on one side and outputs on the other).

Don't Forget To Register For This Service


I turned to the ATmega2560 datasheet and found the simple steps for how to do this.  To let the chip take total control of the PORTC (A[15:8]) and PORTA (AD[7:0]) registers plus the /RD, /WR, and ALE signals, so you don't have to worry about driving them yourself or changing input states, you need to be concerned with two registers: XMCRA and XMCRB.  These control the behavior of the XMEM (eXternal MEMory) interface on various AVR chips, including the ATmega2560.

Paraphrased from the ATmega2560 datasheet starting on page 36:

XMCRA has the following settings:
  • SRE (Bit 7): Set to enable the XMEM interface.  If you want to do anything described in this post at all, you must set this bit to 1.
  • SRL[2:0] (Bits 6:4): The Wait-State Sector Limit.  If you are worried about valid data not being ready from your EPROM quickly enough given the clock speed of your AVR, you can add wait states, and even specify to a degree which addresses get what particular wait states.  For my case, I dictated that all of the external memory shall be governed by one single wait state, so I set SRL[2:0] to 000b.
  • SRW11, SRW10 (Bits 3:2): Wait State Select 1.  Since I am paranoid, I set these bits to 11b so it would enforce the maximum wait.
  • SRW01, SRW00 (Bits 1:0): Wait State Select 0.  Since I selected to use only one wait state, the value of these bits don't matter.
XMCRB has the following settings:
  • XMBK (Bit 7): External Memory Bus-keeper Enable.  When this bit is set, the chip will retain the most recent value seen on the bus, even when another device would have set the lines to high Z.  This means the address hangs around on PORTA after ALE goes low (normally the address would be wiped out as the bus goes high Z for just a bit before the data is driven onto the port).  Also, the data from the EPROM hangs around on PORTA after /RD goes high (normally it would get wiped out as the bus goes to high Z before the AVR writes the next address).  Basically it acts like a smart latch that you don't have to toggle yourself, and in fact, you can activate this feature on PORTA without necessarily using the rest of the XMEM interface simply by setting this bit.
  • Reserved (Bits 6:3): Leave these alone.
  • XMM2, XMM1, XMM0: External Memory High Mask.  These bits determine how much of PORTC is given back to you for regular GPIO use.  If you have a device smaller than 64K addresses, then obviously you won't need (and it probably doesn't even have inputs for) all 16 address lines.  For example, my 2764 chip (8K words * 8 bits/word = 64 Kbits = 8 KB) only uses 13 address lines, so I can set XMM[2:0] to 011b and regain regular use of PORTC[7:5] if desired for my usual reads from sensors, driving robot controllers or LEDs, or other general shenanigans.
You can see how I finally chose to set these registers in the code example down at the bottom.  Later on, I will also describe the instructions you have to send to the chip in order to get it to read memory, including exactly how to send a memory address to the EPROM through A[15:0].
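
As a quick aside, here is a minimal sketch of just that register setup, spelled with the datasheet's bit names instead of raw binary (the function name is my own; the values are equivalent to the ones in the full listing):

#include <avr/io.h>

// Bring up the XMEM interface as configured above.
void enableXmem() {
    // SRE enables the XMEM interface; SRL[2:0] = 000b keeps all external
    // addresses in one wait-state sector; SRW11:SRW10 = 11b is the maximum wait.
    XMCRA = _BV(SRE) | _BV(SRW11) | _BV(SRW10);    // same as 0b10001100
    // XMBK enables the bus-keeper; XMM[2:0] = 011b releases PORTC[7:5] for
    // regular GPIO, leaving 13 address lines for the 2764.
    XMCRB = _BV(XMBK) | _BV(XMM1) | _BV(XMM0);     // same as 0b10000011
}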

Another important caveat mentioned in the datasheet discusses exactly how memory addressing works.  The ATmega's own registers, I/O space, and ~8KB of internal SRAM occupy addresses 0 through 0x21FF, so external memory in that range is masked; you use the principle of aliasing to reach the beginning of your EPROM instead.  Thus, to read the first 8,704 (0x2200) bytes of your EPROM, you actually need to start reading at address 0x8000.  Also, if you have a ROM larger than 32K words (e.g. a 64 KB 27512 EPROM), there are other special considerations to make as well.  This is explained in more detail on pages 31 & 32 of the datasheet.
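
To make the aliasing concrete, here is a tiny hedged helper (the name and shape are mine, not from the datasheet) that maps an EPROM offset to the address you would actually read through the XMEM interface:

// Translate an EPROM offset into the AVR data-space address to read.
// Offsets below 0x2200 are shadowed by internal memory, so alias them up
// above 0x8000 (a small EPROM doesn't see A15 anyway); higher offsets can
// be read at their natural address.
uint16_t epromToAvrAddress(uint16_t offset) {
    return (offset < 0x2200) ? (0x8000 + offset) : offset;
}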

Making Connections


Next up is actually wiring everything up on the breadboard to the Arduino Mega.  (You do remember I'm still using an Arduino despite all the mumbo-jumbo from the ATmega datasheet, yes?)  The wiring diagram to use is shown in that datasheet on page 28, Figure 9-2.  Note that the 2764 datasheet (at least the one I was using) mentions that its /G line should be hooked up to the /RD line of the memory controller (thus saving me from trying it on something else and being disappointed).  Also, when the ATmega2560 datasheet says the latch should be transparent to the EPROM and/or AVR when G is high, that means ALE on the AVR should be hooked up to /G on the latch, not /E, since you don't want the latch to ever output high-Z.  The latch should either be propagating D (the latch input) through Q (the latch output) while ALE is high (that's what they mean by "transparent"), or holding on Q whatever D was at the moment ALE went low, for as long as ALE remains low.

Besides Figure 9-2, which you can open up for yourself, here's a table of the same information:

MCU      Latch    EPROM
/RD      -        /G
AD7:0    D7:0     -
AD7:0    -        D7:0
-        Q7:0     A7:0
A12:8    -        A12:8
ALE      G        -

And here's a picture of my final setup:




Assembled In the USA


Yes, a mark of quality indeed... Anyway, if you've gone this far, why not write a little bit of assembly code just to put your effort over the edge into ridiculousness?  Because I am lazy and I use Windows mostly for AVR development, I still use the plain ol' Arduino IDE and blend assembly with C code (also I think it's fun to fly in the face of all the haters of basic Arduino stuff).

The macro for running assembly code inside C is called asm(), and each line of assembly goes into a double-quoted string; strings can be chained back-to-back without commas (but multi-line asm() calls are a bit outside the scope of this post).  Adding the keyword volatile tells the compiler these values are subject to change at any time, so the statement must be rerun with whatever new values have been loaded into the variables representing its arguments.  Without volatile, you might run a loop from 0 to 32767 intending to access the ith element of the EPROM, but only ever access the 0th element, because the compiler "optimized" the assembly on the assumption that the address argument never changes.  Whoops!

I started with the instruction lds (Load Direct from data space) to fetch external memory.  It takes two arguments: a register (any one from r0 to r31 will do) and a constant.  This constant must be hard-coded into your assembly statement and cannot be provided by a variable.  Unfortunately, that doesn't really facilitate testing unless you want to write a really long unrolled loop!

Fortunately, there are instructions that let you store a memory address in a register pair, read the memory address indicated by that pair, and then post-increment or pre-decrement it for you, so you don't even have to update the index yourself.  Specifically, registers r26 through r31 handle this: the odd-numbered registers store the high byte of the 16-bit memory address, and the even-numbered registers store the low byte.  For a diagram, check Figure 7.5.1 on page 14 of the ATmega datasheet.  These six registers form three 16-bit special registers called X, Y, and Z.  In my code, I use Y (r28 & r29) because it worked most reliably of the three.

At Last... The Code!


Note: Be sure you have selected the "Arduino/Genuino Mega or Mega 2560" board in the Arduino IDE, or else it will not load the appropriate header files and will complain that XMCRA and friends are undefined.

/*   Note: If you want to test the boundary conditions,
 *    the last address of internal SRAM is 0x21FF and the
 *    first address of external memory is 0x2200, which also
 *    corresponds to address 0x2200 on the external device.
 *    To hit the very first address of the device (0x0),
 *    you must take advantage of aliasing by reading from
 *    0x8000 to 0xA1FF.
 *
 *    The following code demonstrates writing to internal
 *    SRAM; the same store aimed at the first available
 *    address of an EPROM has no effect:

  asm volatile("ldi r16, 0xFF");    // load 0xFF into r16
  asm volatile("sts 0x21FF, r16");  // internal SRAM: works
  asm volatile("sts 0x2200, r16");  // external EPROM: fails (read-only)

 */
uint32_t i;
volatile unsigned int c, d;

void setup() {
  XMCRA = 0b10001100;
  XMCRB = 0b10000011;
  Serial.begin(115200);
}

void loop() {
  delay(1000);  // this helps avoid garbage at the beginning
  /*
  // This part proves the auto-increment feature is working
  // and that the first 10 bytes are indeed being read correctly
  asm volatile("ldi r28, 0x00");  // YL
  asm volatile("ldi r29, 0x80");  // YH

  for (i = 0x8000; i < 0x800A; i++) {
    asm volatile("sts (d), r28");
    asm volatile("sts (d + 1), r29");
    Serial.print("Contents of address ");
    Serial.print(d);

    asm volatile("ld r0, Y+");
    asm volatile("sts (c), r0");
    Serial.print(": ");
    Serial.println(c, HEX);
  }
  */

  asm volatile("ldi r28, 0x00");  // YL
  asm volatile("ldi r29, 0x80");  // YH

  for (i = 0x8000; i < 0xA000; i++) {  // for an 8KB EPROM
    asm volatile("ld r0, Y+");
    asm volatile("sts (c), r0");
    // The following prints out hex in the format
    // FF FF FF FF  FF FF FF FF  FF FF FF FF  FF FF FF FF
    if (c < 16)
      Serial.print(0);
    Serial.print(c, HEX);
    Serial.print(" ");
    if (i % 16 == 3 || i % 16 == 7 || i % 16 == 11)
      Serial.print(" ");
    if (i % 16 == 15)
      Serial.println();
  }

  while (true) {
    // spin lock
  }
}


Reference Materials


This article would not be possible without the help of the following:

ATmega2560 Datasheet
AVR Instruction Set Manual
Introduction to AVR assembler programming for beginners
GCC inline assembler cookbook

Thursday, March 31, 2016

More Reviving Old Computers

Since the last time I wrote in, I have been extremely busy preparing for Texas Pinball Fest 2016 -- tried to get four machines ready, then three, and then... oh well... only the two I had working to begin with were actually working by showtime.  Meanwhile, I had started to investigate a couple other projects, but never got anything going nicely enough to warrant a blog post.

Now that Texas Pinball Fest is over, and I swear my games are acting like rabbits (if you know what I mean ;), I'm trying to step back and work on some of the projects I had going before this massive push to restore a bunch of pinball machines happened.  However, I have another great big push for this weekend to get prepared for the North Dallas Area Retrocomputing meetup.  I've had vintage computers hanging around me since they were new, but have acquired some "new" stuff since around Thanksgiving (especially from Fred's "Warehouse of Wonder"), and need to get all the new acquisitions nice & shiny & displayable.  And if you read thoroughly, I'll treat you to some shareable work I did to make this happen.

The first system on the docket is an IBM PS/2 Model 25 286.  It had a crushed PS/2 port in the back.  Fortunately, there is a store very close to my house that sells all kinds of modern and surplus electronic parts, and it had plenty of compatible PS/2 ports in stock.  With a couple hours' worth of work, I replaced the sad old jack and was able to boot up the computer for the first time in who knows how long.  The only problem is that I'm striking out on all the floppy disks & drives from Fred's so far, and this was no exception: the computer complains of a floppy disk or controller error.  I will need to plug in my HxC2001 floppy drive emulator to diagnose the problem further.  Nevertheless, it runs Cassette BASIC like a champ...

My IBM PS/2 Model 25 286 not doing much
Here's its first sign of life in a while.

By the way, if anyone has a source for faceplates to cover up the two holes there in the front, or has an STL file where I could print some new ones out, that'd be appreciated.

After that, I worked backward and attempted to power on the IBM 5162 (XT 286) for the first time since receiving it.  All the Nichicon & Rubycon capacitors in the power supply looked to be in excellent condition, so I didn't bother replacing anything, despite having acquired all the right components a couple months ago.  At first, it fired up but only showed the memory count.  After rearranging the expansion cards a bit, it beeped differently at POST and started showing helpful diagnostic messages.  After trying several combinations of OS disks and drives, I finally reverted to my trusty HxC2001 and it loaded DOS 3.3 like a charm.  To my surprise, it actually remembered the set date & time for a few minutes after a power cycle, but ultimately I do need to put in a replacement BIOS battery (which I also sourced locally, but currently lack a cradle for).

The 5162 has more flexible setup options than the 5150 (e.g. it actually has a battery, because it needs to remember more settings than you can cram onto two 8-bit DIP switches), but unfortunately, you need a setup disk to reconfigure the BIOS.  They didn't make it easy; there's no setup program in ROM that comes up when you hit F2 or DEL.  You must have the bootable setup disk, or if it won't boot, load an OS first and then run the setup program manually.  Well, it turned out to be quite a hassle to get the setup disk going as an HFE image for the HxC, but eventually my perseverance paid off.

On the off chance you have an HxC floppy disk emulator and want to get started quickly with an OS or the setup disk in 5.25" formats, which they don't seem to offer on their site, save yourself some hours and use the images I already spent hours making:

IBMDOS11.HFE - This is IBM DOS 1.1 as a 180K (40-track single-sided 5.25" disk) image.  Experience the earliest days of Microsoft's famous operating system.

MSDOS33.HFE - This is MS-DOS 3.3 as a 360K (40-track double-sided 5.25" disk) image.  There are many more features already than with DOS 1.1.

MDA_GAMES1.HFE - This is a curated collection of games (as a 360K 5.25" disk) that have been tested to work on an IBM 5150 with MDA graphics adapter and 5151 monochrome (green-screen) monitor.  Some games are executables, others have to be loaded from BASIC.  As it's a 360K disk image, it will not work with DOS 1.1.

ATADSETUP.HFE - This is an image of the "Diagnostics for the IBM Personal Computer AT" floppy (80-track double-sided 5.25" disk).  The original source of the disk image is minuszerodegrees.net (a great site for early IBM PC system info), but it was not straightforward to convert the IMG into an HFE file.  Also, this disk image wouldn't boot (maybe it needs to be set to 96 tracks instead of 80?), but after loading DOS 3.3, I was able to switch to this disk image and run SETUP.COM (or whatever the executable is named to kick off the setup & diagnostics program).

Sorry I forget what the original sources of the other materials were; in many cases, I had to do digging & tweaking periodically over many weeks to get all these things to work.

The final story is that of the Mac Plus.  I was not forgetting about it while rummaging through Fred's looking for parts, as when I stumbled across an SCSI enclosure containing an 8x CD-ROM drive, I thought that enclosure might make a perfect candidate for a hard drive enclosure for the Mac Plus.  And sure enough, Fred's was a treasure trove of old SCSI equipment; I was able to find tons of SCSI-2 and SCSI-3 drives, cables, and terminators free for the taking.  (I only took five drives. :-P)  Combine that with some system disks & utilities purchased from rescuemyclassicmac.com, and now I have a 1GB hard drive (unfathomable in 1986, in 3.5" form factor no less) on my 1986 Macintosh.  That'll make a great scratch disk for Photoshop 1.0. >-D

Anyway, the procedure for using non-Apple hard drives with the Mac Plus is pretty well documented; the hardest parts for me were getting hold of the Utilities disk (luckily the site I mentioned above sells disks with the already-patched version of the disk formatter utility) and actually removing/replacing the CD-ROM drive with the hard drive in the enclosure.  Other than that, with the factory settings on the Seagate ST51080N drive along with the IBM-branded external SCSI enclosure, the Mac picked up on the SCSI hard drive right away.

1,052,733 K of available disk space
Ohmigawd, that's like ALL TEH K's EVER MADE.  They're right here.  This would have made one helluva scratch disk for Photoshop 1.0 back in the day. :-P

Now the kicker is I have compatible 2GB and 4GB SCSI drives (the other two I picked have a 68-pin SCSI-3 interface and are not compatible with the enclosure).  So yes, I could have EVEN MOAR K's.  Maybe one of my other drives could be allocated toward an A/UX installation (if such a thing actually exists)... but wouldn't you need an order of magnitude more RAM than what the Mac Plus supports to get it working well, not to mention some kind of domain controller?  Who knows...

Unfortunately, I have not been able to install System 6 on it just yet, because one of the System 6 installation floppies seems to be corrupt and won't let me proceed with the installation.  Nevertheless, hopefully I can still set this HDD as my scratch disk for Photoshop 1.0 so I can run a "Who can make the best art?" contest at the retrocomputing meetup this Saturday.  Besides, without the HDD running the system and programs, I can amuse people with how many times you have to swap the damn floppy disks just to load a program, or sometimes even to load menu options within a program.

Thursday, January 28, 2016

Making 3D Parts in VCarve, a 2D Editor

In my quest to get certified to use the MultiCam CNC router table at the local Makerspace, I need to create some kind of part that requires use of at least a couple different types of end mills, plus do various cuts such as pockets, profiles, V-carves, engraving, and so on.

First, a bit about the Makerspace movement, in case you haven’t heard: Makerspaces (or Hackerspaces) are places set up for community good that allow people to share knowledge, resources, and tools to help further DIY activities such as programming, electronics, 3D printing, machining, or other activities where people make something.  Makerspaces come in various flavors: some are set up as startup incubators, others as for-profit centers where paid employees build things for people, and still others where members mostly come to work on something completely different from work.

The tools one would find at a makerspace were traditionally owned by companies or by individuals who had spent a long time honing their craft; as such, they were left for very few people to use, either in someone’s garage or during working hours.  Through the power of crowdsourcing money from makerspace members through dues and/or fundraising drives, makerspaces can afford tools to share with the community (that is, anyone willing to become a member) and offer training on getting the most out of these tools, not to mention proper use and safety.  People who live in small apartments, are otherwise constrained for space, or don’t have thousands of dollars for tools now have access to all sorts of tools that would be impractical for them to own outright.  This attracts talent and people with ideas, who often form groups that can do much more than any one individual can alone, though there are still lots of individual projects happening at makerspaces as well.

Tabling VCarve for the moment


Our MultiCam 3000 CNC router table, like most other CNC milling machines, 3D printers, and similar devices, requires information sent to it in the form of G-code.  This code specifies things like feed rate, spindle speed, and tool position to the machine so it will mill out (or extrude plastic, etc.) in the location you want at the time it needs to be there, hopefully without breaking the tool from too much stress or damaging the machine.  The toolchain we use at the Makerspace to produce the required G-code for the router table involves a program called VCarve.  It is a nice program that allows you to design your part and produce the G-code to run on the machine to make it.

VCarve is great for designing fairly simple projects.  This can take you a long way, because what is “simple” in the world of milling can often yield astonishingly detailed and fantastic results, usually by use of the actual operation called “V Carve” (which of course the program VCarve can help you do).  Even a pinball playfield could count as a simple part using this metric.  However, the part I want to make for my test is essentially a replica of the classic Nintendo game controller for the NES, which involves several contoured buttons.  Look closely at the controller, and you will see that the A and B buttons are slightly concave so as to cradle your fingertip nicely.  The D-pad has directional arrows imprinted into the plastic and a hemisphere carved out of the center, not to mention each direction bends slightly upward out of the center to give you nice tactile feedback for exactly where to press to move in that direction.  After trying hard to make these types of 3D cuts (which VCarve doesn’t support inherently) using ramps and other effects, I temporarily gave up on VCarve.

Ok, so what else is there to make the 3D shape?


Versions of VCarve prior to 8.0 don’t support any 3D objects whatsoever.  Luckily, my Makerspace has VCarve Pro V8 available for us to use.  With its license, I am able to import one STL file or bitmap image for it to translate into G-code.  I created the contoured buttons using Blender in three simple steps:
  • Use negative Boolean operations to subtract a large sphere from a small cylinder to create the slight contour of the A & B buttons (then import this button into VCarve and add two instances of it)
  • Use transformations to slightly elevate desired edges of basic cubes to make the contoured D-pad shape
  • Use transformations on cubes to create arrows, and then negative Boolean operations to subtract the arrows from the D-pad shape
While on the topic of Blender, here are two other quick hints about it:
  • When doing a negative Boolean operation, the negative shape does not immediately disappear (unlike other 3D rendering environments I’ve worked with).  You have to move the negative shape out of the way in order to see the imprint it made into the positive shape.  Otherwise, you’ll think the negative Boolean operation is not working, attempt to upgrade Blender to the latest version, and find that doesn’t even help either.
  • When exporting to STL format, you need to have all the desired elements of your object selected while in Object Mode.  Otherwise, Blender won’t export anything in your STL file and VCarve will complain that your STL file is invalid because it contains nothing.

Bringing It All Back In


[EDIT 2/2/16] IMPORTANT NOTE: When I imported the STL files into VCarve and ran the MultiCam router, the router attempted to drill deeply into the material all at once and move the tool around so forcefully that, despite the table vacuum being run, the part still moved around the table quite a bit.  This made it very difficult to tell if what it tried to do actually worked.  However, since the hole was so deep, I believe it went way farther into the part than I told it to.  I will slow down the feed rate of the tool in 3D mode and see what happens, but for now, use caution when milling 3D parts.

Once you have made the STL files with Blender, import them into VCarve by going to "Model" -> "Import Component / 3D Model".  Remember that with certain versions, you might only be able to import one STL file or bitmap per part -- I would get around this by making n parts of exactly the same dimensions with n different STL files, placing each imported 3D object in its desired location on the part by itself, generating n different G-code outputs, and then either merging the G-code files by hand or just having the machine run all n files sequentially, possibly without even changing the tool.  Anyway, once you import the STL file, VCarve will give you several options regarding how to treat it.  The most interesting ones to me are:

  • Model Size: Allows you to scale your model if you used the wrong units in Blender or want to make a last-minute adjustment.
  • Zero Plane Position in Model Top: This allows you to describe how far into the material the 3D model should be cut.  If your model needs to be a specific height that is pre-defined by the Z height it was imported with, then adjust this parameter so that the bottom of the model touches the bottom of the material.  To line this up exactly, you could use the formula [model height - (material height / 2)] to calculate the "Depth Below Top" value if your model is at least half the height of the material.  (For instance, a model 10mm tall in 16mm-thick material would call for a Depth Below Top of 10 - 16/2 = 2mm.)
With your 3D object in the right place with its desired specifications, you can now treat it like it’s another vector object.  In my case, I actually made all the other vectors describing the controller’s profile, pockets for the buttons, etc. prior to importing the STL, so that I can finely tune how VCarve makes the G-code for the cuts.  Two things you might want to do with your imported 3D object:

[EDIT 2/2/16] IMPORTANT NOTE: Beware, as noted above, that the results I expected were not the same as what the mill produced.  What ended up happening was slightly dangerous and could have resulted in a broken tool.  If you wish to try it out, please slow down the feed rate of the tool during 3D Finishing, since it tends to plunge all the way in rather than slowly descending in passes like most other cuts do.
  • To actually mill the shape you imported, you need to select the 3D object in the drawing tab and then select the "3D Finishing Toolpath" operation in the Toolbox at right.  [EDIT 2/2/16] Until I can research other guidance to give you, make sure you set the feed rate of this operation really slow for the reasons described above.
  • You might want to profile this shape (i.e. cut material around it so it exists by itself) as well.  If so, create a vector of its outline by going to the "Modeling" tab at left, selecting the desired instance of your 3D model, and clicking the "Create vector boundary from selected components" button under "3D Model Tools".  Then, with that vector selected, select the "Profile" operation in the Toolbox at right.  Finally, you can describe all the desired parameters of your Profile operation as usual.  Remember to do the Profile last; otherwise, you would be cutting into a piece that is mostly disconnected from the surrounding material, and it could come loose.

What kind of 3D models did you import into VCarve and then have milled?  Take a moment to show them off here!

Thursday, January 7, 2016

Interrupts for the Arduino Uno - More than you might think!

Are you looking to make a program that requires a bunch of interrupts using an ATmega328 or ATmega168 chip (such as on the Arduino Uno, Nano, or Mini platforms)?  If so, you may have been disappointed by the basic documentation you can find on this matter, and tempted to buy a more advanced Arduino such as the Mega2560, Zero, or even Due.  But have you seen how much their chips cost on Mouser?  If you're looking to do a small run of boards with what you ultimately produce, you will be taken aback to find that ATmega2560-16AU chips cost over 5x more than ATmega168A-AU chips!  In fact, I just recently bought an Arduino Mega2560 for less than what you can buy its chip for.  Having seen this, I knew there had to be a better way to leverage a cheaper chip.

The Problem, In Short


I need an application that can read from multiple sensor inputs, each of which will pulse with a high-frequency square wave (i.e. toggle on & off very rapidly) upon activation of the particular magnetic device they sit next to.

Given the sensors' behavior, and that there's no guarantee each sensor will produce more than one pulse if the underlying magnetic device does not stay on for a very long time, the best way to look for the changing state of the sensors is to use interrupts that can quickly capture exactly which pin just changed state, and then during the CPU's downtime (when it's not handling interrupts), it can go and take the appropriate action given which sensor(s) just pulsed.

Another problem to battle on the road to solving my problem!


By reading the standard documentation, you might be led to believe the Arduino Uno or any other ATmega328-based platform only has two useful interrupts, existing on digital pins 2 & 3.  Specifically, it says that the attachInterrupt() function only works on those two pins.  This, however, is misleading.  In fact, any of the ATmega's I/O pins can be used as interrupts -- it only really makes a difference if you need to use an external interrupt versus simply a pin change interrupt.

An external interrupt has the capability to fire upon a state transition, such as the rising edge or falling edge of a signal.  It can also be triggered upon the change of value of an input pin, and by the signal going to logic level low.  Since external interrupts on pins 2 & 3 of the ATmega328 have different interrupt vectors (addresses where the routines related to these interrupts are stored), such interrupts on these pins are distinguishable from each other, and also distinguishable from pin change interrupts you might be listening for on those pins as well.  External interrupts also have a higher priority compared to pin change interrupts, so they will be processed first.

A pin change interrupt happens when the value of a pin changes state from low to high or vice versa.  The interrupt does not tell you what the new value is (and the value is subject to change again by the time the interrupt can be processed), and as you will read below, pin change interrupts share just a few vectors, so you might need a way to reconcile exactly which pin caused the interrupt.

Since my application only really needs to know about pin changes, especially since there's a small probability these sensors might get stuck High instead of being pulled back to Low upon the end of the magnetic pulse, I can leverage any and all of the input pins for my purpose.

Nota Bene: For my particular sensor, it drives High given one magnetic polarity, Low given the other polarity, and can become Undefined in the absence of the magnetic field.  I thought the sensor would pulse on its own each time with no further action from me, but the pulses did not actually appear until I used a pullup resistor on the input pin.  This is easily achieved in Arduino-land by supplanting INPUT with INPUT_PULLUP, as such:

pinMode(p, INPUT_PULLUP);

Once this was done, I was ready to move on. However, I faced yet another problem: the documentation would lead you to believe there are only three interrupt vectors you can use across all the pins. Here's what the Arduino Playground says about the topic:

  • ISR (PCINT0_vect) pin change interrupt for D8 to D13
  • ISR (PCINT1_vect) pin change interrupt for A0 to A5
  • ISR (PCINT2_vect) pin change interrupt for D0 to D7
Unfortunately, in my application, I need to read from at least four sensors, and a shared vector by itself doesn't tell me which of its pins actually fired.  What can I do?

The Shining Light


The Arduino documentation suggests using libraries, but links to a very scant piece of code with little documentation.  A quick Google search for this, though, yielded me a much more up-to-date and comprehensive solution: the EnableInterrupt library.  I suspected there would be an answer in here as to how to reconcile exactly which pin fired the interrupt, and sure enough, I wasn't disappointed.  It just looked a little bit different than I expected:

// These two statements must be written in this exact order!
#define EI_ARDUINO_INTERRUPTED_PIN
#include <EnableInterrupt.h>

const uint8_t p = 7;  // example sensor pin for illustration; any supported input pin works

// It's OK to initialize this to 0,
// since the library doesn't really support interrupts on pin 0
// because of the implications for serial TX/RX
volatile uint8_t pinChangeIndex = 0;

void interruptFunction() {
  pinChangeIndex = arduinoInterruptedPin;
}

void setup() {
  // put this in a loop, perhaps, to initialize more than just pin "p"
  pinMode(p, INPUT_PULLUP);
  enableInterrupt(p, interruptFunction, CHANGE);  // fire on any state change
  // ...
}

Simple, right?

It turns out that the ATmega has an internal register storing the index of exactly which pin toggled the interrupt, and this library exposes that for your consumption.  Now in your loop() function, all you need to do is branch off (i.e. write an if statement utilizing) the pinChangeIndex variable, as sketched below, and you don't have to process any of the application logic in the interrupt at all.  If you want to listen for multiple devices at once, you can replace the single uint8_t with a volatile bool[] array and then replace the interrupt function's contents with pinChanged[arduinoInterruptedPin] = true.
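
For completeness, here is a minimal hedged sketch of that consuming side; handleSensor() is a hypothetical placeholder for your application logic:

void handleSensor(uint8_t pin) {
  // hypothetical application logic -- e.g. report which sensor pulsed
  Serial.println(pin);
}

void loop() {
  noInterrupts();                  // briefly pause interrupts so the
  uint8_t pin = pinChangeIndex;    // read-and-clear below stays atomic
  pinChangeIndex = 0;              // re-arm for the next pulse
  interrupts();

  if (pin != 0) {                  // 0 means no interrupt seen yet
    handleSensor(pin);
  }
}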

Incidentally, the volatile keyword in this context tells the compiler that the variable may change outside the normal flow of the code, such as inside an interrupt handler.  This way, the variable's value is always re-read from memory right before any comparison or operation is done on it, such as branching to particular spots depending on its value; a register will never hold a stale copy of what the variable contained a while ago.

May your project dreams be more attainable by unlocking cheaper microcontrollers to handle many I/O devices!