Thursday, April 30, 2015

Proxy keeping you up at night when running local Selenium tests in Chrome?

A Tale of Pesky Proxies Running Selenium / Protractor UI Tests Locally Inside Chrome On a Mac

As a tester, I face all sorts of environmental idiosyncrasies when testing products across various platforms, not just in the way those platforms behave, but also how they retain state throughout time.  (This is why Sauce Labs is great -- there is no leftover gunk from a prior state, since each test environment is a freshly-baked VM, thus eliminating the old adage "Well it works on my machine, so why not yours?")

However, it's not always practical to jump right into using Sauce Labs, especially when trying to develop new tests.  In attempting to do just that and run on my own local environment, though, Chrome on my Mac would keep asking me for proxy credentials because of various external assets the site attempts to load over HTTPS through content providers and API services.  No matter how I would try to specify the proxy settings to the WebDriver or as command-line arguments to Chrome in the Protractor conf.js file, that pesky prompt would still ask me for them in person.  I even tried Chrome Canary, a specialized browser designed distinctly for Web developers & professionals, yet that one asked for my proxy credentials even more.  My other testing counterparts run Windows and did not seem to have this problem, so it was up to me alone to solve it.  It became apparent that there would be no simple way to subvert this stupid little box from coming up and interrupting all my automation, no matter which switches, system settings, or configuration parameters I set.

Finally, I decided to take a different approach and redirect external servers (that would trip proxy authentication) to a Node server on localhost that would fetch the data through the proxy properly.  I had already written some small functions to make calls directly to our own application's internal API for testing and to access some of the external services it relies upon, so it was easy to build upon that initial work to get automated Selenium tests in Protractor finally working in the browser without any annoying prompts.

Note: This involves running things as root.  If you are unable or unwilling to do this, stay tuned for a future post on how to redirect your Web traffic in a slightly different manner than described below.

The essence of the solution is to run a Node.js server locally that listens over ports 80 & 443 for the usual Web traffic that your internal application (behind the proxy) would normally send externally to another site or service.  However, now you will intercept all these external requests and redirect them to localhost.  For some reason, while Chrome doesn't tend to pay attention to your system proxy settings while you are running Selenium tests, Node.js will always fetch whatever data you want.

1. Edit /etc/hosts

Add the server(s) that you need to connect to but that cause proxy problems, such as:       localhost internal-name1 internal-name2

2. Force Chrome to look at /etc/hosts

There's another obscure issue in Chrome (assuming you're also using a Mac and facing the proxy popup issue) where if Chrome is set up to use a Proxy Autoconfiguration (*.pac) file, whether through system settings or some sort of override, it will ignore the hosts file altogether, let alone your proxy settings.  (Gee thanks, Chrome!)  Since our systems generally come pre-configured with a PAC file and I didn't feel like modifying my whole system just to work around something that doesn't affect my "in-person" browsing, I had to make this addition to my conf.js file for Protractor:

chromeOptions: {
    'args': [
        // Assumption: bypassing the proxy (and thus the PAC file) entirely;
        // Chrome then falls back to direct connections and honors /etc/hosts.
        '--no-proxy-server'
    ]
}
3. Make the Node.js server with Express & Router

Unfortunately, this has to be run as root because it involves listening on ports 80 & 443.  Notice where it uses $HTTP_PROXY to pull in proxy info, and right below that where it expects a private key & certificate from OpenSSL.  There are plenty of tutorials online on how to do this.  Then, notice the routes toward the bottom that redirect to certain API sites/endpoints depending on the path requested from localhost.  Here is pretty much the code you'll need, with the exception of your specific proxy credentials and the exact routes which you need to define:

var fs = require('fs');
var express    = require('express');        // call express
var app        = express();                 // define our app using express
var bodyParser = require('body-parser');
var http = require("http");
var https = require("https");
var HttpsProxyAgent = require('https-proxy-agent');

var agent = new HttpsProxyAgent(process.env.HTTP_PROXY);

var privateKey  = fs.readFileSync('key.pem', 'utf8');
var certificate = fs.readFileSync('cert.pem', 'utf8');

var credentials = {key: privateKey, cert: certificate};

// configure app to use bodyParser()
// this will let us get the data from a POST
app.use(bodyParser.urlencoded({ extended: true }));

var makeWebRequest = function(settings, newRes) {
    var externalReq = https.request(settings, function (externalRes) {
        var body = '';
        externalRes.on('data', function(data) {
            body += data;
        });
        externalRes.on('end', function() {
            console.log("Finished with request to " + settings.path);
            newRes.send(body);    // relay the external response back to the original caller
        });
    });
    externalReq.on('error', function(e) {
        console.log("\033[1;31mFAILED\033[0m to make the " + settings.method + " request to " + settings.path);
        newRes.status(502).send(e.message);    // surface the failure to the caller
    });
    externalReq.end();
};
// =============================================================================
var router = express.Router();          // get an instance of the express Router

// Route your external API #1
router.get('/api/v1/endpoint1/*', function(originalReq, newRes) {
    var settings = {
        host: "",    // fill in your first external API's hostname here
        port: originalReq.port,
        path: originalReq.url,
        method: originalReq.method,
        agent: agent
    };
    makeWebRequest(settings, newRes);
});

// ***************************************************************
// Route your external API #2 (hopefully they don't use exactly the same endpoint paths)
// ***************************************************************
router.get('/another_api_service/endpoint2/*', function(originalReq, newRes) {
    var settings = {
        host: "",    // fill in your second external API's hostname here
        port: originalReq.port,
        path: originalReq.url,
        method: originalReq.method,
        agent: agent
    };
    makeWebRequest(settings, newRes);
});

// REGISTER OUR ROUTES -------------------------------
app.use('/', router);

// START THE SERVER ----------------------------------
var httpServer = http.createServer(app);
var httpsServer = https.createServer(credentials, app);

httpServer.listen(80);
httpsServer.listen(443);

console.log('Magic happens on ports 80 & 443');

Bonus for Applitools Users

While the steps above pertain to simply getting Protractor and Selenium to work in general without the pesky proxy popup, let's just say there's a certain level of bullsnot I was willing to tolerate until I started using Applitools to bolster the set of automated tests in the arsenal.  For those of you who aren't aware, Applitools is a sophisticated product that will do comparisons of your UI with mockups, previous versions, or various other types of baselines with varying degrees of granularity, all the way from "your browser isn't rendering the anti-aliasing on the text just like Photoshop did" up to "sure all the content inside changed, but all your DIVs & big layout pieces stayed in the same place."  It's a flexible tool, but to get started with it, this stupid proxy prompt was really wasting a lot of my time and getting in my way immensely, so it needed to die in a blaze of ignominy, hence my hack described above.  However, there is one more step for you Applitools users to heed.

If your proxy server requires credentials such as a username & password, the https.request() function provided by Node.js' HTTPS module must be modified so that it uses the proxy when Applitools features are requested, particularly when saving test results (not just baselines).  For me, this means using the https-proxy-agent Node module rather than tunnel, and making sure to include the username & password.  Copy the following code into a new file, and name it applitools-http-proxy.js:

var HttpsProxyAgent = require('https-proxy-agent');
var agent = new HttpsProxyAgent(process.env.HTTP_PROXY);
var https = require('https');
var __request = https.request;

https.request = function (options, callback) {

    if ( &&'applitools') > -1) {
        options.agent = agent;
    }

    return __request(options, callback);
};

Then in your Applitools test, simply use this as your first line (assuming the file you just made above is in the same path as your test):

require('./applitools-http-proxy');
It'll be interesting to see how many people have run into this issue and find this solution to be useful.  Together, we will make test automation that runs without any annoyances!

Thursday, April 16, 2015

Saucin' Up Perl with Selenium WebDriver

My foray into the world of Sauce OnDemand, made public about a month ago in an earlier blog post, landed me a spot as Presenter for April's meeting of the DFW Perl Mongers club!

But wait, you wrote all that code in JavaScript!

Yes, that's true.  And, despite Sauce Labs not really acknowledging the availability of a WebDriver module for Perl, it does indeed exist, and I found it, and wrote some nice automation in Perl to demo to the small crowd.  I even did some extra stuff they weren't anticipating -- showing off how to test mobile apps with Perl too by using Appium + Sauce Labs in order to provide an environment where the same test code can be used to test both a native Android app and a native iOS app, assuming they both had identical resource names for the graphical elements.  Unfortunately, the two APKs I had easy access to both caused a Force Close once the Android emulator in Sauce Labs started them up.  There were also some tweaks, features, and expanded capabilities I wanted to do/demonstrate in Perl, but ran out of time to incorporate them.  Oh well, I got close. :-/

Can I see the presentation too?

Yes, of course!  I have checked the presentation materials & demonstrated source code into GitHub, and the recording of the presentation has been made available on YouTube (part 1 and part 2) thanks to all meetings also being Hangouts On Air.  Given the usually small size of the Perl meetups, having these meetings recorded is why I didn't really bother ballyhooing the event to the public a whole lot.  (That way, in case I bombed, I just wouldn't tout the recording a lot.  But since it went well, yes, I'm definitely getting the word out. ;)  Watch that GitHub repo for enhancements and updates.


Thursday, March 12, 2015

Does Protractor + Selenium WebDriver Sound "Promising?"

I have been diving into JavaScript and AngularJS heavily over the past few weeks, pushing the capabilities of my organization's application testing in the process.  Despite having written quite a bit of JavaScript in my past for a number of award-winning Web applications (in hackathons), some of the latest trends in that language had bypassed me completely.  In coming up to speed on JavaScript Promises, here is some code that has proven very useful in my activities.

Typically, doing Web page testing requires interacting with the user interface, then waiting for something to happen (you logged in, paid your bill, ordered food, wrote a review, etc.).  There are three ways to wait for such UI interactions to complete in JavaScript:

  • Pure asynchronous callbacks (nested and ugly)
  • Unchained promises (still can be nested, and has potential to get ugly)
  • Chained promises (Pretty straightforward)
The essence of these three methodologies is described succinctly and effectively on the GitHub page for WD.js (a popular WebDriver for Selenium automated GUI tests).  My organization has elected to use Protractor as its framework of choice for our AngularJS tests, plus Gulp as our build manager, and this tends to force the use of "selenium-webdriver" as opposed to WD.js.  The constructs in WD.js are mostly dissimilar and require conversion in order to be compatible with selenium-webdriver (perhaps an open-source hero could be made if someone wrote a converter!).  For instance, selenium-webdriver only seems to be happy with unchained promises, where WD.js seems to accept all types of promises.  Thus, it is important to note the following code pertains to Protractor and selenium-webdriver.

Waiting On External Events

In some cases, you may not wish to wait for changes to a UI element on the browser, but instead for a connection and query to a database or perhaps a call to an external API.  For this, most people use the "Q" Node module; however, selenium-webdriver already provides equivalent promise machinery through browser.controlFlow().  Specifically, browser.controlFlow().execute(<some function defined as a variable>).  The function you name must return a promise.

Here's a quick example, including a hypothetical function to connect to a database, and the test code that would wait for this function to return the connection object:

connectToDB = function() {
    var promise = protractor.promise.defer();
    var mydb = require('dbhelper');    // hypothetical database helper module
    mydb.connect({
        // some hard-coded settings
    }, function(err, connection) {
        if (err) {
            promise.reject(err);
        } else {
            promise.fulfill(connection);
        }
    });
    return promise.promise;
};

describe('a test suite', function() {
    it('should connect to the DB', function() {
        browser.controlFlow().execute(connectToDB).then(function(conn) {
            console.log('I now have a connection!');
        });
    });
});
Side Note: Yes, you can use JavaScript to connect to databases directly, even Oracle databases!  Just check out the recent (and official) node-oracle Node module if you need connectivity to Oracle DBs.  I'm sure there are other great ones for MySQL, Microsoft SQL Server, MariaDB, Cassandra, etc.

This is all fine and dandy, but note that connectToDB is not set up in such a way as to take arguments.  First, the function needs to return promise.promise to execute() rather than the connection object that connectToDB() makes.  Second, due to the way control flows work, nothing is really supposed to get run right when tasks are defined, thus the parentheses must be omitted from connectToDB when it is passed as an argument to execute(), because otherwise it will get run right away.  Only when the entire test flow described inside it() has been parsed may functions returning promises get run.

This makes it rather difficult to pass arguments to functions that return promises!  Does this mean you'll have to handle making your database query in the same function, and lose the capability to abstract that into a generic query function?  No!  And, in fact, it's not difficult at all to pass arguments to such functions.

Passing Arguments to Promise Functions

Given a function select(conn, query) that runs a select-type query specified by query on the database connection conn, you can pass those arguments and still wait for the promise as such:

// code from before, plus definition of "query"...
browser.controlFlow().execute(connectToDB).then(function(conn) {
    // simply elaborating on the contents of this same function from above
    var qr = function() {    // qr = query request ;)
        return select(conn, query);
    };
    browser.controlFlow().execute(qr).then(function(rows) {
        // do whatever you like with the rows returned here
    });
});
Voila, a control flow that can wait on a database connection and query!

Help!  I can't see my code anymore!

The astute observer may have noticed that with enough of these .then() calls, your code will be so far to the right that you will constantly have to scroll in order to see it.  In this sense, it's no better than using callbacks.  However, other mechanisms in Protractor allow you to remedy this.  You can run promises all at once, and then wait for them all to complete, by using this construct:

var ar = function() {    // hypothetical API request
    return httpRequest(settings);    // settings = what you use for http.request(settings, ...) where http = require('http')
};
var ar2 = function() {    // hypothetical request #2
    return httpRequest(settings2);
};
protractor.promise.all([browser.controlFlow().execute(ar), browser.controlFlow().execute(ar2)]).then(function(responseArr) {
    // Assuming httpRequest() fulfills the promise with the raw HTTP response:
    console.log(responseArr[0]);    // response from ar
    console.log(responseArr[1]);    // response from ar2
});

Just use protractor.promise.all([<array>]).then, and you can greatly cut down on indentations required.  This even works, of course, on native selenium-webdriver functions that don't require being wrapped in execute().  Unfortunately, when making HTTP calls, my experiments showed that these calls were not run in parallel.  Then again, the calls I made returned very quickly; perhaps if I made a request involving more latency, I might see more parallel behavior.

A Word Of Caution Regarding Loops

If you have code inside loops that calls other functions based on the loop iterator's value, it's easy to run into a situation where the function uses the final value of the loop iterator each (or most of the) time it runs.  For instance, if inside the second .then() function from the database example above, you wanted to call external APIs based on data in each row retrieved, you could end up executing that API call using the last row as the parameter each time unless you heed some basic JavaScript syntax advice.

browser.controlFlow().execute(qr).then(function(rows) {
    for (r in rows) {
        (function(row) {
            // Make the HTTP request based on each row in here
        })(rows[r]);
    }
});
This essentially creates a whole bunch of function instances where row is defined within the scope of each instance, not globally; then these functions get executed in series using the data you expect.  I bet that by assigning each function as an array element, you could possibly kick off all these functions in parallel with some other clever constructs!
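Following that thought, here is a hedged sketch of what kicking off all the per-row functions at once might look like.  It uses standard Promises purely for illustration; in an actual Protractor test you would wrap each function in browser.controlFlow().execute() and wait with protractor.promise.all.  The rows and the request stand-in are hypothetical:

```javascript
// Hypothetical rows; in the real test these would come from the database query.
var rows = [{ id: 1 }, { id: 2 }, { id: 3 }];

// Same closure trick as the IIFE, but the function is *stored* for later
// instead of being run immediately -- each one captures its own "row".
function makeRequestFor(row) {
    return function () {
        // Stand-in for the real HTTP request based on this row
        return Promise.resolve('processed row ' +;
    };
}

var tasks = (row) {
    return makeRequestFor(row);

// Kick them all off and wait for every result at once.
Promise.all( (task) { return task(); }))
    .then(function (results) {
        console.log(results);    // one result per row, in order

Whether the underlying requests truly run in parallel still depends on the I/O involved, as noted earlier, but the closure-per-row structure at least makes it possible.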

Thursday, March 5, 2015

Angular + Protractor + Sauce Connect, launched from Gulp, all behind a corporate firewall!

You didn't think it could be done, did you?

Well, let me prove you wrong!  First, some terms:
  • AngularJS: An MVC framework for JavaScript, allowing you to write web apps without relying on jQuery.
  • Protractor: A test harness for Angular apps.
  • Sauce Labs: A company providing cloud services to help you run E2E testing on your web app in any environment combination.
  • Node.js: A JavaScript runtime environment.  Its package manager, npm, is what you'll need for installing all the dependencies to get this tool set working.
  • Gulp: A build manager and mundane-task automator.  Competitor with the older, well-entrenched Grunt, but gaining popularity by the hour.  Uses JavaScript syntax, but could theoretically be used as a Makefile or shell script replacement.

The Basic Premise

My organization is writing web apps using Angular.  Long before I joined, they selected Gulp to manage application tasks such as allowing it to run on localhost at port 8888 for development & unit test purposes.  They also selected Protractor as a test harness to interact with the web app.  Protractor depends on the presence of Angular in order to work properly, and provides the use of Selenium WebDriver (for interacting with browsers) and unchained promises (a JavaScript construct to avoid callback functions).

Sauce Labs has been selected as the testing tool of choice because it saves us from having to set aside a massive amount of infrastructure to run tests on multiple platforms.  Through the configuration file for Protractor, I can specify exactly what OS platform & browser combination I want the test to run on.  Of course, being an organization such as it is, they also have a corporate firewall in place that will prevent the VMs at Sauce Labs from accessing development & test deployments of our web apps under construction under normal circumstances.  This is where Sauce Connect comes in: it provides a secure mechanism for the external Sauce Labs VMs to acquire the data that the server would serve to you as if you were inside the corporate firewall.  Winful for everybody!  The best part is that Sauce Labs is free for open-source projects.

Journey Through the Forest: Wiring All This Together

It is, truthfully, "stupid simple" to set up a Gulp task that will run Protractor tests through the Sauce Connect mechanism.  All you need in your Protractor configuration file is:

exports.config = {
    sauceUser: "your login name",
    sauceKey: "the GUID provided to you on your dashboard on Sauce's site",
    specs: ["the files to run as your test"],
    sauceSeleniumAddress: "this is optional: the default is Sauce's hosted OnDemand endpoint, but localhost:4445/wd/hub is also valid for when you're running sc locally and OnDemand doesn't work",
    capabilities: {
        'tunnel-identifier': 'I will explain this later',
        'browserName': "enter your browser of choice here"
    }
};
(Note that where it says ":4445" above should be replaced by the port number specified by the sc binary if it says anything different.)  It's so simple that you don't even need any "require()"s in the config file.  And in your Gulpfile, all you need is this:

gulp.task('sauce-test', function() {
    gulp.src('same as your "specs" from above, for the most part (unless your working directory is different)')
        .pipe(protractor({    // assuming the gulp-protractor plugin provides protractor()
            configFile: 'path to the config file I described above'
        })).on('error', function (e) {
            throw e;
        }).on('end', function() {
            // anything you want to run after the Sauce tests finish
        });
});
Then, of course, you can run your tests by writing "gulp sauce-test" on the command line set to the same directory as the Gulpfile.  However, proper functioning of this configuration eluded me for a long time because I did not know the Sauce Connect binary ("sc" / "sc.exe") was supposed to be running on my machine.  I thought the binary was running on another machine in the organization, or on Sauce's own servers, and all I needed to do was point the settings in the Gulpfile at that remote instance of sc (with the SauceSeleniumAddress entry).  While I could point the SauceSeleniumAddress to a different host, it was a flawed assumption on my part that anyone else in my organization was running "sc" already.  Also, the hosted OnDemand endpoint might not answer the problem because it doesn't provide the services in "sc" by itself.  It is most convenient to run sc on your own system.

This configuration issue stymied me so much that I actually played with Grunt and several plugins therein before realizing that running tests through Sauce Connect was even possible through JavaScript to any extent.  Ultimately, I found a Node plugin for Grunt called "grunt-mocha-webdriver" that proved to me this was possible, and even doable in Gulp with Protractor and Selenium-WebDriver like I want, as opposed to Grunt/Mocha/WD.js.  (By the way, blessings to jmreidy, since he also wrote the sauce-tunnel which is relied upon heavily in this tutorial.)

Nevertheless, the easiest way to run Sauce Connect on your own system is to install the "sauce-tunnel" package through npm, the Node Package Manager (the npm site lists other hilarious things "npm" could stand for :-P).  This is, of course, achievable by running the following on the command line:

npm install sauce-tunnel

If sauce-tunnel is already in your node_modules directory, then good for you!  Otherwise, you could run this in any directory where "npm" is recognized as a valid command, but you might want to place this module strategically; the best place to put it will be revealed below.  Either way, you need to traverse to the directory where sc is located; this depends on which OS you are running, as the sauce-tunnel package contains binaries for Mac OS X (Darwin), Linux 32/64-bit, and Windows.  So, run the "sc" executable for your given platform before you run the Gulp task specified above, or else Gulp will appear to time out (ETIMEDOUT) when it's trying to get into Sauce Connect.

The minimum options you need for sc are your Sauce login name and your Sauce key (the GUID as specified above).  There are more options you can include, such as proxy configurations, as specified in the Sauce Connect documentation.  (Note that the tunnel-identifier, as called out in the Protractor config file, can be specified as an argument to sc.)

In simple terms, here's what we have thus far:

[assuming you've set up all the Node packages]:

vendor/[platform]/bin$ sc -u <your user name> -k <your GUID key> [-i <tunnel name>] [other options]

gulp-workingdir$ gulp sauce-test

This will set up "sc" for as long as your computer is hooked up to the Internet, and will run the Sauce tests on the existing tunnel.  The tunnel will remain active until you disconnect your computer from the Internet or end the sc process, but the tests running through Gulp will set up & tear down a Selenium WebDriver that'll drive the UI on your web app.

Help!  The test did not see a new command for 90 seconds, and is timing out!!!

If you are seeing this message, you might be behind a corporate proxy that is not letting your request go straight through to the Sauce servers.  Protractor has in its "runner.js" file a section where it will pick a specific DriverProvider based on certain settings you provide in the configuration file, and by providing the "sauceUser" and "sauceKey" values, it will pick the "sauce" DriverProvider.  The sauce DriverProvider provides an "updateJob" function that communicates with Sauce Labs (via an HTTP PUT request) on the status of the job.  This function is supposed to run after the tests conclude, and if that HTTP request fails, then the Gulp task will not end properly; thus, you will see this message.  Your list of tests in your Sauce Connect dashboard will look like this:

This message is so severe in Sauce that it doesn't just show up as "Fail," it shows up as "Error".  It also wastes a bunch of execution time, as seen in the picture above, and will obscure the fact that all the test cases actually passed (as they did in the pictured case above).  If you see this message after it is apparent that there are no more commands to be run as part of the test, then it is probably a proxy issue which is easy to resolve.

Here's how:

In your Protractor configuration file, add the following lines:

var HttpsProxyAgent = require("https-proxy-agent");

var agent = new HttpsProxyAgent('http://<user>:<password>@<proxy host>:<port>');

exports.config = {
    agent: agent,
    // things you had in there before
};

Then, in your node_modules/protractor/lib/driverProviders/sauce.js file (i.e. the DriverProvider for Sauce Labs in Protractor), add this:

this.sauceServer_ = new SauceLabs({
    username: this.config_.sauceUser,
    password: this.config_.sauceKey,
    agent: this.config_.agent    // this is the line you add
});
Once you have your https-proxy-agent in place as specified, your PUT request should go through, and your tests should pass (as seen in the Sauce jobs list).

The whole process, end-to-end, running in Gulp

If it does not satisfy you to simply run the "sc" binary from the command line and then kick off a Gulp task that relies on the tunnel already existing, you can get everything to run in Gulp from end to end.  To do this, you need to require sauce-tunnel in your Gulpfile (thus you might as well run npm install sauce-tunnel from the same directory that your Gulpfile exists).  Then, you need to make some changes to the Gulpfile: add some additional tasks for tunnel setup & teardown, and some special provisions so these tasks are executed in series rather than in parallel.

var SauceTunnel = require('sauce-tunnel');
var tunnel;

gulp.task('sauce-start', function(cb) {
    tunnel = new SauceTunnel("<your Sauce ID>", "<Your Sauce Key>", "<Sauce tunnel name -- this must be specified and match the tunnel-identifier name specified in the Protractor conf file>");
    // >>>> Enhance logging - this function was adapted from that Node plugin for Grunt, which runs grunt-mocha-wd.js
    var methods = ['write', 'writeln', 'error', 'ok', 'debug'];
    methods.forEach(function (method) {
        tunnel.on('log:'+method, function (text) {
            console.log(method + ": " + text);
        });
        tunnel.on('verbose:'+method, function (text) {
            console.log(method + ": " + text);
        });
    });
    // <<<< End enhance logging

    tunnel.start(function(isCreated) {
        if (!isCreated) {
            cb('Failed to create Sauce tunnel.');
            return;
        }
        console.log("Connected to Sauce Labs.");
        cb();
    });
});

gulp.task('sauce-end', function(cb) {
    tunnel.stop(function() {
        cb();
    });
});

gulp.task('sauce-test', ['sauce-start'], function () {
    gulp.src('<path to your Protractor spec file(s)>')
        .pipe(protractor({    // assuming the gulp-protractor plugin provides protractor()
            configFile: '<path to your Protractor conf file>'
        })).on('error', function (e) {
            throw e;
        }).on('end', function() {
            console.log('Stopping the server.');
  'sauce-end');
        });
});
Note here that the cb() function is how Gulp knows an asynchronous task has finished, yet the "" construct mentioned toward the bottom of the code snippet above is actually deprecated.  I will get around to fixing that once it stops working, but I think that in the grand scheme of priorities, I'd rather clean the second-story gutter with only a plastic Spork first before fixing that deprecated line. :-P

At this point, you should be able to run a test with Sauce Connect from end to end in Gulp without any extra intervention.  However, if Gulp is failing because it can't write to a file in a temporary folder pertaining to the tunnel (whose name you picked), then you can always run gulp as root, or better yet, find a way to have it save to a different temporary location that you do have access to, since it's always good to minimize running things as root.

One Brief Important Interruption about Lingering sc Instances...

If these instructions haven't worked out 100% for you, or you are me and spent a great deal of time exploring this, you may be frustrated with how many times Sauce Connect hangs around when there's been a problem.  You can't start the Sauce Connect binary again if it's already running, yet if you try to do this, it gives you an esoteric error message that does not make it apparent that this is indeed what happened.  To remedy this in a *nix operating system, simply write "pkill sc", as long as you don't have other critical processes that have "sc" in their name.  In my case, the other processes with "sc" in the name are running under a different user, and I don't have privileges to kill them (I'm not logged in as root nor running "sudo pkill sc"), so it doesn't do anything harmful to the system.

Shutting It Down Cleanly

In order to properly shut down sc, you may have noticed one final Gulp task in the code snippet above -- "sauce-end".  This task, in the background, runs an HTTP DELETE operation against Sauce's REST API to tear down the tunnel, and is subject to corporate proxy rules once again.  To circumvent this, you can simply require https-proxy-agent in node_modules/sauce-tunnel/index.js (like we did in the Protractor configuration file), and set up the agent in the same way.  In this case, you will edit the code in node_modules/sauce-tunnel/index.js as such:

// other pre-existing requires
var HttpsProxyAgent = require("https-proxy-agent");

var agent = new HttpsProxyAgent('http://<user>:<password>@<proxy host>:<port>');

// other existing code
this.emit('verbose:debug', 'Trying to kill tunnel');
request({    // this call already exists in sauce-tunnel; only the agent line below is new
  method: "DELETE",
  url: this.baseUrl + "/tunnels/" +,
  json: true,
  agent: agent    // this is the line you add
}, // ... etc

Now, obviously, this is not sustainable if you wish to ever upgrade sauce-tunnel or wish not to include a proxy agent.  For this, I will be submitting "less hacky" fixes to the respective GitHub repositories for these open-source Node modules in order to make it easier for all users in the future to use Sauce Connect with Protractor through their corporate proxies.

Nevertheless, there's no harm in this DELETE call failing, other than it makes the Gulp task stall another minute or so, which is annoying when you're at work late trying to learn how all this stuff works in order to finish off some polishing touches on your big project.

To recap running everything from end to end in Gulp:

[Assuming you've set up all your Node packages to run a Protractor script with the conf file set up for Sauce Labs, as described above]:
  • In the same directory as your Gulpfile, run:
    npm install sauce-tunnel
  • Set up your Gulpfile in the manner I described above, with the sauce-tunnel require and the "sauce-start", "sauce-end", and "sauce-test" tasks, and with the Sauce tunnel name (the 3rd argument to new SauceTunnel()) set to the same value as the "tunnel-identifier" in your Protractor config file.  Be sure to study all the possible arguments that new SauceTunnel() takes, as you can pass options to the sc binary if you need them.
  • If you are behind a corporate proxy or firewall, make the recommended edits to the Sauce DriverProvider at node_modules/protractor/lib/driverProviders/sauce.js, and to the sauce-tunnel module at node_modules/sauce-tunnel/index.js.
  • Run the Gulp task:
    gulp sauce-test
    (or sudo gulp sauce-test, if elevated permissions turn out to be necessary)
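To make the recap concrete, here is a minimal sketch of how the three tasks can fit together.  The task names come from this post, but the helper functions, the fake callback shapes, and the error handling are my own illustration of the sauce-tunnel API as I understand it (start() reports a boolean status to its callback; stop() takes a plain callback) -- in a real Gulpfile you would require('gulp') and require('sauce-tunnel') and register these bodies with gulp.task():

```javascript
// Sketch of the "sauce-start" task body: bring the tunnel up, fail the
// build if Sauce Connect could not start.
function sauceStart(tunnel, done) {
  tunnel.start(function (status) {
    if (!status) { return done(new Error("Sauce Connect failed to start")); }
    done();
  });
}

// Sketch of the "sauce-end" task body: tear the tunnel down cleanly.
function sauceEnd(tunnel, done) {
  tunnel.stop(function () { done(); });
}

// Sketch of the "sauce-test" task body: start -> run Protractor -> end.
// The tunnel is always torn down, and any test failure is surfaced after.
function sauceTest(tunnel, runProtractor, done) {
  sauceStart(tunnel, function (err) {
    if (err) { return done(err); }
    runProtractor(function (testErr) {
      sauceEnd(tunnel, function () { done(testErr); });
    });
  });
}

module.exports = { sauceStart, sauceEnd, sauceTest };
```

The important property is that the teardown step always runs even when the Protractor run fails, so the tunnel isn't left dangling and you don't end up reaching for pkill.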
Once again, I plan to check in "more sustainable" and "less hacky" code to help you deal with corporate proxies in the future without making temporary workarounds to downloaded modules.

Thursday, January 29, 2015

For Hardware Startups: Insights on Scaling Up Manufacturing

Several of my previous posts relate to LEDgoes, the first product of OpenBrite, LLC, which ran a very successful Kickstarter campaign.  Stacy & I designed this product (which requires a great deal of custom hardware), even though we are in fact two software engineers by trade & schooling.  She does, however, have experience manufacturing various audio components, from voice coils and cables to amplifier circuits; she has worked with Ray Samuels on some of his designs, and she even took a class in D/A and A/D integrated circuit design.  Nevertheless, the Kickstarter experience launched us into a whole new world when it came to scaling up production on circuit boards.

You Can't Pick Your Parents, But At Least Pick Your Partner

Selecting the right assembly company is crucial.  Many companies can produce the hardware for you end-to-end: source the parts, assemble and test the boards, and even help out in the design phase.  Some of these companies do everything in-house, while others sub-contract certain pieces, such as obtaining raw PCBs or solder stencils from other vendors.  There are websites that list PCB assembly houses by the hundreds throughout the USA, but it is often hard to find reviews for them, because most companies ordering hardware have known the same suppliers and manufacturers for many years and have no reason to write reviews.

Originally, we elected to go with one of these "end-to-end" services, which would source parts for us, order PCBs, and then handle assembly and testing themselves.  Interestingly, this presented us a huge tax advantage too, since our hands were free of any inventory: we licensed them to produce our design for free, paid them their contracting fee, and intended to split the profits from sales (which they would be handling) later.  Not only that, but we thought it'd be great to kickstart a local assembly business consisting of two friends of ours from the local Makerspace looking to move out of the corporate world.  Unfortunately for us, we did not really research their work ethic ahead of time.  This resulted in an inordinate number of delays: one of them (who had basically all the assembly experience) started neglecting the company shortly after we placed our work order, making the job very unprofitable for them.  (The latest drama revolves around their website, which this person is supposed to be in charge of; it has been down for approximately two weeks as of this writing.)  As such, the other person had to keep working their corporate job while conducting LEDgoes assembly and test.  When your reputation and brand are on the line, it is important to work with suppliers and vendors who do not shrink from their responsibilities, because their lack of responsibility will negatively affect your image too.  An actual business with a large staff and years of experience is the only way to go for hardware manufacturing, even if you think a small group of friends with their own startup is fully capable.  There is just no replacing trained, fully dedicated staff who have an established business process and are already familiar with the nuances of their own equipment.

What To Look Out For

Always take a tour of the facilities if available.  Make sure the pick & place machine is up to par -- it needs optics, trays/guides, and the ability to support enough reels to make your product.  (Your components don't always have to be on reels -- tape alone is usually sufficient -- but reels will save the production engineer a bit of trouble.)  Good optics mean better quality & less rework, since the chips will align squarely with the pads and nothing will come out of the reel or tray tilted by even 1 or 2 degrees.

The reflow oven needs to be proper and big enough to hold your panels.  It might look like a giant microwave oven with an LCD screen in front allowing you to control the temperature profiles for the specific solder paste you're using.  If there's no window, no LCD, and no fan inside to ensure even heat distribution (so all the solder reflows as expected), run away like the wind.  No proper reflow oven in sight = no business doing business.

They should have a proper way to clean the boards.  And, if a substantial amount of through-hole work is required, a professional assembly company will handle that with a huge wave soldering machine.  (Not even the entire volume of Partnerboards we had to produce warranted turning on and setting up the wave soldering machine, according to their sales representative, but they were still able to do all the through-hole work on 45 5" x 7" panels in a single day.)

You Shouldn't Be Surprised By...

Engineering support can be troublesome.  If you prototyped something and it doesn't scale up, or when you want to expose features that aren't really supported by the kit, you may end up referring to either vague documentation or support "engineers" who barely know V = IR (ahem, TI, ahem...)  One of our friends who used to work at CircuitCo lamented TI's engineering support: to paraphrase, "They would always start by blaming the inductor, but once you would find the real cause, they would go 'Oh yeah, that...'"

Legitimate companies take a dang long time to get moving, unless you're paying for some premium rush service.  For instance, I contacted a company about assembly on December 9 to begin discussing a quote.  They took a 2-week holiday for Christmas/New Year's, while I worked on legitimate engineering diagrams for them, and they finally gave me the quote on January 14.  (In case you've been wondering what else I've been doing besides posting here... :-P)  My board house, too, always seems to take toward the upper end of their time estimate (typically 2 or 3 weeks) to finish a job.  Once upon a time, I paid them an extra rush fee to finish a job a week earlier than the low end of their usual estimate (i.e. 1 week), and it ended up coming back right on the low end of their usual estimate (i.e. 2 weeks).  However, they are local, I can drive to pick up PCBs (or get issues fixed fast if there's a problem), and usually when I'm there, I get into a conversation with the lady up front about other ideas I have and what they could do to make them possible.  The only really fast job was from the company that made my solder stencil.  Then again, I probably could have laser-cut my own (which is all they did) if I had 4-mil aluminum and an appropriate frame on hand, and if the laser cutter at the local Makerspace had a small enough kerf width.

The China Syndrome

Don't forget that the Chinese New Year usually means most Chinese suppliers are on vacation for a good chunk of February.  However, this isn't the main point of this section.

Most of my vendors are in fact local, but I did end up ordering certain things from Alibaba & AliExpress.  I have not had a bad experience with a single one of my Chinese vendors -- they all shipped out what they promised, even if it took a little longer than expected to produce.  I tend to stay up until about 1 or 2 PM Shanghai time anyway (sometimes all the way until it's time for them to go home) to seal the deal.  Keeping the conversation flowing on Skype always helps, especially if they are the slightest bit unclear on what you want.  However, one of my suppliers was so good that we only needed a short email exchange before he was clear on what I wanted, and delivered it within a few weeks.  For a shot at above-average service, be friendly & get a little personal over Skype or email.  Ask how their weekend went, if they have any kid(s), what they like to do besides work, etc.  But before engaging in any business from Ali.*, make sure they've been verified and have good ratings from other buyers for the products you're specifically asking for.  And even if such is the case, you may still get fakes -- I got fake FTDI chips, though they still work fine for the most part (a bit more sensitive to ESD, and a very slightly different IC package shape).

In all, I learned a great deal about manufacturing through the second phase of the Kickstarter experience, and I had quite a bit of fun putting some of the panels together myself in the interlude between using two different contract assemblers.  However, now that my supply chain seems to be taking shape finally, I intend to only experience the thrill of manufacturing prototypes, and save the rest of my time for design work.  Nevertheless, this knowledge will help me communicate more effectively with manufacturers in the future.

Thursday, October 23, 2014

Sock it to the banks: transfer money smartly!

Modern times call for crafty experimentation and careful observations to avoid getting ripped off at every turn by some "service provider" seeking to collect fees from their "renters."  Guided by personal experience and altruistic desires, I would like to share some simple hints for transferring money between bank accounts quickly and efficiently without incurring fees.

Use your bank's mobile app

Many banks offer applications for your mobile device that can facilitate bank transfers by letting you scan and quickly deposit checks.  My experience with several institutions shows that while some banks are hesitant to give you the full amount immediately when a check is deposited in person or at an ATM, checks for hundreds or even thousands of dollars clear completely and instantly through a bank's mobile app.  The only potential cost with this method is that you need checks from all your banks so you can move money between them as needed -- and some banks don't offer free checks.

Before opening an account with an institution, check to see if they offer this feature, as it is a very helpful convenience.  It is also nice if they do offer free checks -- not just "free checking."

Set up transfer capabilities between all your banks, then transfer "From," not "To!"

If you are not in such a hurry to get money from one place to another (writing a check and getting it scanned in can be kind of a hassle, especially when you're a bit OCD), set aside a few moments ahead of time to link all your institutions together so they can all transfer money to one another.  Once you have all the possible transfer capabilities set up, you can move money between banks with a simple ACH transfer, which is offered for free almost universally.

Watch out for one common pitfall: if you log into the bank from which you want to send money, those funds will be immediately debited from your account and will take at least 24 hours to show up in the receiving account.  This is time when your money is floating around in the ether, not really earning interest for you.  To make sure you don't miss out on that $0.0000001 of interest from having that $100 stuck in transit for 1-3 days (in most cases), log into the bank at which you intend to receive the money, and set up the transfer so that it pulls the money from your other bank.  This way, the money is not debited from the sending account until the waiting period has elapsed.
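For a sense of scale, here's a quick back-of-the-envelope sketch (my own numbers and helper, not from the post or any bank) of the simple interest forfeited while money floats in transit:

```javascript
// Simple-interest estimate of what "money in the ether" costs you:
// `amount` dollars sitting idle for `days` days at an annual rate `apy`.
function floatCost(amount, apy, days) {
  return amount * apy * (days / 365);
}

// Example (hypothetical rate): $100 stuck for 3 days at 0.01% APY.
var lost = floatCost(100, 0.0001, 3);  // a tiny fraction of a cent
```

Even at more generous savings-account rates the float costs well under a cent, so this trick is more about principle than profit.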

Security concerns

This is especially recommended if you happen to know IT project managers or executives at said banks who can vouch for their security. ;)  Actually, the worst that can happen is that a hacker would be able to send money between your accounts, so if they don't know the trick I just described, your money will be gone for 1-3 days.  But in fact, if they've made it that far into the system, they've probably taken the time to set up their own accounts as legitimate recipients and have stolen all your money anyway.  They could also be alerted to the presence of the other accounts you have linked, and might attempt to use the same username/password combo on your other bank's website (assuming they've been able to get into the bank's online database to deduce that information).

If you do this, you should never use the same username/password combo across your banking websites (or any website for that matter), and consider opening up accounts at banks that use two-factor authentication on their Web sites.  Two-factor authentication is based on the principles of "something you know" and "something you have", so you would be prompted to enter a username/password, and then you may be prompted to enter a specific code sent to a registered telephone or email address each time you log on with a new device or clear your browser's cookies.

I'll save the rants on greedy rent-seeking corporations, fatuously uninspired and easy-to-guess security questions, and sites that think they have two-factor authentication (but really don't) for some future post(s). :-P

Thursday, September 11, 2014

Make your own dual programmer in AVRDUDE

Modified 9/16/2014

Those of you who have programmed an Arduino through the Arduino or AVR Studio IDE may have noticed the utility that is really doing the work: AVRDUDE (AVR Downloader/UploaDEr).  This is a powerful program that can facilitate programming new sketches on top of a bootloader, load a brand new bootloader or chip image, capture the current firmware programmed on the chip, and set fuse bits (which can render your chip unusable without special tools if you're not careful).

You mean I could have been doing this the whole time?

The LEDgoes USB Communicator supports programming either over serial (a bootloader must be present) or via ICSP bitbang (very slow).  The ICSP operation is identical to Adafruit's FTDI Friend product.  The serial programming is identical to the Arduino, except that in my case, I'd like to be able to program two ATmega chips at the same time without switching cables.  What's the best way to do this?

My original train of thought (from Mixing Logic With ISP Circuitry For Programming Firmware) involved using a switch and AND gates to decide which chip would actually get the bits.  Granted, that article was really geared toward SPI programming, but the concept is even more applicable to serial programming, since our UART lines (TX & RX) are common to both chips on the board.  Trying to program one chip without holding the other in RESET will cause a failure to write the program, as they will both try to send serial output on the same wire & confuse each other.  However, the logic gates used for switching still required either manual intervention or use of a Raspberry Pi, which didn't really work out for me at the time.  I thought it was going to come down to needing yet another microcontroller just to handle one single bit of output from AVRDUDE to control the RESET lines, which seemed really stupid.  It was getting very annoying to wire up the boards two different ways to program both of the chips, though, so I still strove to find a solution.

After examining a serial port's configuration, and seeing which pins were still available after Arduino's serial programming application had been implemented, I decided it'd be simpler to use AVRDUDE to hold one chip in RESET while the other is programmed.

What if I don't have LEDgoes?

Good news!  You can use an Arduino to do this as well.  If you haven't familiarized yourself with the ICSP header pins on the Arduino board, you'll get a crash course here.  The "RESET" header pin you can tap into is, obviously, electrically connected to the ATmega RESET pin, but it's also connected to the "RST" pin (#5) on the ICSP header.  AVRDUDE maps this RST signal to the "DTR" (Data Terminal Ready) serial signal coming from the FTDI USB/serial chip.  This is part of the mechanism used under the hood each time you upload a sketch.  However, this new AVRDUDE programmer defined below will also activate the MOSI pin on the ICSP header (#4), which is linked to the RTS signal from the FTDI chip (Request to Send).  Between these two pins (RST <- DTR, and MOSI <- RTS), we can hold one chip in reset while the other one is being programmed.

One small catch: you need to take off the on-board ATmega chip if you don't plan to use it.  For folks with SMD-edition Arduinos, you cannot program two external chips without making some adjustments.  The code below assumes you have exclusive use of the RST line (i.e. the "RST" ICSP header used in the diagram below, or the Arduino "RESET" pin).  However, the SMD chip's reset pin is hooked up to this same RST line.  Thus, if you connect an external chip to this RST line while the on-board chip is still in place, the two chips will be programmed at the same time, and that always causes problems.  Usually when this happens, the chips start talking over each other loudly enough to make AVRDUDE fail; other times, AVRDUDE passes, but the program on the external chip will be all screwed up.

To circumvent this, if you have an SMD-edition Arduino, you'll need to find (or perhaps write) yet another function in AVRDUDE to control a pin *besides* DTR and RTS.  You could pick the TXD pin (which leads to pin 3 / SCK on the ICSP header) or CTS (which goes to ICSP pin 1 / MISO).  Of course, there's no need to take these precautions if you're looking to program the SMD chip and one external chip; just make sure the external chip's reset is only hooked up to RTS (ICSP pin 4 / MOSI).

It's good to keep that in mind anyway, since with those extra functions to utilize TXD & CTS, you could program up to four ATmegas (or 16 if you wanted to get fancy with combinational logic).

Here's what the breadboard setup looks like (for the non-SMD-edition crowd):

Depending on what jumper cables you have around, you can route the pink cable into the RESET pin on the regular Arduino headers instead of the RST pin on the ICSP headers.  It doesn't matter which chip is hooked into RST or MOSI as long as you properly track which one gets programmed when.

Building Your Own AVRDUDE In Linux

To get started with this, I had to download the code and load a bunch of dependencies for it to compile.  After having looked at MinGW for Windows, I thought it'd be a little bit less effort to get it going in Linux.  So here's roughly how it went:

  • Checked out the AVRDUDE SVN repository
  • Learned about autoconf, a cross-platform build tool (and what AVRDUDE uses to get built)
  • Ran autoconf
  • Fought with an "error: possibly undefined macro: AM_INIT_AUTOMAKE" (for which this post suggested the correct fix):
    • Add AC_CONFIG_MACRO_DIR([m4]) to -- it was already present in my case
    • libtoolize --force
    • aclocal
    • autoheader
    • automake --force-missing --add-missing
    • autoconf
  • Installed missing dependencies, such as developer libraries for libusb & libftdi (the AVRDUDE configure script will tell you what you're missing), plus flex (but not its friend bison), and yacc
  • Ran autoconf again after these missing dependencies were satisfied
  • sudo ./configure; sudo make; sudo make install
Thus was born my very own AVRDUDE!

Cloning the Arduino Programmer into "BritebloxUSB"

Since my programmer is basically an "arduino" programmer (but I wanted to program two devices at once instead of just one), I decided to base my code heavily around theirs.  After poking around to study how it was implemented and tied in with the whole application, I was able to produce the desired behavior using the following files:

  • arduino.c (saved as britebloxusb.c)
  • arduino.h (saved as britebloxusb.h)
  • pgm_type.c

Later on, you will see what I did to these files to get the programmer working as desired.  I also should have modified the following files (but took the lazy man's way out since I was only looking for Linux support at the time):

  • ser_posix.c
  • ser_win32.c

I'll explain why I need to modify these files later.  After each time I'd change any of these files, I would run:

autoconf; sudo make; sudo make install; sudo cp /usr/local/etc/avrdude.conf.stevo /usr/local/etc/avrdude.conf

This rebuilds the AVRDUDE binary and also reinstates the changes I need into the configuration file so it recognizes "britebloxusb" as a programmer.

AVRDUDE allows programmers to send the signals used for programming to different output pins than expected.  For example, the pulse sent to reset the AVR chips goes through the DTR and RTS pins from the FTDI chip.  By splitting up the function of the DTR & RTS pins to behave separately, I can "lightly tap" one chip into RESET (so it will be programmed) and hold the other chip in RESET (so it will "sleep through" all the instructions being sent to program its neighbor).  This was achieved by modifying the _open() function, and adding a new function called briteblox_set_dtr_rts().  (This new function needs to be modified to fit nicely into ser_posix.c and ser_win32.c.)

The programmer accepts an optional argument (through the _parseextparms() function) that allows you to specify which signal gets held down the entire time.  Without specifying "-x reverse", DTR is held low the entire time.  When this parameter is included, though, RTS is held low the entire time.  This way, to program both AVRs without rearranging the cables, all you need to enter on the command line is:

avrdude -p atmega168 -c britebloxusb -P /dev/ttyUSB0 -D -U <what to do>; avrdude -p atmega168 -c britebloxusb -P /dev/ttyUSB0 -D -U <what to do> -x reverse

And if you ever write anything invalid for -x, the britebloxusb programmer will politely remind you what options are supported by -x.  Right now, it's just help and reverse.


While this isn't quite perfect nor polished yet (the output from _close() and _teardown() still needs to be reconciled a bit too), here it is for your enjoyment and edification.  Soon I hope to commit this (cleaned up) into the mainline AVRDUDE source code for enjoyment by all.


/*
 * avrdude - A Downloader/Uploader for AVR device programmers
 * Copyright (C) 2009 Lars Immisch
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */

/* $Id: $ */

/*
 * avrdude interface for britebloxusb programmer
 *
 * The britebloxusb programmer is mostly a STK500v1; just the signature bytes
 * are read differently.
 */

#include "ac_cfg.h"

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>

#include "avrdude.h"
#include "libavrdude.h"
#include "stk500_private.h"
#include "stk500.h"
#include "britebloxusb.h"

/* flags */
#define BBX_FLAG_REVERSE          (1<<0)

/* set dtr & rts separately; the usual "serial" function lib makes you do them together */
/* FIXME: This will only work in POSIX-land!!! */
static int briteblox_set_dtr_rts(union filedescriptor *fdp, int dtr_on, int rts_on)
{
  unsigned int ctl;
  int          r;

  r = ioctl(fdp->ifd, TIOCMGET, &ctl);
  if (r < 0) {
    return -1;
  }

  if (dtr_on) {
    /* Set DTR */
    ctl |= TIOCM_DTR;
  } else {
    /* Clear DTR */
    ctl &= ~TIOCM_DTR;
  }

  if (rts_on) {
    /* Set RTS */
    ctl |= TIOCM_RTS;
  } else {
    /* Clear RTS */
    ctl &= ~TIOCM_RTS;
  }

  r = ioctl(fdp->ifd, TIOCMSET, &ctl);
  if (r < 0) {
    return -1;
  }

  return 0;
}

/* read additional params */
static int britebloxusb_parseextparms(struct programmer_t *pgm, LISTID extparms)
{
  LNODEID ln;
  const char *extended_param;

  for (ln = lfirst(extparms); ln; ln = lnext(ln)) {
    extended_param = ldata(ln);
    if (strcmp(extended_param, "reverse") == 0) {
      pgm->flag |= BBX_FLAG_REVERSE;
      avrdude_message(MSG_INFO, "%s: Reversing reset signals for this run...\n", progname);
    } else if (strcmp(extended_param, "help") == 0) {
      avrdude_message(MSG_INFO, "%s: britebloxusb: Available Extended Commands:\n"
                      "\thelp\tPrints this help message\n"
                      "\treverse\tHolds down RTS instead of DTR throughout programming\n", progname);
      return -1;
    } else {
      avrdude_message(MSG_INFO, "%s: extended parameter %s is not understood.  Use \"-x help\" for all options.\n", progname, extended_param);
      return -1;
    }
  }

  return 0;
}

/* read signature bytes - britebloxusb version */
static int britebloxusb_read_sig_bytes(PROGRAMMER * pgm, AVRPART * p, AVRMEM * m)
{
  unsigned char buf[32];

  /* Signature byte reads are always 3 bytes. */

  if (m->size < 3) {
    avrdude_message(MSG_INFO, "%s: memsize too small for sig byte read", progname);
    return -1;
  }

  buf[0] = Cmnd_STK_READ_SIGN;
  buf[1] = Sync_CRC_EOP;

  serial_send(&pgm->fd, buf, 2);

  if (serial_recv(&pgm->fd, buf, 5) < 0)
    return -1;
  if (buf[0] == Resp_STK_NOSYNC) {
    avrdude_message(MSG_INFO, "%s: stk500_cmd(): programmer is out of sync\n",
                    progname);
    return -1;
  } else if (buf[0] != Resp_STK_INSYNC) {
    avrdude_message(MSG_INFO, "\n%s: britebloxusb_read_sig_bytes(): (a) protocol error, "
                    "expect=0x%02x, resp=0x%02x\n",
                    progname, Resp_STK_INSYNC, buf[0]);
    return -2;
  }
  if (buf[4] != Resp_STK_OK) {
    avrdude_message(MSG_INFO, "\n%s: britebloxusb_read_sig_bytes(): (b) protocol error, "
                    "expect=0x%02x, resp=0x%02x\n",
                    progname, Resp_STK_OK, buf[4]);
    return -3;
  }

  m->buf[0] = buf[1];
  m->buf[1] = buf[2];
  m->buf[2] = buf[3];

  return 3;
}

static int britebloxusb_open(PROGRAMMER * pgm, char * port)
{
  union pinfo pinfo;
  strcpy(pgm->port, port);
  pinfo.baud = pgm->baudrate? pgm->baudrate: 19200;
  if (serial_open(port, pinfo, &pgm->fd)==-1) {
    return -1;
  }

  /* Set DTR & RTS to reset both chips */
  briteblox_set_dtr_rts(&pgm->fd, 1, 1);
  if ((pgm->flag & BBX_FLAG_REVERSE) == 0) {
    /* (Normal) Clear only RTS in order to resume communication with the desired chip */
    briteblox_set_dtr_rts(&pgm->fd, 1, 0);
  } else {
    /* (Reversed) Clear only DTR in order to resume communication with the desired chip */
    briteblox_set_dtr_rts(&pgm->fd, 0, 1);
  }

  /*
   * drain any extraneous input
   */
  stk500_drain(pgm, 0);

  if (stk500_getsync(pgm) < 0)
    return -1;

  return 0;
}

static void britebloxusb_close(PROGRAMMER * pgm)
{
  /* Release the other chip from reset */
  briteblox_set_dtr_rts(&pgm->fd, 0, 0);
  pgm->fd.ifd = -1;
}

static void britebloxusb_teardown(PROGRAMMER * pgm)
{
}

const char britebloxusb_desc[] = "britebloxusb dual AVR programmer";

void britebloxusb_initpgm(PROGRAMMER * pgm)
{
  /* This is mostly a STK500; just the signature is read
     differently than on a real STK500v1,
     and the DTR signal is set when opening the serial port
     for the Auto-Reset feature */

  strcpy(pgm->type, "britebloxusb");
  pgm->read_sig_bytes = britebloxusb_read_sig_bytes;
  pgm->open = britebloxusb_open;
  pgm->close = britebloxusb_close;

  /* Optional functions */
  pgm->parseextparams = britebloxusb_parseextparms;
  pgm->teardown = britebloxusb_teardown;
}



/*
 * avrdude - A Downloader/Uploader for AVR device programmers
 * Copyright (C) 2009 Lars Immisch
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */

/* $Id: $ */

#ifndef britebloxusb_h__
#define britebloxusb_h__

extern const char britebloxusb_desc[];
void britebloxusb_initpgm (PROGRAMMER * pgm);

#endif


Additions to the libavrdude_a_SOURCES list:

libavrdude_a_SOURCES = \
avrftdi_tpi.c \
avrftdi_tpi.h \
avrpart.c \
bitbang.c \
bitbang.h \
britebloxusb.c \
britebloxusb.h \
buspirate.c \
buspirate.h \
butterfly.c \
butterfly.h \
config.c \
confwin.c \

Additions to pgm_type.c:

#include "avrftdi.h"
#include "britebloxusb.h"
#include "butterfly.h"
const PROGRAMMER_TYPE programmers_types[] = {
        {"arduino", arduino_initpgm, arduino_desc},
        {"avr910", avr910_initpgm, avr910_desc},
        {"avrftdi", avrftdi_initpgm, avrftdi_desc},
        {"britebloxusb", britebloxusb_initpgm, britebloxusb_desc},
        {"buspirate", buspirate_initpgm, buspirate_desc},

Additions to avrdude.conf.stevo:

programmer
  id    = "arduino";
  desc  = "Arduino";
  type  = "arduino";
  connection_type = serial;
;

programmer
  id    = "britebloxusb";
  desc  = "BriteBlox USB Dual Serial Programmer";
  type  = "britebloxusb";
  connection_type = serial;
;
# this will interface with the chips on these programmers: