Thursday, October 22, 2015

Hacking TrueType Fonts For Character Information

Those of you who have ever been curious about making your own font should know that doing so on the computer isn't easy.  Sure there are several good programs out there that can help you take your design and digitize it, but a well-made font has been crafted with much care and attention to detail by a computer scientist just as much as a designer.  Some considerations that need to be made on the technical side include, for instance, how to "hint" rendering at very large or small sizes, accounting for grayscale devices in such hinting, making characters by compositing glyphs to save on file size (e.g. fi = f + i), and dealing with different platforms and character encodings across different computer systems so the font can be portable across Windows, Mac, and others.

Now, think back to one of my long-time projects that relates to displaying text and images.  Yes, BriteBlox can certainly be capable of displaying messages set with TrueType fonts, and this has been supported in the development version for quite some time.  However, to make it scale well for any message at all, it is important to know what the width of each character is.  As such, the efforts described here were undertaken for the sake of improving BriteBlox.


The simplest way to render TTFs in Python is to use PIL (the Python Imaging Library).  With this, you can establish an Image object and then instruct PIL to render text with the desired typeface onto the image.  However, you need to know in advance how wide each character is so you can make a correctly-sized Image object; otherwise you render text onto it only to discover that either it's too small and the text gets chopped, or it's so large you run out of memory.  In the BriteBlox PC Tools, this feature was disabled in releases for such a long time because I would manually have to guess and check the correct size of the bounding box for my text.  Soon, that will no longer be required!

The High-Level Solution

[Important note] There may be, in fact, a better solution for those of you using Qt, an application framework.  Unfortunately, my installation of the Qt 5 libraries under PyQt5 seg-faults (or tries to access a null pointer) when I try to run the appropriate commands, so I will have to write about that in the future once I upgrade Qt and hopefully get it working.

Along with PIL (or Pillow) in Python, you can use the fonttools and ttfquery libraries (which depend on numpy) in order to fetch the width of a particular character glyph.  (The glyph is the artistic rendering of the character; the character is more of just a concept in the realm of typography.)  To get the required width (and height) for the container image, begin by using this code:

from ttfquery import describe, glyphquery
myfont = describe.openFont("C:\\Windows\\Fonts\\arial.ttf")

glyphquery.width(myfont, 'W')

Now you have the width of a character from your TTF file.  If you actually run this, though, you may notice the values seem really odd -- in fact, very large.  This is because the values being retrieved (I'll tell you exactly where these come from later) are scaled to "font units" or "EM units", which relate to the "em square".  Remember your em-dashes and en-dashes from English class?  Well, it turns out they're incredibly important in typography too.  The EM units are derived from the "EM square", a square the size of the em-dash.  Back when fonts were cast into metal stamps and then pressed into paper, the em-dash was typically the widest character you could have.  In digital media, though, characters are allowed to be wider than the em-dash, so you have to look at each character specifically to find out how wide it is.  Nothing can be taken for granted.

EM units are simply little divisions of the EM square such that the EM square becomes a grid.  There are several acceptable values for how many units exist along one single side -- any value from 16 to 16384 is acceptable, though the spec recommends a power of two for fonts with TrueType outlines.  The typical "resolution" of the EM square, as defined by the "unitsPerEm" field in the TTF specification, is 2,048 units per side of the square.  However, again, this value cannot be taken for granted; I will explain ways to fetch it later.  Once you have the correct unitsPerEm value, put it into the following equations:

pixel_size = point_size * resolution / 72
pixel_coordinate = grid_coordinate * pixel_size / EM_size


Remember that fonts are generally measured in points rather than pixels, a tradition that dates back to at least the 1700s.  Nowadays, a point is defined as 1/72 inch, thus the ratio of point_size / 72 in the first equation.  Now, you need to get rid of the "inch" in the unit by multiplying by some unit that is 1/inch (remember dimensional analysis from chemistry or physics?).  The perfect unit for this happens to be pixels per inch, which is defined differently on different computing platforms.  For instance, Microsoft typically defines an inch as 96 pixels in Windows, thus as monitors are made with ever-higher resolution, the distance on the monitor representing a physical inch gets noticeably smaller.  Now, if you consider the right edge of your glyph to be the grid coordinate of interest, you can finish off the equation.  Let's see how this would work for the capital letter "W" at size 12 point:

>>> glyphquery.width(myfont, 'W')
1933.0
>>> 1933.0 * 12 * (96.0/72.0) / 2048
15.1015625

And now at 24 point:

>>> 1933.0 * 24 * (96.0/72.0) / 2048
30.203125

IMPORTANT NOTE: To avoid truncation error, you must take care to have Python treat your numbers as floating-point values rather than integers.  (In Python 2, dividing one integer by another performs integer division; in Python 3, the / operator always performs true division.)  You can do this by simply adding ".0" to the end of an integer literal, and the answer will automatically be "promoted" to the more precise data type.  If I were to leave the first equation alone and simply write 1933 * 12 * (96/72) / 2048 in Python 2, I would get the answer 11, which is definitely wrong, as my empirical observation of the character "W" indicates that it needs at least 13 pixels of width at 12 point size, even with anti-aliasing turned off.
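The two equations above roll up into a small helper.  This is a sketch of my own (the function name and the 96-PPI/2048-unit defaults are my assumptions, to be adjusted per platform and font), not anything from ttfquery:

```python
def funits_to_pixels(funits, point_size, ppi=96.0, units_per_em=2048):
    """Convert a length in TrueType font units to pixels, using:
    pixel_size = point_size * resolution / 72
    pixel_coordinate = grid_coordinate * pixel_size / EM_size
    """
    pixel_size = point_size * ppi / 72.0
    return funits * pixel_size / units_per_em

# Arial's 'W' is 1933 font units wide:
funits_to_pixels(1933, 12)  # 15.1015625 pixels at 12 point
funits_to_pixels(1933, 24)  # 30.203125 pixels at 24 point
```

Swapping in ppi=72.0 would model a platform that maps points to pixels one-to-one.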

Finding the EM Size Of Your Font

To get the correct value for unitsPerEm (a.k.a. EM_size in the equations above), there are some nice tools you can go search for; one suggestion is SIL ViewGlyph for Windows.  Simply open the font file, go to View -> Statistics, then look for "units per Em".

If you have a hex editor handy, open your font file in the hex editor.  Toward the beginning of the file, look for the four characters "head" in plain ASCII (0x68 0x65 0x61 0x64).  Skip four bytes after this (the checksum of the table), and the next four bytes give the table's starting offset (e.g. my version of Arial indicates the HEAD table offset is 0x00 0x00 0x01 0x8C, thus 0x18C).  Navigate to (that position in the file + 18 more bytes), and the next two bytes (representing an unsigned short integer, from 0 to 65535) are your unitsPerEm value.  Remember this value is typically 2048, or 0x800.
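The same lookup can be scripted.  Here's a hedged sketch of mine that walks the font's table directory properly (rather than scanning for the raw bytes "head", which could coincidentally match elsewhere in the file); the field layout comes from the TTF spec, but error handling is minimal:

```python
import struct

def units_per_em(ttf_bytes):
    # The file starts with the offset table: sfnt version (4 bytes),
    # numTables (uint16), then 6 bytes of binary-search helpers.
    num_tables = struct.unpack_from(">H", ttf_bytes, 4)[0]
    # Each table directory record is 16 bytes: tag, checksum, offset, length.
    for i in range(num_tables):
        rec = 12 + i * 16
        if ttf_bytes[rec:rec + 4] == b"head":
            offset = struct.unpack_from(">I", ttf_bytes, rec + 8)[0]
            # unitsPerEm sits 18 bytes into the head table (version 4 +
            # fontRevision 4 + checkSumAdjustment 4 + magicNumber 4 + flags 2)
            return struct.unpack_from(">H", ttf_bytes, offset + 18)[0]
    raise ValueError("no head table found")

# e.g. units_per_em(open("C:\\Windows\\Fonts\\arial.ttf", "rb").read()) for Arial
```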

Trust Me, This Is Correct

I spent long enough simply trying to find out in the spec where this magical "EM_size" parameter could be found.  After spending days poring over the Apple TrueType Reference Manual and Microsoft TrueType documentation (warning: .DOC file), it finally became apparent.  This was just an exercise in being comprehensive, though, as Arial obviously had a unitsPerEm value of 2048.

Because I originally didn't know that Microsoft used a standard of 96 PPI rather than 72 PPI, my initial calculations with the formulas above always seemed wrong (too small).  I set out to find another way to get at this data, so I read the TTF spec as well as some supporting documentation (including this page and the source of the equations listed above), and set out to find the bounding boxes (bbox) for each glyph, as defined by the xMin, yMin, xMax, and yMax values for each glyph in the GLYF table.  This proved to be unsatisfactory because the docs don't really tell you how to parse the GLYF table.
  • The raw data seems to just launch right into the 1st glyph without any nice header info as to what glyph(s) belongs to what character, or how many bytes define each glyph in advance.
  • The data I gleaned for the first glyph (which I don't even know what it is) seemed out of whack, with a total height of slightly over the EM size and a total width of almost 3 times the EM size!
I was leery of those results, and decided to take another route.  The "OS/2" table (its header is literally thus in the font file data) contains properties such as sTypoAscender, sTypoDescender, and sTypoLineGap.  Even though the OS/2 table is used by Microsoft platforms only, the values it contains should be platform-agnostic.  However, comparing my Arial font file to the documentation I had, something seemed fishy.  Maybe its OS/2 table is an older version that doesn't contain as much information, but because these three fields are so far down the table, I didn't want to take any chances on having counted incorrectly or misread one of the data types.  I soon abandoned this idea too.

Yet another idea was to go to the CMAP table, which contains the mappings of characters to glyph indexes.  (I would have to sit and parse this table to figure out what the very first glyph is in GLYF, and there's no need for me to work backwards like that now.)  This table contains at least one sub-table (Arial has, in fact, three sub-tables here), so there is quite a lot of header data you need to go through before you get to the good stuff.  However, you still need to go through it carefully, otherwise you will be misled into meaningless data.  For Microsoft devices, you should look for the sub-table with the Platform ID of 3 and the Platform Encoding ID of 1.  After finding the byte offset to this table (which is relative to the start of CMAP, not just 0), I had to solve some equations in order to find what character (as defined by ASCII or compatible Unicode codes) mapped to which glyph.

I'm not going to go into the math here since it's described in the documentation, but I found out that in Arial, most printable characters we normally care about (specifically, those with ASCII codes between 0x20 and 0x7E) all exist sequentially and contiguously with glyph IDs ranging from 3 to 0x61.  The letters I cared about testing, the extreme-width cases of "W" and "i", happen to have glyph indices of 0x3A and 0x4C respectively, according to the algorithm.
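For what it's worth, the glyph-index math for the common case reduces to very little code.  This sketch is my own, covers only format 4 sub-tables in the idRangeOffset == 0 case (which is how that contiguous range behaves), and takes the segment data as already-parsed tuples rather than raw bytes:

```python
def glyph_index(char_code, segments):
    """Format 4 cmap lookup, idRangeOffset == 0 case only.

    segments: list of (start_code, end_code, id_delta) tuples pulled
    from the sub-table's startCode/endCode/idDelta arrays.
    """
    for start, end, delta in segments:
        if start <= char_code <= end:
            return (char_code + delta) & 0xFFFF  # arithmetic is modulo 65536
    return 0  # glyph 0 is the "missing character" glyph

# One segment mapping ASCII 0x20..0x7E onto glyph IDs 3..0x61 (delta = 3 - 0x20):
arial_like = [(0x20, 0x7E, 3 - 0x20)]
glyph_index(ord('W'), arial_like)  # 0x3A
glyph_index(ord('i'), arial_like)  # 0x4C
```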

With this information, it's time to scour the HMTX table for horizontal metrics.  The first thing in this table is an array of records containing the advance width and left side bearing of each glyph.  These values take two bytes apiece (four bytes per record), thus from the beginning of the HMTX table, the offset to the glyph you care about is (glyph index * 4).  With the table at offset 0x268, the path to the letter W leads me down (0x3A * 4 = 0xE8) more bytes, to a total offset of 0x350.  Here, I quickly learn the advance width for the letter W is:

0x07 0x8D, i.e. 1933 font units.

That's exactly what the Python program said with ttfquery & fonttools!
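That by-hand walk through HMTX translates directly into code.  Again a sketch of my own, assuming the glyph index is below the font's numberOfHMetrics count (from the HHEA table) so it gets a full four-byte record, and taking the table offset from wherever your font's directory says it lives (0x268 in my copy of Arial):

```python
import struct

def advance_width(ttf_bytes, hmtx_offset, glyph_index):
    # Each longHorMetric record is 4 bytes: advanceWidth (uint16)
    # followed by leftSideBearing (int16).
    rec = hmtx_offset + glyph_index * 4
    return struct.unpack_from(">H", ttf_bytes, rec)[0]
```

With Arial's bytes in hand, advance_width(arial_bytes, 0x268, 0x3A) should come back as 1933.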

By this time, I had (only just, by sheer coincidence, auspicious timing, serendipity, or whatever you want to call such good fortune) discovered that Microsoft scales its PPI to 96 rather than the 72 I had originally expected.  After trying (and failing) to see if there was a particular DPI used with image objects generated by PIL, I simply stuck (96.0/72.0) into the equation and confirmed visually that the values seen here in the HMTX table are in fact the values you can use to calculate the width of a TrueType font on a Microsoft Windows system.

It remains to be seen how this'll perform on Macs.  I anticipate the PPI will need to be something different; perhaps it will in fact be 72 on that platform.  We'll see...

An Aside

In researching the equation of fi = f + i, I stumbled across the notion of ligatures.  "ﬁ" is in fact a ligature, which was designed so that the parts of the "f" and the "i" that run together look coherent.  This brought me back to a time when I was very young and concerned with Evenflo products -- I am not a parent at this time, thus I was indeed a child the last time I dealt with them.  They had a very odd and poorly-designed "fi" ligature in their trademarked logo that led me to believe it was some kind of weird-looking "A".  It confused me, since it seemed odd anyone would name their product "EvenAo", as it's awkward to say, and I wondered what special significance that A had to be written so much differently and more fancifully than the other letters.  Just to jog your memory, here it is:

The Evenflo logo from when I was little

In my Google search, it seems apparent that they have adopted a new logo anyway, ditching an awkward ligature for something with nicer aesthetics overall and a modern vibe.  However, then another logo struck my fancy, especially with what turned up next to it:

Oh, how titillating.

Obviously having seen all these baby products, not to mention the mother with child, led me to believe the Tous Designer House logo was being quite suggestive.  As it turns out, the Tous logo is in fact a teddy bear.  Google, stop offering such awkward juxtapositions!

Thursday, October 15, 2015

Observing OCR Technologies for PDF Parsing

I’ve gotten the opportunity to investigate some Java-based OCR technologies recently for the purpose of analyzing PDFs, and wanted to write about some aspects of them that aren’t very well-documented.  I hope to incorporate this into these tools' documentation at some point, but for now, here it is... in loooong prose.


A Python wrapper for Tesseract

Couldn’t get this one working at all.  I was hoping to run it from Python, but it tends to claim that certain functions for parsing JPGs, TIFFs, and PNGs do not exist, even though Tesseract on the command line obviously handles these types of files adroitly.  It also has a dependency on CTesseract, which seems not to have been updated for the revised Tesseract APIs (function headers with more arguments) introduced in Tesseract version 3.03, so you have to install Tesseract 3.02 to work with CTesseract.


Tess4J

This was a real hassle to install on my Mac.  I first started by trying to compile everything from scratch with GCC, but faced a number of weird compilation problems.  Here was the (backwards) dependency chart:

  • libtool
    • Leptonica
      • Tesseract
      • Ghostscript
        • Tess4J

Once I installed Homebrew (brew) and set it up to install libtool, I was able to successfully compile the other libraries.  Then, Tess4J still required some dependencies in Java which weren’t easily resolved.  What did the trick was switching to a Maven project and simply using that to install Tess4J by adding this to my pom.xml file:


After simply allowing Maven to configure Tess4J, I was faced with configuring the location of Tess4J’s dependencies (various .dylib files on the Mac).  Since GhostScript & Tesseract ended up installing themselves in two different locations, preventing me from simply using a command-line variable (thanks to Eclipse not properly splitting on ; or : in the path used in  -Djava.library.path), I set up an environment variable on the VM called LD_LIBRARY_PATH, and set it to /opt/local/lib:/usr/local/Cellar/ghostscript/9.16/lib — the value I was hoping to put on the “command line” when running Java.

Once I reached this stage, it was time to utilize it to read from PDFs.  The results were very Tesseract-y (i.e. L’s tend to become |_), but luckily, it seemed to do a fairly good job overall.  However, it couldn’t read any data contained inside tables, which renders it relatively useless if you’re trying to parse data from, say, tax returns or product datasheets.  At first, I was thinking of finding a way to expose image-cropping tools from Leptonica to Java.  There is a nice solution for this in the Tess4J API, though, that’ll allow you to crop a PDF down to the specific area you care about:

// "rectangle" is a java.awt.Rectangle describing the crop area
File imageFile = new File("/path/to/my.pdf");
Tesseract instance = Tesseract.getInstance();
instance.doOCR(imageFile, rectangle);

Of course, one thing that’s not mentioned in the documentation about this bounding rectangle (yet is very important) is what units you actually need to use in order to make this rectangle.  Want to know the Tess4J bounding-box rectangle units?  They're pixels at the image's DPI.  As such, if you want a 2”x2" rectangle starting from (1”, 1”) down from the top left, and if your PDF is 300dpi, you would define your rectangle as follows:

instance.doOCR(imageFile, new Rectangle(300, 300, 600, 600));

Note that the rectangle is defined as (X distance from left, Y distance from top, width (to the right), height (downward)), all in "dpi-dots" (i.e. 300 "dpi-dots" per inch with a document of 300dpi).
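In other words, going from inches to Tess4J's rectangle is just a multiplication by the DPI.  A quick converter (my own naming, not part of Tess4J, and shown in Python for brevity):

```python
def tess4j_rect(x_in, y_in, w_in, h_in, dpi=300):
    """Convert a crop box given in inches (measured from the top-left
    corner) into the dpi-dot units Tess4J's Rectangle expects."""
    return (int(x_in * dpi), int(y_in * dpi),
            int(w_in * dpi), int(h_in * dpi))

tess4j_rect(1, 1, 2, 2)  # (300, 300, 600, 600), as in the example above
```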

Overall, once the installation headaches were solved, it works pretty nicely, and does exactly as expected when reading from fields.  However, the results are Tesseract-y, it is comparatively slow, and it fetches exactly what happens to fall within the rectangle, meaning it may crop letters and symbols that fall partly out of bounds.

Another interesting note is how some facets of this library appear to be aging: the argument taken by the Tesseract object’s doOCR() function is a File (java.io), which has been superseded by Files (java.nio.file) in Java 7.  This also seems to hold true for their slightly different Tesseract1 object.


iText

This is an extremely simple library to install if you have a Maven project.  All you need to do is add the following dependency:


Then add these imports:

import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.parser.*;

It is fairly simple to read an entire document.  The Java code is a touch more complex to set up for reading from a particular user-defined rectangle, though:

PdfReader reader = new PdfReader("/path/to/my.pdf");
RenderFilter filter = new RegionTextRenderFilter(rectangle);
TextExtractionStrategy strategy = new FilteredTextRenderListener(new LocationTextExtractionStrategy(), filter);
String output = PdfTextExtractor.getTextFromPage(reader, pageNum, strategy);

Nevertheless, it works flawlessly once you get it going.  However, finding the correct specification for the bounding rectangle was a bit tricky because, of course, the units iText prefers have nothing to do with the ones Tess4J uses.  Also, as with Tess4J, the units to use in the rectangle are not specified in the documentation.  It's as if we're expected to read the minds of the original developers.  Through experimentation (made difficult because it returns all text from any object contained within the rectangle, rather than strictly the text within the rectangle), I found that iText doesn’t want DPI-dots, but points (of which there are always 72 per inch).  Also, the Y-origin is set at the bottom of each page, which is actually the standard for PDF files (rather than at the top, which is how Tess4J counts).

Also, as mentioned earlier, iText pulls all text contained within any object whose bounds overlap the rectangle you specify, rather than simply the text within the rectangle.  I imagine this is because they’re actually reading the data from the PDF and pulling text directly from the objects rather than doing OCR.  As such, I haven’t seen any errors in the results from iText (e.g. no “L” -> “|_”), and it runs much faster than Tess4J.

To specify the bounding box for the same area as above (1” from the top left corner, and 2” each side), now we must assume you have a page that’s 11” tall (US Letter size, Portrait orientation).  In that case, you would use:

RenderFilter filter = new RegionTextRenderFilter(new Rectangle(72, 576, 144, 144));

As these arguments go, 72 sets your X distance as 1 inch away from the left edge, 576 sets your Y distance as 8 inches up from the bottom edge, 144 is the width going to the right of X, and 144 is the height of the rectangle going up from Y.
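Since keeping the two conventions straight is the whole trick here, a small converter helps.  This one (a sketch of my own, in Python for brevity) takes the same top-left-origin inches used for Tess4J above and produces the bottom-origin points described here, following the width/height reading of the arguments given above:

```python
def itext_rect(x_in, y_from_top_in, w_in, h_in, page_height_in=11.0):
    """Convert a crop box in inches measured from the top-left corner
    into points (72 per inch) with the bottom-left Y-origin that PDF,
    and therefore iText, uses.  Returns (x, y, width, height)."""
    y_from_bottom_in = page_height_in - y_from_top_in - h_in
    return (int(x_in * 72), int(y_from_bottom_in * 72),
            int(w_in * 72), int(h_in * 72))

itext_rect(1, 1, 2, 2)  # (72, 576, 144, 144), matching the call above
```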

Hopefully you find this useful in your quest to extract data from PDFs.  May your data-scraping activities go much smoother!

Thursday, October 1, 2015

Simple Example of Mapmaking with GIS

For a while, I have had it in mind to produce a physical map of stores and resources one might wish to go to before or during working on a project at Dallas Makerspace.  The map would consist of places one would go to procure raw materials, consumable supplies, or tools to finish a project in electronics, arts, woodworking, scientific endeavors, or you name it.  After spending a while scouring message board threads for local stores and resources previously mentioned by members, I called out for suggestions for any additional ones beyond those I scouted out.  After this, I sought out each place's address, geocoded it, and then visited each point using Web tools in order to test the accuracy of the geocoded list.

Here is a diary of my trials and tribulations throughout this journey.  I had not used GIS software before, and you should know that many people have spent years building in all sorts of intricacies for dealing with many situations, from the different datums one can select to orient a map all the way to accounting for atmospheric conditions when using aerial or satellite imagery as map layers.  As such, I will keep mentions of such riffraff and unnecessary steps and settings to a minimum, but know that you would need to learn these advanced settings in order to really build a map from scratch using all your own measurements and imagery.

Starting with QGIS on Linux

At first, QGIS seemed like a natural place to start.  I could use it on my Linux box with a huge 4K monitor, and it was easy to install from the Ubuntu Software Center.  There is a nice plugin for it called OpenLayers that really makes it easy to add nice raster map imagery from various OpenStreetMap sources.  However, I ran into 3 problems with QGIS version 2.10 "Pisa" on Linux:

1. Won't download all map imagery at once.

If the map is big and/or to be printed at a high DPI, this of course requires highly detailed map imagery which can take a long time to download.  Unfortunately, QGIS does not wait for all of the map tiles to finish downloading before it begins to render, so you will see only a rendered circle (imagine a Japanese flag where the red area is now a map) if you haven't waited long enough.

Here, I tried to show the same area with four different map servers.  You can see how fast (or not) the various servers responded to my request.

An example of a single map tile, © OpenStreetMap contributors.  Each map tile is designed to be 256*256 pixels.  All of the tiles containing imagery near this tile could be used to create a large map of the United Kingdom.

2. After upgrading from 2.8.0 to 2.10, it started giving me weird Python errors.  (Turns out these errors weren't really a big deal, at least for my use case.)

3. The print composer orients the points differently than the image renderer -- the image rendering is inaccurate and useless, especially for map insets.
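As an aside on those map tiles: OpenStreetMap-style "slippy map" servers carve the world into a pyramid of 256x256-pixel tiles, with 4^zoom tiles at each zoom level, which is why a big, high-DPI map means so many downloads.  The published OSM tile-numbering scheme boils down to this (my sketch of the standard Web Mercator math):

```python
import math

def deg2num(lat_deg, lon_deg, zoom):
    """Return the (x, y) index of the 256x256 OSM tile covering a
    given latitude/longitude at a given zoom level (Web Mercator)."""
    n = 2 ** zoom  # tiles per side at this zoom level
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return x, y

deg2num(51.5, -0.13, 0)  # (0, 0): one tile covers the whole world at zoom 0
```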

Can GRASS GIS do any better?

To work around the problems faced in Linux, I installed GRASS GIS for Windows.  It makes you set up some things in advance, whereas QGIS lets you start going to town right away.  The interface can be a bit confusing and intimidating at first, but once you realize that most windows need to be expanded for you to see everything, and that there is more than just one window, it becomes easy to navigate.

I tried to import my CSV file until I realized the Address field had commas in it too.  This was throwing off the import wizard.  I re-exported the CSV file as a tab-delimited file, and at least that problem was fixed.  However, GRASS still gave me a fair share of problems:

1. Relatively obscure error messages that don't exactly tell you why things fail.

2. Column names as you set up your database can't have spaces in them -- or if they can, you may need to surround them with single quotes or escaped double quotes in your command.

3. GRASS GIS kept emptying out the contents of the points file at some particular step.  I had to leave the file open and make sure to Save it each time the text editor told me the contents had changed.

4. I'm not sure if it likes file names with spaces in them.  Either it wasn't reading the file because it wasn't putting quotes around the "in=filename.txt" part of the command string, or it was trying to read from an empty file.  (I thought the whole reason of hitting the "Load" button was so the program would actually parse the points data from the Preview textbox or from allocated memory rather than having to reread it again once you hit "Run".)

5. Given all the frustrating failures I was having with importing my points file, I actually tried Command Line mode for a while.  Naturally, the instructions on this page didn't work for me because, of course, they'd released a whole 'nother major revision of GRASS GIS since I'd installed it, but it gave me a good starting point nonetheless.  After successfully importing my data, I tried to switch back to GRASS GUI but could not see the data I'd just imported!!! Why not?!?  I ended up having to re-import it through the GUI, carefully making sure it reconstructed the exact query I needed in order to get it to import correctly.

6. GRASS GIS can't handle special characters when parsing database data because its Python scripts haven't been set up to handle character values above 127 (i.e. beyond plain ASCII).

Obviously, as a new user myself, there are probably questions and blocking issues in here that someone more familiar with the program could address.  Maybe I just need to ditch the old version and try the newer one for a better user experience.  Nevertheless, with all these headaches, I finally gave up and embarked on the last frontier: Mac OSX.

Something that actually worked

There is, fortunately, a QGIS version built for Mac OSX.  It does require the manual installation of some dependencies, but it comes out just like the Linux version.  I quickly set up my desired map style and layers, then built the print composer to specify where to put the map, scale, titles, grid marks, and other text on the final rendering.  I checked to make sure the points came out the same between the map view and the Print Composer view, and sure enough, they came out OK.  It was time to hold my breath, not try anything different or unusual, and render the map.

For these specific steps, assuming you have QGIS for Mac and OpenLayers already installed:

  1. Add your Base Map layer.  In the menu, go to Web -> OpenLayers plugin and then select your map provider and map style.  Use your mouse to position the map in the window as desired.
  2. Add your data layer(s).  Go to Layer -> Add Layer -> Add Delimited Text Layer... (assuming you have a CSV-formatted list of points), then follow the prompts to guide it to your attribute names, which columns are latitude & longitude, and other such settings as desired.
  3. Specify the Coordinate Reference System (CRS) of choice if it hasn't prompted you to do so yet.  Check with your coordinates provider to see which CRS/datum they base their coordinates off of.  I typically use one of the World Geodetic System 1984 (WGS84) sets.  To specify this, go into Project -> Project Properties -> Coordinate reference systems of the world, and make sure QGIS confirms your selection as the one to use in your project.
  4. Fine-tune the symbols used for your placemarks.  Notice the Layers panel in the bottom left-hand portion of the QGIS window, where you should now have at least two layers: one for your base map, plus one for each data layer you added.  Right-click on one of your data layers and hit Properties.  In the Style tab, you can select among several different ways to assign colors to your placemarks via the dropdown list box on the top left.  Graduated is good for data on a continuous numeric scale.  Categorized is good for points associated with non-numeric categories, such as the type of store it is.  After you choose which "Column" contains the data on which you wish to index, use this panel to select the color, point type, and point size to associate with each category you specify.
  5. Ideally, your very simple map is now positioned within the window just as you imagined it.  Now, it's time to export it into an image file for uploading or printing.  Go to Project -> New Print Composer and enter a name for your new print composer.  All this represents is one specific arrangement for which you wish to export the map.  Imagine if you want to make a version of a map to hang in a police station, a version to use in the car (if you're mobile-app averse :-P), and a version to hand out to citizens on a pamphlet; you could set up several print composers in your project to format it just perfectly for the different media you're using for each purpose.
  6. Set the media size.  On the right, there is a spot where you can pick between three different panels: "Composition", "Item properties", and "Atlas generation".  Choose "Composition", then specify your paper size, resolution in DPI, and orientation.
  7. Take time to learn the toolbars in the Composer Editor.  There are very few words in this view, so the icons will really help you out.  The most important one is the "Add new map" icon.  Click this icon and drag along the area which you wish to add the new map.
  8. Set up your map.  Notice the "Item properties" panel on the right side.  Hopefully you have your map object selected in the print composer.  If so, you should be able to click "Set to map canvas extent" in the "Item properties" pane so it shows exactly what you intend it to.  Chances are the aspect ratio of your map canvas (the original window you started working with) is not exactly the same as the media you chose for the Print Composer, so you may need to scroll up a bit and adjust the Scale (zoom level) of the map.  You can also tweak the map data placement by adjusting the map in the map canvas and clicking "Set to map canvas extent" again.
  9. Add other features, such as text labels with the map's name, your name, attribution information, scale bars, grids, shapes, legends, or whatever makes you happy.
  10. Export your image.  In the menu bar, go to Composer -> Export as Image... and choose the name of your output file.  Take careful note of the output; just because it finishes (i.e. unfreezes itself) does not mean it will render what you expect.

Unfortunately, for several minutes after opening the print composer, it renders just a filled circle of map imagery, not the whole map, similar to the Linux version above.

I continued attempting to render the map, and each time, the print composer took a little longer to finish rendering, but at least the rendered circle got a little bigger too.  Eventually, after about the 13th attempt, the whole map was filled in.  Great!  Now it's time to make the one inset I need to put in to show a far-away town.

Oh crap, now it needs to re-download the imagery for both maps once again... and eventually, only certain random tiles from the original map are showing.  I restarted QGIS, then took off the original map so that only the inset is supposed to be rendered.  (To "take off" items in this context means I excluded them from the render without necessarily deleting them from the print composer.  To do this, click on the desired item, go to the "Item properties" view, and scroll down to the "Rendering" options.  Simply check the box for "Exclude item from exports".)  Now when I try to render the inset, it comes out with a completely different part of the map than what it's supposed to be rendering.  Not sure what that's all about, but it's highly annoying.  I eventually give up on doing the inset, leaving it blank and saving a spot for it so I can print it and paste it on later.

Each step of the way, I'm careful to save the final output with an incremental number, and am diligent in deleting useless copies I won't be able to send to the print shop.  After the fiasco with the inset map, I simply leave a rectangle for where it should go, and then re-enable rendering of the original map.  Now it has to spend a bunch of time re-downloading map imagery before I finally get the desired output, which can now get sent to the print shop.

A Physically Interesting Final Result

I went through some A/B testing exercises with folks at the Dallas Makerspace (really, more like A/B/C/D and A/B/C testing) regarding which map style and what type of points would be preferred.  I may have forgotten to take into account their preference for map style when creating the final output (oh well, that's what Version 2.0 is for), but among the ways to present placemarks, there was a clear winner.  The choices presented were to print the placemarks directly on the map, use strong magnets as placemarks, or use standard pushpins.  User feedback was that printed placemarks would get out of date and would obsolete the map in the event that places had to be added or (especially) removed; magnets could get disturbed by accidental or malicious activities; and pushpins would leave holes where they once existed, leaving the map to look tattered after years' worth of updates.  Weighing all these alternatives, the pushpin approach came out ahead, but I still sought to keep compatibility with magnetic markers for temporary placemarks.

For the placemarks, I considered what colors tend to be evoked when one thinks of a particular type of material or supply.  Given the number of categories, and my disdain for graphs with so many lines that the colors run together and become very hard to distinguish, I also paired each color with a shape; the shapes represent "category groups" such as "Consumables/Materials" and "Tools".  Since I am looking out for those with disabilities too, I installed Color Oracle on my Mac to test what all these colors would look like to colorblind people.  Color blindness is a very common disability mostly seen in men, and since the DMS membership is overwhelmingly men, this is an especially important situation to be aware of given the population.  I spent time rotating between the three forms of color blindness one can test for: deuteranopia, protanopia, and tritanopia.  Among these, I found deuteranopia and protanopia to be quite ugly, so if I had to choose what colorblindness to get, I would definitely pick tritanopia (the rarest form).  It would still suck to have, but at least you don't have so many ugly yellow colors.
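To illustrate the pairing idea: each category gets its own color, while its shape comes from the broader category group, so two markers stay tellable apart even when their colors look alike to a colorblind viewer.  The group names come from the scheme above, but the categories, hex colors, and shape names below are illustrative stand-ins, not the actual legend:

```python
# Shapes are assigned per category group, colors per category, so every
# marker carries two independent visual cues.
GROUP_SHAPES = {
    "Consumables/Materials": "circle",
    "Tools": "square",
}

# Hypothetical categories: (category group, color)
CATEGORY_STYLES = {
    "Wood":        ("Consumables/Materials", "#8B4513"),
    "Metal":       ("Consumables/Materials", "#708090"),
    "Hand Tools":  ("Tools", "#1F77B4"),
    "Power Tools": ("Tools", "#D62728"),
}

def marker_style(category):
    """Return (shape, color) for a category: the shape comes from its
    category group, the color from the category itself."""
    group, color = CATEGORY_STYLES[category]
    return GROUP_SHAPES[group], color
```

With this layout, a viewer who can't tell the brown "Wood" dot from the red "Power Tools" dot can still separate them by circle versus square.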

Magnetic backings are available in several forms.  The most common is, of course, a whiteboard, but whiteboards are impenetrable to common pushpins -- not to mention expensive per square foot.  Another approach recommended to me in the Dallas Makerspace Forums was to use magnetic paint.  I'd never heard of it before, and after reading some reviews, I was a bit skeptical.  I obtained a scrap piece of foam core and applied three coats of magnetic paint to one side.  Here are my hints for optimal magnetic paint application:

  • Stir it well, as the globs of magnetic ore tend to clump together over time, leaving you with nothing but a heavy oil.  If stirring it seems risky (because you're wearing your nice slacks and shirt), just grab one of the big globs with your stir stick and mush it down with your paint roller.
  • Apply several coats.  One won't do it for you.  Even after several coats, areas where you may have spread around clumps of ore will come out stronger than where you applied just the paint by itself.
  • You may have had the Paint Department at your hardware store stir it up for you, but you will need to stir it again.  Even after spending time in their machine, it was still really clumpy by the time I got it back home.

The paint is also extremely oily, so bear that in mind when you choose your brush/roller.  Nevertheless, I was very satisfied with the outcome, and I can actually hold up the whole piece of sample foam by gripping just one neodymium magnet stuck to it.

After getting the printout and proper foam core delivered from the shop, I had just a basic map with really small dots guiding where I needed to place the pin markers indicating each location I chose to show and exactly what type of place it is.  It took about an hour to place all the pins in their right locations.  Without clipping all these pins shorter, it's impossible to put the map in a conventional frame and mount it against the wall, so currently it's standing on a table, leaning against the wall.

The index of all these places took me a little while to construct, as it's the heart and soul of what makes this map actually meaningful.  The places listed are formatted into 2 columns, and organized into categories with a Table of Contents.  The next step is to add color-coded tabs along the side so people can flip directly to the desired category.
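The two-column layout of the index can be mocked up with a few lines of Python.  This sketch fills the left column first, the way a printed index reads top-to-bottom; the column width is an assumed constant, not a measurement from the actual printout:

```python
def two_columns(entries, width=38):
    """Lay out index entries in two side-by-side text columns,
    filling the left column before the right one."""
    half = (len(entries) + 1) // 2          # left column gets the extra entry
    left, right = entries[:half], entries[half:]
    right += [""] * (len(left) - len(right))  # pad so zip() covers every row
    return "\n".join("%-*s%s" % (width, l, r) for l, r in zip(left, right))
```

For example, `two_columns(["Lumber", "Metal", "Paint"])` produces two lines, with "Lumber" and "Paint" on the first row and "Metal" alone on the second.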

While this map lists only one or two chain stores for this entire metro region, we are planning a Web-based version of this Makers' Markers project that will allow people to filter by location as well as by which places are currently open.  Nevertheless, this experience has broadened my horizons in the endeavor of producing nice static maps with more flexibility than Google Maps or OpenStreetMap alone afford.
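The "open now" filter for the planned web version could look something like this sketch; the place names and posted hours are made up for illustration, and real data would need to handle split shifts and overnight hours:

```python
from datetime import time

# Hypothetical hours data: each place maps weekday (0 = Monday)
# to an (open, close) pair of times.
PLACES = {
    "Hardwood Supplier": {d: (time(8, 0), time(17, 0)) for d in range(5)},
    "24h Hackerspace":   {d: (time(0, 0), time(23, 59)) for d in range(7)},
}

def open_now(places, weekday, now):
    """Return the names of places whose posted hours include `now`
    on the given weekday."""
    return [name for name, hours in places.items()
            if weekday in hours and hours[weekday][0] <= now <= hours[weekday][1]]
```

So on a Sunday at noon only the round-the-clock place comes back, while on a Wednesday morning both do.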