Since my last post, I've been highly focused on Tensorflow projects at home and at work. In the process of running Tensorflow behind an API, I've needed to make code changes to the "secret sauce" (business logic) that stands before Tensorflow and actually provides it with its data. This logic could be a pipeline of multiple Tensorflow models chained together, image manipulation, post-processing of the model's output, or any number of other things. Unfortunately, it is slow and wasteful to constantly restart the whole server (including reinitializing Tensorflow for 20 or 30 seconds), especially when you've simply made a typo or used the wrong variable name or something like that.
Besides the Tensorflow work, I've been involved in many blog-worthy pursuits since my last post but simply haven't had time to write about them. (In fact, I meant to write this last week, but forgot.) Anyway, at the end of June, right before my previous post, I began running biweekly meetups called "Tensorflow Tuesday Office Hours." Here, interested people get together in various locations around town to talk about Tensorflow and get their questions addressed, whether it be about installation, scaling it up, mathematical questions, or picking a model. In the process of helping people install, I decided it'd be worthwhile to try the mainline Tensorflow Docker image that includes Jupyter notebook support, rather than the "devel" image that offers command-line access only but includes more of the Tensorflow GitHub repo. It had been many years since I'd used Jupyter, and I'd forgotten its benefits; for far too long, I fought the plain Python shell to enter long functions and tweak specific lines in them. Of course, with Jupyter, you just click on what you want to tweak, then rerun that code snippet. (2013 called me and congratulated me on this rediscovery. :-P)
It didn't take me long to realize I could use a Jupyter notebook to run a Python server and redefine, on the fly, the route functions the server calls when a request hits a particular endpoint. This would let me make small tweaks to the business logic -- to test accuracy, improve performance, or simply fix typos -- without having to wait on Tensorslow [sic] to restart.
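The core trick, stripped of any web framework, is just late binding: if the server looks up the handler function at request time instead of capturing it at startup, rerunning the cell that defines the handler changes the server's behavior immediately. Here's a minimal, framework-free sketch of that idea (names are illustrative, not taken from my actual notebook):

```python
# A registry of route handlers, looked up by path at call time.
ROUTES = {}

def route(path):
    """Register a handler for a path; rerunning a decorated cell replaces it."""
    def register(func):
        ROUTES[path] = func
        return func
    return register

@route("/hello")
def hello(request):
    # Edit this function and rerun its cell -- no server restart needed.
    return "Hello, world!"

def dispatch(path, request=None):
    # The dict lookup happens per request, so the newest handler always wins.
    return ROUTES[path](request)

print(dispatch("/hello"))  # → Hello, world!
```

Rerunning the `@route("/hello")` cell with new code swaps the entry in `ROUTES`, and the very next `dispatch` call picks it up.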
The original application I was going to test this with was written using a Flask server. Flask is a popular choice for quick proofs of concept written in Python, but it has many downsides that make it unsuitable for production. And no matter how I tried to swap out the underlying route function that Flask would call, starting a Flask server simply took over the entire Jupyter kernel, and no other cells could run once it started. Maybe further research would uncover why, or how to get around it, but since the app was being ported to Tornado anyway, I put the Flask research to bed and attempted this with Tornado. To make a long story short, I got it working, and can now change the functions that Tornado runs whenever it serves a route.
Where does the code live?
Check out my Jupyter notebook on GitHub here: https://github.com/mrcity/baby-tornado
In this notebook, simply run the first three cells in order. Each time you want to change what a particular API endpoint and request type does, just edit the relevant cell and rerun it. Call your endpoint again and observe the change!
As far as Tensorflow is concerned, you could initialize it in the notebook in stage 1, load the model in stage 3, and then never have to worry about those steps again -- just change your business logic in stage 2. Enjoy!