Brython: Python for Scripting Webpages

I’ve just discovered Brython, a project that lets me script webpages using Python instead of JavaScript.

Getting started is really easy:

  1. Import the brython.js library
  2. Add an onload handler to your <body> tag to boot Brython
  3. Write your code inside <script type="text/python"> tags
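
Putting those three steps together, a minimal page looks something like this. This is my sketch rather than an official template, and the CDN URL is an assumption (any copy of brython.js will do):

```html
<!DOCTYPE html>
<html>
<head>
    <!-- Step 1: import the Brython library (CDN path is illustrative) -->
    <script src=""></script>
</head>
<!-- Step 2: boot Brython from the body's onload handler -->
<body onload="brython()">
    <!-- Step 3: write Python inside a text/python script tag -->
    <script type="text/python">
from browser import alert
alert("Hello from Python")
    </script>
</body>
</html>
```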

Here’s how to use the translate function from my previous post.

Note that the translation won’t work unless you set up your own account key on the Google Translate API. Drop me a line if you’d like an IP white-listed.

I’m sure there will be gotchas using Brython. I will spend some time investigating further.

Google Translate Whispers Game – Calling the API


For the next Manchester CoderDojo, #3 son and I plan to run an HTML/CSS/JavaScript session for the attendees to build their own game. The game idea is to use multiple iterations against the Google Translate API to mangle an English phrase and see whether the players can guess what it was. As this is aimed at novice coders we’d like it to be:

  1. Friction-free to get started – no installing stuff
  2. As little boiler-plate typing as possible – fill in the blanks and tweak rather than staring at a blank page and typing

Starting Point

We know what we’re going to use for the web page stuff:

  1. For hosting the web pages we’ll use CodePen – simple interface and easy sharing
  2. For writing HTML and CSS we’ll use the editor from HTML CheatSheets as a way to start editing snippets of web code without so much typing

What Next?

The next thing to sort out is how to make the basic call to the Google Translate API. We need something that hides the complexity of the API and leaves an interface something like:

translateFromTo("This original text", "english", "french");

Calling the Google Translate API

Google needs a few details to be covered off before you can actually use their translation API.

Google Cloud Account and API Key

You can’t just call the translate end point. You need to have a Google Cloud account set up and create an API key to access the translation service. You end up being on the hook for usage of the API key. The problem from the CoderDojo perspective is that all the attendees are going to need to know the key, even if it is buried in a library, to be able to make translations. Here’s the plan:

  1. Lock the API usage down to specific IP addresses. All attendees are likely to be using the site wifi, so they will present to Google as the same IP address. These are settings the Google Cloud Console lets you change
  2. Put a reminder in my calendar to turn off the API once the dojo is over

Implementing the Library Function

Blocking Network Calls

Modern JavaScript is all about asynchronous event-based programming. While this is great for building responsive user interfaces it is a terrible model for new coders. It’s just too conceptually difficult. I want the translateFromTo() function to block the program execution until it synchronously returns a value. For synchronous HTTP requests out of the browser you need to turn to the old-school XMLHttpRequest and turn off the async flag when constructing a request with open().

Using the Google API Key

The other wrinkle to be aware of when implementing code that calls Google APIs is to pick the correct method for authenticating to the API service. The Google documentation tacitly makes the assumption that you’ll be using the OAuth service to get your users to authenticate through to Google before being allowed to use the API. This is not what we want for the CoderDojo. Rather, as described above, we want to use the simple API key. The use of the API key involves adding a key= parameter to the request URL. The Google documentation for how to use the API in this mode is here.

The Code

Here’s the full code for the function:

Deploying Into CodePen

The final step is to load this code into a pen. This requires clicking on the settings cog of the JavaScript panel and adding in the file as an external resource. To host the resource from GitHub I’ll use the RawGit CDN Service.

You can see the CodePen for yourself here:

Coding HTML and CSS at the Manchester CoderDojo

I’m starting to prepare what we’re going to do at the June 2018 Manchester CoderDojo. It’s going to be something web-based as #3 son has started enjoying playing around with the Mozilla Thimble tool. The question is what tools we can use.

It is a perennial problem to find tools that will help the CoderDojo attendees (typically in the 10-14 age range) work on whatever project we are doing that month. We get a wide collection of random PCs and operating systems showing up at the dojo. Something web-based definitely saves loads of headaches setting things up. This month I’m going to have a go at using HTML Editor and HTML Cheatsheet. They are simple tools for exploring and experimenting with HTML and CSS. I think they have the right level of simplicity and interactivity.

Anyone with experience of using these, or other HTML/CSS tools, for kids’ starter projects please drop me a line.

Azure CosmosDB is Too Expensive for Experimenting. Alternative: MongoDB Atlas


Recently I’ve been experimenting with Azure Functions, and I’ve got to the point where I wanted to play with Functions interacting with a storage layer. Azure CosmosDB was the obvious choice. I went through the process of setting up a database and turning all the performance dials down to their minimum settings. Nonetheless, after four days playing with my experimental DB I realised that it was going to cost me more than £20 (UK) per month to keep my toy DB running. I needed an alternative.

MongoDB Atlas

MongoDB Atlas is the hosted DB service from the people behind MongoDB. For my purposes it is attractive as it provides a free tier with up to 500MB of storage.

I set up an account and downloaded the Compass DB management tool to work with the data. One snag I ran into was that if your password has special characters in it then it needs to be URI encoded before entering it into the Compass login screen. This held me up for a good couple of hours as the error message back from Compass was the cryptic, “Missing delimiting slash between hosts and options”.
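
JavaScript’s built-in encodeURIComponent() does that encoding; the password below is just an invented example:

```javascript
// Percent-encode a password before pasting it into a MongoDB connection
// string or the Compass login screen. "p@ss/word" is an illustrative value.
const password = "p@ss/word";
const encoded = encodeURIComponent(password);
console.log(encoded); // -> p%40ss%2Fword
```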

Finally I had my database running and had entered some test data.


Connecting Azure Functions to MongoDB Atlas

I pulled my connection string information over from Atlas and stored it in the application keys (See my previous post on storing API keys for Azure Functions).

Now I needed to open up the Atlas firewall to allow inbound connections from Azure. This is non-trivial since Azure Functions will make outbound connections from any of the IP ranges for their whole data centre. See the Microsoft article explaining outbound IP addresses. I’m hosting in “UK West” and at the time of writing the data centre had 24 different IP ranges. Given that I only have toy data in my DB I decided to allow access from all IPs. If you have a real-world example you will need to implement some process to lock this down.

With this setup complete I now have some working code, see below, to show Azure Functions connecting to MongoDB Atlas…and it is free!

module.exports = function (context, req) {
    context.log("Starting Atlas example");
    const mongoClient = require("mongodb").MongoClient;

    function opendb() {
        const url = process.env["atlasurl"];
        context.log("Attempting to connect");
        const db_promise = mongoClient.connect(url);
        return db_promise;
    }

    function readdata(db) {
        context.log("Accessing sleepsuntil");
        let dbo = db.db("sleepsuntil");
        context.log("Got DBO " + dbo);
        let query = { key: "example1" };
        context.log("Starting query");
        let results = dbo.collection("testing").findOne(query);
        context.log("Got results");
        return results;
    }

    opendb()
        .then((db) => {
            context.log("Return from open was: " + db);
            return readdata(db);
        })
        .then((results) => {
            context.log("Read: " + JSON.stringify(results));
            // The original response body was truncated; a minimal stand-in:
            context.res.body = "<p>" + JSON.stringify(results) + "</p>";
            context.res.headers = { "Content-Type": "text/html" };
            context.done();
        })
        .catch((msg) => {
            context.log("Error caught: " + msg);
            context.done();
        });
};

Managing Azure Functions API Keys


I’ve been working on example code to use the JavaScript MongoDB driver to work with the Azure Cosmos DB. To connect to the DB I’ve had to manage my DB API keys – the secrets that allow only me to get at my data. Here’s how to do that in Azure Functions.

Azure Functions Environment Variables

The simplest way to store API keys for use in Azure functions is to write them to the environment variables using the Portal web interface and then read in the environment variable as the script runs.

Setting Environment Variables

To set an environment variable open up the Azure Portal, navigate to your Function app and click the “Application settings” link.


Once the tile opens, go down to the “Application settings” area and click the plus sign to add a new setting. Give the new setting the key “example” and any value you like. Then scroll back to the top of the page and click “Save”.

In your JavaScript function you read the application setting value from the process.env object.

Here is some code to show you the value that you just set.

module.exports = function (context, req) {
    context.res.body = process.env["example"];
    context.done();
};


API Keys From Applications Settings

Now we have a simple way to look after API keys: set the value in an application setting and read into our code using process.env. My code to connect to my Azure CosmosDB is something like this:

let url = process.env["cosmosurl"]; // "cosmosurl" is an illustrative setting name

How to Setup Git Deployment of Azure Web Apps

The Azure Portal has had me stumped on an apparently simple task for the past couple of days. I became lost in the UI while trying to create a new Azure Web App that I could deploy changes to by pushing to a git repository. Here are the steps you actually need to get this done…

  1. Log in to the Azure Portal
  2. Top left of the screen click “Create a resource”
  3. In the search box type “node empty” and pick the “Node JS Empty Web App”

  4. Fill in details for the new web app and optionally click “App service plan/location” if you want to change the size, and therefore cost, of the instance, and where it is located in the world

  5. Press “Create” and wait for your app to deploy
  6. Open the app and click the URL to make sure the deployment is actually now serving web pages
  7. This is where I became stuck. Just how do you get the git deployment working from here?
  8. In the “Deployment” sidebar menu click “Deployment credentials” and make sure you have a username and password set up
  9. Click the “Deployment options” two menus down. Pick “Choose source” then “Local Git Repository”. You may have to disconnect any existing options using the button on the top menu of the deployment options tile
  10. Now you need to scroll halfway down the sidebar menu to find the “Properties” tile. In that tile is the git URL for your web app
  11. On your local PC git clone that git URL. You’ll need the username and password you created earlier to log in
  12. Git should then clone you a directory structure containing a simple Node web app, including a server.js file
  13. Edit the file server.js changing the text res.end('Hello, world!'); to something of your choosing
  14. Commit the change in git then do a git push
  15. Reload the URL of your Azure app and you should see your changes
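
The git half of those steps (11 to 15) boils down to a short command-line session. The app name, username and commit message here are placeholders; use the git URL from your app’s “Properties” tile:

```shell
# Placeholder URL -- substitute the one from the Properties tile
git clone https://<username>@<appname><appname>.git
cd <appname>

# Step 13: edit server.js, then commit and push
git add server.js
git commit -m "Change the hello world message"
git push    # prompts for your deployment password

# The push output ends with Azure's remote deployment log
```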

Exercise: Test Driven Development With Azure Functions


Use a unit test framework to create a test driven development (TDD) pipeline for the “Sleeps Until” Azure Function we’ve been building.


The main feature of a good unit test is that it must be fast. You need to be running the tests repeatedly while developing and if your tests are slow you won’t keep up the discipline. Fast implies that you can’t wait on a network call to run the tests; they must run locally.

The Azure team has created a set of tools to allow you to run your functions locally (here are the docs), but this is too heavyweight. We’re trying to set up unit tests. We don’t need to integrate the whole functions stack. All we need to do is run a little JavaScript code.

Working Example

Let’s try and use TDD to refactor some candidate code from the Sleepsuntil code base.

Example code to refactor:


We want to pull that into a single function, has_required_params(), and we want to develop the test first.

We need to write a test, test_has_required_params(), that asserts some things about how the code should work and then refactor the existing code out into a new function. The problem is how to have a separate test file and have that call the function under test. Azure Functions won’t let us export more than the one function from a single file, see here:

All JavaScript functions must export a single function via module.exports for the runtime to find the function and run it. This function must always include a context object.

As a solution we will pull the code to refactor into a separate file, sleeps.js, and then export the functions we need back to both the test files and the Azure HTTP Trigger function.

Steps to Write the Code

We’re going to need a unit testing framework to build with. There are several available. We’re picking the AVA unit test framework for its simplicity.

  • Let’s start writing the testing function in test_sleeps.js
"use strict";

import test from 'ava';
  • That should be enough to have a breaking test. On the console run npm test. This of course fails since we’ve not installed AVA yet:


  • npm install --save-dev ava@next
  • Update the package.json file to run the AVA tests: `"test": "ava"`
  • npm test now complains, correctly, that it can’t find any test files
  • Add a section to package.json to tell AVA where to find the tests:
  "ava": {
    "files": ["test_*.js"]
  }
  • Now AVA runs fine but complains it has no tests.
  • Add some simple test code. Remember we’re testing whether the correct parameters have been passed to the function. These parameters will be in the Azure Functions request object:
import sleeps from "./sleeps.js";

test("Request has required parameters", 
    function (t) {
        const req = { "query" : {}};
        t.false(sleeps.has_required_params(req));
    });
  • This now fails since we haven’t written the sleeps.js code yet. Let’s start with the simplest thing that changes the test result:
"use strict";

function has_required_params(req) {
    return false;
}

module.exports.has_required_params = has_required_params;
  • Now for the first time we have a passing test!
  • What remains is to iteratively add test cases until we have test coverage:
test("Request has required parameters", 
    function (t) {
        const req = { "query" : {}};
        t.false(sleeps.has_required_params(req));

        req.query = {"year": "1234"};
        t.false(sleeps.has_required_params(req));

        req.query = {
            "year": "1234",
            "month" : "56",
            "day": "78"};
        t.true(sleeps.has_required_params(req));
    });


…and the associated implementation in sleeps.js

function has_required_params(req) {
    return Boolean(req.query.year) 
        && Boolean(req.query.month)
        && Boolean(;
}

module.exports.has_required_params = has_required_params;
  • The last step is to plug that back into the Azure Functions code:
import sleeps from "./sleeps.js";


    if (sleeps.has_required_params(req)) {
        const target = moment({year: req.query.year,
            month: (parseInt(req.query.month) - 1), // JS Dates are zero indexed!!!
            day: });


…and git push to deploy up to the Azure Portal and test.


  • Except that didn’t work because Azure Functions don’t support ES6 imports yet:


  • Modify the code to use the Node require() form of module import, changing the test code first and then the Azure Function, of course.
const sleeps = require("./sleeps.js");


We’ve created a unit-testing pipeline for Azure Functions allowing us to develop in a test-first style. In the process we have factored application logic out of the Azure Functions code, neatly separating the concerns of application logic and web-request handling.