Web push notifications using Python

Building web push notifications should be straightforward. But while implementing them, I found the resources lacking: there is no straightforward guide to follow, whether you are adding push to an existing stack or starting from scratch. That makes sense, as the technology is still in its early days. In this blog post I will cover, step by step, how to build a web push service. Whether you are implementing a push service from scratch or integrating it into an existing application, this post should help you reach your goal. There will be the following sections:

  • How web push works
  • Building a push service
  • Browser support
  • References

How web push works

At a high level, web push needs three parties/components to work. Those are:

  • Client-side application: gets the user's permission, obtains the push subscription token, and sends it to the backend service.
  • Push service: validates push requests coming from the backend service and forwards the push message to the appropriate browser.
  • Backend service: persists users' subscription information and initiates push sending.

Steps to send/receive a web push notification

  1. The user accepts the push permission prompt and the browser generates a push subscription token by communicating with the Push API.
  2. The client app sends the subscription information to the backend service, which persists it and uses it in the next steps.
  3. The backend service initiates the push and sends the payload to the specific push service (denoted in the user's subscription information).
  4. The push service receives the push message and forwards it to the specific user, and the browser displays the notification.
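The PushSubscription object that moves through steps 1–2 is plain JSON; a minimal sketch of its shape (the endpoint and key values below are illustrative placeholders, not real credentials):

```python
import json

# What a serialized PushSubscription typically looks like (placeholder values):
raw = """{
  "endpoint": "https://fcm.googleapis.com/fcm/send/abc123",
  "keys": {"p256dh": "CLIENT_PUBLIC_KEY", "auth": "AUTH_SECRET"}
}"""

subscription = json.loads(raw)

# The endpoint tells the backend which push service to contact in step 3;
# the keys are used to encrypt the payload for this one browser.
endpoint = subscription["endpoint"]
keys = subscription["keys"]
```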

Backend service using Python

We will build a REST interface that communicates with the client application and the push service. It will store users' subscription information and distribute the VAPID public key. VAPID is short for Voluntary Application Server Identification; the generated public key will be used by the client app. We will need to develop the following API endpoints:

  1. /subscription
    • GET – to get the VAPID public key
    • POST – to store subscription information
  2. /push
    • POST – will send a push request to all users (will be used for testing)
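The endpoints above can be sketched, framework-agnostically, as plain functions over an in-memory store; the function names and the list are assumptions for illustration, not the actual service code:

```python
# In-memory sketch of the three handlers (illustration only).
SUBSCRIPTIONS = []
VAPID_PUBLIC_KEY = "URL_SAFE_BASE64_PUBLIC_KEY"  # placeholder value

def get_subscription():
    """GET /subscription: hand the VAPID public key to the client app."""
    return {"public_key": VAPID_PUBLIC_KEY}

def post_subscription(subscription_info):
    """POST /subscription: persist the client's PushSubscription object."""
    SUBSCRIPTIONS.append(subscription_info)
    return {"status": "success"}

def post_push(message_body):
    """POST /push: queue a push for every stored subscription (testing helper)."""
    return [(sub["endpoint"], message_body) for sub in SUBSCRIPTIONS]
```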

Generate the VAPID keys via the following commands:

openssl ecparam -name prime256v1 -genkey -noout -out vapid_private.pem
openssl ec -in vapid_private.pem -pubout -out vapid_public.pem

Create base64 encoded DER representation of the keys:

openssl ec -in vapid_private.pem -outform DER|tail -c +8|head -c 32|base64|tr -d '=' |tr '/+' '_-' >> private_key.txt

openssl ec -in vapid_private.pem -pubout -outform DER|tail -c 65|base64|tr -d '=' |tr '/+' '_-' >> public_key.txt
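The tail/base64/tr pipeline emits URL-safe base64 with the '=' padding stripped; the same transform, sketched in Python on a 32-byte stand-in (not a real private key):

```python
import base64

# Take raw key bytes and produce unpadded, URL-safe base64, exactly like
# base64 | tr -d '=' | tr '/+' '_-' in the shell commands above.
raw_key = bytes(range(32))  # stand-in for the 32-byte EC private scalar
encoded = base64.urlsafe_b64encode(raw_key).rstrip(b"=").decode()
```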

These VAPID keys will be used in the newly developed backend service. We will use the pywebpush library to send the web push notifications, wrapping the push call like below with the newly generated keys:

import os
from pywebpush import webpush, WebPushException


VAPID_PRIVATE_KEY = open(DER_BASE64_ENCODED_PRIVATE_KEY_FILE_PATH, "r").readline().strip("\n")

VAPID_CLAIMS = {
    "sub": "mailto:youremail"
}


def send_web_push(subscription_information, message_body):
    return webpush(
        subscription_info=subscription_information,
        data=message_body,
        vapid_private_key=VAPID_PRIVATE_KEY,
        vapid_claims=VAPID_CLAIMS
    )

The full service code can be found here as a gist. Follow the gist README for details about running the service.

Frontend application to test the backend service

Rather than writing an application from scratch, let's use Google's push example client app, which you will find here.
Use the 08-push-subscription-change version, which is the last part of Google's step-by-step tutorial. Put the VAPID public key in main.js, in the variable applicationServerPublicKey. The client-side application will use the public key to generate the subscription information and send it to the backend, where it will be used with the push service.

Putting it all together

Pull the whole code from the gist, install the necessary packages, and run the service via the following commands:

pip install -r requirements.txt

python api.py

Get the VAPID public key from the service with the following command:

curl -X GET <service-host>/subscription

It will return the public key as a key-value pair. Copy the public key and paste it into the frontend application as the value of applicationServerPublicKey in main.js.

Navigate the browser to the push lab application and click on “Enable Push Messaging”; a browser pop-up will appear like below:

Click on “Allow”, which gives the application permission to show web push notifications. The client app will then generate a PushSubscription object, which we need to send to our backend service so it can persist the information and use it to send push notifications.

Send the generated payload to the backend service via the following curl request:

curl -X POST <service-host>/subscription

The push will arrive at the top right of the screen, something like below:

Browser support

At the time of writing this blog post, only Chrome and Firefox support web push. Here you can find the latest list of supported browsers.


While developing the backend service on a Mac, openssl can throw an exception while sending out a push, because the cryptography library cannot find the appropriate version of openssl. It looks like below:

Symbol not found: _EC_curve_nist2nid
Referenced from: /usr/local/opt/openssl/lib/libssl.1.0.0.dylib
Expected in: /usr/lib/libcrypto.dylib in /usr/local/opt/openssl/lib/libssl.1.0.0.dylib

To fix the issue, export the openssl library path like below:

export DYLD_LIBRARY_PATH=/usr/local/opt/openssl/lib

I also faced an issue where the Python cryptography library could not find the right version of openssl and was installed against an inappropriate version.
To overcome that, I had to uninstall and reinstall it like below:

pip uninstall cryptography
LDFLAGS="-L/usr/local/opt/openssl/lib" pip install cryptography --no-use-wheel


References

  1. Google's post detailing how push works
  2. Firefox's post on the web push API

Building a microservice with Django and Docker

If you have never written a microservice before but know what a microservice is, this post will introduce you to one by writing a μ-service. It is a new “buzz” that has been floating around for the last couple of years. Read the details.

Microservice architecture definitely has many advantages over a monolithic application; on the other hand, whether it makes sense to go with a microservice architecture depends on several factors. If you want to read more about the microservice pattern and its pros and cons, please check this post for details, especially the “Pros” and “Cons” sections.

Let’s not get into the debate and start writing some code. In this post we will do the following:

  1. Build a REST API using Django (DRF)
  2. Dockerize the newly developed REST API and run it via uwsgi

Step 1: Building the REST API using Django:

We will use Django REST Framework (DRF). The API will expose data for an (imaginary) event management company that uses the API to manage its events and performers. For the sake of simplicity, our API will only support adding new performers and events, plus a listing endpoint that returns recent events and the associated performers' names.

So let's write some code:

Django REST Framework makes it easy to develop a REST API on top of Django: all one needs to do is define serializers and load query objects via Django models, and DRF takes care of the rest. As the API is minimal and we are doing CRUD, the serializers just need to extend serializers.ModelSerializer, and that's it. Finally, views.py looks like below:

from rest_framework import viewsets

from .models import Event, Person
from .serializers import EventSerializer, PersonSerializer


class EventViewSet(viewsets.ModelViewSet):
    queryset = Event.objects.all()
    serializer_class = EventSerializer


class PersonViewSet(viewsets.ModelViewSet):
    queryset = Person.objects.all()
    serializer_class = PersonSerializer

You can check out the codebase from here.

Step 2: Dockerize the μ-service:

Let's check out the Dockerfile for details:

FROM python:2.7
RUN git clone https://github.com/mushfiq/djmsc.git djmsc
WORKDIR djmsc
RUN pip install -r requirements.txt
RUN python manage.py migrate
RUN python manage.py loaddata data/dummped.json
CMD ["uwsgi", "--module=djmsc.wsgi:application", "--env=DJANGO_SETTINGS_MODULE=djmsc.settings", "--master", "--pidfile=/tmp/djmsc.pid", "--http=0.0.0.0:8000", "--buffer-size=32768"]


In the Dockerfile we clone the repo, set the working directory, and install dependencies; then we create the db through manage.py, load dummy data, and run uwsgi to serve the API.

Let’s build and run the Docker image like below:

docker-machine start default #starting docker virtual machine named default
docker build -t mush/djmsc . #building docker image from the Docker file
docker run -d -p 8000:8000 mush/djmsc #running the newly built docker image

Get the IP of the docker machine and then make a cURL request to check whether the REST API is up, like below:

api=$(docker-machine ip default) #returns in which IP docker-machine is running
curl $api:8000/person/?format=json | json_pp

And it returns a JSON response like below:


You can pull the docker image from here and start your own container 🙂

Good read: Building Microservices

Access key based authentication in DRF (Django REST Framework)

If you start developing a REST API, one of the fundamental requirements you will need to implement is an authentication system, which prevents anonymous users from accessing your REST endpoints.

For developing REST APIs, I used to start from scratch using Django/Flask; then I used Piston. When further development of Piston stopped, I started using Tastypie. Last year I was reading the DRF documentation and realised my next REST API would be built on top of DRF, and I have been using it ever since. The documentation is well organised and it has a growing community around it.

So back to the point: in DRF you can have an access-key-based authentication system quickly, without much configuration or code.

While authenticating a user via access key, the core idea is to check whether any user exists with the provided access_key, and then either return the user's data or raise an exception.

To begin, add a new file in your Django app called “authentication.py”. To write custom authentication in DRF, we subclass “BaseAuthentication” and override the “authenticate” method. authenticate takes the Django request object, from which we get the access key via request.GET.get(“access_key”, None). The whole subclass looks like below:

from rest_framework import authentication
from rest_framework import exceptions

from apps.newspaper.models import Subscriber


class AccessKeyAuthentication(authentication.BaseAuthentication):
    def authenticate(self, request):
        access_key = request.GET.get("access_key", None)
        if not access_key:
            raise exceptions.NotFound("Access key not provided.")
        try:
            user = Subscriber.objects.get(access_key=access_key)
        except Subscriber.DoesNotExist:
            raise exceptions.PermissionDenied("No user found with the access key")
        except ValueError:
            raise exceptions.ValidationError("Badly formed hexadecimal UUID string")
        return (user, None)


The next step is to add it to the REST_FRAMEWORK settings in the project settings (settings.py), like below:
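A minimal sketch of that configuration (assuming the apps.newspaper module path used in this post):

```python
# settings.py: register the custom class as a default authentication backend.
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": (
        "apps.newspaper.authentication.AccessKeyAuthentication",
    ),
}
```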


To use it on a view, we need to import it and apply it via the authentication_classes decorator, like below:

from rest_framework.decorators import api_view, authentication_classes

from apps.newspaper.authentication import AccessKeyAuthentication


@api_view(["GET"])
@authentication_classes((AccessKeyAuthentication, ))
def list_news(request):
    # your code goes here

And then call the endpoint like /news?access_key=ACCESS_KEY, and it will return our REST output.

In this tutorial the Subscriber model has a field called “access_key”; you can use any other model/field for the authentication check.

This is the way I usually apply authentication in DRF-based REST APIs at first; as the API grows, I add more sophisticated authentication. DRF also comes with token-based authentication, which is described briefly in the docs.

Further reading:
DRF Authentication Documentation


Enable CORS in Bottle (Python)

To access the data of a REST API from another domain, the API should have CORS enabled for that website. Like most frameworks, Bottle does not set the CORS header by default. To enable it, the following decorator can be used:

from bottle import Bottle, response


def allow_cors(func):
    """This is a decorator which enables CORS for the decorated endpoint."""
    def wrapper(*args, **kwargs):
        response.headers['Access-Control-Allow-Origin'] = 'example.com'  # use '*' to allow access from any website
        return func(*args, **kwargs)
    return wrapper


# example usage in an API endpoint
app = Bottle()


@app.route('/cakes/<cake_id>')  # example route
@allow_cors
def get_cakes_by_id(cake_id):
    cakes = []  # load cakes by ID here
    return {"cakes": cakes}


The header “Access-Control-Allow-Origin” will be added to the API response; per our example it will be Access-Control-Allow-Origin: example.com. To enable it for any website you can set it to “*”. There is an interesting discussion about whether to set it to * or not.
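Boiled down, the decorator's effect is a single response header; a tiny sketch of that decision (the helper name here is made up for illustration):

```python
# Return the CORS header the decorator would set for a given policy.
def cors_header(allowed_origin="example.com"):
    # pass "*" to allow any origin, with the trade-offs discussed above
    return {"Access-Control-Allow-Origin": allowed_origin}
```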

Python script to download a Google spreadsheet

I like to automate tasks; I think every software engineer likes that, right? After all, that's our job. I wrote the following script for downloading a Google spreadsheet as CSV. I came across it while going through my old code base; hopefully it will help someone else too.

To run the script you have to install the gdata Python module.

import os
import sys
from getpass import getpass

import gdata.docs.service
import gdata.spreadsheet.service


def get_gdoc_information():
    """Get user information from the command line and
    pass it to the download method."""
    email = raw_input('Email address:')
    password = getpass('Password:')
    gdoc_id = raw_input('Google Doc Id:')
    try:
        download(gdoc_id, email, password)
    except Exception, e:
        raise e


# python gdoc.py 1m5F5TXAQ1ayVbDmUCyzXbpMQSYrP429K1FZigfD3bvk#gid=0
def download(gdoc_id, email, password, download_path=None):
    print "Downloading the CSV file with id %s" % gdoc_id
    gd_client = gdata.docs.service.DocsService()
    # auth using ClientLogin
    gs_client = gdata.spreadsheet.service.SpreadsheetsService()
    gs_client.ClientLogin(email, password)
    # getting the key (resource id) and the tab id from the ID
    resource = gdoc_id.split('#')[0]
    tab = gdoc_id.split('#')[1].split('=')[1]
    resource_id = 'spreadsheet:' + resource
    if download_path is None:
        download_path = os.path.abspath(os.path.dirname(__file__))
    file_name = os.path.join(download_path, '%s.csv' % (gdoc_id))
    print 'Downloading spreadsheet to %s…' % file_name
    docs_token = gd_client.GetClientLoginToken()
    gd_client.SetClientLoginToken(gs_client.GetClientLoginToken())
    gd_client.Export(resource_id, file_name, gid=tab)
    gd_client.SetClientLoginToken(docs_token)
    print "Download Completed!"
    return file_name


if __name__ == '__main__':
    get_gdoc_information()



You have to run the script like below:

python gdoc.py spread_sheet_id#gid=tab_id

For example check the following screenshot:



After downloading, you will have the CSV file in the same directory. Currently the document id is used as the name of the CSV file; you can change that as you want.

Happy Coding 🙂

Python super and __init__ explained with examples

Python super :

The Python super keyword is sometimes confusing to newbies, and even to intermediate Python programmers.

But the idea behind super is really simple. In the OOP paradigm we often need to implement inheritance like below:


class A(object):

    def fancy_func(self):
        print 'Fancy Function Called from Class A'

class B(A):

    def fancy_func(self):
        return super(B, self).fancy_func()

If b is an object of class B and fancy_func is a method of B, super returns the base class's method. If we didn't use super, we would have to declare an object of class A and then call fancy_func on it. Instead, super returns a proxy object and uses __mro__ (method resolution order) to decide which class's method to call.

super can be used:

  • in single inheritance, to refer to the parent class without naming it;
  • in multiple inheritance, where it is very useful during dynamic execution.

In real-life coding, when we need to enhance a module's method we can easily use super to get things done, and we don't even need to know the details of the base class we are extending.

super is only applicable to Python new-style classes (classes derived from object, e.g. class A(object)).

For Python 3 the syntax is simpler:

super().method(args)

For Python 2, the syntax of calling super is:

super(SubClass, instance).method(args)
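A runnable Python 3 sketch of the same idea, showing both the call through the proxy object and the __mro__ it follows:

```python
class A:
    def fancy_func(self):
        return "A"

class B(A):
    def fancy_func(self):
        # Python 3 form; equivalent to super(B, self).fancy_func()
        return "B -> " + super().fancy_func()

result = B().fancy_func()
mro_names = [cls.__name__ for cls in B.__mro__]
```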

Python __init__ :

If you declare an __init__ in your Python class, it will run when you initialize an object of that class.

__init__ acts like a constructor in other languages, but strictly speaking it is not one. A basic difference between __init__ and other methods is that you can't return anything from it. You can add properties to the current object, like self.myProperty = 'TEST', and then use them in any other method by accessing self.myProperty.

Simply put, __init__ is used when we want to control the initialization of the class.
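A small sketch of the idea (the Account class here is made up for illustration):

```python
class Account:
    def __init__(self, owner, balance=0):
        # runs automatically when Account(...) is evaluated
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        # properties set in __init__ are visible in every other method
        self.balance += amount
        return self.balance

acct = Account("alice", 10)
acct.deposit(5)
```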

Let's build something real with these features:

import requests
from BeautifulSoup import BeautifulSoup


class crawlPyCentral(object):
    def __init__(self, url='http://pythoncentral.org/'):
        self.url = url

    def getSoup(self):
        response = requests.get(self.url)
        soup = BeautifulSoup(response.content)
        return soup

    def getTitles(self):
        soup = self.getSoup()
        uls = soup.findAll('ul', {'class': 'category-posts'})
        for ul in uls:
            lis = ul.findAll('a')
            for li in lis:
                yield li


class filteredCrawler(crawlPyCentral):
    def getTitles(self, keyword):
        for t in super(filteredCrawler, self).getTitles():
            if t.text.find(keyword) > 1:
                yield t.text


if __name__ == '__main__':
    f = filteredCrawler()
    for title in f.getTitles('1'):
        print title


In the above example we use both __init__ and super: __init__ sets the value of url while initializing the object, and super is used to call crawlPyCentral's getTitles.

To dig deeper into super, check this blog post.

Factory pattern in Python

We use design patterns to build reusable solutions. Building reusable solutions is hard, and design patterns help by giving us common solutions to recurring problems.

One of the important design patterns is the Factory Method pattern. In Python, an implementation of the factory pattern looks like below:

class Ladder(object):
    def __init__(self):
        self.height = 20


class Table(object):
    def __init__(self):
        self.legs = 4


my_factory = {
    "target1": Ladder,
    "target2": Table,
}

if __name__ == '__main__':
    print my_factory["target1"]().height

When to use the factory pattern?

There are a couple of cases where we can use the factory pattern; one of them is when we need to create objects that depend on other objects.

That is, when we are going to create complex objects built from other objects, the caller does not need to know the details of the objects the creation process relies on. An example looks like below:

class Train(object):
    def __init__(self):
        self.speed = 120


class Bus(object):
    def __init__(self):
        self.speed = 60


class Tram(object):
    def __init__(self):
        self.speed = 40


class System(object):
    def create(self, *args):
        return args


class TransportationSystem(object):
    def __init__(self):
        self.train = Train()
        self.bus = Bus()
        self.tram = Tram()

    def createTSSystem(self):
        s = System()
        t_system = s.create(self.train, self.tram, self.bus)
        for t in range(0, len(t_system)):
            print t_system[t].speed


if __name__ == '__main__':
    T = TransportationSystem()
    T.createTSSystem()

The ideal situation is when we notice we are writing code just to gather the information needed to create objects: factories collect object creation in a single place, and they also help to create a decoupled system.
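A Python 3 sketch of the same registry idea with a creation function and an error path (class and key names are made up for the example):

```python
# Products the factory knows how to build.
class Ladder:
    height = 20

class Table:
    legs = 4

# The registry maps a name to a class; object creation lives in one place.
REGISTRY = {"ladder": Ladder, "table": Table}

def create(kind):
    try:
        return REGISTRY[kind]()
    except KeyError:
        raise ValueError("unknown product: %s" % kind)
```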

If you have a better understanding or experience of using the factory pattern in your Python code, please share it in a comment.

Django: merging two QuerySets using itertools

I was working on a Django application where I needed to merge two querysets. After going through the Django ORM docs, I could not find anything helpful.

I was about to do it in an unpythonic way, iterating over the two querysets and appending each item to a new list, but just before doing that I thought it would be better to google it first. After a couple of minutes I found the answer: we can use Python's itertools to merge two or more querysets, like below:

from itertools import chain

cars = Cars.objects.all()
trucks = Truck.objects.all()
all_vehicles = list(chain(cars, trucks))

Python's itertools is an amazing module that contains really handy functions for working with iterators and performing different kinds of operations on them. If you have never used itertools before, you are missing one of the charms of Python.
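A standalone, runnable version of the same merge (plain lists stand in for the querysets here):

```python
from itertools import chain

cars = ["audi", "bmw"]
trucks = ["man", "scania"]

# chain is lazy: it walks each iterable in turn without copying them first
all_vehicles = list(chain(cars, trucks))
```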

Check the itertools.chain docs for details.

Happy Coding!

Painless deployment with Fabric

Deploying code to test/staging/production servers is one of the important parts of the modern web application development cycle.

Deploying code used to be painful because of the repetitive tasks we had to do every time we wanted to push code, and if something went wrong during deployment, the application would go down too. But the scenario has changed: we now have many tools that make deployment easier and fun. I have used Capistrano and Fabric for deployment; I found Fabric really painless, and as it is a Python battery, it was easy for me to adopt and get things done.

I am going to cover the fundamental operations and, finally, a simple Fabric script (like a boilerplate) you can use to write your own.

env – a Python dictionary-like subclass where we define specific settings like password, user, etc.

local – runs a command on the local host (where the fabric script is being run).

run – runs a command on a remote host.

You can use these tasks in many different ways; to learn how, check the official Fabric documentation here.

from fabric.api import local, run, env, put

env.graceful = False


def test_server():
    env.user = 'your_user_name'
    env.serverpath = '/'
    env.site_root = 'your_app_root'
    env.password = 'your_pass'  # ssh password for user
    # env.key_filename = ''  # specify server public key
    # list of hosts in env.hosts
    env.hosts = ['your.server.host']


# sample method for git pull
def pull(branch_name):
    env.site_root = 'your_project_path'
    run('cd %s && git pull origin %s' % (env.site_root, branch_name))


# deploy the current directory's code, excluding fabfile.py
def deploy():
    env.files = '*'
    env.site_name = 'your_app_name'
    env.site_path = 'your_application_path'
    run('rm -rf %s/%s' % (env.site_path, env.site_name))
    local('zip -r %s.zip -x=fabfile.py %s' % (env.site_name, env.files))
    put('%s.zip' % env.site_name, env.site_root)
    run('cd %s && unzip %s.zip -d %s && rm %s.zip' % (env.site_root, \
        env.site_name, env.site_name, env.site_name))
    local('rm -rf %s.zip' % env.site_name)


# restart apache on the remote host
def restart_apache():
    cmd = "/usr/local/apache2/bin/apachectl -k graceful" if (env.graceful is True) \
        else "service httpd restart"
    run(cmd)


def latest_access_log():
    cmd = "tail -n 10 /var/log/apache2/access.log"
    run(cmd)


def latest_error_log():
    cmd = "tail -n 10 /var/log/apache2/error.log"
    run(cmd)


sudo apt-get install python-pip
sudo pip install fabric


The first snippet is a sample Fabric script; the second one installs Fabric on your Ubuntu machine.

After setting the username, password, and host information in the script, you can check your server's access log by running fab test_server latest_access_log.

I have been using Fabric for around two years, for small, medium, and large projects.

There are many interesting open source projects built on top of Fabric; I found two of them really promising. Search through GitHub and you will find many advanced uses of Fabric.

Happy Coding!

Pythonic way to calculate Standard Deviation

If you are familiar with basic statistics, I think you know what standard deviation is; if you don't, you can check the wiki for details.

And if the idea still seems hard to wrap your brain around, check this thread; hopefully you understand it now. Standard deviation is efficient when you want to understand a set of data, and it is widely used in different industries. I was working on an algorithm a couple of months ago where I had to calculate the standard deviation of a series of data, and the sets of data were large.

After coding a couple of versions, I wrote a small Python class which calculates the standard deviation of the data. Check it out:

from __future__ import division
from math import sqrt, pow


class StandardDeviation(object):
    def do_round(self, data):
        data = "%.3f" % round(data, 3)
        return float(data)

    def do_diff(self, n, mean):
        return pow((n - mean), 2)

    def standDev(self, data_list):
        mean = sum(data_list) / len(data_list)
        result = sqrt(sum([self.do_diff(s, mean) for s in \
            data_list]) / len(data_list))
        return float(self.do_round(result))


if __name__ == '__main__':
    data_list = [2, 4, 4, 4, 5, 5, 7, 9]
    std = StandardDeviation()
    print std.standDev(data_list)
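As a sanity check, Python's standard library implements the same population formula in statistics.pstdev:

```python
import statistics

data_list = [2, 4, 4, 4, 5, 5, 7, 9]
# population standard deviation: mean of squared deviations, then square root
result = round(statistics.pstdev(data_list), 3)
```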

Happy Coding!