Konpat's Record of Struggles


Jupyter matplotlib auto inline

You might need to use this magic:

%matplotlib inline

every time you want to force inline output for a matplotlib figure, which in practice is most of the time.

There is a way to make this "automatic" by editing the config file .jupyter/jupyter_notebook_config.py and adding:

c.InteractiveShellApp.matplotlib = "inline"  

Then just reload the notebook in your browser; figures should render inline automatically, without the explicit magic anymore.
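For reference, the relevant part of the config file would look something like this (if the file does not exist yet, running `jupyter notebook --generate-config` creates a template; `get_config()` is provided by Jupyter when it loads the file, so this fragment is not standalone-runnable):

```python
# .jupyter/jupyter_notebook_config.py
c = get_config()  # injected by Jupyter when the config file is loaded
c.InteractiveShellApp.matplotlib = "inline"
```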

Read more »

Tensorflow: Connecting Two Graphs Together using "import_graph_def"

Read more »

Jupyter Notebook Exports For Blogging

Jupyter Notebook can export to HTML, but the result comes with a lot of extra markup that does not play well with blogs.

Here is how I tried to remove all of that while still keeping the notebook's styles.


  • export with the basic template, which gives only a minimal HTML output
  • manually capture the CSS styles from the richer, full export
  • combine them

Caution: There will be some side effects to the styles of the blog!

Export with basic template

jupyter nbconvert --to html --template basic <path>  

If you want to get it in stdout, you can add --stdout to the command.

Capture the CSS styles

Here are the CSS styles I took from a normally exported HTML file: https://gist.github.com/phizaz/8f05ede652b8fa58b87e108a65a772a5

Note: If you want to use the script below, save the CSS file as head.html in the same directory as the script.

Combine them!

You can either do it by hand, or create your own script to do this.

In fact I have written a small Python script for this:

Filename blog-convert:

#! /usr/bin/env python

import subprocess
import os
import pyperclip
import argparse

def shell(cmd):
    return subprocess.check_output(cmd, shell=True)

def nbconvert(path):
    cmd = 'jupyter nbconvert --stdout --to html --template basic {}'.format(path)
    return shell(cmd)

def get_header():
    # head.html must live next to this script (see the note above)
    this = os.path.dirname(os.path.realpath(__file__))
    with open(os.path.join(this, 'head.html')) as file:
        content = file.read()
    return content

def clipboard(content):
    pyperclip.copy(content)
    print('clipboard for {} bytes'.format(len(content.encode('utf-8'))))

def main():
    parser = argparse.ArgumentParser(description="convert ipynb to html for blogging")
    parser.add_argument('path', type=str, help="path to your ipynb file")
    args = parser.parse_args()

    header = get_header()
    body = nbconvert(args.path)
    clipboard(header + body.decode('utf-8'))

if __name__ == '__main__':
    main()
Read more »

Managing Tensorflow GPU Memory Usage

Read more »

You should handle timeout in all the IO callbacks in NodeJS

At first, you might hope to just call the function, let it tell you when the job is done, and wait ... and wait. Soon, you realize that the callback you have been waiting for will never come.

This is true, and I can pretty much guarantee it. I have been working on a project involving heavy IO operations. Even though I tried to fine-tune the speed at which Node reads, it was not sufficient; the reads soon slow down and stop completely, for a cause I have yet to find.

So, I decided to add a timeout, which is not provided by any default means. Here is my attempt:

export async function delay(ms: number) {
    return new Promise<void>((res, rej) => {
        setTimeout(() => res(), ms)
    })
}

class Timeout extends Error {}

export async function delayedError(timeout: number) {
    await delay(timeout)
    throw new Timeout()
}

export async function withTimeout<T>(promise: Promise<T>, timeout: number) {
    return await Promise.race<T>([
        promise,
        delayedError(timeout) as any,
    ])
}
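As a side note (not from the original post), the same race-against-a-timer pattern exists in Python's asyncio, where `asyncio.wait_for` plays the role of `Promise.race` plus `delayedError` in one call; a minimal sketch:

```python
import asyncio

async def with_timeout(coro, timeout: float):
    # asyncio.wait_for cancels the coroutine and raises
    # asyncio.TimeoutError when the deadline passes
    return await asyncio.wait_for(coro, timeout)

async def slow_read():
    # stands in for an IO callback that never (or too slowly) fires
    await asyncio.sleep(1.0)
    return 'done'

async def demo():
    try:
        return await with_timeout(slow_read(), 0.05)
    except asyncio.TimeoutError:
        return 'timed out'

print(asyncio.run(demo()))  # prints "timed out"
```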

Now let's wrap fs.readFile with a promise so that it can be used with withTimeout:

import fs = require('fs')

export async function readFile(path: string) {
    return new Promise<Buffer>((res, rej) => {
        fs.readFile(path, (err, content) => {
            if (err) rej(err)
            else res(content)
        })
    })
}

To use it, now you can:

try {
    await withTimeout(readFile('test.txt'), 1000)
} catch (err) {
    if (err instanceof Timeout) {
        // handle the timeout, e.g. retry the read
    } else {
        throw err
    }
}
Read more »

NodeJS to check whether the file is being written by another process

At first, I thought this should have a simple, graceful solution, but it turns out there is none.

I will try to convince you with the following scenario.


I have a Python writer (to make sure the file is written by another process rather than Node itself); here is the code:

import time
from functools import partial

def copy(a, b):
    with open(a, 'rb') as r:
        with open(b, 'wb', 0) as w:
            for chunk in iter(partial(r.read, 4 * 1024), b''):
                w.write(chunk)
                time.sleep(0.1)  # slow the writer down to emphasize the problem
Basically, it writes the file by copying it to the destination chunk by chunk, 4 KB at a time.

I have added a sleep in the code to make the write slower and to emphasize the problem.
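As a quick sanity check that such a chunked copier behaves as described, here is a self-contained version; the pause length, chunk size parameters, and temp paths are my own choices for this demo, not from the original post:

```python
import os
import tempfile
import time
from functools import partial

def copy(a, b, chunk=4 * 1024, pause=0.001):
    # copy a -> b in fixed-size chunks, pausing between chunks
    # to imitate a slow writer (pause shortened for this demo)
    with open(a, 'rb') as r:
        with open(b, 'wb', 0) as w:
            for piece in iter(partial(r.read, chunk), b''):
                w.write(piece)
                time.sleep(pause)

# exercise it on a ~10 KB file (three chunks at 4 KB)
src = tempfile.NamedTemporaryFile(delete=False, suffix='.bin')
src.write(b'x' * (10 * 1024))
src.close()
dst = src.name + '.copy'
copy(src.name, dst)
with open(dst, 'rb') as f:
    data = f.read()
assert data == b'x' * (10 * 1024)
os.unlink(src.name)
os.unlink(dst)
print('copied', len(data), 'bytes')  # prints "copied 10240 bytes"
```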


Now, we expect that a reader must be able to tell whether the file is being written by the code above.

Things that don't work

  1. fs.access(path, fs.constants.R_OK | fs.constants.W_OK | fs.constants.X_OK, callback) (note: the flags must be combined with bitwise OR, not AND)
  2. fs.open(path, 'r+', callback)
  3. fs.open(path, 'a+', callback)
  4. lockfile.lock(path, {}, callback), using the lockfile package (npm install lockfile). It can never acquire the lock, no matter how long it waits.
  5. lockfile.lock(path, callback), using the proper-lockfile package (npm install proper-lockfile). It always acquires the lock, even while the file is still being written.

Things that do work

  1. Check the file size difference:

    const aSize = await fileSize(path)
    await delay(100)
    const bSize = await fileSize(path)
    return aSize !== bSize

    Note: this is not reliable, because you cannot guarantee that the delay you chose is long enough for any given writer.

  2. Check whether the file can be moved:

    const tmp = path + '-tmp'
    try {
        await rename(path, tmp)
        await rename(tmp, path)
        return false
    } catch (err) {
        return true
    }
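For comparison, the rename trick can be sketched in Python as well (this helper is my own, not from the post). Note that it relies on the OS refusing to move a file that another process holds open: that is the behavior on Windows, while on Linux a rename typically succeeds even mid-write, so this check is platform-dependent:

```python
import os

def is_being_written(path: str) -> bool:
    # try to move the file away and immediately back; if another
    # process holds it open (on Windows), the rename raises OSError
    tmp = path + '-tmp'
    try:
        os.rename(path, tmp)
        os.rename(tmp, path)
        return False
    except OSError:
        return True

# a closed, idle file should report False
with open('idle.txt', 'w') as f:  # hypothetical test file
    f.write('hello')
print(is_being_written('idle.txt'))  # prints "False"
os.remove('idle.txt')
```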


For the work I was doing, the try-moving-the-file approach wins because it

Read more »