A 12-post collection

Ethereum CUDA Mining with nvidia-docker

Somebody has done quite a good job:

But if you're using Nvidia's Pascal cards, you're better off using CUDA 8.0 rather than the 7.5 that the above Dockerfile specifies.

So the new Dockerfile should begin with:

FROM nvidia/cuda:8.0-devel-ubuntu14.04  

But if you rush off to build it now, then, like me, you will hit some errors during cmake. Here is one of them:

Building NVCC (Device) object libethash-cuda/CMakeFiles/ethash-cuda.dir/  

Fortunately, others have run into the very same problem; here is the thread:

And here is the response from Genoil, the author of the CUDA fork of cpp-ethereum:

@Cubirez yes ran into that myself too. This was caused by somebody who tried to make copatibility with Fedora work. I'll fix it in the rep later, but for now, add --std=c++11 to the NVCC flags in CMakelists.txt in libethash-cuda folder

So you actually have to make a small change to the cloned source.

Here I will rearrange the Dockerfile a bit so that we can make this change:

git clone  
cd cpp-ethereum/libethash-cuda  
vim CMakeLists.txt  

Change from

set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};--disable-warnings;--ptxas-options=-v;-use_fast_math;-lineinfo)  

To the following, by adding --std=c++11;:

set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};--std=c++11;--disable-warnings;--ptxas-options=-v;-use_fast_math;-lineinfo)  
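
If you'd rather not edit the file by hand, the same change can be scripted, which is handy if you want to apply it inside the Dockerfile itself. This sed one-liner is my own addition, not from Genoil's repo; run it from the root of the cpp-ethereum checkout:

```shell
# Insert --std=c++11 in front of the existing NVCC flags.
# Guarded so it is a no-op when run outside the checkout.
if [ -f libethash-cuda/CMakeLists.txt ]; then
    sed -i 's/--disable-warnings/--std=c++11;--disable-warnings/' \
        libethash-cuda/CMakeLists.txt
fi
```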

Now, in the Dockerfile, instead of cloning we will copy:

COPY cpp-ethereum /cpp-ethereum  
RUN cd cpp-ethereum \  
    && mkdir build \
    && cd build \
    && cmake -DBUNDLE=cudaminer -DCOMPUTE=61 .. \
    && make -j8 \
    && mkdir /data
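
Putting it all together, the whole Dockerfile might look roughly like this. The apt package list is my guess at typical cpp-ethereum build dependencies, not taken from the original post, so adjust it to whatever cmake complains about:

```dockerfile
FROM nvidia/cuda:8.0-devel-ubuntu14.04

# Build dependencies -- an assumption, not from the original post.
RUN apt-get update && apt-get install -y \
    build-essential cmake git libboost-all-dev libcurl4-openssl-dev

# Copy the checkout with the patched libethash-cuda/CMakeLists.txt.
COPY cpp-ethereum /cpp-ethereum

RUN cd /cpp-ethereum \
    && mkdir build \
    && cd build \
    && cmake -DBUNDLE=cudaminer -DCOMPUTE=61 .. \
    && make -j8 \
    && mkdir /data
```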

Note: I added the -DCOMPUTE=61 flag because

Read more »

Ubuntu Can't Verify SSL Certs

I ran into something like this while using wget to fetch a file over HTTPS:

Unable to locally verify the issuer's authority.

What it's saying is that the system's set of trusted CA certificates doesn't cover the one the server is using.

Fortunately, it can be solved with:

apt-get install ca-certificates  

And here is the description of the package:

PEM files of CA certificates to allow SSL-based applications to check for the authenticity of SSL connections.
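
On Debian/Ubuntu that package installs individual PEM files under /etc/ssl/certs and merges them into a single bundle that wget consults. A quick way to see what you now trust (the path below is the stock Debian location, an assumption on my part):

```shell
# The merged CA bundle assembled by update-ca-certificates:
bundle=/etc/ssl/certs/ca-certificates.crt
if [ -r "$bundle" ]; then
    # Count the trusted CA certificates in the bundle.
    grep -c 'BEGIN CERTIFICATE' "$bundle"
else
    echo "ca-certificates is not installed yet"
fi
```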

Read more »

Setup A Deep Learning Engine With Ubuntu 16.04 + Nvidia Pascal + Nvidia Docker

I have seen a lot of instructions on building and setting up a deep learning machine, but comparatively few that use nvidia-docker.

Using nvidia-docker is a good thing: it greatly simplifies the complex part of installing a large pile of dependencies before you can start developing and running your deep learning algorithms.

With nvidia-docker you practically just type nvidia-docker run ... and it works, with the same kind of magic you are used to from docker.

One question arises, however: if I were to use nvidia-docker, what exactly would I have to install on my computer?

It seems obvious looking back, but it wasn't so obvious at the time I was figuring it out.

Obviously, the first thing you can't miss is docker itself, simply because nvidia-docker is built upon it.

So if you don't have it on your machine already, here is a good place to start.

Second, you need a working driver for your Nvidia Pascal graphics card, meaning a GTX 1060, 1070, 1080 or Titan. You will need Nvidia graphics driver 367.44 or later (the first version of the baseline driver to support Pascal), or if you want to test more recent drivers you can go to the 370 line, which is less stable.

# if you don't have the 'add-apt-repository'
sudo apt-get install software-properties-common  
# begin !!
sudo add-apt-repository ppa:graphics-drivers/ppa  
sudo apt update  
# list all your choices
sudo apt search nvidia  
# install driver 367.44 (at the time of writing)
sudo apt install nvidia-367  
# install driver 370.23 (at the time of writing)
sudo apt install nvidia-370  

It's important to mention that the CUDA driver and CUDA toolkit are packed inside the nvidia/cuda docker image, so we do not need to install

Read more »

RemoteDocker: A fresh approach to running on a remote host without hassle

I have a fast computer at home which I don't use as often as my laptop; I bought it specifically for computation-intensive tasks. But running an arbitrary script on a remote host has always been a hassle, never smooth.

Now, this is an attempt to acquire the smoothness we all deserve!

I will just copy and paste the readme here (it also provides an example, so you will get a better idea of how this can improve your daily life):


Run a docker command, tracking progress, sync results and manage, all of these in one simple cli.


  1. A Unix-based OS (I suspect that some portions of the code are not OS-independent)
  2. rsync, which should be ubiquitous on that kind of OS.
  3. Python 3; I just didn't test on Python 2, and even if it works it won't be without a glitch.
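
The prerequisites above are easy to check from a terminal before installing; this little check script is my own, not part of the readme:

```shell
# Verify the prerequisites for remote-docker are present on this machine.
command -v rsync   >/dev/null && echo "rsync: ok"   || echo "rsync: missing"
command -v python3 >/dev/null && echo "python3: ok" || echo "python3: missing"
```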

If you're qualified ...

pip install remote-docker  


It's easier to explain with a realistic use case. Let's say we have arranged our (Python) project as follows:

- src
    - lib
        - ...
- Dockerfile
  1. Declare the running environment in a Dockerfile (in the same directory in which the cli will be run; basically, the same as your source directory).

    e.g. Dockerfile -> FROM python:3

  2. Run it using the run command, in the form rdocker run --tag=<jobname> --host=<user@host> --path=<host_path> <command> <args...>. In this case, we will use rdocker run --tag=test --host=<user@host> --path=/tmp/myproject python -u -m src. What it really does is:

    1. Sync (using rsync) the source code to the remote host; in this case, the whole directory of project_root will be copied to
Read more »