Devstream | Journal – Initial designs

Ditched Sketch for Adobe XD; even though they are pretty similar, XD seemed faster and more intuitive. Faced occasional crashes though. Note to self – save your documents more often.


Actually, I think doing the landing page early in the design process can help in getting more clarity.


Designed a decent logo

Onboarding screen

Pretty much a rip off of the landing page sections

Connect all streams



Profile and Interests.

If by any remote chance this seems intriguing, let me know what you think.

Devstream – Journal

Bored with the current gig, and the itch to start something where I can contribute significantly. Partially because it's been 3 years as a developer and I haven't created something on my own, which just seems wrong.


I love reading new things about tech, but following any number of blogs is tedious.
What do I read?

  • Hackernews
  • Richer blogs – Ars Technica, OMG! Ubuntu!, High Scalability
    • More specific ones, like engineering blogs from tech companies
  • Devrant
  • Stackoverflow
  • Github/Gitlab/(pick your scm site)

Yet another news feed aggregator. Fuck No!!

What can be done here?

  • Filtering through the noise – awesome content often gets missed.
  • Notifications for events on these sites are often very obscure or missing entirely. Maybe overdo it and add channels like PagerDuty?
  • Recognize interests in general and deliver relevant content from all sources.
    Often content discovery happens by following people/tags; this setup is tedious on each of the sources and often not in sync.

Okay, so what is the aim here (if there is one at all)?

A single place to get relevant content as a developer, notified of all important stuff and follow topics at one place.

I've been getting into design a lot lately, so why not design this first.

Hope this doesn’t become another ghost-town project.


Tips for writing Dockerfile

Started using Docker yet? Here are some tips on writing a Dockerfile for your application.
While they may be obvious to Docker experts, these tips might help you avoid common issues.

1. Minimize the number of layers.

FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install node
RUN apt-get install npm
RUN apt-get install curl

apt-get is probably the most used command across Dockerfiles. While the above Dockerfile looks fine, it has a couple of issues.

  1. apt-get update and the apt-get installs are in separate RUN statements, which leads to the apt-get update layer being cached – later builds may install from stale package lists. Read more on the Docker build cache.
  2. Each RUN statement creates a layer in the Docker image; this leads to a bulkier image, so try clubbing RUN commands together logically.

A better build would start like –

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y \
        curl nodejs npm \
    && rm -rf /var/lib/apt/lists/*

2. Use .dockerignore
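A .dockerignore file sits next to the Dockerfile and lists paths that should be excluded from the build context, keeping builds fast and images lean. A minimal sketch (the entries are just common examples, not a universal recipe):

```
# .dockerignore – keep these out of the build context
.git
node_modules
*.log
```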

Continue reading

Ngrok – Publish localhost securely

Ngrok allows users to open a secure tunnel to their own box. This allows for sharing an application without the hassle of deploying it or sharing local network IPs.

Ngrok acts as a tunnelling reverse proxy sitting between your localhost and the internet, and generates HTTP endpoints like https://<random-id>.ngrok.io.

To get started, visit the ngrok website and grab the binary.

For a quick taste of ngrok use the gist below. Make sure to use linux or mac.

curl | bash -s linux

The snippet creates a Hello World Flask application and initializes an ngrok tunnel. Once ngrok connects, you can open the endpoint to access the application.

Ngrok handles a bunch of other stuff like

  • Authentication – sign up and save the auth token with ngrok authtoken {token}
  • Protocols – out of the box, ngrok allows HTTP, TLS and TCP based tunnels.
  • Custom domains – custom domains can be reserved on ngrok by upgrading from the free plan.
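Once installed, a tunnel is started from the command line. As a sketch (the port numbers here are just placeholders):

```
# expose a local web app listening on port 5000 over HTTP(S)
ngrok http 5000

# expose a raw TCP service, e.g. SSH
ngrok tcp 22
```

ngrok prints the public endpoint to the terminal once the tunnel is up.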

Ngrok also provides a neat dashboard to monitor account tunnels.

I found ngrok pretty useful for getting feedback on early-stage projects, debugging mobile apps by routing traffic to a local setup, and even connecting to my machine at home through SSH.

Blue Ocean – Create Pipelines easily

Blue Ocean is a plugin on top of Jenkins which makes creating and maintaining pipelines fun.

Getting Started


If you want to start playing with it, the easiest way is through Docker:
docker run -p 8080:8080 jenkinsci/blueocean
This will pull the image from the Docker registry and run Blue Ocean with Jenkins locally.

Manual Installation

If you directly want to set it up on a VM or locally, jenkins doc should be a good starting point.

Defining a pipeline

Blue Ocean's preferred source of pipeline configuration is the Jenkinsfile, which is built on the idea of pipeline as code. A Jenkinsfile describes the steps and execution details of the tasks in the pipeline.

Creating a pipeline involves selecting the repository; Blue Ocean supports Git out of the box.

Task definitions can either be created with the visual editor or written manually. For a sample Jenkinsfile, I will use the configuration below.

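A minimal declarative Jenkinsfile looks something like this (the stage names and shell steps are placeholders, not a specific project's configuration):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
    }
}
```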

Build your project

Builds can be triggered manually or through webhook integrations.

I hope this post gives you the basic idea of what Blue Ocean is and whether it might be worth trying.

Making of a container: Cgroups and Namespaces

Let me clear out one thing – containers are not a thing in themselves. VMs are a thing, and FreeBSD Jails and Solaris Containers are earlier primitives; Linux containers are almost a clever trickery over Linux kernel features.

Most of the container management tools out there, including Docker, are built from the Linux kernel primitives cgroups and namespaces (yes, they have a lot of tooling and patches that make the environment more consistent and stable).

Cgroups and namespaces
Cgroups and namespaces applied to process groups allow a container to have an isolated and accounted environment.


Namespaces provide the necessary isolation of subsystems, allowing processes to run in their own bubble.
Some of the namespaces are listed below.

  • pid – processes see only the processes inside the group
  • net – namespace for the network: everything from iptables to routing rules
  • uts – namespaces the hostname and domain name
  • ipc – isolates System V IPC and POSIX message queues
  • mnt – gives the group its own view of mount points
  • user – maps user and group IDs inside the group

A convenient utility to run a process in a new namespace is unshare(1):
unshare -p -f /bin/bash


Control groups allow for accounting and throttling of subsystems like IO, memory and CPU.

  • memory cgroup – limits and accounts memory usage
  • cpu cgroup – shares of CPU time
  • cpuset cgroup – pins processes to specific CPUs
  • blkio cgroup – throttles block IO
  • net_cls/net_prio cgroups – classify and prioritise network traffic
  • devices cgroup – controls access to devices

Control groups have a file-based API and can be accessed through /sys/fs/cgroup/, though it is advised to use a higher-level abstraction rather than writing to the files directly.
# tree -L 1 -d /sys/fs/cgroup/

├── blkio
├── cpu -> cpu,cpuacct
├── cpuacct -> cpu,cpuacct
├── cpu,cpuacct
├── cpuset
├── devices
├── freezer
├── hugetlb
├── memory
├── net_cls -> net_cls,net_prio
├── net_cls,net_prio
├── net_prio -> net_cls,net_prio
├── perf_event
├── pids
└── systemd

AWS Cost Optimizations

Think about instance purchasing options

Spot Instances

Spot instances can be provisioned at 10-20% of the on-demand price. Spot instances can be used for batch tasks, media conversion, or stateless jobs in general.

Reserved  Instances

If machines are running at good utilisation for a long time, consider reserving instances. They can give you around 25-40% savings depending on the payment option and tenure.

Note – instances can be reserved by region only, so make sure you decide on the right region for you.

Use only what you need

Stopping servers which need not be up the whole time can lead to large savings – dev servers during off-hours, or autoscaling for tasks which have a very variable usage pattern.

Simple Python scripts and good tagging on instances can make the task of stopping instances and bringing them back up very easy.
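A hedged sketch with the AWS CLI (the tag key and value are assumptions – adapt them to your own tagging scheme):

```
# list running instances tagged as dev
aws ec2 describe-instances \
  --filters "Name=tag:Environment,Values=dev" \
            "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" --output text

# stop them during off-hours, e.g. from a cron job
aws ec2 stop-instances --instance-ids <instance-ids-from-above>
```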

Take Care of Resource Leakages

These are typically small savings, but it helps to keep on top of them so they don't add up.

  • Unused Elastic IPs
  • Unused EBS volumes
  • Redundant snapshots
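These can be spotted with a couple of AWS CLI calls; a sketch:

```
# Elastic IPs that are not associated with anything
aws ec2 describe-addresses \
  --query 'Addresses[?AssociationId==`null`].PublicIp'

# EBS volumes that are not attached to any instance
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[].VolumeId'
```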

I made a simple script to find these out here.


Checks on AWS Console

Trusted Advisor gives recommendations about your resources on AWS. These include cost-saving tips, performance improvements and security checks. You can also register alarms in the Budgets console.


Building an Elixir CLI application

Elixir is a dynamic, functional programming language which runs on top of the Erlang VM. Running on BEAM (the VM), it comes with all the goodies like low latency and ease of building fault-tolerant distributed systems.

Elixir comes with excellent tooling, and getting started with a new project is a breeze with Mix. Mix is a build tool that ships with Elixir and provides tasks for creating, compiling and testing your application, managing its dependencies, and much more.
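For example, spinning up a new project skeleton and running its generated tests takes a couple of commands (the project name devstream_cli is just an example):

```
mix new devstream_cli
cd devstream_cli
mix test
```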

Mix creates a new project with an appropriate README, a .gitignore, the lib directory (all your source code lives here) and a test directory.

Continue reading

What's new in MySQL 8.0

MySQL 8.0 was released a couple of months back; if you haven't heard the news, here is what it brings to the world.

MySQL 8.0 status – at the time of writing, MySQL 8.0 is a development series and not recommended for production use. For the latest updates, see the MySQL site.

Global Data Dictionary

Instead of maintaining the FRM, TRG and similar files, MySQL will use a global data dictionary to store table metadata. The dictionary is cached in memory as data objects. There is a pragmatic difference between system tables and the data dictionary, as the MySQL blog says:

Data dictionary contains meta data needed to execute SQL queries while system tables contain auxiliary meta-data like timezone and help information.


Roles

Instead of remembering which user had the INSERT or UPDATE privilege, MySQL now gives you the ability to create roles. Roles are basically a group of privileges.
So you might want to create an analytics role and assign it to all users who only need read access. That should certainly help the DevOps and DBA folks.
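A sketch of the new syntax (the database, role and user names are made up):

```sql
CREATE ROLE 'analytics';
GRANT SELECT ON reports.* TO 'analytics';

CREATE USER 'alice'@'%' IDENTIFIED BY 'secret';
GRANT 'analytics' TO 'alice'@'%';
SET DEFAULT ROLE 'analytics' TO 'alice'@'%';
```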

Index Toggling

MySQL now allows making an index invisible. Invisibility means the MySQL query optimizer won't consider that index during query plan optimization. The primary use case is to make an index invisible, monitor query performance, and then proceed as needed – toggling an index costs almost nothing.
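A sketch (the table and index names are hypothetical):

```sql
-- hide the index from the optimizer without dropping it
ALTER TABLE orders ALTER INDEX idx_created_at INVISIBLE;

-- bring it back if query performance suffers
ALTER TABLE orders ALTER INDEX idx_created_at VISIBLE;
```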

Better UUID support

For long it has been considered a bad idea to use a UUID as a primary key, and for good reasons. MySQL now adds the UUID_TO_BIN and BIN_TO_UUID functions, which solve two problems:

  • Increased space efficiency versus VARCHAR(36)
  • The problem of InnoDB storing data in primary-key order (by swapping the high time-variant part to the start)
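A sketch of the two functions together (the table is hypothetical; the second argument 1 enables the time-part swap):

```sql
CREATE TABLE t (id BINARY(16) PRIMARY KEY);

-- store a UUID compactly, with the time-high part swapped to the front
INSERT INTO t VALUES (UUID_TO_BIN(UUID(), 1));

-- read it back in the familiar text form
SELECT BIN_TO_UUID(id, 1) FROM t;
```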

For more detailed changes, refer to the MySQL blog or the What's new in MySQL 8.0 page.

This is the first blog in the HHYN(Have you heard the news) series.

Feedback is always precious, so please like, reblog or comment below.