This post is a compilation of short notes from the Ultimate Rust Crash Course, available on Udemy. Taught by Nathan Stocks, the course is pretty exhaustive considering it’s only 3 hours of teaching material. I’d recommend it to anyone who wants to get started with Rust quickly.

Introduction and Quick Start

  • Rust ensures memory safety (unlike C/C++) while also providing concurrency and speed.
  • Rust’s package manager is cargo. It also serves as the build system, documentation generator and testing tool.
  • Variables are immutable by default.
  • Functions are defined with the fn keyword and don’t need to be declared in any particular order. Macros look like special functions whose names end with an ‘!’.


Staying in touch with your users is paramount, especially as a startup. But the complexities of setting up a scalable notification service that can handle increasingly large volumes of automated notifications delivered on multiple platforms can be overwhelming. Facing this exact situation while working at Skillbee, I created a viable solution.

This blog post is to ensure that you do not have to start from scratch. You can find the code, with a quick-start README here.


Requirements

Our system needed the following capabilities:

  • Handle communication over email, SMS, WhatsApp and Android notifications
  • Personalize notifications for the target user
  • Have in-built failure-tolerance…


This post builds upon the guide to developing projects in the MERN stack that I posted a while back; you can find it here.

Heroku probably has the best free-tier service out there for hosting MERN stack web applications, though it requires certain non-intuitive settings before you can deploy your app. This article will guide you through the process of deploying your web app on Heroku.

  • We will be deploying the server and the frontend on the same instance.
  • The database being used here is MongoDB Atlas.
  • This deployment uses npm instead of yarn.

Step 1 — App Directory Structure

Your app comprises…


Heroku is a popular application deployment platform with a functional free tier of services, and Flask is a popular application-development micro-framework in Python. The Heroku-Flask combination is one of the quickest ways to deploy a small application for testing, yet sadly, I did not find a single tutorial that covered all aspects of the deployment process without leaving room for a whole bunch of errors.

Hence, this guide.

Note that this is not a guide to Flask or Flask-based development; it is focused specifically on deploying your Flask application. There are excellent resources online for learning Flask development…
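For reference, this is roughly the kind of minimal Flask app the deployment assumes; the file name app.py and the gunicorn entry in the comment are conventions I’m assuming, not requirements from the guide:

# app.py: a minimal Flask app used as the deployment target (illustrative).
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Heroku!"

if __name__ == "__main__":
    # Local development only; on Heroku a WSGI server such as gunicorn
    # typically runs the app via a Procfile line like: web: gunicorn app:app
    app.run(host="0.0.0.0", port=5000)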


MLFlow is an open-source platform for managing your machine learning lifecycle. You can either run MLFlow locally on your system, or host an MLFlow Tracking server, which allows multiple people to log models and store them remotely in a model repository for quick deployment/reuse.

In this article, I’ll show you how to deploy MLFlow on a remote server using Docker, an S3-compatible object store of your choice (Minio or Ceph), and an SQL backend (SQLite or MySQL).

Setting up the Server

  • Log in to your remote server. It should have Docker installed; for installation, check the official Docker guide.
  • Create a new folder for your MLFlow…
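Once the tracking server is up, logging to it from a client takes only a few lines. A sketch, where the tracking URI, the Minio endpoint and credentials, and the logged values are all illustrative placeholders:

# log_run.py: sketch of logging a run to a remote MLFlow tracking server.
# The URIs, credentials and values below are placeholders.
import os
import mlflow

# If the artifact store is an S3-compatible backend such as Minio,
# the client needs the endpoint and credentials as environment variables.
os.environ["MLFLOW_S3_ENDPOINT_URL"] = "http://my-minio-host:9000"
os.environ["AWS_ACCESS_KEY_ID"] = "minio-access-key"
os.environ["AWS_SECRET_ACCESS_KEY"] = "minio-secret-key"

mlflow.set_tracking_uri("http://my-mlflow-host:5000")
mlflow.set_experiment("demo-experiment")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.93)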

Ever since its introduction in 1994, Stanford’s MOSS, or Measure of Software Similarity, has been an integral part of programming classes worldwide, used to detect plagiarism in coding assignments. It’s an effective measure and a good system, but it’s not perfect.

Stanford’s MOSS is beatable.

In this article, I aim to accumulate all known techniques for beating MOSS, summarize knowledge from various sources, and bring it all together in one place. I would like to clarify, though, that (most) assignments are an integral part of learning and that this article has been written solely to satiate academic curiosity.

Original Sample Code:

// Original…

Hamming’s last lecture was dense with bits of wisdom, and I agree in principle with most of the points he raised in his attempt to define both success and a path to success. There is, though, some contradiction in what he proposes, and some bias in his definition of success and an ‘ideal’ life; but he’s allowed the liberty of those biases owing to his age and immense experience at the frontiers of success.

Link to the talk: https://www.youtube.com/watch?v=a1zDuOPkMSw

My reflections on the points I take away from this lecture, in decreasing order of importance and relevance:

Doing Important Things

If what…


SQS stands for Simple Queue Service — an offering from AWS which is simple to use, highly scalable to build upon and reliable.

In this post, I’ll walk through setting up SQS for queueing and consumption from Node.js.

Create AWS Account

Create an account on AWS, sign in to the console and find SQS.

Create Queue

Click on the Create Queue button, choose a name – we’ll call our queue sample-queue – and continue with the default pre-filled configurations. Click on the Create Queue button at the bottom of the page. Done! …
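The post continues with the Node.js SDK; purely to illustrate the flow, here is the same send/receive/delete cycle sketched in Python with boto3, where the region is a placeholder and sample-queue is the queue created above:

# sqs_sketch.py: send/receive/delete against the sample-queue created above.
# Sketched with boto3 rather than the Node.js SDK the post uses;
# the region is an illustrative placeholder.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.get_queue_url(QueueName="sample-queue")["QueueUrl"]

# Producer: push a message onto the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody="hello from the producer")

# Consumer: long-poll for messages, process them, then delete them.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("received:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])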


Due to an anomaly on our frontend, over 500k objects had been uploaded to an AWS S3 bucket without the correct content type. Keeping in mind the immutable nature of S3 objects and the constraint of retaining the same link, here’s how I fixed this anomaly with a Python script built on top of boto3.

In a nutshell, here’s what this script does:

  • fetches all objects in a bucket
  • for each object:
      • fetches the metadata of the file
      • checks the intended content type of a file, based on file extension
      • updates the following in metadata:
          • sets access to ‘public-read’
          • sets ContentDisposition to ‘inline’
          • sets ContentType to the intended content type

The code, along with inline documentation, is as follows:
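A minimal sketch of that flow with boto3 (the bucket name and the extension-based content-type detection below are illustrative placeholders, not the original script):

# fix_content_types.py: sketch of rewriting S3 object metadata in place.
# S3 objects can't be mutated, so each object is copied over itself
# with replaced metadata; the bucket name is a placeholder.
import mimetypes
import boto3

BUCKET = "my-bucket"  # placeholder
s3 = boto3.client("s3")

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        # Guess the intended content type from the file extension.
        content_type, _ = mimetypes.guess_type(key)
        if content_type is None:
            continue
        head = s3.head_object(Bucket=BUCKET, Key=key)
        if head.get("ContentType") == content_type:
            continue  # already correct
        # Copy the object onto itself, replacing its metadata.
        s3.copy_object(
            Bucket=BUCKET,
            Key=key,
            CopySource={"Bucket": BUCKET, "Key": key},
            MetadataDirective="REPLACE",
            Metadata=head.get("Metadata", {}),
            ContentType=content_type,
            ContentDisposition="inline",
            ACL="public-read",
        )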

If you need any other help with this, feel free to reach out to me.


A guide to load balancing your TFServing Inference API over multiple GPUs.

This is a mini-project I worked on a few months ago but never got around to writing about. TFServing is used to serve TensorFlow models for inference. It provides a REST/gRPC API that can be used to send inference requests and get results from one’s ML model. You can read more about TFServing here:

TFServing can be run on either GPUs or CPUs — it has two different Docker images for these deployments — https://hub.docker.com/r/tensorflow/serving. One problem here is that TFServing doesn’t allow for parallelization out of…
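For context, a REST inference call to a single TFServing instance looks roughly like this (the host, the default REST port 8501, the model name and the input payload are placeholders):

# infer.py: sketch of a REST inference request to TFServing.
# Host, model name and the input payload are illustrative placeholders.
import requests

# TFServing's REST API exposes models at /v1/models/<name>:predict
url = "http://localhost:8501/v1/models/my_model:predict"
payload = {"instances": [[1.0, 2.0, 5.0]]}  # shape must match the model's input

resp = requests.post(url, json=payload)
resp.raise_for_status()
print(resp.json()["predictions"])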

Vivek Kaushal

Hacker | Senior Engineer @ Samsung vivekkaushal.com
