This post is a compilation of short notes from the Ultimate Rust Crash Course available on Udemy. Taught by Nathan Stocks, the course is remarkably comprehensive considering that it’s only 3 hours of teaching material. I’d recommend it to anyone who wants to get started with Rust quickly.
Staying in touch with your users is paramount, especially as a startup. But the complexities of setting up a scalable notification service that can handle increasingly large volumes of automated notifications delivered on multiple platforms can be overwhelming. Facing this exact situation while working at Skillbee, I created a viable solution.
This blog post ensures that you do not have to start from scratch. You can find the code, along with a quick-start README, here.
Our system needed the following capabilities:
This post builds upon the guide to developing projects in the MERN stack that I posted a while back; you can find it here.
Heroku probably has the best free tier out there for hosting MERN stack web applications, though it requires certain non-intuitive settings before you can deploy your app. This article will guide you through the process of deploying your web app on Heroku.
Your app comprises…
Heroku is a popular application deployment platform with a functional free tier of services, and Flask is a popular micro-framework for application development in Python. The Heroku-Flask combination is one of the quickest ways to deploy a small application for testing, yet sadly, I did not find a single tutorial that covered all aspects of the deployment process without leaving room for a whole bunch of errors.
Hence, this guide.
Note that this is not a guide to Flask or Flask-based development; it is focused specifically on deployment of your Flask application. There are excellent resources online for learning Flask development…
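To ground the rest of the guide, here is a minimal sketch of the kind of application being deployed: a single-file Flask app run by gunicorn on Heroku. The file name, route, and Procfile line are my own illustrative choices, not anything Heroku prescribes.

```python
# app.py -- minimal Flask app of the kind this guide deploys.
# Assumed project layout (illustrative):
#   Procfile          -> web: gunicorn app:app
#   requirements.txt  -> flask, gunicorn
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Heroku!"

if __name__ == "__main__":
    # Local development only; on Heroku the dyno runs the app via gunicorn.
    app.run(host="0.0.0.0", port=5000, debug=True)
```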
MLFlow is an open-source platform for managing your machine learning lifecycle. You can either run MLFlow locally on your system or host an MLFlow Tracking server, which allows multiple people to log models and store them remotely in a model repository for quick deployment and reuse.
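As a rough illustration of what logging to a remote Tracking server looks like, here is a minimal sketch; the server URL, experiment name, and logged values are placeholders, not taken from the post.

```python
# Minimal sketch of logging a run to a remote MLflow Tracking server.
# The tracking URI and experiment name are placeholders.
import mlflow

mlflow.set_tracking_uri("http://my-mlflow-server:5000")  # hypothetical server
mlflow.set_experiment("demo-experiment")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)  # a hyperparameter of the run
    mlflow.log_metric("val_accuracy", 0.93)  # an evaluation result
    # A trained model can also be logged for teammates to reuse, e.g.:
    # mlflow.sklearn.log_model(model, "model")
```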
Ever since its introduction in 1994, Stanford’s MOSS, or Measure of Software Similarity, has been an integral part of programming classes worldwide, used to detect plagiarism in coding assignments. It’s an effective measure and a good system, but it’s not perfect.
Stanford’s MOSS is beatable.
In this article, I aim to accumulate all known techniques for beating MOSS, summarize knowledge from various sources, and bring it all together in one place. I would like to clarify, though, that (most) assignments are an integral part of learning and this article has been written solely to satiate academic curiosity.
Original Sample Code:
Hamming’s last lecture was dense with bits of wisdom, and I agree in principle with most of the points he raised in his attempt to define both success and a path to success. There is, though, some contradiction in what he proposes and some bias in his definitions of success and an ‘ideal’ life, but he’s allowed the liberty of those biases owing to his age and immense experience at the frontiers of success.
Link to the talk: https://www.youtube.com/watch?v=a1zDuOPkMSw
My reflections on the points I take away from this lecture, in decreasing order of importance and relevance:
SQS stands for Simple Queue Service — an offering from AWS that is simple to use, reliable, and highly scalable to build upon.
In this post, I’ll walk through setting up SQS for queueing and consumption from Node.js.
Create an account on AWS, sign in to the console and find SQS.
Click on the Create Queue button, choose a name – we’ll call our queue sample-queue – and continue with the default pre-filled configuration. Click on the Create Queue button at the bottom of the page. Done! …
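The post’s walkthrough continues in Node.js; purely as an illustration of the send/receive cycle against the sample-queue created above, here is a rough sketch using boto3 instead (the region is an assumption).

```python
# Sketch of producing to and consuming from sample-queue with boto3.
# Region name is an assumption; the post's own code uses Node.js.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.get_queue_url(QueueName="sample-queue")["QueueUrl"]

# Producer: push a message onto the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody="hello from the producer")

# Consumer: long-poll for messages, process them, then delete them.
resp = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
for msg in resp.get("Messages", []):
    print("received:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```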
Due to an anomaly on our frontend, over 500k objects had been uploaded to an S3 bucket without the correct content-type. Keeping the immutable nature of S3 objects in mind, and the constraint of retaining the same link for each object, here’s how I fixed this anomaly with a Python script built on top of boto3.
In a nutshell, here’s what this script does:
The code, along with inline documentation, is as follows:
If you need any other help with this, feel free to reach out to me.
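The script itself is not reproduced in this excerpt, but the core trick, given that S3 objects cannot be edited in place, is to copy each object onto itself while replacing its metadata. A minimal sketch of that idea follows; the bucket name and target content type are assumptions.

```python
# Sketch: fix content-type by copying each object onto itself with
# replaced metadata. Bucket name and content type are assumptions.
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # hypothetical bucket name

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        s3.copy_object(
            Bucket=bucket,
            Key=key,  # same key, so the object's link stays the same
            CopySource={"Bucket": bucket, "Key": key},
            ContentType="image/jpeg",     # assumed target content-type
            MetadataDirective="REPLACE",  # required to overwrite metadata
        )
```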
A guide to load balancing your TFServing Inference API over multiple GPUs.
This is a mini-project I worked on a few months ago but never got around to writing about. TFServing is used to serve TensorFlow models for inference. It provides a REST/gRPC API that can be used to send inference requests and get results from your ML model. You can read more about TFServing here:
TFServing can be run on either GPUs or CPUs — it has two different Docker images for these deployments — https://hub.docker.com/r/tensorflow/serving. One problem here is that TFServing doesn’t allow for parallelization out of…
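The post’s actual setup is not shown in this excerpt; as one simple way to picture the idea, here is a sketch of client-side round-robin over several TFServing containers, each assumed to be pinned to a single GPU and exposing the REST API on its own port (host, ports, and model name are assumptions).

```python
# Client-side round-robin across TFServing REST endpoints, one per GPU.
# Host, ports, and model name are assumptions for illustration only.
import itertools
import requests

ENDPOINTS = itertools.cycle([
    "http://localhost:8501/v1/models/my_model:predict",  # container on GPU 0
    "http://localhost:8502/v1/models/my_model:predict",  # container on GPU 1
])

def predict(instances):
    """Send one inference request to the next endpoint in the rotation."""
    url = next(ENDPOINTS)
    resp = requests.post(url, json={"instances": instances})
    resp.raise_for_status()
    return resp.json()["predictions"]

# Example call with two dummy 3-feature inputs.
print(predict([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]))
```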