<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Abhishek Shivale]]></title><description><![CDATA[I write blogs to share my knowledge. If you find them helpful or have feedback, reach out to me.]]></description><link>https://blog.abhishek.win</link><generator>RSS for Node</generator><lastBuildDate>Fri, 10 Apr 2026 07:35:56 GMT</lastBuildDate><atom:link href="https://blog.abhishek.win/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Streams In NodeJs]]></title><description><![CDATA[Have you ever heard about streams? Like senior engineers talking about how you can pipe certain streams to achieve performance gains? In this article, I will try to explain what streams actually are in Node.js and what their use cases are.
Streams ar...]]></description><link>https://blog.abhishek.win/streams-in-nodejs</link><guid isPermaLink="true">https://blog.abhishek.win/streams-in-nodejs</guid><category><![CDATA[Node.js]]></category><category><![CDATA[streams in nodejs]]></category><category><![CDATA[Computer Science]]></category><dc:creator><![CDATA[Abhishek Shivale]]></dc:creator><pubDate>Sun, 15 Feb 2026 08:37:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1771144611611/0954b1e6-4e78-4a89-96a2-26d8b147cacc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Have you ever heard about streams? Like senior engineers talking about how you can pipe certain streams to achieve performance gains? In this article, I will try to explain what streams actually are in Node.js and what their use cases are.</p>
<p>Streams are a fundamental concept not only in Node.js but in computer science in general. A stream represents a continuous flow. For example, when we talk about a continuous flow of water, we call it a water stream. In computer science, data flows through complex networks in much the same way, so flowing data is simply called a stream.</p>
<h3 id="heading-why-do-we-need-streams">Why do we need streams?</h3>
<p>Let’s say you have a 5 GB file that you want to upload from your frontend to your backend, and then on to S3.</p>
<p>One way to do it would be to load the entire 5 GB file into memory on the backend and then push it to S3. But here lies the problem: this solution cannot scale. What if 10 people try to upload at once? You would need a huge server, and it could still fail.</p>
<p>The solution would be to use streams.<br />Get a stream of the file from the frontend and pipe it from the backend directly to S3.</p>
<h3 id="heading-how-does-this-actually-work">How does this actually work?</h3>
<p>In this example, data is flowing from the frontend to the backend. In Node.js, the incoming HTTP request (<code>req</code>) is a readable stream. On the backend, we just change its direction using <code>pipe()</code> to send it to S3. So you are basically streaming the file to S3 without storing the whole thing in memory.</p>
<h3 id="heading-streams-in-nodejs">Streams in Node.js</h3>
<p>Node.js provides an abstract API for working with streaming data. It offers four types of streams:</p>
<ol>
<li><p><strong>Writable stream</strong> – You can use this stream to write data. A good example would be storing an uploaded file from the frontend to the backend.</p>
</li>
<li><p><strong>Readable stream</strong> – You can use this stream to read streamed data. For example, if you have a large JSON file and you want to perform certain operations on it but cannot load it fully into memory, you use a readable stream.</p>
</li>
<li><p><strong>Duplex stream</strong> – This stream implements both writable and readable behavior, which makes it very powerful. A good example would be a TCP client-server implementation.</p>
</li>
<li><p><strong>Transform stream</strong> – This is a special type of duplex stream that takes input and produces output. For example, in a read stream we just read, but in a transform stream we can modify the data while reading and writing it.</p>
</li>
</ol>
<p>Node.js also provides helper APIs to make working with streams easier.</p>
<ul>
<li><strong>pipeline</strong> – This is a top-level function that allows you to pipe one stream to another safely and handle errors and backpressure properly.</li>
</ul>
<hr />
<p>After reading so far, you might have an idea about streams. But you might be wondering: how does it actually manage everything internally?</p>
<p>Internally, data is handled as buffers (binary data). In Node.js, when you use read or write streams, they store data in an internal buffer. The size of this internal buffer is controlled by the <code>highWaterMark</code> value. For many streams it is 16 KB by default, but for file streams like <code>fs.createReadStream()</code> it is usually 64 KB.</p>
<p>Let’s take the same example where we send a 5 GB file from the frontend to the backend and then to S3. Everything works using streams.</p>
<p>First, you get a stream from the frontend. This is a readable stream because the browser does not send all the data at once. When Node.js receives data, it stores chunks in its internal buffer. It emits a <code>data</code> event when chunks are available (in flowing mode), and you can listen to that event.</p>
<p>If the writable destination is slower and the internal buffer reaches its limit, Node.js applies backpressure and waits until the buffer is drained before continuing. In real-world scenarios, when you pipe the incoming stream directly to S3, data is continuously drained and forwarded, so memory usage stays controlled.</p>
<p>If you use <code>pipeline()</code>, you don’t need to manage this manually — it handles backpressure and errors for you automatically.</p>
<hr />
<p>There are many interesting things about streams, and we only talked briefly about some of them. You can read more about streams in the official Node.js documentation as well.</p>
<p>If you find anything wrong or if I made any mistakes, I’m open to suggestions. Feel free to point them out.</p>
]]></content:encoded></item><item><title><![CDATA[Buffer in Node.js]]></title><description><![CDATA[What is a Buffer?

Buffer is a Node.js core module that works with raw binary data. 
We can directly deal with the buffer using the Buffer module.

Why Node.js needs Buffers for binary data?

Everything on the internet is in binary format, so we need...]]></description><link>https://blog.abhishek.win/buffer-in-nodejs</link><guid isPermaLink="true">https://blog.abhishek.win/buffer-in-nodejs</guid><category><![CDATA[Node.js]]></category><category><![CDATA[buffer]]></category><dc:creator><![CDATA[Abhishek Shivale]]></dc:creator><pubDate>Thu, 04 Sep 2025 15:02:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756998667421/31f6031f-03da-4f7b-877d-06d8f107924f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-a-buffer">What is a Buffer?</h2>
<ul>
<li>Buffer is a Node.js core class (available as a global) for working with raw binary data. </li>
<li>We can deal with binary data directly using the Buffer API.</li>
</ul>
<h2 id="heading-why-nodejs-needs-buffers-for-binary-data">Why does Node.js need Buffers for binary data?</h2>
<ul>
<li>Everything sent over a network or read from disk is ultimately binary, so we need a way to represent and manipulate raw bytes. This is where the Buffer comes into play and saves the day.</li>
</ul>
<h2 id="heading-creating-buffer">Creating Buffer</h2>
<p>There are three ways to create a buffer:</p>
<ul>
<li><strong>Buffer.from()</strong></li>
</ul>
<p>We can create a new buffer by passing a string, an array of bytes, or another buffer to <code>Buffer.from()</code>.</p>
<pre><code class="lang-js"><span class="hljs-comment">// 1. From a string</span>

<span class="hljs-keyword">const</span> buf1 = Buffer.from(<span class="hljs-string">'Hello, Buffer'</span>)
<span class="hljs-built_in">console</span>.log(buf1) <span class="hljs-comment">// &lt;Buffer 48 65 6c 6c 6f ...&gt;</span>
<span class="hljs-built_in">console</span>.log(buf1.toString()) <span class="hljs-comment">// Hello, Buffer</span>

<span class="hljs-comment">// 2. From an array</span>

<span class="hljs-keyword">const</span> buf2 = Buffer.from([<span class="hljs-number">72</span>, <span class="hljs-number">101</span>, <span class="hljs-number">108</span>, <span class="hljs-number">108</span>, <span class="hljs-number">111</span>])
<span class="hljs-built_in">console</span>.log(buf2.toString()) <span class="hljs-comment">// Hello</span>

<span class="hljs-comment">// 3. From another buffer (copying it)</span>

<span class="hljs-keyword">const</span> buf3 = Buffer.from(buf1);
<span class="hljs-built_in">console</span>.log(buf3.toString()) <span class="hljs-comment">// Hello, Buffer</span>
</code></pre>
<ul>
<li><strong>Buffer.alloc() and Buffer.allocUnsafe()</strong></li>
</ul>
<p>We use these functions to create a new buffer of a given size in bytes. </p>
<pre><code class="lang-js"><span class="hljs-comment">// 1. From Buffer.alloc()</span>

<span class="hljs-keyword">const</span> buf4 = Buffer.alloc(<span class="hljs-number">10</span>)
<span class="hljs-built_in">console</span>.log(buf4) <span class="hljs-comment">// &lt;Buffer 00 00 00  00 00 ...&gt;</span>

<span class="hljs-comment">// 2. From Buffer.allocUnsafe()</span>
<span class="hljs-keyword">const</span> buf5 = Buffer.allocUnsafe(<span class="hljs-number">10</span>)
<span class="hljs-built_in">console</span>.log(buf5); <span class="hljs-comment">// &lt;Buffer e8 91 ... random data&gt;</span>
</code></pre>
<h4 id="heading-when-to-use-alloc-or-allocunsafe">When to use <strong>alloc()</strong> or <strong>allocUnsafe()</strong>?</h4>
<ul>
<li><strong>alloc()</strong><ul>
<li>Slower because it initializes memory with zeros.</li>
<li>Safer, since no old data leaks (NodeJS clears/cleans memory before assigning).</li>
</ul>
</li>
<li><strong>allocUnsafe()</strong><ul>
<li>Faster because it skips initialization (it skips the zero-filling step).</li>
<li>Unsafe, because the buffer may contain old memory values until overwritten (NodeJS does not clear/clean memory before assigning).</li>
</ul>
</li>
</ul>
<p>Which one to use depends on the situation: if you need speed and will immediately overwrite the buffer, use <code>allocUnsafe()</code>; otherwise, prefer <code>alloc()</code>.</p>
<blockquote>
<p><strong>NOTE:</strong><br />Don't use <code>new Buffer()</code> – it’s <strong>deprecated</strong> and <strong>unsafe</strong>.<br />Use <code>Buffer.from()</code>, <code>Buffer.alloc()</code>, or <code>Buffer.allocUnsafe()</code> instead.</p>
</blockquote>
<h2 id="heading-reading-amp-writing-data-in-buffers">Reading &amp; Writing Data in Buffers</h2>
<ul>
<li><strong>Accessing Bytes</strong></li>
</ul>
<p>Each element in the buffer is a byte (0-255).</p>
<pre><code class="lang-js">
<span class="hljs-keyword">const</span> buf = Buffer.from(<span class="hljs-string">"Hello"</span>);
<span class="hljs-built_in">console</span>.log(buf[<span class="hljs-number">0</span>]);  <span class="hljs-comment">// 72 (ASCII for 'H')</span>
<span class="hljs-built_in">console</span>.log(buf[<span class="hljs-number">1</span>]);  <span class="hljs-comment">// 101 (ASCII for 'e')</span>
</code></pre>
<ul>
<li><strong>Converting Buffer to String</strong></li>
</ul>
<p>Buffers can be converted to and from strings using encodings such as <code>utf8</code>, <code>hex</code>, and <code>base64</code>.</p>
<pre><code class="lang-js">
<span class="hljs-keyword">const</span> buf = Buffer.from(<span class="hljs-string">"Hello, world!"</span>, <span class="hljs-string">"utf8"</span>);

<span class="hljs-built_in">console</span>.log(buf.toString(<span class="hljs-string">"utf8"</span>));   <span class="hljs-comment">// "Hello, world!"</span>
<span class="hljs-built_in">console</span>.log(buf.toString(<span class="hljs-string">"hex"</span>));    <span class="hljs-comment">// "48656c6c6f2c20776f726c6421"</span>
<span class="hljs-built_in">console</span>.log(buf.toString(<span class="hljs-string">"base64"</span>)); <span class="hljs-comment">// "SGVsbG8sIHdvcmxkIQ=="</span>


<span class="hljs-keyword">const</span> buf1 = Buffer.from(<span class="hljs-string">"Hello"</span>, <span class="hljs-string">"utf8"</span>);
<span class="hljs-keyword">const</span> buf2 = Buffer.from(<span class="hljs-string">"48656c6c6f"</span>, <span class="hljs-string">"hex"</span>);
<span class="hljs-keyword">const</span> buf3 = Buffer.from(<span class="hljs-string">"SGVsbG8="</span>, <span class="hljs-string">"base64"</span>);

<span class="hljs-built_in">console</span>.log(buf1.toString()); <span class="hljs-comment">// Hello</span>
<span class="hljs-built_in">console</span>.log(buf2.toString()); <span class="hljs-comment">// Hello</span>
<span class="hljs-built_in">console</span>.log(buf3.toString()); <span class="hljs-comment">// Hello</span>
</code></pre>
<blockquote>
<p><strong>Tips:</strong>  </p>
<ul>
<li>You can perform <strong>array-like operations</strong> on a buffer, such as <code>subarray()</code> (the replacement for the deprecated <code>slice()</code>), <code>copy()</code>, and <code>Buffer.concat()</code>.  </li>
<li>Buffers are widely used in <strong>networking (TCP sockets)</strong>, file I/O, and <strong>exchanging data across the web</strong> where raw binary data is required.  </li>
</ul>
</blockquote>
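<p>For example (note that <code>subarray()</code> returns a view over the same memory rather than a copy):</p>

```javascript
const buf = Buffer.from("Hello, world!");

// subarray() returns a view that shares memory with the original buffer.
const hello = buf.subarray(0, 5);
console.log(hello.toString()); // Hello

// copy() copies a byte range into another buffer.
const target = Buffer.alloc(5);
buf.copy(target, 0, 7, 12); // copy bytes 7..11 ("world") into target
console.log(target.toString()); // world

// Buffer.concat() joins several buffers into a new one.
const joined = Buffer.concat([hello, Buffer.from(" "), target]);
console.log(joined.toString()); // Hello world
```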
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>That’s all for <strong>Buffers in Node.js</strong>.<br />We learned how to create buffers, read/write data, and convert between different encodings.  </p>
<p>This is just the beginning! In upcoming articles, we’ll learn about <strong>Node.js core modules</strong> and eventually put everything together into a real-world project.  </p>
<p>Stay tuned, and follow along for more advanced stuff.<br />Till then, happy coding &amp; goodbye.</p>
]]></content:encoded></item><item><title><![CDATA[Building Video Transcoding Service Using TurboRepo, NestJS, and React]]></title><description><![CDATA[In this project, I built a Video Transcoding Service using Turborepo, NestJS, React, Docker, and other tools. The system supports features like uploading videos, queue-based background processing, format conversion with FFmpeg, and HLS output with au...]]></description><link>https://blog.abhishek.win/building-video-transcoding-service</link><guid isPermaLink="true">https://blog.abhishek.win/building-video-transcoding-service</guid><category><![CDATA[Node.js]]></category><category><![CDATA[turborepo]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[React]]></category><category><![CDATA[Docker]]></category><category><![CDATA[queue]]></category><category><![CDATA[video streaming]]></category><dc:creator><![CDATA[Abhishek Shivale]]></dc:creator><pubDate>Mon, 14 Apr 2025 18:12:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1744657076710/1867b850-2334-427d-861d-90c6abb1d15b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this project, I built a <strong>Video Transcoding Service</strong> using <strong>Turborepo</strong>, <strong>NestJS</strong>, <strong>React</strong>, <strong>Docker</strong>, and other tools. The system supports features like uploading videos, queue-based background processing, format conversion with FFmpeg, and HLS output with auto-bitrate support. This article walks through the <strong>architecture</strong>, <strong>tech stack</strong>, <strong>challenges faced</strong>, and some <strong>key lessons learned</strong> during the process.</p>
<h2 id="heading-tech-stack">Tech Stack</h2>
<ul>
<li><p><strong>Turborepo</strong> – for monorepo orchestration</p>
</li>
<li><p><strong>NestJS</strong> – backend and APIs (Auth, Upload, Queue Management)</p>
</li>
<li><p><strong>React</strong> – frontend (upload form, progress viewer)</p>
</li>
<li><p><strong>PostgreSQL + Prisma</strong> – database and ORM</p>
</li>
<li><p><strong>BullMQ</strong> – job queue and worker system</p>
</li>
<li><p><strong>Docker</strong> – isolated video processing environment</p>
</li>
<li><p><strong>AWS S3</strong> – for video storage (input &amp; output)</p>
</li>
</ul>
<h2 id="heading-architecture-overview">Architecture Overview</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744810281614/1d02dbbd-a6d6-46c8-99ee-ba1a87c9ecb9.png" alt class="image--center mx-auto" /></p>
<p>The overall architecture follows this flow:</p>
<ol>
<li><p><strong>User uploads a video</strong> via the frontend → backend receives it via NestJS.</p>
</li>
<li><p>The backend <strong>uploads the raw video to AWS S3</strong> and saves metadata in PostgreSQL.</p>
</li>
<li><p>A new job is <strong>enqueued in BullMQ</strong> for video processing.</p>
</li>
<li><p>A <strong>worker service</strong> picks the job and <strong>spins up a Docker container</strong> with:</p>
<ul>
<li><p>S3 video URL</p>
</li>
<li><p>AWS credentials</p>
</li>
</ul>
</li>
<li><p>Inside the Docker container:</p>
<ul>
<li><p>The video is <strong>downloaded from S3</strong></p>
</li>
<li><p>Converted to <code>.m3u8</code> using FFmpeg with <strong>multiple formats and auto-bitrate</strong></p>
</li>
<li><p>The processed folder is uploaded <strong>back to S3</strong></p>
</li>
<li><p><code>master.m3u8</code> URL is logged via <code>stdout</code></p>
</li>
</ul>
</li>
<li><p>The worker <strong>listens to Docker logs</strong>, extracts the <code>master.m3u8</code> URL, and <strong>updates the database</strong>.</p>
</li>
</ol>
<p>Everything is designed to be <strong>fully decoupled</strong>, <strong>scalable</strong>, and <strong>cloud-native</strong>.</p>
<h2 id="heading-challenges-faced">Challenges Faced</h2>
<h3 id="heading-1-choosing-the-right-queue-system">1. Choosing the Right Queue System</h3>
<p>At first, choosing which queue system to use was frustrating. I didn’t want the overhead of <strong>Kafka</strong> or <strong>RabbitMQ</strong> just to manage basic jobs. I needed a simple, reliable, and Node.js-friendly solution.</p>
<blockquote>
<p>I chose <strong>BullMQ</strong> — it offers Redis-based queues with good developer experience and async/await support.</p>
</blockquote>
<h3 id="heading-2-video-processing-inside-docker">2. Video Processing Inside Docker</h3>
<p>Running FFmpeg inside Docker was a challenge. Some public images partially worked, but they were either not customizable enough or too heavy.</p>
<blockquote>
<p>I built my <strong>own lightweight Docker image</strong> optimized specifically for FFmpeg and S3 integration. This allowed full control, faster spin-up, and smaller footprint.</p>
</blockquote>
<h3 id="heading-3-uploading-from-inside-docker-amp-updating-the-database">3. Uploading from Inside Docker &amp; Updating the Database</h3>
<p>Uploading to S3 inside Docker is straightforward — but there's a twist:</p>
<ul>
<li><p>We didn’t want to download the video on the main server</p>
</li>
<li><p>Docker doesn’t have access to the DB directly</p>
</li>
<li><p>We couldn’t easily "return" data from Docker</p>
</li>
</ul>
<blockquote>
<p><strong>Solution</strong>: Instead of returning the processed URL via API or database, I made the Docker container <strong>log the</strong> <code>master.m3u8</code> URL.<br />The worker <strong>listens to stdout</strong>, parses logs, and when a specific log is found (e.g., <code>HLS_READY: &lt;URL&gt;</code>), it updates the DB.<br />This lightweight pattern was <strong>clean, effective, and flexible</strong>.</p>
</blockquote>
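<p>The parsing side of this pattern can be as small as a prefix check on each log line. The <code>HLS_READY:</code> marker is the convention from this project; the helper name and everything else below is a simplified sketch, not the actual worker code:</p>

```javascript
// Scan a chunk of container stdout for the HLS_READY marker and
// return the master.m3u8 URL if present, otherwise null.
function extractHlsUrl(logChunk) {
  for (const line of logChunk.split('\n')) {
    const match = line.match(/^HLS_READY:\s*(\S+)/);
    if (match) return match[1];
  }
  return null;
}

const logs = 'transcoding 42%\nHLS_READY: https://bucket.s3.amazonaws.com/out/master.m3u8';
console.log(extractHlsUrl(logs)); // https://bucket.s3.amazonaws.com/out/master.m3u8
```

In the real worker, a function like this would run on each stdout chunk from the Docker logs stream, and a non-null result would trigger the database update.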
<h2 id="heading-key-lessons-learned">Key Lessons Learned</h2>
<ul>
<li><p><strong>Turborepo</strong> helped me manage shared types, interfaces, and services across multiple apps (frontend, backend, workers).</p>
</li>
<li><p><strong>Docker</strong> is powerful but can be tricky when communicating with services outside of its context.</p>
</li>
<li><p><strong>FFmpeg</strong> is a beast — combining formats, bitrates, and stream maps takes time and testing.</p>
</li>
<li><p>Streaming logs and designing your <strong>own communication protocols</strong> (like log-based status updates) can be extremely useful in decoupled systems.</p>
</li>
<li><p><strong>BullMQ</strong> is enough for most video processing workloads unless you hit extreme scale.</p>
</li>
</ul>
<h2 id="heading-whats-next">What's Next?</h2>
<p>Here are a few future improvements I’m planning:</p>
<ul>
<li><p>Add retry &amp; failure queue handling in BullMQ</p>
</li>
<li><p>Better job status dashboard with real-time updates</p>
</li>
<li><p>CDN integration for fast HLS delivery</p>
</li>
<li><p>Auth + token-based video access control</p>
</li>
<li><p>Support for more formats (e.g., audio-only, 4K rendering)</p>
</li>
</ul>
<h2 id="heading-project-links">Project Links</h2>
<ul>
<li><p><strong>Main Project Repository</strong>: <a target="_blank" href="https://github.com/abhishek-shivale/video_streaming.git">GitHub – video_streaming</a></p>
</li>
<li><p><strong>Custom Docker Image Source</strong>: <a target="_blank" href="https://github.com/abhishek-shivale/ffmpeg_docker.git">GitHub – ffmpeg_docker</a></p>
</li>
<li><p><strong>Docker Image (Public)</strong>: <a target="_blank" href="https://hub.docker.com/repository/docker/abhishekshivale21/ffmpeg/general">Docker Hub – abhishekshivale21/ffmpeg</a></p>
</li>
</ul>
]]></content:encoded></item></channel></rss>