Previously, I discussed Node's stream module and the pipe method, which mitigate buffering large amounts of data into memory by streaming the data to its destination one chunk at a time. This pattern is common when sending requests to, and receiving responses from, an API that handles sizable files like images and videos.
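As a quick refresher, that pattern looks something like the sketch below. The file name and port are hypothetical; the point is that pipe moves data chunk by chunk rather than loading the whole file first.

const fs = require('fs');
const http = require('http');

http.createServer((req, res) => {
  // Stream the file to the response one chunk at a time instead of
  // buffering the entire file into memory first.
  fs.createReadStream('video.mp4').pipe(res);
}).listen(3000);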
However, I recently came across a need to use a stream for something other than an ordinary consumable like an image or video: 7 GB of URLs stored in a CSV file! After toying around with different options (like dragging and dropping the file into a database GUI, which didn't work), I decided to use a stream again. Along the way, I came across an interesting package on NPM. I ended up not using it, but it did inspire me to recreate it for fun.
First, an introduction to Node's readline module; then, the fun. Readline does exactly what it sounds like: it reads input line by line. The quintessential example from Node's docs follows:

const readline = require('readline');

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

rl.question('What do you think of Node.js? ', (answer) => {
  // TODO: Log the answer in a database
  console.log(`Thank you for your valuable feedback: ${answer}`);

  rl.close();
});

The example above prompts the user in the terminal and passes their answer to the program for handling.
I used this same interface for the MERN Stack Build project. In that case, I found the longest public-domain ebook I could, which happened to be Moby Dick, and created a read stream from it.
As each chunk came in, I paused the stream, split the chunk on newline characters ('\n'), and stored each line as an element in an array. Every time the user (me) pressed the Enter key (which corresponds to the 'input: process.stdin' above), a line was shifted off the array and displayed in the terminal (which corresponds to the 'output: process.stdout' above). Once the array was empty, I resumed the stream for one additional chunk, then repeated the entire process. Eventually, I put a timer on the program so that the array shifted automatically every two seconds. A sketch of this flow follows.
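A minimal sketch of that flow might look like the following, assuming a local 'moby-dick.txt'. Note that this naive split can break a line that spans two chunks; a real program would carry the remainder over to the next chunk.

const fs = require('fs');
const readline = require('readline');

const bookStream = fs.createReadStream('moby-dick.txt', { encoding: 'utf8' });
let lines = [];

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

bookStream.on('data', (chunk) => {
  // Pause the stream so only one chunk is buffered at a time.
  bookStream.pause();
  lines = chunk.split('\n');
});

// Each press of the Enter key emits a 'line' event.
rl.on('line', () => {
  if (lines.length > 0) {
    console.log(lines.shift());
  }
  // Once the array is empty, resume the stream for one more chunk.
  if (lines.length === 0) {
    bookStream.resume();
  }
});

The timed variant simply swaps the 'line' handler for a setInterval callback that shifts the array every two seconds.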
Getting back to that 7 GB CSV file... I started thinking: what's a more practical and stimulating use case for this type of program? Answer: parse every row of my CSV file, clean and count the URLs, then display the top 25 URLs in the terminal!
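Here is a hedged sketch of that idea, assuming a single-column 'urls.csv' and a trivial cleaning step (trim plus lowercase); the real file layout and cleanup rules would differ.

const fs = require('fs');
const readline = require('readline');

const rl = readline.createInterface({
  input: fs.createReadStream('urls.csv'),
  crlfDelay: Infinity // treat \r\n as a single line break
});

const counts = new Map();

rl.on('line', (row) => {
  const url = row.trim().toLowerCase();
  if (url) {
    counts.set(url, (counts.get(url) || 0) + 1);
  }
});

rl.on('close', () => {
  // Sort by count, descending, and print the top 25.
  const top = [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 25);
  for (const [url, count] of top) {
    console.log(`${count}\t${url}`);
  }
});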
Though simple and often overlooked, Node's readline module can be very inspiring!
