Implement Memory Efficient Applications in Node.JS

Node.JS provides us with great APIs and modules for optimizing how our code uses Random Access Memory (RAM). Our software applications usually run in a system's primary memory, i.e. RAM, and with the help of the tools built into Node we can change how our software behaves in that memory.

Node.JS allows us to write anything from small scripts to large projects, including databases. Dealing with a software's efficiency and memory usage is always a challenging problem, since an inefficient application can slow down or even block other applications running on the same system. Therefore, it is important for developers to write memory efficient applications.

One big problem in front of developers today is copying huge files. To overcome the problem of the huge file copy, we use the mechanisms of Buffers and Streams.

We can break large data into multiple parts so that only a small piece needs to be in memory at a time. The broken parts need a data structure to live in, and that is where Buffers come in: a Buffer is a data structure that stores binary data. Further, we need read/write functionality for moving those parts around, and Streams provide us with exactly that.
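
To make the contrast concrete, here is a minimal sketch (the file names are placeholders, and the .pipe() method used at the end is explained later in this article) comparing a whole-file copy with a streamed copy:

var fs = require("fs");

// Inefficient: the entire file is loaded into RAM at once.
// For a file of several gigabytes this can exhaust the process memory.
fs.readFile('large-file.txt', (err, wholeFile) => {
    if (err) throw err;
    fs.writeFile('copy.txt', wholeFile, (err) => {
        if (err) throw err;
    });
});

// Efficient: only one small chunk is held in an internal
// buffer at any moment while the data flows through.
fs.createReadStream('large-file.txt')
    .pipe(fs.createWriteStream('copy-streamed.txt'));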

Now, let's study both of them separately.

Buffers

A buffer can be created with the Buffer.alloc() method (the older new Buffer() constructor is deprecated and should not be used). For example ->

var buffer = Buffer.alloc(5);
console.log(buffer);

Here, 5 is the size of the buffer in bytes. The output of the code will be ->

<Buffer 00 00 00 00 00>
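
Once a buffer is allocated, data can be written into its fixed space with the .write() method; a minimal sketch:

var buffer = Buffer.alloc(5);

// Write an ASCII string into the pre-allocated space.
// .write() returns the number of bytes actually written,
// so anything beyond the 5-byte capacity is silently dropped.
var bytesWritten = buffer.write('OpenGenus');
console.log(bytesWritten); // 5
console.log(buffer);       // <Buffer 4f 70 65 6e 47>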

If some data is already declared and we want to put it into a buffer, we use Buffer.from() ->

var string = 'OpenGenus'
var buffer = Buffer.from(string)
console.log(buffer)

In this case, the size of the buffer is decided by the byte length of the string ('OpenGenus' is plain ASCII, so that equals the character count). Since 'OpenGenus' has 9 characters, the size of the buffer will be 9.
The output of the above code will be ->

<Buffer 4f 70 65 6e 47 65 6e 75 73>
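
Note that the buffer size comes from the byte length of the string, not from the character count; the two differ for multi-byte characters. A small sketch illustrating the difference:

// 'OpenGenus' is plain ASCII: 9 characters, 9 bytes.
console.log(Buffer.byteLength('OpenGenus')); // 9

// 'é' is one character but two bytes in UTF-8.
var accented = Buffer.from('é');
console.log(accented.length); // 2
console.log(accented);        // <Buffer c3 a9>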

If you want to look at the data stored in the buffer as text, the .toString() method is used. For example ->

var string = 'OpenGenus'
var buffer = Buffer.from(string)
console.log(buffer.toString())

The output will again be the string that is stored in the buffer, i.e. OpenGenus.
If you want to look at the JSON representation of the data stored in the buffer, the .toJSON() method is used. For example ->

var string = 'OpenGenus'
var buffer = Buffer.from(string)
console.log(buffer.toJSON())

Here, the output will be ->

{
    type: 'Buffer',
    data: [
        79, 112, 101, 110,
        71, 101, 110, 117,
        115
    ]
}
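
Since the whole idea is to break large data into parts, it is also useful to know how the parts are joined back together; Buffer.concat() does exactly that. A minimal sketch:

var part1 = Buffer.from('Open');
var part2 = Buffer.from('Genus');

// Join the pieces back into a single buffer.
var whole = Buffer.concat([part1, part2]);
console.log(whole.toString()); // OpenGenus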

NOTE - We don't normally create raw buffers ourselves to optimize code. Node.JS does it by itself, creating internal buffers while working with streams and network sockets.

Streams

Streams are collections of data, just like arrays or strings. The difference is that the data in a stream is not available all at once; it arrives in chunks over time. This makes streams more powerful and well suited for handling large scale applications.
There are four types of streams ->

  • Readable Streams - streams from which data can be read, e.g. fs.createReadStream().
  • Writable Streams - streams to which data can be written, e.g. fs.createWriteStream().
  • Duplex Streams - streams that are both readable and writable, e.g. a TCP socket.
  • Transform Streams - duplex streams that modify the data as it passes through (a short sketch of one appears below).

The stream module is primarily used for limiting the buffering of data, so that fast sources and slow destinations don't exhaust the memory available to other applications. We use the ".pipe()" method for attaching a readable stream to a writable stream.

Each type of stream emits several events at different points in time. Some events emitted by streams are ->

  • data - This event is emitted when data is available to read.
  • end - This event is emitted when there is no more data left to read.
  • error - This event is emitted when there is an error while reading or writing data.
  • finish - This event is emitted when all the data has been flushed to the underlying system.

Readable Streams

Create a file named "input.txt" and put some content into it. For example, I will write just "Software Developer, OpenGenus" in the input.txt file.
After that, create a main JavaScript file named "index.js" and write the following code ->

var fs = require("fs");

// Create a readable stream over input.txt; no data is read yet.
var readerStream = fs.createReadStream('input.txt');
console.log(readerStream);

The output of the above code will be (the exact fields vary slightly across Node.JS versions) ->

ReadStream {
  _readableState: ReadableState {
    objectMode: false,
    highWaterMark: 65536,
    buffer: BufferList { head: null, tail: null, length: 0 },
    length: 0,
    pipes: null,
    pipesCount: 0,
    flowing: null,
    ended: false,
    endEmitted: false,
    reading: false,
    sync: true,
    needReadable: false,
    emittedReadable: false,
    readableListening: false,
    resumeScheduled: false,
    emitClose: false,
    autoDestroy: false,
    destroyed: false,
    defaultEncoding: 'utf8',
    awaitDrain: 0,
    readingMore: false,
    decoder: null,
    encoding: null,
    [Symbol(kPaused)]: null
  },
  readable: true,
  _events: [Object: null prototype] { end: [Function] },
  _eventsCount: 1,
  _maxListeners: undefined,
  path: 'input.txt',
  fd: null,
  flags: 'r',
  mode: 438,
  start: undefined,
  end: Infinity,
  autoClose: true,
  pos: undefined,
  bytesRead: 0,
  closed: false,
  [Symbol(kCapture)]: false,
  [Symbol(kIsPerformingIO)]: false
}
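
Note the highWaterMark field in this dump: 65536 bytes (64 KB) is the default chunk size for file read streams. If needed, it can be tuned through the options object of fs.createReadStream(); a minimal sketch:

var fs = require("fs");

// Read the file in 1 KB chunks instead of the default 64 KB.
var smallChunkStream = fs.createReadStream('input.txt', {
    highWaterMark: 1024
});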

Now, to handle these events, we write the following code ->

var fs = require("fs");
var data = '';

var readerStream = fs.createReadStream('input.txt');
 
readerStream.setEncoding('utf8');

readerStream.on('data', (block) => {
   data = data + block;
});

readerStream.on('end', () => {
   console.log(data);
});

readerStream.on('error', (err) => {
   console.log(err.stack);
});

console.log("Program Ended");

Here, the .createReadStream() method first creates a stream over the contents of the file given as its argument. In the second line, .setEncoding() sets the encoding to utf8, so the 'data' event delivers strings instead of raw buffers. Then come the event handlers: we handle the stream events 'data', 'end' and 'error', appending each incoming block to the data variable. Note that "Program Ended" appears first in the output below, because the file is read asynchronously and the final console.log runs before any 'data' event fires. To run the code,

node index.js

Output ->

Program Ended
Software Developer, OpenGenus

Writable Streams

To implement Writable Streams, write the following code ->

var fs = require("fs");
var data = 'OpenGenus';

var writerStream = fs.createWriteStream('output.txt');

writerStream.write(data, 'utf8');

writerStream.end();

writerStream.on('finish', () => {
   console.log("Write completed.");
});

writerStream.on('error', (err) => {
   console.log(err.stack);
});

console.log("Program Ended");

It will create a new file named "output.txt" (or overwrite it if it already exists) and write the content to it. Here, we provided "OpenGenus" as the content.
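
As a side note, if the file should be appended to rather than overwritten, fs.createWriteStream() accepts an options object with a flags property; a minimal sketch:

var fs = require("fs");

// 'a' opens the file in append mode; the default 'w' truncates it.
var appendStream = fs.createWriteStream('output.txt', { flags: 'a' });

appendStream.write(' - appended text', 'utf8');
appendStream.end();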

Piping

Piping is the mechanism of providing the output of one stream as the input to another stream. Let's learn this with an example ->

var fs = require("fs");

var readerStream = fs.createReadStream('input.txt');

var writerStream = fs.createWriteStream('output.txt');

readerStream.pipe(writerStream);

console.log("Program Ended");

Here, "readerStream.pipe(writerStream)" is used to write the output of readerStream to the input of writerStream. In this case, it will create a file named as "output.txt" and write the content of "input.txt" to it.
Therefore, piping can modify our programs to adjust the reading and writing speeds of the disks automatically. This mechanism is called Backpressure. Node.js automatically controls backpressure in our application. Since, piping controls the read and write speeds of systems, therefore, blocking of any application because of other will never happen.
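
When you write to a stream manually instead of piping, the same backpressure signal is available directly: .write() returns false when the internal buffer is full, and the 'drain' event tells you when it is safe to continue. A minimal sketch of that pattern:

var fs = require("fs");

var writerStream = fs.createWriteStream('output.txt');
var i = 1000000;

function writeChunk() {
    var ok = true;
    while (i > 0 && ok) {
        i--;
        // .write() returns false once the stream's internal buffer
        // is full; stop writing and wait for the 'drain' event.
        ok = writerStream.write('line ' + i + '\n', 'utf8');
    }
    if (i > 0) {
        writerStream.once('drain', writeChunk);
    } else {
        writerStream.end();
    }
}

writeChunk();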

And with that, we finish the article. With this article at OpenGenus, you must have a complete idea of how to write memory efficient software applications in Node.JS. Enjoy.
