A graph is a **collection of nodes connected by edges**, where each node represents an entity, and each edge represents a relationship between two entities. In this article, we will discuss how to implement a graph data structure in TypeScript and perform BFS and DFS on it.

In TypeScript, we can implement a graph data structure using classes. We will define two classes, namely Node and Graph, to represent a node and a graph, respectively.

The Node class will have two properties:

- `value` holds the value of the node.
- `neighbors` holds an array of neighboring nodes.

```
class Node {
  value: any;
  neighbors: Node[];

  constructor(value: any) {
    this.value = value;
    this.neighbors = [];
  }

  addNeighbor(node: Node) {
    this.neighbors.push(node);
  }
}
```

The Graph class will have one property, `nodes`, which holds an array of all the nodes in the graph. It also has two methods, `addNode` and `addEdge`, to add a node and an edge to the graph, respectively.

```
class Graph {
  nodes: Node[];

  constructor() {
    this.nodes = [];
  }

  addNode(value: any): Node {
    const node = new Node(value);
    this.nodes.push(node);
    return node; // return the node so callers can wire up edges
  }

  addEdge(source: Node, destination: Node) {
    // undirected graph: add each node to the other's neighbor list
    source.addNeighbor(destination);
    destination.addNeighbor(source);
  }
}
```

Breadth-First Search (BFS) is a graph traversal algorithm that visits all the nodes of a graph in breadth-first order, i.e., it visits all the nodes at the same level before moving to the next level. To perform BFS, we need to maintain a queue of nodes to be visited.

```
function bfs(startNode: Node) {
  const visited: Set<Node> = new Set();
  const queue: Node[] = [];

  visited.add(startNode);
  queue.push(startNode);

  while (queue.length > 0) {
    const currentNode = queue.shift()!;
    console.log(currentNode.value);

    for (const neighbor of currentNode.neighbors) {
      if (!visited.has(neighbor)) {
        visited.add(neighbor);
        queue.push(neighbor);
      }
    }
  }
}
```

In the above code, we first initialize a set called `visited` to keep track of the nodes that have been visited, and a queue to hold the nodes waiting to be visited. We add the `startNode` to both. Then, while the queue is not empty, we dequeue the `currentNode`, print its value, and add its unvisited neighbors to both the queue and the visited set.

Here's a visual cue for the code above:
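To see the traversal order concretely, here is a small self-contained harness. It repeats a condensed version of the node class (renamed `GraphNode` to avoid clashing with the DOM's built-in `Node` type) and collects values into an array instead of logging; the graph and its values are purely illustrative.

```typescript
class GraphNode {
  neighbors: GraphNode[] = [];
  constructor(public value: string) {}
}

// same BFS as above, but collecting values so the order is easy to inspect
function bfsOrder(startNode: GraphNode): string[] {
  const visited = new Set<GraphNode>([startNode]);
  const queue: GraphNode[] = [startNode];
  const order: string[] = [];
  while (queue.length > 0) {
    const current = queue.shift()!;
    order.push(current.value);
    for (const neighbor of current.neighbors) {
      if (!visited.has(neighbor)) {
        visited.add(neighbor);
        queue.push(neighbor);
      }
    }
  }
  return order;
}

// A -- B, A -- C, B -- D (undirected)
const [a, b, c, d] = ["A", "B", "C", "D"].map(v => new GraphNode(v));
for (const [x, y] of [[a, b], [a, c], [b, d]] as [GraphNode, GraphNode][]) {
  x.neighbors.push(y);
  y.neighbors.push(x);
}

console.log(bfsOrder(a)); // level by level: A, then B and C, then D
```

Note how both of A's neighbors are printed before B's neighbor D: that is the level-by-level order described above.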

Depth-First Search (DFS) is a graph traversal algorithm that visits all the nodes of a graph in depth-first order, i.e., it visits all the nodes in a branch before moving to the next branch. To perform DFS, we need to maintain a stack of nodes to be visited.

```
function dfs(startNode: Node) {
  const visited: Set<Node> = new Set();
  const stack: Node[] = [];

  stack.push(startNode);

  while (stack.length > 0) {
    const currentNode = stack.pop()!;
    if (!visited.has(currentNode)) {
      console.log(currentNode.value);
      visited.add(currentNode);
      for (const neighbor of currentNode.neighbors) {
        stack.push(neighbor);
      }
    }
  }
}
```

We first initialize a set called `visited` to keep track of the nodes that have been visited, and a stack to hold the nodes to be visited. We push the `startNode` onto the stack. Then, while the stack is not empty, we pop the `currentNode`; if it hasn't been visited yet, we print its value, add it to the visited set, and push its neighbors onto the stack (already-visited neighbors are skipped when they're popped).

Here's a visual cue:
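DFS can also be written recursively, letting the call stack play the role of the explicit stack. A sketch (with the same condensed, renamed node class and an illustrative graph):

```typescript
class GraphNode {
  neighbors: GraphNode[] = [];
  constructor(public value: string) {}
}

// Recursive DFS: the call stack replaces the explicit stack.
// Note it explores neighbors in declaration order, whereas the
// stack-based version above visits the most recently pushed neighbor first.
function dfsRecursive(
  node: GraphNode,
  visited = new Set<GraphNode>(),
  order: string[] = []
): string[] {
  if (visited.has(node)) return order;
  visited.add(node);
  order.push(node.value);
  for (const neighbor of node.neighbors) {
    dfsRecursive(neighbor, visited, order);
  }
  return order;
}

// A -- B, A -- C, B -- D (undirected)
const [a, b, c, d] = ["A", "B", "C", "D"].map(v => new GraphNode(v));
for (const [x, y] of [[a, b], [a, c], [b, d]] as [GraphNode, GraphNode][]) {
  x.neighbors.push(y);
  y.neighbors.push(x);
}

console.log(dfsRecursive(a)); // one branch at a time: A, B, D, then C
```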

Consider the following problem:

```
Given a graph, find the shortest path between two nodes using BFS.
```

To solve this problem, we can adapt the BFS algorithm above. We start by adding the start node to the queue, together with the path taken so far. Then, while the queue is not empty, we dequeue the `currentNode`, check whether it is the target node, and return the path if it is. Otherwise, we enqueue its unvisited neighbors, each paired with an extended copy of the current path.

Here's the TypeScript code to solve the problem above:

```
function shortestPath(graph: Graph, start: Node, target: Node): Node[] | null {
  const visited: Set<Node> = new Set();
  // each queue entry pairs a node with the path taken to reach it
  const queue: [Node, Node[]][] = [];

  visited.add(start);
  queue.push([start, [start]]);

  while (queue.length > 0) {
    const [currentNode, currentPath] = queue.shift()!;
    if (currentNode === target) {
      return currentPath;
    }
    for (const neighbor of currentNode.neighbors) {
      if (!visited.has(neighbor)) {
        // mark as visited on enqueue so a node is never queued twice
        visited.add(neighbor);
        queue.push([neighbor, [...currentPath, neighbor]]);
      }
    }
  }
  // no path exists between start and target
  return null;
}
```
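A quick self-contained check (a condensed, renamed node class and a variant of the function that returns values instead of nodes; the graph is illustrative):

```typescript
class GraphNode {
  neighbors: GraphNode[] = [];
  constructor(public value: string) {}
}

// same algorithm as above, operating on the condensed node class
function shortestPath(start: GraphNode, target: GraphNode): string[] | null {
  const visited = new Set<GraphNode>([start]);
  const queue: [GraphNode, GraphNode[]][] = [[start, [start]]];
  while (queue.length > 0) {
    const [current, path] = queue.shift()!;
    if (current === target) return path.map(n => n.value);
    for (const neighbor of current.neighbors) {
      if (!visited.has(neighbor)) {
        visited.add(neighbor);
        queue.push([neighbor, [...path, neighbor]]);
      }
    }
  }
  return null;
}

// A -- B -- C -- D, plus a direct A -- C shortcut
const [a, b, c, d] = ["A", "B", "C", "D"].map(v => new GraphNode(v));
for (const [x, y] of [[a, b], [b, c], [a, c], [c, d]] as [GraphNode, GraphNode][]) {
  x.neighbors.push(y);
  y.neighbors.push(x);
}

console.log(shortestPath(a, d)); // takes the shortcut: ["A", "C", "D"]
```

Because BFS explores in order of distance, the three-node path through the shortcut is dequeued before the longer `A -> B -> C -> D` path.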

A heap is a variant of the tree data structure, with two additional properties:

**It is a Complete Binary Tree:** each level of a Complete Binary Tree contains the maximum number of nodes, **except** possibly the last level, which must be filled from left to right. A Complete Binary Tree is *always balanced* by definition. For reference, the diagrams below show when a tree can be called a CBT.

**Every node satisfies the "heap property":** for any given node C, if P is a parent node of C, then:

- For a max heap: the key of P must be greater than or equal to the key of C.
- For a min heap: the key of P must be less than or equal to the key of C.

If you've gone through the previous posts, you'd notice we usually start the implementation with a class representing a node and then tie it up with a class representing the actual data structure. We could do the same for heaps. However, there's a simpler approach, made possible by one of the two properties that all heaps must abide by:

All heaps must be Complete Binary Trees

Since all heaps must be Complete Binary Trees, we know that every level except the last must be completely filled, and the last level must be filled from left to right, without any gaps. This definition ensures that **a Complete Binary Tree of n nodes can have only one possible shape**, which in turn allows us to represent a Complete Binary Tree, and therefore a heap, using an array. For example, we can represent a simple heap as an array, as illustrated below:

The key thing to note here is the relationship between parent and child nodes. If you look closely at the diagram above, we can deduce the following (assuming each resultant index lies within the length of the array):

- If a node is placed at index `i`, its left child is at index `2i + 1` and its right child is at index `2i + 2`.
- If a node is placed at index `i`, its parent is at index `floor((i - 1) / 2)`.

The diagram below makes the info above easier to consume:
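The same relationships can be expressed as tiny helper functions (a sketch; the function names are just for illustration):

```typescript
const leftChild = (i: number): number => 2 * i + 1;
const rightChild = (i: number): number => 2 * i + 2;
const parent = (i: number): number => Math.floor((i - 1) / 2);

// for the node at index 1, children sit at indices 3 and 4, parent at 0
console.log(leftChild(1), rightChild(1), parent(1)); // 3 4 0
```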

Note: throughout the implementation we'll only be talking about the min-heap. We'll later see how the same idea can easily be extended to a max-heap as well.

Now that we've covered the representation details, let's come up with an interface for using the data structure. There are three key things we want to be able to achieve with the help of our heap data structure:

- Add a new key into the heap
- Remove the max or min key from the heap (depending on whether it's a max-heap or a min-heap)
- Get the max or min key from the heap (depending on whether it's a max-heap or a min-heap)

The third operation is quite trivial: for a min-heap the first item in the array is the min key, and similarly for a max-heap the first item is the max key. So we're left with the implementation of two operations:

```
// adds the provided newKey into the min-heap named "heap"
function heappush(heap, newKey){}
// removes the smallest key from the min-heap named "heap"
function heappop(heap){}
```
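For completeness, the trivial third operation is a one-liner (a sketch; the name `heappeek` is just for illustration):

```typescript
// peek: the min key of a min-heap always sits at index 0
const heappeek = (heap: number[]): number | undefined => heap[0];

console.log(heappeek([2, 5, 3])); // 2
```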

`heappush()`

How can we add a new key into the heap? Let's say we start by pushing the new key onto the end of the array. This keeps the first requirement of the heap intact, i.e. it remains a Complete Binary Tree. However, we also need to ensure it abides by the "heap property".

We can do so by comparing the pushed item with its parent. If the parent is larger than the pushed item, the heap property is being violated, so we swap the two. We continue swapping until a legal parent is found or we've reached the top of the heap. Here's a visual guide for better reference:

Here's the final implementation:

```
function heappush(heap, newKey){
  // push the new key to the end of the array
  heap.push(newKey);
  // get the current index of the pushed key
  let curr = heap.length - 1;
  // keep comparing till the root is reached or we terminate in the middle
  while (curr > 0) {
    let parent = Math.floor((curr - 1) / 2);
    if (heap[curr] < heap[parent]) {
      // quick swap
      [heap[curr], heap[parent]] = [heap[parent], heap[curr]];
      // update the index of newKey
      curr = parent;
    } else {
      // no swap needed: the heap is stable now
      break;
    }
  }
}
```

`heappop()`

Using `heappop()` we need to remove the topmost item of the heap: for a min-heap the minimum key, and for a max-heap the maximum key. From the array's perspective, this simply means removing the first item. But then which node should become the new root? Randomly promoting either the left or the right child of the removed node wouldn't guarantee the heap property.
We can follow these steps instead (for a min-heap):

- Swap the root node with the last node (first item with last item in the array).
- Remove the old root by popping the last item out of the array.
- Compare the new root's key with its children:
  - If the key is less than both of its children's keys, the heap is stable.
  - Else, swap the key with the smaller child's key.
- Repeat the previous step until the last level is reached or the heap property is established.

Essentially we're following a similar process as `heappush()`, except we're trying to establish the heap property in a **top to bottom** fashion, i.e. starting at the root and moving down towards the last child. In `heappush()` we followed the opposite order: starting from the last child and moving up to the root.

Here's how the actual implementation looks:

```
function heappop(heap){
  // swap root with the last node
  const n = heap.length;
  [heap[0], heap[n - 1]] = [heap[n - 1], heap[0]];
  // remove the root, i.e. the last item (because of the swap)
  const removedKey = heap.pop();

  let curr = 0;
  // keep going till at least a left child is possible for the current node
  while (2 * curr + 1 < heap.length) {
    const leftIndex = 2 * curr + 1;
    const rightIndex = 2 * curr + 2;
    const minChildIndex = (rightIndex < heap.length && heap[rightIndex] < heap[leftIndex])
      ? rightIndex : leftIndex;
    if (heap[minChildIndex] < heap[curr]) {
      // swap if the smaller of the two children is smaller than the parent (min-heap)
      [heap[minChildIndex], heap[curr]] = [heap[curr], heap[minChildIndex]];
      curr = minChildIndex;
    } else {
      break;
    }
  }
  // finally return the removed key
  return removedKey;
}
```
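With both operations in place, pushing a few keys and popping them all back should yield them in ascending order. A self-contained check (condensed, typed versions of the two functions repeated for completeness; the keys are illustrative):

```typescript
function heappush(heap: number[], newKey: number): void {
  heap.push(newKey);
  let curr = heap.length - 1;
  while (curr > 0) {
    const parent = Math.floor((curr - 1) / 2);
    if (heap[curr] < heap[parent]) {
      [heap[curr], heap[parent]] = [heap[parent], heap[curr]];
      curr = parent;
    } else break;
  }
}

function heappop(heap: number[]): number | undefined {
  const n = heap.length;
  [heap[0], heap[n - 1]] = [heap[n - 1], heap[0]];
  const removedKey = heap.pop();
  let curr = 0;
  while (2 * curr + 1 < heap.length) {
    const left = 2 * curr + 1, right = 2 * curr + 2;
    const minChild = (right < heap.length && heap[right] < heap[left]) ? right : left;
    if (heap[minChild] < heap[curr]) {
      [heap[minChild], heap[curr]] = [heap[curr], heap[minChild]];
      curr = minChild;
    } else break;
  }
  return removedKey;
}

const heap: number[] = [];
for (const key of [5, 3, 8, 1]) heappush(heap, key);

const popped: (number | undefined)[] = [];
while (heap.length > 0) popped.push(heappop(heap));

console.log(popped); // keys come back sorted: [1, 3, 5, 8]
```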

Creating a heap from a pre-existing array looks pretty simple: just create an empty heap, iterate through all items of the array, and perform `heappush()` for each:

```
function heapify(arr){
  const heap = [];
  for (let item of arr) {
    heappush(heap, item);
  }
  return heap;
}
```

But can we do slightly better here? Yes. First off, we can avoid using extra space for the new heap altogether: why not just re-arrange the items of the array itself so that it satisfies the heap property? To do this we can follow a similar logic as we did for `heappop()`. We look at a node and compare it to its children to see if it's the smallest; if not, we swap it with the smaller child and repeat. Let's create a function for that called `percolateDown()`, since we're moving downwards:

```
// follows pretty much the same logic as heappop, except minor modifications
function percolateDown(heap, index){
  let curr = index;
  // keep going down till the heap property is established
  while (2 * curr + 1 < heap.length) {
    const leftIndex = 2 * curr + 1;
    const rightIndex = 2 * curr + 2;
    const minChildIndex = (rightIndex < heap.length && heap[rightIndex] < heap[leftIndex])
      ? rightIndex : leftIndex;
    if (heap[minChildIndex] < heap[curr]) {
      // swap if the smaller of the two children is smaller than the parent (min-heap)
      [heap[minChildIndex], heap[curr]] = [heap[curr], heap[minChildIndex]];
      curr = minChildIndex;
    } else {
      break;
    }
  }
}
```

Alright. So now we can apply `percolateDown()` to the items of the array, one by one, to put everything in the correct order as per the heap property. Note that the nodes must be processed from the bottom up (last index first): a node can only be percolated down correctly once the subtrees below it already satisfy the heap property:

```
function heapify(heap){
  // process nodes bottom-up: subtrees must already be valid heaps
  // before their parent is percolated down
  for (let i = heap.length - 1; i >= 0; i--) {
    percolateDown(heap, i);
  }
  return heap;
}
```

So that saves us an extra array. But can we do anything to improve the time taken? Yes. If you look closely, we're actually doing some repetitive work here. Say there are `n` nodes in the heap, out of which `x` are leaf nodes. Leaf nodes have no children, so percolating them down does nothing; that means we only need to perform `percolateDown()` for the `n - x` non-leaf nodes.

Great! So in the array representation of the heap, from which index should we start the `percolateDown()` operation? From the index where the parent of the last node lies, moving backwards to the root: every node after that index is a leaf, and the parent of the last node is the last node with any children. So:

- If the array length is `n`, the last node's index is `n - 1`.
- Its parent node's index is `Math.floor(((n - 1) - 1) / 2) = Math.floor(n / 2 - 1)`.

Hence our final heapify function would be:

```
function heapify(heap){
  const lastParent = Math.floor(heap.length / 2 - 1);
  // percolate down from the last parent back to the root
  for (let i = lastParent; i >= 0; i--) {
    percolateDown(heap, i);
  }
  return heap;
}
```
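A quick self-contained check of the in-place version (a condensed, typed `percolateDown` is repeated for completeness; the input array is illustrative):

```typescript
function percolateDown(heap: number[], index: number): void {
  let curr = index;
  while (2 * curr + 1 < heap.length) {
    const left = 2 * curr + 1, right = 2 * curr + 2;
    const minChild = (right < heap.length && heap[right] < heap[left]) ? right : left;
    if (heap[minChild] < heap[curr]) {
      [heap[minChild], heap[curr]] = [heap[curr], heap[minChild]];
      curr = minChild;
    } else break;
  }
}

function heapify(heap: number[]): number[] {
  // percolate down from the last parent back to the root
  for (let i = Math.floor(heap.length / 2 - 1); i >= 0; i--) {
    percolateDown(heap, i);
  }
  return heap;
}

const heap = heapify([5, 4, 3, 2, 1]);

// verify: every parent is now <= both of its children
const isMinHeap = heap.every((_, i) =>
  (2 * i + 1 >= heap.length || heap[i] <= heap[2 * i + 1]) &&
  (2 * i + 2 >= heap.length || heap[i] <= heap[2 * i + 2]));

console.log(heap[0], isMinHeap); // 1 true
```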

Looking at the `heappush()` and `heappop()` operations, it's apparent that we're running through the height of the tree while trying to add or remove a key. Since a heap is a balanced tree, the height is `log(n)`, where `n` is the total number of nodes. Hence, for the push and pop operations, the time complexity is `O(log(n))`. The time complexity of `heapify()` may seem like `O(n log(n))`, since each call to `percolateDown()` takes `O(log(n))`. That observation gives a correct upper bound, but a tighter analysis shows the total work comes out to be `O(n)`: most nodes sit near the bottom of the tree, where percolating down is cheap. More details on this here. In terms of space complexity, it's constant, since extra space is only taken up by constant-sized variables like `curr`, `leftIndex`, etc.

If we have an implementation of a min-heap, we can easily use it as a max-heap as well. We just need to insert the negative of each key while adding values to the heap. The heap then acts as a min-heap over the negated keys, which is equivalent to a max-heap over the actual keys. Example:

- Say we have an array: `const x = [23, 454, 54, 29];`
- A min-heap can be created using:

```
const heap = [];
for(let el of x) heappush(heap, el);
// min value
const min = heappop(heap)
```

- Max-heap can be created using:

```
const heap = [];
for(let el of x) heappush(heap, -el);
// max value
const max = -heappop(heap)
```

A trie is a variation of the tree data structure. It's also referred to as a prefix tree, or a variation of a search tree. Just like the n-ary tree data structure, a trie can have n children stemming from a single parent. Usually each node in a trie stores a single character. Assuming we're only dealing with English words, here's how a simple trie might look:

Things to note here:

- We're trying to use a tree to represent English words, as efficiently as possible.
- In the diagram above, a path from the root node to any of the green nodes denotes an English word. For example:
  - `NULL->C->A->T`: CAT
  - `NULL->D->O`: DO
  - `NULL->D->O->G`: DOG
  - `NULL->D->A->R->K`: DARK
  - `NULL->A`: A
  - `NULL->A->N`: AN

- Each node can have at most 26 children (if we're only dealing with the English alphabet). We have a NULL node as the root, because a word can start with any of the 26 letters, hence we need a dummy node that can have any potential first letter as a child.
- A green node essentially represents 'end of a word' while traversing from the root to that node.

Nice! So we've got the conceptual background. Now, let's try to come up with the programmatic representation of the trie node. Referring back to the tree node, this is how we represented it:

```
function Node(value){
  this.value = value
  this.left = null
  this.right = null
}
```

So, we can follow a similar idea for the trie, while ensuring it meets the requirements discussed in the introduction. To understand the requirements of a trie node, let's zoom in on one of the nodes:

Alright, so it makes more sense now. Here's the final code:

```
function Node(value){
  this.value = value
  this.isEndOfWord = false // false by default; a green node means this flag is true
  this.children = {} // an object acting as a map: key is the letter, value is the Node for that letter
}
```

We can represent the trie itself using a simple ES6 class:

```
class Trie{
  constructor(){
    this.root = new Node(null)
  }

  insert(word){
    // TODO
  }

  search(word){
    // TODO
  }
}
```

So we've got the overall interface in place. Each trie creates its own root node (NULL) as part of initialisation. Then we can implement the two methods as follows:

- `insert(word)`: We split the word into letters and create a `Node()` for each of them. Then we chain these nodes one by one, starting from the root, to insert the word. Finally, we mark the `isEndOfWord` property as true on the last inserted node.
- `search(word)`: We split the word into letters, then look for each of these letters one by one, starting from the root. If we're able to find all the letters sequentially, we return true, else false.

Let's understand both the operations visually for better context:

- `insert(CAR)` and then `insert(CAN)`:

- `search(CODE)` and `search(CAR)`:

Here's how the final implementation looks:

```
class Trie{
  constructor(){
    this.root = new Node(null)
  }

  insert(word){
    let current = this.root
    // iterate through all the characters of the word
    for(let character of word){
      // if the node doesn't have the current character as a child, insert it
      if(current.children[character] === undefined){
        current.children[character] = new Node(character)
      }
      // move down, to insert the next character
      current = current.children[character]
    }
    // mark the last inserted character as end of the word
    current.isEndOfWord = true
  }

  search(word){
    let current = this.root
    // iterate through all the characters of the word
    for(let character of word){
      if(current.children[character] === undefined){
        // could not find this character in sequence, return false
        return false
      }
      // move down, to match the next character
      current = current.children[character]
    }
    // found all characters; return true only if the last character ends a word
    return current.isEndOfWord
  }
}
```

Usage is straightforward. Here's a sample showing how we can use the implementation above:

```
const trie = new Trie();
// insert few words
trie.insert("CAT");
trie.insert("DOG");
// search something
trie.search("MAT") // false
trie.search("DOG") // true
```

In the worst case, each character of every inserted word can take up a single node in the trie. That means the worst-case space complexity is O(W*n), where W is the average number of characters per word and n is the total number of words in the trie.

- Insert: For inserting a word having `n` characters, we just need to loop through the `n` characters, so the time complexity is `O(n)`.
- Search: Similarly, we only need to loop through all the characters of the word to search for it, so the time complexity is `O(n)`, where `n` is the number of characters in the word.

Now, step back for a moment and think how else could you search for a word in a huge list of words?

- Probably using an array? Time complexity would be O(m), where m is the total number of words, which is pretty bad.
- How about using a map (or an object in JavaScript)? That would decrease lookup time to O(1), but how fast would it be to find the list of words having a certain prefix? It would be O(m).

A trie not only brings down the time complexity to O(n) (n = number of characters in the word), but also lets you effectively search for the list of words having a given prefix, which would be a much more expensive task with either of the two approaches above.
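As an illustration, here is a hypothetical `startsWith(prefix)` method (not part of the implementation above) that reuses the same walk as `search`, shown on a condensed, typed version of the trie:

```typescript
class TrieNode {
  isEndOfWord = false;
  children: { [letter: string]: TrieNode } = {};
}

class Trie {
  root = new TrieNode();

  insert(word: string): void {
    let current = this.root;
    for (const ch of word) {
      current = current.children[ch] ?? (current.children[ch] = new TrieNode());
    }
    current.isEndOfWord = true;
  }

  // same walk as search(), but any dead end means no word has this prefix;
  // reaching the end of the prefix is enough, isEndOfWord is irrelevant here
  startsWith(prefix: string): boolean {
    let current = this.root;
    for (const ch of prefix) {
      if (current.children[ch] === undefined) return false;
      current = current.children[ch];
    }
    return true;
  }
}

const trie = new Trie();
trie.insert("CAT");
trie.insert("CAN");

console.log(trie.startsWith("CA"), trie.startsWith("DO")); // true false
```

From the node where the prefix walk ends, a depth-first collection of descendants would enumerate all words with that prefix, which is exactly what autocomplete needs.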

**Autocomplete and Typeahead:** If you type something in a text box and see a list of potential searches with the same prefix, i.e. an Autocomplete widget, that's probably being handled by a trie behind the scenes. Similarly, Typeahead can also be implemented using a trie.

**Spell checker:** We can use a trie to create a spell checker, i.e. given a list of words, we can check whether the spelling of a given word is correct.

**IP routing (longest prefix matching):** The Internet consists of multiple router nodes which decide where a packet should be sent. Each router on the Internet needs to forward the packet towards the appropriate target node for the given destination IP address. But how can each router decide the next router for a given IP address? This is solved with longest prefix matching, which tries handle efficiently. Here's a great article diving into this subject.

HTTP was invented alongside HTML to create the first interactive, text-based web browser: the original World Wide Web. In this article, we'll be covering the key concepts related to HTTP, which all developers should be aware of.

Let's start with basics i.e. understanding how data transfer takes place and overall anatomy of HTTP messages.

The OSI (Open Systems Interconnection) model is a conceptual framework used to describe the functions of a networking system. It helps to see how information is transferred across a network. Here's a diagram depicting the various networking layers:

- **Application Layer**: The layer that the user interacts with. This layer uses protocols like HTTP and FTP.
- **Presentation Layer**: Prepares and translates data from the network format to the application format, or vice versa.
- **Session Layer**: Responsible for establishing, maintaining, and ending connections between different applications. Typically you'll see protocols such as NetBIOS, NFS, RPC, and SQL operating at this layer.
- **Transport Layer**: Responsible for transferring data between end systems and hosts. It dictates what gets sent where, and how much of it gets sent. At this level, you see protocols like TCP, UDP, and SPX.
- **Network Layer**: Deals with most of the routing within a network. In simple terms, the Network Layer determines how a packet travels to its destination. Protocols like IP, AppleTalk, and IPX operate at this layer.
- **Data Link Layer**: Provides for the transfer of data frames between hosts connected to the physical link.
- **Physical Layer**: The hardware layer of the OSI model, which includes network elements such as hubs, cables, Ethernet, and repeaters. For example, this layer is responsible for executing electrical signal changes, like making lights light up.

As mentioned above, HTTP operates in the application layer, i.e. the layer the user directly interacts with. Some key points regarding this protocol:

- HTTP follows the classical client-server model. A client opens a connection to issue a request and then waits for the server to respond.
- HTTP is a stateless protocol, i.e. each request has an isolated and independent lifecycle. HTTP is not session-less though; for example, HTTP cookies allow the use of stateful sessions.
- HTTP, an application layer protocol, rides on top of TCP (Transmission Control Protocol), a transport layer protocol.
- HTTP is a text-based protocol, i.e. data transmission takes place in text format.
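Because the protocol is text-based, an HTTP/1.1 request can be assembled by hand as plain strings: header lines joined by CRLF and terminated by a blank line. A sketch with illustrative values (real clients handle this for you):

```typescript
// An HTTP/1.1 request is a request line plus header lines,
// separated by CRLF and terminated by one blank line.
const request = [
  "GET /hello.txt HTTP/1.1",
  "Host: www.example.com",
  "Accept-Language: en",
  "",
  "", // the trailing empty strings produce the blank line terminator
].join("\r\n");

console.log(JSON.stringify(request));
```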

An HTTP request can consist of four parts:

- Request method
- URL
- Request headers
- Request body

These are the possible HTTP request methods:

- **GET**: requests a specific resource in its entirety
- **HEAD**: requests a specific resource without the body content
- **POST**: adds content, messages, or data to a new page under an existing web resource
- **PUT**: directly modifies an existing web resource, or creates a new URI if need be
- **DELETE**: gets rid of a specified resource
- **TRACE**: shows users any changes or additions made to a web resource
- **OPTIONS**: shows users which HTTP methods are available for a specific URL
- **CONNECT**: converts the request connection to a transparent TCP/IP tunnel
- **PATCH**: partially modifies a web resource

An HTTP request is just a series of lines of text that follow the HTTP protocol. A GET request might look like this:

```
GET /hello.txt HTTP/1.1
User-Agent: curl/7.63.0 libcurl/7.63.0 OpenSSL/1.1.l zlib/1.2.11
Host: www.example.com
Accept-Language: en
```

Once the server receives the request, it may respond with some data. A sample HTTP response would look like this:

```
HTTP/1.1 200 OK
Date: Wed, 30 Jan 2019 12:14:39 GMT
Server: Apache
Last-Modified: Mon, 28 Jan 2019 11:17:01 GMT
Accept-Ranges: bytes
Content-Length: 12
Vary: Accept-Encoding
Content-Type: text/plain

Hello World!
```

As stated earlier, HTTP uses text format for data transmission. The problem is that this data is not encrypted, so it can be intercepted by third parties to gather data being passed between the two systems. This issue can be addressed using HTTPS.

The S in HTTPS stands for "secure." HTTPS uses TLS (or SSL) to encrypt HTTP requests and responses, so in the example above, instead of the text, an attacker would see a bunch of seemingly random characters.

Instead of:

```
GET /hello.txt HTTP/1.1
User-Agent: curl/7.63.0 libcurl/7.63.0 OpenSSL/1.1.l zlib/1.2.11
Host: www.example.com
Accept-Language: en
```

The attacker would see something like:

```
t8Fw6T8UV81pQfyhDkhebbz7+oiwldr1j2gHBB3L3RFTRsQCpaSnSBZ78Vme+DpDVJPvZdZUZHpzbbcqmSW1+dkughdkhkuyi2u3gsJGSJHF/FNUjgH0BmVRWII6+T4MnDwmCMZUI/orxP3HGwYCSIvyzS3MpmmSe4iaWKCOHQ==
```

TLS uses a technology called public key encryption. In a nutshell:

- There are two keys, a public key and a private key.
- The public key is shared with client devices via the server's SSL certificate.
- When a client opens a connection with a server, the two devices use the public and private key to agree on new keys, called session keys, to encrypt further communications between them.
- All HTTP requests and responses are then encrypted with these session keys, so that anyone who intercepts communications can only see a random string of characters, not the plaintext.

You can find a great article on encryption here if that interests you.

The protocol was developed by Tim Berners-Lee and his team between 1989 and 1991. The first version, HTTP/0.9, is also referred to as the one-line protocol; only the `GET` request type was supported back then. HTTP/0.9 was very limited, and both browsers and servers quickly extended it to be more versatile, resulting in HTTP/1.0.

HTTP/1.0 brought in quite a few novelties. It introduced the concepts of status codes, multiple request methods (`GET`, `HEAD`, `POST`), request/response headers, etc.

HTTP/1.0 required opening a new TCP connection for each request (and closing it immediately after the response was sent).
A TCP connection in turn uses a **three-way handshake** to establish a reliable connection. The connection is full duplex (two-way), and both sides synchronize (SYN) and acknowledge (ACK) each other. This exchange is performed in three steps (SYN, SYN-ACK, and ACK), as shown in the figure:

For better performance, it was crucial to reduce these round-trips between client and server. HTTP/1.1 solved this with **persistent connections**. What's a persistent connection? It's a (network communication) channel that remains open for further HTTP requests and responses rather than closing after a single exchange.

The `keep-alive` header was added to HTTP/1.0 to facilitate persistent connections. If the client supports `keep-alive`, it adds an additional header to the request:

```
Connection: keep-alive
```

Then, when the server receives this request and generates a response, it also adds a header to the response:

```
Connection: keep-alive
```

Following this, the connection is not dropped, but is instead kept open. When the client sends another request, it uses the same connection. This will continue until either the client or the server decides that the conversation is over, and one of them drops the connection.

HTTP/1.1 introduced critical performance optimizations and feature enhancements. The major offerings are listed below:

- **Persistence:** In HTTP/1.1, all connections are considered persistent unless declared otherwise. Persistent connections do not need separate `keep-alive` messages; they simply allow multiple requests to use a single connection by default.
- **Pipelining:** the process of sending successive requests over the same persistent connection, without waiting for the answer. This avoids the latency of the connection.

The image below illustrates difference between short lived, persistent and pipelined connections.

**Head of line blocking**: Even though pipelining reduces round trips and re-uses the same connection, it still requires the responses to arrive in order. This means that if the first request takes too long to be responded to, the subsequent responses remain blocked. This is called "head of line blocking". HTTP/2.0 solves this using **binary framing**, without sacrificing parallelism. More on this is discussed ahead in this article.

`keep-alive` makes it difficult for the client to determine where one response ends and the next begins, particularly during pipelined HTTP operation. This is a serious problem when `Content-Length` cannot be used due to streaming. To solve this, HTTP/1.1 introduced chunked transfer coding, which defines a last-chunk bit. The last-chunk bit is set at the end of each response, so that the client knows where the next response begins.

HTTP/1.1 introduced headers that allow the transfer of compressed data over the network, with the help of the `Accept-Encoding` and `Content-Encoding` headers. Here's a summary of how it works:

- The client issues a request with an `Accept-Encoding` header to let the server know which compression schemes it supports:

```
GET /encrypted-area HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate
```

- If the server supports any of these compression schemes, it can choose to compress the content and respond with it along with a `Content-Encoding` header:

```
HTTP/1.1 200 OK
Date: Mon, 26 June 2016 22:38:34 GMT
Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Accept-Ranges: bytes
Content-Length: 438
Connection: close
Content-Type: text/html; charset=UTF-8
Content-Encoding: gzip
```
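The compression itself is ordinary gzip or deflate. In Node.js, for instance, the body a server would send with `Content-Encoding: gzip` can be produced with the built-in `zlib` module (a sketch, not tied to any particular server framework):

```typescript
import { gzipSync, gunzipSync } from "zlib";

// repetitive text compresses well, like typical HTML
const body = "Hello World! ".repeat(100);

// what the server would send alongside Content-Encoding: gzip
const compressed = gzipSync(Buffer.from(body));

// what the client does after seeing the Content-Encoding header
const restored = gunzipSync(compressed).toString();

console.log(compressed.length < body.length, restored === body); // true true
```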

HTTP/1.1 also introduced the following concepts:

- **Virtual hosting:** a server with a single IP address hosting multiple domains.
- **Cache support:** faster responses and great bandwidth savings.

HTTP/2 is a major revision of the HTTP protocol. It was derived from the earlier experimental SPDY protocol, originally developed by Google.

At the core of all performance enhancements of HTTP/2 is the new binary framing layer, which dictates how the HTTP messages are encapsulated and transferred between the client and server. Following are the critical terms associated with framing layer:

- **Frame**: The **smallest unit of communication** in HTTP/2, each containing a frame header, which at a minimum identifies the stream to which the frame belongs.
- **Message**: A complete **sequence of frames** that map to a logical request or response message.
- **Stream**: A bidirectional flow of bytes within an established connection, which may carry **one or more messages**.

The image below illustrates how an HTTP/1.x message compares to HTTP/2.0 message (Source):

In HTTP/2.0, client and server can break down an HTTP message into independent frames, interleave them, and then reassemble them on the other end. This is called multiplexing. It can be understood better by the diagram below:

With the new binary framing mechanism in place, HTTP/2 no longer needs multiple TCP connections to multiplex streams in parallel; each stream is split into many frames, which can be interleaved and prioritized. As a result, all HTTP/2 connections are persistent, and only one connection per origin is required, which offers numerous performance benefits.

Another powerful new feature of HTTP/2 is the ability of the server to send multiple responses for a single client request. That is, in addition to the response to the original request, the server can push additional resources to the client without the client having to request each one explicitly.

Server push is intended to be deprecated. More details in this post shared by the Chromium team.

HTTP/3.0 is the upcoming major version of HTTP. So far the underlying transport layer mechanism behind HTTP has been TCP. HTTP/3.0 changes that, even though the core semantics remain unchanged.

The fundamental difference between HTTP/2 and HTTP/3 is that HTTP/3 runs over QUIC, and QUIC runs over connectionless UDP instead of the connection-oriented TCP.

Another significant difference is that HTTP/3.0 mandates secure transfer of data. HTTP/3 includes encryption that borrows heavily from TLS but isn't using it directly. This change is because HTTP/3 differs from HTTPS/TLS in terms of what it encrypts:

- With the older HTTPS/TLS protocol, only the data itself is protected by TLS, leaving a lot of the transport metadata visible.
- In HTTP/3 both the data and the transport protocol are protected.

Note: Most browsers do not support h2c (HTTP/2 without TLS), which means opting for HTTP/2.0 pretty much needs you to opt for TLS if you're hosting a website. Here's a relevant Stack Overflow thread on why browsers act this way.

The diagram below illustrates the fundamental difference between HTTP/3.0 and its predecessors (source):

- https://developer.mozilla.org/en-US/docs/Web/HTTP/Connection_management_in_HTTP_1.x
- https://en.wikipedia.org/wiki/HTTP_compression
- https://www.greenlanemarketing.com/resources/articles/seo-101-http-vs-http2/
- https://developers.google.com/web/fundamentals/performance/http2
- https://developer.mozilla.org/en-US/docs/Web/HTTP/Messages
- https://developer.okta.com/books/api-security/tls/how/


This pattern is referred to as Publish-Subscribe or PubSub. Let's start with the overall notion behind this pattern before writing some code.

The image above describes the general idea behind this pattern:

- We have a PubSub 'container' that maintains a list of `subscribers` (a subscriber is just a function)
- A new subscription can be created by using the `subscribe(subscriber)` method, which essentially adds the `subscriber` into our PubSub container
- We can use `publish(payload)` to call all the existing `subscribers` in the PubSub container with `payload`
- Any specific `subscriber` can be removed from the container, at any point in time, using the `unsubscribe(subscriber)` method.

Looking at the points above it's pretty straightforward to come up with a simple implementation:

```
// pubsub.js
export default class PubSub {
  constructor() {
    // this is where we maintain the list of subscribers for our PubSub
    this.subscribers = []
  }
  subscribe(subscriber) {
    // add the subscriber to the existing list
    this.subscribers = [...this.subscribers, subscriber]
  }
  unsubscribe(subscriber) {
    // remove the subscriber from the existing list
    this.subscribers = this.subscribers.filter(sub => sub !== subscriber)
  }
  publish(payload) {
    // publish payload to existing subscribers by invoking them
    this.subscribers.forEach(subscriber => subscriber(payload))
  }
}
```

Let's add a bit of error handling to this implementation:

```
// pubsub.js
export default class PubSub {
  constructor() {
    this.subscribers = []
  }
  subscribe(subscriber) {
    if (typeof subscriber !== 'function') {
      throw new Error(`${typeof subscriber} is not a valid argument for subscribe method, expected a function instead`)
    }
    this.subscribers = [...this.subscribers, subscriber]
  }
  unsubscribe(subscriber) {
    if (typeof subscriber !== 'function') {
      throw new Error(`${typeof subscriber} is not a valid argument for unsubscribe method, expected a function instead`)
    }
    this.subscribers = this.subscribers.filter(sub => sub !== subscriber)
  }
  publish(payload) {
    this.subscribers.forEach(subscriber => subscriber(payload))
  }
}
```

We can use this implementation as follows:

```
// main.js
import PubSub from './PubSub';
const pubSubInstance = new PubSub();
export default pubSubInstance
```

Now, elsewhere in the application, we can publish and subscribe using this instance:

```
// app.js
import pubSubInstance from './main.js';
pubSubInstance.subscribe(payload => {
  // do something here
  showMessage(payload.message)
})
```

```
// home.js
import pubSubInstance from './main.js';
pubSubInstance.publish({ message: 'Hola!' });
```
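Putting the pieces together, here's a small self-contained run of the same class (inlined here so the snippet stands alone) showing that an unsubscribed function no longer receives payloads:

```javascript
class PubSub {
  constructor() {
    this.subscribers = []
  }
  subscribe(subscriber) {
    this.subscribers = [...this.subscribers, subscriber]
  }
  unsubscribe(subscriber) {
    this.subscribers = this.subscribers.filter(sub => sub !== subscriber)
  }
  publish(payload) {
    this.subscribers.forEach(subscriber => subscriber(payload))
  }
}

const received = []
const listener = payload => received.push(payload.message)

const pubSub = new PubSub()
pubSub.subscribe(listener)
pubSub.publish({ message: 'Hola!' })  // listener records 'Hola!'
pubSub.unsubscribe(listener)
pubSub.publish({ message: 'Adios!' }) // listener is gone, nothing recorded
// received is ['Hola!']
```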

Yes. In fact, there are many libraries that use it under the hood and you may not have realized it so far. Let's take the example of the popular state management library for ReactJS - **Redux**. Of course, its implementation is not as simple as ours, since it's been implemented to handle many other nuances and use-cases. Nevertheless, the underlying concept remains the same.

Looking at the methods offered by Redux, you would see `dispatch()` and `subscribe()` methods, which are equivalent to the `publish()` and `subscribe()` methods we implemented above. You usually won't see the `subscribe()` method getting used directly; this part is abstracted away behind the `connect()` method offered by the react-redux library. You can follow the implementation details here if that interests you.

In summary, all React components using the `connect()` method act as subscribers. Any component using `dispatch()` acts as the publisher. And that explains why dispatching an action from any component causes all `connected` components to rerender.

- We'll see how the idea behind PubSub can be extended further to build a state management library like redux from scratch.
- We'll also see how an Event Emitter can be built from scratch, using similar notion as PubSub

1- Traversing through the binary tree using recursive and iterative algorithms

2- Traversing through the binary tree using parent pointers

In this article, we'll put those learnings to use for an n-ary tree, i.e. the DOM. We'll see how we can locate DOM elements using various CSS selectors without using inbuilt APIs like `getElementById`, `getElementsByClassName` or `querySelector`/`querySelectorAll`. The article would thus also throw light on how these APIs might be working under the hood.

Borrowing the idea from the first article, let's come up with the preOrder traversal algorithm for DOM:

```
function walkPreOrder(node){
  if(!node) return
  // do something here
  console.log(node)
  for(let child of node.children){
    walkPreOrder(child)
  }
}
```

We can modify this algorithm to return an iterator instead:

```
function* walkPreOrder(node){
  if(!node) return
  // do something here
  yield node
  for(let child of node.children){
    yield* walkPreOrder(child)
  }
}
// USAGE
for(let node of walkPreOrder(root)){
  console.log(node)
}
```

We can use any of the breadth-first or depth-first algorithms (discussed in previous articles) to traverse the DOM. For the sake of this article, we'll stick with the above approach though.
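For reference, a breadth-first counterpart of the same iterator could look like the sketch below (the rest of the article sticks with `walkPreOrder`). It only relies on each node exposing a `children` collection, so it works on plain node-like objects too:

```javascript
// Breadth-first (level by level) iterator over any node with a `children` array
function* walkBFS(node) {
  if (!node) return
  const queue = [node]
  while (queue.length) {
    const current = queue.shift()
    yield current
    for (const child of current.children) {
      queue.push(child)
    }
  }
}

// Works on any node-like object, no DOM required
const tree = {
  value: 'root',
  children: [
    { value: 'a', children: [{ value: 'c', children: [] }] },
    { value: 'b', children: [] }
  ]
}
const order = [...walkBFS(tree)].map(n => n.value)
// order is ['root', 'a', 'b', 'c'] — level by level
```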

Let's also assume we're working on a document having following HTML:

```
<html>
  <head>
    <title>DOM selection algorithm</title>
  </head>
  <body>
    <div class="container">
      <div class="body">
        <div class="row">
          <img id="profile" src="xyz.jpg" alt="">
        </div>
        <div class="row"></div>
        <div class="row"></div>
      </div>
    </div>
  </body>
</html>
```

Browsers offer the `document.getElementById()` API to achieve this result. Using the `walkPreOrder()` helper, it becomes really simple to achieve. Let's see:

```
function locateById(nodeId){
  // iterate through all nodes in depth-first (preOrder) fashion
  // return the node as soon as it's found
  for(let node of walkPreOrder(document.body)){
    if(node.id === nodeId){
      return node
    }
  }
  return null
}
```

We can use the `locateById()` function as follows:

```
const img = locateById('profile')
// returns the image node
```

Browsers offer the `document.getElementsByClassName()` API to achieve this result. Let's see how we can implement something similar:

```
function locateAllByClassName(className){
  const result = []
  for(let node of walkPreOrder(document.body)){
    if(node.classList.contains(className)){
      result.push(node)
    }
  }
  return result
}
// USAGE
const elements = locateAllByClassName('row')
```

Selecting DOM nodes is a fairly common operation for web applications. Traversing through the tree multiple times for the same selector doesn't seem optimal. Browsers optimize selection by using memoization.

Looking at Mozilla's parser source, namely an excerpt from the `startTag` function:

```
// ID uniqueness
@IdType String id = attributes.getId();
if (id != null) {
  LocatorImpl oldLoc = idLocations.get(id);
  if (oldLoc != null) {
    err("Duplicate ID \u201C" + id + "\u201D.");
    errorHandler.warning(new SAXParseException(
        "The first occurrence of ID \u201C" + id
        + "\u201D was here.", oldLoc));
  } else {
    idLocations.put(id, new LocatorImpl(tokenizer));
  }
}
```

We can see that node IDs are kept in a simple hash map. We can use a similar approach to ensure repeated queries for the same ID do not require a full traversal; instead, we can just look it up in the hash map and return it.

Here's what our solution would look like after memoization:

```
function getSelectors(){
  const idLocations = {}
  const classLocations = {}
  // updated selector functions
  function locateById(nodeId){
    if(idLocations.hasOwnProperty(nodeId))
      return idLocations[nodeId]
    for(let node of walkPreOrder(document.body)){
      if(node.id === nodeId){
        idLocations[nodeId] = node // memoize
        return node
      }
    }
    idLocations[nodeId] = null // memoize
    return null
  }
  function locateAllByClassName(className){
    if(classLocations.hasOwnProperty(className))
      return classLocations[className]
    const result = []
    for(let node of walkPreOrder(document.body)){
      if(node.classList.contains(className)){
        result.push(node)
      }
    }
    classLocations[className] = result // memoize
    return result
  }
  return {
    locateById,
    locateAllByClassName
  }
}
// USAGE
const {locateById, locateAllByClassName} = getSelectors();
const result = locateAllByClassName('row') // returns array of elements
const img = locateById('profile') // returns an element, if found
```

Let's try to implement something like `element.querySelector`. Here's how MDN describes it:

The querySelector() method of the Element interface returns the first element that is a descendant of the element on which it is invoked that matches the specified group of selectors.

Example:

```
const firstRow = document.querySelector('.container .row:first-child')
```

In this case, we can pass any CSS selector to the function and it should be able to traverse the DOM to find that element for us. Let's see how it can be implemented:

```
// given a selector and root node, find that selector within the root node
function select(selector, root){
  for(let node of walkPreOrder(root)){
    if(node.matches(selector)){
      return node
    }
  }
  return null;
}
function myQuerySelector(path, node){
  // if path is empty, nothing to find
  if(path.length === 0) return null;
  // if node is not provided, let's assume user wants to search within document.body
  let root = node || document.body;
  const selector = path[0];
  // if there's only one selector in the path, just traverse using the select function above
  if(path.length === 1) return select(selector, root);
  // else, either the current node matches the first selector in the path or not
  // if the first selector matches the current node, look through its children for subsequent selectors only
  // else, look through its children for the whole path
  const newPath = root.matches(selector) ? path.slice(1) : path;
  for(let child of root.children){
    const ans = myQuerySelector(newPath, child);
    if(ans) return ans
  }
  // nothing found
  return null;
}
// USAGE:
const firstRow = myQuerySelector([".container", ".row"])
```

Implementation of `myQuerySelectorAll` (similar to `element.querySelectorAll`) also follows the same approach, with slight modification:

```
function selectAll(selector, root){
  let result = []
  for(let node of walkPreOrder(root)){
    if(node.matches(selector)){
      result.push(node)
    }
  }
  return result;
}
function myQuerySelectorAll(path, node){
  let result = [];
  if(path.length === 0) return result;
  let root = node || document.body;
  const selector = path[0];
  if(path.length === 1) return selectAll(selector, root);
  const newPath = root.matches(selector) ? path.slice(1) : path;
  for(let child of root.children){
    result = [...result, ...myQuerySelectorAll(newPath, child)]
  }
  return result;
}
```

We can use the recursive preOrder traversal approach, described at the start of this article, to clone any tree. Let's see how we can use it to clone any DOM tree, similar to what `element.cloneNode(true)` does:

- Create a clone of the source node, by creating a new node with the same tagName and then copying over the attributes.
- Recursively call the `cloneTree` method on all children of the source node, and append the returned nodes as children to the cloned node.

```
function cloneTree(node){
  if(!node) return
  const clonedNode = document.createElement(node.tagName.toLowerCase())
  const attributes = node.getAttributeNames()
  attributes.forEach(attribute => {
    clonedNode.setAttribute(attribute, node.getAttribute(attribute))
  })
  for(const child of node.children){
    clonedNode.append(cloneTree(child))
  }
  return clonedNode
}
```

Let's start with memoizing a pure function. Say we have a function called `getSquare`, which returns the square of the given number:

```
function getSquare(x){
  return x * x
}
```

To memoize this we can do something like this:

```
const memo = {}
function getSquare(x){
  if(memo.hasOwnProperty(x)) {
    return memo[x]
  }
  memo[x] = x * x
  return memo[x]
}
```

So, with a few lines of code, we've memoized our `getSquare` function.

Let's create a `memoize` helper. It would accept a pure function as the first argument and a `getKey` function (which returns a unique key given the function's arguments) as the second argument, and return a memoized version of the function:

```
function memoize(fn, getKey){
  const memo = {}
  return function memoized(...args){
    const key = getKey(...args)
    if(memo.hasOwnProperty(key)) return memo[key]
    memo[key] = fn.apply(this, args)
    return memo[key]
  }
}
```

We can apply this function to `getSquare` as follows:

```
const memoGetSquare = memoize(getSquare, num => num)
```

Memoizing a function accepting multiple arguments:

```
const getDivision = (a, b) => a/b
// memoizing using the helper
const memoGetDivision= memoize(getDivision, (a, b) => `${a}_${b}`)
```
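To confirm the helper actually short-circuits repeat calls, here's a quick self-contained check (the call counter exists only for illustration):

```javascript
function memoize(fn, getKey) {
  const memo = {}
  return function memoized(...args) {
    const key = getKey(...args)
    if (memo.hasOwnProperty(key)) return memo[key]
    memo[key] = fn.apply(this, args)
    return memo[key]
  }
}

// Count how many times the underlying function really runs
let calls = 0
const divide = (a, b) => { calls++; return a / b }
const memoDivide = memoize(divide, (a, b) => `${a}_${b}`)

memoDivide(10, 2) // computed: calls becomes 1
memoDivide(10, 2) // served from memo: calls stays 1
memoDivide(10, 5) // new key, computed: calls becomes 2
```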

Let's say there's a function called `expensiveOperation(key)` which accepts a key as an argument and performs some async operation before returning the final result via a callback:

```
// does some async operation and invokes the callback with the final result
expensiveOperation(key, (data) => {
  // Do something
})
```

Let's use similar notion as above to memoize this function:

```
const memo = {}
function memoExpensiveOperation(key, callback){
  if(memo.hasOwnProperty(key)){
    callback(memo[key])
    return
  }
  expensiveOperation(key, data => {
    memo[key] = data
    callback(data)
  })
}
```

So that was pretty easy. But wait! It doesn't solve the whole problem yet. Consider the following scenario:

1- Invoked `expensiveOperation` with key 'a'

2- While #1 is still in progress, invoked it again with the same key

The function would run twice for the same operation because #1 is yet to save the final data in `memo`. That's not something we wanted. We would instead want concurrent calls to be resolved at once after the earliest call is complete.

To address this issue we can track the operations currently in progress, say we do so by putting it in some sort of queue. Now, if we receive a call for an operation, while it's still in queue, we further enqueue the new call. This way we keep accumulating the repetitive calls and once the operation is done, all of these are processed in one go.

```
const memo = {}, progressQueues = {}
function memoExpensiveOperation(key, callback){
  if(memo.hasOwnProperty(key)){
    callback(memo[key])
    return
  }
  if(!progressQueues.hasOwnProperty(key)){
    // processing a new key, create an entry for it in progressQueues
    progressQueues[key] = [callback]
  } else {
    // processing a key that's already being processed, enqueue its callback and exit.
    progressQueues[key].push(callback);
    return
  }
  expensiveOperation(key, (data) => {
    // memoize result
    memo[key] = data
    // process all the enqueued items after it's done
    for(let callback of progressQueues[key]) {
      callback(data)
    }
    // clean up progressQueues
    delete progressQueues[key]
  })
}
```

We can go a step further, just like the last section, and create a reusable helper, say `memoizeAsync`:

```
function memoizeAsync(fn, getKey){
  const memo = {}, progressQueues = {}
  return function memoized(...allArgs){
    const callback = allArgs[allArgs.length - 1]
    const args = allArgs.slice(0, -1)
    const key = getKey(...args)
    if(memo.hasOwnProperty(key)){
      callback(memo[key]) // invoke with the memoized data
      return
    }
    if(!progressQueues.hasOwnProperty(key)){
      // processing a new key, create an entry for it in progressQueues
      progressQueues[key] = [callback]
    } else {
      // processing a key that's already being processed, enqueue its callback and exit.
      progressQueues[key].push(callback);
      return
    }
    fn.call(this, ...args, (data) => {
      // memoize result
      memo[key] = data
      // process all the enqueued items after it's done
      for(let callback of progressQueues[key]) {
        callback(data)
      }
      // clean up progressQueues
      delete progressQueues[key]
    })
  }
}
// USAGE
const memoExpensiveOperation = memoizeAsync(expensiveOperation, key => key)
```
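To see the de-duplication of concurrent calls in action, here's a small self-contained exercise. The `fakeOperation` stub below is purely illustrative: it records its callbacks so we can complete the "async" work manually, and counts how often it's invoked:

```javascript
function memoizeAsync(fn, getKey) {
  const memo = {}, progressQueues = {}
  return function memoized(...allArgs) {
    const callback = allArgs[allArgs.length - 1]
    const args = allArgs.slice(0, -1)
    const key = getKey(...args)
    if (memo.hasOwnProperty(key)) {
      callback(memo[key])
      return
    }
    if (!progressQueues.hasOwnProperty(key)) {
      progressQueues[key] = [callback]
    } else {
      progressQueues[key].push(callback)
      return
    }
    fn.call(this, ...args, data => {
      memo[key] = data
      for (const cb of progressQueues[key]) cb(data)
      delete progressQueues[key]
    })
  }
}

// Stub that records pending callbacks so completion can be triggered manually
const pending = []
let invocations = 0
function fakeOperation(key, callback) {
  invocations++
  pending.push(callback)
}

const memoOp = memoizeAsync(fakeOperation, key => key)
const results = []
memoOp('a', data => results.push(data))
memoOp('a', data => results.push(data)) // concurrent duplicate: stub is NOT invoked again
pending.forEach(cb => cb(42))           // complete the single in-flight operation
// invocations is 1, results is [42, 42]
```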

Let's say we have a function `processData(key)` which accepts a key as argument and returns a Promise. Let's see how it can be memoized.

The simplest way would be to memoize the promise issued against the key. Here's how it would look:

```
const memo = {}
function memoProcessData(key){
  if(memo.hasOwnProperty(key)) {
    return memo[key]
  }
  memo[key] = processData(key) // memoize the promise for key
  return memo[key]
}
```

The code is fairly simple and self-explanatory here. We can use the `memoize` helper we created a while ago:

```
const memoProcessData = memoize(processData, key => key)
```

Yes. We can apply the same approach as the callback case here, though it might be overkill for the sake of memoizing such a function:

```
const memo = {}, progressQueues = {}
function memoProcessData(key){
  return new Promise((resolve, reject) => {
    // if the operation has already been done before, simply resolve with that data and exit
    if(memo.hasOwnProperty(key)){
      resolve(memo[key])
      return;
    }
    if(!progressQueues.hasOwnProperty(key)){
      // called for a new key, create an entry for it in progressQueues
      progressQueues[key] = [[resolve, reject]]
    } else {
      // called for a key that's still being processed, enqueue its handlers and exit.
      progressQueues[key].push([resolve, reject]);
      return;
    }
    processData(key)
      .then(data => {
        memo[key] = data; // memoize the returned data
        // process all the enqueued entries after a successful operation
        for(let [resolver, ] of progressQueues[key])
          resolver(data)
      })
      .catch(error => {
        // process all the enqueued entries after a failed operation
        for(let [, rejector] of progressQueues[key])
          rejector(error);
      })
      .finally(() => {
        // clean up progressQueues
        delete progressQueues[key]
      })
  })
}
```

Since we're using a `memo` object to keep track of memoized operations, with too many calls to `expensiveOperation` with various keys (and each operation returning a sizeable chunk of data after processing), the size of this object may grow beyond what's ideal. To handle this scenario we can use a cache eviction policy such as LRU (Least Recently Used). It would ensure we're memoizing without crossing memory limits!
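As a sketch of that idea, here's a tiny LRU cache built on a `Map` (which iterates in insertion order); the capacity and names are made up for illustration:

```javascript
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity
    this.map = new Map()
  }
  get(key) {
    if (!this.map.has(key)) return undefined
    // re-insert to mark the key as most recently used
    const value = this.map.get(key)
    this.map.delete(key)
    this.map.set(key, value)
    return value
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key)
    this.map.set(key, value)
    if (this.map.size > this.capacity) {
      // evict the least recently used entry (first key in insertion order)
      const oldest = this.map.keys().next().value
      this.map.delete(oldest)
    }
  }
}

const cache = new LRUCache(2)
cache.set('a', 1)
cache.set('b', 2)
cache.get('a')    // touch 'a', so 'b' becomes least recently used
cache.set('c', 3) // evicts 'b'
// cache.get('b') is undefined, cache.get('a') is 1
```

Swapping the plain `memo` object for something like this would cap memory use while keeping the hot keys fast.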

In real-life applications, it's quite common for tree nodes to have a parent field: a field which points to the parent node, hence also called the parent pointer. Let's take the example of the DOM in the browser. Say we select a node using the following:

```
const element = document.querySelector("#id")
```

Now, we would find `element.parentNode` pointing to the element's parent.

In this article, we'll look at how we can use these `parent` pointers to make traversal more efficient. I'll explain what I mean by 'more efficient' in a bit. In the next article, we'll also see how we can use the lessons learnt here to create a `myQuery` library (a lightweight `jQuery` clone) from scratch.

`Node` definition: First off, we need to update our `Node` function (side-note: in the OOP world you may call this a `class` instead, because this function is always to be invoked with the `new` operator):

```
function Node(value){
  this.value = value
  this.left = null
  this.right = null
  this.parent = null // added parent field
}
```

Nothing fancy here! Now let's see how we can use this new `Node` definition to create a similar tree, as we did in the last article.

```
const root = new Node(2)
const left = new Node(1)
root.left = left
left.parent = root
const right = new Node(3)
root.right = right
right.parent = root
```

Alright, so that was simple too. We simply needed to ensure the `parent` fields point to the parent node. Here's a visual reference for the final tree we get using the code above:

Let's do something more fun. How about finding the next node in the `preOrder` traversal of a tree, given the current node and the fact that each node has a `parent` pointer. Let me rephrase this question for clarity:

How can we find the preOrder successor of any node in a binary tree? Assume that each node has a `parent` pointer.

Let's try to dissect this problem.

- First off, we're dealing with preOrder here; that means we're looking for the following order: `root -> left -> right`
- That means if we're already at the current node, we look for the left child node as the successor.
- What if there's no left child at all? Well, in that case, we look for the right child, and if it's there, that's the successor.
- If there's no left or right child, then we need to backtrack (keep going upwards towards the parent). We keep backtracking **while the current node is its parent's right child** (or the parent has no right child to visit), because that means the preOrder is complete for the whole subtree under the parent, by the definition in #1.

So here's what the final algorithm would look like:

```
function preOrderSuccessor(node){
  if(!node) return null
  if(node.left) return node.left
  if(node.right) return node.right
  let parent = node.parent
  // keep backtracking while the subtree under parent is fully traversed,
  // i.e. we came up from the right child (or there is no right child to visit)
  while(parent && (parent.right === node || !parent.right)) {
    node = parent
    parent = parent.parent
  }
  if(!parent) return null // we backtracked till root, so no successor
  return parent.right
}
```

Here's the visual cue for better understanding.

Here's a link to a gist on this idea, in case you want to explore it for yourself.
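To sanity-check the function, here's a self-contained run on the three-node tree built earlier; walking successors from the root should yield the preOrder sequence 2, 1, 3:

```javascript
function Node(value) {
  this.value = value
  this.left = null
  this.right = null
  this.parent = null
}

function preOrderSuccessor(node) {
  if (!node) return null
  if (node.left) return node.left
  if (node.right) return node.right
  let parent = node.parent
  // climb while we came up from the right child (or there's no right child to visit)
  while (parent && (parent.right === node || !parent.right)) {
    node = parent
    parent = parent.parent
  }
  return parent ? parent.right : null
}

// Build the 2 / 1 \ 3 tree from earlier
const root = new Node(2)
root.left = new Node(1); root.left.parent = root
root.right = new Node(3); root.right.parent = root

const order = []
for (let current = root; current; current = preOrderSuccessor(current)) {
  order.push(current.value)
}
// order is [2, 1, 3]
```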

Finding the InOrder successor is pretty similar. Let's go step by step:

- For the InOrder successor we are looking to traverse in the following way: `left -> root -> right`
- If we're at the current node and there's anything on its right, then we can get the successor by finding the leftmost node of the right subtree.
- If there's no right child, then we need to backtrack (move upwards). We keep moving upwards while the **parent is reached via its right child**, because that means the whole subtree has been traversed already (by the definition in #1).
- Once we find the nearest parent which has been reached via its left child, it is returned as the successor. Why? Because it's a node whose left tree has been explored, so by the definition in #1, the node itself is now the successor.

Here's the final algorithm:

```
function inOrderSuccessor(node){
  if(!node) return null
  if(node.right){
    let current = node.right
    while(current && current.left) current = current.left
    return current
  }
  let parent = node.parent
  while(parent && parent.right === node) {
    node = parent
    parent = parent.parent
  }
  if(!parent) return null
  return parent
}
```

Visual cue:

Link to gist on this idea.

Let's follow a similar thought process for finding the postOrder successor:

- For postOrder, we are looking to traverse in the following way: `left -> right -> root`
- So, if we're at any node, it means its left and right subtrees have already been explored. That means we need to look at the parent for the successor.
- If we're reaching the parent from its right child (or the parent has no right child), the parent itself is the successor, by the definition in #1.
- If we're reaching the parent from its left child and it has a right child, that right subtree is to be explored next (as per the definition in #1). So we return the first postOrder node of the parent's right subtree: keep descending, preferring left children, until we hit a leaf.

Here's the final algorithm:

```
function postOrderSuccessor(node){
  if(!node) return null
  let parent = node.parent
  if(!parent) return null // the root is the last node in postOrder
  // reached from the right child (or no right child exists): parent is the successor
  if(parent.right === node || !parent.right) return parent
  // otherwise: first postOrder node of the parent's right subtree
  let current = parent.right
  while(current && (current.left || current.right)){
    current = (current.left || current.right)
  }
  return current
}
```

Link to gist to play around with this idea.

Why do we need to use the `parent` field to come up with a traversal algorithm at all? It's a valid question, since we've already come up with recursive and iterative approaches to traverse the tree, and that too without the need for a `parent` field.

The reason why we're doing this is the added space complexity of our previous approaches. If you remember, we needed to use one or two stacks (depending on the traversal method) in the previous article to get any of the traversal algorithms working. Even in the recursive approach, though we're not directly using a stack, recursion itself is based on call stacks, so there's a hidden in-memory stack being used there as well. The problem is that the size of this stack grows with the depth of our tree, hence it's not the best solution since we have a way to do the same task while using less space. By using the `parent` pointer we can get rid of those stacks completely, saving us significant space, i.e. going from a space complexity of O(log N), where N denotes the size of a balanced tree, to O(1). Let's see how.

For preOrder traversal, we start at the `root` of the tree. Afterwards, we can keep fetching the preOrder successor using the algorithm above to traverse the whole tree:

```
function preOrder(root){
  // first node
  console.log(root.value);
  let current = root
  while(true){
    const next = preOrderSuccessor(current)
    if(!next) break
    // do something
    console.log(next.value)
    current = next
  }
}
```

For InOrder traversal, the starting node would be the leftmost node of the tree. Thereafter we can keep fetching the successor using the algorithm above to traverse the whole tree:

```
function inOrder(root){
  // start at the leftmost node
  while(root && root.left){
    root = root.left
  }
  // first node
  console.log(root.value);
  let current = root
  while(true){
    const next = inOrderSuccessor(current)
    if(!next) break
    // do something
    console.log(next.value)
    current = next
  }
}
```

Very similar to InOrder approach above:

```
function postOrder(root){
  // start at the first postOrder node: keep descending, preferring left children
  while(root && (root.left || root.right)){
    root = root.left || root.right
  }
  // first node
  console.log(root.value);
  let current = root
  while(true){
    const next = postOrderSuccessor(current)
    if(!next) break
    // do something
    console.log(next.value)
    current = next
  }
}
```

Can you come up with algorithms for finding the predecessor (inOrder, preOrder and postOrder) if each node has a parent pointer? It would be a fun exercise. Try it out and let me know in the comments.

- DOM is a tree data structure
- Directory and files in our OS can be represented as trees
- A family hierarchy can be represented as a tree.

There are a bunch of variations of trees (such as heaps, BSTs, etc.) which can be used to solve problems related to scheduling, image processing, databases, etc. Many complex problems may not seem related to trees at a quick look, but can actually be represented as one. We'll walk through such problems as well (in later parts of this series) to see how trees can make seemingly complex problems much easier to comprehend and solve.

Don't forget to subscribe to my newsletter (subscription form should be at the top of this article) if you'd like to be informed about further posts in this series.

Implementing a `Node` for a binary tree is pretty straightforward.

```
function Node(value){
  this.value = value
  this.left = null
  this.right = null
}
// usage
const root = new Node(2)
root.left = new Node(1)
root.right = new Node(3)
```

So these few lines of code would create a binary tree for us which looks like this:

```
2
/ \
/ \
1 3
/ \ / \
null null null null
```

Cool! So that was easy. Now, how do we put this to use?

Let's start with trying to walk through these connected tree nodes (or a tree). Just as we can iterate through an array, it would be cool if we could 'iterate' through tree nodes as well. However, trees are not linear data structures like arrays, so there isn't just one way of traversing them. We can broadly classify the traversal approaches into the following:

- Breadth first traversal
- Depth first traversal

In this approach, we traverse the tree level by level. We would start at the root, then cover all of its children, then all of the 2nd-level children, and so on and so forth. For example, for the tree above, traversal would result in something like this:

```
2, 1, 3
```

Here's an illustration with a slightly more complex tree to make this even simpler to understand:

To achieve this form of traversal we can use a queue (First In, First Out) data structure. Here's how the overall algorithm would look:

- Initiate a queue with the root in it
- Remove the first item from the queue
- Push the left and right children of the removed item into the queue
- Repeat steps 2 and 3 until the queue is empty

Here's how this algorithm looks after implementation:

```
function walkBFS(root){
  if(root === null) return
  const queue = [root]
  while(queue.length){
    const item = queue.shift()
    // do something
    console.log(item)
    if(item.left) queue.push(item.left)
    if(item.right) queue.push(item.right)
  }
}
```

We can modify the above algorithm slightly to return an array of arrays, where each inner array represents a level with the elements within it:

```
function walkBFS(root){
  if(root === null) return
  const queue = [root], ans = []
  while(queue.length){
    const len = queue.length, level = []
    for(let i = 0; i < len; i++){
      const item = queue.shift()
      level.push(item)
      if(item.left) queue.push(item.left)
      if(item.right) queue.push(item.right)
    }
    ans.push(level)
  }
  return ans
}
```

In DFS, we take one node and keep exploring its children until the depth is fully exhausted. It can be done in one of the following ways:

```
root node -> left node -> right node // pre-order traversal
left node -> root node -> right node // in-order traversal
left node -> right node -> root node // post-order traversal
```

All of these traversal techniques can be implemented recursively as well as iteratively. Let's jump into the implementation details:

Here's what PreOrder traversal looks like for a tree:

```
root node -> left node -> right node
```

We can use this simple trick to find out the PreOrder traversal of any tree manually: traverse the entire tree starting from the root node keeping yourself to the left.

Let's dive into actual implementation for such a traversal.
**Recursive approach** is fairly intuitive.

```
function walkPreOrder(root){
  if(root === null) return
  // do something here
  console.log(root.value)
  // recurse through child nodes
  if(root.left) walkPreOrder(root.left)
  if(root.right) walkPreOrder(root.right)
}
```

**Iterative approach** for PreOrder traversal is very similar to BFS, except we use a `stack` instead of a `queue`, and we push the right child first into the stack:

```
function walkPreOrder(root){
  if(root === null) return
  const stack = [root]
  while(stack.length){
    const item = stack.pop()
    // do something
    console.log(item)
    // Left child is pushed after the right one: we want to print the left child
    // first, so it must sit above the right child in the stack
    if(item.right) stack.push(item.right)
    if(item.left) stack.push(item.left)
  }
}
```
```

Here's what InOrder traversal looks like for a tree:

```
left node -> root node -> right node
```

We can use this simple trick to find the InOrder traversal of any tree manually: hold a plane mirror horizontally at the bottom of the tree and take the projection of all the nodes.

**Recursive:**

```
function walkInOrder(root){
  if(root === null) return
  if(root.left) walkInOrder(root.left)
  // do something here
  console.log(root.val)
  if(root.right) walkInOrder(root.right)
}
```

**Iterative:**
This algorithm may seem a bit cryptic at first, but it's fairly intuitive. Look at it this way: in InOrder traversal the leftmost child is printed first, then the root, and then the right children. So a first attempt might look like this:

```
let curr = root
while(curr){
  while(curr.left){
    curr = curr.left // get to leftmost child
  }
  console.log(curr.val) // print it
  curr = curr.right // now move to right child
}
```

However, in the above approach we can't backtrack, i.e. return to the parent nodes that led us to the leftmost node. So we'll need a stack to record those. Hence our revised approach may look like:

```
const stack = []
let curr = root
while(stack.length || curr){
  while(curr){
    stack.push(curr) // keep recording the trail, to backtrack
    curr = curr.left // get to leftmost child
  }
  const leftMost = stack.pop()
  console.log(leftMost.val) // print it
  curr = leftMost.right // now move to right child
}
```

Now we can use the above approach to lay down the final iterative algorithm:

```
function walkInOrder(root){
  if(root === null) return
  const stack = []
  let current = root
  while(stack.length || current){
    while(current){
      stack.push(current)
      current = current.left
    }
    const last = stack.pop()
    // do something
    console.log(last.val)
    current = last.right
  }
}
```
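A neat way to test the iterative InOrder walk: on a binary search tree, InOrder traversal visits the values in sorted order. Here's a typed sketch collecting values instead of logging them; the `TreeNode` shape, `node` helper, and `inOrderValues` name are assumptions for the demo:

```typescript
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

// Small helper to build nodes (hypothetical, for the demo only)
const node = (
  val: number,
  left: TreeNode | null = null,
  right: TreeNode | null = null
): TreeNode => ({ val, left, right });

// Same stack-based InOrder walk as above, collecting values instead of logging
function inOrderValues(root: TreeNode | null): number[] {
  const out: number[] = [];
  const stack: TreeNode[] = [];
  let current = root;
  while (stack.length || current) {
    while (current) {
      stack.push(current); // record the trail so we can backtrack
      current = current.left;
    }
    const last = stack.pop()!;
    out.push(last.val);
    current = last.right;
  }
  return out;
}

// A binary search tree: InOrder traversal should come out sorted
//        4
//       / \
//      2   6
//     / \ / \
//    1  3 5  7
const bst = node(4, node(2, node(1), node(3)), node(6, node(5), node(7)));
console.log(inOrderValues(bst)); // [ 1, 2, 3, 4, 5, 6, 7 ]
```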

Here's what PostOrder traversal looks like for a tree:

```
left node -> right node -> root node
```

For quick manual PostOrder traversal of any tree: pluck all the leftmost leaf nodes one by one.

Let's dive into actual implementation for such a traversal.

**Recursive:**

```
function walkPostOrder(root){
  if(root === null) return
  if(root.left) walkPostOrder(root.left)
  if(root.right) walkPostOrder(root.right)
  // do something here
  console.log(root.val)
}
```

**Iterative:**
We already have an iterative algorithm for PreOrder traversal. Can we reuse it? PostOrder traversal appears to be just the reverse of PreOrder traversal. Let's check:

```
// PreOrder:
root -> left -> right
// Reverse of PreOrder:
right -> left -> root
// But PostOrder is:
left -> right -> root
```

Ah! So there's a slight difference. But we can accommodate it by modifying our PreOrder algorithm slightly; reversing its output then gives the PostOrder result. The overall algorithm is:

```
// record result using
root -> right -> left
// reverse result
left -> right -> root
```

- Use an approach similar to the iterative PreOrder algorithm above, with a temporary `stack`
- The only exception is that we go `root -> right -> left` instead of `root -> left -> right`
- Keep recording the traversal sequence in an array `result`
- Reversing `result` gives the PostOrder traversal

```
function walkPostOrder(root){
  if(root === null) return []
  const tempStack = [root], result = []
  while(tempStack.length){
    const last = tempStack.pop()
    result.push(last)
    if(last.left) tempStack.push(last.left)
    if(last.right) tempStack.push(last.right)
  }
  return result.reverse()
}
```
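Here's a typed sketch of the same reverse-PreOrder trick, as a variant that records values rather than nodes so the output is easy to read. The `TreeNode` shape, `node` helper, and `postOrderValues` name are assumptions for the demo:

```typescript
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

// Small helper to build nodes (hypothetical, for the demo only)
const node = (
  val: number,
  left: TreeNode | null = null,
  right: TreeNode | null = null
): TreeNode => ({ val, left, right });

// Modified PreOrder (root -> right -> left), reversed at the end
function postOrderValues(root: TreeNode | null): number[] {
  if (root === null) return [];
  const tempStack: TreeNode[] = [root];
  const result: number[] = [];
  while (tempStack.length) {
    const last = tempStack.pop()!;
    result.push(last.val);
    // left pushed first, so right is popped (and recorded) first
    if (last.left) tempStack.push(last.left);
    if (last.right) tempStack.push(last.right);
  }
  return result.reverse();
}

//        1
//       / \
//      2   3
//     / \
//    4   5
const tree = node(1, node(2, node(4), node(5)), node(3));
console.log(postOrderValues(tree)); // [ 4, 5, 2, 3, 1 ]
```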

How nice would it be if we could traverse the tree like this:

```
for(const node of walkPreOrder(tree)){
  console.log(node.val)
}
```

Looks really nice and pretty simple to read, doesn't it? All we've got to do is use a `walk` function that returns an iterator.

Here's how we can modify our `walkPreOrder` function above to behave as per the example shared above:

```
function* walkPreOrder(root){
  if(root === null) return
  const stack = [root]
  while(stack.length){
    const item = stack.pop()
    yield item
    if(item.right) stack.push(item.right)
    if(item.left) stack.push(item.left)
  }
}
```
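One nice property of the generator version is laziness: a consumer can stop mid-traversal, and the rest of the tree is never visited. A typed sketch, where the `TreeNode` shape and `node` helper are assumptions for the demo:

```typescript
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

// Small helper to build nodes (hypothetical, for the demo only)
const node = (
  val: number,
  left: TreeNode | null = null,
  right: TreeNode | null = null
): TreeNode => ({ val, left, right });

// Generator version of the PreOrder walk, as defined above
function* walkPreOrder(root: TreeNode | null): Generator<TreeNode> {
  if (root === null) return;
  const stack: TreeNode[] = [root];
  while (stack.length) {
    const item = stack.pop()!;
    yield item;
    if (item.right) stack.push(item.right);
    if (item.left) stack.push(item.left);
  }
}

//        1
//       / \
//      2   3
//     / \
//    4   5
const tree = node(1, node(2, node(4), node(5)), node(3));
const seen: number[] = [];
for (const n of walkPreOrder(tree)) {
  seen.push(n.val);
  if (n.val === 4) break; // stop early; nodes 5 and 3 are never yielded
}
console.log(seen); // [ 1, 2, 4 ]
```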
