Wednesday, May 31, 2017

A Tricky DP Problem on Trees

Rachit Jain, rachit iit roorkee, rachit iitr, rachit codechef, rachit hackerrank, rachit jain microsoft
Tags: small-to-large trick, dp, trees

Not many programmers are aware of this "small-to-large" trick.  I picked up this problem from CsAcademy Round 31: you are given a tree rooted at node $1$ where every node $i$ has some value, $val[i]$. You need to find the number of ways of selecting a subset of nodes so that the induced tree is uniform.
1. A uniform tree is one where, for every node, all of its direct children have the same value.
2. The tree induced by a subset of nodes $X$ is one where:
         a. There exists exactly one node in $X$ that doesn't have any ancestor present in $X$.
         b. For every other node in $X$, its parent is its lowest ancestor that is present in $X$.

The DP solution is as follows:
Let $dp[i][j]$ be the number of uniform forests (i.e. a single uniform tree, or a collection of uniform trees) that exist in the subtree of the $i^{th}$ node such that the root of every tree in the forest has value $j$.
Now $dp[i][j] = \left(\prod_{x\in child[i]}(dp[x][j]+1)\right)-1$ , if we do not include node $i$.
This is because every child $x$ can contribute forests with root value $j$ in $dp[x][j]$ ways, plus $1$ extra way, when we do not select any forest from the $x^{th}$ child.
So we need to subtract $1$ from the product because it also counts the case where all the children of node $i$ contribute no forest.

Note that we can also include node $i$. This will add more ways in $dp[i][val[i]]$.

If this is not clear, think it through by drawing on paper. You can also refer to the editorial here.

So this takes $O(n^2)$ time as well as space, and we need to do better.
Note that for the subtree of node $i$, if none of the nodes in the subtree has value $v$, then $dp[i][v]$ will be $0$.
So we only iterate over the values actually present in the given subtree, using a map.

Now here is the small-to-large technique part. When we combine the maps of all the children of node $i$, we can do it in a smarter way: we always iterate over the smaller map and put its entries into the larger map.
So whenever an entry is moved, it is moved from a smaller map into a larger one.
This makes the resultant map at least twice the size of the smaller map. So each entry can be moved at most $\log_2(n)$ times, and we have $N$ entries in total (one value per node). So we get $O(N\log N)$ time complexity as well as space.

As usual, here is the code implementing the above idea.
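In case the link isn't handy, here is a minimal sketch of just the merge step (the name `mergeSmallToLarge` and the plain-count payload are my own; the actual solution stores dp values keyed by node value):

```cpp
#include <map>
#include <utility>

// Small-to-large merge: always iterate over the smaller map and move its
// entries into the larger one. Swapping std::map is O(1), so the parent
// always ends up holding the bigger map afterwards.
void mergeSmallToLarge(std::map<int, long long>& parent,
                       std::map<int, long long>& child) {
    if (parent.size() < child.size())
        std::swap(parent, child);          // O(1): keep the bigger map in parent
    for (const auto& [val, cnt] : child)
        parent[val] += cnt;                // combine the two entries for this value
    child.clear();
}
```

In the real solution the `+=` line would instead combine the child's dp value into the parent's product, but the movement pattern — and hence the $O(N\log N)$ bound — is exactly this.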

I hope you've learnt a bit about this small-to-large technique.  Here is another problem based on it.
You can compare my 2 solutions for it: one based on Mo's algorithm, one based on small-to-large. See how much shorter and cleaner the code becomes using this trick.

Comment if you have any doubts. I'm always ready to help.


Tuesday, May 30, 2017

A Hard Problem on Bit Logic

Tags: bits, analysis, math, counting problem

This time, you will have a lot of fun. Get ready for some high level analysis! :D

So this is some mind-boggling, extremely beautiful problem from AtCoder: you have the collection of natural numbers from $L$ to $R$ $(R<10^{18})$ inclusive, and you want to count the number of unique values you can obtain by performing the OR operation on (non-empty subsets of) your collection of numbers.
The approach I am mentioning below will not pass TL, but is something worth mentioning.

I have solved a similar problem on CsAcademy, with only two differences:
1. We could use the AND operation between the numbers from our collection.
2. The collection of numbers had no pattern, unlike here where we have the numbers from $L$ to $R$.

Approach 1(Slow):
You might not understand this if you are unaware of SOS DP, and this solution will cause TLE anyway.
So you can SKIP this; otherwise you also learn one more technique.
We can form a number $X$ only if $res = \text{OR}_i(a_i)$ is the same as $X$, where the $a_i$ are the numbers from our collection that are subsets of $X$, i.e. $X\&a_i=a_i$.  So if we perform the OR operation on all such numbers and the result is still not $X$, then unfortunately $X$ cannot be formed.
This can be done in $O(A)$ or $O(A\log A)$, where $A=\max(a_1,a_2,\dots,a_n)$.
How?
Simple DP:
Let $dp[i][j]$ be the OR of all elements that are subsets of $i$ and differ from $i$ only in bits $0$ through $j$.
Then,
if the $j^{th}$ bit is set in $i$: $dp[i][j]=dp[i][j-1] \mid dp[i \oplus 2^j][j-1]$,
otherwise: $dp[i][j]=dp[i][j-1]$.

This is commonly known as SOS DP. If you didn't understand the above thing, see it from the link mentioned.
Here is the code for implementation above.
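In case the link goes stale, here is a compact sketch of that DP (the function name `sosOr` and its interface are my own; a single array indexed by mask suffices, since bit $j$ only needs layer $j-1$):

```cpp
#include <cstdint>
#include <vector>

// dp[mask] = OR of all collection elements that are subsets of mask,
// built with the SOS recurrence described above. A number X is then
// obtainable iff dp[X] == X (the OR of X's subsets reconstructs X exactly).
std::vector<uint64_t> sosOr(const std::vector<uint64_t>& a, int bits) {
    std::vector<uint64_t> dp(1ULL << bits, 0);
    for (uint64_t x : a) dp[x] |= x;            // base: each element is a subset of itself
    for (int j = 0; j < bits; j++)              // fold in bit j
        for (uint64_t i = 0; i < (1ULL << bits); i++)
            if ((i >> j) & 1)
                dp[i] |= dp[i ^ (1ULL << j)];   // merge the masks differing only in bit j
    return dp;
}
```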
So this will not pass the TL. We need to make use of the fact that in our problem we are not given arbitrary numbers: we are given exactly the numbers from $L$ to $R$.

Approach 2:  
The analysis is very beautiful and I really enjoyed solving this.
Let $A=L$ and $B=R$, and let $r$ be the first bit from the left where $A,B$ differ. We can remove the bits above $r$ from both $A,B$ as they won't affect our answer: no matter which numbers we select, the OR operation leaves those bits as they are, so they offer just one choice.
Now, $A<2^r\le B <2^{r+1}$
This equation is really very important as we will see later on.
Let $T=2^r$, and let's partition the numbers from $A$ to $B$ into 2 sets:
$X=\{A,A+1,...,T-1\}$ and $Y=\{T,T+1,...,B\}$

1. Taking nos from only X:
We can obtain all the numbers from $A$ to $T-1$. That's it; we can't obtain anything higher, as $T-1$ is already a bit sequence of all ones.

2. Taking nos from only Y:
This is interesting.
Note that $T\le B <2^{r+1}$, so $T$ and $B$ have the same number of bits.
The highest bit set in both $T$ and $B$ is bit $r$; in fact, $T$ has only that one bit set.
Let the next highest bit set in $B$ be $k$.
Now refer to the pic below.

We can trivially obtain all the numbers from $T$ to $B$. The point is how far beyond $B$ we can go.
Now refer to the image on left and read the following.
The numbers between $2^r$ and $2^r+2^k$ will be of the form $1000*****$.
The shaded portion of $k$ bits (bits $0$ through $k-1$) runs over all patterns of $0$s and $1$s (from $0$ to $2^k-1$).
So we can set any pattern of $0$s and $1$s in the lowest $k$ bits of $B$ by suitable OR operations.
This enables us to obtain any number from $B$ to $2^r+2^{k+1}-1$.  Voila!

So in this case, we can obtain nos from $T$ to $2^r + 2^{k+1}-1$.

3. Taking nos from X and Y:
The minimum number thus obtained will be the OR of $A$ and $T$, which equals $A+T=A+2^r$ since their bits don't overlap.
Note that $T$ just has a bit set at the $r^{th}$ position, while all numbers from $X$ use only bits $0$ to $r-1$.
$\therefore$ OR of $T=2^r$ with elements from $X$ will give us the numbers from $A+T$ to $T-1+T=2T-1$.
Well, $2T-1=2^{r+1}-1$ has all bits set to $1$, so this is the highest number that we can achieve, and we can't go any further.

Finally we have 3 segments, some of which may overlap. All that is left is to find the total length spanned by these segments.  Phew!

Here is the SMALL code implementing the above too BIG an idea.

I loved this problem very much and had a lot of fun writing the explanation for it.
Hope you have also learnt the SOS DP trick in case you didn't know about it.
Feel free to comment below in case of any queries.





Monday, May 29, 2017

A Connectivity Problem on Directed Graph

Tags: bitsets, graphs, SCC, connectivity

So the Hackerrank Codesprint 11 ended, and here is the question that I will talk about today: you are given a directed graph, and you will be given 2 types of queries: introduce a new node with an edge from/to the existing nodes, or answer whether there is a path from node $x_i$ to node $y_i$.

If it were an undirected graph, DSU would have nailed it.
Anyway, this is a standard technique used for connectivity in directed graphs.
As for now, forget the type of query that modifies the graph structure.
1. Find all the SCCs in the directed graph.
2. Form a new graph, a DAG by compressing the SCCs and adding edges between the SCCs.
3. Now for each SCC $A$, maintain the list$(reach[A])$ of all the SCCs it can reach.
4. For given nodes $x$ and $y$, there is a path from $x$ to $y$ iff the SCC component $Y$ is present in $reach[X]$, where $X,Y$ refer to the SCC that contain $x,y$ respectively.

Note that this takes $O(n^2)$ time and space because of step $3$. We can do it faster by using BITSETS. Click the above link, which excellently showcases the applications of bitsets in C++.
So we replace the list $reach[A]$ by the bitset $bit[A]$.

So we now do this in $O(\frac{N^2}{32})$, which is fast enough to pass the TL.

Now notice that queries modifying the structure of graph won't affect our algorithm much.
We first build the complete graph and then finally answer the queries.
The only thing that one could worry about is a query whose answer at that moment is "No", while our fully built graph might say "Yes".
But if the answer for $x,y$ is "No", it means that till now there isn't a path from $x$ to $y$. Notice that as we continue to modify the graph, we always bring a NEW node inside the structure, and a new node can never create a path between two existing nodes (it has a single edge, so no new path can pass through it).  Wow!

As usual, here is the code implementing the above idea.
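In case the link is down, here is a sketch of the core step (assuming the SCC condensation and a reverse topological order are already computed; `dag`, `order`, `comp`, and the bound `MAXC` are my own names):

```cpp
#include <bitset>
#include <vector>

const int MAXC = 1000;                       // assumed bound on the number of SCCs
std::bitset<MAXC> reachBit[MAXC];            // bit[A] replaces the list reach[A]

// dag[u] lists the SCCs that SCC u points to; order must be a reverse
// topological order, so every SCC is processed after everything it points to.
void computeReach(const std::vector<std::vector<int>>& dag,
                  const std::vector<int>& order) {
    for (int u : order) {
        reachBit[u].reset();
        reachBit[u][u] = 1;                  // every SCC reaches itself
        for (int v : dag[u])
            reachBit[u] |= reachBit[v];      // one word-parallel OR per DAG edge
    }
}
// There is a path from x to y iff reachBit[comp[x]][comp[y]] is set.
```

The `|=` on a bitset merges $MAXC$ reachability flags per machine-word operation, which is where the $\frac{1}{32}$ (or $\frac{1}{64}$) factor comes from.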
I've seen similar questions in past HackerRank contests too (can't recall which).
So I learnt that using bitsets is a standard technique for connectivity queries on directed graphs.

A Hard Combinatorics Problem

Tags: math, combinatorics, DSU

Now, this is a really beautiful problem from AtCoder: you are given an array of $N$ balls and two numbers $X,Y$.   $col[i], w[i]$ represent the color and weight of the $i^{th}$ ball respectively. You can swap two balls $i$ and $j$ of the same color if $w[i]+w[j] \le X$; you can swap two balls $i$ and $j$ of different colors if $w[i]+w[j] \le Y$. You need to find the total number of distinct permutations of colors that you can obtain.

The solution is as follows:
The key concept is that if you can swap balls $A,B$ and balls $B,C$, then you can also swap $A,C$ regardless of the condition. Why?
1. Let the original position be $A ... B ... C$
2. Swap $A,B$ to obtain  $B ... A ... C$
3. Swap $B,C$ to obtain  $C ... A ... B$
4. Swap $A,B$ to obtain  $C ... B ... A$
Voila! You can simply swap $A,C$.  

This means that we need to make clusters of balls. A cluster contains balls that can be swapped into each other's position and the total answer would be the product of total permutations for each cluster of balls.  

So the only problem is making the clusters.  Each ball can potentially be connected to the other $n-1$ balls, so there are around $O(n^2)$ candidate edges to examine. We need something faster.

Note that each edge can be of two types, connecting balls of: same color or different color.  We use DSU to add edges between the balls.  
1. If for every color $C$, we connect the lightest ball of  color $C$ to all the balls of same color which can be joined as per the rules, we completely incorporate all the edges of type that connect two balls of same color.  
2. Now we find the lightest ball overall; call it $A$. Then we find the lightest ball whose color differs from $col[A]$; call it $B$.
3. Connect all the balls of color different from $col[A]$ to $A$ if we can connect them.  
4. Now connect balls with color same as $col[A]$ to $B$ if we can connect them.   

Each step takes $O(n)$ time.

How does this work?
Consider the $i^{th}$ and $j^{th}$ balls, and assume that they can be swapped. We will prove that our idea of adding edges already collects both these balls into the same cluster.
Case 1: Same Color
Let the lightest ball of that color be $k$.
Since they can be swapped,
$w[i]+w[j]\le X$
Since $w[k]\le w[i]$ and $w[k]\le w[j]$, this means that $w[i]+w[k] \le X$ and $w[j]+w[k] \le X$.
$\therefore $ our idea of connecting ball to the lightest ball of that color makes the cluster complete and we never miss out any edge of this type.
Case 2: Different Color
You can work it out (similar to what we did above) to see that if $w[i]+w[j] \le Y$, then our idea would also collect both these balls into the same cluster.

Now we have made clusters, all that is left is to compute answers for each and then multiply them.  
Check the code for implementation of above idea.  
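If the link is down, here is a sketch of just the clustering (DSU plus steps 1–4 above; all names are mine, and computing the final product of permutations per cluster is left out):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

struct DSU {
    std::vector<int> p;
    DSU(int n) : p(n) { std::iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    void unite(int a, int b) { p[find(a)] = find(b); }
};

// Cluster the balls using only O(n) candidate edges, as described above.
DSU clusterBalls(const std::vector<int>& col, const std::vector<long long>& w,
                 long long X, long long Y) {
    int n = col.size();
    DSU dsu(n);
    int colors = *std::max_element(col.begin(), col.end()) + 1;
    std::vector<int> lightest(colors, -1);        // lightest ball of each color
    for (int i = 0; i < n; i++)
        if (lightest[col[i]] == -1 || w[i] < w[lightest[col[i]]])
            lightest[col[i]] = i;
    // Step 1: same-color edges go through the lightest ball of that color.
    for (int i = 0; i < n; i++) {
        int k = lightest[col[i]];
        if (i != k && w[i] + w[k] <= X) dsu.unite(i, k);
    }
    // Step 2: A = lightest ball overall, B = lightest ball of another color.
    int A = 0;
    for (int i = 1; i < n; i++) if (w[i] < w[A]) A = i;
    int B = -1;
    for (int i = 0; i < n; i++)
        if (col[i] != col[A] && (B == -1 || w[i] < w[B])) B = i;
    if (B != -1) {
        // Steps 3 and 4: cross-color edges go through A or B.
        for (int i = 0; i < n; i++) {
            if (col[i] != col[A] && w[i] + w[A] <= Y) dsu.unite(i, A);
            if (col[i] == col[A] && w[i] + w[B] <= Y) dsu.unite(i, B);
        }
    }
    return dsu;
}
```

Note that whenever any cross-color pair $(i,j)$ is swappable, $w[A]+w[B]\le w[i]+w[j]\le Y$ too, so $A$ and $B$ themselves get connected and the two "hubs" never split a cluster.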

Overall, I had too much fun solving this problem and I learnt something from it.

A Longest SubArray Problem

Tags: data structures, constructive, math

Here is another interesting problem that I found as a subproblem while solving a difficult problem: you are given an array $A$, and you have to build an array $inc$ such that $inc[i]$ represents the length of the longest good subarray (LGS) starting from $i$. We call a subarray $(L, R)$ good if its first element ($A_L$) is $\ge 1$, its second element ($A_{L+1}$) is $\ge 2$, ...., and its last element ($A_R$) is $\ge (R-L+1)$.

Eg: If the array given is $A = \{1,2,3,3,3\}$ ,
then the output will be $inc = \{3, 3, 3, 2, 1\}$.

Eg: If the array given is $A = \{4,2,3,5,9\}$ ,
then the output will be $inc = \{5, 4, 3, 2, 1\}$.

After about an hour of thinking, I had the following simple $O(n)$ solution:

Let $LGS_i$ mean the longest good subarray starting from index $i$.
Let's modify our array $A$ as $A[i] = A[i] - i$,  then:
1. $LGS_0$ will continue to stretch till the maximum index $j$ such that $A[k]\ge 1$ for $k \le j$.  
2. $LGS_1$ will continue to stretch till the maximum index $j$  such that $A[k]\ge 0$ for $k \le j$.
3. $LGS_2$ will continue to stretch till the maximum index $j$  such that $A[k]\ge -1$ for $k \le j$.

Since at every step, the lower bound is always decreasing by 1, we can use one pointer to find the farthest ending point of good subarrays for all indices in $O(n)$.  Note that we are able to use 1 pointer technique because of the fact that ending point of $LGS_i$ will always be after the ending point of $LGS_{i-1}$.

Check the code for the implementation of above idea.
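For reference, here is a compact sketch of the one-pointer idea (the function name is mine):

```cpp
#include <vector>

// inc[i] = length of the longest good subarray starting at i, in O(n).
// After the transform A[i] -= i, the subarray starting at i stretches
// while A[k] >= 1 - i; that threshold only decreases as i grows, so the
// end pointer j never needs to move backwards.
std::vector<int> longestGood(std::vector<long long> A) {
    int n = A.size();
    for (int i = 0; i < n; i++) A[i] -= i;        // the transform from the text
    std::vector<int> inc(n);
    int j = 0;                                    // one past the end of the current run
    for (int i = 0; i < n; i++) {
        if (j < i) j = i;                         // a run can't start before i
        while (j < n && A[j] >= 1 - (long long)i) j++;
        inc[i] = j - i;
    }
    return inc;
}
```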

Friday, May 26, 2017

A Greedy, Math Problem

Tags: math, greedy, constructive, combinatorics
So I have found questions from AtCoder interesting since the launch of the site.  
Here is another problem: you are given a number $n$ $(<10^{12})$ and you have to find an array of size $\le 200$ such that there exist exactly $n$ good subsequences of the array. A subsequence is good if its first half is the same as its second half, like $(1,2,3,1,2,3)$.

I was amazed by looking at the solution for this. I wasn't able to solve it on my own. So here is the solution:  
Let $P_k$ denote any permutation of the first $k$ natural numbers. Then consider the array $A$ formed by appending the sequence of the first $k$ natural numbers to $P_k$, i.e. $A = \{P_k, 1, 2, 3, 4, \dots, k\}$.
Now the total number of good subsequences of $A$ is the same as the number of increasing subsequences of $P_k$.

Now, note that if the total number of increasing subsequences of $P_k$ is $x$ (counting the empty subsequence), then we can place the next number $k+1$ at the:
1. Front of $P_k$, resulting in a total of $x+1$ (the new maximum can only appear as a singleton there).
2. Back of $P_k$, resulting in a total of $2x$ (every existing increasing subsequence can either be extended by $k+1$ or not).

Now when we need $n$ total good subsequences, we can recursively find the array that will generate them based on the parity of $n$.  Note that $n$ is halved at least once every two steps, so we will need at most $2\log_2(10^{12}) \approx 80$ steps.

Check the code for details.  
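In case the link is down, here is a sketch of the reconstruction (names are mine; following the counting above, $x$ counts increasing subsequences including the empty one, so the recursion bottoms out at $x=1$ for the empty permutation, and under the matching-halves definition that also counts the empty subsequence, the array below has exactly $n$ good subsequences):

```cpp
#include <cstdint>
#include <deque>
#include <vector>

// Build an array of the form {P_k, 1, 2, ..., k} whose number of
// matching-halves subsequences (counting the empty one) is exactly n.
// Walk n down to 1 (even -> undo "append", odd -> undo "prepend"),
// then replay the recorded choices forwards to build P_k.
std::vector<int> buildArray(uint64_t n) {
    std::vector<bool> atBack;
    while (n > 1) {
        if (n % 2 == 0) { atBack.push_back(true);  n /= 2; }  // back of P_k: x -> 2x
        else            { atBack.push_back(false); n -= 1; }  // front of P_k: x -> x+1
    }
    std::deque<int> P;
    int next = 1;
    for (auto it = atBack.rbegin(); it != atBack.rend(); ++it)
        if (*it) P.push_back(next++);
        else     P.push_front(next++);
    std::vector<int> res(P.begin(), P.end());
    for (int v = 1; v < next; v++) res.push_back(v);          // append 1..k
    return res;
}
```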
