
Mo's algorithm and its applications

 

Before learning Mo's Algorithm, let us take a quick look at the prerequisite technique it builds on.

In competitive programming, we often come across problems with a large number of static range queries on an array. Answering each query by scanning its whole range is a brute-force approach costing O(n) per query, and it will fail on most coding platforms because the number of queries is huge. So what can we do? Can we always split the array into two halves, like we do in binary search? Not really, and it would not be beneficial here.

Suppose we divide the array into x blocks to reduce our time complexity. A range query then touches at most x whole blocks plus the elements of two partial blocks, so its cost is roughly x + n/x. Minimising this expression (for example by differentiation) shows that the optimal x is nothing but √n, where n is the size of the array. We come to the conclusion that dividing the array into blocks of √n elements helps reduce the time complexity significantly.

This very technique is called Square Root Decomposition (don't worry, we will need it in Mo's Algorithm).

Square Root Decomposition
In this technique, we divide the array into blocks of √n elements. The idea is to pre-process the array and store each block's answer in a blocks array, so that when a query from l to r fully covers a block, we do not iterate through all √n elements of that block; we read its value from the blocks array in O(1) time. This reduces the per-query time complexity from O(n) to O(√n). Let us see this with an example of a Range Sum Query.

(Figure: the given array)

There are 10 elements in the array, hence n = 10 and the block size is ⌊√10⌋ = 3. We divide the array into blocks of 3 elements each, with the last block holding the single remaining element.

Our block array will look like:

Block ids are given by the formula block_id = i / blk_size (integer division), where i is the element's index. A remainder of 0 marks the start of a new block.

The idea is to pre-process the array and store the sum of each block in the blocks array, indexed by the block ids shown above. This helps when we process a query from l to r: if the range fully contains some blocks, we get their sums directly from the blocks array in O(1) time each.

Let us see the code implementation of range sum queries with Square Root Decomposition:

#include <bits/stdc++.h>
using namespace std;

vector<int> v;
long long block_sum[1000] = {0};
int blk_size;   // block size = sqrt(n)
int num_blocks; // number of blocks actually used

// Point update: set v[idx] = val and adjust the affected block sum
void update(int idx, int val) {
    int block_idx = idx / blk_size; // gives the block id
    block_sum[block_idx] += val - v[idx];
    v[idx] = val;
}

// Range sum query on [l, r]
long long query(int l, int r) {
    long long sum = 0;
    // Case 1: left partial block
    while (l <= r && l % blk_size != 0) {
        sum += v[l];
        l++;
    }
    // Case 2: whole blocks fully inside the range
    while (l + blk_size - 1 <= r) {
        sum += block_sum[l / blk_size];
        l += blk_size;
    }
    // Case 3: right partial block
    while (l <= r) {
        sum += v[l];
        l++;
    }
    return sum;
}

// Pre-compute the sum of every block
void preprocess(int n) {
    int blk_idx = -1;
    for (int i = 0; i < n; i++) {
        if (i % blk_size == 0)
            blk_idx++;
        block_sum[blk_idx] += v[i];
    }
    num_blocks = blk_idx + 1;
}

void printblock() {
    for (int i = 0; i < num_blocks; i++)
        cout << i << " " << block_sum[i] << endl;
}

int main() { // Driver program
    int n;
    cin >> n;
    blk_size = sqrt(n);
    v.clear();
    for (int i = 0; i < n; i++) {
        int num;
        cin >> num;
        v.push_back(num);
    }
    preprocess(n);
    printblock();
    update(4, 11);
    printblock();
    return 0;
}

Mo’s Algorithm

Now that we are done with the prerequisite, let us discuss Mo's Algorithm. It is essentially the same technique with one modification: we again divide the array into blocks of √n elements, but before processing the queries we sort them, first by the block that their l value falls in, and within the same block by their r values. Why do we sort like this?

Suppose we have just answered one query and the next query's L lies further to the left; we then have to include those elements on the left again. Moving L to the left includes elements and moving it to the right excludes them; for the R pointer, moving it left excludes elements and moving it right includes them. The total distance travelled by these two pointers over the array determines the running time of the program, so if we minimise this movement, we reduce the time complexity. That is exactly what sorting the query list achieves. Note that this algorithm fails when there are updates, since processing queries out of their original order (offline) cannot account for values changing in between.

Now let us see the code implementation of Mo’s Algorithm.

#include <bits/stdc++.h>
using namespace std;

int block; // block size = sqrt(n)

// Structure to represent a query range
struct Query {
    int L, R;
};

// Sort queries by the block of L; within the same block, by R
bool compare(Query x, Query y) {
    if (x.L / block != y.L / block)
        return x.L / block < y.L / block;
    return x.R < y.R;
}

// Print the sum of every query range (in sorted order)
void Result(int a[], int n, Query q[], int m) {
    block = sqrt(n); // find block size
    sort(q, q + m, compare);

    // Current window is [currL, currR - 1], initially empty
    int currL = 0, currR = 0;
    long long currSum = 0;

    for (int i = 0; i < m; i++) {
        int L = q[i].L, R = q[i].R;
        // Move the pointers to turn the window into [L, R]
        while (currL < L) {
            currSum -= a[currL];
            currL++;
        }
        while (currL > L) {
            currL--;
            currSum += a[currL];
        }
        while (currR <= R) {
            currSum += a[currR];
            currR++;
        }
        while (currR > R + 1) {
            currR--;
            currSum -= a[currR];
        }
        cout << "Sum of [" << L << ", " << R << "] is " << currSum << endl;
    }
}

// Driver program
int main() {
    int n;
    cin >> n;
    int* arr = new int[n];
    for (int i = 0; i < n; i++)
        cin >> arr[i];
    int m;
    cin >> m;
    Query* q = new Query[m];
    for (int i = 0; i < m; i++) {
        int x, y;
        cin >> x >> y;
        q[i].L = x;
        q[i].R = y;
    }
    Result(arr, n, q, m);
    delete[] arr;
    delete[] q;
    return 0;
}

Time Complexity: O((n + q)·√n), which is far better than the brute-force O(n·q). For instance, with n = q = 10^5, brute force needs around 10^10 operations while Mo's Algorithm needs roughly 6×10^7. Hence, by dividing the array into blocks we achieve a much more efficient program than brute force.

Let us also see another example/application of Mo’s algorithm:

An array of positive integers a1, a2, …, an is given. Consider an arbitrary subarray al, al+1, …, ar, where 1 ≤ l ≤ r ≤ n. For every positive integer s, let Ks denote the number of occurrences of s in the subarray. We call the power of the subarray the sum of the products Ks·Ks·s over every positive integer s. The sum contains only a finite number of nonzero summands, as the number of distinct values in the array is finite. You should calculate the power of t given subarrays. (This is a popular Codeforces problem.)

Solution and implementation using Mo's Algorithm:

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

const int SS = 200500;
int N, Q;
ll cur_ans, ans[SS];
int cnt[1000500];
int b_size;
int arr[SS];
// {{L, R}, original query index}
pair<pair<int, int>, int> queries[SS];

inline bool cmpF(const pair<pair<int, int>, int>& a,
                 const pair<pair<int, int>, int>& b) {
    int xblock = a.first.first / b_size;
    int yblock = b.first.first / b_size;
    if (xblock != yblock)
        return xblock < yblock;
    return a.first.second < b.first.second;
}

// Adding one occurrence of x changes the power by
// ((cnt+1)^2 - cnt^2) * x = (2*cnt + 1) * x
inline void add(ll x) {
    cur_ans += x * 1ll * (cnt[x] << 1 | 1);
    cnt[x]++;
}

inline void removeX(ll x) {
    cur_ans -= x * 1ll * ((cnt[x] - 1) << 1 | 1);
    cnt[x]--;
}

int main() {
    cin >> N >> Q;
    cur_ans = 0;
    b_size = sqrt(N);
    for (int i = 0; i < N; i++)
        cin >> arr[i];
    for (int i = 0; i < Q; i++) {
        cin >> queries[i].first.first >> queries[i].first.second;
        queries[i].first.first--; // convert to 0-based indices
        queries[i].first.second--;
        queries[i].second = i;    // remember the original position
    }
    sort(queries, queries + Q, cmpF);
    int m_left = 0, m_right = -1; // current window is [m_left, m_right]
    for (int i = 0; i < Q; i++) {
        int left = queries[i].first.first;
        int right = queries[i].first.second;
        while (m_right < right) {
            m_right++;
            add(arr[m_right]);
        }
        while (m_right > right) {
            removeX(arr[m_right]);
            m_right--;
        }
        while (m_left < left) {
            removeX(arr[m_left]);
            m_left++;
        }
        while (m_left > left) {
            m_left--;
            add(arr[m_left]);
        }
        ans[queries[i].second] = cur_ans;
    }
    for (int i = 0; i < Q; i++)
        cout << ans[i] << "\n";
    return 0;
}

Hence, we can see that Mo's Algorithm lets us reduce the time complexity of offline query problems by a factor of √n. It is a very handy tool in competitive programming, as it makes programs far more efficient. I hope this article helps aspiring programmers and developers.

