In this article, we demonstrate the time complexity analysis of the addition operation. Addition is usually assumed to take constant time O(1), but in terms of the number of bits it takes linear time O(N). We also present the space complexity, along with an implementation of addition using bitwise operations.
Table of contents:
 Introduction to Analysis of Addition
 Basics of Bitwise Operations
 Adding two numbers using bitwise operators
 Time and Space Complexity of addition
Let us explore the addition operation in more depth.
Introduction to Analysis of Addition
Basic arithmetic operations like addition are assumed to be constant-time operations, but in reality they are not. In practical implementations, addition and subtraction are treated as constant-time operations, while multiplication and division are considered costlier operations and require extra clock cycles.
In summary, the Time Complexity of Addition Operation is:
Operation | Real TC | Parallel TC | Assumed TC
Addition  | O(logN) | O(1)        | O(1)
The space complexity of Addition is O(1).
Note:
 N is the number to be added and logN is the number of bits in N.
 Real Time Complexity: Time Complexity of Algorithms that are usually used.
 Parallel Time Complexity: Time Complexity of the algorithm that is run in parallel.
 Assumed Time Complexity: Time Complexity assumed in practice and in the analysis of other algorithms involving these operations.
Data is represented internally in binary format in a computing device, so basic operations like addition (+) work on binary data. To understand the different arithmetic operations and why they are not constant time O(1), we need some background on binary data and bitwise operations.
Basics of Bitwise Operations
We know that computer stores all kinds of data (videos, files, photos, etc.) in the form of binary numbers 0s and 1s. These 0s and 1s are called bits and the various operations that can be carried out on these binary numbers are called bitwise operations.
In short, the time complexity of these bitwise operations is:
Bitwise operation | Time Complexity | Parallel algorithm
AND         | O(N) | O(1)
OR          | O(N) | O(1)
NOT         | O(N) | O(1)
XOR         | O(N) | O(1)
LEFT SHIFT  | O(N) | O(1)
RIGHT SHIFT | O(N) | O(1)
Note: N is the number of bits
You will understand the time complexity better once we see how these bitwise operations work and why the parallel algorithm is the common case on all modern computing devices.
Let us take an example and see how each of these operators work.
Consider 2 decimal numbers a and b.
a = 25 the binary equivalent of 25 is 00011001
b = 14 the binary equivalent of 14 is 00001110

Bitwise AND: returns 1 only if both bits are 1.

Bitwise OR: returns 1 if either of the bits is 1.

Bitwise NOT: returns the complement of each bit.

Bitwise XOR: returns 1 only if the two bits are different, that is, exactly one of them is 1.

Bitwise Left Shift
When a number is shifted left by three positions, three 0s are appended on the right side, after the least significant bit, moving every existing bit towards the left.
Bitwise Right Shift
When a number is shifted right by three positions, three 0s are inserted on the left side, before the most significant bit, and every existing bit moves towards the right (the three lowest bits are dropped).
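Each of these operators can be tried directly in Python on our example values a = 25 and b = 14 (a quick sketch; `bin` just renders the result in binary):

```python
# Demonstrating each bitwise operator on a = 25 and b = 14.
a = 25  # binary 00011001
b = 14  # binary 00001110

print(bin(a & b))   # AND:  0b1000    (8)
print(bin(a | b))   # OR:   0b11111   (31)
print(bin(a ^ b))   # XOR:  0b10111   (23)
print(~a)           # NOT:  -26 (Python integers use two's complement, so ~a == -a - 1)
print(bin(a << 3))  # left shift by 3:  0b11001000 (200)
print(bin(a >> 3))  # right shift by 3: 0b11       (3)
```

Note that Python integers are unbounded, so NOT is best understood through the identity ~a == -a - 1 rather than as flipping a fixed number of bits.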
Now since we have got the idea of how the bitwise operators work, let us move on to adding two numbers without the addition operator.
Adding two numbers using bitwise operators
Let us first take a look at how addition takes place at the binary level and understand it before trying to do it with bitwise operators.
Binary addition is quite similar to ordinary decimal addition. The basic rules for adding two bits are:
1 + 0 = 0 + 1 = 1
0 + 0 = 0
1 + 1 = 10, which is the binary equivalent of 2
An important point to note is that when we get 10, the 1 is carried over to the next bit position and the 0 is kept in the current position.
A truth table gives a better picture of how the binary addition of two bits takes place:

A | B | Sum | Carry
0 | 0 | 0   | 0
0 | 1 | 1   | 0
1 | 0 | 1   | 0
1 | 1 | 0   | 1

From the truth table, we infer that:
 The Carry expression is A & B
 The Sum expression is A ^ B
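These two expressions can be verified in a couple of lines of Python (a quick check against the truth table above):

```python
# Verifying the half-adder expressions for single bits:
# Sum = A ^ B, Carry = A & B.
for A in (0, 1):
    for B in (0, 1):
        print(f"A={A} B={B}  Sum={A ^ B}  Carry={A & B}")
```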
Using the above two expressions the addition of any two numbers can be done as follows.
Steps
 Take two positive numbers a and b as input.
 While b is not equal to 0, repeat:
a. Find the carry value (a & b).
b. Find the sum value (a ^ b) and store it in the variable a.
c. Shift the carry to the left by 1 bit and store it in b.
 When b becomes 0, return a as the final sum.
def Bitwise_add(a, b):
    while b != 0:
        # Carry value is calculated
        carry = a & b
        # Sum value is calculated and stored in a
        a = a ^ b
        # The carry value is shifted towards the left by one bit
        b = carry << 1
    return a  # returns the final sum
The whole idea behind this code is that the carry gets added back into the sum value. So we find the carry separately using the expression a & b and use it again to update the sum. We keep doing this till the carry value becomes 0.
Another important point to note here is that we shift the carry towards the left by 1 bit. That is because the carry is added to the next bit position rather than the current one.
We keep on repeating this process till the carry value, that is a & b, becomes 0. Finally, we get the required sum of the two numbers.
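To make the carry propagation concrete, here is a traced variant of the same loop (the helper name `bitwise_add_traced` is ours, added just for illustration), run on our earlier example 25 + 14:

```python
def bitwise_add_traced(a, b):
    """Same loop as Bitwise_add, printing each iteration
    to show how the carry propagates."""
    step = 1
    while b != 0:
        carry = a & b      # bits where a carry is generated
        a = a ^ b          # partial sum without carries
        b = carry << 1     # carry moves to the next bit position
        print(f"step {step}: a = {a:08b}, b (carry) = {b:08b}")
        step += 1
    return a

print(bitwise_add_traced(25, 14))  # prints 3 steps, then 39
```

After three iterations the carry dies out and the function returns 39, which is indeed 25 + 14.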
Bitwise add using Recursion
Adding the numbers using the bitwise operators can also be done in a recursive manner. The same logic takes place but instead of a loop, we use a recursive function to do the Addition.
def Bitwise_add(a, b):
    if b == 0:
        return a
    else:
        return Bitwise_add(a ^ b, (a & b) << 1)
In this code, a ^ b is the sum expression and (a & b) << 1 is the carry expression after shifting, so they are passed directly as input to the recursive call. When the carry expression becomes 0, that is, when b becomes 0, the recursion stops.
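The chain of recursive calls can be made visible with a traced variant (the name `bitwise_add_rec` and the `depth` parameter are ours, for illustration only):

```python
def bitwise_add_rec(a, b, depth=0):
    # Traced version of the recursive bitwise addition:
    # prints the arguments at each level of recursion.
    print("  " * depth + f"Bitwise_add({a}, {b})")
    if b == 0:
        return a
    return bitwise_add_rec(a ^ b, (a & b) << 1, depth + 1)

print(bitwise_add_rec(25, 14))
# call chain: (25, 14) -> (23, 16) -> (7, 32) -> (39, 0) -> returns 39
```

Each level of recursion corresponds to one iteration of the loop in the previous version, so the recursion depth equals the number of carry-propagation rounds.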
Time and Space Complexity of addition
The time complexity of the algorithm is O(N),
where N is the number of bits in the numbers. The space complexity of the algorithm is O(1).
Given a number M, the number of bits N is equal to log M (with log base 2). So adding two numbers of magnitude M is not a constant-time O(1) operation; it takes O(log M) time.
Therefore, addition takes time linear in the number of bits.
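We can observe this growth empirically by counting loop iterations (the helper `add_iterations` is ours, for illustration). A worst case is adding 1 to a number whose binary representation is all 1s, which forces the carry through every bit position:

```python
def add_iterations(a, b):
    """Counts the loop iterations of the bitwise addition;
    each iteration is one round of carry propagation."""
    count = 0
    while b != 0:
        a, b = a ^ b, (a & b) << 1
        count += 1
    return count

# (1 << bits) - 1 is a number of `bits` ones, e.g. 0b1111 for bits = 4.
for bits in (4, 8, 16, 32):
    print(bits, add_iterations((1 << bits) - 1, 1))
```

The iteration count grows in step with the bit length, matching the O(N) bound above.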
Note: In computing, data has a fixed size, usually 32 or 64 bits. Because of this, N (the number of bits) is bounded, and in this view the addition operation can be considered constant time.
In real systems, each bitwise operation on a machine word is executed in constant time O(1), because the hardware processes all bits in parallel. Our loop still runs up to N times, which gives addition its O(N) time. If, instead, each bitwise operation itself took linear time, the time complexity of addition would be O(N^2), where N is the number of bits.
Therefore, it is practical to treat addition as an O(N)-time operation in terms of bits; for simplicity, you can assume the addition operation takes O(1) time.
Therefore, if number N is to be added, then the Time Complexity is as follows:
Operation | Usual TC | Optimal TC | Assumed TC
Addition  | O(logN)  | O(logN)    | O(1) [parallel]
This is because number N has logN bits.
The space complexity of addition is O(1).
This is because only one extra variable, the carry, is used in the addition operation.
No asymptotically better algorithm for addition exists. With this article at OpenGenus, you must have a strong idea of the time and space complexity of addition.