Table of Contents
 Understanding Microarchitecture
 Introduction to Pollack's rule
 The rule
 How Complexity Affects Performance
 Conclusion
Understanding Microarchitecture
Microarchitecture is the internal design and structure of the CPU: how its components are organized so that it can carry out its core tasks efficiently, i.e., getting data, processing instructions, and producing output.
 Getting Data: Microarchitecture helps the CPU know how to find the information it needs from the computer's memory.
 Processing Instructions: It guides the CPU in carrying out instructions, like performing calculations or operations on the data.
 Producing Output: Microarchitecture ensures that the CPU provides the final result or output after executing all the required tasks.
Key Takeaways: Pollack's rule
 In computer science, Pollack's Rule tells us that making a microprocessor more complex doesn't always give us a proportional boost in performance. The improvement in speed tends to grow slower than the added complexity. So, more complexity doesn't mean a straight-up boost in speed and efficiency.
Introduction to Pollack's rule
Pollack's Rule is a guideline in computer science that suggests how a microprocessor's performance and power consumption are affected when the processor becomes more complex. It says that when the microarchitecture, which refers to how the processor's components are designed and organized, becomes more advanced or complex, the performance improvement isn't directly proportional to this increase in complexity.
Pollack's rule states: performance improvement tends to increase at a rate that is roughly the square root of the increase in microarchitecture complexity.
On the other hand, when a processor becomes more complex, the power it consumes tends to increase more in line with that complexity. In simpler terms, as a processor's design becomes more sophisticated, the boost in performance lags behind the increase in complexity, while power consumption rises roughly in proportion to it.
Pollack's Rule says that the improvement in how fast the processor can work (its performance) doesn't directly match how much more complicated engineers make it. If they double the complexity, the actual speed might not double; the performance boost is more like the square root of that complexity increase. For instance, if the complexity becomes four times greater, the speed might only become two times faster (because the square root of four is two).
However, when the engineers make the processor more complex, the power it needs (its consumption) goes up more directly with the complexity. If they double the complexity, the power it consumes might double as well.
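The contrast between the two scaling behaviors described above can be sketched numerically. This is a minimal illustration of the rule-of-thumb model (square-root scaling for performance, linear scaling for power), not a real CPU model:

```python
import math

def perf_gain(complexity_factor: float) -> float:
    # Per Pollack's Rule, performance grows roughly as the
    # square root of the increase in complexity.
    return math.sqrt(complexity_factor)

def power_gain(complexity_factor: float) -> float:
    # Power consumption, by contrast, grows roughly linearly
    # with complexity in this simple model.
    return complexity_factor

for c in (2, 4):
    print(f"{c}x complexity -> {perf_gain(c):.2f}x performance, "
          f"{power_gain(c):.0f}x power")
```

Running this prints that doubling complexity yields only about 1.41x the performance but 2x the power, and quadrupling complexity yields 2x the performance but 4x the power, matching the examples in the text.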
The rule
Pollack's Rule, named after Fred Pollack, an engineer at Intel, became significant in 2008 with the rise of multicore computing. It addresses concerns about the high electricity consumption of computers. In essence, the rule suggests that increasing the number of transistors in a single CPU or a multiprocessor system doesn't necessarily lead to proportional performance improvements. While some argue that a device with many low-power cores could perform well, the original formulation of Pollack's Rule was seen as a limit on performance gains as transistor counts increased. This means that simply adding more transistors may not translate into significantly better performance for certain types of processors.
How Complexity Affects Performance

The relationship between performance (P) and complexity (C) can be expressed as P ∝ √C, indicating that performance is roughly proportional to the square root of the increase in complexity.

When complexity doubles, performance increases by a factor of √2 ≈ 1.41, so P(new) ≈ 1.41 × P(old). This implies that the new performance (P(new)) is roughly 40% more than the original performance (P(old)).

A specific functional relationship capturing this proportionality is P = k√C, where k is a constant of proportionality. This formula provides a quantitative representation of the observed relationship between performance and complexity.
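The formula P = k√C can be checked numerically. In this sketch the complexity values and the constant k are arbitrary placeholders, chosen only to show that the ratio between the new and old performance is independent of k:

```python
import math

def performance(complexity: float, k: float = 10.0) -> float:
    # P = k * sqrt(C); k is a hypothetical proportionality constant.
    return k * math.sqrt(complexity)

p_old = performance(100)  # baseline complexity (arbitrary units)
p_new = performance(200)  # complexity doubled

print(round(p_new / p_old, 2))  # 1.41, i.e. about a 40% improvement
```

Because k cancels in the ratio p_new / p_old, the 1.41x gain from doubling complexity holds regardless of the constant's actual value.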
Conclusion
In a nutshell, Pollack's Rule states that when computer engineers make a chip more complex, the performance boost isn't as significant as the increase in complexity. On the flip side, the power consumption tends to rise more directly with complexity. In simpler terms, making a computer chip fancier might not make it much faster, but it's likely to demand more power to perform its tasks.