Frequency scaling

In computer architecture, frequency scaling (also known as frequency ramping) is the technique of increasing a processor's frequency to enhance the performance of the system containing the processor in question. Frequency ramping was the dominant force in commodity processor performance increases from the mid-1980s until roughly the end of 2004.

The effect of processor frequency on computer speed can be seen by the following:

{\displaystyle \mathrm{Runtime} = \frac{\mathrm{Instructions}}{\mathrm{Program}} \times \frac{\mathrm{Cycles}}{\mathrm{Instruction}} \times \frac{\mathrm{Time}}{\mathrm{Cycle}},}

where instructions per program is the total number of instructions executed in a given program, cycles per instruction is a program-dependent, architecture-dependent average value, and time per cycle is by definition the inverse of processor frequency.[1] An increase in frequency thus decreases runtime.
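This relationship (sometimes called the "iron law" of processor performance) can be evaluated directly. The numbers below are hypothetical examples chosen for illustration, not figures from the article:

```python
# Illustrative calculation of runtime = (instructions/program) x
# (cycles/instruction) x (time/cycle). All values are hypothetical.

instructions_per_program = 1_000_000_000   # total dynamic instruction count
cycles_per_instruction = 1.5               # average CPI (program- and architecture-dependent)
frequency_hz = 2.0e9                       # 2 GHz clock; time per cycle = 1 / frequency

runtime_s = instructions_per_program * cycles_per_instruction / frequency_hz
print(f"runtime = {runtime_s:.3f} s")      # 0.750 s

# Doubling the frequency halves the time per cycle, and thus the runtime:
runtime_2x_s = instructions_per_program * cycles_per_instruction / (2 * frequency_hz)
print(f"runtime at 2x frequency = {runtime_2x_s:.3f} s")  # 0.375 s
```

Note that the first two factors are properties of the program and the architecture; frequency scaling attacks only the third.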

However, power consumption in a chip is given by the equation

{\displaystyle P = C \times V^{2} \times F,}

where P is power consumption, C is the capacitance being switched per clock cycle, V is voltage, and F is the frequency (cycles per second).[2] Increases in frequency thus increase the amount of power used in a processor. Increasing processor power consumption led ultimately to Intel's May 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm.[3]
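The dynamic power equation can be sketched numerically. The capacitance, voltage, and frequency values below are hypothetical, chosen only to show the linear dependence of power on frequency:

```python
# Dynamic (switching) power P = C * V^2 * F, with hypothetical example values.

def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Dynamic power in watts from switched capacitance, supply voltage, and clock frequency."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

# Hypothetical chip: 1 nF of switched capacitance, 1.2 V supply, 3 GHz clock.
p_base = dynamic_power(1e-9, 1.2, 3.0e9)
print(f"power = {p_base:.2f} W")                    # 4.32 W

# Raising the frequency by 50% raises power by 50% at the same voltage:
p_fast = dynamic_power(1e-9, 1.2, 4.5e9)
print(f"power at 1.5x frequency = {p_fast:.2f} W")  # 6.48 W
```

In practice the situation is worse than linear, since higher frequencies typically also require a higher supply voltage, and power grows with the square of V.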

Moore's law, despite predictions of its demise, is still in effect. Despite power issues, transistor densities are still doubling every 18 to 24 months. With the end of frequency scaling, these new transistors (which are no longer needed to facilitate frequency scaling) can instead be used to add extra hardware, such as additional cores, to facilitate parallel computing – a technique that is being referred to as parallel scaling.

The end of frequency scaling as the dominant cause of processor performance gains has caused an industry-wide shift to parallel computing in the form of multicore processors .
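A minimal sketch of parallel scaling, using Python's standard multiprocessing module: the same total work is divided among several cores instead of relying on a faster clock. The workload (summing a range of integers) is an arbitrary stand-in for any divisible computation:

```python
# Parallel scaling sketch: split one workload across multiple cores.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [lo, hi) -- a stand-in for any divisible workload."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    # Split the range into 4 chunks, one per (hypothetical) core.
    chunks = [(i * n // 4, (i + 1) * n // 4) for i in range(4)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    # The parallel result matches the closed-form sum 0 + 1 + ... + (n-1).
    assert total == n * (n - 1) // 2
    print(total)
```

Unlike a frequency increase, this speedup applies only to the parallelizable portion of a program; the serial remainder bounds the achievable gain (Amdahl's law).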

Note: as of 2016, the comment on Moore's law above appears to be misdirected and outdated. See for example: [4]


  1. ^ John L. Hennessy and David A. Patterson. Computer Architecture: A Quantitative Approach. 3rd edition, 2002. Morgan Kaufmann, ISBN 1-55860-724-2. Page 43.
  2. ^ J. M. Rabaey. Digital Integrated Circuits. Prentice Hall, 1996.
  3. ^ Laurie J. Flynn. Intel Halts Development of 2 New Microprocessors. New York Times, May 8, 2004.
  4. ^
