Program counter
Register that stores where in a program a processor is executing
A program counter (PC) is a register that indicates where a processor is in its program sequence. It is also commonly called the instruction pointer (IP) in Intel x86 and Itanium microprocessors, and sometimes the instruction address register (IAR), the instruction counter, or simply part of the instruction sequencer.
Usually, a PC stores the memory address of an instruction. Further, it usually is incremented after fetching an instruction, and therefore points to the next instruction to be executed. For a processor that increments before fetch, the PC points to the instruction being executed. In some processors, the PC points some distance beyond the current instruction. For instance, in the ARM7, the value of PC visible to the programmer reflects instruction prefetching and reads as the address of the current instruction plus 8 in ARM State, or plus 4 in Thumb State. For modern processors, the location of execution in the program is complicated by instruction-level parallelism and out-of-order execution.
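The ARM7 behavior described above can be sketched as a small helper. The offsets (+8 in ARM state, +4 in Thumb state) come from the text; the function name and interface are illustrative, not part of any real API.

```python
# Sketch of the programmer-visible PC on an ARM7-style core: because of
# instruction prefetching, reading the PC yields the current instruction's
# address plus a fixed pipeline offset.

def visible_pc(current_addr: int, thumb_state: bool) -> int:
    """Return the value a program reads from the PC register."""
    offset = 4 if thumb_state else 8  # Thumb state: +4, ARM state: +8
    return current_addr + offset

# An instruction at 0x8000 reading the PC sees 0x8008 in ARM state
assert visible_pc(0x8000, thumb_state=False) == 0x8008
# ...and 0x8004 in Thumb state
assert visible_pc(0x8000, thumb_state=True) == 0x8004
```

Code that computes PC-relative addresses on such a core must account for this offset rather than assume the PC holds the current instruction's own address.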
By default, a processor fetches instructions sequentially from memory, but a control transfer instruction changes the sequence by writing a new value into the PC. A branch causes the next instruction to be fetched from elsewhere in memory. A function call not only branches but also saves the value of the PC; a return branches to that saved value, resuming execution with the instruction following the call. A conditional transfer changes the PC only when some condition holds, letting the computer follow a different sequence under different conditions.
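The branch, call, and return behavior above can be illustrated with a toy interpreter. The instruction set here (ADD, JMP, CALL, RET, HALT) is entirely hypothetical; the point is how each control transfer manipulates the PC.

```python
# Minimal sketch of control transfer: the PC normally steps forward,
# but JMP overwrites it, CALL saves it before overwriting it, and
# RET restores the saved value.

def run(program):
    pc, acc, call_stack = 0, 0, []
    while pc < len(program):
        op, arg = program[pc]
        pc += 1                      # default: fall through to the next instruction
        if op == "ADD":
            acc += arg
        elif op == "JMP":            # unconditional branch: write a new PC value
            pc = arg
        elif op == "CALL":           # save the PC (address of the instruction after the call)
            call_stack.append(pc)
            pc = arg
        elif op == "RET":            # restore the saved PC to resume after the call
            pc = call_stack.pop()
        elif op == "HALT":
            break
    return acc

# Main code calls the subroutine at index 3, then halts.
prog = [("CALL", 3), ("HALT", None), ("ADD", 99),   # ADD 99 is never reached
        ("ADD", 5), ("RET", None)]                  # subroutine: add 5, return
print(run(prog))  # 5
```

Note that RET works purely by branching to the saved PC value; nothing else distinguishes it from an ordinary branch.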
Hardware implementation
In a simple central processing unit (CPU), the PC is a digital counter (which is the origin of the term "program counter") that may be one of several hardware registers. The instruction cycle begins with a fetch, in which the CPU places the value of the PC on the address bus to send it to the memory. The memory responds by sending the contents of that memory location on the data bus. (This is the stored-program computer model, in which a single memory space contains both executable instructions and ordinary data.) Following the fetch, the CPU proceeds to execution, taking some action based on the memory contents that it obtained. At some point in this cycle, the PC will be modified so that the next instruction executed is a different one (typically, incremented so that the next instruction is the one starting at the memory address immediately following the last memory location of the current instruction).
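The fetch–execute cycle just described can be sketched as a loop over a single memory holding both instructions and data, in the stored-program style the text mentions. The opcodes and memory layout are invented for illustration.

```python
# Sketch of the instruction cycle: the PC supplies the fetch address,
# is incremented past the instruction just fetched, and the CPU then
# acts on what the memory returned. Instructions and data share one
# address space (stored-program model).

memory = [
    ("LOAD", 5),     # 0: acc = memory[5]
    ("ADD", 6),      # 1: acc += memory[6]
    ("STORE", 7),    # 2: memory[7] = acc
    ("HALT", None),  # 3: stop
    None,            # 4: (unused)
    10, 32, 0,       # 5-7: data cells in the same memory
]

pc, acc = 0, 0
while True:
    op, operand = memory[pc]  # fetch: PC value drives the address bus
    pc += 1                   # PC now points at the next instruction
    if op == "LOAD":
        acc = memory[operand]
    elif op == "ADD":
        acc += memory[operand]
    elif op == "STORE":
        memory[operand] = acc
    elif op == "HALT":
        break

print(memory[7])  # 42
```

Incrementing the PC immediately after the fetch is what makes sequential execution the default; every control transfer is then just a different write to the same register.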
Like other processor registers, the PC may be a bank of binary latches, each one representing one bit of the value of the PC. The number of bits (the width of the PC) relates to the processor architecture. For instance, a “32-bit” CPU may use 32 bits to be able to address 2^32 units of memory. On some processors, the width of the program counter instead depends on the addressable memory; for example, some AVR microcontrollers have a PC which wraps around after 12 bits.
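Wraparound at a fixed register width, as with the 12-bit AVR program counters mentioned above, amounts to incrementing modulo 2 to the width. A minimal sketch (the helper is illustrative, not any device's actual API):

```python
# Sketch: a PC of fixed width wraps modulo 2**width when incremented,
# because the register simply has no bits above its width.

def step_pc(pc: int, width: int) -> int:
    """Increment a program counter of the given bit width."""
    return (pc + 1) & ((1 << width) - 1)  # mask to the register width

assert step_pc(0x0FFE, 12) == 0x0FFF
assert step_pc(0x0FFF, 12) == 0x000   # 12-bit PC wraps back to zero
assert step_pc(0xFFFF_FFFF, 32) == 0  # a 32-bit PC wraps at 2**32
```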
If the PC is a binary counter, it may increment when a pulse is applied to its COUNT UP input, or the CPU may compute some other value and load it into the PC by a pulse to its LOAD input.
To identify the current instruction, the PC may be combined with other registers that identify a segment or page. This approach permits a PC with fewer bits by assuming that most memory units of interest are within the current vicinity.
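A concrete instance of combining a narrow PC with a segment register is real-mode x86, where a 16-bit segment and the 16-bit instruction pointer combine into a 20-bit physical address (physical = segment × 16 + offset). The helper below is a sketch of that scheme:

```python
# Sketch: forming a physical address from a 16-bit segment register and a
# 16-bit offset (the PC/IP), in the style of real-mode x86. The narrow PC
# only needs to cover the current segment's vicinity.

def physical_address(segment: int, offset: int) -> int:
    """Combine segment and offset into a 20-bit physical address."""
    return ((segment << 4) + offset) & 0xFFFFF  # 20-bit address space

assert physical_address(0x1234, 0x0010) == 0x12350
```

Many segment:offset pairs map to the same physical address, which is a direct consequence of the segment merely shifting the window the 16-bit PC can reach.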
Consequences in machine architecture
Use of a PC that normally increments assumes that what a computer does is execute a usually linear sequence of instructions. Such a PC is central to the von Neumann architecture. Thus programmers write a sequential control flow even for algorithms that do not have to be sequential. The resulting “von Neumann bottleneck” led to research into parallel computing, including non-von Neumann or dataflow models that did not use a PC; for example, rather than specifying sequential steps, the high-level programmer might specify desired function and the low-level programmer might specify this using combinatory logic.
This research also led to ways to make conventional, PC-based CPUs run faster, including:
- Pipelining, in which different hardware in the CPU executes different phases of multiple instructions simultaneously.
- The very long instruction word (VLIW) architecture, where a single instruction can achieve multiple effects.
- Out-of-order execution combined with branch prediction, in which the CPU prepares and executes subsequent instructions outside the regular sequence while preserving the program's visible results.
Consequences in high-level programming
Modern high-level programming languages still follow the sequential-execution model and, indeed, a common way of locating programming errors is a manual “procedure execution” in which the programmer's finger marks the point of execution as a PC would. The high-level language is essentially the machine language of a virtual machine, too complex to be built as hardware but instead emulated or interpreted by software.
However, new programming models transcend sequential-execution programming:
- When writing a multi-threaded program, the programmer may write each thread as a sequence of instructions without specifying the timing of any instruction relative to instructions in other threads.
- In event-driven programming, the programmer may write sequences of instructions to respond to events without specifying an overall sequence for the program.
- In dataflow programming, the programmer may write each section of a computing pipeline without specifying the timing relative to other sections.
This article was imported from Wikipedia and is available under the Creative Commons Attribution-ShareAlike 4.0 License. Content has been adapted to SurfDoc format. Original contributors can be found on the article history page.