What were wait-states, and why was it only an issue for PCs?



























PC compatibles in the 1980s were often advertised as having zero, one, two, or sometimes more "wait states". Zero wait states was the best.



Basically, the wait states I'm asking about are due to the main system DRAM being too slow for the CPU, so extra bus cycles were added to cover this latency, which reduced overall processing speed.
I'm not asking about cases where the CPU is blocked from accessing main system RAM by some peripheral doing DMA, for example. Obviously, that's a feature for improving performance.



But I don't recall this being an issue on comparatively-priced machines with 16 or 32-bit Motorola processors, and running at similar clock speeds.



What was the cause of the wait states, precisely, and how come other low-cost home computers were able to avoid this performance problem?



































    Brian, this question is rather broad. It's almost like asking why TTL chips need current and why they don't all use the same amount. It might be a good idea to narrow it down a bit.

    – Raffzahn
    7 hours ago











  • Many processors have had some notion of variable memory timing that can be configured one way or another. Some dynamically, some by data driven configuration, others by simply hard-coding one specific timing. See, for example, retrocomputing.stackexchange.com/questions/9562/…

    – Erik Eidt
    6 hours ago
















ibm-pc memory






edited 6 hours ago

asked 7 hours ago

Brian H










3 Answers






































It was an issue on all machines — wait states resolve any situation in which the part a processor needs a response from isn't yet ready to respond — but only in the commoditised world of the PC was it a variable and therefore worth putting in the advertising.



In the Atari ST, wait states are inserted if the 68000 tries to access RAM during a video slot (two of every four cycles), or when it accesses some peripherals (e.g. there is a fixed one-cycle delay for accessing the sound chip).



The Amiga differentiates between chip RAM and fast RAM. Chip RAM is the RAM shared with the coprocessors, and it is where the CPU may encounter wait states. Small Amigas like the unexpanded A600 have only chip RAM.



Conversely, on the PC, processors scaled much more widely in speed, the underlying reasons for potential waits were much more variable, and one manufacturer would likely do a better job than another. So it was worth boasting about if your machine had a good number rather than a bad one.






answered 6 hours ago
Tommy
























  • I was referring to wait states introduced by DRAM latency, not contention with DMA devices sharing the bus. I'll try to clarify...

    – Brian H
    6 hours ago











  • It wasn't an issue with all machines, but it was an issue which spread across many types of machines. On something like a VIC-20 or an Apple II, the memory system will always respond to a request by the time the CPU would be interested in the response. Conversely, on something like a 6502-based Nibbler arcade machine, ROM accesses were designed to have a wait state (although a machine used in competition had a broken wait-state circuit whose failure simply caused the game to run a little faster than normal, giving the player an unfair advantage).

    – supercat
    5 hours ago






  • There's an obvious reason for that: the Apple II and VIC-20 have CPUs running at 1 MHz, while the RAM is faster. As such the CPU can access the RAM without any wait states; it's even fast enough for the video hardware to have a go on alternate cycles, usually without contention. Once you've got a CPU that's faster than its RAM, though, wait states are inevitable.

    – Matthew Barber
    5 hours ago













  • There are small machines with wait states induced only by slow RAM — e.g. the Electron accesses ROM at 2 MHz but RAM at at most 1 MHz, and forces delays on the CPU to deal with the latter. Does that count?

    – Tommy
    5 hours ago











  • Also: the sound chip example on the ST is exactly the same phenomenon, I'd argue. There's no contention, just a memory-mapped area where the underlying device is slower than the processor, so the processor is made to wait.

    – Tommy
    3 hours ago

































The DRAM chips used for memory needed a certain minimum memory-cycle length, for example 1000 ns. CPUs also needed several clock cycles to perform a memory access; an 8086, for example, takes 4 cycles. If the CPU runs at 5 MHz, the memory access takes only 800 ns, which is too fast for the memory, so one wait state is needed to stretch the bus cycle to 1000 ns. Lowering the CPU clock to 4 MHz would allow it to run with zero wait states.

Basically, wait states were needed because memory was slower than what the CPU could access.

The advertising does tell you something about system performance. If one system has 1000 ns memory and another has 800 ns memory, a 5 MHz 8086 can run at 0 ws with the faster memory but needs 1 ws with the slower one. In theory the 0 ws machine can transfer 25% more data in the same time. Faster memory was certainly more expensive, so it was probably important to advertise why two identical-looking systems had a significant price difference.
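The arithmetic above can be sketched as a small helper; this is purely illustrative (the function name and interface are made up, not from any real tool):

```python
import math

def wait_states(cpu_mhz, cycles_per_access, mem_cycle_ns):
    """Smallest number of extra bus cycles needed so the total
    bus cycle is at least as long as the DRAM cycle time."""
    clock_ns = 1000.0 / cpu_mhz              # one CPU clock period in ns
    base_ns = cycles_per_access * clock_ns   # zero-wait-state bus cycle
    if base_ns >= mem_cycle_ns:
        return 0
    # each wait state stretches the bus cycle by one more clock period
    return math.ceil((mem_cycle_ns - base_ns) / clock_ns)

# 8086 at 5 MHz, 4-cycle bus, 1000 ns DRAM -> 1 wait state
print(wait_states(5, 4, 1000))   # 1
# same CPU at 4 MHz: the 4 cycles alone already take 1000 ns
print(wait_states(4, 4, 1000))   # 0
# 800 ns DRAM keeps up with the 5 MHz part
print(wait_states(5, 4, 800))    # 0
```

This also makes the 25% figure concrete: the 0 ws machine spends 4 clocks per access, the 1 ws machine 5, so the former completes 5/4 as many accesses in the same time.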



















































Preface 1: As asked, the question is far too broad for any answer to make much sense, especially since it makes some assumptions while at the same time spanning all the different CPU memory-interface technologies there were.



Preface 2: Design decisions about wait-state design and application are a topic for a whole course on system design. It's next to impossible to give a satisfying answer in general within a single unspecific RC.SE question.






    PC compatibles in the 1980s were often advertised as having zero, one, two, or sometimes more "wait states". Zero wait states was the best.




It looks that way at first sight, but looking closer, systems without wait states are more often than not the slower ones.




    Basically, the RAM was too slow so extra bus cycles were added to make up for this latency. That reduced the overall processing speed.




No, not really. It's far more complex than that, so it may be helpful to restrict this to the core of wait-state usage on IBM and compatible machines of the 80286 era, where the question was most prominent and seems to have originated.



To start with, a wait state isn't an issue of memory but of the CPU: it's the CPU design that requires memory access to be synchronized with the clock. An 8086-class CPU always takes four clock cycles per memory access, while the 80286 takes two. For the following we go with the 80286, as its timing is quite different from the 8088's.
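To put numbers on that difference, here is a quick sketch (a hypothetical helper, just reusing the cycle counts and clock speeds mentioned in this answer):

```python
def bus_cycle_ns(cpu_mhz, cycles_per_access):
    """Zero-wait-state bus-cycle length for a given CPU clock."""
    return cycles_per_access * 1000.0 / cpu_mhz

# 8086 at 5 MHz, four clocks per access: a roomy 800 ns window
print(bus_cycle_ns(5, 4))              # 800.0
# 80286 at 6 MHz, two clocks per access: the window shrinks to ~333 ns
print(round(bus_cycle_ns(6, 2), 1))    # 333.3
# 80286 at 8 MHz: only 250 ns for decode plus RAM access
print(bus_cycle_ns(8, 2))              # 250.0
```

So even before decoding overhead is considered, the 286's two-cycle bus leaves the memory system a far tighter budget than the 8086's four-cycle bus, despite a similar clock rate.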



For an AT running at 6 MHz this two-cycle structure gives a basic bus-cycle time of 333 ns, but the structure is a bit more complex. The first cycle is a setup cycle where all control signals are presented (called TS), while the second, called the command cycle (TC), contains the operation itself and the handshaking. This cycle is extended as long as there is no acknowledge (called Ready) from the memory system. Intel added some nifty features to enable full use of these two cycles for the memory operation, like address pipelining, which presents a stable address even before the first cycle. Using this requires non-trivial circuitry.



IBM chose a simpler approach. Decoding waits for the rising edge of ALE, when all signals (status like direction and memory/IO) are valid, which simplifies decoding a lot. ALE becomes valid a few ns after the first half of TS, reducing the time available for decoding and memory access to somewhat less than 250 ns. The AT was designed to run with 200 ns RAM access time, so this was already tight, especially considering that Ready needs to be asserted by the memory system before the second half of TC, effectively reducing the timing to less than 133 ns. Far too short.



For the AT (5170 type 139) IBM decided to be better safe than sorry and added a wait state. This also made sure that access times for I/O cards stayed within the limits set by the PC. Equally important, with a wait state they could be sure that a less-than-perfect batch of RAMs wouldn't compromise quality. Considering that the AT was about three times faster than the PC anyway, there was no need to take any risk.



With the later PC-XT 286 (5162), basically the same design (and the same 200 ns memory), IBM went with zero wait states. Maybe they had become more confident.



Then again, it's just as possible that the whole system was designed to run at 8 MHz from the start and was only slowed down to 6 MHz for company-policy reasons. In that case the wait state makes a lot more sense, as an 8 MHz design (which is what IBM built) can only run with 200 ns RAM by adding a wait state. Similarly, it kept the slots compatible. The difference between the 6 MHz AT (type 139) and the 8 MHz one (type 239) is basically just the clock chip.



Bottom line: it all comes down to design decisions. With more sophisticated decoder circuitry, 200 ns RAM can work fine with an 8 MHz 80286 without wait states, as many other 80286 machines of the time showed.



Then there was the race to more MHz, cranking 80286 machines up from the original 6/8 MHz way past 12 or 16 MHz. At that point there was no memory fast enough to keep up. Even more sophisticated decoding, as NEAT boards for example added, couldn't help at the higher end.



It might be important to remember that memory access on the 8086 family was different from nearly all previous or contemporary CPUs, in that only data access was synchronous. Code was read ahead of time by the asynchronous BIU, resulting in far fewer unused cycles compared to, say, a 68k. That is eventually why Intel CPUs performed quite well in comparison.




    But I don't recall this being an issue on comparatively-priced machines with 16 or 32-bit Motorola processors, and running at similar clock speeds.




Comparatively priced is a rather vague term, considering that a fully equipped PC could well out-price a high-end workstation. Neither is clock speed the issue, as clock speed is only marginally related to memory speed. As mentioned before, it's all about memory access (and cycle) time. Other CPUs were hit by the same problem when clock speeds increased. A 68k used at minimum two cycles per access, which means 200 ns RAM is fine for an 8 MHz 68000 (assuming a simple decoding circuit). Anything faster will require wait states as well.




    What was the cause of the wait states, precisely, and how come other low-cost home computers were able to avoid this performance problem?




Because they were fricking slow :)) Considering the RAM speeds of the time, it's obvious why even upper-end machines like a Sun-1 or Sun-2 ran at 'only' 6 and 10 MHz. It wasn't until the 68020-based Sun-3 that 15 MHz was reached, enabled by the 68020's cache, as memory access was done with three wait states.



Even many, many years later (1992), Commodore's Amiga 4000 (A3640 CPU card) likewise used 3 wait states by default to adapt the 25 MHz CPU to slower memory.






    share|improve this answer


























      Your Answer








      StackExchange.ready(function() {
      var channelOptions = {
      tags: "".split(" "),
      id: "648"
      };
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function() {
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled) {
      StackExchange.using("snippets", function() {
      createEditor();
      });
      }
      else {
      createEditor();
      }
      });

      function createEditor() {
      StackExchange.prepareEditor({
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: false,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      imageUploader: {
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      },
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      });


      }
      });














      draft saved

      draft discarded


















      StackExchange.ready(
      function () {
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f9779%2fwhat-were-wait-states-and-why-was-it-only-an-issue-for-pcs%23new-answer', 'question_page');
      }
      );

      Post as a guest















      Required, but never shown

























      3 Answers
      3






      active

      oldest

      votes








      3 Answers
      3






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      6














      It was an issue on all machines — wait states resolve any situation in which the part a processor needs a response from isn't yet ready to respond — but only in the commoditised world of the PC was it a variable and therefore worth putting in the advertising.



      In the Atari ST, wait states are inserted if the 68000 tries to access RAM during a video slot (two of every four cycles), or when it accesses some peripherals (e.g. there is a fixed one-cycle delay for accessing the sound chip).



      The Amiga differentiates chip RAM and fast RAM. Chip RAM is that shared with the coprocessors and in which the CPU may encounter wait states. Small Amigas like the unexpanded A600 have only chip RAM.



      Conversely, on the PC processors scaled in processing speed much more widely, the underlying reasons for potential waits were much more variable, and one manufacturer would likely do a better job than another. So it warranted boasting about if your machine has a good number rather than a bad one.






      share|improve this answer
























      • I was referring to wait states introduced by DRAM latency, not contention with DMA devices sharing the bus. I'll try to clarify...

        – Brian H
        6 hours ago











      • It wasn't an issue with all machines, but it was an issue which spread across many types of machines. On something like a VIC-20 or an Apple II, the memory system will always respond to a request by the time the CPU would be interested in the response. Conversely, on something like a 6502-based Nibbler arcade machine, ROM accesses were designed to have a wait state (although a machine used in competition had a broken wait-state circuit whose failure simply caused the game to run a little faster than normal, giving the player an unfair advantage).

        – supercat
        5 hours ago






      • 2





        There's an obvious reason for that, in that the Apple II and VIC-20 have CPUs running at 1MHz, while the RAM is faster. As such the CPU can access the RAM on without any wait states; it's even fast enough for the video hardware to have a go on alternate cycles usually without contention. Once you've got a CPU that's faster than its RAM, wait states are inevitable though.

        – Matthew Barber
        5 hours ago













      • There are small machines with wait states induced only by slow RAM — e.g. the Electron accesses ROM at 2Mhz but RAM at a max 1Mhz and forces delays to the CPU to deal with the latter. Does that count?

        – Tommy
        5 hours ago











      • Also: the sound chip example on the ST is exactly the same phenomena, I'd argue. There's no contention, just a memory-mapped area where the underlying device is slower than the processor, so the processor is made to wait.

        – Tommy
        3 hours ago
















      6














      It was an issue on all machines — wait states resolve any situation in which the part a processor needs a response from isn't yet ready to respond — but only in the commoditised world of the PC was it a variable and therefore worth putting in the advertising.



      In the Atari ST, wait states are inserted if the 68000 tries to access RAM during a video slot (two of every four cycles), or when it accesses some peripherals (e.g. there is a fixed one-cycle delay for accessing the sound chip).



      The Amiga differentiates chip RAM and fast RAM. Chip RAM is that shared with the coprocessors and in which the CPU may encounter wait states. Small Amigas like the unexpanded A600 have only chip RAM.



      Conversely, on the PC processors scaled in processing speed much more widely, the underlying reasons for potential waits were much more variable, and one manufacturer would likely do a better job than another. So it warranted boasting about if your machine has a good number rather than a bad one.






      share|improve this answer
























      • I was referring to wait states introduced by DRAM latency, not contention with DMA devices sharing the bus. I'll try to clarify...

        – Brian H
        6 hours ago











      • It wasn't an issue with all machines, but it was an issue which spread across many types of machines. On something like a VIC-20 or an Apple II, the memory system will always respond to a request by the time the CPU would be interested in the response. Conversely, on something like a 6502-based Nibbler arcade machine, ROM accesses were designed to have a wait state (although a machine used in competition had a broken wait-state circuit whose failure simply caused the game to run a little faster than normal, giving the player an unfair advantage).

        – supercat
        5 hours ago






      • 2





        There's an obvious reason for that, in that the Apple II and VIC-20 have CPUs running at 1MHz, while the RAM is faster. As such the CPU can access the RAM on without any wait states; it's even fast enough for the video hardware to have a go on alternate cycles usually without contention. Once you've got a CPU that's faster than its RAM, wait states are inevitable though.

        – Matthew Barber
        5 hours ago













      • There are small machines with wait states induced only by slow RAM — e.g. the Electron accesses ROM at 2Mhz but RAM at a max 1Mhz and forces delays to the CPU to deal with the latter. Does that count?

        – Tommy
        5 hours ago











      • Also: the sound chip example on the ST is exactly the same phenomena, I'd argue. There's no contention, just a memory-mapped area where the underlying device is slower than the processor, so the processor is made to wait.

        – Tommy
        3 hours ago














      6












      6








      6







      It was an issue on all machines — wait states resolve any situation in which the part a processor needs a response from isn't yet ready to respond — but only in the commoditised world of the PC was it a variable and therefore worth putting in the advertising.



      In the Atari ST, wait states are inserted if the 68000 tries to access RAM during a video slot (two of every four cycles), or when it accesses some peripherals (e.g. there is a fixed one-cycle delay for accessing the sound chip).



      The Amiga differentiates chip RAM and fast RAM. Chip RAM is that shared with the coprocessors and in which the CPU may encounter wait states. Small Amigas like the unexpanded A600 have only chip RAM.



      Conversely, on the PC processors scaled in processing speed much more widely, the underlying reasons for potential waits were much more variable, and one manufacturer would likely do a better job than another. So it warranted boasting about if your machine has a good number rather than a bad one.






      share|improve this answer













      It was an issue on all machines — wait states resolve any situation in which the part a processor needs a response from isn't yet ready to respond — but only in the commoditised world of the PC was it a variable and therefore worth putting in the advertising.



      In the Atari ST, wait states are inserted if the 68000 tries to access RAM during a video slot (two of every four cycles), or when it accesses some peripherals (e.g. there is a fixed one-cycle delay for accessing the sound chip).



      The Amiga differentiates chip RAM and fast RAM. Chip RAM is that shared with the coprocessors and in which the CPU may encounter wait states. Small Amigas like the unexpanded A600 have only chip RAM.



      Conversely, on the PC processors scaled in processing speed much more widely, the underlying reasons for potential waits were much more variable, and one manufacturer would likely do a better job than another. So it warranted boasting about if your machine has a good number rather than a bad one.







      share|improve this answer












      share|improve this answer



      share|improve this answer










      answered 6 hours ago









      TommyTommy

      16.3k14780




      16.3k14780













      • I was referring to wait states introduced by DRAM latency, not contention with DMA devices sharing the bus. I'll try to clarify...

        – Brian H
        6 hours ago











      • It wasn't an issue with all machines, but it was an issue which spread across many types of machines. On something like a VIC-20 or an Apple II, the memory system will always respond to a request by the time the CPU would be interested in the response. Conversely, on something like a 6502-based Nibbler arcade machine, ROM accesses were designed to have a wait state (although a machine used in competition had a broken wait-state circuit whose failure simply caused the game to run a little faster than normal, giving the player an unfair advantage).

        – supercat
        5 hours ago






      • 2





        There's an obvious reason for that, in that the Apple II and VIC-20 have CPUs running at 1MHz, while the RAM is faster. As such the CPU can access the RAM on without any wait states; it's even fast enough for the video hardware to have a go on alternate cycles usually without contention. Once you've got a CPU that's faster than its RAM, wait states are inevitable though.

        – Matthew Barber
        5 hours ago













      • There are small machines with wait states induced only by slow RAM — e.g. the Electron accesses ROM at 2Mhz but RAM at a max 1Mhz and forces delays to the CPU to deal with the latter. Does that count?

        – Tommy
        5 hours ago











      • Also: the sound chip example on the ST is exactly the same phenomena, I'd argue. There's no contention, just a memory-mapped area where the underlying device is slower than the processor, so the processor is made to wait.

        – Tommy
        3 hours ago



















      • I was referring to wait states introduced by DRAM latency, not contention with DMA devices sharing the bus. I'll try to clarify...

        – Brian H
        6 hours ago











      • It wasn't an issue with all machines, but it was an issue which spread across many types of machines. On something like a VIC-20 or an Apple II, the memory system will always respond to a request by the time the CPU would be interested in the response. Conversely, on something like a 6502-based Nibbler arcade machine, ROM accesses were designed to have a wait state (although a machine used in competition had a broken wait-state circuit whose failure simply caused the game to run a little faster than normal, giving the player an unfair advantage).

        – supercat
        5 hours ago






      • 2





        There's an obvious reason for that, in that the Apple II and VIC-20 have CPUs running at 1MHz, while the RAM is faster. As such the CPU can access the RAM on without any wait states; it's even fast enough for the video hardware to have a go on alternate cycles usually without contention. Once you've got a CPU that's faster than its RAM, wait states are inevitable though.

        – Matthew Barber
        5 hours ago













      • There are small machines with wait states induced only by slow RAM — e.g. the Electron accesses ROM at 2Mhz but RAM at a max 1Mhz and forces delays to the CPU to deal with the latter. Does that count?

        – Tommy
        5 hours ago











      • Also: the sound chip example on the ST is exactly the same phenomena, I'd argue. There's no contention, just a memory-mapped area where the underlying device is slower than the processor, so the processor is made to wait.

        – Tommy
        3 hours ago

















      I was referring to wait states introduced by DRAM latency, not contention with DMA devices sharing the bus. I'll try to clarify...

      – Brian H
      6 hours ago





      I was referring to wait states introduced by DRAM latency, not contention with DMA devices sharing the bus. I'll try to clarify...

      – Brian H
      6 hours ago





























































      3














      The DRAM chips used for main memory needed a certain minimum memory-cycle length, for example 1000 ns. CPUs also needed several clock cycles to perform a memory cycle; an 8086, for example, takes 4 clocks per memory access. If the CPU is running at 5 MHz, that bus cycle lasts only 800 ns, which is too fast for such memory, so one wait state is needed to stretch it to a 1000 ns memory cycle. Lowering the CPU clock to 4 MHz would let it run with zero wait states. Basically, wait states were needed because memories were slower than the rate at which CPUs could access them.

      The advertising does say something about system performance. If one system has 1000 ns memory and another has 800 ns memory, a 5 MHz 8086 can run at 0 wait states with the faster memory but needs 1 wait state with the slower. In theory the zero-wait-state machine can transfer 25% more data in the same time. Faster memories were certainly more expensive, so it mattered to advertise why two identical-looking systems carried a significant price difference.
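      The arithmetic in this answer generalizes: given the CPU clock, the number of clocks in a basic bus cycle, and the DRAM cycle time, the required number of wait states is a rounding-up calculation. A small illustrative sketch (the figures are the examples from the answer, not datasheet values):

```python
import math

def wait_states(cpu_mhz: float, base_clocks: int, mem_cycle_ns: float) -> int:
    """Extra clocks needed so the bus cycle is at least as long as the DRAM cycle."""
    clock_ns = 1000.0 / cpu_mhz                      # one CPU clock period in ns
    total_clocks = math.ceil(mem_cycle_ns / clock_ns)  # clocks the DRAM cycle spans
    return max(0, total_clocks - base_clocks)

# 8086 @ 5 MHz, 4-clock bus cycle, 1000 ns DRAM -> 1 wait state
print(wait_states(5, 4, 1000))   # 1
# Same CPU at 4 MHz: 4 clocks already give 1000 ns -> 0 wait states
print(wait_states(4, 4, 1000))   # 0
```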






      share|improve this answer












































































          answered 6 hours ago









          Justme

          4373



























              3














              Preface 1: As asked, the question is so broad that hardly any answer can do it justice, especially since it makes some assumptions while at the same time spanning all the different CPU memory-interface technologies there were.



              Preface 2: Design decisions around wait states are a topic for a whole course on system design. It's next to impossible to give a satisfying answer in general within a single unspecific RC.SE question.






              PC compatibles in the 1980s were often advertised as having zero, one, two, or sometimes more "wait states". Zero wait states was the best.




              It looks that way at first sight, but looking closely, systems without wait states are more often than not the slower ones.




              Basically, the RAM was too slow so extra bus cycles were added to make up for this latency. That reduced the overall processing speed.




              No, not really. It's way more complex than that, so it may be helpful to restrict this to the core of wait-state usage on IBM and compatible machines of the 80286 era, where the question was most prominent and seems to have originated.



              To start with, a wait state isn't an issue of memory but of the CPU. It's the CPU design that requires memory access to be synchronized with the clock speed used. An 8086-class CPU always takes four clock cycles per memory access, while an 80286 takes two. For the following we go with the 80286, as its timing is quite different from the 8088's.



              For an AT running at 6 MHz this two-cycle structure would give a basic access time of 333 ns per bus cycle, but the structure is a bit more complex. The first cycle is a setup cycle where all control signals are presented (called TS), while the second, called the command cycle (TC), contains the operation itself and the handshaking. This cycle is extended as long as there is no acknowledge (called Ready) from the memory system. Intel added quite some nifty structure to enable full use of these two cycles for the memory operation, like address pipelining, which presents a stable address already before the first cycle. Using this requires non-trivial circuitry.



              IBM chose a simpler approach. The decoding waits for the rising edge of ALE, when all signals (status like direction and memory/IO) are valid, which simplifies decoding a lot. ALE becomes valid a few ns into the first half of TS, reducing the time available for decoding and memory access to somewhat less than 250 ns. The AT was designed to run with a 200 ns RAM access time, so this was already tight, especially considering that RDY needs to be asserted by the memory system before the second half of TC, effectively reducing the timing to less than 133 ns. Way too short.



              For the AT (5170 Type 139) IBM decided to be better safe than sorry and added a wait state. In addition, this made sure that access times for I/O cards would stay within the limits set by the PC. Equally important, with a wait state they could be sure that a less-than-perfect batch of RAMs wouldn't compromise quality. Considering that the AT was about three times faster than the PC, there was no need to take any risk.



              With the later PC-XT 286 (5162), with basically the same design (and the same 200 ns memory), IBM went with zero wait states. Maybe they had become more confident.



              Then again, it's quite possible that the whole system was designed to run at 8 MHz from the start and was only slowed down to 6 MHz for company-policy reasons. In that case the wait state makes a lot more sense, as an 8 MHz design (as IBM built it) can only run with 200 ns RAM by adding a wait state. Similarly, it kept the slots compatible. The difference between the 6 MHz AT (Type 139) and the 8 MHz one (Type 239) is basically just the clock chip.



              Bottom line: it all comes down to design decisions. With more sophisticated decoder circuitry, 200 ns RAM can well work with an 8 MHz 80286 without wait states - as many other 80286 machines of the same time showed.



              Then there was the race to more MHz, cranking 80286 machines up from the original 6/8 MHz way past 12 or 16 MHz. At that point there was no memory fast enough to support those speeds. Even more sophisticated decoding, as NEAT boards added for example, couldn't help at the higher end.



              It may be important to remember that memory access on the 8086 family differed from next to all previous or contemporary CPUs in that only data access was synchronous. Code was read ahead of time by the asynchronous BIU, resulting in far fewer unused cycles compared to, say, a 68k. That is eventually the reason why Intel CPUs performed quite well in comparison.




              But I don't recall this being an issue on comparatively-priced machines with 16 or 32-bit Motorola processors, and running at similar clock speeds.




              Comparatively priced is a more than vague term, considering that a fully fitted PC could well outprice high-end workstations. Nor is clock speed the issue, as clock speed is only marginally related to memory speed. As mentioned before, it's all about memory access (and cycle) time. Other CPUs were hit by the same problem when clock speeds increased. A 68k used at minimum two cycles per access, which means that 200 ns is fine for an 8 MHz 68000 (assuming a simple decoding circuit). Anything faster than that will likewise require wait states.




              What was the cause of the wait states, precisely, and how come other low-cost home computers were able to avoid this performance problem?




              Because they were frickin' slow :)) Considering RAM speeds of the time, it's obvious why even upper-end machines, like a Sun-1 or Sun-2, ran at 'only' 6 and 10 MHz. It wasn't until the 68020-based Sun-3 that 15 MHz was reached - enabled by the 68020's cache, as memory access was done with three wait states.



              Even many, many years later (1992), Commodore's Amiga 4000 (A3640 CPU card) likewise used 3 wait states by default to adapt the 25 MHz CPU to slower memory.
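              For a sense of the margins involved, the 80286 bus-cycle lengths at the AT's two clock speeds can be tallied as below. This is a deliberately simplified model: it counts only the two base clocks plus wait states, and ignores the ALE decode delay and RDY setup time discussed above, which eat further into the nominal window.

```python
def bus_cycle_ns(cpu_mhz: float, base_clocks: int, wait_states: int) -> float:
    """Nominal bus-cycle length: base clocks plus wait states, in nanoseconds."""
    return (base_clocks + wait_states) * 1000.0 / cpu_mhz

# 80286 (2-clock bus cycle) at the AT's two clock speeds:
print(round(bus_cycle_ns(6, 2, 0)))  # 333 ns nominal at 6 MHz, 0 ws
print(round(bus_cycle_ns(6, 2, 1)))  # 500 ns -- the 6 MHz AT's 1-ws choice
print(round(bus_cycle_ns(8, 2, 0)))  # 250 ns -- tight for 200 ns RAM with a simple decoder
print(round(bus_cycle_ns(8, 2, 1)))  # 375 ns -- comfortable for 200 ns parts
```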






              share|improve this answer









































































                  Preface 1: The way it is asked, it's way to broad that any answer can make any sense. Especially making some assumtions at the same time as widening it across all different CPU memory interface technologies there where.



                  Preface 2: Design decisions for wait state design and application is a topic for a whole course on system design. It's next to impossible to give a satisfying Answer to this in general within a single unspecific RC.SE question.






                  PC compatibles in the 1980s were often advertised as having zero, one, two, or sometimes more "wait states". Zero wait states was the best.




                  It looks like, at first sight, but looking close, systems without are more often than not the slower ones.




                  Basically, the RAM was too slow so extra bus cycles were added to make up for this latency. That reduced the overall processing speed.




                  No, not rally. It's way more complex than that, so it may be a helpful to restrict this to the core of wait state usage on IBM and compatible machines of the 80286 area where the question was most prominent and seams to be originated.



                  To start with, a wait state isn't an issue of memory but of the CPU. It's the CPU design requiring memory access to be synchronized with the clock speed used. An 8086 class CPU always takes four clock cycles per memory access, while it is two with a 80286. For the following we go with the 80286, as it's a quite different timing than on the 8088.



                  For an AT running at 6 MHz this two cycle structure would make a basic access time of 333ns per bus cycle, but the structure is a bit more complex. The first cycle is a setup cycle where all control signals are presented (called TS), while the second is called command cycle (TC) and contains the operation itself and handshaking. This cycle will be extended as long as there is no acknowledge from the memory system (called Ready). Intel added quite some nifty structure to enable the full use of these two cycles for memory operation, like address pipelining, which offers a stable address already before the first cycle. Using this requires non trivial circuitry.



                  IBM did choose a more simple approach. The decoding waits for the raising edge of ALE, when all signals (status like direction and memory/IO) are valid, simplifying decoding a lot. ALE becomes valid a few ns after the first half of TS, reducing the time available for decoding and memory access to somewhat less than 250 ns. The AT was designed to run with 200 ns RAM access time, so already tight, especially when considering that RDY needs to be assigned by the memory system before the second half of TC, effectively reducing the timing to less than 133ns. Way to short.



                  For the AT (5170 Type 139) IBM decided to be better safe than sorry, adding a wait state. In addition, it also made sure that access time for I/O cards would stay within the limits set by the PC. Equally important, with a wait state, they could be sure that there is no chance a less than perfect charge of RAMs would screw the quality. Considering that the AT was about three times faster than the PC, there was no need to take any risk.



                  With the later PC-XT 286 (5162), with basically the same design (and the same 200 ns memory), IBM did go with zero wait states. Maybe they became more confident.



                  Then again, it's as well possible that the whole system was already designed to run at 8 MHz from start on and has been only slowed down to 6 MHz for company policy reasons. In that case the wait state does make a lot more sense, as an 8 MHz design (as IBM did) can only run with 200 ns RAM by implying a wait state. Similar to keep the slots compatible. The difference between 6 MHz AT (type 139) and 8 MHz (type 239) is basically just the clock chip.



                  Bottom line: It all comes down to design decisions. With a more sophisticated decoder circuitry 200 ns RAM can well work with a 8 MHz 80286 without wait states - as many other 80286 machines of the same time showed.



                  Now, then there was the race to more MHz, cranking 80286 machines up from original 6/8 MHz way past 12 or 16 MHz. At that time there was no memory fast enough to support this speed. Even adding a more sophisticated decoding, as for example NEAT boards added, couldn't help at the higher end.



                  It might be important to remember that memory access of the 8086 family is different from next to all previous or concurrent CPUs as only data access was synchronous. Code was read ahead of time by asynchronous BIU operation, resulting in way less unused cycles compared to like a 68k. Eventually the reason why Intel CPUs performed quite well in comparison.




                  But I don't recall this being an issue on comparatively-priced machines with 16 or 32-bit Motorola processors, and running at similar clock speeds.




                  Comparatively-priced is a more than vague term, considering that full fitted PC could well outprice high end workstations. And neither is clock speed an issue, as clock speed is only marginally related to memory speed. As mentioned before, it's all about memory access (and cycle) time. Other CPUs where hit by the same problem when clock speed did increase. A 68k used at minimum two cycles per access, resulting, which means that 200 ns is fine for a 8 MHz 68000 (assuming a simple decoding circuit). Anything faster than that will as well require wait states.




                  What was the cause of the wait states, precisely, and how come other low-cost home computers were able to avoid this performance problem?




Because they were fricking slow :)) Considering the RAM speeds of the time, it's obvious why even upper-end machines like a SUN-1 or SUN-2 ran at 'only' 6 and 10 MHz. It wasn't until the 68020-based SUN-3 that 15 MHz was reached - enabled by the 68020's cache, as memory access was still done with three wait states.



Even many, many years later (1992), Commodore's Amiga 4000 (A3640 CPU card) likewise used 3 wait states by default to adapt the 25 MHz CPU to slower memory.







                  share|improve this answer






















                  edited 4 hours ago

























                  answered 4 hours ago









Raffzahn

57.1k6139232





























