How is graphics RAM different from system RAM?
I know that a GPU and a CPU are fundamentally different things and why they both suck at doing the other's job. But what I don't get is why standard system RAM has always been a generation behind the RAM used on video cards.
As I understand it, they're both just different types of DRAM, but it seems to me that the differences could be abstracted away by the memory controller baked into CPU and GPU silicon. The current standard for system RAM is DDR4, but video cards were using GDDR4 for years before DDR4 became a thing for desktops. Video cards are now shipping with HBM RAM (GDDR5?), which is faster than DDR4 system memory.
Why aren't we using the same kind of RAM for both? What makes them different?
memory graphics-card cpu
asked Nov 16 at 0:50 by Wes Sayeed
I do want to point out that in some cases the system RAM and graphics RAM are exactly the same. In lower-end computers, the BIOS typically assigns a portion of the system's RAM to the GPU to use as graphics memory - usually 128 megabytes or less, which is more than enough for a graphical desktop.
– Keltari
2 days ago
what I don't get is why standard system RAM has always been a generation behind the RAM used on video cards.
They're not. GDDR5 is basically DDR3 optimized for bandwidth (at the expense of latency). If it were up to me, GDDR5 would have been named GDDR3.
– hanshenrik
yesterday
@hanshenrik your problem there is that GDDR3 already existed. It was based on DDR2.
– anaximander
7 hours ago
3 Answers
But what I don't get is why standard system RAM has always been a generation behind the RAM used on video cards.
The GDDR specification, while based on the DDR standard, is its own hardware specification. If anything, DDR is technically ahead of GDDR, since each GDDR generation is based on an earlier DDR specification (most of the time; occasionally it is based on the previous GDDR specification instead).
One reason for the false belief that GDDR is ahead of DDR is that multiple iterations of the GDDR standard were based on DDR3. The same was true of GDDR2, whose specification has design elements from both DDR and DDR2.
However, it is important to note that this GDDR2 memory used on graphics cards is not DDR2 per se, but rather an early midpoint between DDR and DDR2 technologies. Using "DDR2" to refer to GDDR2 is a colloquial misnomer.
Source: DDR2 SDRAM
Likewise, GDDR4 and GDDR5 both took design elements from DDR3. GDDR5 is, of course, an improved GDDR design compared to GDDR4.
Like its predecessor, GDDR4, GDDR5 is based on DDR3 SDRAM memory, which has double the data lines compared to DDR2 SDRAM. GDDR5 also uses 8-bit wide prefetch buffers similar to GDDR4 and DDR3 SDRAM.
Source: GDDR5 SDRAM
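To see what that 8-bit prefetch means in practice, here is a minimal sketch (the DDR3-1600 figures are a typical example, assumed for illustration): each internal array access fetches eight words per pin, so the external transfer rate ends up eight times the DRAM core clock.

    # Minimal sketch: how an 8n prefetch turns a slow DRAM core clock into a
    # fast external transfer rate. DDR3-1600 numbers are a typical, assumed example.

    def effective_transfer_rate(core_clock_mhz: float, prefetch: int) -> float:
        """External data rate in MT/s: each core-clock cycle fetches `prefetch`
        words per pin, which are streamed out at the higher I/O rate."""
        return core_clock_mhz * prefetch

    # DDR3-1600: ~200 MHz internal array clock, 8n prefetch -> 1600 MT/s per pin
    print(effective_transfer_rate(200, 8))   # 1600.0
    # GDDR5 uses the same 8n prefetch; it is simply driven to higher data rates.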
As I understand it, they're both just different types of SDRAM, but it seems to me that the differences could be abstracted away by the memory controller baked into CPU and GPU silicon.
The two standards are actually vastly different. The number of bits that can be transferred over a data line is one of those differences. The GDDR specification is not compatible with Intel and AMD x86 processors. GDDR is able to transfer more bits because it sits on an entirely different connection, mainly PCI-e (within the various revisions of that standard).
The current standard for system RAM is DDR4, but video cards were using GDDR4 for years before DDR4 became a thing for desktops.
This is because GDDR4 is based on the DDR3 specification, not the DDR2 specification. The DDR3 standard wasn't ratified until 2005, and we didn't see products until 2007, owing to entirely different market needs. GDDR4 was announced in 2005 and likewise didn't see products until 2007. So while they have different names, the actual products were released together.
- GDDR4 SDRAM
- DDR3 SDRAM
Video cards are now shipping with HBM RAM (GDDR5?), which is faster than DDR4 system memory.
The current GDDR standards are actually GDDR5X and GDDR6. HBM (High Bandwidth Memory) is a separate stacked-DRAM standard manufactured by SK Hynix and Samsung; it is not a GDDR generation.
Why aren't we using the same kind of RAM for both?
The two standards are not compatible with one another.
What makes them different?
What makes them different is their manufacturing process and their specifications. While GDDR is based on the DDR specification, GDDR is not actually ahead of DDR, although there are huge performance gaps between the two standards at this point due to the bandwidth that GDDR has access to.
answered Nov 16 at 1:27 by Ramhound (edited by psmears)
The GDDR specification is not compatible with Intel and AMD x86 processors.
Not 100% true. The AMD x86 processors found in the PS4, for example, work directly with GDDR; there it is used as both system RAM and VRAM.
– Dan M.
1 hour ago
@DanM. - the PlayStation 4 has an APU and uses UMA
– Ramhound
1 hour ago
I don't think this does a great job of explaining what the difference is. This spends a lot of words saying "They're different, despite their confusing naming patterns". It doesn't explain how the two standards are different. I also don't think it's accurate to say that GDDR is connected via PCIe; GDDR memories are typically connected to a GPU, through some bus, which is in turn connected to the computer through PCIe.
– pbfy0
43 mins ago
@pbfy0 - I focused on a single interpretation of the question. I don’t believe it would be productive to go into depth about the literal differences between the two standards
– Ramhound
37 mins ago
The underlying tech is more or less the same; GPUs just leverage a much wider memory bus.
GPUs are easier to design this way because they are a single unit: many memory chips can be connected directly to the processing unit through a custom circuit board. This allows a very wide memory bus, often exceeding 256 bits. HBM takes this further with a 1024-bit bus.
CPUs rely on a much more generalized architecture of sockets and motherboard specifications, so anything beyond the standard two 64-bit channels is typically reserved for the high-end and server markets.
It should also be mentioned that GPU memory is tuned to trade latency for high bandwidth - lots of shoveling and not a lot of seeking. This is not the case with CPU memory, where low latency is desired for good random-access speeds.
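As a rough back-of-the-envelope sketch of what that bus width buys (the speed grades below are typical, assumed examples, not figures from any particular product): peak bandwidth is simply the bus width in bytes times the per-pin transfer rate.

    # Rough sketch: theoretical peak bandwidth = (bus width / 8) bytes * per-pin rate.
    # The speed grades below are typical, assumed examples.

    def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_gtps: float) -> float:
        """Theoretical peak bandwidth in GB/s for a given bus width and data rate."""
        return bus_width_bits / 8 * transfer_rate_gtps

    print(peak_bandwidth_gbs(128, 3.2))    # dual-channel DDR4-3200  ->  51.2 GB/s
    print(peak_bandwidth_gbs(256, 8.0))    # 256-bit GDDR5 @ 8 GT/s  -> 256.0 GB/s
    print(peak_bandwidth_gbs(1024, 2.0))   # one HBM2 stack @ 2 GT/s -> 256.0 GB/s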
answered 2 days ago by Robert (new contributor)
Your last paragraph is, I think, the most important point: they're optimized for different things. Graphics cards need high bandwidth but aren't as concerned with latency, whereas CPUs need the best latency possible and bandwidth is a more secondary concern. There's no fundamental reason a CPU couldn't use GDDR or a GPU use regular DDR (indeed, many integrated graphics do), it's just that performance would be worse.
– Nate Strickland
2 days ago
@NateStrickland CPUs actually do use GDDR as their memory on consoles. Specifically, the last two generations of consoles use GDDR as shared memory for both the CPU and GPU.
– creker
yesterday
One special feature of some types of graphics RAM is that they can be accessed by two independent (or mostly independent) buses. This makes using them as framebuffers (the portion of video RAM holding the pixels that are sent to the screen roughly every 1/60th of a second) or texture buffers easier, with fewer access conflicts and less overhead.
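For a quick sense of why a second access path matters, here is a worked estimate with assumed numbers (1080p, 32-bit color, 60 Hz): just scanning the framebuffer out to the display consumes roughly half a gigabyte per second of read bandwidth, continuously.

    # Quick estimate (assumed example: 1920x1080, 4 bytes/pixel, 60 Hz refresh):
    # bandwidth the display controller needs just to scan the framebuffer out.

    def scanout_bandwidth_mbs(width: int, height: int, bytes_per_pixel: int, hz: int) -> float:
        """MB/s continuously read from the framebuffer for display refresh."""
        return width * height * bytes_per_pixel * hz / 1e6

    print(scanout_bandwidth_mbs(1920, 1080, 4, 60))  # ~497.7 MB/s, all the time,
    # competing with the GPU's own reads and writes unless a second port handles it.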
answered 2 days ago by rackandboneman