Dnarever wrote on Oct 24th, 2024 at 9:54pm:
Aurora Complexus wrote on Oct 24th, 2024 at 9:47pm:
Dnarever wrote on Oct 24th, 2024 at 9:33pm:
Aurora Complexus wrote on Oct 24th, 2024 at 9:08pm:
Bobby. wrote on Oct 24th, 2024 at 8:53pm:
Setanta wrote on Oct 24th, 2024 at 8:42pm:
Software, in the sense of which operating system you run or which programs you run on it, is not the issue. It's the hardware code and what the microcode provides.
That's correct, but we are talking about an unknown instruction set -
well - known only to the CPU designers and the NSA and their co-conspirators.
I suppose it's possible that the instruction set could be 128 bits wide. But if so, it would slow the processor down to have to read 128 bits for every instruction (and bear in mind that instructions are sometimes passed from one part of the processor to another). Even assuming the other hardware passed such an instruction, the obvious action would be to cull the top 64 bits.
But even assuming that 128-bit instructions somehow get to the processor (by a corrupted compiler) and are somehow processed to cause a "hidden instruction" to be executed, this is still something that a hacker could detect.
They basically just have to do <crazy instruction> <input> and see if they get something other than "illegal instruction" back. Do it over and over (perhaps while they take a much-needed nap). They do that 2^32 times, or even 2^64 times, for every possible instruction.
Well, maybe the "hidden instruction" requires a specific string or else it returns "illegal instruction." But remember that there is more than one hacker. A hacker may be curious enough to record the timing of every "illegal instruction" that comes back. Then they have one suspect instruction number, and can start peppering it with random strings to try to guess the "password" of that instruction. Imagine the fame they could gain, not just finding a hidden instruction but finding its password.
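The feasibility of that brute-force sweep is easy to sanity-check. A minimal Python sketch, assuming a purely hypothetical probe rate of 100 million instructions per second (the rate is my assumption, not a measured figure):

```python
# Back-of-the-envelope check on the brute-force scan described above.
PROBES_PER_SECOND = 100_000_000  # assumed probe rate, not measured

def scan_time_seconds(bits: int, rate: int = PROBES_PER_SECOND) -> float:
    """Seconds needed to try every opcode in a bits-wide space once."""
    return (2 ** bits) / rate

t32 = scan_time_seconds(32)  # under a minute
t64 = scan_time_seconds(64)  # thousands of years
```

At that rate a full 32-bit sweep finishes well within a nap, while a 64-bit sweep takes on the order of 5,800 years - which is why a hidden instruction gated behind a long "password" would have to be hunted with timing clues rather than exhaustively.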
Quote: I suppose it's possible that the instruction set could be 128 bits wide
32-bit processing was limited to 4 GB of address space. This was a restrictive limit.
64-bit processing allows what is, to us, unlimited address space.
Effectively unlimited, for now. "Over 18 quintillion" certainly covers the hard disk capacity of your home network and your work network. But it's actually a bit limiting when considering the internet: IPv6 has an address space of 128 bits.
Quote: Going to 128-bit is technically difficult - i.e. not all 64-bit processors work correctly; there have been a lot of failed processors developed. But the main reason is that doing this would be fixing a problem that does not currently exist. It would be a huge expense for something that is just not needed.
32-bit - 4,294,967,295 (this is where the 4 GB limit comes from)
64-bit - 18,446,744,073,709,551,615 (the range we currently use)
128-bit - 340,282,366,920,938,463,463,374,607,431,768,211,455 (we don't need this)
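Those maxima all come from the same formula, 2^n - 1, which a couple of lines of Python can confirm:

```python
# Verify the maximum unsigned values quoted for each word size.
def max_unsigned(bits: int) -> int:
    """Largest unsigned integer representable in `bits` bits."""
    return 2 ** bits - 1

assert max_unsigned(32) == 4_294_967_295
assert max_unsigned(64) == 18_446_744_073_709_551_615
assert max_unsigned(128) == 340_282_366_920_938_463_463_374_607_431_768_211_455
```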
It's a sensible upgrade for internet addresses. Even 64-bit isn't actually necessary there yet, but the new domains are largely IPv6, and if I were founding a website I would make sure it was registered in v4 AND v6.
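The gap between the two address families is easy to see with Python's standard ipaddress module (the addresses below are the reserved documentation ranges, not real hosts):

```python
import ipaddress

# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
v4 = ipaddress.ip_address("192.0.2.1")    # IPv4 documentation address
v6 = ipaddress.ip_address("2001:db8::1")  # IPv6 documentation address
assert v4.version == 4 and v6.version == 6

# Even a single /32 IPv6 allocation dwarfs the entire 32-bit IPv4 internet:
block = ipaddress.ip_network("2001:db8::/32")
assert block.num_addresses == 2 ** 96
```

A site "registered in v4 AND v6" simply publishes both kinds of address for the same name, so clients on either network can reach it.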
Wasn't it a bummer for you, when hard disks got stuck at 4G? You could pay more for a faster disk, when all you wanted was a bigger disk.
Maybe it's a bloke thing. Bigger is always better than faster, when it comes to hard disks.
That 4 GB limit applied to Exchange 5.5 corporate mail servers. A mid-sized company had its whole organisation's email store limited to 4 GB. The workarounds were expensive and ugly.
These days companies can have individual mail users with 10 GB of email. Imagine when the whole company was sharing 4 GB. Oh, and best of all, when the 4 GB ran out the email server would stop. Nobody would receive or send any email.
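The arithmetic behind that pain is brutal. A rough sketch, assuming a hypothetical mid-sized company of 500 mailboxes (the headcount is my assumption, not from the post):

```python
# A shared 4 GiB store split across an assumed 500 users.
STORE_BYTES = 4 * 1024 ** 3  # the 4 GB Exchange store limit
USERS = 500                  # hypothetical headcount

per_user_mb = STORE_BYTES / USERS / 1024 ** 2
# Roughly 8 MB of mail each - versus the ~10 GB per user common today.
```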
4G (4 gigabytes) was a hard limit on the media that an Intel/AMD machine could address on a hard disk. There were ways around it, but if your disk got corrupted it was bad news.
You could also split an 8 GB drive into two partitions, but it was only later that MS introduced "virtual drives" so you could see the two smaller drives as one larger one. Again though, it wasn't good if one of your drives got corrupted.
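The idea behind such a spanned "virtual drive" is just an address translation: one large logical space laid end-to-end across two smaller partitions. A minimal sketch with illustrative sizes:

```python
# Map a spanned volume's logical offsets onto two 4 GiB partitions.
PART_A = 4 * 1024 ** 3  # first partition, bytes
PART_B = 4 * 1024 ** 3  # second partition, bytes

def locate(logical_offset: int):
    """Map a logical byte offset to (partition, offset-within-partition)."""
    if logical_offset < PART_A:
        return ("A", logical_offset)
    if logical_offset < PART_A + PART_B:
        return ("B", logical_offset - PART_A)
    raise ValueError("offset beyond end of spanned volume")
```

Lose either partition and every logical offset that maps to it is gone, which is exactly why corruption on a spanned volume was such bad news.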
Linux of course was all over that. You could have "software RAID" spreading data across many drives: fast but completely unredundant RAID-0, or slower, redundant RAID levels up to 6. If data safety is a concern for you, everything but RAID-0 is good, and you can still do them with multiple SSDs.
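What makes the higher RAID levels survive a drive failure is parity. A minimal sketch of the XOR-parity idea used by RAID 4/5 (the byte values are made up for illustration):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR same-sized byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]  # three "drives" worth of data
parity = xor_blocks(data)           # the parity "drive"

# Simulate losing drive 1 and rebuilding it from the survivors plus parity:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Any single lost block can be rebuilt this way, because XOR-ing the parity with the remaining data blocks cancels them out and leaves the missing one.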
My previous computer had a hardware RAID card and four Seagate 500 GB drives in RAID-0. It was astoundingly better in the disk department than any computer I had owned before, and I ran it for years. Now I have a single SSD and it's much faster than that was. I think the only reason you'd want a RAID array now is data safety. You could have two equal-sized SSDs in RAID-1; it would be practically as fast as a single drive, but if either drive fails you're still fully up to date. In fact your computer keeps working.
It's important to note that hardware RAID (which your motherboard may provide) is a whole lot better than software RAID (which burdens your processor).