Oh, you know those small companies with a shared folder? Not exactly what I'm talking about.
So, let's see... a large multinational company with several thousand deployed computers, each with 4GB of RAM and a 1TB hard drive, while the money goes to centralized management of the servers that hold the data?
OK, you got me. ... But what if it were possible to leverage all the machines for encrypted, distributed data storage and processing power? It's not just data on your servers, it's data on your network.
What if you didn't really need a server?
But how would you back up?
Same way as normal, probably.
The key is that the data is redundantly distributed, a kind of hybrid of RAID and torrent: no workstation has to hold *all* the data, just enough of it to help complete a request while it's part of the network, with the centralized repository coordinating the whole thing. Of interest, perhaps: if the server dies, the data should still be recoverable, in aggregate, from the workstations, so the server's downtime would likely go unnoticed by the users.
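To make that concrete, here's a minimal sketch of the RAID-meets-torrent idea using plain XOR parity, the same trick RAID 5 uses. Everything here (the function names, the four-workstation setup) is made up for illustration; a real system would want proper erasure coding like Reed-Solomon, plus the encryption layer:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Byte-wise XOR of two equal-length buffers.
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(data: bytes, n: int):
    """Split data into n equal chunks plus one XOR parity chunk, so any
    ONE lost chunk can be rebuilt from the remaining n pieces."""
    chunk_len = -(-len(data) // n)             # ceiling division
    data = data.ljust(n * chunk_len, b"\x00")  # pad to an even split
    chunks = [data[i * chunk_len:(i + 1) * chunk_len] for i in range(n)]
    return chunks, reduce(xor_bytes, chunks)

def rebuild(pieces, parity: bytes):
    """Rebuild a single missing chunk (marked None) from the survivors."""
    lost = [i for i, p in enumerate(pieces) if p is None]
    assert len(lost) <= 1, "plain XOR parity only survives one loss"
    if lost:
        survivors = [p for p in pieces if p is not None]
        pieces[lost[0]] = reduce(xor_bytes, survivors + [parity])
    return pieces

if __name__ == "__main__":
    message = b"data on your network, not just on your servers!!"
    chunks, parity = split_with_parity(message, 4)  # 4 "workstations"
    chunks[2] = None                                # one workstation dies
    restored = b"".join(rebuild(chunks, parity)).rstrip(b"\x00")
    assert restored == message                      # data survives anyway
```

Spread the four data chunks and the parity chunk across five workstations and any single machine can drop off while the file still comes back whole; heavier redundancy just means more parity.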
I'm sure someone has thought of this, but if not, something I'm thinking about.
What if, further, the "server" floated its CPU load between the machines? I mean, what if the server were just a virtual machine running somewhere in the workstation cloud?
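One way to picture that floating server, leaving the VM machinery aside: the workstations agree on which of them currently plays host, and the role migrates when that machine disappears. This toy sketch fakes it with a shared in-process dict, so it's a thought experiment rather than a distributed system; a real cluster would need a membership/consensus protocol like Raft and actual state replication, and all the names here are invented:

```python
class Node:
    """Toy 'floating server': the lowest-numbered live node hosts the
    service, and every node can work that out independently."""

    def __init__(self, node_id: int, peers: dict):
        self.id = node_id
        self.peers = peers   # id -> Node: the whole workstation cloud
        self.alive = True

    def leader(self) -> int:
        # Deterministic rule every node agrees on: lowest live id leads.
        return min(i for i, n in self.peers.items() if n.alive)

    def hosts_server(self) -> bool:
        return self.alive and self.leader() == self.id

if __name__ == "__main__":
    peers: dict = {}
    for i in range(5):
        peers[i] = Node(i, peers)   # five workstations share one view
    assert peers[0].hosts_server()  # node 0 currently "is" the server
    peers[0].alive = False          # workstation 0 gets switched off...
    assert peers[1].hosts_server()  # ...and the server floats to node 1
```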