
Network latency is orders of magnitude too high for a remote server to usefully share its RAM directly, even if you could cobble together a virtualization layer to make it work. However, today's network speeds are high enough that a remote-RAM key/value store such as memcached can compete favorably with repeatedly hitting a local database when the local machine lacks enough memory to cache the data itself.
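To make the trade-off concrete, here is a minimal cache-aside sketch: check the remote key/value cache first, and only fall back to the database on a miss. `CacheClient` below is a dict-backed stand-in for a real memcached client (e.g. one with a `get`/`set` interface like pymemcache's); the function and key names are illustrative assumptions, not any particular application's API.

```python
class CacheClient:
    """In-process stand-in for a memcached client (get/set interface)."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value, expire=300):
        self._store[key] = value  # a real client would honor `expire`


def query_database(user_id):
    """Placeholder for the slow database query being avoided."""
    return {"id": user_id, "name": f"user-{user_id}"}


cache = CacheClient()

def get_user(user_id):
    key = f"user:{user_id}"
    row = cache.get(key)           # one network round trip to the cache
    if row is None:                # miss: hit the database, then populate
        row = query_database(user_id)
        cache.set(key, row, expire=300)
    return row
```

The point is that one round trip to a cache server on a fast LAN is far cheaper than a disk-bound database query, even though both are "remote memory" in some sense.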

Since this question has the virtual-memory tag, I'll also point out that network servers (which is what "cloud computing" is the latest name for) have been used as backing store for virtual memory (i.e., swap) since the diskless workstations of the late 1980s. Such machines are called "thin clients" today.
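You can still do this on a modern Linux box, for example by putting swap on a network block device. This is a sketch, assuming a remote host named `remote-host` is already running `nbd-server` with an export named `export0`; the hostname and export name are placeholders.

```shell
# Load the network block device driver and attach the remote export.
modprobe nbd
nbd-client -N export0 remote-host /dev/nbd0

# Format the attached device as swap and enable it.
mkswap /dev/nbd0
swapon /dev/nbd0

# Verify that the kernel is now paging to the remote machine.
swapon --show
```

In practice this is mostly a curiosity today: every page fault pays a full network round trip, which is exactly the latency problem described above.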