Post by Rob van der Heij
From a pure technical point of view, swapping to DCSS is much more
elegant because you copy a page under SIE and don't step out to CP to
interpret a channel program. But the drawback is that the DCSS is
relatively small and requires additional management structures in the
Linux virtual machine memory. I see some goodness for very small
virtual machines, I think.
in one sense it is like extended memory on 3090 ... fast memory move
operations. however, real extended memory was real storage. dcss is
just another part of virtual memory. in theory you could achieve
similar operational characteristics just by setting up linux with
virtual machine memory larger by the amount that would have gone to
the dcss ... and having linux rope off that range of memory and treat
it the same way it would treat a range of dcss memory.
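
a rough sketch of that idea (pure illustration in python ... nothing
to do with the actual linux swap or cp paging code; the class name and
sizes are made up): reserve a chunk of the guest's own memory and copy
4k pages in and out of it, which is operationally about what swapping
to a memory-backed dcss gets you:

import mmap

PAGE = 4096

# the "extra" guest memory that would otherwise have been defined as a
# dcss ... here just an anonymous mapping roped off inside the guest
class RopedOffPageStore:
    def __init__(self, num_pages):
        self.region = mmap.mmap(-1, num_pages * PAGE)
        self.free = list(range(num_pages))
        self.slot = {}                      # page identifier -> slot

    def page_out(self, key, data):
        # fast memory-to-memory move, no channel program involved
        s = self.free.pop()
        self.region[s * PAGE:(s + 1) * PAGE] = data
        self.slot[key] = s

    def page_in(self, key):
        s = self.slot.pop(key)
        data = self.region[s * PAGE:(s + 1) * PAGE]
        self.free.append(s)
        return data

store = RopedOffPageStore(256)              # a 1mbyte "swap extent"
store.page_out(("task", 7), b"\0" * PAGE)
assert store.page_in(("task", 7)) == b"\0" * PAGE
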
the original point of dcss was having some virtual memory semantics
that allowed definition of some stuff that appeared in multiple
virtual address spaces ... recent post discussing some of the dcss
history
http://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped files on z/VM V5.1
if the virtual space range only occupies a single virtual address
space ... then for most practical purposes, what is the difference
between that and just having an equivalent virtual space range as
non-DCSS memory (but treated by linux in the same way that you would
treat a DCSS space)?
note that only a small subset of the original virtual memory
management implementation was picked up for the original DCSS
implementation; in the original, a virtual machine could arbitrarily
change its allocated segments (contiguous or non-contiguous) ... so
long as it didn't exceed its aggregate resource limit. however, the
original implementation also included support for an extremely
simplified api and very high performance page mapped disk access (on
which a page mapped filesystem was layered)
http://www.garlic.com/~lynn/subtopic.html#mmap
... and sharing across multiple virtual address spaces could be done
as part of the page mapped semantics (aka create a module on a page
mapped disk ... and then the cms loading of that module included
directives about shared segment semantics).
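
for flavor only ... the closest thing in a current unix-style
environment is plain mmap of a file: page-size chunks of disk mapped
straight into the address space, with a shared mapping visible to
every address space that maps the same file. a minimal sketch (generic
posix mmap from python, not the original cms api; the file name is
made up):

import mmap, os

PAGE = mmap.PAGESIZE

fd = os.open("module.img", os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, 16 * PAGE)                 # a 16-page "module"

# MAP_SHARED: other address spaces mapping the same file see the same
# pages ... roughly the shared segment flavor of the old semantics
m = mmap.mmap(fd, 16 * PAGE, mmap.MAP_SHARED,
              mmap.PROT_READ | mmap.PROT_WRITE)

m[0:PAGE] = b"\x01" * PAGE                  # touch one page in place
m.flush(0, PAGE)                            # push that page back to disk
m.close()
os.close(fd)
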
note that one of the issues in unix-based infrastructure ... is that
the unix-flavored kernels may already be using 1/3 to 1/2 of their
(supposedly) real storage for various kinds of caching (which
basically gets you very quickly into 3-level paging logic ... the
stuff linux is using currently, the stuff it has decided to save in
its own cache, and the total stuff that vm is deciding to keep in real
storage). for linux operation in constrained virtual machine memory
sizes, you might get as much or more improvement by tuning linux's own
internal cache operation.
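
a quick generic check of how much of the guest's nominal storage is
going to its own caching ... nothing z/vm specific, just the standard
/proc/meminfo fields (values in kB):

fields = {}
with open("/proc/meminfo") as f:
    for line in f:
        name, value = line.split(":", 1)
        fields[name] = int(value.split()[0])

total = fields["MemTotal"]
cache = fields.get("Cached", 0) + fields.get("Buffers", 0)
print("guest memory %d kB, page cache + buffers %d kB (%.0f%%)"
      % (total, cache, 100.0 * cache / total))
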
one of the things i pointed out long ago and far away about running a
lru-algorithm under a lru-algorithm ... is that things can get into
pathological situations (back in the original days of adding virtual
memory to mft/mvt for the original vs1 & vs2). the cp kernel has
selected a page for replacement based on its not having been used
recently ... however, the virtual machine page manager also discovers
that it needs to replace a page and picks that very same page as the
next one to use (because both algorithms are using the same "use"
criterion). the issue is that both implementations are using the
least-recently-used characteristic as the basis for the replacement
decision. the first level system is removing the virtual machine page
because it believes that page is not going to be used in the near
future. however, the virtual machine is choosing its least recently
used page to be the next page that is used (as opposed to the next
page not to be used).
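
a toy simulation of the effect (all the sizes and the reference
pattern are invented ... this is not the cp or linux code): a guest
doing its own lru replacement under a host that keeps only part of the
guest's frames resident, also lru. the number to watch is how often
the frame the guest picks for its next page-in is one the host has
already paged out:

import random
from collections import OrderedDict

GUEST_FRAMES = 100      # frames the guest believes it has
HOST_RESIDENT = 60      # guest frames the host actually keeps in real storage
WORKING_SET = 150       # guest virtual pages, larger than guest memory
REFERENCES = 50000

def run(guest_policy):
    refs = random.Random(1)        # identical reference stream for both runs
    pick = random.Random(2)
    guest = OrderedDict()          # virtual page -> frame, kept in lru order
    free = list(range(GUEST_FRAMES))
    host = OrderedDict()           # host-resident guest frames, lru order
    replacements = cold_victims = 0

    def touch(frame):              # the host sees every touch of a guest frame
        if frame in host:
            host.move_to_end(frame)
        else:
            if len(host) >= HOST_RESIDENT:
                host.popitem(last=False)   # host evicts *its* lru frame
            host[frame] = True

    for _ in range(REFERENCES):
        page = refs.randrange(WORKING_SET)
        if page in guest:
            guest.move_to_end(page)
            frame = guest[page]
        else:                              # guest page fault: needs a frame
            if free:
                frame = free.pop()
            else:
                replacements += 1
                if guest_policy == "lru":
                    _, frame = guest.popitem(last=False)   # guest's lru page
                else:
                    frame = guest.pop(pick.choice(list(guest)))
                if frame not in host:
                    cold_victims += 1      # host had already paged that frame out
            guest[page] = frame
        touch(frame)
    return replacements, cold_victims

for policy in ("lru", "random"):
    n, cold = run(policy)
    print("%-6s guest victims: %d replacements, %.0f%% hit host-paged-out frames"
          % (policy, n, 100.0 * cold / n))

with the guest victim chosen by lru, essentially every guest
replacement lands on a frame the host had already removed; with a
random guest victim (shown only for contrast), it is only roughly the
fraction of guest frames the host isn't keeping resident.
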
running a LRU page replacement algorithm under a LRU page replacement
algorithm is not just an issue of processing overhead ... there is
also the characteristic that an LRU algorithm doesn't recurse
gracefully (i.e. to the 1st level algorithm, a virtual LRU algorithm
starts to take on the characteristics of an MRU algorithm ... the
guest's least recently used page becomes the next page most likely to
be used instead of the least likely to be used). misc. past stuff
about page replacement work ... originally done as an undergraduate
for cp67 in the 60s
http://www.garlic.com/~lynn/subtopic.html#wsclock
some specific past posts on LRU algorithm running under LRU algorithm
http://www.garlic.com/~lynn/2000g.html#3 virtualizable 360, was TSS ancient history
http://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
http://www.garlic.com/~lynn/2001f.html#54 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2001g.html#29 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2002p.html#4 Running z/VM 4.3 in LPAR & guest v-r or v=f
http://www.garlic.com/~lynn/2003c.html#13 Unused address bits
http://www.garlic.com/~lynn/2003j.html#25 Idea for secure login
http://www.garlic.com/~lynn/2004l.html#66 Lock-free algorithms
http://www.garlic.com/~lynn/2005c.html#27 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005n.html#21 Code density and performance?
http://www.garlic.com/~lynn/94.html#01 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/94.html#46 Rethinking Virtual Memory
http://www.garlic.com/~lynn/94.html#49 Rethinking Virtual Memory
http://www.garlic.com/~lynn/94.html#51 Rethinking Virtual Memory
http://www.garlic.com/~lynn/95.html#2 Why is there only VM/370?
http://www.garlic.com/~lynn/96.html#10 Caches, (Random and LRU strategies)
http://www.garlic.com/~lynn/96.html#11 Caches, (Random and LRU strategies)
with respect to this particular scenario of 2nd level disk access
... one of the characteristics of these long ago and far away page
mapped semantics for high performance disk access (originally done on
a cp/67 base) was that they could be used by any virtual machine for
all of its disk accesses (at least those involving page size chunks),
whether filesystem or swapping area.
--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/