Anne & Lynn Wheeler
2007-03-16 08:06:01 UTC
This is not important, but I just have to ask this. Does anybody know
why the original designers of VM did not do something for "minidisks"
akin to an OS/360 VTOC? Actually, it would be more akin to a "partition
table" on a PC disk. It just seems that it would be easier to maintain
if there was "something" on the physical disk which contained
information about the minidisks on it. Perhaps with information such
as: start cylinder, end cylinder, owning guest, read password, etc. CP
owned volumes have an "allocation map"; this seems to me to be an
extension of that concept.
CP67 had a global directory ... that was indexed and paged ... so it
didn't need individual volume index.
it also avoided the horrendous overhead of the multi-track search that
os/360 used to search the volume VTOC on every open. lots of past
posts mentioning that the multi-track paradigm for VTOC & PDS
directory was an io/memory trade-off ... the os/360 target in the
mid-60s was to burn enormous i/o capacity to save having an in-memory
index.
http://www.garlic.com/~lynn/subtopic.html#dasd
that resource trade-off had changed by at least the mid-70s ... and
it wasn't ever true for the machine configurations that cp67 ran on.
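to make the trade-off concrete, here is a rough cost model in Python. the numbers are illustrative assumptions for a 3330-class drive (3600 rpm, 19 tracks/cylinder), not measured figures; the point is that a multi-track search keeps the channel, control unit, and drive busy for the whole scan, while an in-memory index reduces the lookup to a single record read:

```python
# rough cost model (illustrative assumptions, not measured figures)
REV_MS = 16.7          # one revolution at 3600 rpm (3330-class drive)
TRACKS_PER_CYL = 19    # 3330 geometry

def multitrack_search_ms(tracks_scanned):
    """Channel busy time: roughly one full revolution per track scanned,
    since the device compares keys at rotation speed."""
    return tracks_scanned * REV_MS

def indexed_lookup_ms():
    """With an in-memory index, one direct read of the target record;
    cost is about average rotational latency (half a revolution)."""
    return REV_MS / 2

# worst case: a VTOC filling most of a cylinder
print(multitrack_search_ms(TRACKS_PER_CYL))  # ~317 ms of busy channel
print(indexed_lookup_ms())                   # ~8 ms
```

even with generous assumptions, the search holds the i/o path for tens of revolutions per open, which is the "burn i/o capacity to save memory" trade-off described above.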
the other characteristic was that both cp67 and cms treated disks as
fixed-block architecture ... even if they were CKD ... CKD disks would
be formatted into fixed blocks ... and then treated as fixed-block
devices ... avoiding the horrible i/o performance penalty of ever
doing multi-track searches to look up location and/or other
information on disk.
recent thread in bit.listserv.ibm-main
http://www.garlic.com/~lynn/2007e.html#35 FBA rant
http://www.garlic.com/~lynn/2007e.html#38 FBA rant
http://www.garlic.com/~lynn/2007e.html#39 FBA rant
http://www.garlic.com/~lynn/2007e.html#40 FBA rant
http://www.garlic.com/~lynn/2007e.html#42 FBA rant
http://www.garlic.com/~lynn/2007e.html#43 FBA rant
http://www.garlic.com/~lynn/2007e.html#46 FBA rant
http://www.garlic.com/~lynn/2007e.html#51 FBA rant
http://www.garlic.com/~lynn/2007e.html#59 FBA rant
http://www.garlic.com/~lynn/2007e.html#60 FBA rant
http://www.garlic.com/~lynn/2007e.html#63 FBA rant
http://www.garlic.com/~lynn/2007e.html#64 FBA rant
http://www.garlic.com/~lynn/2007f.html#0 FBA rant
http://www.garlic.com/~lynn/2007f.html#2 FBA rant
http://www.garlic.com/~lynn/2007f.html#3 FBA rant
http://www.garlic.com/~lynn/2007f.html#5 FBA rant
http://www.garlic.com/~lynn/2007f.html#12 FBA rant
the one possible exception was the loosely-coupled single-system-image
support done for the HONE system. HONE mini-disk volumes had an in-use
bitmap directory on each volume ... that was used to manage "LINK"
consistency across all machines in the cluster. it basically used
a channel program with a search operation to implement the i/o logical
equivalent of the atomic compare&swap instruction ... avoiding having
to do reserve/release with intervening i/o operations. I have some
recollection of talking to the JES2 people about them trying a similar
strategy for multi-system JES2 spool allocation. post from above
mentioning the HONE "compare&swap" channel program for multi-system
cluster operation
http://www.garlic.com/~lynn/2007e.html#38 FBA rant
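the search-then-write channel program can be sketched as a compare-and-swap loop. a toy Python model (the record layout, version key, and helper names are my illustration, not the actual HONE code; the lock stands in for the device being held for the duration of one channel program):

```python
import threading

class DiskRecord:
    """Toy model of the on-disk bitmap record: a version key plus data.
    The lock models the fact that one channel program executes
    indivisibly at the device."""
    def __init__(self):
        self._lock = threading.Lock()
        self.key = 0          # version key, bumped on every update
        self.data = set()     # in-use bitmap, modeled as a set of slots

def search_equal_then_write(rec, expected_key, new_key, new_data):
    """One channel program: SEARCH KEY EQUAL -> WRITE.
    The write executes only if the on-disk key still matches the
    expected value; returns True if the update landed."""
    with rec._lock:                  # device busy for the whole program
        if rec.key != expected_key:  # SEARCH KEY EQUAL fails ...
            return False             # ... program ends, nothing written
        rec.key = new_key
        rec.data = new_data
        return True

def allocate_slot(rec, slot):
    """Compare-and-swap loop: read, attempt conditional update, retry.
    A real multi-system version would re-read the record from disk."""
    while True:
        seen_key, seen_data = rec.key, set(rec.data)   # READ current copy
        if slot in seen_data:
            return False                               # already in use
        if search_equal_then_write(rec, seen_key, seen_key + 1,
                                   seen_data | {slot}):
            return True                                # our update landed
```

if another system updated the record between the read and the conditional write, the key no longer matches, the search fails, and the loop retries ... no reserve/release window is ever held across intervening i/o.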
HONE was vm-based online interactive for world-wide sales, marketing,
and field people. It originally started in the early 70s with a clone
of the science center's cp67 system
http://www.garlic.com/~lynn/subtopic.html#545tech
and eventually propagated to several regional US datacenters ... and
also started to propagate overseas. I provided highly modified cp67
and then later vm370 systems for HONE operation for something like 15
yrs. I also handled some of the overseas clones ... like when EMEA
hdqtrs moved from the states to just outside paris in the early 70s.
In the mid-70s, the US HONE datacenters were consolidated in northern
cal. ... and single-system-image software support quickly emerged
... running multiple "attached processors" in cluster operation. HONE
applications were heavily APL ... so it was quite compute intensive.
With four-channel controllers and string-switch ... you could get
eight system paths to every disk. Going with "attached processors"
... effectively two processors made use of a single set of channels
... so you could get 16 processors in single-system-image ... with
load-balancing and failure-fallover-recovery.
Later in the early 80s, the northern cal. HONE datacenter was
replicated first in Dallas and then a third center in Boulder ... for
triple redundancy, load-balancing and fall-over (in part concern about
natural disasters like earthquakes).
lots of past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone
At one point in SJR, the 370/195 machine ... recent references
http://www.garlic.com/~lynn/2007f.html#10 Beyond multicore
http://www.garlic.com/~lynn/2007f.html#11 Is computer history taught now?
http://www.garlic.com/~lynn/2007f.html#12 FBA rant
... was replaced with an mvs/168 system ... and vm was running on a
370/158. there were multiple strings of 3330 dasd ... with whole strings
supposedly dedicated to vm and other strings dedicated to mvs. there
were "rules" that mvs packs should never be mounted on vm "strings"
(because the horrendous vtoc & pds directory multi-track search
overhead hung the channel, control units, string switches, and drives).
Periodically it would happen anyway ... in specific instances, users
would be calling the operator within five minutes claiming vm/cms
interactive response had totally deteriorated. Then it would require
tracking down the offending MVS pack.
On one of these occasions, the MVS operator refused to take down the pack
and move it ... because some long-running application had just started. So,
to give them a taste of their own medicine, we brought up a highly optimized
VS1 system in a virtual machine on the (loaded) vm/158, with a couple of
packs on an MVS string, and proceeded to start some operations that brought
MVS to its knees ... drastically inhibiting the long-running MVS application
from getting any useful thruput (and effectively negating its debilitating
effect on vm/cms interactive response). The MVS operator then quickly
reconfigured everything and agreed that MVS would keep its packs off VM
disk strings.
some old posts retelling the sjr mvs/168 and vm/158 response story:
http://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
http://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
http://www.garlic.com/~lynn/2002.html#10 index searching
http://www.garlic.com/~lynn/2002d.html#22 DASD response times
http://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
http://www.garlic.com/~lynn/2002l.html#49 Do any architectures use instruction count instead of timer
http://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
http://www.garlic.com/~lynn/2003b.html#22 360/370 disk drives
http://www.garlic.com/~lynn/2003c.html#48 "average" DASD Blocksize
http://www.garlic.com/~lynn/2003f.html#51 inter-block gaps on DASD tracks
http://www.garlic.com/~lynn/2003k.html#28 Microkernels are not "all or nothing". Re: Multics Concepts For
http://www.garlic.com/~lynn/2003m.html#56 model 91/CRJE and IKJLEW
http://www.garlic.com/~lynn/2003o.html#64 1teraflops cell processor possible?
http://www.garlic.com/~lynn/2004g.html#11 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#51 Channel busy without less I/O
http://www.garlic.com/~lynn/2004l.html#20 Is the solution FBA was Re: FW: Looking for Disk Calc
http://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
http://www.garlic.com/~lynn/2005r.html#19 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005u.html#44 POWER6 on zSeries?
http://www.garlic.com/~lynn/2006s.html#15 THE on USS?