Discussion:
Moving z/OS from LPARs to z/VM guests
Brian Nielsen
2006-02-10 23:03:01 UTC
We currently run z/OS in multiple LPARs: one for production, two for test,
with more LPARs for both production and test on the way. We also run a
z/VM LPAR on IFLs for Linux guests.

I'm putting together a pros & cons list for running z/VM on the CP side
with some or all of the z/OS systems as guests. I'd like to make sure I
don't leave out anything important. There would be no other work in that
z/VM image other than the z/OS guests, so the main thrust is the improved
management of the z/OS images and devices. If the production systems are
under z/VM they would almost certainly have RESERVED pages to minimize or
eliminate their being paged by z/VM.
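
For illustration, a minimal sketch of the command involved (user ID and
page count hypothetical):

   CP SET RESERVED ZOSPROD 524288

That reserves 524288 frames (2 GB at 4 KB per page) that CP will try not
to steal from the guest for its own paging.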


My high level outline has:

Cons:
- Additional license charges for z/VM on CP engines
- z/VM will use some CPU, memory, & DASD
- Some operating procedures will change
- z/OS Systems Programmers & Operators will need some z/VM skills

Pros:
- Virtualization allows better workload isolation and resource sharing
 - Fewer PORs to make LPAR changes, and new guests on demand
 - Some real CTCs & devices can be replaced by virtual counterparts
- VM minidisk support can be used to improve DASD management
- Can simulate the disaster recovery site


There are some items I don't know enough about yet to gauge the impact:

 - Workload Manager is used to throttle the z/OS LPARs back below a
specified 4-hour rolling average of CPU usage (for cost reasons). I've
never used Workload Manager, but wonder: (a) will it work if z/OS is a
guest of z/VM, and (b) if not, what would accomplish the same thing?
Setting SHAREs alone is obviously not up to the task because we're talking
about the whole CP side. (See the sketch after this list.)

 - How do I properly evaluate whether the production LPARs should be left
alone and only the test LPARs consolidated under z/VM?

- How will SMF records from z/OS, which are used for billing, be impacted?

- What will the impact of the additional level of SIE on z/OS be?
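
On the SHARE point: the closest CP-side knob I know of is SET SHARE with a
hard limit, but it caps instantaneous CPU consumption rather than a 4-hour
rolling average, so by itself it would not replicate WLM's defined-capacity
behavior. A minimal sketch (user ID and percentages hypothetical):

   CP SET SHARE ZOSPROD ABSOLUTE 10% ABSOLUTE 25% LIMITHARD

The first value is the normal share; the second, with LIMITHARD, is a cap
that CP enforces even when the processors would otherwise be idle.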


Have I overlooked anything major? (Especially z/OS-specific issues.)
I'm trying to anticipate questions so that I have answers, and to avoid
surprises later.

When other people have made this type of change, what problems popped up?
What problems disappeared?

Many (many) years ago I used to run MVS under VM/SP on a 4381, so that
environment isn't new to me; it's just not of recent vintage.

Thanks.

Brian Nielsen
Dave Jones
2006-02-10 23:55:02 UTC
Hi, Brian.

A couple of things to consider that you didn't mention in your note:

1) What version of z/VM are you considering? There have been significant
changes in recent versions w.r.t. z/OS guest support; e.g., z/VM 5.1 does
not support preferred guest (V=R and V=F) virtual machines.
2) Guest support for advanced z/OS functions like crypto hardware might
be something your site needs to consider as well.

Hope this is of some help. Good luck.

DJ
Chris Langford
2006-02-13 18:29:11 UTC
When I log on, my disks are not defined: vol 310W01(?) is not mounted.

I can rebuild all my stuff if you can give me disks.

When will 5.2 be ready?

--
Chris Langford,
Cestrian Software:
Consulting services for: VM, VSE, MVS, z/VM, z/OS, OS/2, P/3x0 etc.

z/FM - A toolbox for VM & MVS at http://zfm.cestrian.com
Deva Woodcrafting:
Furniture creation, House remodeling, Wagon restoration etc.
Rob van der Heij
2006-02-13 22:18:30 UTC
On 2/13/06, Kris Buelens <***@be.ibm.com> wrote:

> Find the address of 310W01 and attach it to SYSTEM
> Next PIPE helps you find the address
> PIPE CP Q DASD ALL !LOCATE /310W01/!CONS

Or Q DASD 310W01 maybe?
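
For completeness, the whole sequence might then be (real device number
hypothetical):

   CP QUERY DASD 310W01        locate the real device number, say 0A31
   CP ATTACH 0A31 TO SYSTEM    make the volume available for minidisks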

Rob
--
Rob van der Heij
Velocity Software, Inc
Tom Duerbusch
2006-02-11 18:53:30 UTC
I'm not an MVS type, as I left the MVS world in the MVS/SP time frame. But
I'm a VM bigot, so beware <G>.

Given your VM knowledge from the 4300 days, z/VM is a lot more than you
know. It is also a lot less than you remember. Times have changed. It
used to be that VM was required to share hardware. Nowadays, ESCON and
later attached hardware can usually be shared across LPARs, which
eliminated one of the historic VM advantages.

On whether to run the production MVS machine under z/VM....

If you currently have sufficient resources for your z/OS machine, then it
may be a candidate to run under z/VM. If you have a resource limitation,
then further study is needed to determine whether z/VM will help eliminate
that constraint or make it worse.

Then, if there are resources that are currently dedicated but would be of
benefit if they were shared, running under z/VM would be a benefit.

Of course, I'm not talking about dedicated non-SNA 3174s. If you are
still running that way, it is by choice, not a requirement anymore. I
bring up the LPAR using PCOMM on the z/890 console, and after VTAM is up,
I switch the consoles over to an SNA tube. If VTAM ever goes down, I can
always go back to the PCOMM session.

Of course, most test systems are good candidates for running under z/VM.
When their resources are not being used, they can go to other guests.
Hence you can support more test machines with the same amount of
resources.

Now that I have FlashCopy support, I flash the production machines and do
a trial upgrade (new software, maintenance, whatever) to see what problems
I hit before I apply the changes to the production machines. Just a small
change that helps keep the production machines from taking a hit, and it
also minimizes my time in on nights and weekends.
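
A minimal sketch of the kind of command involved (virtual device numbers
hypothetical; the volumes must be on FlashCopy-capable hardware and
accessible to the issuing user):

   CP FLASHCOPY 1201 0 END 1301 0 END

That copies every cylinder of virtual device 1201 onto 1301 in one shot;
the trial guest can then be IPLed from the copy.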

(Yeah, I know: what are you doing in on this weekend? A DS6800 DASD
subsystem software upgrade.)

The closest thing to HiperSockets between LPARs is guest LANs between
virtual machines, and they perform much better than HiperSockets. A
virtual switch is a special case of a guest LAN where the LAN is connected
to an OSA. There are performance reasons to prefer a plain guest LAN over
the virtual switch.
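
A minimal sketch of setting one up (LAN name, device numbers, and
authorization details hypothetical):

   CP DEFINE LAN TESTLAN OWNERID SYSTEM TYPE HIPERSOCKETS
   CP DEFINE NIC 0700 HIPERSOCKETS
   CP COUPLE 0700 TO SYSTEM TESTLAN

The DEFINE NIC and COUPLE would be issued for (or by) each guest that
should be on the LAN.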

One of the reasons to leave a large system such as z/OS in its own LPAR
was eliminated with z/VM 5.2. The reason was the limitation, in z/VM 5.1
and earlier, of 2 GB of central storage; the rest was expanded storage.
That is old hat now. If you hear others bashing z/VM over this issue: as
long as your processor can run z/VM 5.2, it's old news.

For the most part, z/VM doesn't need a full-time VM systems programmer
anymore. You can hire a consultant or a Business Partner to help with the
install. After that, very little training is needed to run a VM system.
Most of my VM clients use only 150 to 300 man-hours a year of my time for
VM. It is higher now, due to bringing up more zLinux machines. Last year,
one of my clients needed only 45 hours of my time for VM systems
programming related work. (No new hardware, no VM upgrades, just some
application-related stuff.)

Your IOCP can be handled by z/OS or by z/VM, but not both. Both systems
will dynamically avail themselves of any new hardware added.

As you mentioned, virtual channel-to-channel connections between guests
running under z/VM can replace the real CTCAs that you may have between
LPARs now. You would still need the real CTCA(s) that go from z/VM to
your production machines if they are running in another LPAR, but the path
would be "test z/OS to virtual CTCA to z/VM to real CTCA to LPAR"; i.e.,
no machine running under z/VM would need its own real CTCA hardware.
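
A minimal sketch of wiring two guests together (guest name and device
numbers hypothetical):

   CP DEFINE CTCA 0600
   CP COUPLE 0600 TO ZOSTST2 0600

Each guest defines its own virtual CTCA, and one side couples its device
to the other guest's.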

As you remember, DASD dedicated to your z/OS system is most efficient.
Minidisks are slightly less efficient, but allow easy sharing between
systems in the same LPAR.

z/VM doesn't have a native tape manager; many sites back up their VM
systems with z/OS utilities.

Well, time to go home <G>. IBM is done with the upgrades. I'm sure
others will chime in when they get in on Monday.

Tom Duerbusch
THD Consulting

St. Louis Missouri
Host City
1904 Summer Olympics


Alan Ackerman
2006-02-13 01:27:28 UTC
The big advantage of VM nowadays is the sharing of real memory; LPARs can
already share processors, channels, and devices. Minidisks are pretty
small, and for z/OS there is usually no particular advantage to them.

So the question is: do you have workloads that can take advantage of
shared memory? Production z/OS systems with different memory peaks might
be complementary. Test systems that are only up part of the time are an
even better candidate. If all your z/OS systems have peak storage
requirements at the same time, then memory savings will be limited.

You can overcommit memory -- but the bad news is that when a z/OS guest
takes a page fault, the whole guest stops until the page fault is
satisfied. VSE has implemented handshaking so that only the particular
task whose page faulted has to stop, but so far z/OS has never supported
that. So for production z/OS guests you really don't want to take page
faults; for test z/OS guests this may be tolerable.

There is a CPU cost to running under z/VM, and reliability will be
slightly lower (?), so you have a trade-off between storage savings and
other overhead.

Also, there is the problem of having yet another operating system, with
different commands and different command syntax, for your operations staff
to support. If they have no VM experience, they are likely to oppose you.
People who are used to all those commas get very upset that VM does not
use them! Tuning another operating system may also be a problem for your
performance or capacity planning people.

We use z/VM to run large numbers of DR and system test z/OS guests. We do not run z/OS
production that way. (It somewhat depends on whether you consider DR to be production or test.)
Generally, each group (for example the IMS group, the DB2 group, the z/OS group, etc.) has its
own set of z/OS test guests. This has the big advantage that when the phone rings and the system
programmer has to "leave" to fix a production problem, he (or she) doesn't lose his (or her) test
shot -- when they come back, the guest is waiting where they left off. Most of the pages are
paged out, so they don't cost much. (Even an "idle" z/OS guest consumes cycles, alas.)

Similarly, we have DR guests corresponding to real z/OS LPARs. We use
Capacity on Demand to add processors when we are going to run a DR test
with large numbers of guests all active at the same time.

Another advantage of using VM is that the systems programmers can perform their tests
(including "destructive testing") during regular business hours, instead of having to come in nights
and weekends for test shots. That saves us quite a bit of overtime. Our DR tests are still run
mostly on weekends (the limitation is the number of tape drives, not CPU cycles).
Brian Nielsen
2006-02-13 15:03:24 UTC
That's a fine overview, Tom. I'm just sorry if I gave the impression that
I haven't been involved with VM over the years. On the contrary, it's
merely z/OS as a guest that is not recent experience, so I'm most
concerned with any potential z/OS-specific issues.

Brian Nielsen (a fellow VM bigot)


On Sat, 11 Feb 2006 12:53:30 -0600, Tom Duerbusch
<***@stlouiscity.com> wrote:

>Given your VM knowledge from the 4300 days, z/VM is a lot more than you
>know. It is also a lot less than you remember.
Brian Nielsen
2006-02-13 15:26:58 UTC
On Sun, 12 Feb 2006 19:27:28 -0600, Alan Ackerman
<***@BANKOFAMERICA.COM> wrote:

>The big advantage of VM nowadays is the sharing of real memory; LPARs can
>already share processors, channels, and devices. Minidisks are pretty
>small, and for z/OS there is usually no particular advantage to them.

The minidisk advantage I was thinking about for this environment is
mostly because the DASD is on an ESS 800. Unlike a DS6800, the ESS 800
can't delete individual volumes to free the space for making new volumes
of different sizes; you have to delete every volume in the LCU. I was
thinking how much more flexibility it would add to DASD management on
z/OS if minidisks were used to carve some smaller volumes out of larger
ones when needed, sticking to dedicated volumes otherwise (for PAV
reasons).
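
For instance, two mod-3-sized volumes could be carved out of one mod-9
with directory entries along these lines (addresses and volser
hypothetical):

   MDISK 0A80 3390 0001 3338 BIG001 MR
   MDISK 0A81 3390 3339 3338 BIG001 MR

Each MDISK statement gives the virtual address, device type, starting
cylinder, cylinder count, real volser, and link mode.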

>Generally, each group (for example the IMS group, the DB2 group, the z/OS
>group, etc.) has its own set of z/OS test guests. This has the big
>advantage that when the phone rings and the system programmer has to
>"leave" to fix a production problem, he (or she) doesn't lose his (or her)
>test shot -- when they come back, the guest is waiting where they left
>off. Most of the pages are paged out, so they don't cost much. (Even an
>"idle" z/OS guest consumes cycles, alas.)

That's one of the big benefits I'd included under "new guests on demand".
Right now they take turns reusing a single test LPAR; under VM they can
each have their own.

Brian Nielsen
Brian Nielsen
2006-02-13 15:31:30 UTC
We're currently running z/VM 4.4.0, and I've got z/VM 5.2 ready to go as
soon as the MCL levels on the z/890 are brought up to date. So z/OS would
be running under z/VM 5.2.

We're not using crypto here, but that's a good observation.

Brian Nielsen

On Fri, 10 Feb 2006 17:55:02 -0600, Dave Jones <***@vsoft-software.com>
wrote:

>1) What version of z/VM are you considering? There have been significant
>changes in recent versions w.r.t. z/OS guest support.
Kris Buelens
2006-02-14 19:19:09 UTC
Yes, right, I'm red-faced... I only learned about Q DASD volser recently,
and I always forget it again; my age is definitely showing.

Kris,
IBM Belgium, VM customer support


VM/ESA and z/VM Discussions <VMESA-***@LISTSERV.UARK.EDU> wrote on
2006-02-13 23:18:30:

> On 2/13/06, Kris Buelens <***@be.ibm.com> wrote:

> > Find the address of 310W01 and attach it to SYSTEM
> > Next PIPE helps you find the address
> > PIPE CP Q DASD ALL !LOCATE /310W01/!CONS

> Or Q DASD 310W01 maybe?

> Rob
> --
> Rob van der Heij
> Velocity Software, Inc