Thanks Ron. I did some testing on Sunday, but I did not monitor the dump
space usage very closely during the SPXTAPE load. It is possible that
SPXTAPE uses the DUMP area, because I did notice at one point that my dump
usage was much higher, though it has since dropped back to a reasonable value.
The problem I have on z/VM 5.2 is far worse. I didn't have time to follow up
with IBM today but will tomorrow.
If you have a lot of files to SPXTAPE to a 5.2 system, be careful. Do a test
and monitor the pages loaded against the pages in use reported by Q ALLOC
SPOOL to see if you will run into the same problem. A warm IPL does release
them somehow.
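That before-and-after comparison can be scripted. Here is a minimal CMS REXX sketch of a snapshot EXEC; the EXEC name and the use of CMS Pipelines to capture the response are my own illustrative assumptions, and the exact Q ALLOC SPOOL output layout varies by release, so verify it against your system:

```rexx
/* SPLSNAP EXEC - print a timestamped snapshot of spool allocation. */
/* Run it once before an SPXTAPE LOAD and again after the           */
/* load-complete message appears; SPXTAPE runs asynchronously, so   */
/* wait for completion before taking the second snapshot.           */
say 'Q ALLOC SPOOL at' date() time()
'PIPE CP QUERY ALLOC SPOOL | STEM out.'   /* capture CP response    */
do i = 1 to out.0
   say out.i                              /* echo each response line */
end
```

Comparing the "pages in use" figures from the two snapshots against the page count SPXTAPE reports loaded should show whether the extra "Other" pages are accumulating on your system.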
Hans
-----Original Message-----
From: VM/ESA and z/VM Discussions [mailto:VMESA-***@LISTSERV.UARK.EDU] On
Behalf Of Schmiedge, Ronald
Sent: March 6, 2006 11:28 AM
To: VMESA-***@LISTSERV.UARK.EDU
Subject: Re: Problem moving spool files from z/vm 3.1 to z/vm 5.2 using
spxtape
We find on our z/VM 4.4 system there are a lot of spool pages used when
we do our daily spool offloads. We opened an ETR with IBM and found out
that SPXTAPE is doing this, and it is working as designed.
Do you have DUMP set to DASD? Do you have a SPOOL volume dedicated to
DUMP?
If DUMP is set to DASD, you could try this:
Q DUMP
Q ALLOC SPOOL
Do some of your SPXTAPE loads
Q DUMP again
SET DUMP OFF
SET DUMP DASD
Q ALLOC SPOOL
IBM suggested two things to us: set aside dedicated DUMP space (since
SPXTAPE seems to be using the DUMP file for "working storage"), or set
DUMP OFF and back to DASD after each SPXTAPE load.
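The second suggestion is easy to wrap in an EXEC. A minimal CMS REXX sketch, assuming the userid running it has a privilege class that permits CP SET DUMP (the EXEC name is illustrative):

```rexx
/* DUMPCYC EXEC - release the DUMP working space SPXTAPE ties up    */
/* by cycling the dump allocation after each SPXTAPE load, per the  */
/* workaround described above.  Requires privilege sufficient to    */
/* issue CP SET DUMP on your system.                                */
'CP QUERY DUMP'          /* show the dump target before cycling     */
'CP SET DUMP OFF'        /* free the space used as working storage  */
'CP SET DUMP DASD'       /* re-establish dump space on DASD         */
'CP QUERY DUMP'          /* confirm the dump target is restored     */
'CP QUERY ALLOC SPOOL'   /* check that the spool pages came back    */
```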
Ron Schmiedge
CGI Group Inc
-----Original Message-----
From: VM/ESA and z/VM Discussions [mailto:VMESA-***@LISTSERV.UARK.EDU] On
Behalf Of Hans Rempel
Sent: Monday, March 06, 2006 5:26 AM
To: VMESA-***@LISTSERV.UARK.EDU
Subject: Re: Problem moving spool files from z/vm 3.1 to z/vm 5.2 using
spxtape
Thanks Mike. I read your e-mail earlier but didn't have a chance to
reply.
You're correct. I used SPXTAPE to back up NSS files from a 5.2 system
and found that it used much more space than necessary, about 13%, when I
loaded them onto the 5.2 system. I then IPL'ed and usage dropped to 3%. So
it's not just between releases; SPXTAPE on 5.2 is bad.
Your comments made me comfortable enough to proceed with the SPXTAPE load.
I just added three more spool volumes to the four I had.
Hans
-----Original Message-----
From: VM/ESA and z/VM Discussions [mailto:VMESA-***@LISTSERV.UARK.EDU] On
Behalf Of Mike Hammock
Sent: March 5, 2006 6:16 PM
To: VMESA-***@LISTSERV.UARK.EDU
Subject: Re: Problem moving spool files from z/vm 3.1 to z/vm 5.2 using
spxtape
I can't help much, but may be able to 'reinforce' your experience.
I was helping a customer install one of our systems and migrate their
4.4
system to it and change to 5.2. When we loaded the SPXTAPE spool files
they seemed to occupy 2 to 3 times as much space as they had on the 4.4
system. We had to add a new spool volume to be able to load them all.
But, when we re-IPL'ed, the 'missing space' reappeared and the correct
amount of spool space/cylinders was allocated.
It looks like there may be some kind of SPXTAPE problem on 5.2.
Mike
C. M. (Mike) Hammock
Sr. Technical Support, zFrame & IBM zSeries Solutions
(404) 643-3258
***@csihome.com
-----Original Message-----
From: Janice Calder <***@humber.ca>
Sent by: VM/ESA and z/VM Discussions <VMESA-***@LISTSERV.UARK.EDU>
Sent: 03/05/2006 02:23 PM
To: VMESA-***@LISTSERV.UARK.EDU
Subject: Problem moving spool files from z/vm 3.1 to z/vm 5.2 using spxtape
Please respond to: VM/ESA and z/VM Discussions <VMESA-***@LISTSERV.UARK.EDU>
When loading spool files using spxtape we appear to be
using up a lot of spool pages that are not identified or associated with
regular spool entries but are flagged as in use. VMspool identifies them
as "Other" in the sysuse screen.
I tried loading by userid only, issuing the Q ALLOC SPOOL command
frequently, and the number of spool pages loaded was about one third of
the number being used up.
The SPXTAPE dump on 3.1 dumped approximately 11,000 files using about
800,000 pages, which looks fine. The loading appears to be the problem.
The problem has been reported to IBM, but it does not appear in their
database. We will get more help from them tomorrow, but I would like to
find a workaround today.
Any comments or suggestions would be appreciated.
Thanks
Hans Rempel / Janice Calder
[This E-mail scanned for viruses]