MVS TOOLS AND TRICKS OF THE TRADE
November 1996
Sam Golob, Senior MVS Systems Programmer
sbgolob@cbttape.org

ALTERNATIVES AND DATASET RECOVERY

Once upon a time in my career, I used to sell shoes. Back then, among my clientele, it was common to hear the expression: "Sometimes the shoemaker goes barefoot." Recently, a similar thing happened at my shop.

In our place, we have a person who is an expert at DASD and SMS administration. He was in the middle of setting up SMS on one of our LPARs which didn't use it, and he had put in many hours of work figuring out all the rules that would be needed. These were all kept in a partitioned dataset on his own disk pack. One day, he came to me very disturbed; he had accidentally deleted this pds. I asked him if there was any full-pack backup, and he said no; his private pack was not in the shop's backup schedule. I asked him if he had HBACKed the pds, and he told me that most of his work was done from the side of the shop that is not running HSM. In short, he was stuck. Our expert shoemaker was indeed, temporarily, barefoot.

What did he come to me for? He wanted to see if I could "un-delete" his PDS. I told him I would try.

As you undoubtedly know, un-deleting a PDS is highly non-trivial on an MVS system. The VTOC and index entries get deleted, although (usually) the data is still there. Un-deleting a dataset involves creating a new VTOC entry, with the correct dataset attributes, that points to the correct data, including all of its previous extents. I have some personal experience doing this, and have written about the subject before ("A VTOC Adventure - Part 3" in this column, June 1995). But the MVS operating system and DASD structures have changed subtly since then, as we shall see. The process is not as straightforward as it once was.

What I had to do was to un-index the disk pack, using ICKDSF's BUILDIX command with its OSVTOC parameter. Then I had to find the data on the pack. I looked for available extents with a track mapping tool (any "MAPDISK" type utility will do; I used IEHMAP from File 083 of the CBT MVS Utilities Tape, which can be ordered through NaSPA). Then I had to verify that the deleted data was what I wanted to restore, so I looked at it with the Fullscreen Zap TSO command from File 134 of the CBT Tape, in FULLVOL mode, using ZAP's absolute CCHHR display to examine record 1 of the available extents shown by IEHMAP. I had to make sure I had the correct deleted data.

Then I used the free Fullscreen ZAP command again to look at the VTOC. First I had to find an unused (Format 0) VTOC entry, and then I had to turn it, by direct zapping, into a Format 1 entry which pointed to the correct data, with dataset attributes like the ones the original dataset had. I wrote this procedure up in my June 1995 column (all of my previous columns can be found on File 120 of the CBT Tape), but it is quite delicate, because all of the VTOC fields have to be right.

After I got done with the VTOC entry, all the member data was still there, since the directory was still in place. I thought I was just about finished. All that needed to be done at this point was to copy all the members from this reconstructed pds into a "healthier" pds which was created the regular way. It looked as if we were home free. That's where we were in for a shock.
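(Before going on, here is a rough sketch of that un-indexing step as a batch job, for readers who may need it someday. The ddname, unit, and volume serial are placeholders, and the BUILDIX syntax is my reconstruction rather than the actual job from this incident; check the ICKDSF documentation for your release before running anything like this against a live pack.)

//UNINDEX  EXEC PGM=ICKDSF
//SYSPRINT DD  SYSOUT=*
//* THE VOLUME WHOSE INDEXED VTOC IS TO BE CONVERTED BACK TO OS FORMAT
//VOLDD    DD  UNIT=3390,VOL=SER=MYPACK,DISP=OLD
//SYSIN    DD  *
  BUILDIX DDNAME(VOLDD) OSVTOC
/*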
It seemed that with our system configuration, IEBCOPY had some kind of bug, and it deleted all the pds members before they could be copied out. I now had to restore all the deleted pds members by recreating directory entries for them.

There are several ways to restore deleted pds members without a tremendous amount of effort. See the PDSGAS program from File 316 of the CBT Tape, the RESTORE subcommand of the PDS command processor from File 182 of the CBT Tape, or the FIXPDS utility from File 036 of the CBT Tape. I restored all the pds members, using names $$000001, $$000002, etc., with this sort of tool. After that was all done, I turned the restored pds back over to my colleague, so he could copy the members to another pds. Again IEBCOPY failed. Both my colleague and I tried it several times, and each time IEBCOPY wiped out all the members, so I had to keep restoring them. I needed another way to save the members and copy them out, so my colleague's work wouldn't have to be duplicated.

Fortunately, we have alternatives. It seems that IEBCOPY opens its input dataset for INOUT, and therein lies the problem. If we could find a copying utility which simply opens the input dataset for INPUT and straightforwardly copies the members, we would be home free.

There are actually many alternatives we could try. First, we might offload the members of the pds, in IEBUPDTE format, to a sequential dataset, and then reload them into a new partitioned dataset. If IBM's IEBUPDTE wouldn't work, we could try the LISTPDS program (from File 316 of the CBT Tape), which does a similar job. Or we could try the OFFLOAD program from File 093 of the CBT Tape. Or we could use the REVIEW TSO command from File 134 of the CBT Tape. With REVIEW, you allocate a sequential output dataset to the SYSUT2 ddname. Then you REVIEW the partitioned dataset and get a directory listing for it. Finally, in the eight-byte input field at the top of REVIEW's directory screen, you enter the command "=OFFLOAD", and REVIEW will write all the members, in IEBUPDTE format, to the sequential dataset. (A sketch of this route appears at the end of this section.) Still another alternative would be to use the new COPY facility in later versions of the REVIEW command. That facility allows you to COPY anything you are browsing on a REVIEW screen into another dataset that has been previously allocated to the SYSUT2 ddname.

What I actually did was to use an alternative member-copying utility in a vendor product, which EXCP'ed the input member data on the read and BSAM'ed it out to an output pds as a new member. This worked fine, because only the member data was read, and the input dataset as a whole was not mucked with. The actual tool was the DUP subcommand of the PDSTOOLS (now called STARTOOL) product from Serena in Burlingame, California. Other alternatives could surely work also. You just have to think, using your system knowledge, your tool repertoire, and your creativity.

But my main point with this story is that you should acquire a large collection of alternatives. As a result of all my experiences, I never say that because I have one tool to do a job, I don't need another. I always look for alternative tools to do the same job. When one tool doesn't work, another tool just might.
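As a concrete illustration of the offload-and-reload route, here is a rough sketch using REVIEW and IEBUPDTE. The dataset names, space figures, and record attributes are hypothetical, and the details of REVIEW's =OFFLOAD output should be verified against the documentation on File 134; the general flow, though, is as described above.

Under TSO, allocate the sequential output file and browse the damaged pds:

  ALLOC FI(SYSUT2) DA('YOURID.OFFLOAD.SEQ') NEW CATALOG TRACKS SPACE(15,15) RECFM(F B) LRECL(80)
  REVIEW 'YOURID.DAMAGED.PDS'

On REVIEW's directory screen, enter =OFFLOAD in the eight-byte input field. Then reload the offloaded members into a freshly allocated pds with a batch step like this:

//RELOAD   EXEC PGM=IEBUPDTE,PARM=NEW
//SYSPRINT DD  SYSOUT=*
//* SYSUT2 IS THE NEW PDS; SYSIN IS THE OFFLOADED IEBUPDTE-FORMAT FILE
//SYSUT2   DD  DSN=YOURID.NEW.PDS,DISP=OLD
//SYSIN    DD  DSN=YOURID.OFFLOAD.SEQ,DISP=SHR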
RECOVERING DATA FROM DAMAGED DATASETS

Sometimes, hopefully rarely, one comes across a situation where a disk dataset is damaged in one or several places, but you need the data. In such a case, it would be extremely helpful to save whatever good data you can.

Again, my approach to the subject is: one tool is good to have, two tools are better, and the more tools, the merrier. Many MVS shops have bought IBM's MVS DITTO product, which is quite good in such a case. You can use DITTO to copy off a bunch of records until just before the I/O error in the file. Then you can use DITTO to skip the bad area and copy off more good records. And so on, until you have saved as many good data records as you can.

There is, however, a much quicker way: a batch program called RECOVER. RECOVER originally came from the University of Waterloo in Canada. Currently, it is on the CBT Tape in File 455, as part of Paul Moinil's excellent and large collection of system programmer tools, gathered from "everywhere". Paul Moinil is a highly skilled systems programmer who works in Ispra, Italy, and he has modified many of the tools in his collection.

RECOVER's function is to automate the process of sequential dataset recovery which we briefly described above. RECOVER works by copying the damaged dataset; when it comes to an I/O error, it skips to a pre-specified good area which follows, and then copies further. RECOVER works something like IEBGENER, but for damaged datasets. The SYSUT1 ddname points to the input, the damaged dataset. SYSUT2 points to an output dataset, to which RECOVER will copy records from the input dataset according to instructions specified in the PARM field. DCB information from the input dataset is always copied to the output dataset, so as to avoid additional problems.

The parms work as follows. There are four keywords: COUNT, START, POINT, and WRITE. These can optionally be abbreviated to C, S, P, and W respectively.

COUNT=N specifies the number of I/O errors or end-of-files the program is to accept (default=1).

START=XXXXXXXX specifies the relative track and record, beyond the start of the dataset, where the recovery is to start. The value given is in TTRZ format, in hex (default=00000000, the very beginning of the dataset). If the input dataset is a pds, START=00000000 will copy the directory as part of the output dataset, with the DCB attributes of the data. Of course, the directory will be useless as a directory, but you can edit it out later.

POINT specifies the type of skipping that is to be done if the program encounters bad data. Valid values are T, R, or Z, corresponding to the TTRZ format. In other words, do you want to skip to the next track (POINT=T), to the next block on that track (POINT=R), or to the next block in the dataset, in whatever track or extent it falls (POINT=Z)? The default is POINT=Z, and my opinion is that POINT=Z is probably safest.

Finally, the WRITE keyword specifies whether you want to write a partially filled input buffer out to the output dataset. The idea is this: you never know when you'll encounter an I/O error, and at that moment you'll probably have a partially filled input buffer. Do you want to write those records to the output dataset, or do you want to discard them? WRITE=Y tells RECOVER to write out partially filled buffers at the time of I/O errors. WRITE=N tells the program to discard the records in the partially filled buffer (default=N).

So now you see that RECOVER, if you make the effort to set it up, can on occasion save you oodles of time. Instead of playing games with DITTO for an hour or more, you can set up this batch job and see what sort of recovery data it gives you. If the recovery data is sufficient, then fine.
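To make that concrete, here is a minimal sketch of such a RECOVER job, built from the ddnames and keywords just described. The load library and dataset names are placeholders, and the exact PARM punctuation is my assumption; the program and its documentation are on File 455 of the CBT Tape, so check there before you run it.

//RECOVER  EXEC PGM=RECOVER,PARM='COUNT=2,POINT=Z,WRITE=Y'
//* STEPLIB POINTS TO WHEREVER THE FILE 455 LOAD MODULE WAS INSTALLED
//STEPLIB  DD  DSN=YOURID.CBT455.LOAD,DISP=SHR
//* SYSUT1 IS THE DAMAGED INPUT; SYSUT2 RECEIVES WHATEVER CAN BE SALVAGED
//SYSUT1   DD  DSN=YOURID.DAMAGED.DATA,DISP=SHR
//SYSUT2   DD  DSN=YOURID.SAVED.DATA,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(CYL,(5,5),RLSE)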
Otherwise, you can play with RECOVER's parms and see if another run gives you more satisfactory results.

There is one more alternative I'd like to suggest, as space allows. This is our old favorite browsing tool, the REVIEW TSO command from File 134 of the CBT Tape. More recent versions of REVIEW have the capability of copying the data that you are browsing. The output dataset is pointed to by allocating it to the SYSUT2 ddname beforehand, under your TSO session. The COPY and ADD subcommands of REVIEW will copy data to the output dataset or append additional data to it, and additional keywords control how much data you want to COPY or ADD. At this point, it is apparent that REVIEW can be used for data recovery purposes in much the same way as we described for DITTO, except that with REVIEW, you see the data in a more natural format.

This month, I've made several points. First, the more ways you can do a job, the safer you are in case one of them won't work. Second, some tools are quicker than others that do the same job, and a quicker utility will save you valuable time in a pinch. Third, the CBT MVS Utilities Tape contains many collections of such tools to explore. Making an investment of time in the CBT Tape and other software tool sources, including your shop's vendor products, will expand your skill repertoire, and you'll gain the time back in the long run.

Good luck in all your endeavors, and I hope to see you again next month.