MVS TOOLS AND TRICKS OF THE TRADE                                MARCH 2004

Sam Golob
MVS Systems Programmer
P.O. Box 906
Tallman, New York 10982

Sam Golob is a Senior Systems Programmer. He also participates in library tours and book signings with his wife, author Courtney Taylor. Sam can be contacted at sbgolob@cbttape.org. Information about the CBT MVS Tapes can be found on the web at http://www.cbttape.org.

MODULARIZING AN MVS SYSTEM - PART 2

Last month, we began to talk about an enormously important MVS subject--the strategic placement of essential MVS datasets on the "proper" disk packs. The aim of this exercise is to be able to modularize your MVS system. This means that if you remove some of the disk packs, you can still run part of the system; different components are on different packs. If you want to leave out a component, then you can (so to speak) leave out its packs, and the rest of the system will still run.

In practice, such a concept is easy to talk about, but highly difficult and tricky to actually implement. One reason for this is that some MVS components simply cannot be separated--they require each other. For example, TSO or TCAS requires VTAM, and so do other essential MVS components. So we might say that VTAM is more "basic" than some other components, and therefore it should be placed on a more essential pack, such as the first MVS res pack. The more "dependent" components might be safely put on some other pack.

But even that is not a good rule. If you put TSO on a different pack from the "lowest level" pack, and if for some reason you can't get access to TSO's pack, then you can't fix the rest of the system. So TSO, even though it is dependent on VTAM, is just as essential as VTAM, and practically speaking, they can't be separated from each other.

On the other side, there are often reasons why we have to separate two MVS components and put them on different packs. One such reason might be system performance. For instance, on a heavily loaded production MVS system, we might want to keep production datasets and often-used system datasets away from a pack which is subject to frequent RESERVEs. Or we might try to lessen channel usage to certain heavily used packs, and not put two heavily used datasets on the same pack. A second consideration would apply to packs containing paging datasets. If a non-paging dataset coexists with a paging dataset on a pack, the system paging algorithms are less efficient, so in a heavily loaded situation, this could slow the system down. By now you should begin to see that efficient dataset placement boils down to "knowing an awful lot of stuff" about how MVS works.

These are the kinds of complications that we commonly encounter in our efforts to modularize MVS. There are complex reasons why one component needs to be available together with another component, with the reason not always being a "logical dependency" but a "practical dependency". And sometimes there are just as many reasons why it is better that one component should be kept separate from some other component. Modularizing MVS can be very tricky business--it is very site-dependent, and sometimes it is actually better practice NOT TO DO IT AT ALL!

But the bottom line is that you have to have tried it. You have to make the effort. And while you're making that effort, you have to find out about as many factors as possible affecting how your installation operates. In the end, you may make the decision that it is best NOT to modularize.
But your installation will be far better off, just because you tried. You'll know a lot more about what makes your installation run. And the site will profit greatly because you have acquired this extra knowledge.

So to begin to gain this very essential knowledge, I feel that the first step is to do the exercise of putting all the essential MVS components on ONE single disk pack. Nowadays, we call that process "making an MVS Rescue pack". We talked about doing that last month. In performing this exercise, you'll see that the art, craft, and science of placing MVS functionality all in ONE place will help you later, so you'll be able to divide your big system over three, four, or ten places, and as a result (if you've done it well), you'll have a better running MVS system.

MY RESCUE PACK TIPS

Last month, we mentioned that several schemes and sets of JCL jobs can be found in the CBT Tape collection to help you set up your own one-pack MVS system. Several of the CBT Tape files which contain this kind of help are: Files 022, 164, 434, 609, 613, and 657. These can be obtained at http://www.cbttape.org/ftp/cbt/CBT434.zip for File 434 (for example). Since the server is a UNIX system, this URL is case sensitive. File 657 is still new, and its URL is still http://www.cbttape.org/ftp/updates/CBT657.zip as of the time of this writing.

The idea behind all of these schemes is to build all the structures necessary to IPL MVS on one disk pack, using a "driving MVS system" to build the Rescue system. Remember we said last time that you need an existing MVS system to create a new MVS system. I know that any one of you can customize and follow any of these schemes for yourself. But what I want to show you now is how to make your new MVS Rescue Pack look familiar, so that all of your own good tools are there. The idea here is that logging on to the Rescue system's TSO should look and feel essentially like logging on to the production system's TSO.

I'll assume that you've already created a workable "vanilla" rescue pack, and that you've copied your RACF database (or ACF2 or TSS equivalent) so that you have userids and passwords already defined for it. Now I'll show you how to import your tool collection to the Rescue system.

The first thing you should do is to IPL your new Rescue pack together with all the packs where you keep your tools. Logon with a "bare-bones" LOGON PROC, with all the allocations done via CLIST. That's always a better practice, because it is much harder to get a JCL error in your TSO id when you LOGON. But even if your LOGON PROC specifies all the allocations via JCL, and you have a fairly up-to-date ISPF (I think 4.4 or later is enough), then you can convert your logon allocations from JCL to CLIST in about five minutes. Just do the following:

Issue TSO ISRDDN from the ISPF command line, then enter the keyword "CLIST" on the ISRDDN command line and press ENTER. A CLIST showing all the allocations (including the temporary ones) will appear. CREATE a new member with this CLIST in a system CLIST library that's already in the SYSPROC DD concatenation, EDIT that CLIST to remove the temporary allocations, and use the resulting CLIST in a "bare-bones" LOGON PROC, such as the one I've shown in Figure 1. The SYSPROC DD name in the logon PROC only has to point to the one CLIST library which contains your LOGON CLIST member. When you now re-LOGON with the new PROC, all the allocations should be exactly the same as before, but they'll be dynamic, and not static.
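To give you an idea of what you'll end up with, here is a trimmed-down sketch of what the logon allocation CLIST (member MYCLIST in Figure 1) might look like. This is only an illustration: the "MY." dataset names are invented, the ISP.SISP* names are the usual IBM defaults which your installation may have renamed, and your own ISRDDN-generated CLIST will contain your real names plus many more ALLOC statements. The REUSE operand frees and reallocates a file that is already allocated, so no separate FREE commands are needed.

PROC 0
/* SAMPLE LOGON ALLOCATION CLIST - DATASET NAMES ARE EXAMPLES ONLY */
ALLOC FILE(SYSPROC) SHR REUSE -
      DATASET('MY.CLIST.LIBRARY' 'SYS1.CLIST')
ALLOC FILE(SYSHELP) SHR REUSE DATASET('SYS1.HELP')
ALLOC FILE(ISPPLIB) SHR REUSE -
      DATASET('MY.ISPF.PANELS' 'ISP.SISPPENU')
ALLOC FILE(ISPMLIB) SHR REUSE -
      DATASET('MY.ISPF.MSGS' 'ISP.SISPMENU')
ALLOC FILE(ISPSLIB) SHR REUSE -
      DATASET('MY.ISPF.SKELS' 'ISP.SISPSENU')
ALLOC FILE(ISPTLIB) SHR REUSE -
      DATASET('MY.ISPF.TABLES' 'ISP.SISPTENU')
ALLOC FILE(ISPLLIB) SHR REUSE DATASET('MY.ISPF.LOAD')
EXIT CODE(0)

If you like to go straight into ISPF at logon time, you can put the ISPF command just before the EXIT; otherwise, just type ISPF from the READY prompt once the allocations have finished.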
If a library is missing, you'll still get into TSO and not get a JCL error. If you have the RPF (CBT Tape File 415) or RPF/E (File 417) editor installed, which doesn't need ISPF, you can fix the logon CLIST after such an allocation failure and get back into ISPF pronto. Once you've converted your LOGON allocations to a CLIST, you can add or change the concatenation of datasets with the greatest of ease.

Now customize the allocation CLIST to look like your production allocation CLIST. (To get that, just do ISRDDN for your production TSO userid and enter the CLIST command. Then print out that CLIST and use it as a pattern.) Finally, once your Rescue TSO allocation CLIST looks like your production one, re-logon to TSO, do another ISRDDN, and see how many of the datasets are not on your Rescue res pack. For each of those datasets, copy the dataset to another one on the Rescue res pack, and re-point the Rescue logon CLIST to the corresponding new dataset on the Rescue res pack. Do this until an ISRDDN shows all the allocated datasets to be on the Rescue res pack.

If you need an APF-authorized STEPLIB load library, don't forget to make a copy of your production one on the Rescue res pack, and authorize it in the PROGxx PARMLIB member (a sample entry is shown a little further on). Issue SET PROG=xx on the master console to get the authorization done right away. After the next IPL, the authorization will stay there too.

Doing this exercise will show you where your own tools are located. When you later have to move them, as you begin to reorganize your production MVS system, you'll have a much better handle on how to do it. The same considerations also apply to the system datasets that have been customized by your installation.

Now, with your tools in place on the Rescue res pack, logon to TSO one more time, and play with all your tools for a while, until you're completely satisfied that they are working properly. Run them in batch jobs too, and try them under TSO-in-Batch, just to make sure that you have re-created all the facilities you need. This step cannot be left out. Later, you'll have to test your re-organized production system in the same way, but even more thoroughly.

Make sure that your Rescue res pack has enough JES spool. If not, create an additional spool pack and do a JES cold start to let JES know about the extra spool space. Under JES2, the $DSPL,ALL console command will tell you if the spool dataset(s) are filling up too quickly.

VENDOR PRODUCT CONSIDERATIONS

You may not be able to create a working copy of all your system vendor products on your Rescue res pack. These products may be too hard to re-install, or they may take up too much room. An alternative might be to include all the packs containing the relevant vendor software libraries in your Rescue res pack IPL. Then you can IMPORT CONNECT the relevant user catalogs into the res pack's master catalog, and the Rescue system will then "know" about the vendor product. You might have to change the Rescue pack's linklist and LPA list accordingly, too. This is a way to get your production vendor products to work, while being driven by the Rescue system MVS res pack instead of the production system MVS res pack.

This hybridization of your MVS system functions--driving them by using a different MVS res pack--is actually step number two, after having constructed your Rescue MVS res pack and getting it to work the way you want it. See how many of your installation's processes can be driven just from the (expanded) Rescue res pack.
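As promised above, here is a minimal sketch of the PROGxx entry that APF-authorizes the copied STEPLIB library. The dataset name is the made-up one from Figure 1 and the volume serial is invented too; substitute your real names. (If the library is SMS-managed, you would code the SMS keyword in place of VOLUME.)

APF ADD DSNAME(MYUSRID.APF.LINKLIB) VOLUME(RESCUE)

Remember that SET PROG=xx from the master console activates the entry immediately, and the entry will also be picked up at the next IPL, provided the member is named on the PROG= parameter in IEASYSxx.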
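To make the IMPORT CONNECT step a little more concrete, here is a sketch of the IDCAMS job. The user catalog name (UCAT.VENDOR1), the device type, and the volume serial are all invented for illustration; substitute the real name of the vendor user catalog and the volume it resides on, and supply your installation's job card. Run the job on the Rescue system itself, so that the connector entry goes into the Rescue pack's master catalog (you can also name that catalog explicitly on a CATALOG parameter).

//*  (YOUR INSTALLATION'S JOB CARD GOES HERE)
//CONNECT  EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
    IMPORT OBJECTS( -
           (UCAT.VENDOR1 -
            DEVICETYPE(3390) -
            VOLUMES(VNDR01)) ) -
           CONNECT
/*

If you later want to undo the connection, IDCAMS EXPORT with the DISCONNECT parameter removes the connector entry again.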
Doing this second exercise is the bridge between working with the small self-contained Rescue MVS system and your large full-size MVS production system. As you will undoubtedly find out, there is a lot of difference between the small sandbox system and the installation's full-size MVS. But attempting to expand the small system until it can do a lot of work will be very eye-opening, and extremely helpful to you.

I think that we've actually covered a lot of ground this month, outlining a method of systematically getting a handle on how to separate your MVS system functionality into different pieces. In the concluding article next month, I'll try and cover other miscellaneous related topics.

Thanks for visiting. I hope to see you here again next month.

* * * * * * * * * * * * * * * * * * * * * * *

Figure 1.  BARE-BONES TSO LOGON PROC

Note the APF-authorized STEPLIB DD and also the SYSPROC DD pointing to the CLIST library containing your logon allocation CLIST.

//*--------------------------------------------------------------*//
//*--  "Bare-bones" TSO LOGON Proc  -  PGM=ADFMDF03 gives the   --*//
//*--                                  TSO session manager.     --*//
//*--                               -  PGM=IKJEFT01 gives       --*//
//*--                                  regular TSO READY.       --*//
//*--  Please note the APF-authorized STEPLIB.                  --*//
//*--------------------------------------------------------------*//
//ISPFPROC EXEC PGM=ADFMDF03,REGION=0M,DYNAMNBR=175,
//         PARM='%MYCLIST'
//STEPLIB  DD  DISP=SHR,DSN=myusrid.APF.LINKLIB
//SYSPROC  DD  DISP=SHR,DSN=MY.CLIST.LIBRARY    (contains member MYCLIST)
//SYSPRINT DD  TERM=TS
//SYSTSIN  DD  TERM=TS
//SYSUDUMP DD  SYSOUT=Z