
Grid Computing at DESY


To ensure a response in case of problems, use the Global Grid User Support (GGUS) and/or your VO support rather than private e-mail contacts or internal mailing lists.


Middleware

Projects:

  • SciencePAD

    IGTF

  • EGI IGTF Release

    EMI

  • EMI User Support

    EMI 3 Monte Bianco:

  • EMI 3 - Monte Bianco
  • EMI 3 Monte Bianco Products
  • EMI 3 Monte Bianco Updates

    EPEL:

  • EPEL

    Repos:

  • EPEL-6
  • EMI Software Repository
  • https://emisoft.web.cern.ch/emisoft/dist/EMI/3/sl6/x86_64/ [base] [updates] [third-party]
  • http://nims.desy.de/extra/emi/3/sl6/x86_64/ [base] [updates] [third-party]

    Wikis:

  • Generic Installation Configuration EMI1
  • Giving EMI repositories precedence over EPEL
  • Notes about services from IGI middleware, based on EMI/UMD
  • Troubleshooting Guide for Operational Errors on EGI Sites

  • Grid Information System

  • Worker Node testing for WLCG
  • Atlas EMI Migration (CERN account)
  • EMI Migration
  • SL6 Dependency RPM
  • RPM Compat SLC6

    UMD (EGI)

    Docs:

  • UMD-4

    Repos:

  • UMD Repository

    Wikis:

  • EGI
  • EGI Operations
  • Middleware
  • Troubleshooting Guide for Operational Errors on EGI Sites

    WLCG Wikis:

  • WLCG Twiki
  • WLCG Grid Deployment Board documents
  • Middleware Readiness WG
  • WLCG Grid Deployment
  • WLCG Baseline Versions

  • LCG_ROLLOUT list

    [top]


    Node types:

    [top]



    BATCH

    Documentation:

  • TORQUE Administrator Guide, version 2.5.12
  • TORQUE Administrator Guide, version 4.2.9

      yum install emi-torque-server emi-torque-utils
      yum install perl-DateManip 
    

    Configurations:

      /opt/misc/yaim/cp-ssh_key.sh
    
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n TORQUE_server -n TORQUE_utils
      /opt/glite/yaim/bin/yaim -r -s /opt/misc/yaim/`hostname -s`-info.def -n TORQUE_server -n TORQUE_utils -f config_apel_pbs
    
      vi /etc/glite-apel-pbs/parser-config-yaim.xml
    
      grep ldap /etc/passwd /etc/group (needed for CREAM info system)
    
      vi /etc/ssh/sshd_config
    ...
    # AG
    HostbasedAuthentication yes
    IgnoreUserKnownHosts yes
    IgnoreRhosts yes
    ...
    
      vi /etc/sysconfig/munge
    ...
    DAEMON_ARGS="--num-threads 2"
    ...
    
      cd ~/quattor/scdb9/desy-tools/PBS
    ./pbs_backup.install
    

    Special adjustments for CREAM (example below):

    > qmgr
    
    resources_default.walltime
    resources_max.walltime
    
    resources_default.cput
    resources_max.cput
    
    resources_default.pcput
    resources_max.pcput
    
    resources_default.procct
    resources_max.procct
    
    max_queuable
    max_running 
    

    Services:

      chkconfig nfs on
    
      /etc/init.d/pbs_server status
      #/etc/init.d/maui start
    

    Logs:

      ln -s /var/log/apelparser.log .
    

    Tests:

      grep pbs /etc/services 
    pbs           15001/tcp           # pbs server (pbs_server)
    pbs           15001/udp           # pbs server (pbs_server)
    pbs_mom       15002/tcp           # mom to/from server
    pbs_mom       15002/udp           # mom to/from server
    pbs_resmom    15003/tcp           # mom resource management requests
    pbs_resmom    15003/udp           # mom resource management requests
    pbs_sched     15004/tcp           # scheduler
    pbs_sched     15004/udp           # scheduler
    
      netstat -t -n | grep 15001 | wc
      netstat -t -n | grep 15001 | grep TIME_WAIT | wc
    

    PBS/torque:

    echo "/bin/hostname ; sleep 120" | qsub -q desy -l nodes=1:ppn=4 
    

    Cleanup:

      cd /var/log/torque/server_logs
      20140101.gz
    
      cd /var/lib/torque/server_priv/accounting
      20140101
    
      cd /var/log/mysched
      mypbsacc.log.20140101-000101.gz
      mysched.err.20140101-000101.gz
      mysched.log.20140101-000101.gz
    
      cd /var/lib/mysched
      mypbsacc_20140101.csv
      mypbsacc-hourly_20140101.csv
      mysched_20140101.csv
    
      cd /var/lib/mysched
      jobs_20140101/
      mysched_20140101/
      nodes_20140101/
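
    A pruning sketch for the files and directories listed above, assuming a local retention of 180 days; make sure the accounting records are no longer needed (e.g. by the APEL parser) before deleting:

      find /var/log/torque/server_logs            -maxdepth 1 -name '20*'  -mtime +180 -exec rm -v  {} \;
      find /var/lib/torque/server_priv/accounting -maxdepth 1 -name '20*'  -mtime +180 -exec rm -v  {} \;
      find /var/log/mysched /var/lib/mysched      -maxdepth 1 -name '*20*' -mtime +180 -exec rm -rv {} \;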
    

    PBS/TORQUE Hints:

      CREAM: /usr/libexec/pbs_submit.sh
      ARC:   /usr/share/arc/submit-pbs-job
    
      > qsub job
      > cat job
    #
    #PBS -q desy
    #PBS -l pvmem=4gb
    #PBS -l pmem=4gb
    #
    echo
    hostname -f
    echo
    ulimit -a
    echo
    which java
    echo
    java -version
    echo
    
    
      > vi /etc/torque/mom/config 
    ...
    $ignvmem true
    
      > qmgr -c "l q  desy"
    Queue desy
            queue_type = Execution
            max_queuable = 15000
            max_user_queuable = 5000
            total_jobs = 11
            state_count = Transit:0 Queued:0 Held:0 Waiting:0 Running:11 Exiting:0 Complete:0 
            max_running = 1000
            acl_host_enable = True
            acl_hosts = grid-cr6.desy.de,grid-vm06.desy.de,grid-cr5.desy.de,
                        grid-cr4.desy.de,grid-batch4.desy.de,grid-arc4.desy.de,
                        grid-cr3.desy.de,grid-vm03.desy.de,grid-cr2.desy.de,
                        grid-vm02.desy.de,grid-cr1.desy.de,grid-cr0.desy.de
            resources_max.cput = 60:00:00
            resources_max.pcput = 60:00:00
            resources_max.procct = 1
            resources_max.walltime = 90:00:00
            resources_default.cput = 60:00:00
            resources_default.pcput = 60:00:00
            resources_default.walltime = 90:00:00
            acl_group_enable = True
            acl_groups = biomedsgm,biomedusr,biomedprd,desyusr,desyprd,desysgm,
                         dechusr,dechprd,dechsgm,opsusr,opssgm,dteamprd,dteamsgm,
                         caliceusr,caliceprd,calicesgm,hermesusr,hermesprd,
                         hermessgm,honeusr,honeprd,honesgm,ilcusr,ilcprd,ilcsgm,
                         zeususr,zeusprd,zeussgm,iceusr,iceprd,icesgm,xfelsgm,
                         xfelprd,xfelusr,ildgusr,ildgprd,ildgsgm,geantusr,geantprd,
                         geantsgm,dteamusr,caliceger,ilcger,honeoop,desytst,opsplt,
                         ilcplt,olympusr,olympprd,olympsgm,enmrsgm,enmrusr,belleusr,
                         bellesgm,belleprd,ghepusr,ghepprd,ghepsgm
            mtime = Fri Jan 23 10:20:31 2015
            resources_available.nodect = 2048
            resources_assigned.mem = 0b
            resources_assigned.nodect = 11
            resources_assigned.vmem = 0b
            enabled = True
            started = True
    

    [top]


    VOBOX

    Docs: WLCG vobox

    puppet installation

    Puppet files:

      vi ./features/grid/files/etc/grid-security/grid-mapfile.hone
    

    Configuration:

      /opt/misc/yaim/get-grid-host-cert.sh `/bin/hostname -f`
    
      /opt/glite/yaim/bin/yaim -c -s /root/vobox-siteinfo.def -n VOBOX
    
      #\rm /etc/cron.d/edg-mkgridmap
    
      cat /etc/grid-security/grid-mapfile
    

    [top]


    WMS/LB

    Docs: System Administrator Guide for WMS for EMI, site-info.def, WMS best practices

    puppet installation (EMI3)

      #install host certificate
      /opt/misc/yaim/get-grid-host-cert.sh `/bin/hostname -f`
    
      /opt/glite/yaim/bin/yaim -c -s /root/wms-siteinfo.def -n WMS -n LB
    
      cd /opt/misc/tools/WMS/wmsmon5/
      ./install5.sh
    
      #self test (needed for poise)
      /opt/misc/tools/POISE/WMSTests/self_install5.sh
      yum install glite-wms-ui-commands
    
      cd /opt/misc/tools/WMS
      ./wms.install `/bin/hostname -f`
    

    [top]


    Old:

    Upgrade (EMI1 -> EMI-2):

      yum update
      yum remove emi-wms
      yum install emi-wms emi-lb condor-emi kill-stale-ftp
    
      save glite-wms-purger.cron
    
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n WMS -n LB
    
      ln -s /etc/profile.d/grid-env.sh .
      save grid-env.sh
      vi grid-env.sh
    ...
    #AG https://ggus.eu/tech/ticket_show.php?ticket=87802
    gridenv_set         "ICE_DISABLE_DEREGISTER" "1"
    #AG
    ...
      /etc/init.d/glite-wms-ice restart
    
      cat /etc/cron.d/glite-wms-create-host-proxy.cron
      \rm -v /etc/cron.d/wmproxy_logrotate
    
    
      diff glite-wms-purger.cron.save glite-wms-purger.cron
    

    Installation:

      #yum install ca-policy-egi-core
      #yum install emi-wms condor-emi
      #yum install glite-wms-ui-commands
    
      yum install ca-policy-egi-core emi-wms condor-emi glite-wms-ui-commands -y
    

    yaim:

      /opt/misc/yaim/get-grid-host-cert.sh `/bin/hostname -f`
    
      vi /etc/my.cnf
    ...
    # AG (https://wiki.egi.eu/wiki/WMS_best_practices)
    innodb_file_per_table
    default-storage-engine=InnoDB
    
      vi /opt/misc/yaim/`hostname -s`-info.def
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n WMS
    
      # check!
      ls -l /etc/nsswitch.conf
      chmod a+r /etc/nsswitch.conf
    
      #
      # check
      #
      echo $GLOBUS_LOCATION
      vi /usr/sbin/glite-wms-create-proxy.sh
    ...
      proxy=`"/usr/bin/grid-proxy-init" -q \
      proxy=`"${GLOBUS_LOCATION}/bin/grid-proxy-init" -q \
    ...
    

    Configs:

      glite-wms-get-configuration
    
      # EMI1
      vi /etc/logrotate.d/lcmaps
      /var/log/glite/lcmaps.log
    
      ln -s /etc/grid-security/gridmapdir/ .
    
      ln -s /etc/glite-wms/glite_wms.conf .
    
      ln -s /etc/glite-wms/glite_wms_wmproxy.gacl .
    
    #  save /etc/lcmaps/lcmaps.db
    #  vi /etc/lcmaps/lcmaps.db
    #...
    ## This is like the CREAM CE
    ## DN-local -> DN-pool -> VO-pool
    #voms:
    #vomslocalgroup -> vomslocalaccount
    #vomslocalaccount -> good | vomspoolaccount
    #vomspoolaccount -> good
    #
    ##voms:
    ##localaccount -> good | poolaccount
    ##poolaccount -> good | vomslocalgroup
    ##vomslocalgroup -> vomspoolaccount
    
    #  cd ~/quattor/scdb9/desy-tools/WMS
    

    Security:

    #  vi /etc/httpd/conf.d/ssl.conf
    #...
    #SSLProtocol all -SSLv2 -SSLv3
    #SSLCipherSuite DEFAULT:!EXP:!NULL:!SSLv2:!SSLv3:!DES:!IDEA:!SEED:!RC4:!MD5:+3DES
    #...
    
      vi /etc/glite-wms/glite_wms_wmproxy_httpd.conf
    ...
    SSLProtocol all -SSLv2 -SSLv3
    ...
    

    Keep disk clean:

      ln -s /etc/cron.d/glite-wms-purger.cron
      save glite-wms-purger.cron
      vi /etc/cron.d/glite-wms-purger.cron
    ...
    #
    # DESY-HH
    #
      3 *   * * mon-sat glite . /usr/libexec/grid-env.sh ; /usr/sbin/glite-wms-purgeStorage.sh -l /var/log/wms/glite-wms-purgeStorage.log -p /var/SandboxDir -t 345600 > /dev/null 2>&1
    

    This _MIGHT_ be needed to run on ARC-CEs: (testing on grid-wms15 2011-10-14 10:40h)

    ##  save /opt/condor-c/local.`hostname -s`/condor_config.local
    ##  vi /opt/condor-c/local.`hostname -s`/condor_config.local
    ###AG NORDUGRID_GAHP= $(SBIN)/nordugrid_gahp
    ##NORDUGRID_GAHP = $(RELEASE_DIR)/sbin/nordugrid_gahp
    
    ## /etc/init.d/glite-wms-jc restart
    

      vi /etc/condor/condor_config.local
    ...
    #
    # AG
    #
    NORDUGRID_GAHP = /usr/sbin/nordugrid_gahp
    

    POISE:

      /opt/misc/tools/POISE/WMSTests/self_install5.sh
    

    WMSMON:

      cd /opt/misc/tools/WMS/wmsmon5 ; ./install5.sh ; cd
      #vi /etc/cron.d/wms-monitoring5
    
      /etc/init.d/glite-lb-locallogger status
    glite-lb-logd not running (stale pidfile)
    glite-lb-interlogd not running (stale pidfile)
    
    rm /var/glite/glite-lb-logd.pid
    
      /etc/init.d/glite-lb-locallogger status
    glite-lb-logd not running (disabled)
    glite-lb-interlogd not running (disabled)
    
      tail -1 /var/tmp/data_`hostname -s`_week.dat
    

    Condor:

      du -hs  /var/condor/log/condor
      ls -ltr /var/condor/log/condor
    
      find /var/condor/log/condor -type f -name core.\*
      #find /var/condor/log/condor -type f -name core.\* -ctime +1 -exec \rm -v {} \;
    
      less /var/condor/log/condor/SchedLog
      less /var/condor/log/GridmanagerLog.glite
    
      /etc/condor/condor_config
      /etc/condor/condor_config.local
    

    Logs:

      ln -s /var/log/globus-gridftp.log .
      ln -s /var/log/gridftp-session.log .
    
      ln -s /var/log/wms/lcmaps.log .
      ln -s /var/logmonitor/CondorG.log .
      ln -s /var/log/wms/workload_manager_events.log .
      ln -s /var/log/wms/logmonitor_events.log .
      ln -s /var/log/wms/jobcontroller_events.log .
      ln -s /var/log/wms/ice.log .
      ln -s /var/log/wms/httpd-wmproxy-errors.log .
      ln -s /var/log/wms/httpd-wmproxy-access.log .
      ln -s /var/log/wms/wmproxy.log .
    

    Hints:

    Handle 'LCMAPS' error (wrong hard link in gridmapdir):

      glite-wms-job-submit ...
    ...
    Warning - LCMAPS failed to map user credential
    
    Method: getFreeQuota
    ...
    
      ls -li /etc/grid-security/gridmapdir | less
    
      \rm %2d....
    
      #ls -l /var/lib/myproxy/
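
    Background for the fix above: gridmapdir leases are hard links, i.e. a pool-account file and a URL-encoded DN entry (name starting with '%') share one inode. A sketch to list the mapped pairs grouped by inode:

      find /etc/grid-security/gridmapdir -links 2 -printf '%i %f\n' | sort -n | less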
    

    If the validity of the VOMS server's host certificate is shorter than the requested VOMS-proxy lifetime (--valid), the WMS does not work:

      > less wmproxy.log
    ...
    Remote GRST CRED: Not Available
    ...
      versus
    ...
    Remote GRST CRED: VOMS 1402994118 1403454918 ...
    ...
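
    A quick check (a sketch, fetching the host certificate via the voms-admin port 8443): compare the certificate's notAfter date with the lifetime requested via --valid:

      echo | openssl s_client -connect grid-voms.desy.de:8443 2>/dev/null | openssl x509 -noout -enddate
      voms-proxy-init -voms desy --valid 24:00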
    

    Check the number of directories per user (limit: 31999):

  • bug #86490
  • bug #87651

      find /var/proxycache -name userproxy.pem | wc
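
    Alternatively, count the cache subdirectories directly against the limit (a sketch, assuming one subdirectory per cached proxy directly under /var/proxycache):

      find /var/proxycache -mindepth 1 -maxdepth 1 -type d | wc -l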
    

    Cleanup:

      /usr/bin/queryDb -v -u -S -G 
      export X509_USER_PROXY=/var/glite/wms.proxy
      glite-lb-purge -m _server_ _jobid_
    
    
      for f in `grep -l CLEAR.REASON=\"TIMEOUT\" /tmp/glite-lbproxy-ilog_events*`; do echo $f; rm -f $f; done
    

    Admin:

      condor_q -long -attributes x509UserProxyVOName,GridJobStatus,x509UserProxyFQAN,CEInfoHostName
      condor_q
    
      /usr/bin/queryDb -v -u -S -G   
    
      /usr/bin/queryDb -c glite_wms.conf
      /opt/condor-7.4.2/bin/condor_q
    

    Problems:

  • twiki
  • GGUS ticket
    1) First of all, verify the ICE crash:
    
    su - glite
    /usr/bin/glite-wms-ice --conf glite_wms.conf
    
    and check whether its memory consumption exceeds the limit.
    
    The procedure is compiled into:
    
      /opt/misc/tools/WMS/wms-clean-ice.sh
    
    2) Switch back to the root user:
    cd /var/ice/persist_dir
    
    3) Write all myproxyurl entries without a domain part (no dot) into the file "file.txt":
    
    sqlite3 ice.db "select myproxyurl from delegation where myproxyurl not like '%.%';" > file.txt
    
    4) Strip empty lines: grep -v ^$ file.txt > file1.txt
    
    5) generate the instructions to update the ICE DB, and put them in a script
    
    cat file1.txt | gawk '{print "sqlite3 /var/ice/persist_dir/ice.db \"update delegation set myproxyurl=\x27"$0".desy.de\x27 where myproxyurl=\x27"$0"\x27;\""}' > script
    
    (This assumes that almost all jobs are bound to your myproxy server grid-px.desy.de; otherwise, use a domain other than desy.de in the command above.)
    
    6) execute the script:
    
    chmod +x script
    ./script
    

    Old/expired proxies:

      cd /var/glite/spool/glite-renewd
      grep hone *.data | awk -F ":" '{print $1}'
      openssl x509 -text -noout -in *.0
    
      ls -l /var/proxycache/*Lobodzinski*
    
      ls -l /var/proxycache/cache/*Lobodzinski*
    

    [top]


    UI

    Puppet installation:

      /opt/glite/yaim/bin/yaim -c -s /root/ui-siteinfo.def -n UI
    

    WGS installation:

    
      #
      # 2013-08-07 (AG): nafhh-belle02 (SLD63 -> SLD64)
      #
      yum remove  emi-version emi-release -y
      yum remove  globus* emi* glite* gfal* dcap* voms* -y
    
      rpm -ivh http://emisoft.web.cern.ch/emisoft/dist/EMI/3/sl6/x86_64/base/emi-release-3.0.0-2.el6.noarch.rpm
      wget http://repository.egi.eu/sw/production/cas/1/current/repo-files/EGI-trustanchors.repo -O /etc/yum.repos.d/EGI-trustanchors.repo
    
      yum clean all
      yum install yum-priorities yum-protectbase -y
      yum install emi-ui --disablerepo=sld* --enablerepo=epel*
    
      #yum install dcap-libs-2.47.7-1.el6.x86_64 dcap-libs-2.47.7-1.el6.i386
      #yum --enablerepo=epel install emi-ui
      #yum reinstall dcap-libs-2.47.7-1.el6.x86_64 dcap-libs-2.47.7-1.el6.i386
    
      vi /afs/desy.de/project/glite/UI/siteinfo/wgs-site-info.def
    
      \cp -r /afs/desy.de/project/glite/UI/siteinfo/ /root/.
    
      vi     /root/siteinfo/wgs-site-info.def
    ################################################
    #
    # siteinfo/wgs-site-info.def for a UI
    #
    ################################################
    
    YAIM_LOGGING_LEVEL=INFO
    
    SITE_NAME=DESY-HH
    
    WMS_HOST=grid-wms.desy.de
    PX_HOST=grid-px.desy.de
    BDII_LIST="grid-bdii.desy.de:2170"
    
    UIWMS_SERVICE_DISCOVER=no
    
    USERS_CONF=/root/siteinfo/users.conf
    GROUPS_CONF=/root/siteinfo/groups.conf
    
    FUNCTIONS_DIR=/opt/glite/yaim/functions
    
    SITE_EMAIL=grid@desy.de
    
    OUTPUT_STORAGE=/tmp
    
    GLOBUS_TCP_PORT_RANGE="20000,25000"
    
    VOS="atlas belle biomed calice cms dech desy dteam enmr.eu ghep hermes hone icecube ilc olympus xfel.eu zeus"
    
    X509_USER_PROXY="~/k5-ca-proxy.pem"
    
       /opt/glite/yaim/bin/yaim -c -s /afs/desy.de/project/glite/UI/siteinfo/wgs-site-info.def -n UI
    
       vi /etc/profile.d/grid-env.sh
    
       vi /etc/yum.repos.d/emi3*
    ...
    enabled=0
    ...
    
       yum update --disablerepo=sld* --disablerepo=epel* --enablerepo=emi*
    

    wmsmon:

      rpm -ivh /opt/misc/tools/WMS/wmsmon/wmsmon-1.0.4-3.noarch.rpm
      save /var/wmsmon/bin/monitoring-cycle.sh
      save /var/wmsmon/etc/wmsmon.conf
      \cp /opt/misc/tools/WMS/wmsmon/monitoring-cycle.sh /var/wmsmon/bin/monitoring-cycle.sh
      \cp /opt/misc/tools/WMS/wmsmon/wmsmon.conf         /var/wmsmon/etc/wmsmon.conf
    
      yum install gnuplot
    
      /opt/misc/yaim/cp-ssh_key.sh
    
      #groupadd -g 998 glite
      #useradd -g glite -u 998 -c "gLite User" -m glite
      #copy ssh keys of root to glite
    

    yaim:

      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n UI
    

    Logs:

      ln -s /opt/glite/yaim/log/yaimlog .
    

    edg-mkgridmap

      yum install edg-mkgridmap
    
      cat /etc/edg-mkgridmap.conf
    #
    # Testing: /etc/edg-mkgridmap.conf
    #
    group vomss://grid-voms.desy.de:8443/voms/desy desy
    
      /usr/sbin/edg-mkgridmap --conf=/etc/edg-mkgridmap.conf
    ...
    

    UI tar (deprecated)

    Docu:

  • EMI2Tarball
  • EMI UI/WN Tarballs
  • site-info.def

    Repo:

  • http://emisoft.web.cern.ch/emisoft/dist/EMI/2/sl5/x86_64/tgz/
  • http://emisoft.web.cern.ch/emisoft/dist/EMI/2/sl6/x86_64/tgz/

  • Matt's Repo
  • EMI UI

  • https://ggus.eu/ws/ticket_info.php?ticket=81496
  • tarball-support@cern.ch

    Repos:

  • SL5 EMI3 Test
  • SL6 EMI3 Test
  • SL5 EMI3 Production
  • SL6 EMI3 Production

    Installation:

      cd /tmp
    
      wget http://repository.egi.eu/mirrors/EMI/tarball/sl6ui/emi-ui-2.6.1-1_v4.sl6.tgz
      wget http://repository.egi.eu/mirrors/EMI/tarball/sl6ui/emi-ui-2.6.1-1_v4.sl6.os-extras.tgz
    
      wget http://repository.egi.eu/mirrors/EMI/tarball/sl5ui/emi-ui-2.6.1-1_v4.sl5.tgz
      wget http://repository.egi.eu/mirrors/EMI/tarball/sl5ui/emi-ui-2.6.1-1_v4.sl5.os-extras.tgz
    
      cd grid/glite
    
      tar -zxvf ...
    

    Changes needed:

      # solved in v4: vi $INSTALL_ROOT/opt/glite/yaim/defaults/emi-ui_tar.post
    ...
    #AG FUNCTIONS_DIR="${INSTALL_ROOT}/glite/yaim/functions"
    FUNCTIONS_DIR="${INSTALL_ROOT}/opt/glite/yaim/functions"
    ...
    
      # solved in v4: vi $INSTALL_ROOT/opt/glite/yaim/bin/yaim
    ...
    #AG for i in ${YAIM_ROOT}/glite/yaim/etc/versions/*; do
    for i in ${YAIM_ROOT}/opt/glite/yaim/etc/versions/*; do
    
    
    
      vi $INSTALL_ROOT/usr/lib64/python2.4/site-packages/wmsui_checks.py (SL5)
      vi $INSTALL_ROOT/usr/lib64/python2.6/site-packages/wmsui_checks.py (SL6)
    ...
      #AG pathList = ['/','/usr/local/etc' , '']
      installroot = os.environ['EMI_UI_CONF']
      pathList = ['/','/usr/local/etc', '', installroot]
    ...
    

    yaim:

      cp -vr ../UI/siteinfo/ .
      vi     siteinfo/afs-site-info.def
      ./opt/glite/yaim/bin/yaim -c -s siteinfo/afs-site-info.def -n UI_TAR
    

    [top]


    VOMS

    Docu: Known Issues VOMS mysql 2.0.0

    Hints:

    If the validity of the VOMS server's host certificate is shorter than the requested VOMS-proxy lifetime (--valid), at least the WMS does not work:

      > less wmproxy.log
    ...
    Remote GRST CRED: Not Available
    ...
    versus
    ...
    Remote GRST CRED: VOMS 1402994118 1403454918 ...
    ...
    

    Puppet

    System Administrator guide

      #yum install ca-policy-egi-core
      #yum install fetch-crl
      #yum install emi-voms-mysql
    
      /opt/misc/yaim/get-grid-host-cert.sh $(/bin/hostname -f)
    
      save /etc/security/limits.conf
      vi   /etc/security/limits.conf
    ...
    voms          soft    nofile  63536
    voms          hard    nofile  63536
    ...
    
      #
      # java (15 VOs)
      #
      save /etc/sysconfig/voms-admin
      vi   /etc/sysconfig/voms-admin
    ...
    VOMS_JAVA_OPTS="-Xms375m -Xmx750m -XX:MaxPermSize=2048m"
    ...
    
      #
      # CA
      #
     #yum localinstall /opt/misc/dCache-CA/ca_dCacheORG-2.0-5.noarch.rpm
      yum localupdate  /opt/misc/dCache-CA/ca_dCacheORG-2.2-1.noarch.rpm # 2016-02-12
    
      rpm -q ca-policy-egi-core
      /usr/sbin/fetch-crl
      chkconfig fetch-crl-cron on
      service fetch-crl-cron start
      touch /etc/sysconfig/fetch-crl
    
      #
      # mysql
      #
      service mysqld start
      mysqladmin -u root password adminPassword -p
      mysqladmin -u root -h grid-vm01.desy.de password adminPassword -p
      chkconfig mysqld on
    
      #
      # migrate
      #
      mysqldump --password="..." voms_desy > voms_desy.sql
      cat voms_desy.sql | mysql -p...  voms_desy 
      #mysql_fix_privilege_tables --verbose --password=adminPassword
    
    
      #
      # voms-admin
      #
      ls -l /etc/voms-admin/
      save  /etc/voms-admin/voms-admin-server.properties
      vi    /etc/voms-admin/voms-admin-server.properties
    ...
    host=grid-vm01.desy.de
    ...
    
      #
      # voms
      #
      # add --createdb on the first installation to create the database
      voms-configure install --vo testvo \
      --dbtype mysql \
      --dbname voms_testvo \
      --dbauser root --dbapwd pwd \
      --dbusername voms --dbpassword pwd \
      --core-port 15199 \
      --smtp-host smtp.desy.de \
      --mail-from mail@mail \
      --admin-cert usercert.pem
    
      vi /etc/voms-admin/voms-admin-server.properties
    
      ls -l /etc/voms-admin/testvo
      cat   /etc/voms-admin/testvo/lsc
      cat   /etc/voms-admin/testvo/vomses
      cat   /etc/voms-admin/testvo/vo-aup.txt
    
      vi    /etc/voms-admin/testvo/service.properties
    
    
      ls -l /etc/voms/testvo
      cat   /etc/voms/testvo/voms.conf
      cat   /etc/voms/testvo/voms.pass
    
      # if migrated!
      voms-configure upgrade --vo desy
    
    
      #
      # notification
      #
      vi /etc/voms-admin/desy/service.properties
    
    
      #
      # bdii
      #
      rpm -qf /usr/sbin/slapd
      save /etc/sysconfig/bdii
      vi   /etc/sysconfig/bdii
    ...
    SLAPD_CONF=/etc/bdii/bdii-slapd.conf
    SLAPD=/usr/sbin/slapd
    BDII_RAM_DISK=yes
    ...
      voms-config-info-providers -s DESY-HH -e
      chkconfig bdii on
      service bdii start
      ldapsearch -x -h localhost -p 2170 -b 'GLUE2GroupID=resource,o=glue' objectCLass=GLUE2Service
      ldapsearch -x -h localhost -p 2170 -b 'GLUE2GroupID=resource,o=glue'
    
      lcg-infosites --vo testvo voms --is grid-vm01.desy.de
    
      #
      # services
      #
      service voms start
      service voms-admin start
    
      #
      # logs
      #
      less /var/log/voms/voms.testvo
      less /var/log/voms-admin/server.log
      less /var/log/voms-admin/voms-admin-testvo.log
    
      ln -s /var/log/voms-admin/*.log .
      ln -s /var/log/voms/voms.* .
    

    https://grid-vm01.desy.de:8443/voms/testvo


    Old

    Installation: (EMI-2 needs >= SL5.7)

      yum install ca-policy-egi-core
      yum install emi-voms-mysql xml-commons-apis
    
      /opt/misc/yaim/get-grid-host-cert.sh grid-voms.desy.de
    
      scp gellrich@pal:.globus/usercert.pem  /root/gellrich_usercert.pem
    
      # (2011-09-05) rpm -ivh /opt/misc/DESY-CA/ca_DESY-0.02-1.noarch.rpm
    
    
    

    yaim:

      vi ../grid-voms-info.def
    ...
    INSTALL_ROOT=/usr
    
    BDII_USER=ldap
    BDII_GROUP=ldap
    BDII_HOME_DIR=/var/lib/ldap
    BDII_RAM_DISK=no
    
    MYSQL_PASSWORD=...
    
    VOMS_HOST=grid-voms.desy.de
    VOMS_ADMIN_INSTALL=true
    VOMS_ADMIN_SHOW_FULL_DN=true
    VOMS_DB_HOST=localhost
    VOMS_DB_TYPE=mysql
    #first-time VOMS_DB_DEPLOY=true
    VOMS_DB_DEPLOY=false
    
    VOS="calice desy ghep hermes hone icecube ilc ildg olympus xfel.eu zeus"
    
      vi /opt/misc/yaim/vo.d/desy
    VOMS_PORT="15104"
    VOMS_DB_NAME="voms_desy"
    VOMS_DB_USER="voms"
    VOMS_DB_PASS="voms"
    VOMS_CORE_TIMEOUT=691200
    VOMS_ADMIN_DEPLOY_DATABASE="false"
    VOMS_ADMIN_SMTP_HOST="smtp.desy.de"
    VOMS_ADMIN_MAIL=""
    VOMS_ADMIN_CERT="/root/gellrich_usercert.pem"
    VOMS_ADMIN_WEB_REGISTRATION_DISABLE=false
    
      /opt/glite/yaim/bin/yaim -v -s /opt/misc/yaim/`hostname -s`-info.def -n VOMS
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n VOMS
    

    Configs:

      #chkconfig bdii --list
    
      chkconfig voms --list
      chkconfig --add voms
      chkconfig voms on
    
      chkconfig voms-admin --list
      chkconfig --add voms-admin
      chkconfig voms-admin on
    
      vi /etc/tomcat5/tomcat5.conf
    ... 
    CATALINA_OPTS="-Xmx4096M -server -Dsun.net.client.defaultReadTimeout=240000 -XX:MaxPermSize=1024m"
    
    
      vi /etc/security/limits.conf
    #AG
    tomcat          soft    nofile  63536
    tomcat          hard    nofile  63536
    
    tomcat          soft    nproc   16384
    tomcat          hard    nproc   16384
    
    voms            soft    nofile  63536
    voms            hard    nofile  63536
    
    voms            soft    nproc   16384
    voms            hard    nproc   16384
    
      tomcat@grid-voms: [~] ulimit -n -u
    open files                      (-n) 63536
    max user processes              (-u) 16384
    
    
      # needed by bdii
      cd /usr/etc
      mv voms voms.orig
      ln -s /etc/voms .
    
    
      ##ln -s /etc/profile.d/grid-env.sh /etc/profile.d/voms.sh
    
      #vi /etc/voms/desy/voms.conf
      #vi /etc/voms-admin/desy/vomses
      ##vi /etc/voms-admin/desy/voms.service.properties
    
      # MUST exist for fetch-crl
      ls -l /var/lock/subsys/fetch-crl-cron
    
      vi /etc/voms-admin/testvo/vomses
    ...
    grid-voms.desy.de
    ...
    
      vi /etc/voms/<vo>/voms.conf
    ...
    --uri=grid-voms.desy.de:151NN
    

    AUP:

    Orig:

    Logs:

      less /opt/glite/yaim/log/yaimlog
    
      less /var/log/bdii/bdii-update.log
    
      ln -s /var/log/tomcat5/catalina.out .
    
      ln -s /var/log/voms/voms.* .
      ln -s /var/log/tomcat5/voms-admin-*.log .
    

    Tests:

      ldapsearch -LLL -x -h localhost -p 2170 -b 'o=infosys'
    
      export X509_USER_CERT=/etc/grid-security/hostcert.pem
      $GLITE_LOCATION/bin/voms-admin --vo desy --host grid-vm01.desy.de list-users
    /O=GermanGrid/OU=DESY/CN=Andreas Gellrich, /C=DE/O=GermanGrid/CN=GridKa-CA - Andreas.Gellrich@desy.de
    

    Issues:

      > openssl x509 -noout -subject -in .globus/usercert.pem
    subject=/C=UK/O=eScience/OU=Edinburgh/L=NeSC/CN=ukqcdcontrol.epcc.ed.ac.uk/emailAddress=jamesp@epcc.ed.ac.uk
    
      > voms-proxy-init -voms ildg -debug
    Your identity:
    /C=UK/O=eScience/OU=Edinburgh/L=NeSC/CN=ukqcdcontrol.epcc.ed.ac.uk/Email=jamesp@epcc.ed.ac.uk
    
      > cat voms.ildg
    Thu Aug 25 11:57:29 2011:grid-voms.desy.de:vomsd[11590]: msg="LOG_INFO:REQUEST:Run (vomsd.cc:729): 
    user: /C=UK/O=eScience/OU=Edinburgh/L=NeSC/CN=ukqcdcontrol.epcc.ed.ac.uk/emailAddress=jamesp@epcc.ed.ac.uk"
    Thu Aug 25 11:57:29 2011:grid-voms.desy.de:vomsd[11590]: msg="LOG_ERROR:REQUEST:get_userid (vomsd.cc:1412):Error in executing request!"
    Thu Aug 25 11:57:29 2011:grid-voms.desy.de:vomsd[11590]: msg="LOG_ERROR:REQUEST:get_userid (vomsd.cc:1434):ildg: User unknown to this VO."
    

    [top]


    CREAM

    Docu:

  • System Administrator Guide for CREAM for EMI-1 release
  • System Administrator Guide for CREAM for EMI-2 release
  • 3.14 Self-limiting CREAM behavior
  • Troubleshooting guide for CREAM
  • site-info.def
  • CREAM Manual Tuning

  • Installation And Configuration Notes For CREAM Using SGE As Batch System

    Admin

  • How To Purge Jobs From The CREAMDB

    On CREAM: > JobDBAdminPurger.sh
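
    A typical invocation might look as follows (a sketch: the '-s status,days' form follows the CREAM admin guide; <dbuser>/<dbpassword> stand for the creamdb MySQL credentials, and the options should be verified against the script's usage message):

      # purge e.g. REGISTERED jobs older than 2 days
      JobDBAdminPurger.sh -u <dbuser> -p <dbpassword> -s REGISTERED,2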

    User:

      glite-ce-job-status -a -s IDLE -e ...
      glite-ce-job-purge -e ...
    

    Puppet:

    Update:

    
      #
      # Update 8 (05.09.2013) - v. 3.5.1-1 (CANL, v.2.1.2)(canl-java-tomcat-0.1.13-1.el6.noarch)
      #
      #cd /usr/share/java/tomcat6/
      #ln -s /usr/share/java/canl-java-tomcat.jar .
      #ln -s /usr/share/java/canl.jar .
      #ln -s /usr/share/java/commons-logging.jar . 
      #ln -s /usr/share/java/commons-io.jar .
      #ln -s /usr/share/java/bcprov.jar  .
    
      #
      # Update 17 (11.06.2014) - v. 3.8.0-1 CREAM, v.1.16.3
      #
      \rm /var/lib/tomcat6/webapps/ce-cream/WEB-INF/lib/glite-lb-client-java.jar
      ln -s /usr/lib/java/glite-lb-client-java.jar /var/lib/tomcat6/webapps/ce-cream/WEB-INF/lib/glite-lb-client-java.jar
    
    
      #
      # always
      #
    
      # java (MD5 in CAs)
      vi /usr/lib/jvm/jre/lib/security/java.security
    #AG jdk.certpath.disabledAlgorithms=MD2, MD5, RSA keySize < 1024
    jdk.certpath.disabledAlgorithms=MD2, RSA keySize < 1024
    
      save /etc/lrms/scheduler.conf
      save /etc/blah.config
      ###save /etc/tomcat6/tomcat6.conf
    
      /opt/glite/yaim/bin/yaim -c -s /root/cream-siteinfo.def -n creamCE -n TORQUE_utils
    
      diff /etc/lrms/scheduler.conf.save  /etc/lrms/scheduler.conf
      diff /etc/blah.config.save          /etc/blah.config
      ###diff /etc/tomcat6/tomcat6.conf.save /etc/tomcat6/tomcat6.conf
    
      \cp /etc/lrms/scheduler.conf.save  /etc/lrms/scheduler.conf
      \cp /etc/blah.config.save          /etc/blah.config
      ###\cp /etc/tomcat6/tomcat6.conf.save /etc/tomcat6/tomcat6.conf
    
      service gLite restart
    
      # change info system 
      #vi    /root/cream-siteinfo.info
      #/opt/glite/yaim/bin/yaim -r -s /root/cream-siteinfo.def -n creamCE -n TORQUE_utils -f config_cream_gip -f config_cream_gip_glue2
    

    Fresh installation:

    
      ls -l /afs/desy.de/common/etc/ssh_known_hosts
      ls -l /afs/desy.de/common/etc/ssh_known_hosts2
    
      wget -nv http://wims.desy.de/system/ALL_afs/etc/ssh_known_hosts -O /etc/ssh/ssh_known_hosts
      wget -nv http://wims.desy.de/system/ALL_afs/etc/ssh_known_hosts2 -O /etc/ssh/ssh_known_hosts2
    
      /opt/misc/tools/fetch-ssh-known-hosts.sh
    
    #
    # needed because expected gid is occupied by 'stapusr'
    #
      #groupadd -g 154 infosys    # clash with stapusr:156
    
    #
    #
    #
      save /etc/security/limits.conf
      vi   /etc/security/limits.conf
    #AG
    tomcat          soft    nofile  63536
    tomcat          hard    nofile  63536
    
    tomcat          soft    nproc   16384
    tomcat          hard    nproc   16384
    ...
    
    #
    # mysql optimization
    #
      save /etc/my.cnf
      vi /etc/my.cnf
    [mysqld]
    max_connections=450
      
    # AG (default is 8M)
    innodb_buffer_pool_size=128M
    ...
    
      # /root/.my.cnf prevents yaim from running; remove it (puppet restores it later)
      \rm /root/.my.cnf
    
    #
    #
    # 
      vi /etc/glite-ce-cream-utils/glite_cream_load_monitor.conf
    ...
    FTPConn = 100
    ...
    
    #
    # yaim
    #
      /opt/misc/yaim/get-grid-host-cert.sh `/bin/hostname -f`
      /opt/misc/yaim/cp-ssh_key.sh
    
      mkdir -p  /var/lib/glite/.certs
      cp -v /etc/grid-security/hostcert.pem /var/lib/glite/.certs/.
      cp -v /etc/grid-security/hostkey.pem  /var/lib/glite/.certs/.
      chown -R glite.glite                  /var/lib/glite
    
    
      less /opt/glite/yaim/log/yaimlog
      /opt/glite/yaim/bin/yaim -e -s /root/cream-siteinfo.def -n creamCE -n TORQUE_utils
      /opt/glite/yaim/bin/yaim -c -s /root/cream-siteinfo.def -n creamCE -n TORQUE_utils
    
      /opt/glite/yaim/bin/yaim -r -s /root/cream-siteinfo.def.okay -n creamCE -n TORQUE_utils -f config_cream_gip -f config_cream_gip_glue2
    
    #
    # GIP
    #
    runuser -s /bin/bash -c /var/lib/bdii/gip/plugin/glite-info-dynamic-scheduler-wrapper -- ldap
    
    #
    # info system (make sure only one CE publishes resources)
    #
    #  cd /var/lib/bdii/gip/ldif
    #  grep grep LogicalCPU *
    #  save ExecutionEnvironment.ldif
    #  vi   ExecutionEnvironment.ldif
    #...
    #GLUE2ExecutionEnvironmentTotalInstances: 0
    #GLUE2ExecutionEnvironmentPhysicalCPUs: 0
    #GLUE2ExecutionEnvironmentLogicalCPUs: 0
    #...
    
    #  save static-file-Cluster.ldif
    #  vi   static-file-Cluster.ldif
    #...
    #GlueSubClusterPhysicalCPUs: 0
    #GlueSubClusterLogicalCPUs: 0
    #...  
    
    #  /etc/init.d/bdii restart
    
    #
    # capabilities
    #
    cp -pr /opt/misc/edg/var/info /opt/edg/var/.
    
    #
    # blah optimization
    #
      save /etc/blah.config
      vi   /etc/blah.config
    
    # DESY
    # (not set by yaim glite-ce-yaim-cream-ce-4.3.1-4.sl5.noarch)
    #glite-ce-bupdater.log: /usr/libexec/BUpdaterPBS: key finalstate_query_interval not found using the default:30
    finalstate_query_interval=60
    #glite-ce-bnotifier.log: /usr/libexec/BNotifier: key bnotifier_loop_interval not found using the default:5
    bnotifier_loop_interval=30
    
    #
    # tomcat
    #
      #save /etc/tomcat6/tomcat6.conf
      #vi   /etc/tomcat6/tomcat6.conf
    ...
      JAVA_OPTS="${JAVA_OPTS} -server -Xms1024m -Xmx4096m"
    
    #
    #
    #
      vi /etc/glite-ce-cream/cream-config.xml
    ...
    name="JOB_PURGE_POLICY" value="ABORTED 5 days; CANCELLED 5 days; DONE-OK 5 days; DONE-FAILED 5 days; REGISTERED 2 days;"
    ...
    
    #
    # tools
    #
      ssh grid-batch0.desy.de ls -al
    
    #  cd ~/quattor/scdb9/desy-tools/CREAM
    #  Quattor: ./cream-rsync-torque_grid-batch0.install grid-cr
    
    #  /root/cream-rsync-torque_grid-batch0.sh
    #  ls -l /var/log/torque/server_logs
    #  vi /etc/cron.d/cream-rsync-torque_grid-batch0.cron
    
      mkdir -p /var/log/torque/server_logs
    
      save /etc/lrms/scheduler.conf
      vi   /etc/lrms/scheduler.conf
    [LRMS]
    lrms_backend_cmd: /usr/libexec/lrmsinfo-pbs -i /var/log/torque/server_logs/qstat-f.out
    [Scheduler]
    cycle_time : 0
    vo_max_jobs_cmd: /usr/libexec/vomaxjobs-maui -i /var/log/torque/server_logs/mysched.out
    
      /etc/init.d/gLite restart
    
    #
    # unload batch server
    #
    
    #
    # requires: /var/log/torque/server_logs/qstat-f.out
    #
    cp -v /usr/bin/qstat            /usr/bin/qstat.orig
    \rm   /usr/bin/qstat
    cp    /opt/misc/tools/PBS/qstat /usr/bin/qstat
    touch /var/log/qstat.log
    chmod a+xrw /var/log/qstat.log
    
    #
    # requires: /var/log/torque/server_logs/pbsnodes-a.out
    #
    cp -v /usr/bin/pbsnodes            /usr/bin/pbsnodes.orig
    \rm   /usr/bin/pbsnodes
    cp    /opt/misc/tools/PBS/pbsnodes /usr/bin/pbsnodes
    touch /var/log/pbsnodes.log
    chmod a+xrw /var/log/pbsnodes.log
    
    #
    # logs
    #
      ln -s /var/log/tomcat6/catalina.out .
      ln -s /var/log/cream/glite-ce-cream.log .
      ln -s /var/log/cream/glite-ce-bnotifier.log
      ln -s /var/log/cream/glite-ce-bupdater.log
      ln -s /var/log/globus-gridftp.log .
      ln -s /var/log/gridftp-session.log .
      ln -s /var/log/bdii/bdii-update.log .
    
      ln -s /var/log/apelparser.log .
    
      ls -l /var/spool/glite/lb-locallogger/dglogd.log.*
      #find /var/spool/glite/lb-locallogger/ -type f -ctime +30 -exec rm -v {} \;
    #
    # apel
    #
    #  save /etc/apel/parser.cfg
    #  vi   /etc/apel/parser.cfg
    #[db]
    #hostname = grid-apel0.desy.de
    #password =
    #[site_info]
    #site_name = DESY-HH
    #lrms_server = grid-batch0.desy.de
    #[batch]
    #enabled = false
    #type = PBS
    #parallel = true
    
    #  vi /etc/cron.d/apelparser.cron
    #MAILTO=grid-ops@desy.de
    # 05 02,14 * * * root /usr/bin/apelparser > /dev/null
     
    #  /usr/bin/apelparser
    

    Old: (Quattor/SL5)

    Update:

      save /etc/lrms/scheduler.conf
      save /etc/blah.config
      save /etc/tomcat5/tomcat5.conf
      save glite-apel-pbs-parser
    
      #yum clean all; yum update
      #yum update             -c /opt/misc/tmp/epel.el5.repo
    
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n creamCE -n TORQUE_utils
    
      diff /etc/lrms/scheduler.conf.save  /etc/lrms/scheduler.conf
      diff /etc/blah.config.save          /etc/blah.config
      diff /etc/tomcat5/tomcat5.conf.save /etc/tomcat5/tomcat5.conf
      diff glite-apel-pbs-parser.save     glite-apel-pbs-parser
    
    
      \cp /etc/lrms/scheduler.conf.save  /etc/lrms/scheduler.conf
      \cp /etc/blah.config.save          /etc/blah.config
      \cp /etc/tomcat5/tomcat5.conf.save /etc/tomcat5/tomcat5.conf
      \cp glite-apel-pbs-parser.save     glite-apel-pbs-parser
    
      #grep GLUE2PolicyRule: /var/lib/bdii/gip/ldif/ComputingShare.ldif
    

    Prerequisites:

      /opt/misc/yaim/get-grid-host-cert.sh `/bin/hostname -f`
      #/opt/misc/yaim/cp-ssh_key.sh
    
      cd ~/quattor/scdb9/desy-tools/CREAM
      Quattor: ./cream-rsync-torque_grid-batch5.install grid-batch
    

    Installation:

      #yum install ca-policy-egi-core -y
    
      # torque server
      ###yum install emi-cream-ce emi-torque-server emi-torque-utils xml-commons-apis -y
    
      # sge
      ###yum install emi-cream-ce emi-ge-utils xml-commons-apis -y
    
      # torque client
      yum install emi-cream-ce emi-torque-utils xml-commons-apis -y
    
      yum install perl-DateManip xml-commons-apis -y
      #yum install perl-Date-Manip xml-commons-apis -y
    
      #ll /etc/nsswitch.conf 
      #chmod a+r /etc/nsswitch.conf 
    
    #
    # /etc/security/limits.conf
    #
      save /etc/security/limits.conf
      vi   /etc/security/limits.conf
    #AG
    tomcat          soft    nofile  63536
    tomcat          hard    nofile  63536
    
    tomcat          soft    nproc   16384
    tomcat          hard    nproc   16384
    
    #
    # /etc/my.cnf
    #
        save /etc/my.cnf
        vi /etc/my.cnf
    [mysqld]
    max_connections=450
    
    # AG (default is 8M)
    innodb_buffer_pool_size=128M
    
    #
    # /etc/munge/munge.key
    #
      cp -fpv /opt/misc/yaim/etc/munge/munge.key /etc/munge/munge.key
      chown munge.munge /etc/munge/munge.key
    
    #
    # apel (SL6)
    #
      vi /etc/cron.d/apelparser.cron
    MAILTO=grid-ops@desy.de
     25 02,15 * * * root /usr/bin/apelparser > /dev/null
    
      vi /etc/apel/parser.cfg
    hostname = grid-apel0.desy.de
    password =
    site_name = DESY-HH
    lrms_server = grid-batch6.desy.de
    enabled = false
    type = PBS
    

    yaim:

      /opt/misc/yaim/get-grid-host-cert.sh `/bin/hostname -f`
    
      vi /opt/misc/yaim/`hostname -s`-info.def
    ...
    BLPARSER_WITH_UPDATER_NOTIFIER="true"
    ...
    
      ###/opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n creamCE -n TORQUE_server -n TORQUE_utils
      ###/opt/glite/yaim/bin/yaim -r -s /opt/misc/yaim/`/bin/hostname -s`-info.def -n creamCE -f config_cream_blparser
      ###/opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n creamCE -n SGE_utils
    
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n creamCE -n TORQUE_utils
    
      /opt/glite/yaim/bin/yaim -e -s /root/cream6-siteinfo.def -n creamCE -n TORQUE_utils
      /opt/glite/yaim/bin/yaim -c -s /root/cream6-siteinfo.def -n creamCE -n TORQUE_utils
    
    
      #
      # after changes in /opt/misc/yaim/users.conf or /opt/misc/yaim/groups.conf
      # (wild cards for groups seem to be allowed!)[2012-12-12]
      #
      #/opt/glite/yaim/bin/yaim -r -s /opt/misc/yaim/`hostname -s`-info.def -n creamCE -n TORQUE_utils -f config_vomsmap      
    

    Configs:

    #
    # /etc/ssh/shosts.equiv
    #
      #vi /etc/ssh/shosts.equiv
    
    #
    # batch: /etc/hosts.equiv
    #
      vi /etc/hosts.equiv
    
    #
    # /etc/blah.config
    #
      save /etc/blah.config
      vi /etc/blah.config
    ...
    # (not set by yaim glite-ce-yaim-cream-ce-4.3.1-4.sl5.noarch)
    #glite-ce-bupdater.log: /usr/libexec/BUpdaterPBS: key finalstate_query_interval not found using the default:30
    finalstate_query_interval=60
    #glite-ce-bnotifier.log: /usr/libexec/BNotifier: key bnotifier_loop_interval not found using the default:5
    bnotifier_loop_interval=30
      save /etc/blah.config
    
    #
    # mysql (to avoid cleaning of delegations with a tomcat restart)
    #
    mysql> use creamdb;ALTER TABLE db_info MODIFY creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP; commit;
    
    #
    # tomcat
    #
      #save /etc/tomcat6/tomcat6.conf
      #vi   /etc/tomcat6/tomcat6.conf
      save /etc/tomcat5/tomcat5.conf
      vi   /etc/tomcat5/tomcat5.conf
    ...
    JAVA_OPTS="${JAVA_OPTS} -server -Xms1024m -Xmx4096m"
    ...
      save /etc/tomcat5/tomcat5.conf
      service tomcat5 restart
    
    #
    # admins
    #
      #\cp -v /opt/misc/yaim/etc/grid-security/admin-list /etc/grid-security/admin-list
      cat /etc/grid-security/admin-list
    
    #
    # apel (old)
    #
      #ln -s /etc/cron.d/glite-apel-pbs-parser
      #save glite-apel-pbs-parser
      #vi glite-apel-pbs-parser
      #save glite-apel-pbs-parser
    
    #
    # sudo
    #
        save /etc/sudoers
      vi /etc/sudoers
    ...
    #AG Defaults    requiretty
    ...
    
    #
    # bdii
    #
      # solved: https://ggus.eu/tech/ticket_show.php?ticket=89007
      ###grep GLUE2PolicyRule: /var/lib/bdii/gip/ldif/ComputingShare.ldif
    
      grep ldap /etc/passwd /etc/group   (also needed on batch server!)
    
    
    #
    # monitor
    #
      cat /etc/glite-ce-cream-utils/glite_cream_load_monitor.conf
    

    Special:

    #
    # sge
    #
      CE: cp /usr/bin/sge_filestaging /opt/misc/tmp/sge_filestaging
      WN: cp /opt/misc/tmp/sge_filestaging /usr/bin/sge_filestaging
      #
      ##
    
    #
    #torque
    #
      ###cp -pv /opt/misc/yaim/etc/munge/munge.key /etc/munge/munge.key
      ###chown munge.munge /etc/munge/munge.key 
      ###chkconfig munge on
      ###/etc/init.d/munge start
      #
    
    #
    # sshd
    #
      # maybe already in yaim (?)
      vi /etc/ssh/sshd_config
      ...
      #AG allow WNs to copy back output (earlier inserted by yaim)  
      HostbasedAuthentication yes
      IgnoreUserKnownHosts yes
      IgnoreRhosts yes
    
      /etc/init.d/sshd restart
    

    Logs:

      ln -s /var/log/tomcat5/catalina.out .
      ln -s /var/log/tomcat6/catalina.out .
    
      ln -s /var/log/cream/glite-ce-cream.log .
    
      ln -s /var/log/cream/glite-ce-bnotifier.log
      ln -s /var/log/cream/glite-ce-bupdater.log
    
      ln -s /var/log/globus-gridftp.log .
      ln -s /var/log/gridftp-session.log .
    
      ln -s /var/log/bdii/bdii-update.log .
    
      ln -s /var/log/apel.log .
    
    
      ls -l /var/glite/log/dglogd.log.*
      #find /var/glite/log/ -type f -ctime +50 -exec \rm -v {} \;
    

    Additionals:

      #cd ~/quattor/scdb9/desy-tools/CREAM] 
      #Quattor: ./blahp-logs-cron.install grid-cr
      #Quattor: ./CREAM-check-tomcat.install grid-cr
     
      #Quattor: ./cream-rsync-torque_grid-batch5.install grid-cr
      #vi /etc/lrms/scheduler.conf
    ...
    lrms_backend_cmd: /usr/libexec/lrmsinfo-pbs -i /var/torque/server_logs/qstat-f.out
    ...
    

    Tests:

    
    #
    # BDII
    #
      sshr grid-batch ls -al
    
      /sbin/runuser -s /bin/sh ldap -c "/usr/libexec/vomaxjobs-maui"
      /sbin/runuser -s /bin/sh ldap -c diagnose
    
      /sbin/runuser -s /bin/sh ldap -c "qstat -B"   (needs user/group 'ldap' on the server!)`
    
      /usr/libexec/lcg-info-dynamic-scheduler -c /etc/lrms/scheduler.conf
    
      lcg-infosites --vo belle ce --is grid-cr
    
      ls -l /var/lib/bdii/gip/plugin/glite-info*
    
      ldapsearch -LLL -x -h localhost -p 2170 -b 'o=infosys'
    
    #
    # CREAM service
    #
      glite-ce-service-info -L 2 grid-cr.desy.de
    
    #
    # GLEXEC w/o ARGUS
    #
      cp  /tmp/p.pem
      chown tomcat /tmp/p.pem
      chmod 600    /tmp/p.pem
      /sbin/runuser -s /bin/sh tomcat -c "export GLEXEC_MODE=lcmaps_get_account ; export GLEXEC_CLIENT_CERT=/tmp/p.pem; /usr/sbin/glexec /usr/bin/id" 
    
    #
    # CREAM monitor
    #
      /usr/bin/glite_cream_load_monitor  /etc/glite-ce-cream-utils/glite_cream_load_monitor.conf --show
      /usr/bin/glite_cream_load_monitor /etc/glite-ce-cream-utils/glite_cream_load_monitor.conf --test ; echo $?
    
    #
    # munge
    #
      remunge
      munge -n
      munge -n | unmunge
      munge -n | sshr grid-batch unmunge
    
    #
    # sge
    #
      #qstat -f
      #qstat
      #qhost
      #
      #echo "/bin/hostname" | qsub
    

    [top]


    MyProxy

    Documentation:

  • NotesAboutInstallationAndConfigurationOfMyproxy
  • emi-px and gLite ProxyRenewal Service Reference Card

    Installation: (puppet)

      #yum install ca-policy-egi-core
      #yum install emi-px
      #yum install cyrus-sasl-gssapi
    

    yaim:

      /opt/misc/yaim/get-grid-host-cert.sh `/bin/hostname -f`
    
      /opt/glite/yaim/bin/yaim -v -s /opt/misc/yaim/`hostname -s`-info.def -n PX
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n PX
    
      chkconfig bdii on
      chkconfig myproxy-server on
    

    Configs:

      vi /etc/myproxy-server.config
    
      #cd ~/quattor/scdb9/desy-tools/PX
      #Quattor: ./myproxy-server-restart.install
    
    
      kerberos:
      --------
      vi /etc/myproxy-server.config
    ...
    
    # kerberos
    sasl "sufficient"
      
      klist -ek /etc/krb5.keytab
    
    
      hkdc -v krb5_get myproxy
      klist -ek /etc/krb5.myproxy.keytab
    
      ktutil
      ktutil:  ?
      ktutil: read_kt  /etc/krb5.keytab
      ktutil: write_kt /etc/krb5.keytab
    
    

    Logs:

      ln -s /opt/glite/yaim/log/yaimlog .
    
      ln -s /var/log/bdii/bdii-update.log .
    

    Tests:

      myproxy-init -v -s grid-px0.desy.de
      myproxy-info -v -s grid-px0.desy.de
    
      echo $MYPROXY_SERVER
    
      myproxy-init -v
      myproxy-info -v
    
      myproxy-init -n
      myproxy-logon -n
    
      myproxy-init -n -s grid-px.desy.de -c 720 -l gellrich@DESY.DE
      myproxy-logon -l gellrich@DESY.DE -n -m desy
    
    

    [top]


    L&B (obsolete)

    Installation:

      yum install ca-policy-egi-core emi-lb
    

    yaim:

      /opt/misc/yaim/get-grid-host-cert.sh `/bin/hostname -f`
    
      vi /opt/misc/yaim/`hostname -s`-info.def
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n LB
    

    Configs:

      #
      # not needed on L&B
      #
      chkconfig glite-lb-harvester off
      service glite-lb-harvester stop
    
      ##mkdir /usr/etc/init.d
      ##ln -s /etc/init.d/glite-lb-bkserverd /usr/etc/init.d/glite-lb-bkserverd
    
      ##chkconfig bdii on
    
      less /etc/glite-lb/glite-lb-authz.conf
    
      cd ~/quattor/scdb9/desy-tools/LB
    

    Logs:

      ln -s /var/log/bdii/bdii-update.log .
    

    Tests:

      glite-lb-state_history 
    

    [top]


    BDII (bdii-top)

    Docs:

    (EMI.gLite) top-BDII High Availability

    Installation:

      #yum install emi-bdii-top
    
      #chmod a+r /etc/nsswitch.conf
      #ls -l /etc/nsswitch.conf
    -rw-r--r-- 1 root root 1717 Jan 28 14:35 /etc/nsswitch.conf
    

    yaim:

      vi /opt/misc/yaim/`hostname -s`-info.def
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n BDII_top
    
      #save /etc/sudoers
      #vi /etc/sudoers
    ...
    #AG Defaults    requiretty
    ...
    

    Configs:

      ln -s /var/cache/glite/top-urls.conf
    
      #ln -s /opt/glite/etc/gip/top-urls.conf
      #cp -v /opt/glite/etc/gip/top-urls.conf /opt/glite/etc/gip/top-urls0.conf
    
      #save /etc/glite/glite-info-update-endpoints.conf
      #vi /etc/glite/glite-info-update-endpoints.conf
    #...
    #output_file = /opt/glite/etc/gip/top-urls0.conf
    #...  
    
      ls -l  /var/cache/glite/glite-info-update-endpoints
    
      #cat /etc/cron.hourly/glite-info-update-endpoints
    
      #ls -lt /var/lib/bdii/gip/tmp/gip/top-urls.conf-glue2
      #ls -lt /var/lib/bdii/gip/cache/gip/top-urls.conf-glue2
    

    Logs:

      ln -s /var/log/bdii/bdii-update.log .
    

    Tests:

      ldapsearch -LLL -x -h localhost -p 2170 -b o=Infosys
      ldapsearch -LLL -x -h localhost -p 2170 -b o=grid
    
      echo $LCG_GFAL_BDII_TIMEOUT
    

    [top]


    GIIS (bdii-site)

    Installation:

      #yum install emi-bdii-site
    

    yaim:

    
      /opt/glite/yaim/bin/yaim -e -s /root/giis-siteinfo.def -n BDII_site
      /opt/glite/yaim/bin/yaim -c -s /root/giis-siteinfo.def -n BDII_site
    
      vi /opt/misc/yaim/`hostname -s`-info.def
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n BDII_site
    
    #
    # bug in
    #
    # /opt/glite/yaim/functions/config_info_service_bdii_site
    # vs
    # /var/lib/bdii/gip/provider/glite-info-provider-service-bdii-site
    #
    # ln -s /etc/bdii/gip /etc/bdii/gip/etc
    
      save /etc/sudoers
      vi   /etc/sudoers
    ...
    #AG Defaults    requiretty
    

    Configs:

      ln -s /etc/bdii/gip/site-urls.conf .
      cat   /etc/bdii/gip/site-urls.conf
    

    Logs:

      ln -s /var/log/bdii/bdii-update.log .
    

    Tests:

      ldapsearch -LLL -x -h localhost -p 2170 -b o=Infosys
      ldapsearch -LLL -x -h localhost -p 2170 -b o=grid
    
      lcg-infosites --vo desy ce  --is grid-giis1
    

    [top]


    LFC

    Docs: LCG File Catalog, site-info.def, Notes about Installation and Configuration of an LFC server

    Installation:

      yum install ca-policy-egi-core emi-lfc_mysql
    

    yaim:

      /opt/misc/yaim/get-grid-host-cert.sh `/bin/hostname -f`
    
      /etc/init.d/mysqld start
      mysqladmin -u root password '...'
      mysqladmin -u root -h `hostname -f` password '...'
      /etc/init.d/mysqld restart
    
      cp  /opt/misc/backups/grid-lfc2.desy.de/... 
      cat ... | mysql -p
    
      mysql -p
    mysql> GRANT ALL PRIVILEGES ON *.* to 'lfc'@'localhost' identified by '...';
    
      chkconfig mysqld on
      chkconfig mysqld --list
    
      vi /opt/misc/yaim/`hostname -s`-info.def
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n emi_lfc_mysql
    
      chkconfig lfcdaemon on
      chkconfig lfc-dli on
    
      vi /etc/logrotate.d/lfcdaemon
    ...
    rotate 30
    ...
    
      vi /etc/logrotate.d/lfc-dli
    ...
    rotate 30
    ...
    

    Configs:

      save /etc/security/limits.conf
      vi   /etc/security/limits.conf
    ...
    lfcmgr           soft    nproc           8192
    lfcmgr           hard    nproc           8192
    
    lfcmgr           soft    nofile          16384
    lfcmgr           hard    nofile          16384
    
      # /etc/lfc-mysql/lfcdaemon.conf does not work!
    
      save  /etc/lfc-mysql/lfcdaemon.init
      vi    /etc/lfc-mysql/lfcdaemon.init
    ...
    #ULIMIT_N=4096
    ...
    #ALLOW_COREDUMP="yes"
    ...
    #NB_THREADS=20
    NB_THREADS=200
    ...
    #
    # https://svnweb.cern.ch/trac/lcgdm/blog/official-release-lcgdm-183
    #
    export GLOBUS_THREAD_MODEL="pthread"
    ...
    
      cat   /usr/etc/NSCONFIG
    lfc/msql@localhost/cns_db
    

    Logs:

      ln -s /opt/glite/yaim/log/yaimlog .
    
      ln -s /var/log/bdii/bdii-update.log .
    
      ln -s /var/log/lfc/log /root/lfc.log
      ln -s /var/log/dli/log /root/dli.log
    

    Tests:

      ldapsearch -LLL -x -h localhost -p 2170 -b 'o=infosys'
      ldapsearch -LLL -x -h localhost -p 2170 -b 'GLUE2GroupID=resource,o=glue'
    
      lcg-infosites --vo desy lfc --is grid-lfc0
    
      export LFC_HOST=grid-lfc1.desy.de
      lfc-ls -l /grid/
    
      $LFC_CONNTIMEOUT -> sets the connect timeout in seconds
      $LFC_CONRETRY -> sets the number of retries
      $LFC_CONRETRYINT -> sets the retry interval in seconds 
    
     # connection timeout in seconds
     LFC_CONNTIMEOUT=15
     # maximum number of tries for opening a connection
     LFC_CONRETRY=2
     # retry interval in seconds
     LFC_CONRETRYINT=1
    
    

    [top]


    CLUSTER (not deployed)

    Note: assumes lcg-CEs only.

    Documentation:

  • EMI: gLite CLUSTER
  • Wiki: gLite CLUSTER

      yum install ca-policy-egi-core
      yum install emi-cluster
    

    Configurations:

      /opt/misc/yaim/get-grid-host-cert.sh `/bin/hostname -f`
    
      /opt/glite/yaim/bin/yaim -v -s /opt/misc/yaim/`hostname -s`-info.def -n CLUSTER
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n CLUSTER
    

    Services:

    
    
    

    Logs:

    
    
    

    [top]


    CLOUDMON

    Documentation:

      #yum install yum-conf-epel
      #yum install httpd
      #yum install python-sqlalchemy python-amqplib rabbitmq-server
      #yum install cloudmon-server
    

    Configurations:

      chkconfig httpd --list
      chkconfig httpd on
      /etc/init.d/httpd start
    
      chkconfig rabbitmq-server --list
      chkconfig rabbitmq-server on
      /etc/init.d/rabbitmq-server start
    
      cat /etc/cloudmon/consume.cfg
      cat /etc/cloudmon/cgi.cfg
    
      groupadd cloudmonconsume
      useradd -g cloudmonconsume -c "cloudmon User" cloudmonconsume
    
      chkconfig cloudmoncgiconsume --add
      chkconfig cloudmoncgiconsume on
    
      /etc/init.d/cloudmoncgiconsume start
    

    Services:

      /etc/init.d/httpd status
      /etc/init.d/rabbitmq-server status
      /etc/init.d/cloudmoncgiconsume status
    

    Administration:

      sshr grid-mon1
    
      cat 00VMON
    
      > sqlite3  /var/lib/cloudmon/cloudmon.db
    
    sqlite> select identifier from VHOST;
    sqlite> delete from VHOST where identifier = "grid-core15.desy.de";
    
    sqlite> select libvirtName from VM;
    sqlite> delete from VM where  libvirtName = "grid-wn2001";
    

    Logs:

    
    

    [top]


    APEL

    Docs:

  • APEL wiki
  • APEL client wiki
  • APEL Server wiki
  • Sites with data in the SSM 2.0 APEL Test Summaries table
  • List of sites whose data is contained in the rap summary table
  • APEL Client Upgrade Plan
  • Multicore Deployment Task Force

  • GGUS #96527

    SL6:

    password =
    site_name = DESY-HH
    ldap_host = grid-bdii-intern.desy.de
    lrms_server = grid-batch6.desy.de
    spec_type = HEPSPEC
    spec_value = 7.54
    enabled = false
    send_summaries = true
    enabled = false
    

    Publisher:

    
      /opt/misc/yaim/get-grid-host-cert.sh `hostname -f`
      #obsolete cp /etc/grid-security/hostcert.pem /etc/grid-security/servercert.pem
    
      # puppet
      #yum install apel-ssm -y
      #yum install apel-lib -y
      #yum install apel-client -y
    
      #yum install ca-policy-egi-core -y
      #yum install fetch-crl -y
    
      #yum install mysql-server -y
    
      #touch /var/lock/subsys/fetch-crl-cron
    
      chkconfig --level 345 mysqld on
      mv /root/.my.cnf /root/.my.cnf.old
      service mysqld start
      mysqladmin -u root password '...'
      /usr/bin/mysqladmin -u root -h FQHN password '...'
      mv /root/.my.cnf.old /root/.my.cnf
    
      mysql
    mysql> create database apelclient;
      mysql apelclient < /usr/share/apel/client.sql
    mysql> GRANT ALL PRIVILEGES ON apelclient.* TO 'apel'@'localhost' IDENTIFIED BY '..';
    mysql> REVOKE ALL PRIVILEGES ON apelclient.* FROM 'apel'@'grid-batch6.desy.de';
    mysql> GRANT ALL PRIVILEGES ON apelclient.* TO 'apel'@'grid-batch6.desy.de' IDENTIFIED BY '..';
    ...
    
      # puppet
      vi /etc/apel/client.cfg
    ...
    password = ...
    site_name = DESY-HH
    ldap_host = grid-bdii-intern.desy.de
    ...
    
      cat /etc/cron.d/apelclient.cron
    MAILTO=grid-ops@desy.de
     45 03,15 * * * root /usr/bin/apelclient
    

    Parser:

      yum install apel-parsers apel-lib
      vi /etc/apel/parser.cfg
    ...
    hostname = grid-apel0.desy.de
    password = ...
    site_name = DESY-HH
    lrms_server = grid-batch6.desy.de
    #blah enabled = false
    type = PBS
    dir = /var/lib/torque/server_priv/accounting
    filename_prefix = 201
    ...
    
      cat /etc/cron.d/apelparser.cron
    MAILTO=grid-ops@desy.de
     05 02,14 * * * root /usr/bin/apelparser > /dev/null
    

    # obsolete
    #  cd ~/quattor/scdb9/desy-tools/APEL
    #./apel-db_backup.install grid-apel0
    

    Tests:

      mysql -p
    mysql> use apelclient;
    mysql> show tables;
    mysql> select * from MachineNames;
    mysql> select count(*) from JobRecords;
    

    SL5 stand-alone service:

    Docs:

  • GOC Accounting (Plots and docs)
  • APEL FAQ
  • APEL FAQ and Troubleshooting
  • site-info.def

    Accounting:

  • APEL Synchronisation Test

      yum install ca-policy-egi-core
      yum install emi-apel
    
      /opt/misc/yaim/get-grid-host-cert.sh `hostname -f`
    
      /etc/init.d/mysqld start
      /usr/bin/mysqladmin -u root password ''
      /usr/bin/mysqladmin -u root -h grid-apel1.desy.de password ''
      /etc/init.d/mysqld restart
    
      vi /opt/misc/yaim/`hostname -s`-info.def
    ...
    
    ...
    
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n APEL
    
      chkconfig mysqld --list
      chkconfig mysqld on
    
      save /usr/bin/apel-publisher
      vi /usr/bin/apel-publisher
    ...
    -Xmx2048m
    ...
    
      ln -s /etc/cron.d/glite-apel-publisher
      save glite-apel-publisher
      vi glite-apel-publisher
     52 2,14 ...
    

    Configs:

      vi /etc/glite-apel-publisher/publisher-config-yaim.xml
    

    MySQL:

      mysql -p
    
      mysql> GRANT ALL ON *.*  TO 'accounting'@'grid-apel1.desy.de' IDENTIFIED BY '';
      mysql> GRANT ALL ON *.*  TO 'accounting'@'localhost' IDENTIFIED BY '';
    
      mysql> GRANT ALL ON accounting.*  TO 'accounting'@'grid-batch.desy.de' IDENTIFIED BY '';
      mysql> GRANT ALL ON accounting.*  TO 'accounting'@'grid-cr.desy.de'    IDENTIFIED BY '';
    
      mysql> SHOW GRANTS FOR 'accounting'@'localhost';
    
      mysql> SHOW GRANTS FOR 'accounting'@'grid-batch.desy.de';
      mysql> SHOW GRANTS FOR 'accounting'@'grid-cr.desy.de';
    
      mysql> use accounting;
      mysql> SHOW TABLES;
      mysql> OPTIMIZE TABLE BlahdRecords;
      mysql> OPTIMIZE TABLE EventRecords;
      mysql> OPTIMIZE TABLE GkRecords;
      mysql> OPTIMIZE TABLE LcgProcessedFiles;
      mysql> OPTIMIZE TABLE LcgRecords;
      mysql> OPTIMIZE TABLE MessageRecords;
      mysql> OPTIMIZE TABLE RepublishInfo;
      mysql> OPTIMIZE TABLE SpecRecords;
      mysql> OPTIMIZE TABLE SpecRecords_28593;
    

    Admin:

    mysql> select LocalJobId from JobRecords where StartTime < "2013-07-01 00:00:00";
    
    mysql> use apelclient;
    mysql> select * from EventRecords where JobName = "3607076.grid-batch4.desy.de";
    

    Error handling:

      myisamchk --help
    
      myisamchk --check /var/lib/mysql/accounting/BlahdRecords.MYI
    
      myisamchk -o /var/lib/mysql/accounting/BlahdRecords.MYI
    

    Clean-up of the mcore problem: with the fixed pbs-parser (which handles multi-core), jobs were recorded again without the older records being deleted.
    Note: 'VSuperSummaries' is a view; the real table is 'SuperSummaries'. A verification sketch follows the SQL below.

    mysql> use apelclient;
    
    mysql> select StartTime,EndTime,LocalUserId,LocalJobId,NodeCount,Processors from JobRecords where LocalJobId = '1892977.grid-batch4.desy.de';
    mysql> select StartTime,EndTime,LocalUserId,LocalJobId,NodeCount,Processors from VJobRecords where LocalJobId = '1892977.grid-batch4.desy.de';
    
    mysql> select UpdateTime,SubmitHostId,NodeCount,Processors,NumberOfJobs from SuperSummaries where Year=2015 and Month=8;
    mysql> select * from VSuperSummaries where Year=2015 and SubmitHost LIKE '%-mcore' and Month=9;
    
    mysql> delete  from SuperSummaries where Year=2015 and UpdateTime < '2015-09-30';
    
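
    After the delete, the next apelclient run should rebuild the summaries from JobRecords via its summariser. A verification sketch (names as used above):

    mysql> SHOW CREATE VIEW VSuperSummaries;
    mysql> select count(*) from SuperSummaries where Year=2015 and Month=9;
    
      /usr/bin/apelclient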

    [top]


    LFC

  • LCG File Catalog
  • The LCG Troubleshooting Guide

    LFC API / CLI

      #wget ftp://fr2.rpmfind.net/linux/dag/redhat/el5/en/x86_64/dag/RPMS/perl-XML-RegExp-0.03-1.2.el5.rf.noarch.rpm
      #wget ftp://fr2.rpmfind.net/linux/dag/redhat/el5/en/x86_64/dag/RPMS/perl-XML-DOM-1.44-2.el5.rf.noarch.rpm
      #yum --enablerepo=dag install glite-LFC_mysql
      #yum install glite-LFC_mysql
    
      /opt/misc/yaim/get-grid-host-cert.sh `hostname -f`                         # must contain real hostname!
    
      /opt/glite/yaim/bin/yaim -v -s /opt/misc/yaim/`hostname -s`-info.def -n LFC_mysql
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n LFC_mysql
    
      ln -sv /opt/lcg/etc/NSCONFIG
      ln -sv /opt/lcg/etc/lcgdm-mapfile
      ln -sv /var/log/lfc/log /root/lfc.log
      ln -sv /var/log/dli/log /root/dli.log
    
      cat /opt/lcg/etc/NSCONFIG
    lfc/msql@grid-lfc1.desy.de/cns_db
    

    Migrate DB from other LFC:

      cat lfc-db_grid-lfc0.desy.de_20110106-140001.sql | mysql -p 
      /etc/init.d/mysqld restart
    
      mysqladmin -p -u root -h localhost       password '...'
      /etc/init.d/mysqld restart
    
    mysql> GRANT ALL PRIVILEGES ON *.* TO 'lfc'@'grid-lfc2.desy.de' IDENTIFIED BY "...";
    
    mysql> show grants for 'root'@'grid-lfc0.desy.de';
    mysql> show grants for 'root'@'grid-lfc1.desy.de';
    mysql> show grants for 'root'@'grid-lfc2.desy.de';
    
    mysql> show grants for 'lfc'@'grid-lfc0.desy.de';
    mysql> show grants for 'lfc'@'grid-lfc1.desy.de';
    mysql> show grants for 'lfc'@'grid-lfc2.desy.de';
    

    MySQL:

      mysql -p
    mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'grid-db0.desy.de' IDENTIFIED BY "...";
    
      /etc/init.d/mysqld restart
    

    When renewing the host certificate, make sure it is copied to /etc/grid-security/lfcmgr as well:

    root@grid-lfc1: [~] ll /etc/grid-security/host*.pem
    -rw-r--r--    1 root     root         5132 Jan  7 14:49 /etc/grid-security/hostcert.pem
    -r--------    1 root     root          887 Jan  7 14:49 /etc/grid-security/hostkey.pem
    root@grid-lfc1: [~] ll /etc/grid-security/lfcmgr/lfc*.pem
    -rw-r--r--    1 lfcmgr   lfcmgr       5132 Jan  9 15:52 /etc/grid-security/lfcmgr/lfccert.pem
    -r--------    1 lfcmgr   lfcmgr        887 Jan  9 15:53 /etc/grid-security/lfcmgr/lfckey.pem
    
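
    A possible renewal sequence (a sketch only; modes and ownership follow the listing above, and the service name lfcdaemon is an assumption):

      cp /etc/grid-security/hostcert.pem /etc/grid-security/lfcmgr/lfccert.pem
      cp /etc/grid-security/hostkey.pem  /etc/grid-security/lfcmgr/lfckey.pem
      chown lfcmgr:lfcmgr /etc/grid-security/lfcmgr/lfccert.pem /etc/grid-security/lfcmgr/lfckey.pem
      chmod 644 /etc/grid-security/lfcmgr/lfccert.pem
      chmod 400 /etc/grid-security/lfcmgr/lfckey.pem
      service lfcdaemon restart     # assumed init script name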

    [top]


    DPM

    Disk Pool Manager

    The installation was done on one host.

    default-info.def:

    DPM_HOST=grid-se0.desy.de
    DPMPOOL=datapool
    DPM_FILESYSTEMS="grid-se0.desy.de:/data"
    DPMMGR=dpmmgr
    DPMUSER_PWD=xxx
    DPMFSIZE=1G
    DPM_DB_HOST=grid-se0.desy.de
    DPM_DB_USER=dpmmgr
    DPM_DB_PASSWORD=_secret_
    

    Tests:

      dpm-qryconf
    POOL datapool DEFSIZE 1024.00M GC_START_THRESH 0 GC_STOP_THRESH 0 DEFPINTIME 0 PUT_RETENP 86400 FSS_POLICY maxfreespace GC_POLICY lru RS_POLICY fifo GID 0 S_TYPE -
                                  CAPACITY 134.83G FREE 127.95G ( 94.9%)
      grid-se0.desy.de /data CAPACITY 134.83G FREE 127.95G ( 94.9%)
    
      pp dpnsdaemon
      pp dpm
      pp srmv1
      pp srmv2
    
      ls -la /data
    
      cat /etc/sysconfig/dpm
    
      cat /etc/sysconfig/dpnsdaemon
    
      cat /etc/sysconfig/dpm-gsiftp
    
      cat /etc/sysconfig/rfiod
    
      cat /opt/lcg/etc/DPMCONFIG
    dpmmgr/msql@grid-se0.desy.de
    
      cat /etc/shift.conf
    RFIOD TRUST grid-se0.desy.de
    RFIOD WTRUST grid-se0.desy.de
    RFIOD RTRUST grid-se0.desy.de
    RFIOD XTRUST grid-se0.desy.de
    RFIOD FTRUST grid-se0.desy.de
    DPM TRUST grid-se0.desy.de
    DPNS TRUST grid-se0.desy.de
    

    Logs:

      /var/log/dpm-gsiftp/dpm-gsiftp.log
      /var/log/dpm/log
    

    Usage tests:

      export DPNS_HOST=grid-se0.desy.de
    
      dpns-ls -lR /
      dpns-ls -l /dpm/desy.de/home
    
      dpns-getacl /dpm/desy.de/home/geant4
      dpns-entergrpmap --gid 122 --group geant
      dpns-chown root::geant /dpm/desy.de/home/geant4
    
      dpns-mkdir /dpm/desy.de/home/dteam/test
      dpns-ls -l /dpm/desy.de/home/dteam
    
      lcg-cp -v --vo dteam gsiftp://grid-se0.desy.de/etc/passwd file:$PWD/x
    
      lcg-cp -v --vo dteam file:/bin/sh gsiftp://grid-se0.desy.de/dpm/desy.de/home/dteam/SET_testfile
    

    [top]


    Ganglia

    Server

    http:

      #yum downgrade ganglia-3.0.7-1 ganglia-gmond-3.0.7-1
    
      #rpm -e ganglia ganglia-gmond
      #rpm -ivh http://swrep.desy.de/SL/desy.de/SL6/ganglia-3.0.7-1.el5.x86_64.rpm http://swrep.desy.de/SL/desy.de/SL6/ganglia-gmond-3.0.7-1.el5.x86_64.rpm
    
      yum install ganglia ganglia-gmond
      yum install ganglia-gmetad ganglia-web
    
      vi /etc/httpd/conf/httpd.conf
    
      chkconfig httpd on
      service httpd start
    
      vi /etc/httpd/conf.d/ganglia.conf
      
      cd /var/www/html
    
      # denies access by default to /usr/share/ganglia (installed by ganglia-web)
      vi /etc/httpd/conf.d/ganglia.conf
    ...
        #AG
        Allow from all
    
       service httpd reload
    

      vi /boot/grub/grub.conf
    ... ramdisk_size=8000000
      reboot
    
    #  vi /etc/php.ini
    ...
    ;memory_limit = 128M ;
    memory_limit = 512M ;
    ...
    ;date.timezone =
    date.timezone = "Europe/Berlin"
    ...
    
      mkdir -p /var/lib/ganglia/rrds
      chown ganglia.ganglia /var/lib/ganglia/rrds
    
      vi /etc/gmetad.conf
      chkconfig gmetad on
      service gmetad start
    
      #ln -sv /usr/share/ganglia /var/www/html/.
    
      vi /usr/share/ganglia/get_context.php
    ...
    if (!$sort)
          $sort = "by name";
    ...
    
      vi /usr/share/ganglia/conf.php
    

    Ganglia squid:

      cp /opt/misc/tools/GANGLIA/squid_5min_service_report.json /opt/misc/tools/GANGLIA/squid_60min_service_report.json /usr/share/ganglia/graph.d/.
    

    Tests:

      root@grid-wn0001: [~] /usr/bin/gstat -a
    
      root@grid-wn0001: [~] nc -u -l 8653
      root@grid-wn0201: [~] echo "hello"|nc -u grid-wn0001 8653
      
      root@grid-wn0001: [~] nc grid-wn0201.desy.de 8649
    

    Client

    RPMS:

      #puppet: yum install ganglia-gmond
    

    Config:

      vi /etc/ganglia/gmond.conf
    
      ls -l /etc/ganglia/conf.d/
    
      ls -l /usr/lib64/ganglia/python_modules/
    

    Python modules:

      yum install ganglia-gmond-python
      rpm -ql ganglia-gmond-python
    /etc/ganglia/conf.d/diskusage.pyconf
    /etc/ganglia/conf.d/modpython.conf
    /etc/ganglia/conf.d/tcpconn.pyconf
    /usr/lib64/ganglia/modpython.so
    /usr/lib64/ganglia/python_modules
    /usr/lib64/ganglia/python_modules/example.py
    /usr/lib64/ganglia/python_modules/example.pyc
    /usr/lib64/ganglia/python_modules/example.pyo
    /usr/lib64/ganglia/python_modules/multidisk.py
    /usr/lib64/ganglia/python_modules/multidisk.pyc
    /usr/lib64/ganglia/python_modules/multidisk.pyo
    /usr/lib64/ganglia/python_modules/tcpconn.py
    /usr/lib64/ganglia/python_modules/tcpconn.pyc
    /usr/lib64/ganglia/python_modules/tcpconn.pyo
    
      service gmond restart
    

    GitHub: gmond_python_modules / network / netstats

    [top]


    ARGUS

    Docs:

  • Argus Server
  • Argus: Policy Administration Point (PAP): Configuration
  • Argus PEP Server: Configuration
  • Argus Policy Decision Point (PDP): Configuration
  • A guideline on how to deploy ARGUS and gLexec on own grid site
  • Adding the central banning Argus PAP to the list of PAPs

    ARGUS YAIM Configuration:

    ARGUS server Installation:

      #yum install ca-policy-egi-core
      #yum install fetch-crl
      #yum install emi-argus
    
      
      /opt/misc/yaim/get-grid-host-cert.sh grid-argus.desy.de
    
      /opt/glite/yaim/bin/yaim -c -s /root/argus-siteinfo.def -n ARGUS_server
    

    Configurations:

      vi /etc/init.d/argus-pdp
    ...
    function start() {
    ...
          elif [ $state -eq 5 ]; then
    
            #AG wait for pep to start
            sleep 30
    ...
    
     #chkconfig fetch-crl-cron on
      chkconfig fetch-crl-cron --list
    
      /etc/init.d/fetch-crl-cron status
     #/etc/init.d/fetch-crl-cron start
      ls -l /var/lock/subsys/fetch-crl-cron
    
     #/usr/sbin/fetch-crl
    
      /etc/argus
      /etc/argus/pap/pap_authorization.ini
      /etc/argus/pap/pap_configuration.ini
      /etc/argus/pdp/pdp.ini
      /etc/argus/pepd/pepd.ini
    
     #vi /etc/cron.d/lcg-expiregridmapdir 
    
    #
    # hostcert DN
    #
      #vi /etc/argus/pap/pap_authorization.ini
    
    #
    # ACCOUNTMAPPER_OH
    #
      # to use gridmapdir of classical cream
      vi /etc/argus/pepd/pepd.ini
    ...
    #
    # Obligation Handlers (OH) configuration
    #
    [ACCOUNTMAPPER_OH]
    parserClass = org.glite.authz.pep.obligation.dfpmap.DFPMObligationHandlerConfigurationParser
    handledObligationId = http://glite.org/xacml/obligation/local-environment-map
    accountMapFile = /etc/grid-security/grid-mapfile
    groupMapFile = /etc/grid-security/groupmapfile
    gridMapDir = /etc/grid-security/gridmapdir
    #AG useSecondaryGroupNamesForMapping = true
    #AG useSecondaryGroupNamesForMapping = true # ...%20gellrich:ilcprd:ilcusr:ilcger
    useSecondaryGroupNamesForMapping = false    # ...%20gellrich:ilcprd
    
    

    Hints:

  • WLCG Operations Coordination Minutes - February 5th, 2015
  • GGUS #105666
  • GGUS #111505

      save  /usr/lib/jvm/jre-openjdk/lib/security/java.security
      vi    /usr/lib/jvm/jre-openjdk/lib/security/java.security
    ...
    #AG jdk.tls.disabledAlgorithms=SSLv3
    

    Performance issues:

  • Argus Service Deployment for EMI
  • Argus PEP Server: Configuration
      save /etc/sysconfig/argus-pepd
      vi   /etc/sysconfig/argus-pepd
    ...
    #AG PEPD_JOPTS="-Xmx256M"
    PEPD_JOPTS="-Xmx2048M"
    ...
    
    
      save /etc/argus/pepd/pepd.ini
      vi   /etc/argus/pepd/pepd.ini
    [SERVICE]
    ...
    # Applying advanced setting - see above twiki link for Docs
    # Increase from Default 200
    maximumRequests = 400
    # Increase from Default 500
    requestQueueSize = 1000
    # Default 30 (left unchanged here)
    connectionTimeout = 30
    
    # Increase from 16384
    receiveBufferSize = 32768
    # Increase from 16384
    sendBufferSize = 32768
    ...
    
    [PDP]
    ...
    maximumCachedResponses = 0
    ...
    
      service argus-pepd restart
    
      save /etc/argus/pdp/pdp.ini
      vi   /etc/argus/pdp/pdp.ini
    ...
    [SERVICE]
    ...
    maximumRequests = 201
    requestQueueSize = 501
    connectionTimeout = 45
    ...
    
      service argus-pdp restart
    
    

    Note: The start of the Argus daemons at boot is not safe! Restart them manually in the order below, waiting for each to come up (a helper sketch follows the commands):

    
      tail /var/log/argus/pap/pap-standalone.log
      tail /var/log/argus/pdp/process.log
      tail /var/log/argus/pepd/process.log
    
      service argus-papd restart
      wait ...
    
      service argus-pepd restart
      wait ...
    
      service argus-pdp restart
      wait ...
    
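
    A possible ordered-restart helper (a sketch: it waits until each daemon listens before starting the next; ports 8150/8152 for PAP/PDP are assumptions, 8154 matches the pepcli tests below):

      for pair in argus-pap:8150 argus-pepd:8154 argus-pdp:8152 ; do
        svc=${pair%:*} ; port=${pair#*:}
        service $svc restart
        # wait until the daemon actually listens on its port
        until netstat -tln | grep -q ":$port " ; do sleep 5 ; done
      done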

    Services:

      #/var/lib/bdii/gip/provider/glite-info-glue2-provider-service-argus
    
      /etc/init.d/argus-pdp stop ; /etc/init.d/argus-pepd stop ; /etc/init.d/argus-pap stop
    
      /etc/init.d/argus-pap start ; /etc/init.d/argus-pepd start
      /etc/init.d/argus-pdp restart
    

    Logs:

      /var/log/argus
    
      /var/log/argus/pap
      /var/log/argus/pdp
      /var/log/argus/pepd
    
      ln -s /var/log/argus/pap/pap-standalone.log /root/pap.log
      ln -s /var/log/argus/pdp/process.log        /root/pdp.log 
      ln -s /var/log/argus/pepd/process.log       /root/pepd.log
      ln -s /var/log/argus/pepd/access.log        /root/.
    
      ln -s /etc/grid-security/gridmapdir/        /root/.
    

    Tests:

       pepcli -h
       pepcli -p https://grid-argus.desy.de:8154/authz -k _proxy_ --capath /etc/grid-security/certificates/ --cert /etc/grid-security/hostcert.pem --key /etc/grid-security/hostkey.pem -r http://authz-interop.org/xacml/resource/resource-type/wn -a http://glite.org/xacml/action/execute
       pepcli -p https://grid-argus1.desy.de:8154/authz -k /opt/misc/tmp/k5-ca-proxy-desy.pem --capath /etc/grid-security/certificates/ --cert /etc/grid-security/hostcert.pem --key /etc/grid-security/hostkey.pem -r http://authz-interop.org/xacml/resource/resource-type/wn -a http://glite.org/xacml/action/execute
    

    GLExec Argus Quick Installation Guide

    GLEXEC on WNs (needs additional line for SL6)

      log-only:
      --------
      chmod 0555 /usr/sbin/glexec
      chmod o+r  /etc/glexec.conf
    
      xsetuid:
      -------
      chmod 6111 /usr/sbin/glexec
      chmod o-r  /etc/glexec.conf
    
      # for round robin see
      man lcmaps_plugins_c_pep
    
      vi /etc/lcmaps/lcmaps-glexec.db
    #
    # LCMAPS config file for glexec
    #
    
    # where to look for modules
    path = /usr/lib64/lcmaps
    
    # module definitions
    verify_proxy = "lcmaps_verify_proxy.mod" 
                   " -certdir /etc/grid-security/certificates/"
                   " --allow-limited-proxy"
    
    pepc        = "lcmaps_c_pep.mod"
                  "--pep-daemon-endpoint-url https://grid-argus.desy.de:8154/authz"
                  " -resourceid http://authz-interop.org/xacml/resource/resource-type/wn"
                  " -actionid http://glite.org/xacml/action/execute"
                  " -capath /etc/grid-security/certificates/"
                  " -pep-certificate-mode implicit"
                  " --use-pilot-proxy-as-cafile" # Add this on RHEL 6 based systems
    
    glexec_get_account:
    verify_proxy -> pepc
    

    Operations:

      pap-admin -help
    
    
      #
      # changing policies
      #
      pap-admin list-policies
      pap-admin remove-all-policies
      pap-admin list-policies
      #vi /opt/misc/tools/ARGUS/policy.txt
      pap-admin add-policies-from-file /opt/misc/tools/ARGUS/policy.txt 
      pap-admin list-policies
    
    
      pap-admin list-policies -all
      pap-admin list-policies --show-all-ids
    
      #pap-admin remove-all-policies
      #pap-admin remove-policy 2113d266-5c63-4a26-89bf-e23a69001d4f
    
      pap-admin add-pap ngi grid-ngi-argus.desy.de "/C=DE/O=GermanGrid/OU=DESY/CN=grid-ngi-argus.desy.de"
      pap-admin enable-pap ngi
      pap-admin set-paps-order ngi default
    
      /etc/init.d/argus-pepd clearcache
      /etc/init.d/argus-pdp  reloadpolicy
    
    
      pap-admin ban    subject "..."
      pap-admin un-ban subject "..."
      pap-admin list-policies
    

    Policies:

      cat /opt/misc/tools/ARGUS/policy.txt
    

    Tests:

      ldapsearch -x -h grid-argus -p 2170 -b o=glue
    
      ssh grid-wn
      su - _user_
    
      scp _proxy_
    
      export X509_USER_PROXY=~/_proxy ; export GLEXEC_CLIENT_CERT=$X509_USER_PROXY
    /usr/sbin/glexec /usr/bin/id
    
      less /var/log/argus/pepd/access.log
      less /var/log/argus/pdp/process.log
    

    Problems:

      grep ERROR pepd.log
      ...
      2016-02-24 07:44:08.030Z - ERROR [GridMapDirPoolAccountManager] - cmsusr pool account is full. Impossible to map ...
      ...
    
      ls -li /etc/grid-security/gridmapdir/*cmsusr*
    
      cat /etc/cron.d/lcg-expiregridmapdir
    
      man lcg-expiregridmapdir
    
      /usr/sbin/lcg-expiregridmapdir.pl -e 48 -v
    

    ARGUS_NGI

    Docs:

  • Argus: Global Banning Service
     #host cert, yaim, and fetch-crl steps as for a normal Argus server
    

    Configuration:

      vi /etc/cron.d/lcg-expiregridmapdir
    #
    

    Define the central banning server

     pap-admin add-pap --public centralbanning lcg-argus.cern.ch "/DC=ch/DC=cern/OU=computers/CN=argus.cern.ch"
     pap-admin enable-pap centralbanning
     pap-admin set-paps-order centralbanning default
    

    Add Site Argus

     #on ngi_argus side:
     pap-admin add-ace 'CN=grid-argus.desy.de, OU=DESY, O=GermanGrid, C=DE' 'POLICY_READ_LOCAL|POLICY_READ_REMOTE'
     pap-admin add-ace 'CN=grid-argus0.desy.de, OU=DESY, O=GermanGrid, C=DE' 'POLICY_READ_LOCAL|POLICY_READ_REMOTE'
     pap-admin add-ace 'CN=grid-argus1.desy.de, OU=DESY, O=GermanGrid, C=DE' 'POLICY_READ_LOCAL|POLICY_READ_REMOTE'
     pap-admin add-ace 'CN=grid-argus2.desy.de, OU=DESY, O=GermanGrid, C=DE' 'POLICY_READ_LOCAL|POLICY_READ_REMOTE'
    
     #on site_argus side:
     pap-admin add-pap ngi grid-ngi-argus.desy.de "/C=DE/O=GermanGrid/OU=DESY/CN=grid-ngi-argus.desy.de"
     pap-admin enable-pap ngi
     pap-admin set-paps-order ngi default 
    
     pap-admin add-ace 'CN=srv-111.afroditi.hellasgrid.gr, OU=afroditi.hellasgrid.gr,O=HellasGrid, C=GR' 'POLICY_READ_LOCAL|POLICY_READ_REMOTE|CONFIGURATION_READ'
    

    For sites without their own Argus, the PDP should be configured:

      # lower the retentionInterval to 60 (default 240 minutes, i.e. 4 hours); see the pdp.ini fragment below
      /etc/init.d/argus-pdp restart
     
     # to reload the policy manually
     /etc/init.d/argus-pdp reloadpolicy
    
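
    A sketch of the corresponding pdp.ini fragment (assuming retentionInterval belongs to the [POLICY] section; the value is in minutes):

      vi /etc/argus/pdp/pdp.ini
    ...
    [POLICY]
    retentionInterval = 60
    ...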

    Tests:

     #to check the current policies
     pap-admin list-policies -all
     #(ngi should be on top)
    
     #to refresh the policy (the default polling interval defined in /etc/argus/pap/pap_configuration.ini is 14400 s and will
     #probably be lowered in the future; to set the polling interval: pap-admin set-polling-interval 3600)
     pap-admin refresh-cache
    

    [top]


    CLUSTER

    Documentation:

  • gLite CLUSTER

      yum install glite-CLUSTER
    

    Configurations:

    
    
    

    Services:

    
    
    

    Tests:

    
    
    

    [top]


    VO DIR NFS Server

    Installation:

      # (by Quattor) yum install nfs-utils
    

    Configurations:

      vi /etc/exports
    #
    # allow access
    #
    /local grid-sm*.desy.de(rw,no_root_squash,sync) *.desy.de(ro,async,no_root_squash)
    
      vi /etc/sysconfig/nfs
    ...
    RPCNFSDCOUNT=32
    ...
      cat /proc/net/rpc/nfsd
    
      /etc/init.d/nfs restart
    
      chkconfig nfslock --list
      chkconfig nfs --list
    
      chkconfig nfslock on
      chkconfig nfs on
    
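
    A hypothetical client-side mount matching the export above (server name and mount point are illustrative only):

      # 'grid-vodir' stands for the NFS server exporting /local
      mount -t nfs -o ro,nfsvers=3 grid-vodir.desy.de:/local/vo /vo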

    VO dir:

      uid=42900(icesgm000) gid=4290(icesgm) groups=4290(icesgm),4280(iceusr)
    
      mkdir -p /local/vo/icecube
      chown 42900.4290 /local/vo/icecube
    

    [top]


    CLUSTER

    Note: Assumes lcg-CEs only?

    Documentation:

  • EMI: gLite CLUSTER
  • Wiki: gLite CLUSTER

      yum install ca-policy-egi-core
      yum install glite-CLUSTER
    

    Configurations:

      /opt/misc/yaim/get-grid-host-cert.sh `/bin/hostname -f`
    
      /opt/glite/yaim/bin/yaim -v -s /opt/misc/yaim/`hostname -s`-info.def -n CLUSTER
      /opt/glite/yaim/bin/yaim -c -s /opt/misc/yaim/`hostname -s`-info.def -n CLUSTER
    

    Services:

    
    
    

    Logs:

    
    
    

    [top]


    CVMFS

    Documentation:

  • Welcome to CernVM-FS’s documentation!
  • CVMFS Technical information
  • Nagios CVMFS
  • Maintaining a CernVM-FS Repository
  • CernVM-FS Configuration Examples
  • CVMFS configuration at RAL

    Repo:

  • http://cvmrepo.web.cern.ch/cvmrepo/yum/cvmfs
  • http://nims.desy.de/extra/cernvm/cvmfs/

    Puppet files:

      ./features/cvmfs_desy/files/etc/cvmfs/config.d/test.desy.de.conf
      ./features/cvmfs_desy/files/etc/cvmfs/default.local
      ./features/cvmfs_desy/manifests/config.pp
    
      ./features/cvmfs_desy/files/etc/cvmfs/exports
    

    Installation: (puppet)

    
    
    

    Configurations:

      cat /etc/cvmfs/config.d/test.desy.de.conf
      cat /etc/cvmfs/default.local
      exportfs -r
    
      cvmfs_config setup
      cvmfs_config probe
      cvmfs_config reload
    
      cat /opt/misc/votags_sl6/desy/desy.list 
    VO-desy-CVMFS
    

    Services:

      cat /etc/cron.d/cvmfs_fsck.cron
    #
    # Cron to check and fix cvmfs cache integrity
    #
    MAILTO=grid-ops@desy.de
     0 4 * * * root ( perl -e 'sleep rand 3600' ; /bin/date +%Y%m%d-%H%M%S ; /usr/bin/cvmfs_fsck <%= @cvmfs_cache_base %> ) >> /var/log/cvmfs-fsck.log 2>&1
    

    Tests:

      ls -l /cvmfs/
    
      /etc/init.d/autofs status
    
      cvmfs2
      cvmfs_config
      cvmfs_fsck
      cvmfs_talk
    
      cvmfs_config stat -v
      cvmfs_config probe            # will mount and check all cvmfs dirs
    
      root@grid-cvmfs1: [~]  cvmfs_talk -i atlas.cern.ch internal affairs
    Inode Generation:
      init-catalog-revision: 2434  current-catalog-revision: 3912
    incarnation: 5  inode generation: 59440337
    

    Ganglia nfsstats.py:

      /opt/misc/tools/GANGLIA/nfsstats.install
      ls -l /usr/lib64/ganglia/python_modules/nfsstats.py
      ls -l /etc/ganglia/conf.d/nfsstats.pyconf
    

    Hints:

      # if CVMFS hangs
      service autofs forcerestart
    

    [top]


    GSISSH

    Documentation:

    Installation:

      yum install gsi-openssh gsi-openssh-server
    

    Configurations:

      vi /etc/gsissh/sshd_config
    ...
    Port 1975
    ...
    
      vi /etc/grid-security/grid-mapfile
    

    Services:

      /etc/init.d/gsisshd start
    

    Logs:

      less /var/log/secure
    

    Tests:

      voms-proxy-init
      gsissh -p 1975 
    
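
    For reference, a complete invocation (host and port as used for the CVMFS installation box later in these notes):

      voms-proxy-init
      gsissh -p 1975 grid-cvmfs-desy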

    [top]


    SQUID

    Documentation:

    General information and instructions

    Change ACLs to restrict the destination sites:

  • ATLAS Squid Deployment
  • WLCG Monitoring

    Installation:

    The Squid can be installed via RPMs or directly from the source tarballs. For CMS the Squid is installed from the tarball, since this allows the installation as a non-root user (only the activation of the automatic startup script requires root access). The details are outlined here.
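
    A rough sketch of the tarball route (URL and version are placeholders, and the configure step asks for the installation location, e.g. /data/squid as used below; the linked instructions are authoritative):

      # as the non-root squid user; version number is a placeholder
      wget http://frontier.cern.ch/dist/frontier-squid-2.7.STABLE9-24.1.tar.gz
      tar xzf frontier-squid-2.7.STABLE9-24.1.tar.gz
      cd frontier-squid-2.7.STABLE9-24.1
      ./configure     # prompts for the install location
      make
      make install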

    Ganglia squid:

      /opt/misc/tools/GANGLIA/squid.install
      ls -l /usr/lib64/ganglia/python_modules/squid.py
      ls -l /etc/ganglia/conf.d/squid.pyconf
    

    Configurations:

      less /etc/security/limits.d/10-squid-nofile.conf
    
      vi /data/squid/etc/squid/customize.sh
    
    # Edit customize.sh as you wish to customize squid.conf.
    # It will not be overwritten by upgrades.
    # See customhelps.awk for information on predefined edit functions.
    # In order to test changes to this, run ../../usr/sbin/fn-local-squid.sh
    # Avoid single quotes in the awk source or you have to protect them from bash.
    #
    
      awk --file `dirname $0`/customhelps.awk --source '{
    setoption("acl NET_LOCAL src", "141.34.0.0/255.255.0.0 131.169.0.0/255.255.0.0 134.76.97.0/255.255.255.0")
    setoption("cache_mem", "4096 MB")
    setoptionparameter("cache_dir", 3, "100000")
    setoption("acl SNMPHOSTS src", "128.142.202.0/24 localhost")
    setoption("acl SNMPMON snmp_community", "public")
    setoption("logfile_rotate", "3")
    ...
    
    example: setoptionparameter("acl RESTRICT_DEST", 3, "^((atlasfrontier.*)\\.cern\\.ch|frontier.*\\.racf\\.bnl\\.gov|ccfrontier.*\\.in2p3\\.fr|lcgft-atlas.*\\.gridpp\\.rl\\.ac\\.uk)$")
    
    
      vi /etc/squid/customize.sh on t2-atlas-squid01
    
      awk --file `dirname $0`/customhelps.awk --source '{
    setoption("acl NET_LOCAL src", "141.34.0.0/255.255.0.0 131.169.0.0/255.255.0.0 134.76.97.0/255.255.255.0")
    setoption( "acl HOST_MONITOR src 127.0.0.1/32 128.142.0.0/16 188.185.0.0/17" )
    setoption("cache_mem", "4096 MB")
    setoptionparameter("cache_dir", 3, "80000")
    setoptionparameter("cache_dir", 2, "/data/squid/var/cache/squid")
    setoption("cache_log", "/data/squid/var/log/squid/cache.log")
    setoptionparameter("access_log", 1, "/data/squid/var/log/squid/access.log")
    setoption("coredump_dir", "/data/squid/var/cache/squid")
    setoption("acl SNMPHOSTS src", "128.142.202.0/24 localhost")
    setoption("acl SNMPMON snmp_community", "public")
    setoption("logfile_rotate", "3")
    print
    }'
    
      service frontier-squid restart
    

    Change acl settings for the squid accepting monitoring requests:

    vim /data/squid/etc/squid/customize.sh
    setoption("acl HOST_MONITOR src", "127.0.0.1/32 128.142.0.0/16 188.185.0.0/17")
    
    atlasprd000@t2-atlas-db0: [/data/squid/etc/squid] ../../usr/sbin/fn-local-squid.sh reload 
    

    For CMS some specific configurations are needed to make the Squid known to the CMSSW applications running locally; a sketch follows.
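
    A sketch of the relevant site-local-config.xml fragment (the site name is an assumption; CMSSW picks the file up from $CMS_PATH/SITECONF/local/JobConfig/, and the squid host names are those used elsewhere in these notes):

    <site-local-config>
      <site name="T2_DE_DESY">
        <calib-data>
          <frontier-connect>
            <proxy url="http://t2-cms-squid01.desy.de:3128"/>
            <proxy url="http://t2-cms-squid02.desy.de:3128"/>
          </frontier-connect>
        </calib-data>
      </site>
    </site-local-config>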

    ATLAS sets the Frontier location in:

      less /opt/misc/atlas/local/setup.sh
      vi /opt/misc/atlas/local/setup.sh.local
    ...
    export FRONTIER_SERVER=
    ...
    

    For WLCG monitoring, port 3401/UDP should be accessible from 128.142.0.0/16 and 188.185.0.0/17; a possible firewall fragment follows.
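
    A possible iptables fragment (a sketch; chain layout and rule persistence are site-specific):

      iptables -A INPUT -p udp --dport 3401 -s 128.142.0.0/16 -j ACCEPT
      iptables -A INPUT -p udp --dport 3401 -s 188.185.0.0/17 -j ACCEPT
      service iptables save     # RHEL/SL-style persistence, adjust as needed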

    Services:

      atlas:
      -----
      service frontier-squid status
    
      cms:
      ---
      service frontier-squid.sh status
    

    Logs:

    
      atlas:
      -----
      ln -s  /data/squid/var/log/squid/access.log .
      ln -s  /data/squid/var/log/squid/cache.log .
    
      cms:
      ---
      ln -s /data/Frontier/frontier-cache/squid/var/logs/access.log
      ln -s /data/Frontier/frontier-cache/squid/var/logs/cache.log
    

    Tests:

      which squidclient
    /usr/bin/squidclient
    
      locate squidclient
    
      #ln -s /data/Frontier/frontier-cache/squid/bin/squidclient /usr/bin/squidclient
    
      runuser -s /bin/sh ganglia -c "squidclient mgr:info"
    

    [top]


    CLOUDMON Client (KVM host)

    Documentation:

    Installation:

      yum install virt-manager libvirt qemu-kvm qemu-kvm-tools python-amqplib
    
      wget http://grid.desy.de/vm/repo/yum/sl6/noarch/RPMS.stable/cloudmon-client-0.0.8-1.noarch.rpm
      rpm -ivh cloudmon-client-0.0.8-1.noarch.rpm
    

    Configurations:

      #mv    /var/lib/libvirt/images /local/.
      #ln -s /local/images           /var/lib/libvirt/.
    
      #/opt/misc/kvm/kvmAddBridge 'eth0' 'bridge0' 'no'
      #less /etc/sysconfig/network-scripts/ifcfg-bridge0
      #less /etc/sysconfig/network-scripts/ifcfg-eth0
    
      vi /etc/yum.repos.d/vhost.repo
    [amqp]
    host     = "grid-mon1.desy.de"
    port     = 5672
    user     = "guest"
    password = "guest"
    
    [logging]
    # Here you can set custom log levels for the application.
    # cfg    = "/etc/cloudmon/client-log.cfg"
    

    Services:

      /etc/init.d/network force-reload
    
      /etc/init.d/libvirtd status
    
      virt-manager
    
      cat /etc/cron.d/cloudmonclientcron.cron 
     */5 * * * * root /usr/bin/cloudmonclientcron > /dev/null 2>&1
    
      /usr/bin/cloudmonclientcron
    

    Tests:

      ifconfig -a  
    

    [top]


    XROOTD (ATLAS)

    Documentation: FAXDECloud Instructions, ATLAS Xrootd

    Installation:

      wget http://www.xrootd.org/binaries/xrootd-stable-slc6.repo
      set gpgcheck=0
    
      /opt/misc/yaim/get-grid-host-cert.sh $(/bin/hostname -f) 
      [/etc/grid-security] mkdir xrd
      [/etc/grid-security] cd xrd/
      [/etc/grid-security/xrd] cp /etc/grid-security/hostcert.pem ./xrdcert.pem
      [/etc/grid-security/xrd] cp /etc/grid-security/hostkey.pem ./xrdkey.pem
      [/etc/grid-security/xrd] chmod 644 /etc/grid-security/xrd/xrdcert.pem
      [/etc/grid-security/xrd] chmod 400 /etc/grid-security/xrd/xrdkey.pem
      [/etc/grid-security/xrd] chown xrootd.xrootd xrdcert.pem
      [/etc/grid-security/xrd] chown xrootd.xrootd xrdkey.pem
    
      yum install expect
      yum install --disablerepo="*" --enablerepo=xrootd-stable xrootd
      yum install xrootd-server-atlas-n2n-plugin
    
      cp /opt/misc/tools/XROOTD/xrootd-clustered_20150203.cfg /etc/xrootd/xrootd-clustered.cfg
      cp /opt/misc/tools/XROOTD/xrootd_20141110 /etc/sysconfig/xrootd
      
      [/root] ln -s /var/log/xrootd/cmsd.log cmsd.log
      [/root] ln -s /var/log/xrootd/xrootd.log xrootd.log
    
      service xrootd start
      service cmsd start
    
    

    Configurations:

      chkconfig fetch-crl-boot on
      chkconfig fetch-crl-cron on 
    
      chkconfig fetch-crl-boot --list
      chkconfig fetch-crl-cron --list
    
      chkconfig cmsd on
      chkconfig xrootd on
    
      /home/mpoise/public/xrootd.sh
    

    Services:

      /etc/init.d/fetch-crl-cron status
      ls -l /var/lock/subsys/fetch-crl-cron
      /usr/sbin/fetch-crl
    
      /etc/init.d/cmsd status
      /etc/init.d/xrootd status
    
      mount -t nfs -o vers=3 dcache-core-atlas:/pnfs /pnfs
    

    Logs:

      less /var/log/xrootd/cmsd.log
      less /var/log/xrootd/xrootd.log
    

    Renew Certificate:

      /opt/misc/yaim/get-grid-host-cert.sh $(/bin/hostname -f)
      /opt/misc/tools/XROOTD/XROOTD-update-hostcert.sh
      service cmsd restart
      service xrootd restart 
    

    [top]


    XROOTD

    Documentation (might be restricted to CMS members): CMS XROOTD Architecture, CMS XROOTD dCache Instructions

    Installation:

    # The xrootd component that deals with CMS TFC is distributed through
    # OSG repositories, so we need to install them:
      
      rpm -Uhv http://repo.grid.iu.edu/osg-el6-release-latest.rpm
    
    # Make sure to disable all osg repositories in /etc/yum.repos.d/osg*
    # (Usually only osg-el6.repo is enabled.)
    
    # Install xrootd server from epel
    
      yum install xrootd
    
    # [It might already be installed by the Puppet VOBOX setup]
     
    # Install the CMS specific xrootd plugin from OSG:
    
      yum --enablerepo=osg install xrootd-cmstfc
    

    Configurations:

    
    # These configs should come from GIT or something ... TO BE DONE!!! ->ChW
      vi /etc/xrootd/xrootd-clustered.cfg
      vi /etc/xrootd/storage.xml
      echo "u * / lr" >  /etc/xrootd/Authfile
    
    # Enable auto-start xrootd/cmsd
      chkconfig xrootd on
      chkconfig cmsd on
    
    # Start the service now
      service cmsd start
      service xrootd start
    

    Services:

      /etc/init.d/fetch-crl-cron status
      ls -l /var/lock/subsys/fetch-crl-cron
      /usr/sbin/fetch-crl
    
      /etc/init.d/cmsd status
      /etc/init.d/xrootd status
    

    Logs:

      less /var/log/xrootd/cmsd.log
      less /var/log/xrootd/xrootd.log
    

    Tests:

    # You need a valid VOMS proxy (i.e. one supported by the CMS dCache instance)!
    
    # Test the CMS local redirector:
      xrdcp -d 1 -f root://t2-cms-xrootd01.desy.de://store/user/wissingc/Test_Chksum /dev/null
    
    # Test that the local redirector has registered with the regional redirector, Bari in this case:
      xrdcp -d 1 -f root://xrootd.ba.infn.it://store/user/wissingc/Test_Chksum /dev/null
    
    # SET on grid-mon.desy.de /etc/cron.d/SETests.cron
      29,59 * * * * gellrich /opt/SETests/SET_xrootd.sh root://t2-cms-xrootd01.desy.de//store/user/wissingc/Test_Chksum desy  > /dev/null 2>&1 
    

    [top]


    CVMFS Stratum-0/1 & Installation box

    Documentation:

  • CVMFS Technical information
  • Instructions for the repository maintainer
  • Creating a Repository (Stratum 0)
  • Install a CVMFS Stratum 1
  • Stratum One Service Operations

    Puppet files:

      ./features/grid/files/etc/grid-security/grid-mapfile.hone
    
      ./features/grid/manifests/srv/cvmfs_one.pp
    

    Hosts:

    grid-cvmfs-desy.desy.de
    grid-cvmfs-ilc.desy.de
    
    grid-cvmfs-null0.desy.de
    
    grid-cvmfs-one0.desy.de
    grid-cvmfs-one1.desy.de
    

    Installation box

      ssh -l root grid-cvmfs-desy
    
      #
      # gsissh
      #
      cat /etc/grid-security/grid-mapfile
    
      #
      # cvmfs
      #
      (calice.desy.de desy.desy.de ghep.desy.de hermes.desy.de hone.desy.de ilc.desy.de olympus.desy.de xfel.desy.de zeus.desy.de)
      (ilc.desy.de)
    
      # clean
      cvmfs_server rmfs desy.desy.de
    
      # storage
      cd /srv/cvmfs
      ln -s /storage/cvmfs/desy.desy.de .
    
      # build cvmfs
      cvmfs_server mkfs -o desysgm000 desy.desy.de
    
      # key management
      ls -l /etc/cvmfs/keys/desy.de
    
      #make links for .pub and .masterkey to the desy.de keys (to be able to sign all DESY-HH repositories with one key)
      \rm desy.desy.de.pub
      \rm desy.desy.de.masterkey
    
      ln -s desy.de/desy.de.masterkey desy.desy.de.masterkey
      ln -s desy.de/desy.de.pub       desy.desy.de.pub
    
      cvmfs_server resign
    
      ls -l /etc/httpd/conf.d/
    
      vi /etc/httpd/conf.d/cvmfs.desy.desy.de.conf
    
      vi /etc/cvmfs/repositories.d/*.desy.de/server.conf
    ...
      CVMFS_HASH_ALGORITHM=sha1
    
      #
      # CVMFS from elsewhere
      #
      vi /etc/fstab
    ...
    #
    cvmfs2#sft.cern.ch      /cvmfs/sft.cern.ch      fuse    ro,nodev,allow_other,default_permissions        0       0
    cvmfs2#clicdp.cern.ch   /cvmfs/clicdp.cern.ch   fuse    ro,nodev,allow_other,default_permissions        0       0
    #
    
      /etc/cvmfs/default.local
    #
    # manually added (make sure /var/cache/cvmfs2 exists)
    #
    CVMFS_REPOSITORIES=sft.cern.ch,clicdp.cern.ch
    CVMFS_CACHE_BASE=/var/cache/cvmfs2
    CVMFS_QUOTA_LIMIT=4000
    CVMFS_HTTP_PROXY="http://t2-atlas-squid01.desy.de:3128|http://t2-cms-squid01.desy.de:3128;http://t2-atlas-squid02.desy.de:3128|http://t2-cms-squid02.desy.de:3128|http://t2-cms-squid03.desy.de:3128"
    CVMFS_SERVER_URL="http://grid-cvmfs-one.desy.de/cvmfs/@fqrn@;http://cvmfs-stratum-one.cern.ch/opt/@org@;http://cernvmfs.gridpp.rl.ac.uk/opt/@org@;http://cvmfs.racf.bnl.gov/opt/@org@;http://cvmfs.fnal.gov/opt/@org@;http://cvmfs-stratum-one.hep.pnnl.gov/cvmfs/@fqrn@"
    CVMFS_NFS_SOURCE=no
    CVMFS_MEMCACHE_SIZE=64
    CVMFS_NFILES=65535
    

    stratum-1

      #yum install mod_wsgi
    
      echo -e "*\t\t-\tnofile\t\t16384" >>/etc/security/limits.conf
      ulimit -n 16384
    
      vi /var/www/html/robots.txt
    User-agent: *
    Disallow: /
    
      ls -l /etc/cvmfs/keys
    
      less /etc/cron.d/cvmfs_one.cron (puppet)
    
      less /etc/logrotate.d/cvmfs (puppet)
    
      less /etc/httpd/conf.d/cvmfs.conf (puppet)
    
      ls -l /opt/misc/tools/CVMFS-1/etc/httpd/conf.d/
      ls -l /etc/httpd/conf.d/
    
      #
      # http://cernvm.cern.ch/portal/filesystem/cvmfs-2.1.20
      #
      # for the stratum-1 the settings are wrong in /etc/cvmfs/repositories.d/*/replica.conf  
      #
      # CVMFS_PUBLIC_KEY=/etc/cvmfs/keys/cern.ch.pub:/etc/cvmfs/keys/cern-it1.cern.ch.pub:/etc/cvmfs/keys/cern-it2.cern.ch.pub
      #
      cd /etc/cvmfs/keys
      ln -s */* .
    
      cvmfs_server list
    
      cd /srv/cvmfs
      mkdir test.desy.de
      #ln -s atlas atlas.cern.ch
    
      cvmfs_server add-replica -o root http://grid-cvmfs-null.desy.de:8000/cvmfs/test.desy.de /etc/cvmfs/keys/desy.de/desy.de.pub
    
      ls -l /etc/cvmfs/repositories.d/
    
      cat /etc/cron.d/cvmfs_one.cron
    
      cvmfs_server snapshot belle.cern.ch
    
      find /srv/cvmfs/*.*/data/txn -name "*.*" -mtime +2 2>/dev/null|xargs rm -fv
    

    Tests:

      cvmfs_server check desy.desy.de
      cvmfs_server tag desy.desy.de
    

    stratum-0

      less /etc/tsm/dsm.opt
      less /etc/tsm/dsm.sys
    
      vi /etc/logrotate.d/tsm
    /var/log/tsm/dsmerror.log /var/log/tsm/dsmsched.log {
            compress
            daily
            dateext
            delaycompress
            maxage 7
            missingok
            rotate 7
            create 640 root root
    }
    
    /var/log/tsm/dsmwebcl.log {
            compress
            daily
            dateext
            delaycompress
            maxage 7
            missingok
            rotate 7
            create 640 root root
            postrotate
                    /etc/init.d/dsmcad restart > /dev/null
            endscript
    }
    

    Logs:

      ls -l /var/log/httpd
      less  /var/log/httpd/access_log
      less  /var/log/httpd/error_log
    
    stratum-0:
    ---------
      less /var/log/dsmsched.log
      less /var/log/dsmerror.log
    
    stratum-1:
    ---------
      less /var/log/cvmfs/cron.log
    

    Tests:

    stratum-0:
    ---------
      dsmc q fi
      #dsmc incr     # run incremental archive 
    
    stratum-1:
    ---------
      ls -l /storage/cvmfs/
    

    Hints:

      less /var/log/cvmfs/cron.log
      for r in `ls -1 /etc/cvmfs/repositories.d/`; do echo $r; cvmfs_server migrate $r; done
    

    Usage:

      voms-proxy-init
      gsissh -p 1975 grid-cvmfs-desy
      cvmfs_server check desy.desy.de
      cvmfs_server transaction desy.desy.de
      cd /cvmfs/desy.desy.de/
      vi ...
      cd
      cvmfs_server publish desy.desy.de
      cvmfs_server check desy.desy.de
    

    Introducing a New Repo

    Repo Host:

      sshr grid-cvmfs-desy
    
      mkdir /storage/cvmfs/test.desy.de
      chown grid:grid /storage/cvmfs/test.desy.de
    
      vi /etc/fstab
    ...
    grid-na01:/stratum_0/test.desy.de       /storage/cvmfs/test.desy.de     nfs     rw,noatime,hard,nfsvers=3       0       0
    ...
      mount /storage/cvmfs/test.desy.de
      
      cvmfs_server mkfs -o grid test.desy.de
    
      ls -l /etc/httpd/conf.d
    
      ls -l /etc/cvmfs/keys/desy.de
      cd /etc/cvmfs/keys/desy.de
      rm test.desy.de.pub test.desy.de.masterkey
      ln -s desy.de.pub test.desy.de.pub
      ln -s desy.de.masterkey test.desy.de.masterkey  
    
      cvmfs_server resign test.desy.de
    
      su - grid
      cvmfs_server transaction test.desy.de ; date >> /cvmfs/test.desy.de/test ; cvmfs_server publish test.desy.de
    

    Stratum-0:

      sshr grid-cvmfs-null0
    
      ls -l /storage/cvmfs
    

    Stratum-1:

      sshr grid-cvmfs-one1
    
      cd /etc/httpd/conf.d/
      cp cvmfs.zeus.desy.de.conf cvmfs.test.desy.de.conf
      vi cvmfs.test.desy.de.conf
    
      cvmfs_server add-replica -o root http://grid-cvmfs-null.desy.de:8000/cvmfs/test.desy.de /etc/cvmfs/keys/desy.de.pub
      vi /etc/cvmfs/repositories.d/test.desy.de/replica.conf
    ...
    CVMFS_PUBLIC_KEY=/etc/cvmfs/keys/desy.de.pub
    ...
      cvmfs_server snapshot test.desy.de
    

    Clients:

      sshr grid-wn9999
    
      vi /etc/cvmfs/default.local
    
      # stratum-1
      vi /etc/cvmfs/domain.d/desy.de.local 
    
      service autofs restart
      cvmfs_config probe test.desy.de
      cvmfs_config reload test.desy.de
      ls -l /cvmfs/test.desy.de
    
    [top]

    DIRAC Server (Distributed Infrastructure with Remote Agent Control)

    Documentation:

  • DIRAC Project
  • DIRAC Administration

    Installation: (puppet)

      #yum install ca-policy-egi-core fetch-crl
      #yum install mysql
      #adduser -s /bin/bash -d /home/dirac dirac
    
      /opt/misc/yaim/get-grid-host-cert.sh $(/bin/hostname -f)
    
      mkdir /opt/dirac
      chown -R dirac:dirac /opt/dirac
    
      su - dirac
    
      mkdir -p /opt/dirac/etc/grid-security/
      cp /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem /opt/dirac/etc/grid-security
      ln -s /etc/grid-security/certificates  /opt/dirac/etc/grid-security/certificates
    
      mkdir /home/dirac/DIRAC
      cd /home/dirac/DIRAC
      wget -np https://github.com/DIRACGrid/DIRAC/raw/integration/Core/scripts/install_site.sh --no-check-certificate
      chmod a+x install_site.sh
    
      vi install.cfg
      ./install_site.sh install.cfg
    

    Configuration:

      vi /opt/dirac/pro/etc/Dirac-Prod.cfg
    #
    # make sure 'MaxTotalJobs' and 'MaxWaitingJobs' are set reasonably to avoid blocking 
    #
    CEs
            {
              grid-batch4.desy.de
              {
                CEType = SSHTorque
                SubmissionMode = Direct
                SSHHost = grid-batch4.desy.de
                SSHUser = desyusr007
                SSHPassword = 
                Queues
                    {
                    desy
                      {
                      CPUTime = 2880
                      MaxTotalJobs = 500
                      MaxWaitingJobs = 500
                      BundleProxy = True
                      BatchOutput = /home/desyusr007/dirac/output
                      BatchError = /home/desyusr007/dirac/error
                      ExecutableArea = /home/desyusr007/dirac/submission
                      RemoveOutput = True
                      }
                    }
              }
    ...
    #
    

    Logs:

      less /opt/dirac/pro/runit/WorkloadManagement/*/log/current
    

    Hints:

    
    
    

    Tests:

    
    
    
    
    

    [top]


    DIRAC Client (Distributed Infrastructure with Remote Agent Control)

    Documentation:

  • DIRAC Project
  • DIRAC Administration
  • DIRAC Client

    Repo: Repo

    Installation:

      wget -np -O dirac-install https://github.com/DIRACGrid/DIRAC/raw/integration/Core/scripts/dirac-install.py --no-check-certificate
      chmod +x dirac-install
    
      ./dirac-install -r v6r11    # 2014-05-12
    

    Configuration:

    
    
    

    Logs:

    
    
    

    Tests:

    
    
    
    
    

    [top]


    by the DESY Grid Team: http://grid.desy.de/