- VFAT Mask: investigate the issue reported here
- Efficiency vs eta partition: investigate the issue reported here
- Complete analysis on 2022 data
- Checks on GE21 code here
- Run on MC
- Run on 2022 data
- High-occupancy events: we know what triggers these events; looking for the best way to mask them out offline.
- CMS reconstructed muon tracks (StandAlone Muons, STA) do make use of GEM RecHits, which biases the evaluation of the spatial resolution. Possible solutions:
- Propagate the ME11 segments
- Remove the GE11 hits and re-fit the tracks
- VFAT Mask: at the moment the VFAT masks are not stored in the RECO data. Issue reported here. To be further discussed with Marcello.
As we discussed today, the efficiency analysis is split into two steps:
- Ntuplization
- Analysis of the ntuples
CMSSW code meant to skim the data and propagate the muon tracks onto the GE11 surface.
On your lxplus area, run:
cmsrel CMSSW_12_5_1
cd CMSSW_12_5_1/src/
cmsenv
git cms-merge-topic yeckang:OHStatus_12_5_X
git clone --branch GEMOHStatusCMSSW_12_6_X [email protected]:gem-dpg-pfa/MuonDPGNTuples.git MuDPGAnalysis/MuonDPGNtuples
cd MuDPGAnalysis/MuonDPGNtuples/
scram b -j 5
cp /afs/cern.ch/user/f/fivone/public/forSumit/classes_def.xml /afs/cern.ch/work/s/skeshri/GEM_efficiency/Ntuple/CMSSW_12_6_4/src/DataFormats/GEMDigi/src/
cp /afs/cern.ch/user/f/fivone/public/forSumit/GEMAMCStatus.h /afs/cern.ch/work/s/skeshri/GEM_efficiency/Ntuple/CMSSW_12_6_4/src/DataFormats/GEMDigi/interface/
The code can be executed:
- locally in your lxplus area
- distributed at the CMS computing centers
I recommend running it locally first.
The output file will be stored under the test/ folder.
The input parameters are provided in the file test/muDpgNtuples_cfg.py
Among the many parameters, those of primary importance are:
- the globalTag link, which depends on the CMS configuration at the time of the run
- the input file link, which specifies which files you are running on; you might need a working VOMS authentication to access input files located at sites other than CERN
You don't need to edit the config file; simply run
cmsenv
cmsRun muDpgNtuples_cfg.py
to start the ntuplization.
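For orientation, here is an illustrative fragment of how these two parameters typically appear in a CMSSW config such as test/muDpgNtuples_cfg.py. The global tag and input file names below are placeholders, not the actual values used by the ntuplizer:

```python
# Illustrative CMSSW config fragment; the global tag and input file
# below are placeholders, NOT the real values in muDpgNtuples_cfg.py.
import FWCore.ParameterSet.Config as cms
from Configuration.AlCa.GlobalTag import GlobalTag

process = cms.Process("MuDPGNtuples")
process.load("Configuration.StandardSequences.FrontierConditions_GlobalTag_cff")
# The global tag must match the CMS configuration at the time of the run
process.GlobalTag = GlobalTag(process.GlobalTag, "124X_dataRun3_Prompt_v4", "")

# Input files: accessing files hosted outside CERN requires a valid VOMS proxy
process.source = cms.Source(
    "PoolSource",
    fileNames=cms.untracked.vstring(
        "root://cms-xrd-global.cern.ch//store/data/..."  # placeholder path
    ),
)
```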
The output will be stored under /eos/cms/store/group/dpg_gem/comm_gem/P5_Commissioning/2022/GEMCommonNtuples
Therefore write access to the GEM DPG EOS area (EOS T2 facility) is needed: subscribe to the e-group cms-eos-dpg-gem to obtain it.
You must have a working VOMS authentication to access input files located at sites other than CERN.
cd CRAB_SUB/
cmsenv
source "/cvmfs/cms.cern.ch/common/crab-setup.sh"
python3 P5Data_crabConfig.py --RunList [space-separated list of runs to be ntuplized] --Dataset [chosen dataset among the available options]
This command submits the jobs to CRAB. You can check the status of your jobs by running
python3 CheckStatus.py
This code is meant to analyze the ntuples and produce efficiency info per VFAT. It actually produces many plots of many quantities.
This code can largely benefit from columnar analysis, which is what I am working on at the moment. The code currently on GitHub processes the events one by one in a for loop; it typically takes ~hours to process a run. With columnar analysis it will be 1-2 orders of magnitude faster.
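To illustrate where the speed-up comes from, here is a minimal sketch comparing an event-by-event loop with the equivalent columnar (vectorized) operation. The NumPy arrays are toy stand-ins for ntuple branches; the names are made up:

```python
import numpy as np

# Toy data standing in for ntuple branches (hypothetical names):
# propagated hit positions and matched rechit positions.
rng = np.random.default_rng(0)
prop_x = rng.normal(0.0, 1.0, size=100_000)
rec_x = prop_x + rng.normal(0.0, 0.03, size=100_000)

# Event-loop style: one Python iteration per entry (slow).
residuals_loop = []
for p, r in zip(prop_x, rec_x):
    residuals_loop.append(r - p)

# Columnar style: one vectorized operation over all entries (fast).
residuals_vec = rec_x - prop_x

assert np.allclose(residuals_loop, residuals_vec)
```

The two approaches give identical results; the columnar version avoids the per-event Python interpreter overhead, which is where the order-of-magnitude gain comes from.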
The README details the usage and functioning of the code.
I recommend reading it first. You should be able to try it out by following the instructions there. I recommend checking out the main branch.
My latest working branch is feature_python3:
- based on python3 (previously python2.7)
- includes changes in the input parameter parsing
- includes the VFAT mask event by event
- manages the dependencies through Poetry
The core of the code stays the same, but the instructions for this version have not been written yet 😖.
Additional code that helps you select only the lumisections for which GE11 HV == your desired value
This is a stub: I intend to write this section, but haven’t yet.
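Until that section is written, here is a minimal sketch of the idea, assuming a hypothetical per-lumisection HV table (the names and values below are illustrative only):

```python
# Hypothetical per-lumisection HV record for one GE11 chamber:
# lumisection number -> equivalent divider current (values are made up).
hv_by_ls = {1: 690, 2: 690, 3: 680, 4: 690, 5: 700}

def lumisections_at_hv(hv_by_ls, desired_hv):
    # Keep only the lumisections where GE11 HV == the desired value.
    return sorted(ls for ls, hv in hv_by_ls.items() if hv == desired_hv)

print(lumisections_at_hv(hv_by_ls, 690))  # -> [1, 2, 4]
```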
- Have the "FetchOMS" script work
- Produce keys and cert
- Link the key and cert location on the
- Run the following:
cd /venv/bin
source activate
- Start fetching info from run 357899 to run 363000
- Filter based on duration (>1h) and date (up to 27 Nov)
Useful link, runregistry: https://github.com/cms-DQM/runregistry/tree/master/runregistry_api_client
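The duration/date filter in the last step could look like the following sketch. The run records and field names are hypothetical, not the actual OMS schema:

```python
from datetime import datetime, timedelta

# Hypothetical run records as they might come back from OMS;
# the field names and values are illustrative, not the real schema.
runs = [
    {"run": 357899, "start": datetime(2022, 9, 1, 10, 0), "end": datetime(2022, 9, 1, 12, 30)},
    {"run": 360000, "start": datetime(2022, 10, 5, 8, 0), "end": datetime(2022, 10, 5, 8, 20)},
    {"run": 363500, "start": datetime(2022, 12, 1, 9, 0), "end": datetime(2022, 12, 1, 11, 0)},
]

CUTOFF = datetime(2022, 11, 27, 23, 59)

def select_runs(runs, min_duration=timedelta(hours=1), cutoff=CUTOFF):
    # Keep runs longer than 1 h that ended on or before 27 Nov 2022.
    return [r["run"] for r in runs
            if (r["end"] - r["start"]) > min_duration and r["end"] <= cutoff]

print(select_runs(runs))  # -> [357899]
```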
- Use the latest version of the analyzer
- Install Poetry (link). Before installing Poetry, enable a python3 version higher than python3.7; instructions for lxplus are here.
- git clone --branch feature_python3 https://github.com/fraivone/PFA_Analyzer.git
- Set up the poetry environment
poetry shell
poetry install
TO DO Francesco:
- Provide the chamber masking script
- Check with Simone the status of the code after HV Extension
Two cases:
- OHStatus available in the data: this is the optimal case for masking. The information on VFAT Errors, Warnings, ZeroSuppressed, and VFAT Masked at the FW/DAQ level is stored per event. I fetch the information per VFAT per event and store it in the ntuples. At the analysis level, for each event, I skip the propagated hits which land on a "BAD" VFAT.
- OHStatus NOT available in the data: I rely on the DQM summary plot. Chambers that appear "not green" in the DQM summary plot are masked entirely for the whole run.
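The two masking strategies can be sketched as follows. The hit and mask structures below are hypothetical simplifications; the real ntuple layout differs:

```python
# Hypothetical structures: each propagated hit is a (chamber, vfat) pair;
# bad_vfats holds per-event "BAD" VFATs (errors/warnings/FW-masked),
# masked_chambers holds chambers masked for the whole run from the DQM summary.
def good_hits(prop_hits, bad_vfats=None, masked_chambers=None):
    """Filter propagated hits.

    Case 1 (OHStatus available): drop hits landing on a "BAD" VFAT.
    Case 2 (no OHStatus): drop hits on chambers masked for the whole run.
    """
    bad_vfats = bad_vfats or set()
    masked_chambers = masked_chambers or set()
    return [
        (chamber, vfat) for chamber, vfat in prop_hits
        if (chamber, vfat) not in bad_vfats and chamber not in masked_chambers
    ]

hits = [("GE11-M-01L1", 3), ("GE11-M-01L1", 7), ("GE11-P-05L2", 0)]
# Case 1: event-level VFAT mask
print(good_hits(hits, bad_vfats={("GE11-M-01L1", 7)}))
# Case 2: run-level chamber mask from the DQM summary
print(good_hits(hits, masked_chambers={"GE11-P-05L2"}))
```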
Instructions:
cmsrel CMSSW_13_0_3
cd CMSSW_13_0_3/src/
cp -r /afs/cern.ch/work/s/skeshri/public/forShalini/Ntuple_2023/* .
scram b -j 4
cd MuDPGAnalysis/MuonDPGNtuples/test
voms-proxy-init --voms cms
cmsRun muDpgNtuples_cfg.py #for running locally
CRAB_JOBS:
cd MuDPGAnalysis/MuonDPGNtuples/CRAB_SUB/
source /cvmfs/cms.cern.ch/crab3/crab.sh
python3 P5Data_crabConfig.py -rl 366451 -d ZMu