Geant4/examples/extended/parameterisations/Par04/training/README.md

This repository contains the set of scripts used to train, generate and validate the generative model used
in this example.

- root2h5.py: translation of a ROOT file with showers into the h5 format.
- core/constants.py: defines the set of common variables.
- core/model.py: defines the VAE model class and a handler to construct the model.
- utils/preprocess.py: defines the data loading and preprocessing functions.
- utils/hyperparameter_tuner.py: defines the HyperparameterTuner class.
- utils/gpu_limiter.py: defines the logic responsible for GPU memory management.
- utils/observables.py: defines a set of observables that can be calculated from a shower.
- utils/plotter.py: defines plotting classes responsible for manufacturing various plots of observables.
- train.py: performs model training.
- generate.py: generates showers using a saved VAE model.
- observables.py: defines a set of shower observables.
- validate.py: creates validation plots using shower observables.
- convert.py: defines the conversion function to an ONNX file.
- tune_model.py: performs hyperparameter optimization.

## Getting Started

The `setup.py` script creates the folders needed to save model checkpoints, generated showers and validation plots.

```
python3 setup.py
```

## Full simulation dataset

The full simulation dataset can be downloaded from/linked to [Zenodo](https://zenodo.org/record/6082201#.Ypo5UeDRaL4).

If custom simulation is used, the output of the full simulation needs to be converted to the h5 format with the root2h5.py script.

## Training

In order to launch the training:

```
python3 train.py
```

You may specify the three following flags (a combined example is shown after their descriptions). If you do not, default
values will be used.

```--max-gpu-memory-allocation``` specifies a maximum memory allocation on a single, logical GPU. It should be given as
an integer.

```--gpu-ids``` specifies the IDs of the physical GPUs. It should be given as a comma-separated string with no spaces.
If you specify more than one GPU, then ```tf.distribute.MirroredStrategy``` will automatically be applied to the
training.

```--study-name``` specifies a study name. This name is used as the experiment name in the W&B dashboard and as the name
of the directory for saving models.

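For example, a training run on two GPUs could be launched as follows (the flag names come from the list above; the
values themselves are only illustrative):

```
# illustrative values: two physical GPUs, an integer memory cap and a custom study name
python3 train.py --gpu-ids="0,1" --max-gpu-memory-allocation=4096 --study-name=my_vae_study
```
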
## Hyperparameters tuning

If you want to tune hyperparameters, specify in `tune_model.py` the parameters to be tuned. There are three types of
parameters: discrete, continuous and categorical. Discrete and continuous parameters require a range specification (low, high),
while a categorical parameter requires a list of possible values to choose from (for example, a learning rate would typically be
tuned as a continuous parameter, while the choice of optimizer would be categorical). Then run it with:

```
python3 tune_model.py
```

If you want to parallelize the tuning process you need to specify a common storage (preferably a MySQL database) by
setting `--storage="URL_TO_MYSQL_DATABASE"`. Then you can run multiple processes with the same command:

```
python3 tune_model.py --storage="URL_TO_MYSQL_DATABASE"
```

Similarly to the training procedure, you may specify ```--max-gpu-memory-allocation```, ```--gpu-ids``` and
```--study-name```.

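For instance, each parallel worker could be started with a command along these lines (the storage URL, study name and
GPU ID are placeholders):

```
# illustrative values; every worker points to the same storage and study name
python3 tune_model.py --storage="URL_TO_MYSQL_DATABASE" --study-name=my_vae_study --gpu-ids="0"
```
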
## ML shower generation (MLFastSim)

In order to generate showers using the ML model, use the `generate.py` script and specify the geometry, energy
and angle of the particle, as well as the epoch of the saved checkpoint model. The number of events to generate can also
be specified (by default it is set to 10,000):

```
python3 generate.py --geometry=SiW --energy=64 --angle=90 --epoch=1000 --study-name=YOUR_STUDY_NAME
```

If you do not specify an epoch number, the best model (saved as ```VAEbest```) will be used for shower generation.

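For instance, a generation run that relies on the best saved model could look as follows (the values are only
illustrative):

```
# illustrative values; with no --epoch flag the VAEbest checkpoint is used
python3 generate.py --geometry=SiW --energy=64 --angle=90 --study-name=YOUR_STUDY_NAME
```
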
## Validation

In order to validate the MLFastSim against the full simulation, use the `validate.py` script and specify the geometry,
energy and angle of the particle:

```
python3 validate.py --geometry=SiW --energy=64 --angle=90
```

## Conversion

After training and validation, the model can be converted into a format that can be used in C++, such as ONNX, using
the `convert.py` script:

```
python3 convert.py --epoch 1000
```
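
The exported file can then be sanity-checked with the `onnx` Python package (the file name below is a placeholder; the
actual output location depends on the `convert.py` configuration):

```
# illustrative check; replace the path with the file produced by convert.py
python3 -c "import onnx; onnx.checker.check_model(onnx.load('PATH_TO_EXPORTED_MODEL.onnx'))"
```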