You are here: TWiki > VELO Web > Testbeam > VeloTestbeam2014 > Tpx3TestbeamSoftware (2021-11-24, Main.TimothyDavidEvans)

Timepix3 testbeam offline software

  • The Timepix3 test beam offline analysis software is based on the Gaudi framework.
  • The project is named Kepler after the German astronomer Johannes Kepler.
    • For other notable people with the name Kepler see Wikipedia.
  • Contacts: Tim Evans, Heinrich Schindler

Definitions and nomenclature

Coordinate system

  • The local x coordinate of a pixel centre (for a single-chip assembly) is given by (column + 0.5) × pitch.
  • The local y coordinate of a pixel centre is given by (row + 0.5) × pitch.
  • The transformation from local to global coordinates is composed of a rotation (ROOT::Math::RotationZYX) and a translation.
  • In the global coordinate system, the z axis is roughly aligned with the direction of the beam and the y axis points upwards.
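  • As an illustration, the pixel-centre and local-to-global mappings above can be sketched in a few lines of python (hypothetical helper functions, not Kepler code; the 55 μm Timepix3 pixel pitch and the rotation order of ROOT::Math::RotationZYX, z first, then y, then x, are assumed):

```python
import numpy as np

PITCH = 0.055  # mm; the Timepix3 pixel pitch is 55 um

def local_position(col, row):
    """Local coordinates of a pixel centre for a single-chip assembly."""
    return np.array([(col + 0.5) * PITCH, (row + 0.5) * PITCH, 0.0])

def rotation_zyx(rx, ry, rz):
    """Rotation matrix applying, like ROOT::Math::RotationZYX, a rotation
    about z first, then about y, then about x."""
    cz, sz = np.cos(rz), np.sin(rz)
    cy, sy = np.cos(ry), np.sin(ry)
    cx, sx = np.cos(rx), np.sin(rx)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rx @ Ry @ Rz

def local_to_global(local, angles, translation):
    """Compose the rotation and translation of a plane."""
    return rotation_zyx(*angles) @ np.asarray(local) + np.asarray(translation)
```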


  • The global times of hits, clusters, tracks and triggers are encoded in a 64-bit unsigned integer number, consisting of
    • a coarse time (in units of 25 ns), corresponding to bits 12 - 63, and
    • a fine time, corresponding to the 12 least significant bits of the timestamp.
  • One unit of fine time (also known as "trigger FToA") therefore corresponds to 25 ns / 4096.
  • In addition the objects are assigned a "local" time ("htime") given by the time (in ns) with respect to the beginning of the respective event.
  • The time of a cluster is given by the earliest timestamp of the pixel hits forming the cluster.
  • The time of a track is given by the average of the times of the clusters forming the track.
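  • As a sketch, the timestamp layout above can be decoded as follows (illustrative helper functions, not Kepler code):

```python
COARSE_LSB_NS = 25.0         # one coarse unit = 25 ns
FINE_LSB_NS = 25.0 / 4096.0  # one fine ("trigger FToA") unit

def decode_time(timestamp):
    """Split a 64-bit global timestamp into (coarse, fine) counts."""
    coarse = timestamp >> 12  # bits 12 - 63
    fine = timestamp & 0xFFF  # 12 least significant bits
    return coarse, fine

def time_in_ns(timestamp):
    """Convert a global timestamp to nanoseconds."""
    coarse, fine = decode_time(timestamp)
    return coarse * COARSE_LSB_NS + fine * FINE_LSB_NS
```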

Input files

  • The minimum input you need to run Kepler is
    • a set of raw data files, and
    • an alignment file describing the geometry of the setup.

Raw data files

  • The format of the SPIDR raw data files is explained in this document.
  • The testbeam data files are stored on EOS in the directory
    and are grouped by data taking period.
  • The files from one data taking period are grouped by run, e. g.
  • A backup copy is available on castor,
  • To read the raw data stored on EOS, Kepler uses a ROOT I/O plugin to interface directly with the storage, hence fuse-mounting is not required.

Alignment files

  • The rotation angles and positions of each plane are imported from a text file.
  • For each plane in the setup the alignment file contains a line in the following format (note: don't use tabs but spaces to separate the columns)
    NameOfPlane x[mm] y[mm] z[mm] rotX[rad] rotY[rad] rotZ[rad]
  • For a single-chip assembly NameOfPlane is the ID of the Timepix3, e. g.
    W0002_J06  14.  14.  0  2.98  3.3  0.
  • For a tile, NameOfPlane is composed of the IDs of the three ASICs, separated by a comma, e. g.
    W0009_L09,W0009_D08,W0009_E08  0.  14.  390  3.14  0.  0.
  • For most of the testbeam runs, alignment files have been produced and are available on EOS. They are organized in a similar way to the raw data files, e. g. the alignment file for run 13600 is available at
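  • The format above is simple enough to read back by hand; a minimal parser could look like this (hypothetical helper, not part of Kepler):

```python
def parse_alignment(lines):
    """Parse alignment-file lines of the form
    NameOfPlane x[mm] y[mm] z[mm] rotX[rad] rotY[rad] rotZ[rad]
    into {name: {'pos': (x, y, z), 'rot': (rotX, rotY, rotZ)}}."""
    planes = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank and comment lines
        fields = line.split()  # columns are separated by spaces, not tabs
        values = [float(v) for v in fields[1:7]]
        planes[fields[0]] = {'pos': tuple(values[:3]), 'rot': tuple(values[3:])}
    return planes
```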

Configuration files

  • There are a number of calibrations and corrections that can be imported in Kepler by means of text files.
  • The ToT measurement of a pixel can be converted to charge (number of electrons) using a calibration curve.
    • ToT calibrations are obviously important for charge collection efficiency measurements. In addition, they also help to correct for gain variations across the pixel matrix.
  • Pixel-by-pixel ToT calibration files for most of the Devices Under Test are available on EOS,
  • These files have the keyword "Charge" and the chip ID in the first line, followed by one line for each pixel with six parameters: the column, row of the pixel and the parameters p0, p1, c, and t of the "surrogate" fit function
    ToT(q) = p0 + p1 * q - c / (q - t)
  • It is also possible to apply a "global" charge calibration (in this case the parameters of the surrogate functions are the same for all pixels). This is done by loading a configuration file with the keyword "GlobalCharge" in the first line, followed by one line for each chip, containing the chip ID and the four parameters of the surrogate function.
  • Noisy pixels can be masked offline by loading a file with the keyword "Mask" in the first line, and a list of pixels (one line per pixel, with chip ID, column, and row of the pixel to be masked), for example
    # Device   column row
    W0005_F09   121   150
    W0005_F09   133   115
  • If a column is not properly synchronized, one can apply per-column offsets to the time stamp of hits.
    • For example, to apply an offset of one ToA (i. e. 25 ns) to double column five, you should load a configuration file that looks like this
      #chip dcol offset
      W009_B05 5 1
    • This is a legacy feature nowadays. Resynchronizations used to be an issue during the first testbeam periods.
  • In order to set a timing offset for a chip as a whole, load a file with the keyword "Timing" in the first line, followed by one line for each device with the chip ID and the offset in ns.
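  • The surrogate function quoted above can also be inverted analytically, which is what is needed to convert a measured ToT back into charge. A sketch (illustrative code, not Kepler's implementation): multiplying the surrogate function through by (q - t) gives a quadratic in q, of which the larger root is the physical one.

```python
import math

def surrogate_tot(q, p0, p1, c, t):
    """ToT as a function of charge q: ToT(q) = p0 + p1 * q - c / (q - t)."""
    return p0 + p1 * q - c / (q - t)

def surrogate_charge(tot, p0, p1, c, t):
    """Invert the surrogate function by solving
    p1*q**2 - (p1*t + tot - p0)*q + (tot - p0)*t - c = 0
    for q and taking the larger root."""
    b = p1 * t + tot - p0
    disc = b * b - 4.0 * p1 * ((tot - p0) * t - c)
    return (b + math.sqrt(disc)) / (2.0 * p1)
```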


  • You can either use a released version or - if you want to profit from the latest modifications - the development version of the project.
  • The latest release is v3r1 (release notes).
  • To run with a released version as is, just do
    lb-run Kepler v3r1
    followed by the name of your python job configuration script.
  • See this page for more details about lb-run.
  • To use the latest version either run from the nightly builds or compile the project yourself.

Compiling the project (on machine with access to cvmfs)

  • Create a directory where you want to install Kepler (let's call it 'Kepler' in this example).
    mkdir ~/Kepler; cd ~/Kepler
  • Download a working copy of the project (you only need to do this once).
    git clone ssh:// KEPLER
    go to the directory
    cd KEPLER
  • To build using the `standard' LHCb tools, first setup the LHCb environment:
       export BINARY_TAG=x86_64-centos7-gcc9-opt
       export LCG_VERSION=97a 
       source /cvmfs/ -c $BINARY_TAG
    • You can then initialise the project and make using the LHCb tools as follows:
       make -j 10
  • To run the application do
    where is your python options file.

Running from the nightly builds

  • Login to lxplus.
  • Go to a directory of your choice.
  • Do (only the first time, or when restarting from scratch)
    lb-dev --nightly lhcb-head Kepler/HEAD
  • This creates a directory KeplerDev_HEAD in your working directory.
  • Go to KeplerDev_HEAD.
  • To run the application do
  • To make changes to the code type (only need to do this once)
    git lb-use Kepler
    and check out the respective package (let's say Tb/TbAlgorithms) using
    git lb-checkout Kepler/master Tb/TbAlgorithms
  • After editing the code, compile the project by typing (in KeplerDev_HEAD)
  • See the Git4LHCb Twiki page for a more detailed description of the work flow with "satellite projects".


  • For each event, Kepler runs a sequence of algorithms. The run-time configuration of the sequence and the algorithms is based on python steering files.
  • Kepler has the following sequences of algorithms.
    • The "Telescope" sequence contains the core reconstruction algorithms.
      • By default it includes TbEventBuilder, TbClustering, TbSimpleTracking, TbTriggerAssociator, and TbClusterAssociator.
    • The "Monitoring" sequence contains the monitoring algorithms.
      • By default it includes TbHitMonitor, TbClusterPlots, TbTrackPlots, TbTriggerMonitor, TbDUTMonitor, and TbTrackingEfficiency.
    • The "Output" sequence contains the algorithms dedicated to writing output data.
      • At the moment, there is only one algorithm, TbTupleWriter.
    • The sequence "UserSequence" can be used for running a user's additional algorithms.
  • The configuration of the individual algorithms is discussed below.

Top-level configurable

  • The Kepler top-level configurable (located in Tb/Kepler/python/Kepler/ has the following options.
Option Description Default Comment
InputFiles List of raw data files or directories None Mandatory
AlignmentFile Path of the alignment file None Mandatory
PixelConfigFile Configuration files for the pixel service []  
TimingConfigFile Configuration file for the timing offsets per plane ""  
EvtMax Number of events to process -1 By default all events in the input files are processed.
HistogramFile ROOT file for output histograms Kepler-histos.root  
TupleFile ROOT file for output tuples Kepler-tuple.root  
WriteTuples Flag to add TbTupleWriter to the "Output" sequence or not False By default, no tuples are produced.
Alignment Flag to add TbAlignment to the "Telescope" sequence or not False By default alignment is switched off.
Tracking Flag to run pattern recognition and track-fit related algorithms or not True Can be switched off for processing lab data.
Monitoring Flag to run monitoring algorithms or not True By default monitoring is switched on.
  • There are no default values for input raw data files and alignment file. These always have to be specified by the user.
  • A minimal python configuration script should therefore look something like this.
    from Gaudi.Configuration import *
    from Configurables import Kepler
    # Set the path to the directory/files to be processed
    Kepler().InputFiles = ["eos/lhcb/wg/testbeam/velo/timepix3/July2014/RawData/Run1024/"]
    # Set the alignment file
    Kepler().AlignmentFile = "eos/lhcb/wg/testbeam/velo/timepix3/July2014/RootFiles/Run1024/Conditions/Alignment1024Preliminary.dat"
  • This will run the default sequences with the options of all algorithms set to their default values.
  • An example configuration script can be found at Tb/Kepler/options/
  • If you specify a directory in the list of InputFiles, all .dat files in this directory will be processed. Alternatively you can also give a list of the individual files to be processed.
  • If you want to process files on EOS, make sure the path starts with 'eos' and ends with a '/'. For example,
    Kepler().InputFiles = ['eos/lhcb/wg/testbeam/velo/timepix3/July2014/RawData/Run1024/']
  • Note that a path starting with '/eos' or not ending with a '/' will not currently work. Similarly, conditions files can be accessed on EOS in the same way,
    Kepler().AlignmentFile = 'eos/lhcb/wg/testbeam/velo/timepix3/July2014/RootFiles/Run1024/Conditions/Alignment1024mille.dat'
  • To add your own algorithms to the UserSequence, do
    from Configurables import extraAlgorithm1, extraAlgorithm2
    Kepler().UserAlgorithms = [extraAlgorithm1(), extraAlgorithm2()]


  • All algorithms are derived from the base class TbAlgorithm which has two options.
    • PrintConfiguration prints out all properties of an algorithm during the initialization stage. Defaults to False.
    • MaskedPlanes sets the list of planes to be masked. By default it is empty. The meaning of "masked" varies between the different algorithms.


  • TbEventBuilder reads in the SPIDR raw data files, creates TbHit and TbTrigger objects, sorts them in time and stores those with timestamps within the current event boundaries on the Transient Event Store (TES).
  • The code is located in the package Tb/TbIO.

Event Definition

Option Description Default Comment
EventLength Time in global time units that is defined as a single 'event' 10 SpidrTimes = 16384 × 4096 trigger FToA times  
CacheLength Time cached by Kepler in order to order data packets in time, in global time units 20 SpidrTimes  
OverlapTime Time by which events 'overlap' (in global time units) 0 Works in collaboration with TbPacketRecycler (not run by default)
StartTime Time to start data processing (in milliseconds) 0 Packets before this time are skipped.
EndTime Time to stop processing data (in milliseconds), ignores any packets after this. 0 Not activated by default.
MinPlanesWithHits Minimum number of planes with hits for an event to be considered by Kepler 1 Can be used to suppress events with only noise hits.


Option Description Default Comment
PrintFreq Frequency with which to print event read messages 100  
PrintHeader Prints out the information (chip ID, firmware version, run number, run info, chip configuration, DAC values, …) in the header of the raw data files. False  
HitLocation TES location prefix for hit containers LHCb::TbHitLocation::Default Shouldn't be changed normally
Monitoring Adds verbose output every PrintFreq events regarding the number of packets cached and the number propagated to the TES. False Also adds monitoring histograms for the data rate.

Error Handling

Option Description Default Comment
MaxLostPackets Maximum number of packets that can fall before the current event edge before this is considered a critical failure. 1000  
MaxLostTimers Number of timing packets that fail to properly update the global clock before this is considered a critical failure. 10  
ForceCaching Forces the cache to update for every caching event. False Can be useful for dealing with runs where there are packets stuck in the beginning of the file from the previous run.
IgnoreGlobalClock Uses the clock from the Timepix3 to work out the global time rather than using the SPIDR global clock False Can be useful for processing lab data.


  • TbTupleWriter saves nTuples to a ROOT file which can be useful for interactive analyses.
  • If the full event information is saved, the file size becomes very large.
Option Description Default Comment
WriteTriggers Write out all triggers False  
WriteHits Write out all pixel hits False  
WriteClusters Write out all clusters True By default writes out hit information for each cluster, but this can be configured using the additional two flags below
MaxClusterSize Only write out the first n hits of a cluster 200 Writing out large clusters can be extremely slow
WriteClusterHits Flag controlling whether the hits associated to a cluster are written out True  
WriteTracks Write out all tracks False  
StrippedNTuple Only write out tracks with associated triggers False Very useful for reducing data size with low rate triggers
MaxTriggers Maximum number of triggers that can be associated with a track 20  
  • For example, a sensible default configuration for external users is to write out only track information, and only for tracks with associated triggers, hence
    from Configurables import TbTupleWriter
    TbTupleWriter().WriteClusters = False
    TbTupleWriter().WriteTracks = True
    TbTupleWriter().StrippedNTuple = True

Tuple structure

Branch Code Meaning
TbTupleWriter/Triggers (Tb/TbEvent/Event/TbTrigger.h)
TgID trigger->index() + evtOffset; trigger index
TgTime trigger->time(); global timestamp
TgHTime trigger->htime(); local timestamp in ns
TgPlane i; plane index
TgCounter trigger->counter(); trigger counter
TkEvt m_evtNo; event number
TbTupleWriter/Hits (TbEvent/Event/TbHit.h)
hID hit->index() + offset; hit index
hCol hit->col(); column number
hRow hit->row(); row number
hTime hit->time(); global timestamp
hHTime hit->htime(); local timestamp in ns
hToT hit->ToT(); time over threshold
hPlane i; plane number
TbTupleWriter/Clusters (TbEvent/Event/TbCluster.h)
clID cluster->index() + offset; cluster index
clGx cluster->x(); global x
clGy cluster->y(); global y
clGz cluster->z(); global z
clLx cluster->xloc(); local x
clLy cluster->yloc(); local y
clTime cluster->time(); global timestamp
clHTime cluster->htime(); local timestamp in ns
clSize cluster->hits().size(); cluster size
clCharge cluster->charge(); total charge
clTracked cluster->associated(); flag for whether the cluster is part of a track
clPlane cluster->plane(); index of the telescope plane
clEvtNo m_evtNo; event number
clNHits h = cluster->hits()[j]; hits (per cluster, max size 200)
hRow h->row(); row number (per cluster)
hCol h->col(); column number (per cluster)
hToT h->ToT(); time over threshold (per cluster)
hHTime h->htime(); local timestamp in ns (per cluster)
TbTupleWriter/Tracks (TbEvent/Event/TbTrack.h)
TkX0 track->firstState().x(); x-position of the first state of the track
TkY0 track->firstState().y(); y-position of the first state of the track
TkTx track->firstState().tx(); dx/dz slope of the first state of the track
TkTy track->firstState().ty(); dy/dz slope of the first state of the track
TkHTime track->htime(); local timestamp in ns
TkTime track->time(); global timestamp
TkID track->index() + evtOffset; track index
TkNCl track->clusters().size(); number of clusters forming this track
TkChi2ndof track->chi2PerNdof(); chi-squared per degree of freedom of the track
TkEvt m_evtNo; event number
TkClId (*it)->index() + offset; indices of the clusters forming this track


  • TbClustering groups pixel hits to clusters, calculates the local and global cluster positions and stores the TbCluster objects on the TES.
  • The cluster time is given by the time of the earliest hit in the cluster.

Option Description Default Comment
TimeWindow Maximum time between the last and first hit in a cluster, in ns 100  
SearchDist Maximum spatial separation (in pixels) between hits that can be considered in a cluster 1  
ClusterErrorMethod Model for assigning errors to x and y positions of a cluster 0 No need to change normally
HitLocation TES location prefix for hit containers LHCb::TbHitLocation::Default No need to change normally
ClusterLocation TES location prefix for cluster containers LHCb::TbClusterLocation::Default No need to change normally


  • TbSimpleTracking is the default pattern recognition algorithm.
Option Description Default Comment
TimeWindow Time window (in ns) between pairs of clusters to form a seed track and for adding clusters to a track 10  
MinPlanes Sets the min. number of clusters required to form a track. 8  
MaxOpeningAngle Max. opening angle θ (rad) for track finding 0.01 Search window is given by Δz × θ where Δz is the separation between two planes
MaxDistance Extra spatial allowance (in mm) in the search window 0  
RecheckTrack Try adding extra clusters on planes without a cluster, once a track candidate has been found and fitted True  
RemoveOutliers Remove outlier clusters (more than 100 μm away from the track intercept) from a track candidate True  
ChargeCutLow Skip clusters with too low ToT 0  
MaxClusterSize Skip too large clusters 10  
MaxClusterWidth Skip too wide cluster during seeding 3  
MaxChi2 Max. χ2/ndof for a track candidate 10  
ClusterLocation TES location prefix for cluster containers LHCb::TbClusterLocation::Default No need to change normally
TrackLocation TES location of tracks LHCb::TbTrackLocation::Default No need to change normally
  • If the alignment is poor, some of the cuts need to be relaxed in order to find tracks.
    • MaxDistance should be increased (to a few mm), RemoveOutliers should be set to False, and MaxChi2 should be set to a large value.
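  • For instance, the relaxed cuts could be set in the steering file along these lines (untested options fragment; the values are illustrative):

```python
from Configurables import TbSimpleTracking

TbSimpleTracking().MaxDistance = 2.0       # mm; widen the spatial search window
TbSimpleTracking().RemoveOutliers = False  # keep outlier clusters on the track
TbSimpleTracking().MaxChi2 = 1.e9          # effectively disable the chi2 cut
```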


  • TbTriggerAssociator links trigger timestamps to tracks, i. e. it adds the TbTrigger objects to the triggers container of the TbTrack and flags the TbTrigger objects as associated.
  • Triggers are associated to a track if ttrack + TimeOffset - TimeWindow < ttrigger < ttrack + TimeOffset + TimeWindow.
  • The trigger signals fed into a SPIDR are typically the coincidence triggers of the scintillators attached to the telescope or a subset of it (e. g. a copy of the scintillator triggers accepted by an external DUT).
  • You need to specify from which plane (i. e. from which SPIDR) the triggers should be taken.
Option Description Default Comment
TimeWindow Time window (in ns) 1000  
TimeOffset Constant offset (in ns) between triggers and tracks 0 To account for cable delays etc.
Plane Telescope plane from which to pick up the triggers 999 To be set explicitly by the user. By default, no triggers are considered.
TrackLocation TES location of tracks LHCb::TbTrackLocation::Default No need to change normally
TriggerLocation TES location prefix of triggers LHCb::TbTriggerLocation::Default No need to change normally
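  • The association condition above amounts to the following predicate (illustrative sketch, not Kepler code; times in ns):

```python
def trigger_matches_track(t_track, t_trigger, time_offset=0.0, time_window=1000.0):
    """True if t_track + TimeOffset - TimeWindow < t_trigger
    < t_track + TimeOffset + TimeWindow."""
    return abs(t_trigger - (t_track + time_offset)) < time_window
```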


  • TbClusterAssociator links clusters on planes which are not included in the pattern recognition to a given telescope track.
  • This is typically used for clusters on the Device Under Test (DUT).
  • Clusters matching a track are stored in the container associatedClusters of TbTrack.
Option Description Default Comment
DUTs List of the planes that contain the clusters to be matched []  
UseHits Use the positions of the individual pixel hits instead of the cluster position when checking the tolerance window True  
ReuseClusters Allow clusters to be associated to multiple tracks or not True  
TimeWindow Time window (in ns) around the track time within which clusters are considered associated to the track. 200  
XWindow Spatial tolerance window in local x and y (in mm) around the track intercept within which clusters are considered associated to the track. 1  
MaxChi2 Skip tracks with too high χ2/ndof 100  
TrackLocation TES location of tracks LHCb::TbTrackLocation::Default No need to change normally
ClusterLocation TES location prefix of clusters LHCb::TbClusterLocation::Default No need to change normally
  • DUTs sets the list of plane indices to be treated as DUTs, e. g.
    from Configurables import TbTracking, TbClusterAssociator
    TbTracking().MaskedPlanes = [4]
    TbClusterAssociator().DUTs = [4]


  • TbVisualiserOutput saves data from one user-specified event, which can be used by the python script in KEPLER/KEPLER_HEAD/ to visualise telescope data (including pixels). Matplotlib is required. The time window visualised is specified on lines 19 and 20 of the script: hCentre is the middle of the time window, and hWindow is the width plotted on either side of hCentre.
Option Description Default Comment
ViewerEvent Index of the event to be saved 7  


  • The algorithm TbAlignment holds a list of tools that are called/run one after the other.
  • The algorithm simply passes tracks to the current tool. When the tool has accumulated sufficient tracks, the alignment procedure implemented in the tool is executed, and the alignment is updated.
  • TbAlignment then passes tracks to the next tool in the sequence, until either the sequence is completed or the end of data is reached.
  • In the latter case, the last tool is run on whatever tracks it has been able to collect.
  • Each tool has its own, independent configuration, and there is an inheritance structure between the tools that defines what can be configured.


  • All alignment tools inherit from this base class. The following options are therefore available in all alignment tools.
Option Description Default Comment
ClearTracks Flag to clear the tracks used by the previous tool. True
Monitoring Flag to produce some monitoring histograms using updated alignment constants. False  
MaskedPlanes List of masked plane indices [] Used, for example, to mask the DuT from the telescope alignment.
MaxChi2 Maximum χ2/ndof for tracks used in any alignment technique. 100 Only meaningful for the track based alignment techniques, but as that's most of them it's been made common. If the alignment isn't doing anything, it's likely the chi2 cut is too harsh.
DOFs List of bools defining which degrees of freedom are allowed to float [] Defaults are different depending on the alignment technique.


  • Iterative alignment technique that "wobbles" one plane after the other (except for a reference plane), refits the tracks and minimizes the sum of the chi-squares of all alignment tracks.
  • By default, the z position is held fixed.
Option Description Default Comment
ReferencePlane Index of the plane that is held fixed 999 Needs to be specified
NIterations Number of iterations for the loop 3  
FitStrategy Sets the Minuit fit strategy 2  


  • Simple technique that lines the detectors up along the beam, using a reference plane and Minuit.
  • By default, only the x and y positions and the rotation around z are floated.
Option Description Default Comment
ReferencePlane Index of the plane to align with respect to 999 Needs to be specified


  • Minuit based alignment that aligns a single telescope plane or DUT.
  • By default, all degrees of freedom are floated.
Option Description Default Comment
DeviceToAlign Index of the plane you want to align 999 Needs to be specified
RefitTracks Refits the tracks when alignment constants are updated. False  
IsDUT Flag whether the device to align is included in the pattern recognition or whether it is a DUT True  
TimeWindow Time window (in ns) for associating a cluster to a track. 200 Useful to tighten gradually the selection from iteration to iteration.
XWindow Spatial window (in mm) for associating a cluster to a track. 1 Useful to tighten gradually the selection from iteration to iteration.
IgnoreEdge Flag to skip clusters close to the edge of the sensor True  
  • If TimeWindow or XWindow are negative, the tool picks up the clusters in the associatedClusters of a track. Otherwise the association of clusters to tracks is done within the tool.


  • The main workhorse. Does a simultaneous fit of both the tracks and the alignment constants.
  • This should generally be used to align the telescope.
Option Description Default Comment
NIterations Number of iterations of the outer loop (where full nonlinearity is taken into account) 5  
ResCut Cut on the residual (in mm) after an iteration. 0.05  
ResCutInit Initial cut on the residuals entering into the computation. 0.6  
NStdDev Reject tracks with chi-squared outside n sigmas 0 Disabled by default


  • A survey technique that lines the residuals up with a reference plane (using histograms from TbClusterPlots, note the parameters of this tool are essentially defined there).


  • Typically, we will want to run several alignment techniques per run. As an example of a typical sequence
    from Configurables import TbAlignmentMinuit1, TbMillepede 
    from Configurables import TbAlignmentDeviceSurvey, TbAlignmentMinuit3
    Kepler().Alignment = True
    Kepler().addAlignment(TbAlignmentMinuit1(ReferencePlane=3, MaxChi2=9999999, MaskedPlanes=[4]))
    Kepler().addAlignment(TbMillepede(MaxChi2=1000, ResCutInit=0.5, ResCut=0.03, MaskedPlanes=[4]))
    Kepler().addAlignment(TbMillepede(DOFs=[1,1,1,1,1,1], Monitoring=True, MaxChi2=15, ResCutInit=0.5, ResCut=0.03, MaskedPlanes=[4]))
    Kepler().addAlignment(TbAlignmentMinuit2(DeviceToAlign=4, TimeWindow=15, XWindow=0.5))
    Kepler().addAlignment(TbAlignmentMinuit2(Monitoring=True, DOFs=[1,1,1,1,1,1], MaxChi2=5, DeviceToAlign=4, TimeWindow=15, XWindow=0.1))
    from Configurables import TbAlignment
    TbAlignment().NTracks = 10000
  • We use TbAlignmentMinuit1 with no effective chi2 requirement to get a rough estimate of the positions (note, this should not be necessary over time).
  • Then we run Millepede twice, once without a chi2 requirement and once with one, additionally allowing the z positions to float.
  • At this point, the telescope should be well aligned (per run, aligning once with Millepede should suffice at this point)
  • We produce a few monitoring histograms and numbers to convince ourselves the telescope alignment is okay.
  • Then we align the DuT (device 4) with first the survey technique and then Minuit2.



  • TbPacketRecycler is an algorithm which 'recycles' unused packets from the end of one event and adds them to the next event.
  • It is currently not run as a default algorithm.
  • To use the packet recycler add the following lines to the configuration (note that patch must be registered as a post-config action so that it runs after the sequences have been set up)
    from Gaudi.Configuration import GaudiSequencer, appendPostConfigAction
    def patch():
      from Configurables import TbPacketRecycler
      seq = GaudiSequencer("Telescope")
      seq.Members += [TbPacketRecycler()]
    appendPostConfigAction(patch)
    from Configurables import TbEventBuilder
    TbEventBuilder().OverlapTime = 100 * 4096
  • This will add the algorithm to the end of the telescope sequence, but before any monitoring/analysis as otherwise hits will be used twice.
  • The overlap length in this example is set to 100 ToA (2500 ns).
  • Note that the overlap length is defined in TbEventBuilder, not in TbPacketRecycler.
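  • Since OverlapTime is given in global (fine-time) units, a desired overlap in ns has to be converted by hand; a small helper (illustrative, not part of Kepler) makes the arithmetic explicit:

```python
def overlap_in_global_units(overlap_ns):
    """Convert an overlap in ns to global time units
    (one fine-time unit = 25 ns / 4096)."""
    return int(round(overlap_ns * 4096.0 / 25.0))
```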


  • TbCalibration is an algorithm which automatically generates some of the configuration files required to process data.
  • Currently it contains three separate algorithms.
    • CheckHotPixels identifies pixels which are very noisy and generates a masking file for future processing.
    • CheckSynchronisation calculates per plane timing offsets by zeroing time correlations between the planes.
    • CheckColumnOffsets uses cluster time differences to check that all super columns on a plane are properly synchronised - this is largely a legacy tool for processing Run1001-Run1094.
  • To run TbCalibration, add something along the following lines to your configuration
    from Configurables import TbCalibration
    TbCalibration().CheckHotPixels = True
    TbCalibration().CheckSynchronisation = True
    TbCalibration().CheckColumnOffsets = True
    TbCalibration().PixelConfigFile = "myPixelconfig.dat"
    TbCalibration().TimingConfigFile = "myTimingConfig.dat"
    Kepler().UserAlgorithms = [ TbCalibration() ]


  • For processing sets of runs, it is far more efficient to use Ganga to organise the running of Kepler.
  • To use the Kepler plugin in Ganga, add
    to RUNTIME_PATH in ~/.gangarc.
  • If you have not used Ganga before, you will need to run ganga at least once to generate this configuration file.
  • To make a new Kepler Job, run Ganga, and at the prompt type:
    job = Job( application = Kepler( optsfile=[] , version='HEAD' ), 
     backend = Local() / LSF() / Dirac() ,
     splitter = TbDataset(Month("July2014/Oct2014/Nov2014/Dec2014",[runs])))
  • Note that data is passed directly to a splitter object. This ensures that each run gets its own subjob, and it also avoids some of the complexities of using LFNs when specifying the data as inputdata.
  • Then, once the job is made, submit it with job.submit().
  • For example, to run on Runs 1030 to 1050 from July on the LSF 8 hour queue ('8nh'):
    job = Job(application = Kepler(optsfile = ['~/cmtuser/KEPLER/KEPLER_HEAD/Tb/Kepler/options/'], version = 'HEAD'),
              backend = LSF(queue = '8nh'),
              splitter = TbDataset('July2014', range(1030, 1051)))
  • Submitting will then create one subjob per run on the batch system, and the ROOT output, stdout and stderr will be returned to your Ganga repository. The output can be verified with TbFileChecker(), which checks whether the application terminated successfully; use it as
    TbFileChecker().check( job )
  • This will change the status of any failed subjobs to failed (currently, they will all register as succeeding when the subjob finishes).
  • The DIRAC backend is also now available. It offers far more resources than LSF and hence can be used to run many jobs in parallel. In order to use DIRAC, a fixed release must be used (i.e. version="v2r1"). However, the user install area can be shipped with the job using
    job.application.user_release_area = "~/cmtuser/"
  • This in effect ships the head version to the batch node from the user's area. It is nevertheless recommended to use a fixed release, as otherwise Kepler needs to be installed each time on the batch node, which is somewhat inefficient.


  • TbDataset will find the data on EOS and pass it to Kepler, so do not specify a dataset in the options file. It also has a parameter, AutoConfig (turned on by default), which sets the alignment/pixel/timing configuration based on what is stored in the conditions folder for the run. This can be switched off, but then an alignment file will need to be specified in the options file: use
    inputdata = TbDataset("July2014", range(1030, 1051), AutoConfig = False)
  • Several outputs are downloaded to your repository by default. These include *.root files, stdout and stderr, but not *.dat files. If you run alignment / configuration jobs, remember to include these in the outputfiles, otherwise they will not be copied from the scratch area and will be lost.

Frequently Asked Questions

  • How do I apply the ToT calibration for a device?
    • Load the per-pixel calibration file (keyword "Charge") or a global one (keyword "GlobalCharge") via the PixelConfigFile option of the Kepler configurable, as described in the "Configuration files" section above.


Instructions for contributing code

  • Before pushing to the repository, make sure the package you edited compiles without warnings.
  • Git4LHCb Twiki page
Topic attachments
Attachment Size Date Who
AlignmentInstructions.pdf 153.5 K 2014-07-31 HeinrichSchindler
AlignmentOverview.pdf 95.0 K 2015-03-03 HeinrichSchindler
SPIDR_file_format_v3.pdf 240.6 K 2017-02-23 HeinrichSchindler
Topic revision: r60 - 2021-11-24 - TimothyDavidEvans
