HARP CDR configuration

Consult the harp-support e-mail archive on the web.

HARP document describing the HARP dataflow
HARP document describing their proposed CDR requirements (these requirements are not necessarily agreed upon).

Run details

Start of data taking : 19 March 2001
Beam : PS
Run period : 200 days over 18 months
Run area : East Hall, T9
Foreseen interruptions :

Data

Average rate : 2-3 MB/sec
Peak rate : 5-6 MB/sec
Maximum expected total volume : 50 TB

Comments : Peak rates will only be attained overnight and at weekends during the first months of data taking, prior to the SPS start-up. However, HARP may not fully exploit this opportunity: data taken during March and April will be mainly cosmics, so the data rate will probably not peak during this period.
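
As a rough cross-check of the quoted maximum volume, the sketch below (Python; assuming the average rate of 2.5 MB/sec, the middle of the 2-3 MB/sec range, is sustained over the full 200 running days, which is an optimistic simplification) integrates the rate over the run period:

    # Rough estimate of the integrated raw data volume from the quoted rates.
    # Assumption: 2.5 MB/sec sustained over 200 running days.
    avg_rate_mb_s = 2.5
    running_days = 200
    total_mb = avg_rate_mb_s * running_days * 24 * 3600
    total_tb = total_mb / 1e6                        # 10^6 MB per TB
    print(f"Estimated volume: {total_tb:.0f} TB")    # ~43 TB, consistent with the 50 TB maximum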


Online cluster

Machine name   Function                                           CDR installed   Location
pcoharp01      AFS installed                                      No
pcoharp07      File server                                        No
pcoharp09      DCS server for slow controls                       No
pcoharp11      Online lockserver, boot, FDDB and journal files    No
lockharp       Offline lockserver, boot, FDDB and journal files   No
pcoharp10      Raw data server 1                                  Yes             /daqdb/cdr
pcoharp12      Raw data server 2                                  Yes             /daqdb/cdr

Cluster home directory: /home_main/harpcdr
/home_main is an NFS-mounted directory physically residing on the file server, pcoharp07; it is mounted on all cluster machines.

CDR home directory: /daqdb/cdr on all data servers.

Software repository: /daq_repository : an NFS-mounted directory physically residing on pcoharp07. The cdr directory under /daq_repository will point to the CDR installation under /daqdb.
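
A minimal sketch of a check that each cluster machine sees the expected directories (Python; the hostnames and paths are taken from the lists above, the check itself is illustrative):

    import os

    # Directories every cluster machine should see (NFS-mounted from pcoharp07).
    expected = ["/home_main/harpcdr", "/daq_repository"]

    # The CDR home directory exists only on the raw data servers.
    if os.uname().nodename.split(".")[0] in ("pcoharp10", "pcoharp12"):
        expected.append("/daqdb/cdr")

    for path in expected:
        print(f"{path:25s} {'OK' if os.path.isdir(path) else 'MISSING'}")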

Filesystems and mapping to CASTOR

Machine          Local filepath          CASTOR filepath                             Fileclass
pcoharp10        /harpData_0/FedName     /castor/cern.ch/harp/FedName/Raw1/data1     harp1
                 /harpData_1/FedName     /castor/cern.ch/harp/FedName/Raw1/data2     harp2
                 /harpData_2/FedName     /castor/cern.ch/harp/FedName/Raw1/data3     harp3
                 /harpData_3/FedName     /castor/cern.ch/harp/FedName/Toplevel       harpperm
pcoharp12        /harpData_0/FedName     /castor/cern.ch/harp/FedName/Raw2/data1     harp4
                 /harpData_1/FedName     /castor/cern.ch/harp/FedName/Raw2/data2     harp5
                 /harpData_2/FedName     /castor/cern.ch/harp/FedName/Raw2/data3     harp6
                 /harpData_3/FedName     /castor/cern.ch/harp/FedName/Runs           harpruns
DCS server       /harpDcsData            /castor/cern.ch/harp/FedName/DCS            harpperm
Offline server   Not yet known           /castor/cern.ch/harp/Shared                 harpperm
                 Not yet known           /castor/cern.ch/harp/Shared/Conditions      harpperm

All bulk raw data resides on disks /harpData_0, /harpData_1, and /harpData_2. All bulk raw data from pcoharp10 is mapped to Raw1 and all bulk raw data from pcoharp12 is mapped to Raw2.

Each bulk raw data directory is mapped directly to a unique CASTOR filepath. A class of service is defined for each directory, which serves to copy the data to different tape pools. These raw data files will be transferred directly to CASTOR by the CDR. Files in these directories will be handled by the first of the offline servers. The directories are owned by harpcdr, with group read-only access.
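
A minimal sketch of how the mapping for one data server could be expressed on the CDR side (Python; the dictionary layout and the helper function are illustrative assumptions, only the paths and fileclasses come from the table above):

    # Local directory -> (CASTOR directory, fileclass) for pcoharp10, as in the table above.
    PCOHARP10_MAP = {
        "/harpData_0/FedName": ("/castor/cern.ch/harp/FedName/Raw1/data1", "harp1"),
        "/harpData_1/FedName": ("/castor/cern.ch/harp/FedName/Raw1/data2", "harp2"),
        "/harpData_2/FedName": ("/castor/cern.ch/harp/FedName/Raw1/data3", "harp3"),
        "/harpData_3/FedName": ("/castor/cern.ch/harp/FedName/Toplevel",   "harpperm"),
    }

    def castor_target(local_path):
        """Return the CASTOR path and fileclass a local raw data file maps to."""
        for local_dir, (castor_dir, fileclass) in PCOHARP10_MAP.items():
            if local_path.startswith(local_dir + "/"):
                return castor_dir + "/" + local_path.split("/")[-1], fileclass
        raise ValueError(f"no CASTOR mapping for {local_path}")

    print(castor_target("/harpData_0/FedName/run00123.raw"))   # hypothetical file name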


Offline cluster

The offline cluster consists of three IDE disk servers, each with a 500 GB disk pool. One is dedicated to the CDR and the remaining two to reconstruction and analysis; one of these may also be used for CDR backup. The stager used by all three machines is stageharp -> harp001d.

Machine name   Function                      Stager running   Objectivity   AMS
harp001d       CDR server                    Yes              No            No
harp002d       Analysis and reconstruction   No               Yes           Yes
harp003d       Analysis and reconstruction   No               Yes           Yes

 


CASTOR setup

Tapes : 9940, 60 GB capacity, 2 CHF/GB
All data is single copy.
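
A rough estimate of the tape count and media cost these numbers imply (Python; assuming the full 50 TB maximum volume is written exactly once):

    # Tape and cost estimate for the 50 TB maximum volume, single copy.
    total_gb = 50_000        # 50 TB in GB
    tape_capacity_gb = 60    # 9940 tape capacity
    cost_per_gb_chf = 2      # quoted storage cost

    tapes = -(-total_gb // tape_capacity_gb)             # ceiling division
    print(f"Tapes needed: ~{tapes}")                     # ~834 tapes
    print(f"Cost: ~{total_gb * cost_per_gb_chf} CHF")    # ~100000 CHF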

CASTOR fileclasses for HARP

CLASS_ID         15         11      16      17      18      19      20      21
CLASS_NAME       harpperm   harp1   harp2   harp3   harp4   harp5   harp6   harpruns
CLASS_UID        -          -       -       -       -       -       -       -
CLASS_GID        uh         uh      uh      uh      uh      uh      uh      uh
FLAGS            0x0        0x0     0x0     0x0     0x0     0x0     0x0     0x0
MAXDRIVES        2          1       1       1       1       1       1       1
MIN FILESIZE     0          0       0       0       0       0       0       0
MAX FILESIZE     0          0       0       0       0       0       0       0
MAX SEGSIZE      0          0       0       0       0       0       0       0
MIGR INTERVAL    1800       1800    1800    1800    1800    1800    1800    1800
MIN TIME         0          0       0       0       0       0       0       0
NBCOPIES         1          1       1       1       1       1       1       1
RETENP_ON_DISK   INFINITE   ALAP    ALAP    ALAP    ALAP    ALAP    ALAP    ALAP
TAPE POOLS       harp7      harp1   harp2   harp3   harp4   harp5   harp6   harp7

The total amount of data in the HARP permanent pool should not exceed 50 GB, i.e. 10% of the disk capacity of the server. This may change in the future when more experience has been gained.


Databases

Database: Objectivity

There will be three independent federated databases for the HARP online data taking. Only one federation will be written to at any one time, i.e. there are no simultaneously active federations. This condition means that only one instance of the CDR software needs to be installed on any online DAQ. When the online federation is changed, the CDR will be stopped and a new configuration file loaded corresponding to the new federation. Dates for changing the online federation should be known in advance.

Federated databases

Federation name     Purpose
HarpTestFD          Data from the cosmic data taking and the technical run
HarpStandaloneFD    Data from the standalone runs, which will take place during the scheduled PS shutdowns
HarpPhysicsFD       Data from the physics runs
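
A minimal sketch of how the single-active-federation rule could be reflected in the CDR configuration (Python; the layout is an assumption; the boot file location for HarpTestFD is taken from the lockserver error quoted below, the other two paths are extrapolated from the same pattern and are hypothetical):

    # One boot file per federation; the CDR uses exactly one of them at a time.
    BOOT_FILES = {
        "HarpTestFD":       "pcoharp11.cern.ch::/daqdb/bootfiles/HarpTestFD.BOOT",
        "HarpStandaloneFD": "pcoharp11.cern.ch::/daqdb/bootfiles/HarpStandaloneFD.BOOT",  # assumed path
        "HarpPhysicsFD":    "pcoharp11.cern.ch::/daqdb/bootfiles/HarpPhysicsFD.BOOT",     # assumed path
    }

    ACTIVE_FEDERATION = "HarpTestFD"   # only one federation is written to at any one time

    def cdr_boot_file():
        """Return the boot file (OO_FD_BOOT) the CDR should use for the active federation."""
        return BOOT_FILES[ACTIVE_FEDERATION]

    print(cdr_boot_file())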

File sizes (approximate)

The calibration and metadata correspond to approximately 10% of the bulk raw data, i.e. up to about 5 TB for the 50 TB maximum raw data volume.


CDR errors and solutions

Objectivity lockserver error:

** Error #190102: ooSession::Init() could not open: pcoharp11.cern.ch::/daqdb/bootfiles/HarpTestFD.BOOT is OO_FD_BOOT set correctly ?
** System Error #3099: ooHandle(ooFDObj)::open(): Lock Manager cannot connect to the Lock Server.
** System Error #2502: ooHandle(ooFDObj)::open(): Object Manager is unable to start a new transaction
** Error #2782: DC: Invalid Federated Database ID.
** System Error #3099: ooHandle(ooFDObj)::open(): Lock Manager cannot connect to the Lock Server.

The lockserver on the online lockserver machine (pcoharp11) has died and must be restarted:
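
A minimal sketch that detects this condition before the lockserver is restarted (Python; the TCP port number is a placeholder assumption and must be replaced by the port the lockserver actually listens on):

    import socket

    LOCKSERVER_HOST = "pcoharp11.cern.ch"
    LOCKSERVER_PORT = 6780   # placeholder: use the real lockserver port

    def lockserver_alive(host=LOCKSERVER_HOST, port=LOCKSERVER_PORT, timeout=5):
        """Return True if a TCP connection to the lockserver can be opened."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if not lockserver_alive():
        print("Lockserver on pcoharp11 not reachable - restart it before resuming the CDR")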