EPICS V4 areaDetector Integration (Dave Hickin, Diamond)
EPICS V4 areaDetector Integration. Dave Hickin, Diamond Light Source. areaDetector WG Meeting, Diamond, 27/10/2014
Overview • EPICS V4 basics • ADPvAccess plugin and driver • Embedded ADPvAccess driver • V4 areaDetector
EPICS V4 Basics • V4 adds structured data to EPICS • V4 modules: • pvData – structured data • pvAccess – network protocol • pvaSrv – pvAccess server on a V3 IOC • pvDatabase – V4 records/database and pvAccess • pvaSrv and pvDatabase are C++-only
pvData basics • Basic types: • scalar, structure, tagged union, variant union • arrays of all of these • Scalar types: • booleans • signed and unsigned integers (8-, 16-, 32-, 64-bit) • floats and doubles • strings
pvDataCPP basics • Extensive use of Boost shared pointers • Array types use a reference-counted shared vector • Copy-on-write semantics • Arrays of complex types use shared vectors of shared pointers • Unions hold a shared pointer to the top-level field
pvDataCPP basics • Data can be shared, shallow-copied, or deep-copied • Shallow copy is easy and avoids copying large arrays • Care is needed with unions and arrays of complex data types • Const is not implemented, but could be added • Sharing is currently problematic • References can be held onto
pvAccess basics • Create/destroy pvAccess channels • Channel services: • introspection • get, put, putGet • monitor • process • RPC • channel array
NTNDArray • A V4 structure encoding an NDArray • Array data uses a union of scalar arrays • Attributes use variant unions (anys) • Dimensions and attributes use structure arrays • Timestamps • uniqueId • Data sizes • Codec for compression encoding • A normative type (replaces NTImage)
Issues to think about • Monitors do a shallow copy • Structures could be shared (const would help) or shallow-copied • References can be held onto • How to do free-lists • Monitors have an overflow bit but no dropped-frame count
ADPvAccess Plugin/Driver • V4 server-side plugin and client-side driver • The plugin puts NDArray data into a V4 structure and publishes it via pvAccess • The client monitors and converts the V4 structures back into NDArrays • Allows local and remote transfer of NDArrays • Compression (through Blosc/LZ4)
Performance on 10 Gig Ethernet • Uncompressed: • 120-122 frames per second (97-99+% of bandwidth) • Compressed: • images reduced to 36% of original size with LZ4 and 38% with Blosc • Single-threaded compression reduces performance • Blosc-based (multithreaded) compression increases the rate • Blosc + LZ4 is best: up to ~230 frames per second (equivalent to 190% of link bandwidth)
Current and near-future work • Complete the move from NTImage to NTNDArray • Move to GitHub module ADPvAccess • Package and release • Integration with other EPICS developments, especially CS-Studio • Windows build • Deploy on a beamline (I12 is a candidate)
Embedded ADPvAccess driver • Local plugins (in the same process) connect to the NDArray driver as usual • If no NDArray driver is present, the first plugin creates a V4 monitor client driver; the driver monitors the "PV" and the plugin connects to it • Subsequent plugins connect to this local driver • Avoids having to explicitly create the client driver • The monitor stops when all plugins are disabled
V4 areaDetector drivers • Put frame data directly into an NTNDArray • V4 client plugins consume the NTNDArrays • Can publish a frame easily as a "PV" • Avoids the conversion from NDArray and the V4 server plugin • Straightforward to rewrite drivers • A V4 SimDetector has been prototyped • Existing NDArray plugins can be run through the ADPvAccess client
V4 client plugins • areaDetector plugins can be rewritten to process NTNDArrays (straightforward) • New V4 plugins can consume NTNDArrays • V4 client plugins can work with old drivers through the ADPvAccess server plugin • Various options for passing frames to plugins
Option 1 - Clients use pva monitors • Clients create a monitor on the NTNDArray • Monitor queues can be used to queue frames • Only the required fields need be monitored • Local (same-process) monitors perform a shallow copy • Multiple remote monitors transfer the data multiple times
Option 2 - Plugins use a single monitor • The first plugin registers with the "PV" and creates the monitor; a software layer/object manages this • Each plugin registers with the monitor • Each plugin gets a callback and is passed an NTNDArray as a (const) shared or shallow copy • Subsequent plugins register, but no new monitor is created • Each plugin can have its own queue and processing thread
Option 3 - Plugins use a single monitor remotely, multiple locally • The first plugin creates the monitor; a software layer/object manages this, republishes a local-only PV, and a client monitors it • Additional plugins monitor the local PV • Plugins can use monitor queues
Option 4 - PVManager • Handles queuing/buffering/aggregation and background data processing • Java only; no C++ implementation yet • Abstract types (VTypes); one for NTNDArray • Used in CSS • Single channel/monitor per "PV" • Could it be used for plugins? • Limited VTypes