Future development

Future additions

  1. Shared/asynchronous reader to speed the solution and save total memory; changes required (see the sketch at the end of this item):
    • set up shared grid and reader field memory in parent and child model runs

    • set up control variables in shared memory, and child and parent responses to reader buffer changes

    • spawn asynchronous model runs, based on the parent reader's time steps in the buffer

    • have main create the info needed to build reader fields(?), then have the case runner use this build info, whether shared or not, but not the grid, as it requires grid read methods(?)

    • have main get grid variable dimension info and file variable info, and give this grid_build info to case_runner with or without shared memory, but not read unless…
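
    A minimal sketch of the shared-buffer idea, using Python's
    multiprocessing.shared_memory; names, shapes and dtypes are illustrative,
    not the OceanTracker API:

        import numpy as np
        from multiprocessing import shared_memory

        # parent run: create one shared block per reader field buffer
        shape, dtype = (48, 100_000, 3), np.float32  # (buffer steps, nodes, components)
        shm = shared_memory.SharedMemory(
            create=True, size=int(np.prod(shape)) * np.dtype(dtype).itemsize)
        field_buffer = np.ndarray(shape, dtype=dtype, buffer=shm.buf)

        # child case run: attach by name, no copy made; control variables can
        # be shared the same way, so children see buffer changes by the parent
        child = shared_memory.SharedMemory(name=shm.name)
        child_buffer = np.ndarray(shape, dtype=dtype, buffer=child.buf)

        # when all runs finish: child.close(); shm.close(); shm.unlink()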

  2. Use multiple hindcasts with different grids, eg a wave hindcast and hydro hindcasts on different grids; changes required (see the sketch at the end of this item):
    • attach an interpolator to all fields, acting on its buffer

    • attach a reader to all reader fields

    • the reader, not the field group manager, holds the interpolation set-up at a given time

    • make interpolator class params a reader parameter

    • access the field buffer as a ring buffer, indexed by the global hindcast time step calculated from time by a reader method

    • move the solver to a specified time step, ie not substeps, so different hindcasts can have different time intervals

    • make dry cell index evaluation at the current time step a reader method(?), not part of the field group manager

    • move the share_reader_memory flag from the reader to shared_params
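
    A sketch of the ring-buffer indexing this implies; function and variable
    names are illustrative:

        def global_time_step(t_sec, t0_sec, hindcast_dt):
            # reader method: global hindcast time step for a given model time
            return int((t_sec - t0_sec) // hindcast_dt)

        def buffer_time_step(n_global, buffer_size):
            # ring buffer: where that global step sits in this reader's buffer
            return n_global % buffer_size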

  3. velocity interpolator which tweaks flows to be parallel to faces with an adjacent dry cell or land boundary? applied after the random walk velocity is added?

  4. Fuse loops for field interpolation of particle properties to reduce RAM-CPU memory traffic (see the sketch below):
    • kernel version of interpolate fields

    • fuse velocity interpolation and the Euler step

    • update field-derived particle properties as a group with a kernel interpolator
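
    A toy sketch of the fusion idea with numba; nearest-node lookup stands in
    for the real interpolation stencil, and all names are illustrative:

        import numpy as np
        from numba import njit, prange

        @njit(parallel=True)
        def fused_interp_euler(x, cell_node, u_nodes, dt):
            # velocity lookup and Euler step in one pass over particles, so
            # positions are touched once instead of in two separate loops
            for n in prange(x.shape[0]):
                node = cell_node[n]  # stand-in for the interpolation stencil
                for c in range(x.shape[1]):
                    x[n, c] += u_nodes[node, c] * dt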

  5. make error trapping responses more consistent, eg some errors currently return no info

  6. Fuse Runge-Kutta step loops to reduce RAM-CPU memory traffic (see the sketch below):
    • kernel versions of the BC walk and vertical walk

    • fuse the BC walk and velocity interpolation using kernels
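
    A sketch of fused RK2 substeps built from per-particle kernels; the no-op
    walk and nearest-node lookup stand in for the real BC walk and
    interpolation, and all names are illustrative:

        import numpy as np
        from numba import njit, prange

        @njit
        def interp_velocity(node, u_nodes, out):
            # stand-in kernel: nearest-node velocity lookup
            for c in range(out.size):
                out[c] = u_nodes[node, c]

        @njit
        def bc_walk(x_new, cell):
            # stand-in kernel: a real version walks to the cell holding x_new
            return cell

        @njit(parallel=True)
        def rk2_step(x, cell_node, u_nodes, dt):
            # both substeps per particle in one kernel, so the midpoint is a
            # small per-particle temporary, not a second full particle array
            for n in prange(x.shape[0]):
                u = np.empty(x.shape[1])
                interp_velocity(cell_node[n], u_nodes, u)
                x_mid = x[n, :] + 0.5 * dt * u       # substep 1, midpoint
                cell = bc_walk(x_mid, cell_node[n])  # fused BC walk
                interp_velocity(cell, u_nodes, u)    # substep 2 velocity
                for c in range(x.shape[1]):
                    x[n, c] += dt * u[c]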

  7. option for particle tracking to work natively in lat/lon coordinates for global-scale models
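
    The core change is a metric factor when stepping positions in degrees; a
    minimal sketch, assuming a spherical earth:

        import numpy as np

        R_EARTH = 6_371_000.0  # mean earth radius, m

        def step_lon_lat(lon, lat, u, v, dt):
            # convert east/north velocities (m/s) to degrees per time step;
            # cos(lat) shrinks a degree of longitude away from the equator
            dlat = np.degrees(v * dt / R_EARTH)
            dlon = np.degrees(u * dt / (R_EARTH * np.cos(np.radians(lat))))
            return lon + dlon, lat + dlat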

  8. Read release points/polygons from file, eg shapefiles, CSV
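
    A sketch of what reading these might look like; file names are
    placeholders, and geopandas is one option for shapefiles:

        import numpy as np
        import geopandas as gpd

        # point releases from a CSV with x, y (or lon, lat) columns
        points = np.loadtxt('release_points.csv', delimiter=',', skiprows=1)

        # polygon releases, one polygon per shapefile feature
        polygons = [np.asarray(geom.exterior.coords)
                    for geom in gpd.read_file('release_areas.shp').geometry]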

Possible additions

  1. find a numba container suitable for passing all fields and particle properties as a group by name, to allow access to all of them and reduce the arguments needed
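
    numba's typed Dict is one candidate, though it requires a single value
    type, which is the main constraint to work around; a sketch with
    illustrative names:

        import numpy as np
        from numba import njit, types
        from numba.typed import Dict

        # all 2D float64 particle-property arrays in one container, by name
        props = Dict.empty(key_type=types.unicode_type,
                           value_type=types.float64[:, :])
        props['x'] = np.zeros((1000, 3))
        props['age'] = np.zeros((1000, 1))

        @njit
        def update_age(props, dt):
            # one argument gives access to all properties inside the kernel
            props['age'][:, 0] += dt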

  2. merge the numerical solver and random walk by moving to numerical solution of a stochastic ODE?
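
    Merged this way, one update does both jobs, ie an Euler-Maruyama step of
    the SDE dX = u dt + sqrt(2K) dW; a minimal sketch:

        import numpy as np

        def euler_maruyama_step(x, u, K, dt, rng):
            # advection plus random-walk diffusion in a single update, with
            # the noise term scaled as sqrt(2 K dt) * N(0, 1)
            return x + u * dt + np.sqrt(2.0 * K * dt) * rng.standard_normal(x.shape)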

  3. RK45 solver to allow adaptive time stepping to improve accuracy for those particles where flows are rapidly varying in time or space.

Minor features/fixes

  1. Make grid variables which don't vary in time shared memory

  2. automatic particle buffer size estimate is a factor of 2 too large

  3. add a class crumb-trail method, to be displayed in errors and warnings

  4. do full parameter set-up, release group params and release times in main?

  5. use a date class consistently throughout the code, ie drop independent use of time in seconds and of dates in netCDF files

  6. move 'kill old particles' to a status modifier

  7. add an update timer to all classes

  8. convert to new status modifier classes:
    • cull particles

    • tidal stranding

    • kill old particles

  9. support short class_name values given by the user, ie just AgeDecay, not oceantracker.particle_properties.age_decay.AgeDecay
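
    One possible resolver, sketched on the assumption that the package can be
    walked for classes at start-up (a registry built once would avoid
    repeated imports):

        import importlib, inspect, pkgutil

        def resolve_class(short_name, package='oceantracker'):
            # hypothetical helper: find a class matching the short class_name
            pkg = importlib.import_module(package)
            for info in pkgutil.walk_packages(pkg.__path__, package + '.'):
                obj = getattr(importlib.import_module(info.name), short_name, None)
                if inspect.isclass(obj):
                    return obj
            raise ValueError('class ' + short_name + ' not found in ' + package)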

  10. ensure all classes are updated by an .update() method

  11. make stats/heat maps work from given data, as in other plotting

  12. the field group manager updates dry cells; move and/or generalize this to cases where only tide and depth are available

  13. check whether velocities as float32 are significantly faster?

  14. move to looping over reader global time; remove passing nb, the buffer time step, to methods

  15. see if numba @guvectorize with threading can increase overall speed of particle operations
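
    A sketch of the pattern; the parallel target threads the broadcast over
    the particle dimension automatically, and names are illustrative:

        from numba import guvectorize, float64

        @guvectorize([(float64[:], float64[:], float64, float64[:])],
                     '(d),(d),()->(d)', target='parallel')
        def euler_step(x, u, dt, out):
            # called once per particle row; threading is handled by numba
            for c in range(x.shape[0]):
                out[c] = x[c] + u[c] * dt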

  16. move velocity reading in all readers to its own method to give more flexibility; make velocity a core particle property with write off by default

  17. tidy up the case info file, eg have full class params, with the full params merged during setup in main.py into core and class lists

  18. in main, do a full requirements check, with errors communicated, before spinning up cases

  19. to guide users' time step choice when using terminal velocities, add a calculation of the vertical Courant number in the top and bottom cells (ie likely the smallest cells); also check vertical random walk size and terminal velocity displacement?
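
    The check is a one-liner; a sketch:

        def vertical_courant_number(w_terminal, dz_cell, dt):
            # C = |w| * dt / dz; values near or above 1 mean a particle can
            # cross a whole (likely thin) top or bottom cell in one time step
            return abs(w_terminal) * dt / dz_cell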

  20. move residence time to an auto-generated class, based on the params of the polygon release's polygon, with multiple release groups?

  21. add a show_grid option to the reader, to see the grid and use ginput to pick release locations