Random error when using chunks in pipeline
A random error appears when using chunks in the pipeline. The issue is reproducible with both custom features and classification using scikit-learn.
This issue seems to concern only tests using fake data, as I can't reproduce it with S2 data (whole or cropped image).
Here is the error:
RuntimeError: Exception thrown in otbApplication Application_GetVectorImageAsFloatNumpyArray_: ../Modules/Core/Common/src/itkMultiThreader.cxx:399: itk::ERROR: MultiThreader(0x55ed0e2992f0): Exception occurred during SingleMethodExecute std::bad_alloc
This error seems random, depending on the number of chunks used. No threshold has been identified: the chain can finish with 20 chunks but fail with 19 or 22.
The fake data used for tests draws the word "iota2" in an 86×16 pixel image, which leads to a lot of no-data in the resulting image. Replacing this image with a homogeneous one (using numpy's ones function) seems to solve the problem, as sketched below.
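As a minimal sketch of that workaround (the file name and the use of GDAL for writing are my assumptions, not the actual test code):

```python
import numpy as np
from osgeo import gdal

# Dimensions from the report: the fake data is an 86 x 16 pixel image.
width, height = 86, 16

# Homogeneous image: every pixel holds valid data, unlike the "iota2"
# drawing, which leaves most pixels as no-data.
fake_image = np.ones((height, width), dtype=np.float32)

# Write it as a single-band GeoTIFF (hypothetical file name).
driver = gdal.GetDriverByName("GTiff")
dataset = driver.Create("fake_data.tif", width, height, 1, gdal.GDT_Float32)
dataset.GetRasterBand(1).WriteArray(fake_image)
dataset = None  # close the dataset to flush it to disk
```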
Maybe there is an issue in the OTB pipeline when we extract a ROI that contains only no-data.
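If that hypothesis is right, a repro independent of iota2 might look like the sketch below. The input path and ROI offsets are placeholders; ExtractROI is a standard OTB application, and GetVectorImageAsNumpyArray is (as far as I know) the Python call behind the Application_GetVectorImageAsFloatNumpyArray_ symbol in the error above.

```python
import otbApplication

# Hypothetical repro: extract a chunk that falls entirely in a
# no-data region, then read it back as a numpy array, i.e. the call
# that raises std::bad_alloc in the report.
app = otbApplication.Registry.CreateApplication("ExtractROI")
app.SetParameterString("in", "fake_data.tif")  # placeholder input
app.SetParameterInt("startx", 0)  # placeholder offsets: pick a chunk
app.SetParameterInt("starty", 0)  # known to contain only no-data
app.SetParameterInt("sizex", 8)
app.SetParameterInt("sizey", 8)
app.Execute()  # run in memory, without writing the output to disk

chunk = app.GetVectorImageAsNumpyArray("out")
print(chunk.shape)
```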
Currently, most running tests that use chunks are reduced to the case where number_of_chunks=1.