Limit resources (memory) to be consumed by filters (ppt in particular)
My laptop became unresponsive while recollindex was running at boot, and for a moment before the complete freeze I saw in top that the python3 process running ppt-dump.py was using 60 GB of memory...
I have made the broken ppt available at http://www.oneukrainian.com/tmp/broken.ppt (note -- I am not its owner/author, so please do not redistribute it; I will eventually remove it from there).
If I limit the process to ~5 GB of virtual memory and run it, it crashes:
❯ ulimit -Sv 5000000 # set a ~5 GB soft virtual memory limit (value in KiB)
❯ /usr/bin/python3 /usr/share/recoll/filters/ppt-dump.py --no-struct-output --dump-text broken.ppt
Traceback (most recent call last):
  File "/usr/share/recoll/filters/ppt-dump.py", line 122, in main
    if not dumper.dump():
           ^^^^^^^^^^^^^
  File "/usr/share/recoll/filters/ppt-dump.py", line 46, in dump
    strm.printDirectory()
  File "/usr/share/recoll/filters/msodump.zip/msodumper/pptstream.py", line 56, in printDirectory
    obj = self.__getDirectoryObj()
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/recoll/filters/msodump.zip/msodumper/pptstream.py", line 48, in __getDirectoryObj
    obj = self.header.getDirectory()
          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/recoll/filters/msodump.zip/msodumper/ole.py", line 274, in getDirectory
    chain = self.getSAT().getSectorIDChain(dirID)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/recoll/filters/msodump.zip/msodumper/ole.py", line 434, in getSectorIDChain
    chain.append(nextID)
MemoryError
Error: Could not parse
/usr/bin/python3 /usr/share/recoll/filters/ppt-dump.py --no-struct-output 57.83s user 1.60s system 99% cpu 59.447 total
So, besides making ppt-dump itself more robust one way or another, I think a general solution could be to monitor the started filter processes (or limit them via ulimit or a similar mechanism) and kill any filter that exceeds a reasonable default, or configurable, memory consumption limit.
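For illustration only, here is a minimal Python sketch of the kind of limiting I have in mind: spawn the filter with an RLIMIT_AS cap applied in the child, so a runaway parser hits MemoryError instead of taking the whole machine down. The wrapper names and the limit value are made up for this example and are not part of recoll; this is just to show the mechanism.

import resource
import subprocess

# Hypothetical cap; in a real setup this would come from the recoll configuration.
FILTER_MEM_LIMIT = 2 * 1024 ** 3  # 2 GiB

def _limit_memory():
    # Runs in the child between fork() and exec(): cap the virtual address
    # space, much like `ulimit -v` does for a shell.
    resource.setrlimit(resource.RLIMIT_AS, (FILTER_MEM_LIMIT, FILTER_MEM_LIMIT))

def run_filter(cmd, timeout=300):
    # preexec_fn applies the limit only to the filter process, not the indexer.
    return subprocess.run(cmd, capture_output=True,
                          preexec_fn=_limit_memory, timeout=timeout)

# Example: the failing invocation from above, capped at 2 GiB:
# run_filter(["/usr/bin/python3", "/usr/share/recoll/filters/ppt-dump.py",
#             "--no-struct-output", "--dump-text", "broken.ppt"])

A timeout is included in the sketch because the same runaway parsing can also burn CPU for a long time before failing, as the ~60 seconds of CPU time above shows.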