Commit 204f1e31 authored by setop

minimal product
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
FROM debian:buster-slim
LABEL maintainer="setop@zoocoop.com"
RUN apt-get update && \
apt-get install --assume-yes --no-install-recommends python3 python3-pip python3-setuptools python3-lxml python3-scrapy ftp && \
rm -rf \
/var/cache/* /var/lib/apt/lists/* /tmp/* /var/tmp/* /var/log/* /usr/share/doc/* /var/lib/dpkg/info/* \
/usr/lib/python3/dist-packages/twisted/test /usr/lib/python3/dist-packages/twisted/*/test
RUN pip3 install readability-lxml && \
rm -rf \
/root/.cache/* /usr/share/python-wheels/*
ENV INSTALL_PATH /scrapy-lobsters
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY . .
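# FTP_FILE and LOG_FILE are expected at run time, e.g. passed with `docker run --env` (see README).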
CMD scrapy crawl -L INFO --logfile=$LOG_FILE -t xml -o $FTP_FILE feed
# what
Link aggregation platforms like [Lobsters](https://lobste.rs/) are a convenient way to find valuable content on a particular topic of interest, such as computer science in the case of Lobsters.
Even better, they provide news feeds that can be added to your preferred feed reader, so you get notified of new content and can read it offline.
The only annoyance, to me, is that these platforms do not inline the linked articles in their feeds, so I have to go online to read them.
This somewhat ruins the offline reading experience.
This tool reads the platform feed, loads all linked articles and rebuilds the feed with the articles embedded, so you can enjoy reading them while commuting.
# build and deploy
## locally
### dependencies
* python3
* python scrapy
* python readability-lxml
* an ftp client (like netkit-ftp on Debian)
You also need access to a web server reachable via FTP.
### get application
git clone this repo, then cd into it.
### run
To build the feed file, run:
```
export FTP_FILE="lobsters.atom"
export LOG_FILE="logs.txt"
rm -f $FTP_FILE $LOG_FILE
scrapy crawl -L INFO --logfile=$LOG_FILE -t xml -o $FTP_FILE feed
```
Then customize the FTP properties and run:
```
export FTP_SERVER="ftp"
export FTP_USER="user"
export FTP_PASS="pass"
export FTP_DIR="path/to"
./ftpupload.sh
```
This could be put in a crontab.
## using docker
Here is a basic way to deploy this tool with Docker. Your infrastructure may be more elaborate than what is proposed here.
### build
git clone this repo, then cd into it.
`rm -rf .git*`
`docker build -t scrapy-lobsters .`
`docker save scrapy-lobsters | gzip -9 > scrapy-lobsters.tgz`
Move the image to the target environment.
### run
`gzip -cd scrapy-lobsters.tgz | docker load`
`useradd -b /var scrapy`
To build the feed file, run:
```
docker run \
--rm \
-u $(id -u scrapy):$(id -g scrapy) \
--env FTP_FILE="/var/scrapy/lobsters.atom" \
--env LOG_FILE="/var/scrapy/logs.txt" \
--volume /var/scrapy:/scrapy-lobsters/.scrapy \
--volume /var/scrapy:/var/scrapy \
scrapy-lobsters
```
Feed file and logs are produced on the host in `/var/scrapy`.
Then customize the FTP properties and run:
```
docker run --rm -u $(id -u scrapy):$(id -g scrapy) \
--env FTP_SERVER="ftp" \
--env FTP_USER="user" \
--env FTP_PASS="pass" \
--env FTP_DIR="path/to" \
--env FTP_FILE="lobsters.atom" \
--volume /var/scrapy:/var/scrapy \
--workdir "/var/scrapy" \
scrapy-lobsters /scrapy-lobsters/ftpupload.sh
```
This could be put in a crontab.
# TODO
* core enhancements
  * handle the pubDate element in the source RSS to populate the updated element in the destination Atom
  * keep more entries than in the original feed
  * cache readability output to save processing power (see the sketch after this list):
    * use an sqlite db to store: id, date, link, title, content
    * load all, order by date+id DESC, limit 40
* content
  * handle non-HTML formats like PDF
  * rewrite relative img src attributes
* generalization
  * handle feeds from other link aggregation websites, like journalduhacker.net
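
A rough sketch of what the proposed readability cache could look like, following the columns listed in the TODO above (the function and file names are illustrative, nothing like this exists in the code yet):
```
import sqlite3

def open_cache(path="readability-cache.sqlite"):
    # columns from the TODO: id, date, link, title, content
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS entry (
        id TEXT PRIMARY KEY, date TEXT, link TEXT, title TEXT, content TEXT)""")
    return db

def put(db, entry):
    # entry is a dict with keys id, date, link, title, content
    db.execute("INSERT OR REPLACE INTO entry VALUES (:id, :date, :link, :title, :content)", entry)
    db.commit()

def latest(db, limit=40):
    # "load all, order by date+id DESC, limit 40"
    return db.execute(
        "SELECT id, date, link, title, content FROM entry"
        " ORDER BY date DESC, id DESC LIMIT ?", (limit,)).fetchall()
```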
#!/bin/bash -e
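# Uploads $FTP_FILE to $FTP_DIR on $FTP_SERVER.
# Expects FTP_SERVER, FTP_USER, FTP_PASS, FTP_DIR and FTP_FILE to be set in the environment (see README).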
ftp -i -n -p $FTP_SERVER << EEE
quote USER $FTP_USER
quote PASS $FTP_PASS
binary
cd $FTP_DIR
mput $FTP_FILE
quit
EEE
from scrapy.exporters import XmlItemExporter
import datetime
import six
from scrapy.utils.python import is_listlike
class AttrXmlItemExporter(XmlItemExporter):
    # from : https://stackoverflow.com/questions/46425930/adding-attributes-to-exported-xml-in-scrapy
    def _export_xml_field(self, name, serialized_value, depth):
        # Custom code:
        attrs = {}
        if isinstance(serialized_value, dict):
            serialized_value = serialized_value.copy()
            attr_keys = [k for k in serialized_value.keys() if k.startswith('_')]
            attrs = {k[1:]: serialized_value.pop(k) for k in attr_keys}
        # Default implementation (except for startElement call)
        self._beautify_indent(depth=depth)
        self.xg.startElement(name, attrs)
        if hasattr(serialized_value, 'items'):
            self._beautify_newline()
            for subname, value in serialized_value.items():
                self._export_xml_field(subname, value, depth=depth + 1)
            self._beautify_indent(depth=depth)
        elif is_listlike(serialized_value):
            self._beautify_newline()
            for value in serialized_value:
                self._export_xml_field('value', value, depth=depth + 1)
            self._beautify_indent(depth=depth)
        elif isinstance(serialized_value, six.text_type):
            self._xg_characters(serialized_value)
        else:
            self._xg_characters(str(serialized_value))
        self.xg.endElement(name)
        self._beautify_newline()
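# Example of the convention implemented above: an item field whose value is a
# dict may carry pseudo-keys starting with "_"; they are popped into XML
# attributes instead of child elements, e.g. (hypothetical URL)
#
#     item['link'] = {"_href": "https://example.org/article"}
#
# is exported as <link href="https://example.org/article"></link>.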
def rfc3339_serialize(value: datetime.datetime):
    return value.isoformat() + "Z"
class AtomExporter(AttrXmlItemExporter):
    def __init__(self, file, **kwargs):
        super().__init__(file, **kwargs)
        self.root_element = "feed"
        self.item_element = "entry"

    def start_exporting(self):
        self.xg.startDocument()
        self.xg.startElement(self.root_element, {"xmlns": "http://www.w3.org/2005/Atom"})
        self._beautify_newline(new_item=True)
        super()._export_xml_field("id", "tag:zoocoop.com,2018:lobsters", 1)  # <id>
        super()._export_xml_field("title", "Lobsters", 1)  # <title>
        super()._export_xml_field("updated", rfc3339_serialize(datetime.datetime.utcnow()), 1)  # <updated>
        super()._export_xml_field("author", {"name": "spider"}, 1)  # <author><name>
import scrapy
from lobsters.exporters import rfc3339_serialize
__all__ = ['EntryItem']
class EntryItem(scrapy.Item):
"""
see https://validator.w3.org/feed/docs/rfc4287.html#element.entry
"""
id = scrapy.Field()
title = scrapy.Field()
link = scrapy.Field()
updated = scrapy.Field(serializer = rfc3339_serialize)
content = scrapy.Field()
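# For reference, an EntryItem is rendered by AtomExporter roughly as:
#
#   <entry>
#     <id>...</id>
#     <title>...</title>
#     <link href="..."></link>
#     <updated>...Z</updated>
#     <content>...</content>
#   </entry>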
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
from readability.readability import Document
import datetime
class LobstersPipeline(object):
    def process_item(self, item, spider):
        item["updated"] = datetime.datetime.utcnow()
        doc = Document(item["content"])
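        # summary(html_partial=True) returns the extracted article markup without
        # wrapping <html>/<body> tags; this fragment becomes the Atom entry content.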
item["content"] = doc.summary(html_partial=True)
return item
def test():
item = { 'content' : open("./index.html", "r").read(), }
print(LobstersPipeline().process_item(item, None))
if __name__ == "__main__":
test()
# -*- coding: utf-8 -*-
# Scrapy settings for lobsters project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'lobsters'
SPIDER_MODULES = ['lobsters.spiders']
NEWSPIDER_MODULE = 'lobsters.spiders'
# some privacy
USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0'
ROBOTSTXT_OBEY = False
REFERER_ENABLED = False
COOKIES_ENABLED = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable Telnet Console (enabled by default)
TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'lobsters.middlewares.LobstersSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'lobsters.middlewares.LobstersDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'lobsters.pipelines.LobstersPipeline': 300,
}
FEED_EXPORTERS = {
'xml': 'lobsters.exporters.AtomExporter',
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
HTTPCACHE_POLICY = 'lobsters.spiders.feed.AllButRootPolicy'
HTTPCACHE_IGNORE_HTTP_CODES = [500, 503]
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
# This package will contain the spiders of your Scrapy project
#
# Please refer to the documentation for information on how to create and manage
# your spiders.
# -*- coding: utf-8 -*-
import logging
from urllib.parse import urlparse
from scrapy.spiders import XMLFeedSpider
from scrapy import Request
from scrapy.extensions.httpcache import DummyPolicy
from lobsters.items import EntryItem
__all__ = ['AllButRootPolicy']
DOMAIN = 'lobste.rs'
FEED='https://'+DOMAIN+'/rss'
logger = logging.getLogger(__name__)
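# HTTP cache policy referenced by HTTPCACHE_POLICY in settings.py: pages of
# linked articles are cached on disk, while requests to lobste.rs itself
# (notably the RSS feed) are never cached, so new stories are always fetched.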
class AllButRootPolicy(DummyPolicy):
    def should_cache_request(self, request):
        (_, netloc, _, _, _, _) = urlparse(request.url)
        if netloc == DOMAIN:
            logger.info('will not cache %s' % (request.url))
            return False
        else:
            return super().should_cache_request(request)
class FeedSpider(XMLFeedSpider):
    name = 'feed'
    start_urls = [FEED]
    iterator = 'iternodes'
    itertag = 'item'

    def parse_node(self, response, node):
        # might be improved by https://doc.scrapy.org/en/latest/topics/loaders.html
        """
        mandatory: id, title, updated
        """
        i = EntryItem()
        # <guid isPermaLink="false">https://lobste.rs/s/2150cw</guid>
        guid = node.xpath('guid/text()').extract_first()
        i['id'] = guid  # .split("/")[-1]
        i['title'] = node.xpath('title/text()').extract_first()
        target = node.xpath('link/text()').extract_first()
        i['link'] = {}
        i['link']["_href"] = guid
        # "_" marks a property which is persisted as an element attribute in the xml exporter
        r = Request(target, self.parse_target)
        r.meta["i"] = i
        return r
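    # The partially-filled item travels to parse_target through request.meta;
    # parse_target completes it with the body of the linked article page.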
    def parse_target(self, response):
        i = response.request.meta["i"]
        i["content"] = response.body
        return i
# Automatically created by: scrapy startproject
#
# For more information about the [deploy] section see:
# https://scrapyd.readthedocs.io/en/latest/deploy.html
[settings]
default = lobsters.settings