Commit 94cf8b57 authored by Antonin Segault

First version of the script and first description of its use

parent b2d8a068
# Historic Graphs
Building dynamic maps of the hyperlinks between Wikipedia articles
## About
Historic Graphs is a tool (and a method) to capture, represent and study the evolution of the links between Wikipedia articles over time. Each article has its own version history, storing all the changes, additions and removals that occurred over time. Starting from an article, a Python script connects to Wikipedia's APIs and collects the list of links it contained at different points in time. It then follows these links and continues collecting up to the desired depth. All the lists of links are then merged into tables showing the link structure at each date. These tables can be imported into network analysis tools such as Gephi, to be visualized as dynamic maps.
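Each table is a simple list of edges, one per line, labelled with the revision date. The rows below are only an illustration, the actual links depend on the collection:

    source	target	timeset
    Dorayaki	Red bean paste	2018-01-01T00:00:00Z
    Dorayaki	Castella	2018-01-01T00:00:00Z
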
## How To
The script requires Python 3 and the requests library. It has only been tested on a GNU/Linux (Debian) system, but should work properly on other systems.
Data collection requires at least four pieces of information:
- the titles of the pages from which the collection will start (it is simpler to start with only one page)
- the dates for which links will be collected (usually 3 to 10 dates, spaced from a few hours to a few months apart)
- the language code of the Wikipedia edition that will be used (en, fr, ...)
- the depth of the map (collection time increases exponentially with depth, so start with a depth of one)
This information needs to be inserted into the Python script. Open the script with your favorite text editor and change the configuration variables accordingly. For example, to study the long-term evolution of the Dorayaki article on the English Wikipedia:

    pages = ['Dorayaki']
    dates = ['2006-01-01T00:00:00Z', '2012-01-01T00:00:00Z', '2018-01-01T00:00:00Z']
    lang = 'en'
    depth = 1

Then open a terminal and run the Python script. For example, on a GNU/Linux system:

    python3 historic-graphs.py

Data collection is slow (around 8 minutes for this small example) because it involves a lot of communication with the API. Have a cup of tea... And there we are: the script generates one CSV file per date, containing the list of links that existed at that time. Importing these files into Gephi will be covered in the tutorial below. Have fun!
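The output files are named after the time of the collection and the revision dates. For the example above, you would get something like this (the timestamp prefix shown here is only an illustration):

    2019-06-01_10-30-00_rev-2006-01-01T00:00:00Z.csv
    2019-06-01_10-30-00_rev-2012-01-01T00:00:00Z.csv
    2019-06-01_10-30-00_rev-2018-01-01T00:00:00Z.csv
    2019-06-01_10-30-00_settings.txt

The settings file keeps a copy of the configuration variables that were used for the collection.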
## Examples
Coming soon!
## Credit
Historic Graphs was created and is maintained by Antonin Segault (Paris Nanterre University, Dicen-IdF lab, France). For questions, suggestions and bug fixes: antonin.segault-AT-parisnanterre.fr
Historic Graphs is free software. The Python script, the documentation and the examples are released under the terms of the GNU General Public License, version 3: https://www.gnu.org/licenses/
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Historic Graphs was created by Antonin Segault in 2019
# Info and updates : https://framagit.org/retrodev/historic-graphs
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
import requests, time
# configuration variables
pages = ['Dorayaki'] # pages used as starting point
dates = ['2006-01-01T00:00:00Z', '2012-01-01T00:00:00Z', '2018-01-01T00:00:00Z'] # revision dates (ISO)
# careful, the UTC+0 timezone will be used, even when querying -- for example -- the French Wikipedia
lang = 'en' # language version of Wikipedia that is queried
depth = 1 # increasing it leads to a larger graph, but very long data collection times
trim = True # whether or not we should remove the external leaves of the graph (usually cleaner)
# stop list : titles of the sections that may indicate the end of the content of a page
# might need to be expanded -- or even created -- if you want to use other languages
stop = {}
stop['fr'] = ['Annexes', 'Articles connexes', 'Bibliographie', 'Notes', 'Notes et références', 'Références', 'Sources', 'Voir aussi']
stop['en'] = ['Notes', 'References', 'See also', 'Sources']
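# for instance, a stop list for the German Wikipedia could look like this (untested suggestion, to be checked) :
# stop['de'] = ['Einzelnachweise', 'Literatur', 'Siehe auch', 'Weblinks']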
# utility functions
def handleRequest(url) :
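    # send a GET request to the API and return the decoded JSON response (an empty dict on failure)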
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        return r.json()
    except requests.exceptions.HTTPError as errh:
        print("Http Error:", errh)
    except requests.exceptions.ConnectionError as errc:
        print("Error Connecting:", errc)
    except requests.exceptions.Timeout as errt:
        print("Timeout Error:", errt)
    except requests.exceptions.RequestException as err:
        print("Oops, something else:", err)
    return {} # the request failed : return an empty dict so that callers simply skip this revision
def getLinks(revid, sections) :
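    # collect the titles of the internal links (main namespace, existing pages) found in the given sections of a revision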
links = []
for section in sections :
url = 'https://' + lang + '.wikipedia.org/w/api.php?action=parse&oldid=' + revid + '&prop=links&section=' + section + '&format=json'
d = handleRequest(url)
time.sleep(.25) # waiting after each request, because we don't want to flood the API
if 'parse' in d :
for link in d['parse']['links'] :
if link['ns'] == 0 and 'exists' in link :
if link['*'] not in links :
links.append(link['*'])
return links
def getSections(revid) :
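    # list the indices of the top-level sections of a revision, stopping at the first section whose title is in the stop list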
    url = 'https://' + lang + '.wikipedia.org/w/api.php?action=parse&oldid=' + revid + '&prop=sections&format=json'
    d = handleRequest(url)
    sections = ['0']
    if 'parse' in d :
        for section in d['parse']['sections'] :
            if section['toclevel'] == 1 :
                if lang in stop : # if we have a stop list
                    if section['line'] in stop[lang] :
                        #print( section['line'] ) # useful when filling the stop lists
                        return sections # stop collecting : the end of the content has been reached
                sections.append(section['index'])
    return sections
def getRevisions(page, dates) :
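    # for each date, find the id of the last revision of the page made before (or at) that date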
    revids = {}
    for date in dates :
        url = 'https://' + lang + '.wikipedia.org/w/api.php?action=query&prop=revisions&titles=' + page + '&rvlimit=1&rvprop=timestamp|ids&rvdir=older&rvstart=' + date + '&format=json'
        d = handleRequest(url)
        if 'query' not in d :
            continue # the request failed : skip this date
        pageid = list(d['query']['pages'].keys())[0]
        if 'revisions' in d['query']['pages'][pageid] :
            revid = str(d['query']['pages'][pageid]['revisions'][0]['revid'])
            revids[date] = revid
    return revids
def buildGraph(graph, page, dates, depth) :
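    # recursively collect the links of a page at each date, then the links of the linked pages, down to the given depth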
print(page, depth, time.strftime("%H:%M:%S", time.localtime()))
graph[page] = {}
graph[page]['revisions'] = {}
revs = getRevisions(page, dates)
for rev in revs :
s = getSections(revs[rev])
l = getLinks(revs[rev], s)
graph[page]['revisions'][rev] = []
for link in l :
graph[page]['revisions'][rev].append(link)
if link not in graph.keys() and depth > 0 :
buildGraph(graph, link, dates, depth - 1)
def trimGraph(graph) :
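    # remove the links that point to pages whose own links were not collected (the external leaves of the graph)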
for page in graph :
for rev in graph[page]['revisions'] :
trimmed = graph[page]['revisions'][rev].copy()
for link in graph[page]['revisions'][rev] :
if link not in graph.keys() :
trimmed.remove(link)
graph[page]['revisions'][rev] = trimmed
# serious business starts here
t = time.strftime("%Y-%m-%d_%H-%M-%S",time.localtime())
graph = {}
for page in pages :
buildGraph(graph, page, dates, depth)
if trim :
trimGraph(graph)
# saving the graph of each revision date as a CSV table
for date in dates :
    o = 'source\ttarget\ttimeset\n'
    for page in graph :
        if date in graph[page]['revisions'] :
            for link in graph[page]['revisions'][date] :
                o = o + page + '\t' + link + '\t' + date + '\n'
    f = open(t + '_rev-' + date + '.csv', 'w', encoding='utf-8')
    f.write(o)
    f.close()
# saving the configuration variables in a separate file
f = open(t + '_settings.txt', 'w', encoding='utf-8')
f.write('Pages : ' + str(pages) + '\n\n')
f.write('Dates : ' + str(dates) + '\n\n')
f.write('Lang : ' + str(lang) + '\n\n')
if lang in stop :
    f.write('Stopwords : ' + str(stop[lang]) + '\n\n')
else :
    f.write('No stop list defined for this language\n\n')
f.write('Depth : ' + str(depth) + '\n\n')
f.write('Trim : ' + str(trim))
f.close()