For three days, participants were able to share knowledge and experience on the preservation of information published on the Web.
Arquivo.pt contributed the following presentations:
Training the Trainers – Helping Web Archiving Professionals become Confident Trainers (Pre-Conference Workshop, Training Working Group) – Ricardo Basílio (Abstract, slides)
80 Thousand Pages On Street Art: Exploring Techniques To Build Thematic Collections (Session#02: unique content) – Ricardo Basílio (Abstract, slides)
Renascer Project Brings Back Old Websites at Arquivo.pt (Session#04: Delivery & Access) – Ricardo Basílio, Daniel Gomes and Vasco Rato (Abstract, slides)
Arquivo.pt CitationSaver: Preserving Citations for Online Documents (Session#09: Digital Preservation) – Pedro Gomes, Daniel Gomes (Abstract, slides)
Fixing Broken Links with Arquivo404 (Poster session 2) – Vasco Rato, Daniel Gomes (Abstract, slides)
Arquivo.pt preserved online documents in several languages about the 2019 European Parliamentary Elections
The 2019 European Parliamentary Elections were an event of international relevance. The strategy for preserving relevant information published on the World Wide Web is typically delegated to national institutions. However, the preservation of web pages that document transnational events is not officially assigned to any of them.
The Arquivo.pt team, with the aim of preserving the cross-lingual online content that documents this event, applied a combination of human and automatic selection processes.
In the first step, 40 relevant terms in Portuguese about the 2019 European Parliamentary Elections were identified and then automatically translated into the 24 official languages of the European Union: Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish and Swedish.
These translations were reviewed in collaboration with the Publications Office of the European Union. In parallel, a collaborative list was launched to gather relevant seeds contributed by the international community.
In the second step, the Arquivo.pt team iteratively ran 6 crawls (99 million web files, 4.8 TB) using different configurations and crawling software, to maximize the quality of the collected content.
The collected web data was aggregated into one special collection, identified as EAWP23, which became searchable and accessible through Arquivo.pt in July 2020 (https://arquivo.pt/ee2019).
CLEOPATRA project: Cross-lingual Event-centric Open Analytics Research Academy
Daniel Gomes and Diego Alves presenting at the CLEOPATRA final event
The CLEOPATRA ITN was a Marie Skłodowska-Curie Innovative Training Network that aimed to generate ways to better understand the massive digital coverage of major events in Europe over the past decades.
The main goal was to facilitate advanced cross-lingual processing of textual and visual information related to key contemporary events at large scale and develop innovative methods for efficient access and interaction with multilingual information.
In total, 14 Early-Stage Researchers hosted across 9 European Universities developed their research while enrolled as Ph.D. students.
Associated partners such as Arquivo.pt contributed to CLEOPATRA by hosting and training early-stage researchers such as Diego Alves. As part of the training program, he conducted a secondment at Arquivo.pt in Lisbon from June to August 2022.
The aim was to develop part of his research on the syntactic structures of EU languages using the textual resources preserved by Arquivo.pt, and to exchange knowledge with web-archiving experts on strategies to extract and process historical web data.
Generating textual datasets for Natural Language Processing
Diego Alves’ work produced cross-lingual datasets about the 2019 European Parliamentary Elections that are valuable for research.
This work will be detailed in chapter “Robustness of Corpus-based Typological Strategies for Dependency Parsing” of the open-access CLEOPATRA book entitled “Event Analytics across Languages and Communities”.
A 3-step Natural Language Processing pipeline was developed to generate research textual datasets that can be used in several types of digital humanities studies:
Extract text: The textual content was extracted from each web-archived URL using the newspaper3k Python library. The language of each extracted text was determined using the langdetect library, to separate the texts written in different languages across distinct files;
Clean extracted texts: a Python script was applied to clean the texts by removing unnecessary information (e.g. repeated instances and empty lines);
Double-check language identification: the language of each cleaned text was verified again to eliminate possible errors introduced during the previous steps.
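The cleaning step above can be sketched in a few lines of Python; the exact rules applied here (collapsing whitespace, dropping empty lines and repeated instances) are an assumption based on the description, not the project's actual script:

```python
import re

def clean_text(raw: str) -> str:
    """Remove empty lines and repeated instances, keeping the original
    order of first occurrences (one interpretation of the cleaning step)."""
    seen = set()
    cleaned = []
    for line in raw.splitlines():
        line = re.sub(r"\s+", " ", line).strip()  # collapse inner whitespace
        if line and line not in seen:             # drop empty/repeated lines
            seen.add(line)
            cleaned.append(line)
    return "\n".join(cleaned)

sample = "Election news.\n\nElection news.\n  Turnout rises.  \n"
print(clean_text(sample))
# Election news.
# Turnout rises.
```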
Two new research datasets are openly available!
The result was a publicly available dataset of cleaned and language-verified texts. Each file contains the texts in a given language about the 2019 European Union Elections. The distribution of extracted texts per language is shown in the figure below:
Number of tokens of each corpus extracted from the collection 2019 European Union Elections preserved by Arquivo.pt (EAWP23).
The aforementioned corpus was automatically annotated with part-of-speech tags and dependency relations to generate a corpus with syntactic information that is useful for linguistic studies.
The texts in these annotated corpora follow the same order as the respective raw-text files. Each sentence is annotated following the Universal Dependencies framework in the CoNLL-U format, the reference format for syntactic annotation in Natural Language Processing. Thus, each file in this dataset contains the annotated texts in a given language about the 2019 European Union Elections.
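As an illustration of the format, a minimal CoNLL-U reader can be written in a few lines of Python; the two-token sample sentence below is invented for the example and is not taken from the dataset:

```python
def parse_conllu(text: str):
    """Parse CoNLL-U text into a list of sentences; each token is a dict
    with the 10 standard fields (ID, FORM, LEMMA, UPOS, XPOS, FEATS,
    HEAD, DEPREL, DEPS, MISC)."""
    fields = ["id", "form", "lemma", "upos", "xpos",
              "feats", "head", "deprel", "deps", "misc"]
    sentences, tokens = [], []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#"):      # sentence-level comments
            continue
        if not line:                  # a blank line ends a sentence
            if tokens:
                sentences.append(tokens)
                tokens = []
            continue
        tokens.append(dict(zip(fields, line.split("\t"))))
    if tokens:
        sentences.append(tokens)
    return sentences

conllu_sample = (
    "# text = Votes counted\n"
    "1\tVotes\tvote\tNOUN\t_\t_\t2\tnsubj\t_\t_\n"
    "2\tcounted\tcount\tVERB\t_\t_\t0\troot\t_\t_\n"
)
sentences = parse_conllu(conllu_sample)
print(sentences[0][0]["upos"])  # NOUN
```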
“Robustness of Corpus-based Typological Strategies for Dependency Parsing”, Diego Alves and Daniel Gomes, “Event Analytics across Languages and Communities” book, Springer (to appear).
The research and education community has been requesting support for the bulk download of web-archived data and index files (CDXJ), for instance to feed AI training models, optimize the routing of web archive requests or recover information from selected websites (e.g. news).
Arquivo.pt began making all its CDXJ index files publicly available in real time to facilitate the bulk download of web-archived data. Learn how at:
One of them was the tutorial “Timeline summarization for large-scale past-web events with Python: the case of Arquivo.pt” developed by Daniel Gomes and Ricardo Campos.
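For readers who want to work with the CDXJ index files mentioned above, a record can be split into its SURT key, timestamp and JSON metadata with standard-library Python; the sample record and its JSON field names are illustrative only, since field names vary between implementations:

```python
import json

def parse_cdxj_line(line: str):
    """Split one CDXJ record into its SURT key, 14-digit timestamp
    and JSON metadata block."""
    surt, timestamp, payload = line.split(" ", 2)
    return surt, timestamp, json.loads(payload)

# Illustrative record (not an actual line from the Arquivo.pt indexes)
record = ('pt,arquivo)/ 20200101000000 '
          '{"url": "https://arquivo.pt/", "mime": "text/html", "status": "200"}')
surt, ts, meta = parse_cdxj_line(record)
print(ts, meta["url"])  # 20200101000000 https://arquivo.pt/
```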
Arquivo.pt has now contributed to preserving online information that documents R&D projects funded by the Horizon 2020 programme (2014-2021): 197 million web files (17 TB) related to science were preserved for future access.
H2020 projects publish valuable information online, but it is being lost
However, after projects end, the corresponding websites usually disappear, causing a permanent loss of unique and valuable scientific information.
Arquivo.pt automatically identified URLs that document H2020 Research and Development projects
The European Union’s Open Data Portal published a data set from the Community Research and Development Information Service (CORDIS) that documents H2020 research projects. However, of the 31 129 projects listed, only 46% included a project URL.
Arquivo.pt developed a low-cost methodology that automatically identifies URLs related to R&D projects so that they can be systematically preserved. This automatic identification is achieved by combining open data sets with web search services. The methodology is detailed in a scientific article published at the International Conference on Digital Preservation 2016.
In sum, we extracted 106 300 unique URLs from the following open data sets:
Then, we extracted the acronym and title of the projects from the data sets and automatically searched the web for additional URLs using the Bing Search API.
All the data sets and tools developed have been made publicly available in open access so that they can be reused and collaboratively enhanced. In particular, you can access the software developed to automatically identify additional URLs about H2020 projects.
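As a rough sketch of the query-building part of this methodology (not the actual software linked above), a search request can be composed from a project's acronym and title; the acronym and title below are example values, and actually calling the Bing Web Search API requires a subscription key, which is omitted here:

```python
from urllib.parse import urlencode

def build_search_query(acronym: str, title: str) -> str:
    """Compose a web search URL from a project's acronym and title,
    quoting the acronym to favour exact matches."""
    query = f'"{acronym}" {title}'
    return "https://api.bing.microsoft.com/v7.0/search?" + urlencode({"q": query})

url = build_search_query("EXTMOS", "EXTended Model of Organic Semiconductors")
print(url)
```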
197 million web files related to science were preserved
Arquivo.pt identified and preserved 197 million web files (17 TB) that document R&D projects funded by Horizon 2020.
Contributions to complement the European Open Data Sets
All the resulting data sets were made publicly available so that they can be improved and reused by other organizations also interested on preserving this digital heritage:
URLsBingSearch (column V): top 10 search results returned by the Bing API when the projectUrl field (column K) in the original data set was empty (e.g. http://extmos.eu/)
ArchivedProjectURLs (column W): direct link to access the preserved version of the projectUrls and URLsBingSearch in Arquivo.pt (e.g. https://arquivo.pt/wayback/http://extmos.eu)
archivedOrganizationUrl (column Y): direct link to access the preserved version of the organizationUrl (column O) in Arquivo.pt (e.g. https://arquivo.pt/wayback/www.it.pt)
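The direct links in these columns follow a simple pattern; a minimal Python sketch of how such links can be derived from a live URL:

```python
def archived_link(url: str) -> str:
    """Build the Arquivo.pt Wayback link for a live URL, following the
    pattern of the ArchivedProjectURLs column above."""
    return "https://arquivo.pt/wayback/" + url

print(archived_link("http://extmos.eu/"))
# https://arquivo.pt/wayback/http://extmos.eu/
```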
When a user enters a set of words about a topic in the Arquivo.pt search box and clicks the “Narrative” button, they are directed to the “Conta-me Histórias” service, which automatically analyzes news from 25 websites archived by Arquivo.pt over time and presents a chronology of news related to the topic.
Figure 1: Search results for pages about “Justin Bieber”.
Figure 2: Narrative of news about “Justin Bieber” from Portuguese news sites preserved by Arquivo.pt generated by the “Conta-me Histórias” service.
Create your narrative now!
“Conta-me Histórias” searches, analyzes and aggregates thousands of results to generate each narrative about a topic. Choosing descriptive words about well-defined themes, personalities or events is recommended to obtain good narratives.
Creating a narrative is useful for researchers, journalists or citizens who want to quickly get an overview of the evolution of a topic over time, saving them a lot of time and effort.
Go to Arquivo.pt and try to create a narrative about a theme of your choice.
Web Archiving Conference 2021 – the most important meeting in the field of Web preservation, where experts share new knowledge and experiences
RESAW Conference – meeting of the European RESAW network (Research Infrastructure for the Study of Archived Web Materials) this year in its 4th edition, mainly addressed to the community of researchers from non-technological scientific areas, such as Social Sciences, Arts and Humanities.
Contributions of Arquivo.pt to the international community
Arquivo.pt presented some results of the work developed over the last year, with emphasis on functionalities that improve the reproduction of archived content, such as “Complete the page”.
Two historical collections were integrated into Arquivo.pt: Geocities and the Internet Memory Foundation collection. Arquivo.pt also created special collections about the 2019 European Elections and Covid-19.
The contents of Arquivo.pt are accessible to any researcher regardless of the country they are in, making it a useful service to the international community.
Presentations
Arquivo.pt updates 2021: presentation at the IIPC General Assembly, by Daniel Gomes (Video)
Complete the page – 1 minute drop-in (presentation at the IIPC General Assembly), by Daniel Gomes (Slide, Video)
A transnational and cross-lingual crawl of the European Parliamentary Elections 2019, by Ivo Branco (Slides, Video)
Enhancing access to research the Geocities historical collection, by Pedro Gomes (Slides, Video)
Complete the page – demo. Slide used in the 1-minute presentation at the IIPC General Assembly 2021
The historical collection of web content generated during the Internet Memory Foundation’s (IMF) activity has been donated to Arquivo.pt and is now searchable!
The IMF was a European organization dedicated to preserving web content that was wound up in 2018.
The first web archiving project in Europe (2004-2010) was led by Julien Masanès (who was guest of honour at the celebration of 10 years of Arquivo.pt) and was called the European Archive Foundation.
In 2010, Julien Masanès, the “father” of web archives in Europe, created the IMF.
Examples of pages from the collection donated by the IMF
The collection donated by the IMF has now been integrated in the Arquivo.pt collection to be preserved for posterity.
This collection is composed of 142 million files that total 6.3 TB of historical information whose texts or images can now be searched through Arquivo.pt.
This new collection has been named “InternetMemory” in the Arquivo.pt collections list.
Searches can be made on this collection using the collection search parameter or through the custom search page available at arquivo.pt/InternetMemory.
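A collection-restricted query can be composed against Arquivo.pt's full-text search API; the endpoint path and the `collection` parameter name below are assumptions based on the description above, so check the current API documentation before relying on them:

```python
from urllib.parse import urlencode

def collection_search_url(query: str, collection: str) -> str:
    """Compose a full-text search URL restricted to one Arquivo.pt
    collection (parameter name assumed from the description above)."""
    params = urlencode({"q": query, "collection": collection})
    return "https://arquivo.pt/textsearch?" + params

print(collection_search_url("web design", "InternetMemory"))
```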
This external service is useful for research use cases in areas such as Web design, Art, Communication or History, where it is necessary to access the original visual aspect of a page from the past as reliably as possible.
Web page of the European Map of WWW/NIR sites in 1996 using the Oldweb.Today service
You may have to wait a while for your request to be processed but it is always faster than having to install a browser from the past on your computer.
Export search results to spreadsheet format
This new function enables users to save their search results for further processing and analysis. This is especially useful for performing thorough research on a given topic.
After performing a search, simply choose one of the available formats in Options to export the obtained results: XLSX, CSV or TXT.
Results of a search for the word “universidade” (university), limited to 10 items, exported into an Excel sheet
Arquivo.pt launched a new version of its service, named WebApp, on April 15, 2020.
The purpose of this version was to standardize the user experience between different devices and reduce maintenance costs by removing components with redundant functions.
Its main novelty is the combination of the desktop and mobile interfaces in a single user interface.
The old desktop version has been disabled and the mobile version has evolved to work on various types of devices and screen sizes.