We can now announce the publication of the Requirements Report written by partners in Work Package 1.
The Requirements Report details:
* A full list of content to be provided to DM2E by the content partners;
* Detailed summaries of DM2E content providers’ metadata and content infrastructures, and the interfaces used to retrieve data;
* A summary of the mappings of each content provider’s metadata onto the Europeana Data Model (EDM) and the corresponding “crosswalks”.
The full report can be downloaded [here](http://dm2e.okfn.org/files/D1.1_1.0_WP1_Requirements_Report-20120813_Final.pdf).
The DM2E team are very happy to announce the first prototype release of Korbo and Pundit – two innovative Linked Data tools for use within humanities research.
##Korbo
Korbo is a powerful aggregation platform for gathering Linked Data objects relevant to your area of research into single workspaces or “baskets”.
Korbo is targeted primarily at developers who want to build applications on top of its API and make full use of the linked cultural data from sources such as Europeana, FreeBase and DBPedia.
Korbo is currently in the early stages of development, but you can already try out a demo version of the platform. Please do give us feedback on your experiences as this is invaluable information for future iterations – [Korbo user feedback questionnaire](https://docs.google.com/spreadsheet/viewform?formkey=dHNXd0ZleUx5RDlYZTRTekFJLWtVZGc6MQ#gid=0).
[TEST KORBO](http://korbo.muruca.org/demo.php)
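To make the “basket” idea concrete, here is a minimal sketch of what such a workspace might look like. The class, method names and example URIs below are our own illustration, not Korbo’s actual API.

```python
# Illustrative sketch only: the class and method names below are our own
# invention for explanation, not Korbo's actual API.

class Basket:
    """A minimal model of a Korbo-style "basket": a named workspace
    that collects Linked Data objects, each identified by a URI."""

    def __init__(self, name):
        self.name = name
        self.items = {}  # URI -> metadata dict

    def add(self, uri, metadata):
        # Adding the same URI again simply updates the stored record.
        self.items[uri] = metadata

    def find(self, key, value):
        # Return the URIs of items whose metadata matches a key/value pair.
        return [uri for uri, meta in self.items.items() if meta.get(key) == value]


basket = Basket("my-research")
basket.add("http://dbpedia.org/resource/Ludwig_Wittgenstein",
           {"type": "Person", "source": "DBpedia"})
basket.add("http://example.org/object/1",  # hypothetical local object
           {"type": "Manuscript", "source": "local"})

print(basket.find("source", "DBpedia"))
```

An application built on top of such a basket would retrieve its objects by URI and enrich or annotate them, which is exactly the role Pundit plays below.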
##Pundit
Pundit is a powerful but easy-to-use semantic annotation tool that can work with the objects collected within a Korbo “basket”.
It enables you to link sections of text to each other or to other Linked Data resources on the net, such as DBpedia, Freebase and Geonames. If a text document has a microstructure with sub-entities identified by URIs, such structures can be used transparently; otherwise, a highlighting function will be available, which will also enable the highlighting of image areas.
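As a rough illustration of the idea (not Pundit’s actual data model), such an annotation can be thought of as an RDF triple connecting the URI of a text fragment to the resource it refers to. The fragment URI below is a made-up example; `dcterms:references` is a real Dublin Core property.

```python
# Illustrative sketch: a semantic annotation expressed as an RDF triple
# linking a text fragment to an external Linked Data resource.
# The fragment URI is a made-up example; this is not Pundit's actual model.

triples = []

def annotate(fragment_uri, predicate, target_uri):
    """Record a semantic link from a text fragment to an external resource."""
    triples.append((fragment_uri, predicate, target_uri))

# Link a highlighted passage to the DBpedia entry it mentions.
annotate("http://example.org/brownbook/page4#char=120,155",
         "http://purl.org/dc/terms/references",
         "http://dbpedia.org/resource/Ludwig_Wittgenstein")

for s, p, o in triples:
    print(f"<{s}> <{p}> <{o}> .")  # N-Triples-style serialisation
```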
Pundit is currently in its Alpha phase, but you can already try out a demo version of the platform. Please remember to give us feedback on your experiences of Pundit via the [Pundit questionnaire](https://docs.google.com/spreadsheet/viewform?formkey=dExoMEV1MGJEcm9PNzF4VkI2NzZBRXc6MQ#gid=0).
[TEST PUNDIT](http://thepund.it/demo.php)
##Going forward
Both tools will continue to be developed as part of the DM2E (Digitised Manuscripts to Europeana) project, on the basis of the feedback we receive from user testing and of the continuing work on user requirements and scholarly primitives led by Professor Gradmann and the Humboldt Universität.
To follow the project and receive updates about events, conferences and code sprints we are running, please sign up to the [project mailing list](http://sympa.cms.hu-berlin.de/sympa/subscribe/dm2e-news).
Following the presentations, plans for the next six months of the DM2E project were discussed.
Here is a summary of the key points:
* Aggregation of requirements;
* Adaptation of MINT to DM2E requirements;
* Specialisation of the Europeana Data Model for DM2E (EDM+);
* Stabilisation of the prototype demoed at the [Digital Humanities Advisory Board meeting](https://dm2e.eu/digital-humanities-advisory-board-meeting-15th-june-2012/);
* Continuation of the work on the “Scholarly Domain Model” begun by Stefan Gradmann and Steffen Hennicke with the Advisory Board, for publication in autumn;
* A series of remote Work Package meetings and workshops;
* Remote user testing of Korbo and Pundit;
* Promotion of Korbo and Pundit at major Digital Humanities conferences;
* Creation of an open licensing framework for metadata.
Last Friday the DM2E Advisory Board gathered in the confines of the Humboldt Universität Berlin to give their feedback on the first demo of the tools being developed as part of the project and to determine guidelines for the future functional evolution of the DM2E scholarly toolset.
Attendees from the Board included:
* **Susan Schreibman (Trinity College Dublin)**
* **Tobias Blanke (King’s College London)**
* **Stefan Gradmann (Humboldt Universität)**
* **Steffen Hennicke (Humboldt Universität)**
* **Sally Chambers (DARIAH)**
* **Felix Sasaki (W3C)**
* **Alois Pichler (University of Bergen)**
* **Laurent Romary (INRIA)**
* **Gerhard Lauer (Göttingen Centre for Digital Humanities)**
The initial focus of the meeting was the demo of two tools developed by Net7 as part of DM2E and now in the prototype phase: Pundit and Korbo. Both of these tools are designed to operate on the immensely rich source of Linked Data that is being supplied to DM2E by the Consortium content providers and that will become part of Europeana.
We will be releasing both of these tools for remote user testing next week; if you’re interested in trying them, please sign up to the mailing list so we can notify you once they’re ready.
[Sign up to DM2E Mailing List](http://sympa.cms.hu-berlin.de/sympa/subscribe/dm2e-news)
##Korbo
An aggregation platform for gathering Linked Data objects relevant to your area of research into single workspaces or “baskets”.
This platform will be immensely useful for Digital Humanities developers wanting to build applications on top of Europeana metadata, for example in creating virtual scholarly ‘desktop’ environments.
##Pundit
Pundit is a powerful but easy-to-use semantic annotation tool that enables you to link sections of text to each other or to other Linked Data resources on the net, such as DBpedia, Freebase and Geonames. If a text document has a microstructure with sub-entities identified by URIs, such structures can be used transparently; otherwise, a highlighting function will be available, which will also enable the highlighting of image areas.
Pundit can operate on top of the objects that you have collected in your Korbo “basket” or workspace, and as such, it is an example of the kind of tools that other developers might want to build with Korbo to enhance scholarship in the humanities.
##Screencast
To whet your appetite for the full demo of these tools we’ve put together a short screencast:
##Ongoing work on scholarly primitives
Professor Stefan Gradmann presented his proposed model for the scholarly domain as a whole and his elaboration of the scholarly primitives concept, foundational to the Digital Humanities since John Unsworth’s pioneering work on the subject [^1].
Gradmann identified a number of central ways in which a scholar engages with digital corpora, to be further elaborated in a forthcoming paper co-authored with the DM2E Advisory Board. Modeling and automated semantic ‘expansion’ (dynamic contextualisation to variable degrees of semantic ‘neighborhood’) were considered among the most promising and challenging functional extensions moving beyond mere annotation functionality. This work will inform the future development of Korbo and Pundit.
##The Wittgenstein Case Study
At the meeting it was confirmed that at the very start of 2013 a group of about ten Wittgenstein scholars will begin working on Wittgenstein’s Brown Book manuscripts, as they are made available to DM2E by the Wittgenstein Archives at the University of Bergen, as well as on scholarly material related to this text.
This will enable the scholars’ interactions with the texts in Korbo and Pundit to be traced and fed back into the platform itself, further honing its functionality so that it responds precisely to the needs of scholars working in a digital environment.
The scholarly semantic graph resulting from this incubator activity will be an object of scholarly work itself, as the modeling of this dynamic aggregation of RDF statements and of its evolution may create new insights into the way thought and discourse form and evolve in virtual research environments.
[^1]: [“Scholarly Primitives: what methods do humanities researchers have in common, and how might our tools reflect this?”, 2000](http://people.lis.illinois.edu/~unsworth/Kings.5-00/primitives.html)
A couple of weeks back we posted about a [Digital Humanities event taking place at the WWW 2012 Conference in Lyon, France](https://dm2e.eu/the-web-digital-humanities-what-about-semantics/).
We now have Professor Gradmann’s slides on the [DM2E Slideshare](http://www.slideshare.net/DM2E) for those who weren’t able to make it:
Last week, on April 20th, the [Open Knowledge Foundation](http://www.okfn.org) hosted its first open GLAM workshop in Berlin as part of the [DM2E](http://www.dm2e.eu) workshop program. During this half-day event, the legal issues faced by cultural heritage institutions that want to open up their collections were discussed. Representatives from the [Staatsbibliothek zu Berlin](http://staatsbibliothek-berlin.de/), the [Institute for Museum Research](http://www.smb.museum/ifm/), [the Geheimes Staatsarchiv Preussischer Kulturbesitz](http://www.gsta.spk-berlin.de/), [the Naturkundemuseum](http://www.naturkundemuseum-berlin.de/), [the Max Planck Society](http://www.mpg.de/en) and many others joined legal experts from [Creative Commons](http://creativecommons.org/), [Wikimedia](http://wikimedia.de/) and [iRights](http://irights.info/).
After a word of welcome by [Joris Pekel](https://okfn.org/about/team/the-projects-team/#joris-pekel-openglam) (OKFN) and Jutta Weber (Staatsbibliothek), [Daniel Dietrich](https://okfn.org/about/team/international-chapters/#daniel-dietrich-chapter-lead-okfn-deutschland) (OKFN) opened the workshop with a short overview of the current state of affairs in the field of digitising cultural heritage. He discussed the proposed amendment of the PSI Directive, which would bring cultural heritage institutions under its scope, and the work the Open Knowledge Foundation does to unlock cultural heritage.
Jutta Weber, head of the Manuscripts Department of the Staatsbibliothek and a partner in the [Digitised Manuscripts to Europeana project](http://www.dm2e.eu), presented the work the Staatsbibliothek is doing to digitise its collection and what that means for the literary publishing cycle now that digital objects become available outside the reading room. The fact that the works are digital brings new possibilities, such as the creation of digital derivative works and annotations. It also raises new legal questions about who owns the copyright of the digitised object, as well as about the private rights of authors and editors.
Paul Klimpel clearly laid out why the arguments against releasing metadata under an open license are not valid. More and more, GLAM institutions are becoming publishers themselves, both through original research and through crowdsourcing new information about cultural objects. To enable these new ways of working with digitised cultural data, institutions have to publish their data under an open license. Institutions need to switch to a collaborative model, working and sharing with other institutions to remain visible and to add value to their collections.
John Weitzmann of Creative Commons discussed the [different Creative Commons licenses](http://creativecommons.org/licenses/) and what they mean for cultural institutions. The [4.0 version](http://wiki.creativecommons.org/4.0) of the licenses was also discussed, including how it would benefit, for example, Wikimedia editors.
Mathias Schindler gave an overview of the different Wikimedia projects that are relevant for GLAM institutions, such as the media repository [Wikimedia Commons](http://commons.wikimedia.org/wiki/Main_Page) and the recently started [Wikidata](http://meta.wikimedia.org/wiki/Wikidata) project. He explained why images used on Wikipedia need to be openly licensed and permit commercial reuse if Wikimedia’s goal – a world where every single human being can freely share in the sum of all knowledge – is to be attained. He also gave some examples of how institutions can benefit from having their content in Wikimedia Commons, and why good metadata is so important for a cultural institution’s visibility.
Paul Keller ([Knowledgeland](http://kl.nl), [Europeana](http://www.europeana.eu)) presented the [Europeana Licensing Framework](http://pro.europeana.eu/documents/858566/7f14c82a-f76c-4f4f-b8a7-600d2168a73d) and the [Data Exchange Agreement](http://pro.europeana.eu/web/guest/data-exchange-agreement). From the first of July, all metadata in Europeana will be licensed under CC0, which means that it can be freely used without any restrictions. He pointed out that Europeana favors a smaller database that is easy to use under a CC0 license over a larger collection that is harder to use because the material is licensed only for non-commercial use.
After his presentation, Paul Keller discussed with the participants their concerns about opening up metadata and signing the agreement. He mentioned that German institutions are, in general, the ones that have the most trouble signing the Europeana Data Exchange Agreement, and asked the institutions why that is. One answer was that the distinction between metadata and the actual content is not always clear to institutions, which shows the importance of demystifying the topic.
The Open Knowledge Foundation will be hosting a follow up workshop in the new year, focusing on metadata standards and technologies in cultural heritage institutions. If you’re interested in keeping in touch, you can join the [open-glam mailing list](http://lists.okfn.org/mailman/listinfo/open-glam).
We thank the Staatsbibliothek in Berlin for kindly giving us a room to use for the workshop and Wikimedia and Creative Commons for their support with organising the workshop.
This Friday (20th April 2012) Professor Stefan Gradmann and Dr Tobias Blanke will join a panel at the [WWW 2012 conference in Lyon, France](http://www2012.wwwconference.org/) to discuss the opportunities and challenges that the semantic web poses for the humanities.
The most pertinent questions of the day within the Digital Humanities will be addressed including:
* How do we encourage participation amongst a Humanities community that historically has low technical literacy and limited interest in complex data modelling?
* To what extent does the Semantic Web require and/or benefit from open access and decentralisation, and how well will that fit with the expectations and reward structures based on traditional modes of Humanities publication and dissemination?
* Are Semantic technologies the ultimate digital divide – powerful tools for those who can make use of them but computational voodoo to everyone else?
* What might the Humanities themselves have to say about this new medium, and what light does it shed on the way humans communicate and interact?
We expect DM2E to become a point of discussion, particularly the work to be undertaken on modeling large semantic scholarly graphs and the role Linked Open cultural heritage data will play in the project.
For more details on the event and the other panelists, visit the [WWW 2012 website](http://www2012.wwwconference.org/program/panels/web-digital-humanities/).
**On March 28th, partners involved in Workpackage 2 came together to discuss plans going forward. The meeting was held at the [Freie Universität Berlin](http://www.fu-berlin.de/).**
The focus of Workpackage 2 is to develop a tool for converting metadata from a range of sources to RDF that maps onto the Europeana Data Model, thus enabling Europe’s cultural heritage institutions to more easily integrate their data into Europeana. One of the most important things the tool developed as part of Workpackage 2 must do is preserve the original metadata record.
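As a rough sketch of what such a conversion involves (not the actual Workpackage 2 tool), the snippet below maps a made-up flat source record onto Dublin Core properties, which EDM reuses, under an `edm:ProvidedCHO`, while the original record is preserved verbatim alongside the RDF output.

```python
# Rough sketch only, not the actual Workpackage 2 tool: a made-up flat
# source record is converted to EDM-style RDF triples, and the original
# record is kept untouched alongside the converted data.

DC = "http://purl.org/dc/elements/1.1/"
EDM = "http://www.europeana.eu/schemas/edm/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def to_edm(record_uri, record):
    # Type the record as an edm:ProvidedCHO, then map each flat field
    # onto the Dublin Core property of the same name.
    triples = [(record_uri, RDF_TYPE, EDM + "ProvidedCHO")]
    for field, value in record.items():
        triples.append((record_uri, DC + field, value))
    return triples

source_record = {"title": "Brown Book, notebook 1",
                 "creator": "Ludwig Wittgenstein"}
converted = to_edm("http://example.org/cho/1", source_record)

# The original record is preserved verbatim next to the RDF output.
archive = {"original": source_record, "rdf": converted}
```

A real converter would of course handle many more field types and vocabularies, but the key requirement stated above is visible even here: the source record survives the conversion unchanged.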
Steffen Hennicke (Humboldt Universität) started with an extended presentation about the properties and possibilities of the Europeana Data Model.
The most important task of the day was to look at the data that the content providers in Workpackage 1 had provided, and five key areas were discussed as part of the agenda:
1. Distill the requirements from the datasets provided by the Workpackage 1 partners. This meant identifying the different classes and properties used within the datasets provided.
2. Match these requirements with the capabilities of the [Europeana Data Model](http://pro.europeana.eu/documents/900548/bb6b51df-ad11-4a78-8d8a-44cc41810f22).
3. Identify gaps and try to fill them with terms from other vocabularies. One suggestion was to build an ordered list of vocabularies to reuse.
4. Discuss URI construction schemas.
5. Decide on to-dos and next steps.
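To illustrate what a URI construction schema (agenda item 4) might look like, one option is to mint stable URIs from a base namespace, a provider identifier and the provider’s local record identifier. The namespace and identifiers below are made-up examples, not a decision taken at the meeting.

```python
# One possible URI construction schema: mint stable URIs from a base
# namespace, a provider id and the provider's local record id.
# The namespace and identifiers are made-up illustrations.
from urllib.parse import quote

BASE = "http://data.example.org/dm2e"  # hypothetical namespace

def mint_uri(provider, item_type, local_id):
    # Percent-encode the local identifier so ids containing spaces or
    # slashes still yield a single, valid path segment.
    return f"{BASE}/{provider}/{item_type}/{quote(local_id, safe='')}"

print(mint_uri("provider-x", "item", "ms 1234/a"))
# → http://data.example.org/dm2e/provider-x/item/ms%201234%2Fa
```

Encoding the local identifier matters because legacy shelfmarks and record numbers often contain characters that are not URI-safe.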
We are proud to announce that the Open Knowledge Foundation, in cooperation with Wikimedia Deutschland and Creative Commons, will organise the very first workshop of the DM2E project, titled “Rechtliche Fragen beim Öffnen von (Meta-) Daten Gedächtnisinstitutionen” (“Legal questions around opening up the (meta)data of memory institutions”).
The goals for this workshop are:
* To explain why open data can be beneficial for cultural institutions;
* To give examples and showcases of the possibilities of open cultural data;
* To explain the legal issues surrounding open data;
* To identify and discuss the issues and concerns cultural heritage institutions have about opening up (meta)data;
* To let the institutions share their own experiences.
During this day, there will be speakers from different projects and institutions that have experience in working with open cultural data. They will present their showcases and give the representatives the possibility to ask questions or address concerns they have. Besides this, there will be a variety of legal experts present who can help the institutions sorting out their situation and again answer their questions.
The program:
* Daniel Dietrich (OKFN) will open the workshop with a word of welcome and facilitate the rest of the day. He will start with an overview of the current situation and why it can be beneficial for institutions to open up and share their data.
* Dr. Jutta Weber (Staatsbibliothek Berlin) will give a presentation about the Staatsbibliothek’s experiences with releasing data under an open license.
* Dr. Paul Klimpel will give a presentation about the legal possibilities when institutions open up their cultural (meta)data.
* Mathias Schindler (Wikimedia) will give an overview of the work Wikimedia has been doing in this area, with showcases and examples, as well as where they stand now and future developments.
* John Hendrik Weitzmann (CC) will give an overview of the different licensing models related to opening up data.
* Paul Keller (Europeana) will present the work Europeana is doing and what it means for cultural institutions to join and openly license their metadata.
* We will end the session with a round-table discussion.
The workshop will be held on Friday the 20th of April, from 13:00 to 17:00, at the Staatsbibliothek in Berlin, and is free to attend for representatives of cultural heritage institutions.
The complete workshop will be in German.
For more information or registration, please contact joris.pekel [at] okfn.org
Last week the Digitised Manuscripts to Europeana (DM2E) project officially kicked off. Hosted in the wonderful setting of the Humboldt University, the kick-off event lasted two days and brought together the 11-strong consortium of academic institutions, non-governmental organisations and cultural heritage institutions. Led by the Project Coordinator, Professor Stefan Gradmann, the event featured keynote addresses as well as presentations from work package leads about their work going forward.
DM2E will address the needs of scholars in the Digital Humanities who require tools for working with large bibliographic data sets. Dr Susan Schreibman of Trinity College Dublin and member of the DM2E Advisory Board gave a fascinating account of the current state of the Digital Humanities. She presented some of the ground-breaking research that is being undertaken on the history of reading practices and presented some of the best examples of visualisation projects in the Digital Humanities, such as Stanford’s Mapping the Republic of Letters.
The next presentation was from Stefan Gradmann himself. In an insightful talk he explained the objectives and the roles of the different workpackages of the project. The goal of DM2E is not only to enable institutions to provide enriched content to Europeana, but also to enable community-based use, remix and knowledge building.
Following on from this, Steffen Hennicke from the Humboldt University presented the Europeana Data Model (EDM). The Europeana Data Model stands at the heart of much of the technical work to be undertaken as part of DM2E; its richness and flexibility facilitate the contextualisation of metadata in Europeana. The remainder of day one and the first half of day two were spent in break-out sessions, at which each Work Package lead set out their plan going forward.
To round off proceedings on the second day the final presentation was given by Louise Edwards from the European Library. Louise highlighted the challenges faced by Europeana going forward but also the opportunities that it held for enriching research and stimulating the development of new apps that used cultural heritage data.
Emphasis was placed on the importance of engaging scholarly communities with Europeana. It was a fitting end to the kick-off event, as it showed where the project would benefit Europeana more generally, by enabling more communities to use Europe’s vast store of digital cultural heritage in new and exciting ways through the creation of innovative tools.