Recipe 1: How to assess FAIRness


Recipe metadata

Identifier: RX.x

Version: v0.1

Difficulty level:

Reading time: 15 minutes

Recipe type: Hands-on

Executable code: Yes

Intended audience: Principal Investigators, Data Managers, Data Scientists


Ingredients:

| Ingredient | Type | Comment |
| --- | --- | --- |
| HTTP 1.1 protocol | data communication protocol | |
| Guidance on persistent, resolvable identifiers | policy | |
| Persistent Uniform Resource Locators (PURL) | redirection service | |
| Archival Resource Key (ARK) | identifier minting service; identifier resolution service | |
| DOI | identifier minting service | based on the Handle system |
| Handle system | identifier minting service; identifier resolution service | |
| identifiers.org | identifier resolution service | |
| EZID resolution service | identifier resolution service | |
| name2things resolution service | identifier resolution service | |
| FAIREvaluator | FAIR assessment | |
| FAIRShake | FAIR assessment | |
| RDF/Linked Data | model | |
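Most of the ingredients above are identifier resolution services: given a persistent identifier, they answer an HTTP 1.1 request with a redirect to the current location of the resource. The sketch below illustrates this mechanism; the two identifiers used (a Dublin Core PURL and an identifiers.org CURIE) are illustrative examples, not part of this recipe's dataset.

```python
import requests

# Illustrative persistent identifiers (not part of this recipe):
# a PURL from the Dublin Core vocabulary and an identifiers.org CURIE.
identifiers = [
    "http://purl.org/dc/terms/title",
    "https://identifiers.org/taxonomy:9606",
]

for pid in identifiers:
    # HEAD keeps the request lightweight; allow_redirects follows the
    # resolver's redirect chain to the resource's current location.
    response = requests.head(pid, allow_redirects=True, timeout=10)
    print(f"{pid}\n  -> {response.url} (HTTP {response.status_code})")
```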

Objectives:

Perform an automatic assessment of the status of a dataset with respect to the FAIR principles. Obtain human- and machine-readable reports highlighting the main points of success and failure.

Step by Step Process:

Step1:

Navigate to the FAIREvaluator tool, which can be accessed from the FAIREvaluator home page: https://fairsharing.github.io/FAIR-Evaluator-FrontEnd/#!/

Step2:

In order to run the FAIREvaluator, it is important to understand the notion of FAIR indicators (formerly referred to as FAIR metrics). One may browse the list of currently defined, community-contributed indicators from the Collections page.

Select a 'FAIR Maturity Indicator - Collections'

Step3:

To run an evaluation, the FAIREvaluator needs the following four inputs from the user (a programmatic sketch follows this list):

  1. a collection of FAIR indicators, selected from the list described above;
  2. a globally unique, persistent, resolvable identifier for the resource to be evaluated;
  3. a title for the evaluation (enforce a naming convention to make future searches easier, as these evaluations are saved);
  4. a person identifier, in the form of an ORCID.
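
Besides the web form, the same four inputs can be submitted to the FAIREvaluator programmatically. Below is a minimal sketch of such a call; the API base URL, the endpoint path (`/collections/<id>/evaluate`), the JSON field names (`resource`, `executor`, `title`), and the collection number are all assumptions to be verified against the Evaluator's own API documentation before use.

```python
import requests

EVALUATOR = "https://w3id.org/FAIR_Evaluator"  # assumed API base URL

# Hypothetical inputs mirroring the four items listed above.
collection_id = 16                                # assumed indicator collection
payload = {
    "resource": "10.1038/s41597-019-0184-5",      # GUID of the resource to evaluate
    "executor": "0000-0001-9853-5668",            # ORCID of the person running it
    "title": "2020-01-01_my-dataset_fair-check",  # follow a naming convention
}

response = requests.post(
    f"{EVALUATOR}/collections/{collection_id}/evaluate",
    json=payload,
    headers={"Accept": "application/json"},
    timeout=300,  # an evaluation can take several minutes
)
response.raise_for_status()
report = response.json()  # machine-readable evaluation report
```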

Running the FAIREvaluator - part 1: setting the input

Step4:

Hit the 'Run Evaluation' button on the page at https://fairsharing.github.io/FAIR-Evaluator-FrontEnd/#!/collections/new/evaluate

Running the FAIREvaluator - part 2: execution

Step5:

Analyze the report:

FAIREvaluator report - overall report

Time to dig into the details and figure out why some indicators report a failure:

apparently, there is a problem with identifier persistence when using DOIs, which are URNs rather than URLs *stricto sensu*.
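
This point can be reproduced outside the Evaluator. A minimal sketch, assuming the Python requests library: the bare `doi:` form is a URN that an HTTP client cannot dereference directly, while the https://doi.org/ proxy form resolves, so automated tests that attempt direct HTTP resolution of the URN form will flag a failure. The DOI used here is that of the Wilkinson et al. reference below.

```python
import requests

bare_doi = "doi:10.1038/s41597-019-0184-5"             # URN form, not an HTTP URL
proxied = "https://doi.org/10.1038/s41597-019-0184-5"  # HTTP-actionable proxy form

try:
    requests.head(bare_doi, timeout=10)
except requests.exceptions.RequestException as err:
    # requests has no connection adapter for the 'doi:' scheme
    print(f"bare DOI is not dereferenceable over HTTP: {err}")

response = requests.head(proxied, allow_redirects=True, timeout=10)
print(f"proxied DOI resolves to {response.url} (HTTP {response.status_code})")
```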

Reference:

Wilkinson, M.D., Dumontier, M., Sansone, S.-A., et al. Evaluating FAIR maturity through a scalable, automated, community-governed framework. Sci Data 6, 174 (2019). doi:10.1038/s41597-019-0184-5

Clarke, D.J.B., et al. FAIRshake: Toolkit to Evaluate the FAIRness of Research Digital Resources. Cell Systems (2019). doi:10.1016/j.cels.2019.09.011

Authors:

| Name | Affiliation | ORCID | CRediT role |
| --- | --- | --- | --- |
| Philippe Rocca-Serra | University of Oxford, Data Readiness Group | 0000-0001-9853-5668 | Writing - Original Draft |

License:

This page is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.