prepared by
Recommendations in this document are intended to illustrate the guidelines and other information provided by the SNSF for preparing Data Management Plans. The SNSF’s guidelines are binding. A PDF version of this guidance document is available here: DLCM_SNSF-DMP_v2-1.pdf.
This document was prepared jointly by teams from the libraries of EPFL and ETH Zurich, with input from DLCM partners, and exists in adapted versions for the two universities. It can also be freely adapted to other institutions' needs. The examples therefore do not cover all disciplines. Further examples from other subject areas, as well as other feedback, may be sent to info@dlcm.ch for possible inclusion in future revisions.
Please contact data-archive@library.ethz.ch for feedback or questions concerning ETH Zurich.
License Creative Commons CC BY-SA
Mandated by
1. Data collection and documentation
2. Ethics, legal and security issues
3. Data storage and preservation
ETH Zurich
Principal Investigator:
(Specify name and email)
Data management plan contact person:
(Specify name and email)
What type, format and volume of data will you collect, observe, generate or reuse?
Which existing data (yours or third-party) will you reuse?
Briefly describe the data you will collect, observe or generate. Also mention any existing data that will be (re)used. The descriptions should include the type, format and content of each dataset. Furthermore, provide an estimation of the volume of the generated datasets.
This relates to the FAIR Data Principles F2, I3, R1 & R1.2
For each dataset in your project (including data you might re-use) mention:
Data type: Briefly describe categories of datasets you plan to generate or use, and their role in the project
Data origin: to be mentioned if you are reusing existing data (yours or a third party's). Add a reference to the source if relevant.
Format of raw data (as created by the device used, by simulation or downloaded): open standard formats should be preferred, as they maximize reproducibility and reuse by others and in the future [see List of recommended file formats by ETH Zurich]
Format of curated data (if applicable): open standard formats should be preferred [see List of recommended file formats by ETH Zurich]
Estimation of volume of raw and curated data.
The data produced from this research project will fall into two categories:
Data in category 1 will be documented in [file format]. We anticipate that the data produced in category 1 will amount to approximately 10 MB and the data produced in category 2 will be in the range of 4 - 5 GB. |
This project will work with and generate three main types of raw data.
All data will be stored in digital form, either in the format in which it was originally generated (i.e. Metamorph files for confocal images; Spectrum Mill files for mass spectra, with the results of mass spectra analyses stored in CSV files; TIFF files for gel images; MariaDB SQL dump files for genetics records), or converted into digital form via scanning to create TIFF or JPEG files (e.g. western blots or other types of results). Measurements and quantifications of the images will be recorded in Excel files (for long-term preservation, they will be converted to CSV files). Micrograph data is expected to total between 100 GB and 1 TB over the course of the project. Scanned images of western blots are expected to total around 1 GB over the course of the project. Other derived data (measurements and quantifications) are not expected to exceed 10 MB. |
The data are health records auto-generated by users of the application X. They are subject to a contract with the company X. All fields contain user observations and are entered manually, except for temperature, which is measured by a Bluetooth-connected thermometer. Data fields per user (anonymized by X): user identifier; age; weight; size. Data fields per user per day of observation:
Data will be received in CSV format and will consist of the records of 2 million users. It will amount to a maximum of 1 GB. |
There will be two categories of data: NEW data from this project and EXISTING data from the FOEN Lake Monitoring
The EXISTING data is already available (CIPAIS, CIPEL) in Excel sheets with matrices for the individual samplings and a variable number of parameters (~10 to ~25). The EXISTING data will not be modified and remains with the organizations; we will keep a copy on our computers during the project. We anticipate that the data produced in category 1 will amount to several hundred MB for the moored and profiled sensor files and ~100 GB for the T-microstructure profiles; the EXISTING data in category 2 is in the range of ~20 MB. |
Digital Curation Office: data-archive@library.ethz.ch
What standards, methodologies or quality assurance processes will you use?
How will you organize your files and handle versioning?
Explain how the data will be collected, observed or generated. Describe how you plan to control and document the consistency and quality of the collected data: calibration processes, repeated measurements, data recording standards, usage of controlled vocabularies, data entry validation, data peer review, etc.
Discuss how the data management will be handled during the project, mentioning for example naming conventions, version control and folder structures.
This relates to the FAIR Data Principle R1
For each dataset in your project (including data you might re-use) mention:
the use of core facility services (specify their certifications, if any),
whether you follow double blind procedures (define it),
the use of standards or internal procedures; describe them briefly.
If you are working with persons’ data, confirm the following:
have the subjects of your data collection (persons) been fully informed (what data you collect, what you will do with the data, who will receive it, and when the data will be deleted), and have the subjects given their informed consent?
Indicate and describe the tools you will use in the project.
You may rely on the following tools depending on your needs:
a naming convention, i.e. the structure of folders and file names you will use to organize your data.
For example: Project-Experiment-Scientist-YYYYMMDD-HHmm-Version.format (concretely: Atlantis-LakeMeasurements-Smith-20180113-0130-v3.csv); a small helper that assembles such names is sketched after this list.
a code revision management system, such as Git. Several Git servers are available for the ETH Domain: c4science.ch, gitlab.epfl.ch, gitlab.ethz.ch.
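To illustrate the naming convention above, the following minimal Python sketch assembles file names of that form; the fields, date format and extension are taken from the example and would of course be adapted to your own project. Generating names programmatically keeps them consistent across collaborators and avoids manual typing errors.

from datetime import datetime

def build_filename(project, experiment, scientist, version, ext, when=None):
    # Assemble a name of the form
    # Project-Experiment-Scientist-YYYYMMDD-HHmm-Version.format
    when = when or datetime.now()
    stamp = when.strftime("%Y%m%d-%H%M")
    return f"{project}-{experiment}-{scientist}-{stamp}-v{version}.{ext}"

# Reproduces the example name: Atlantis-LakeMeasurements-Smith-20180113-0130-v3.csv
print(build_filename("Atlantis", "LakeMeasurements", "Smith", 3, "csv",
                     when=datetime(2018, 1, 13, 1, 30)))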
The reaction conditions will be recorded and collated using a spreadsheet application and named according to each generation of reaction as follows: The various experimental procedures and associated compound characterization will be written up using the Royal Society of Chemistry standard formatting in Word documents; each document will also be exported to PDF-A. The associated NMR spectra will be collated in chronological order in a PDF-A document. |
All samples on which data are collected will be prepared according to published standard protocols in the field [cite reference]. Files will be named according to a pre-agreed convention. The dataset will be accompanied by a README file which will describe the directory hierarchy. Each directory will contain an INFO.txt file describing the experimental protocol used in that experiment; it will also record any deviations from the protocol and other useful contextual information. The microscope captures and stores a range of metadata (field size, magnification, lens phase, zoom, gain, pinhole diameter, etc.) with each image. This should allow the data to be understood by other members of our research group and add contextual value to the dataset should it be reused in the future. |
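A README of this kind can be partly generated automatically. The sketch below is an illustration only: the file names README.txt and INFO.txt follow the example above, and the layout of the listing is an assumption, not a prescribed format.

import os

def write_directory_readme(root, outfile="README.txt"):
    # Write a plain-text listing of the directory hierarchy under `root`,
    # flagging folders that do not yet contain an INFO.txt protocol description.
    lines = [f"Directory hierarchy of {os.path.abspath(root)}", ""]
    for dirpath, dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        depth = 0 if rel == "." else rel.count(os.sep) + 1
        indent = "  " * depth
        lines.append(f"{indent}{os.path.basename(dirpath) or root}/")
        if "INFO.txt" not in filenames:
            lines.append(f"{indent}  (no INFO.txt describing the protocol yet)")
    with open(os.path.join(root, outfile), "w", encoding="utf-8") as fh:
        fh.write("\n".join(lines) + "\n")

write_directory_readme(".")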
Experiments will include appropriate controls to ensure validity [brief description]. Data consistency will be assessed by comparing repeated measures. |
The quality of analytical data will be guaranteed through calibration of devices, repetition of experiments, comparison with literature/internal standards/previous data, and by peer review. |
All experimental data will be automatically imported into the institutional electronic Laboratory Information System (LIMS) from the measurement device. Methods and materials will be recorded using the institutional Electronic Lab Notebook (ELN). |
The experimental records and observations are recorded by hand-written notes followed by digitization (scanning). The analytical data are collected by the instruments that generated them; they are processed by the native programs associated with the instruments. A periodic quality control process will be applied to remove errors and redundancies. Errors include for example incorrect handling and machine malfunction. The quality control process will be documented. The quality of experimental records and observations will be controlled by repeating experiments. For NMR and X-ray, the data collection is done through instrument standardised data acquisition programs. For E-chem, UV-Vis, IR, GC, GC-MS, lab-standardized protocols will be used. |
The data from the moored sensors is stored internally in the sensors and recovered every two months, when the sensors will be cleaned and recalibrated if the data indicates a quality loss. The CO2 sensors will be cross-calibrated against atmospheric pressure. The DO and PAR sensors in the mooring will be compared to the profiled sensors and deviations detected. Temperature sensors are extremely stable and are only calibrated before and after the two years using the laboratory temperature bath, which is calibrated against the Office of Metrology in Bern every few years to 0.001 °C. The Thesis sensor data is transmitted when surfacing, via a GSM communication system, directly to the lab, where sensor deterioration is checked weekly. The instrument will be retrieved every month and the sensors cleaned. The optical sensors will be calibrated according to the manual every six months. The T-microstructure sensors do not need calibration, as the data is matched to the (very accurate) CTD temperature. Small T shifts are irrelevant, as only the spectra matter. Sensor deterioration (or frequency loss) will be checked visually and is seen in the quality of the Batchelor spectra. The very simple structure of the CSV files holding the raw data will be documented in a plain-text README file. This file, and all raw data files as they become available, will be uploaded to the Eawag Research Data Institutional Collection into one "data package", which is annotated with general metadata. Copies of the raw data files, as well as the set of calibrated, quality-controlled files stored on the group computers at ETH, will be organized in a folder structure that is also documented in a README file. At the end of the project, the entire set of calibrated, quality-controlled files will be annotated and stored on the Eawag institutional repository as well. |
All files produced during this project will be stored in our Electronic Laboratory Notebook (ELN) and Laboratory Information Management System (LIMS), openBIS. In this ELN, each scientist has a personal folder in which to organize projects and experiments. Each experiment is described in the electronic notebook, and all data related to the experiment are directly attached to it in so-called "datasets". Each dataset is immutable, so different file versions are stored in the lab notebook as different datasets with a manually generated version number. Very large datasets (hundreds of TB) are not stored directly in openBIS datasets but are linked to the experimental description using an extension to openBIS called BigDataLink. This works similarly to the git version control software: every time changes are made to the data, they need to be committed to openBIS, which automatically keeps track of the versioning. |
Digital Curation Office: data-archive@library.ethz.ch
Scientific IT Services: https://sis.id.ethz.ch/
What information is required for users (computer or human) to read and interpret the data in the future?
How will you generate this documentation?
What community standards (if any) will be used to annotate the (meta)data?
Describe all types of documentation (README files, metadata, etc.) you will provide to help secondary users to understand and reuse your data. Metadata should at least include basic details allowing other users (computer or human) to find the data. This includes at least a name and a persistent identifier for each file, the name of the person who collected or contributed to the data, the date of collection and the conditions to access the data.
Furthermore, the documentation may include details on the methodology used, information about the performed processing and analytical steps, variable definitions, references to vocabularies used, as well as units of measurement.
Wherever possible, the documentation should follow existing community standards and guidelines. Explain how you will prepare and share this information.
This relates to the FAIR Data Principles I1, I2, I3, R1, R1.2 & R1.3
Indicate all the information required to read and interpret the data (the context of the data) in the future. General documentation of the data is often compiled into a plain-text or Markdown README file. These formats can be opened by any text editor and are future-proof.
Provide the metadata standard used to describe the data (for concrete examples see the Research Data Alliance Metadata Standards Directory). If no appropriate (discipline-oriented) standard exists, you may describe the ad hoc metadata format you will use in this section. Metadata¹ may also be embedded in the data (e.g. embedded comments in code). When using, for example, the Hierarchical Data Format (HDF5), arbitrary machine-readable metadata can be embedded directly at any level.
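For instance, the following minimal Python sketch (assuming the h5py and numpy packages; the attribute names and values are invented placeholders, not a prescribed schema) attaches machine-readable metadata at the file, group and dataset levels of an HDF5 container.

import h5py
import numpy as np

# Create an HDF5 file and attach machine-readable metadata at several levels.
with h5py.File("experiment.h5", "w") as f:
    f.attrs["creator"] = "Jane Smith"              # file-level metadata (placeholder)
    f.attrs["created"] = "2018-01-13T01:30:00Z"
    grp = f.create_group("lake_measurements")
    grp.attrs["instrument"] = "CTD profiler"       # group-level metadata (placeholder)
    dset = grp.create_dataset("temperature", data=np.random.rand(100))
    dset.attrs["units"] = "degC"                   # dataset-level metadata (placeholder)
    dset.attrs["calibration_date"] = "2018-01-01"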
Describe:
the software (including its Version) used to produce the data and the software used to read it (they can be different)
the format and corresponding filename extension and its version (if possible).
The used software should be archived along with the data (if possible, depending on the software license).
Describe the automatically generated metadata, if any.
Provide the data analysis or result together with the raw data, if possible.
description of the used software,
description of the used system environment,
description of relevant parameters such as:
geographic locations involved (if applicable)
all relevant information regarding production of data.
1 Metadata refers to “data about data”, i.e., it is the information that describes the data that is being published with sufficient context or instructions to be intelligible for other users. Metadata must allow a proper organization, search and access to the generated information and can be used to identify and locate the data via a web browser or web based catalogue.
The data will be accompanied by the following contextual documentation, according to standard practice for synthetic methodology projects:
Files and folders will be named according to a pre-agreed convention XYZ, which includes, for each dataset, the identification of the researcher, the date, the study and the type of data (see section 1.2). The final dataset as deposited in the chosen data repository will also be accompanied by a README file listing the contents of the other files and outlining the file-naming convention used. |
Metadata will be tagged in XML using the Data Documentation Initiative (DDI) format. The codebook will contain information on study design, sampling methodology, fieldwork, variable-level detail, and all information necessary for a secondary analyst to use the data accurately and effectively. This will be the responsibility of:
|
IFS and OpenIFS model integrations will be run, and standard meteorological and computing performance data output will be generated. Both will be run at ECMWF, and only performance data will be made available to the public. The meteorological output will be archived in MARS, as it is standard research experiment output. The data will be used for establishing research and test code developments, and will enter project reports and generally accessible publications. The IFS will not be made available; OpenIFS is available through a dedicated license. IFS meteorological output (incl. metadata) and format follow the World Meteorological Organization (WMO) standards. Compute performance (benchmark) output will be stored and documented separately. Data will be in ASCII and maintained locally. The output will be reviewed internally, and the ECMWF facilities allow reproduction of this output if necessary. |
Two types of metadata will be considered within the frame of project X: that corresponding to the project publications, which has already been described in Section 4, and that corresponding to the published research data. In the context of data management, metadata will form a subset of the data documentation that explains the purpose, origin, description, time reference, creator, access conditions and terms of use of a data collection. The metadata that best describe the data depend on the nature of the data. For the research data generated in project X, it is difficult to establish global criteria for all data, since the nature of the initially considered data sets differs; the metadata will therefore be based on a generalised metadata schema such as the one used in Zenodo, which includes elements such as:
Additionally, a readme.txt file could be used as an established way of accounting for all the files and folders comprising the project and of explaining how the files that make up the data set relate to each other, what format they are in, whether particular files are intended to replace other files, etc. |
For every data stream (sequences of identical data files) over the entire 2-year period of data acquisition a README File will be generated which contains: (a) the sensors used (product, type, serial number), (b) the temporal sequence of the sensors (time and location, sampling interval), (c) the observations made during maintenance and repairs, and (d) details on the physical units, as well as the calibration procedure and format. This is a standard procedure which we have used in the past. |
In the data management system (openBIS ELN-LIMS), metadata are provided as attributes of the respective datasets. Based on the defined metadata schema, openBIS ELN-LIMS will be configured so that the required metadata is automatically assigned to datasets and / or manually provided by the researcher. *Information required to read and interpret data (incl. metadata standards) to be filled by researchers. |
Digital Curation Office: data-archive@library.ethz.ch
Scientific IT Services: https://sis.id.ethz.ch/
Ethical issues in research projects demand an adaptation of research data management practices, e.g. how data is stored, who can access/reuse the data and how long the data is stored. Measures to manage ethical concerns may include anonymization of data, approval by ethics committees, and formal consent agreements. You should confirm that all ethical issues in your project have been identified and describe the corresponding data management measures.
This relates to the FAIR Data Principle A1
Describe which ethical issues are involved in the research project (for example, human participants, collection/use of biological material, privacy issues (confidential/sensitive data), animal experiments, dual use technology, etc.).
For more information, see
Explain how these ethical issues will be managed, for example:
The necessary ethical authorizations will be obtained from the competent ethics committee.
Informed consent procedures will be put in place.
Personal/sensitive data will be anonymized.
Access to personal/sensitive data will be restricted.
Personal/sensitive data will be stored in a secure and protected place.
Protective measures will be taken with regard to the transfer of data and sharing of data between partners.
Sensitive data are not stored in cloud services (e.g. data related to individuals, data under a non-disclosure agreement, data infringing third-party rights, or (legal) expert reports).
Please check whether your project involves data relating to one of the following ethical issues:
If you consider that there are no ethical issues in your project, you can use the following statement:
There are no ethical issues in the generation of results from this project.
If your project involves human subjects, an ethical authorization from either the cantonal ethics commission or the institutional ethics commission (ETH Zurich Ethics Commission) is needed. This depends on whether your project is invasive/non-invasive and whether or not health-related data is collected/used.
For research involving work with human cells/tissues, a description of the types of cells/tissues used in their project needs to be provided, together with copies of the accreditation for using, processing or collecting the human cells or tissues.
Research which involves the collection or use of personal data needs to be reviewed by the cantonal ethics commission or the ETH Zurich Ethics Commission (depending on what kind of data is involved). For more information, see the ETH Zurich Ethics Commission's website (in German).
If animal experiments are conducted in the context of the research project, an authorization of the cantonal veterinarian office is needed.
(See also: ETH Zurich Animal Welfare Officer)
For more information, see the ETH Zurich Export Control website.
Please check if your project involves one of the following ethical issues:
If you consider that there are no ethical issues in your project, you can use the following statement: There are no ethical issues in the generation of results from this project. |
This project will generate data to study the prevalence and correlates of DSM-III-R psychiatric disorders, and the patterns and correlates of service utilization for these disorders, in a nationally representative sample of over 8,000 respondents. The sensitive nature of these data will require that the data be released through a restricted-use contract, to which each respondent will give explicit consent. An ethical authorization will be obtained from the cantonal ethics committee for this project. |
Research in this proposal involves the use of animals of the species mouse (Mus musculus). Animal studies will be preceded by multiple biochemical experiments in vitro and in cultured cells. Mouse experiments will only be used at advanced stages of the investigation, when few, specific and highly relevant questions can be addressed by a limited number of experiments. The PI and the research team will work in conformity with all applicable rules, guidelines and principles, such as EU Directive 2010/63/EU on the protection of animals used for scientific purposes, the Swiss federal law on animal protection (RS 455), the federal ordinance on animal protection (RS 455.1), and the federal ordinance on animal experimentation, production and housing (RS 455.163). All animal experiments will only be initiated after having received the approval of the cantonal and federal authorities.
Details on animal usage: In performing the experiments, we strive to strictly adhere to the 3Rs principle of Replacement, Refinement and Reduction.
Training: All researchers and technicians working with the animals receive proper animal welfare training in conformity with DFE Ordinance 455.109.1 on 'Training in animal husbandry and in the handling of animals'. |
Environmental protection and safety: The PI assures that appropriate health and safety procedures conforming to relevant local/national guidelines/legislation are followed for staff involved in this project. The health and safety of all participants in the research (investigators, subjects involved or third parties) must be a priority in all research projects (see also the ETH regulations on health and safety). The project will be conducted in collaboration with ETH's Safety, Security, Health and Environment department (SSHE). |
All data are anonymized, and as such, we are in line with the Swiss Federal Act on Data Protection as described on the page of the Federal Data Protection and Information Commissioner (FDPIC). |
The project respects all the constraints and requirements laid down in the Swiss Federal Act on Data Protection and supervised by the Federal Data Protection and Information Commissioner. Indeed, as the purpose of the project does not relate to individuals and the published results do not allow participants to be identified by name, we have communicated with all participants, giving them the following basic information:
|
The project is a medical research project and respects all the rules and regulations laid down in the Swiss Federal Act on Data Protection, supervised by the Federal Data Protection and Information Commissioner. We only use and process data from individuals who have given their explicit consent. |
Dataset X was obtained from the BAFU and is subject to a confidentiality agreement to keep information about the sampling locations secret. We are allowed to share this information among researchers involved in the project. The dataset is stored in a location to which only project members have access; please refer to section 2.2 for technical details about access restrictions. All project members will be informed about the sensitivity of these data and agree not to copy them to other places. This dataset and intermediate datasets containing the sampling locations will be excluded from the data package published along with the final report and replaced with instructions on how to obtain them from the BAFU. |
ETH Zurich Guidelines on scientific integrity, RSETHZ 414
The ETH Zurich Compliance Guide
Federal Data Protection and Information Commissioner
Ethics Commission (Website or Contact: raffael.iturrizaga@sl.ethz.ch)
Website of Legal Office (e.g. for Data Protection issues)
Website of the Animal Welfare Officer
Website of Safety, Security, Health, Environment department (SSHE / SGU)
What are the main concerns regarding data security, what are the levels of risk and what measures are in place to handle security risks?
How will you regulate data access rights/permissions to ensure the security of the data?
How will personal or other sensitive data be handled to ensure safe data storage and transfer?
If you work with personal or other sensitive data, you should outline the security measures in place to protect the data. Please list the formal standards which will be adopted in your study; an example is ISO 27001 (information security management). Furthermore, describe the main processes or facilities for the storage and processing of personal or other sensitive data. (This relates to the FAIR Data Principle A1.)
The main concerns regarding data security are data availability, integrity and confidentiality, in particular the levels of risk involved and the technical and organizational measures named in the Swiss Federal Act on Data Protection.
The main concerns regarding data security are data availability, integrity and confidentiality.
Define whether:
You may choose some of the following options:
Regarding anonymization / encryption:
Regarding access rights:
Regarding storage and back-up:
In May 2018, the EU General Data Protection Regulation (GDPR, Regulation (EU) 2016/679) will come into force. This already now influences future cooperation with any EU-based partners and will be implemented in Swiss law, as well.
GDPR introduces an approach of “Privacy by Design” for parties working with personal or other sensitive data, requiring projects to define their data protection measures from the beginning.
Where the GDPR applies, you must outline in a Data Protection Impact Assessment (DPIA; text or table, see the example of the ICRC) the risks to the rights of your study subjects and the security measures foreseen to protect the data. This is crucial for your project: the fewer the risks, the better; the more data safeguards you can apply, the better; and the earlier you apply them, the better.
(Cf. Art 35 of the EU General Data Protection Regulation entering into force May 2018)
The data will be processed and managed in a secure non-networked environment using virtual desktop technology. |
All interviewees and focus group participants will sign a consent form approved by the School ethics committee. We have guaranteed anonymity to our interviewees and focus group participants; therefore we will not be depositing .wav files, as this would compromise that guarantee. However, anonymised transcripts of the interviews and focus groups will be deposited. We will make sure consent forms make provision for future sharing of data. All identifying information will be kept in a locked filing cabinet and not stored with electronic files. |
Data will be stored on the centralized file storage system managed by our institutional or school [please specify, e.g. life sciences or basic sciences] IT department. Access to the data is managed through the ETH identity management system, a secure system following best practices in identity management. Our central storage facility has redundancy and mirroring and is monitored. |
Research records will be kept confidential, and access will be limited to the PI, primary research team members, and project participants. Data will be housed on a local server controlled by the PI and will be accessible via SSH and VPN. Data containing identifiable information, or information covered by an NDA, will be held in an encrypted format (symmetric, AES-256, key on the local server, passphrase known only to the PI and primary research team members). |
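As an illustration of this kind of measure (a minimal sketch using Python's cryptography package; the file name, passphrase handling and parameters are placeholders, not the project's actual setup), a file can be encrypted with AES-256-GCM using a key derived from a passphrase:

import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_file(path, passphrase):
    # Derive a 256-bit key from the passphrase (salt is random and not secret).
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=480_000)
    key = kdf.derive(passphrase.encode())
    # Encrypt the file contents with AES-256-GCM (authenticated encryption).
    nonce = os.urandom(12)
    with open(path, "rb") as fh:
        ciphertext = AESGCM(key).encrypt(nonce, fh.read(), None)
    # Store salt and nonce alongside the ciphertext; neither needs to be secret.
    with open(path + ".enc", "wb") as fh:
        fh.write(salt + nonce + ciphertext)

# Demo with a placeholder file and passphrase.
with open("example_record.txt", "w") as fh:
    fh.write("placeholder sensitive content\n")
encrypt_file("example_record.txt", "passphrase-known-only-to-the-team")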
The data we are generating, processing and storing in this project do not pose a particular data security risk. Day-to-day work is conducted on standard-issue workstations in the ETH environment with standard enterprise-grade access control. The ETH network is a secure system following best practices in identity management, and the central storage facility has redundancy and mirroring and is monitored. At different stages, data will be stored in the Eawag Institutional Collection (see section 1.3). This system is accessible only from within the Eawag network and comprises several virtualized Linux systems that receive real-time security patches. Access control is handled according to recognized best practices of server administration. "Notoriously Toxic", NEH ODH Start-up Grant, Level 1, https://www.neh.gov/files/dmp_from_successful_grants.zip |
All data generated in the project will be stored in our openBIS ELN-LIMS. This operates in a client-server model, installed and maintained by the ETH Zurich IT Services on ETH Zurich infrastructure. Researchers can access openBIS via any of the most common web browsers. openBIS requires user authentication with ETH Zurich credentials and provides user rights management, so that different users can have different access to all or different parts of the system, as required. Below is a description of the default openBIS roles, which can be modified upon request:
openBIS does not offer any specific option for sensitive data, but the data will be encrypted prior to upload to openBIS. Furthermore, all operations on the system (incl. which users log in and when) are logged, so that it is fully transparent who did what to the data and when. The data stored in openBIS is physically located on a NAS (network attached storage) provided by the ETH Zurich IT Services. The access to the share’s data is governed by the latest security best practices and only a limited number of employees of the ETH Zurich IT services have access to that share. |
ETH Zurich Guidelines on scientific integrity, RSETHZ 414
The ETH Zurich Compliance Guide
Digital Curation Office: data-archive@library.ethz.ch
Website of the Legal Office (e.g. for Data Protection issues)
IT Support Groups in the Departments
Website of the Scientific IT Services
Who will be the owner of the data?
Which licenses will be applied to the data?
What restrictions apply to the reuse of third-party data?
Outline the owners of the copyright and Intellectual Property Right (IPR) of all data that will be collected and generated including the licence(s). For consortia, an IPR ownership agreement might be necessary. You should comply with relevant funder, institutional, departmental or group policies on copyright or IPR. Furthermore, clarify what permissions are required should third-party data be re-used.
This relates to the FAIR Data Principles I3 & R1.1
Attaching a clear license to a publicly accessible data set allows others to know what can legally be done with its content. When copyright is applicable, Creative Commons licenses are recommended. However, Creative Commons licenses are not recommended for software.
Amongst all Creative Commons licenses, CC0 “no copyright reserved” is recommended for scientific data, as it allows other researchers to build new knowledge on top of a data set without restriction. It specifically allows aggregation of several data sets for secondary analysis. Several data repositories impose the CC0 license to facilitate reuse of their content.
In order to enable a data set to be cited, and therefore to receive recognition for its release, it is recommended to attach a CC BY "Attribution" license to the record describing the dataset (the metadata). Data sets can be cited directly; however, to increase their visibility and reusability, it is recommended to describe them in a separate document licensed under CC BY "Attribution", such as a data paper or a record in the institutional repository.
When the data has the potential to be used as such for commercial purposes and you intend to exploit this yourself, the CC BY-NC license allows you to retain exclusive commercial use.
Reuse of third-party data may be restricted. If authorised, the data must be shared according to the third party's original requirements or license.
The research is not expected to lead to patents. Other Intellectual Property Rights (IPR) issues will be dealt with in line with institutional recommendations. As the data is not subject to a contract and will not be patented, it will be released as open data under the Creative Commons CC0 license. |
This project is being carried out in collaboration with an industrial partner. The intellectual property rights are set out in the collaboration agreement. The intellectual property generated from this project will be fully exploited with help from the institutional Technology Transfer Office. The aim is to patent the final procedure and then publish the work in a research journal and to publish the supporting data under an open Creative Commons Attribution (CC BY) license. |
The data are suitable for sharing. They are observational data (hence unique) and could be used for other analyses or for comparison of climate change effects, among many things. Reuse opportunities are vast. For this reason, we aim to allow the widest reuse of our data and will release them under Creative Commons CC0. |
The source code for analysis will most likely utilize the GNU Scientific Library (GSL), which is licensed under the GNU General Public License (GPL). Therefore we will make our analysis software available under the GPL as well. |
ETH Zurich Guidelines on scientific integrity, RSETHZ 414
The ETH Zurich Compliance Guide
Digital Curation Office: data-archive@library.ethz.ch
Website of ETH transfer (e.g. for research contracts)
What is your storage capacity and where will the data be stored?
What are the back-up procedures?
Please mention what the needs are in terms of data storage and where the data will be stored.
Please consider that data storage on laptops or external hard drives, for example, is risky. Storage managed by IT teams is safer. If external services are used, it is important that this does not conflict with the policy of each entity involved in the project, especially concerning sensitive data.
Please specify your back-up procedure (frequency of updates, responsibilities, automatic/manual process, security measures, etc.)
For ETH Zurich, see storage options here and consult the IT Support Group of your Department.
Storage and back up will be in three places:
[Name of Researcher] will be responsible for the storage and back-up of data. This will be done weekly. Backups on the institutional infrastructure are automated using the rsync tool. |
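A backup of this kind could be scripted and invoked by a scheduled job. The following sketch is an illustration only: the paths are placeholders, rsync is assumed to be installed, and the exact options would be chosen by the responsible researcher.

import subprocess

def backup_project(source, destination):
    # Mirror `source` to `destination` with rsync: -a preserves permissions and
    # timestamps, --delete removes files that no longer exist in the source so
    # the backup is an exact mirror.
    subprocess.run(["rsync", "-a", "--delete", source, destination], check=True)

# Placeholder paths; in practice this would be invoked by a weekly scheduled job.
# backup_project("/home/project_data/", "backup-server:/backups/project_data/")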
Original notebooks and hard copies of all NMR and mass spectra are stored in the PI's laboratory. Additional electronic data will be stored on the PI's computer, which is backed up daily. Additionally, the laboratory will make use of the PI's lab server space at the institution's storage facility as a second repository for data storage. The PI's lab has access to up to 1 terabyte of storage, which can be expanded if needed. All project data will be stored using the institution's Collaborative Storage, which is backed up on a regular basis. |
All our data will be uploaded to our Electronic Laboratory Notebook. The data is stored on institutional storage facilities and it is set up by our IT support to be automatically backed up daily. |
The ETH centralized file storage service follows best practices and standards regarding storage, for instance high availability, multiple levels of data protection, and partnership with providers for support. The service is managed centrally by the hosting department of the Vice Presidency for Information Systems (VPSI) and ensures security, coherence, pertinence, integrity and high availability. Two distinct storage locations exist on the ETH campus, with replication between the two. (Please note that these two different storage options correspond to different payments.) Pairing and clustering of physical servers guarantees local redundancy of data. Moreover, volume mirroring protects data in case of a disaster on the primary site; the copy is asynchronous and automatic and runs every two hours. The file servers are virtualized to separate logical data from physical storage, and RAID groups ensure physical storage protection: data is split into chunks written on many disks with double parity. Moreover, volume snapshots allow users to restore previous versions if need be. For specific needs, an optional backup on tape can also be made. Access to the data is managed by the owner of the volumes through the identity management system of ETH. Any person who needs access to data must therefore be a registered and verified user in the identity management system. |
Our team stores the data to be analyzed, along with the results, using Eawag file services. [copy text from Eawag standard snippet "file services - backup"] To easily share data with our collaborators in Fribourg, we synchronize those data with a folder on SWITCHdrive. Since these are sensitive personal data, the folder being synchronized contains encrypted files (public-key encryption, with key pairs created specifically for this project). |
by the ETH Zurich IT Services. openBIS uses a PostgreSQL database that stores all metadata. This database is backed up ("pg_dump") every night, with a 7-day retention of the dumps, and fully backed up twice a week with a backup retention of 20 days. The full backup procedure includes point-in-time recovery, which allows a finer granularity (down to minutes) of data recovery in case of a disaster. The database backup is stored on the NAS (network-attached storage) provided by the ETH Zurich IT Services. The same NAS is used to store the data uploaded to openBIS. This network-attached storage is snapshotted every night with a 7-day retention, and data is backed up to a proprietary tape library with a retention of 90 days. Data which is no longer actively needed is moved to long-term storage (i.e. tapes). The tape library to which openBIS moves the data has a read-only replica in a different geographical location in order to minimize any data loss. *For data linked to openBIS with the BigDataLink tool, please provide details of the data location and back-up. |
Digital Curation Office: data-archive@library.ethz.ch
Website of the Scientific IT Services
What procedures would be used to select data to be preserved?
What file formats will be used for preservation?
Please specify which data will be retained, shared and archived after the completion of the project and the corresponding data selection procedure (e.g. long-term value, potential value for re-use, obligations to destroy some data, etc.). Please outline a long-term preservation plan for the datasets beyond the lifetime of the project.
In particular, comment on the choice of file formats and the use of community standards.
Describe the procedure (appraisal methods, selection criteria, etc.) used to select the data to be preserved. Note that preservation does not necessarily mean publication (e.g. personal sensitive data may be preserved but never published), but publication generally implies preservation.
This section should answer the following questions:
What data will be preserved in the long term - selection criteria, in particular:
Reusability of the data: quality of metadata, integrity and accessibility of data, license allowing reuse, readability of data (chosen file formats),
Value of the data: indispensable data, completeness of the data or data set, uniqueness, possibility to reproduce the data in the same conditions and at what cost, interest of the data, potential of reuse
Ethical considerations
Stakeholders requirements
Costs: additional costs incurred for depositing data in a repository or data archive of your choice (cost anticipation and budgeting)
Selection has to be done together with, or by, the data producer or someone else with deep specialist knowledge.
What data curation process(es) will be applied, i.e.: anonymization (if necessary), metadata improvement, format migration, integrity check, measures to ensure accessibility.
Data retention period (0, 5, 10, 20 years or unlimited)
Decision to make the data public
Use of sensitive data (i.e. privacy issues, ethics, or intellectual property laws)
Definition of the responsible person for data (during the process of selection and after the end of the project)
Other criteria are available from the Digital Curation Centre (UK). In addition, select appropriate preservation formats (see section 1.1) and data description or metadata (see section 1.3).
Data will be stored for a minimum of three years beyond the award period, per the funder's guidelines. If inventions or new technologies arise in connection with the data, access to the data will be restricted until invention disclosures and/or provisional patent filings are made with the institutional Technology Transfer Office (TTO). |
We will preserve the data for 10 years on the university's servers and also deposit it in an appropriate data archive at the end of the project (e.g. Zenodo, see section 4.1 below). Where possible, we will store files in open archival formats, e.g. Word files converted to PDF-A, simple text files encoded in UTF-8 and Excel files converted to CSV. In cases where this is not possible, we will include information on the software used and its version number. |
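A format migration of this kind can be scripted. The sketch below is an illustration only; the file names are placeholders and the pandas and openpyxl packages are assumed. It converts every sheet of an Excel workbook into UTF-8 encoded CSV files:

import pandas as pd

def excel_to_csv(xlsx_path, stem):
    # Convert every sheet of an Excel workbook into a UTF-8 encoded CSV file,
    # a format better suited to long-term preservation.
    sheets = pd.read_excel(xlsx_path, sheet_name=None)  # dict: sheet name -> DataFrame
    for name, df in sheets.items():
        df.to_csv(f"{stem}_{name}.csv", index=False, encoding="utf-8")

# Placeholder file name; requires the pandas and openpyxl packages.
# excel_to_csv("measurements.xlsx", "measurements")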
Data will be stored on ETH servers and will be preserved for the long term at the ETH Data Archive. |
Digital Curation Office: data-archive@library.ethz.ch
IT Support Groups in the Departments
On which repository do you plan to share your data?
How will potential users find out about your data?
Consider how and on which repository the data will be made available. The methods applied to data sharing will depend on several factors such as the type, size, complexity and sensitivity of data.
Please also consider how the reuse of your data will be valued and acknowledged by other researchers.
This relates to the FAIR Data Principles F1, F3, F4, A1, A1.1, A1.2 & A2
It is recommended to publish data in well-established (or even certified) domain-specific repositories, if available:
re3data is a repository directory that allows you to select repositories by subject and level of trust (e.g. certifications)
ETH Zurich researchers are encouraged to publish data in ETH’s own Research Collection repository to ensure full compliance with ETH regulations.
In domains for which no suitable subject repositories are available, generalist repositories are available.
Among the most common used:
Zenodo (free, maximum 50GB/dataset, hosted by CERN)
Dryad ($120 for the first 20 GB and $50 for additional GB; non-profit organization)
Figshare (free upload, maximum 5GB / dataset, commercial company)
The SNSF does not pay for storage in commercial data repositories (even though data preparation costs are eligible). Check the SNSF's criteria for non-commercial repositories here (section 5.2). If you choose a commercial repository, read the terms of service carefully to check whether they meet your needs and your institution's requirements, as well as your institutional (data) policy.
In order to make your data findable by other users, it is important that
each data package and publication has a DOI (or similar persistent identifier) assigned,
they are deposited Open Access in a repository harvested by the main data services (e.g. OpenAIRE, EUDAT, …).
Some of the ongoing data will be shared on [Researcher1]'s GitHub repository (results and code from the project, data from Twitter searches). Major revisions of this repository will be backed up using the GitHub-Zenodo connection (see: https://guides.github.com/activities/citable-code/). All other data will be published on Zenodo under the CC0 license. We chose Zenodo because it supports the FAIR principles (http://about.zenodo.org/principles/). Immediate deposit at the end of the project aims to minimize the risk of data loss, while the 2-year embargo guarantees that we are the first to exploit our data. Zenodo implements long-term preservation features, notably bitstream preservation. |
Datasets from this work which underpin a publication will be deposited in the ETH Zurich Research Collection and made public at the time of publication. Data in the repository will be stored in accordance with the funder's data policies. Files deposited in the Research Collection will be given a Digital Object Identifier (DOI). The retention schedule for data will be set to 10 years from the date of deposition in the first instance, with possible extension for datasets which remain in regular use. The DOI issued to datasets in the repository can be included as part of a data citation in publications, allowing the datasets underpinning a publication to be identified and accessed. Metadata about datasets held in the Research Collection will be publicly searchable and discoverable and will indicate how and on what terms the dataset can be accessed. |
For this project, the National Geoscience Data Centre (NGDC) (see http://www.bgs.ac.uk/services/ngdc/home.html) is the most suitable repository. As it is adapted to geodata, it facilitates storage and allows interactive geographical search. In addition, many other researchers in our field are familiar with it. This repository requires deposition under the Open Government Licence (see: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/), which demands attribution when the data is reused (our dataset must be cited, similarly to the CC BY license). |
SNSF’s criteria for non-commercial respositories
Digital Curation Office: data-archive@library.ethz.ch
Website of the Research Collection
Questions you might want to consider
Under which conditions will the data be made available (timing of data release, reason for delay if applicable)?
Data have to be shared as soon as possible, but at the latest at the time of publication of the respective scientific output.
Restrictions may be only due to legal, ethical, copyright, confidentiality or other clauses.
Consider whether a non-disclosure agreement would give sufficient protection for confidential data.
This relates to the FAIR Data Principles A1 & R1.1
You may mention specifically the conditions under which the data will be made available:
there are no sensitive data
the data are not available at the time of publication
the data are not available before publication
the data are available after the embargo of …
the data are not available because of the patent of … for a period of…
Data which underpins any publication will be made available at the time of publication. All unpublished data will be deposited in a data repository 12 months after the end of the award. |
Astronomical data will be released, but under an embargo of one year for reasons of priority of exploitation. |
Personal data will be anonymized before release, based on the recommendations of the Federal Act on Data Protection (FADP), https://www.admin.ch/opc/en/classified-compilation/19920153/index.html. The sdcMicro package (https://cran.r-project.org/package=sdcMicro) will be used to assess the risk of identification: we will make sure that each data set has a k-anonymity of at least 3. |
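To illustrate what such a check means (a sketch in Python with pandas rather than the sdcMicro package mentioned in the example, and with invented column names), k-anonymity is simply the size of the smallest group of records sharing the same combination of quasi-identifier values:

import pandas as pd

def k_anonymity(df, quasi_identifiers):
    # k is the size of the smallest group of records sharing the same
    # combination of quasi-identifier values; the example above targets k >= 3.
    return int(df.groupby(quasi_identifiers).size().min())

# Hypothetical anonymized extract with invented column names.
records = pd.DataFrame({
    "age_band":  ["20-29", "20-29", "20-29", "30-39", "30-39", "30-39"],
    "region":    ["ZH", "ZH", "ZH", "BE", "BE", "BE"],
    "diagnosis": ["A", "B", "A", "C", "A", "B"],
})
print(k_anonymity(records, ["age_band", "region"]))  # prints 3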
The extensive household survey about water-borne diseases poses severe challenges with regard to anonymization, since simple pseudonymization might not be sufficient to guard against the identification of individual households by an inference attack that uses other available information. Therefore we will only be able to publish summary statistics together with the associated article. If a sufficiently anonymized dataset turns out to still hold scientific value, we will publish it no later than one year after completion of the project. |
ETH Zurich Guidelines on scientific integrity, RSETHZ 414
The ETH Zurich Compliance Guide
Digital Curation office: data-archive@library.ethz.ch
Ethics Commission (Website or Contact: raffael.iturrizaga@sl.ethz.ch)
[CHECK BOX]
The SNSF requires that repositories used for data sharing conform to the FAIR Data Principles. For more information, please refer to the SNSF's explanation of the FAIR Data Principles.
You can find certified repositories in Re3data.org, an exhaustive registry of data repositories.
ETH Zurich’s Research Collection also complies with the FAIR Principles.
[RADIO BUTTON yes/no]
If you do not choose a repository maintained by a non-profit organization, you have to provide reasons for that.
One possible reason would be to ensure the visibility of your research, if your research community standardly publishes data in a well-established but commercial digital repository.
Please note that the SNSF supports the use of non-commercial repositories for data sharing. Costs related to data upload are only covered for non-commercial repositories. Check the SNSF’s criteria for non-commercial repositories (section 5.2).
Digital Curation Centre glossary
List of useful tools prepared by the Swiss DLCM project