Matches in Nanopublications for { ?s <http://www.w3.org/2000/01/rdf-schema#comment> ?o ?g. }
- Archimer comment "Archimer is the institutional repository of Ifremer (French Research Institute for Exploitation of the Sea). It archives and provides free access to a wide range of scientific publications, including articles, theses, conference papers, and internal reports related to marine sciences. Archimer is part of the Open Access movement, aiming to make scientific documentation freely available and widely disseminated via the web." assertion.
- DL comment "DeSci Labs was founded in July 2021 in Switzerland. Our goal is to accelerate scientific progress by making research more accessible, innovative, and reliable. Achieving this requires a fundamental rethinking and rebuilding of the scientific publishing process, including better incentives, workflows, and infrastructure. We build an open-access publishing environment for versionable research objects that allows all research outputs—including data and code—to be easily shared, checked, updated, curated, and archived. Our technology is radically open, FAIR by design, and ensures that best open science practices are incredibly easy and rewarding. Through our partnership with OpenAlex, we integrate all open science content into our environment and provide valuable, unique analytics, starting with novelty scores. DeSci Labs was started by scientists for scientists. Since our inception in 2021, we have grown to a team of 16 engineers and scientists." assertion.
- CMS-INSTAC-MS comment "The Copernicus Marine In Situ TAC Metadata Schema specifies the NetCDF file format of Copernicus Marine In Situ TAC used to distribute ocean In Situ data and metadata. It documents the standards used herein; this includes naming conventions as well as metadata content. It was initiated in March 2019, based on OceanSITES and Argo user's manuals." assertion.
- i6z comment "In IUCLID 6, the exchange of chemical information, from either datasets or dossiers, is facilitated via a zip/archive file that has the extension i6z, which stands for IUCLID 6 zip. Chemical information can be exported as an i6z file from an installation of IUCLID 6, and then imported into another. An i6z file has a well-defined and structured format that contains information on the IUCLID 6 entities, documents, and attachments it contains. The export feature of IUCLID 6 provides an advanced filtering mechanism that allows a user to select which of the interrelated entities are included in the archive." assertion.
- FAIRsharing.250a8c comment "The Bioregistry is an open source, community curated registry, meta-registry, and compact identifier resolver." assertion.
- RO comment "Research Objects aim to improve reuse and reproducibility by: 1. Supporting the publication of more than just PDFs, making data, code, and other resources first class citizens of scholarship. 2. Recognizing that there is often a need to publish collections of these resources together as one shareable, cite-able resource. 3. Enriching these resources and collections with any and all additional information required to make research reusable, and reproducible! Research objects are not just data, not just collections, but any digital resource that aims to go beyond the PDF for scholarly publishing!" assertion.
- CMDS comment "The Copernicus Marine Data Store is a comprehensive platform that provides free, open, and systematic reference information on the state, variability, and dynamics of the global ocean and European regional seas. It covers three main areas: Blue Ocean (physical aspects such as temperature, salinity, sea surface height, and currents); White Ocean (sea ice data); Green Ocean (biogeochemical data including nutrients, plankton, and oxygen levels). The platform offers a wide range of products derived from numerical models, in-situ observations, and satellite data. Users can access data through an intuitive interface that allows for filtering by variables, time range, and area of interest. The Copernicus Marine Data Store is designed to support various applications, from scientific research to operational oceanography, and is part of the broader Copernicus Marine Service." assertion.
- EMMC comment "The non-profit Association, EMMC ASBL, was created in 2019 to ensure continuity, growth and sustainability of EMMC activities for all stakeholders including modellers, materials data scientists, software owners, translators and manufacturers in Europe. The EMMC considers the integration of materials modelling and digitalisation critical for more agile and sustainable product development." assertion.
- DataModel comment "A data model is an abstract model that organises elements of data and standardises how they relate to one another and to the properties of real world entities" assertion.
- DLiteDatamodelService comment "A FastAPI-based REST API service running on http://onto-ns.com/. Its purpose is to serve data models (also called entities) from an underlying database." assertion.
- LS-Login comment "The Life Science Login enables researchers to use their home organisation credentials or community or other identities (e.g. Google, Linkedin, LS ID) to sign in and access data and services they need. It also allows service providers (both in academia and industry) to control and manage access rights of their users and create different access levels for research groups or international projects." assertion.
- SEEK-ID comment "A persistent, unique identifier assigned to digital objects within SEEK-based platforms, such as FAIRDOMHub or IBISBAKHub. SEEK is an open-source data management system used primarily in research infrastructures for sharing, managing, and linking research outputs (e.g. datasets, models, SOPs, and publications)." assertion.
- _2 comment "Sri Widiasih, N. N., Adiputra, I. G., & Kiriana, I. N. (2022). Makna Simbolik Pratima Hyang Ratu di Pura Dadia Se-desa adat Kerobokan Kabupaten Badung. Jurnal Penelitian Agama Hindu, 6(1), 45–51. https://doi.org/10.37329/jpah.v6i1.1527" assertion.
- FHIR_HL7 comment "Fast Healthcare Interoperability Resources (FHIR) is a standardized framework developed by HL7 (Health Level Seven International) for exchanging, sharing, and integrating healthcare information. It enables interoperability between different healthcare systems by using modern web-based technologies like RESTful APIs, JSON, XML, and RDF." assertion.
- RAG7srcMhYZqsqWoNVs_dh8XwM359JGjLwaiGZ8yxctuU comment "Totally new comment for nano session" assertion.
- Lifelines-Data-Catalogue comment "Lifelines data catalogue is a search engine that can be used by researchers to select the data they require for their research. The data catalogue also publishes data developed by researchers who have created new variables through their research (i.e. secondary data)." assertion.
- IBISBAkHub comment "The IBISBA Knowledge Hub (IBISBAKHub) is a data management platform designed to help researchers enhance the management of their research data. It offers an improved way to share, register, and organize the data and assets generated from research projects. The IBISBAKHub’s primary objective is to promote the adoption and implementation of FAIR Data principles." assertion.
- FAIRsharing.Mkl9RR comment "The Ontology Lookup Service (OLS) is a repository for biomedical ontologies that aims to provide a single point of access to the latest ontology versions. You can browse the ontologies through the website as well as programmatically via the OLS API. In 2023 OLS was updated to scale better and with a new user interface. OLS is used within life sciences but also in the fields of chemistry and engineering. Code is available under an Apache 2.0 licence." assertion.
- MSP-EU comment "The MSP platform is a service promoted by the EU Commission to foster the MSP process in the member states, starting from the MSP Directive 2014/89. FAIR data sharing is crucial in the MSP planning process, and many of the mapped resources are data portals. Also EMODnet HU." assertion.
- NPDSpecimens_Flowchart comment "This resource describes the flowchart developed within the ITINERIS Project by the DiSSCo community of the Institute of Marine Sciences (CNR-ISMAR) for the management of physical and digital samples." assertion.
- PINK_Community comment "PINK Horizon Europe project" assertion.
- ResearchData comment "Any data created in research, to be made FAIR. It is agnostic to data format or type." assertion.
- PlannedResearchData comment "Specification of research data that does not yet exist. Format and type agnostic. The motivation for this is that it allows for specification of data even before it is generated, e.g. in the planning of an experiment or specification of a model input or output." assertion.
- FAIRsharing.88ea35 comment "The British natural history collection is one of the most important in the world, documenting 4.5 billion years of life, the Earth and the solar system. Almost all animal, plant, mineral and fossil groups are represented. The portal's main dataset consists of specimens from the Museum's collection database, with over 4 million records from the Museum’s Palaeontology, Mineralogy, Botany, Entomology and Zoology collections." assertion.
- PINKAnnotationSchema comment "This annotation schema is used to annotate data as part of the PINK project. It is based on DCAT for basic accessibility and enhanced with domain- and application-specific terms." assertion.
- PINK_KB comment "The PINK knowledge base, implemented as a GraphDB triplestore, is used to ...." assertion.
- DLite comment "A lightweight data-centric framework for semantic interoperability. DLite allows representing data and metadata with simple but formalised data models, making it possible to decouple the (meta)data from how it is serialised. It includes a rich and easily extendable plugin system for loading/writing (meta)data to different storage backends (like JSON, BSON, YAML, RDF, MinIO, MongoDB, PostgreSQL, Redis, CSV/Excel, ...). DLite enhances the reusability of storage plugins by a clear separation between data transfer (protocol) and loading/writing. This makes it possible to use the same file-based storage plugin against, for instance, the local file system or an SFTP or HTTP server. Semantic interoperability and automated data transformations are achieved by mapping DLite data models and/or their properties to classes defined in ontologies. By combining mappings with a library of reusable mapping functions, fully automated and very powerful data transformations and integrations can be achieved. DLite also includes a collection of tools for e.g. validation of data models and generation of code for handling I/O in C and Fortran programs. DLite is written in C, but includes bindings to Python and Fortran. It is commonly used from Python and available under a permissive MIT license." assertion.
- DiSSCo-ITINERIS_MetadataCatalog comment "DiSSCo-ITINERIS is part of the Next Generation EU (PNRR) project ITINERIS - Italian Integrated Environmental Research Infrastructures System, aimed at coordinating and integrating the Italian nodes of 22 European Research Infrastructures (RI). ITINERIS will build the Italian Hub of Research Infrastructures in the environmental scientific domain for the observation and study of environmental processes in the atmosphere, marine domain, terrestrial biosphere, and geosphere, providing access to data and services and supporting the Country to address current and expected environmental challenges. In the frame of this project, the DiSSCo-ITINERIS website hosts the results of the three activities (Activity 6.4, 6.5 and 6.6) taking part in the construction of the Italian node of the European Research Infrastructure DiSSCo-ERIC (Distributed System of Scientific Collections - www.dissco.eu)." assertion.
- __assertion comment "IHC shows Not_detected protein expression of ENSG00000000003 in lung (macrophages) with an Approved evidence/reliability" __head.
- TripperAnnotationSchema comment "Basic annotation schema used in Tripper for handling data and metadata in a knowledge base. Based on DCAT-AP, so compatible with all DCAT-aware services." assertion.
- FAIRsharing.hzdzq8 comment "Schema.org is a collaborative, community activity with a mission to create, maintain, and promote schemas for structured data on the Internet. In addition to people from the sponsoring companies, there is substantial participation by the larger web community, through public mailing lists such as public-vocabs@w3.org and through GitHub. Search engines including Bing, Google, Yahoo! and Yandex rely on schema.org markup to improve the display of search results, making it easier for people to find the right web pages. Since April 2015, the W3C Schema.org Community Group is the main forum for schema collaboration, and provides the public-schemaorg@w3.org mailing list for discussions." assertion.
- FAIRDOMHub comment "The FAIRDOMHub is a publicly available repository managed and supported by the FAIRDOM consortium (https://fair-dom.org/). It is built using the FAIRDOM-SEEK software (http://fairdomseek.org), an open-source web platform for storing, sharing and publishing research assets of biology projects. The assets include FAIR (Findable, Accessible, Interoperable and Reusable) data, operating procedures and models. FAIRDOMHub enables researchers to organize, share and publish data, models and protocols, interlink them in the context of the biology investigations that produced them, and to interrogate them via API interfaces. By using the FAIRDOMHub, researchers can achieve more effective exchange with geographically distributed collaborators during projects, ensure results are sustained and preserved, and generate reproducible publications that adhere to the FAIR guiding principles of data stewardship. FAIRDOMHub includes special support for the Systems Biology community." assertion.
- NCTP-NIVA comment "NCTP aims to consolidate and develop new tools and approaches for characterizing the Source To Outcome Pathway (STOP)." assertion.
- MWCCS comment "The Combined Cohort Study aims to spur new scientific discoveries by sharing data and biospecimens from the MACS and WIHS research groups. The NHLBI encourages early career investigators to use these resources for innovative research ideas and to generate preliminary data for large grant applications." assertion.
- ACTG-portal comment "ACTG is a global clinical trials network that conducts research to improve the management of HIV and its comorbidities; develop a cure for HIV; and innovate treatments for tuberculosis, hepatitis B, and emerging infectious diseases." assertion.
- ACTG comment "ACTG is a global clinical trials network that conducts research to improve the management of HIV and its comorbidities; develop a cure for HIV; and innovate treatments for tuberculosis, hepatitis B, and emerging infectious diseases." assertion.
- FAIRsharing.wkggtx comment "Dryad is an open-source, community-led data curation, publishing, and preservation platform for CC0 publicly available research data. Dryad has a long-term data preservation strategy, and is a Core Trust Seal Certified Merritt repository with storage in US and EU at the San Diego Supercomputing Center, DANS, and Zenodo. While data is undergoing peer review, it is embargoed if the related journal requires / allows this. Dryad is an independent non-profit that works directly with: researchers to publish datasets utilising best practices for discovery and reuse; publishers to support the integration of data availability statements and data citations into their workflows; and institutions to enable scalable campus support for research data management best practices at low cost. Costs are covered by institutional, publisher, and funder members, otherwise a one-time fee of $120 for authors to cover cost of curation and preservation. Dryad also receives direct funder support through grants." assertion.
- _2 comment "Reference: Suryadi, S. (1998), "Rabab Pariaman". In: McGlynn John H. (Ed.), Language and Literature; Indonesian Heritage Series. Singapore: Archipelago Press. 66-67." assertion.
- _1 comment "Suryadi, S. (1998), Rabab Pariaman. In: McGlynn John H. (Ed.), Language and Literature; Indonesian Heritage Series. Singapore: Archipelago Press. 66-67. https://niadilova.wordpress.com/2017/03/13/rabab-pariaman-senjakala-sebuah-genre-sastra-lisan-minangkabau/ https://www.youtube.com/watch?v=Z7cqV-hOugE" assertion.
- _2 comment "https://www.youtube.com/watch?v=Z7cqV-hOugE Junus, Umar (1984), Kaba dan sistem sosial Minangkabau: suatu problema sosiologi sastra. Jakarta: Balai Pustaka." assertion.
- EHN-ERDDAP-LA comment "A local username/password combination operated by the European High Frequency Radar Node ERDDAP data server." assertion.
- EHN-THREDDS-LA comment "A local username/password combination operated by the European High Frequency Radar Node THREDDS data server." assertion.
- EHN-ERDDAP comment "ERDDAP is a scientific data server that gives users a simple, consistent way to download subsets of gridded and tabular scientific datasets in common file formats and make graphs and maps. ERDDAP is a Free and Open Source (Apache and Apache-like) Java Servlet developed by the NOAA NMFS SWFSC Environmental Research Division (ERD). This particular ERDDAP installation gives access to surface current velocity data measured by High Frequency Radar (HFR) systems. This ERDDAP server is managed by the EuroGOOS European HFR Node." assertion.
- EHN-THREDDS comment "Unidata's Thematic Real-time Environmental Distributed Data Services (THREDDS) provides access to a large collection of real-time and archived datasets from a variety of environmental data sources at a number of distributed server sites. The THREDDS Data Server (TDS) is a web server that provides catalog, metadata, and data access services for scientific datasets, using a variety of remote data access protocols. Every TDS publishes THREDDS catalogs that advertise the datasets and services it makes available. THREDDS catalogs are XML documents that list datasets and the data access services available for them, and may contain metadata documenting details about the datasets. TDS configuration files tell the TDS which datasets and data collections are available and what services are provided for them. The available remote data access protocols include OPeNDAP, OGC WCS, OGC WMS, and HTTP. The ncISO service allows THREDDS catalogs to be translated into ISO metadata records. The TDS also supports several dataset collection services, including sophisticated dataset aggregation capabilities that allow it to aggregate a collection of datasets into a single virtual dataset, greatly simplifying user access to that data collection. The TDS is open source and runs inside the open-source Tomcat Servlet container. This particular TDS installation gives access to surface current velocity data measured by High Frequency Radar (HFR) systems and is managed by the EuroGOOS European HFR Node." assertion.
- adc comment "The Arctic Data Centre (ADC) is a service provided by the Norwegian Meteorological Institute (MET) and is a legacy of the International Polar Year (IPY). ADC is based on the FAIR guiding principles for data management and on access to free and open data. While the Norwegian Meteorological Institute uses CC BY as the data license, ADC manages data on behalf of other data owners that may have other preferences. ADC primarily hosts data within meteorology, oceanography and glaciology, but through active metadata harvesting it also points to data within other disciplines. ADC normally offers data in CF-NetCDF adhering to the Climate and Forecast Conventions (exceptions may occur), along with services on top of the data such as OPeNDAP and OGC WMS. Machine interfaces to the catalogue include OAI-PMH, OGC CSW and OpenSearch. Information is provided in the native format MET Metadata (MMD), ISO-19115 and GCMD DIF (others are being considered)." assertion.
- VO comment "The Vaccine Ontology (VO) is a community-based biomedical ontology in the domain of vaccines and vaccination. VO aims to standardize vaccine types and annotations, integrate various vaccine data, and support computer-assisted reasoning. The VO supports basic vaccine R&D and clinical vaccine usage. VO is being developed as a community-based ontology with support and collaborations from the vaccine and bio-ontology communities." assertion.
- MedDRA comment "MedDRA is a globally used, standardized medical terminology developed by ICH for the regulatory communication, safety monitoring, and analysis of medical products, continuously maintained to support patient safety and evolving industry needs. MedDRA is free to use by individual researchers but requires a paid license for commercial use." assertion.
- CMO comment "The Clinical Measurement Ontology is designed to be used to standardize morphological and physiological measurement records generated from clinical and model organism research and health programs." assertion.
- HPO comment "The Human Phenotype Ontology (HPO) provides a standardized vocabulary of phenotypic abnormalities encountered in human disease." assertion.
- PRO comment "PRO provides an ontological representation of protein-related entities by explicitly defining them and showing the relationships between them. PRO encompasses three sub-ontologies: proteins based on evolutionary relatedness (ProEvo); protein forms produced from a given gene locus (ProForm); and protein-containing complexes (ProComp)." assertion.
- DO comment "A standardized ontology for human disease, with the purpose of providing the biomedical community with consistent, reusable and sustainable descriptions of human disease terms, phenotype characteristics and related medical vocabulary disease concepts. It is developed through collaborative efforts with biomedical researchers, coordinated by the Institute for Genome Sciences at the University of Maryland School of Medicine. The Disease Ontology semantically integrates disease and medical vocabularies through extensive cross-mapping of DO terms to MeSH, ICD, the NCI Thesaurus, SNOMED and OMIM." assertion.
- DRS-API comment "The GA4GH Data Repository Service (DRS) API is a generic interface to data repositories, so that data consumers such as workflow systems can access data objects in a single, standard way regardless of where they are stored and how they are managed. The primary functionality of DRS is to map a logical ID to a means for physically retrieving the data represented by the ID." assertion.
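As a sketch of that ID-to-data mapping: a client resolves an object ID through a single metadata endpoint. The snippet below only builds the request URL for a hypothetical server (`drs.example.org` is an assumed placeholder); the `/ga4gh/drs/v1/objects/{object_id}` path follows the published DRS v1 specification.

```python
# Sketch: build the DRS v1 metadata URL for an object ID.
# A GET on this URL returns a DrsObject with access methods
# (the actual HTTP call is omitted here).
from urllib.parse import quote

def drs_object_url(base_url: str, object_id: str) -> str:
    """Build the DRS v1 URL that returns metadata for an object ID."""
    return f"{base_url.rstrip('/')}/ga4gh/drs/v1/objects/{quote(object_id, safe='')}"

url = drs_object_url("https://drs.example.org", "abc123")
print(url)  # https://drs.example.org/ga4gh/drs/v1/objects/abc123
```

Note that the object ID is percent-encoded, since DRS IDs are opaque strings that may contain characters unsafe in a URL path.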
- OBI comment "The Ontology for Biomedical Investigations (OBI) is a structured, interoperable vocabulary designed to represent all aspects of scientific investigations, including experiments, assays, and protocols. OBI provides precisely defined terms each with a unique identifier, metadata, and logical connections to related terms." assertion.
- CL comment "An ontology designed to classify and describe cell types across different organisms. It serves as a resource for model organism and bioinformatics databases. The ontology covers a broad range of animal cell types, with over 2700 cell type classes, and provides high-level cell type classes as mapping points for cell type classes in ontologies representing other species, such as the Plant Ontology or Drosophila Anatomy Ontology. Integration with other ontologies such as Uberon, GO, CHEBI, PR, and PATO enables linking cell types to anatomical structures, biological processes, and other relevant concepts." assertion.
- ComputationalModel comment "A computational model uses computer programs to simulate and study complex systems, employing an algorithmic or mechanistic approach, and is widely used in various fields, including physics, engineering, and biology." assertion.
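To make the idea concrete, here is a minimal illustrative example (not tied to any resource listed here): a mechanistic model of exponential decay, dN/dt = -kN, simulated with explicit Euler time steps.

```python
# Illustrative mechanistic computational model: exponential decay
# dN/dt = -k*N, integrated with the explicit Euler method.
def simulate_decay(n0: float, k: float, dt: float, steps: int) -> float:
    """Advance N from n0 over `steps` Euler steps of size dt."""
    n = n0
    for _ in range(steps):
        n += -k * n * dt  # Euler update: N(t+dt) = N(t) + dt * dN/dt
    return n

# Simulate 10 time units (1000 steps of dt = 0.01) with k = 0.1;
# the analytic solution is N(10) = 100 * exp(-1), about 36.8.
final = simulate_decay(n0=100.0, k=0.1, dt=0.01, steps=1000)
```

The small step size matters: Euler integration only approximates the analytic solution, and the error shrinks as dt decreases.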
- SNAP comment "SNAP is the data web portal and framework that allows access to all geophysical data acquired by the Istituto Nazionale di Oceanografia e di Geofisica Sperimentale - OGS." assertion.
- CRS-OGS comment "The Centre for Seismological Research at OGS carries out research, in scientific autonomy, on the seismicity and seismogenesis of northeastern Italy, managing and developing the related seismic monitoring network for civil protection purposes." assertion.
- SEG-Y comment "The SEG-Y (sometimes SEG Y or SEGY) file format is one of several data standards developed by the Society of Exploration Geophysicists (SEG) for the exchange of geophysical data." assertion.
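As a hedged sketch of what exchanging SEG-Y data involves: the snippet below reads the two file-level headers, assuming the standard SEG-Y Rev 1 layout (a 3200-byte EBCDIC textual header followed by a 400-byte big-endian binary header); real-world files may deviate, e.g. ASCII textual headers or little-endian variants.

```python
# Sketch: parse the SEG-Y file-level headers from raw bytes,
# assuming the Rev 1 layout and big-endian binary fields.
import struct

def read_segy_headers(raw: bytes):
    """Return (textual header, sample interval, samples per trace)."""
    text = raw[:3200].decode("cp037")   # EBCDIC textual header
    binary = raw[3200:3600]             # 400-byte binary header
    # File bytes 3217-3218: sample interval in microseconds
    (sample_interval,) = struct.unpack(">h", binary[16:18])
    # File bytes 3221-3222: number of samples per data trace
    (samples_per_trace,) = struct.unpack(">h", binary[20:22])
    return text, sample_interval, samples_per_trace
```

The EBCDIC decoding (codec `cp037` here) is the historically distinctive part: the textual header holds 40 "card images" of 80 characters each, a legacy of punched-card records.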
- UKOOA comment "The UKOOA format (United Kingdom Offshore Operators Association) is a data format developed to manage and store geological and geophysical information in the context of offshore activities, particularly for the oil and gas industry in the United Kingdom. It is used to standardize and exchange data between operating companies and regulatory authorities, such as the UK Oil and Gas Authority (OGA)." assertion.
- miniSEED comment "The International Federation of Digital Seismograph Networks (FDSN) defines miniSEED as a format for digital data and related information. The primary intended uses are data collection, archiving and exchange of seismological data. The format is also appropriate for time series data from other geophysical measurements such as pressure, temperature, tilt, etc. In addition to the time series, storage of related state-of-health information and of parameters documenting the state of the recording system is supported. The FDSN metadata counterpart of miniSEED is StationXML, which is used to describe the characteristics needed to interpret the data, such as location and instrument response." assertion.
- RINEX2 comment "The first proposal for the 'Receiver Independent Exchange Format' (RINEX) was developed by the Astronomical Institute of the University of Berne for the easy exchange of the GPS data collected during the large European GPS campaign EUREF 89. Currently the format consists of four ASCII file types: 1. Observation Data File, 2. Navigation Message File, 3. Meteorological Data File, 4. GLONASS Navigation Message File. RINEX Version 2 also allows observation data from more than one site, subsequently occupied by a roving receiver, to be included in rapid static or kinematic applications." assertion.
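Because RINEX 2 is a fixed-column ASCII format, header records split cleanly by position: record content occupies columns 1-60 and the record label occupies columns 61-80. The sketch below assumes exactly that layout (the sample line is constructed here for illustration).

```python
# Sketch: split a RINEX 2 header record by its fixed columns,
# assuming content in columns 1-60 and the label in columns 61-80.
def parse_rinex_header_line(line: str):
    """Return (label, content) for one RINEX 2 header record."""
    content = line[:60].rstrip()
    label = line[60:80].strip()
    return label, content

# Constructed example of a version record:
line = "     2.11           OBSERVATION DATA    G (GPS)".ljust(60) + "RINEX VERSION / TYPE"
label, content = parse_rinex_header_line(line)
print(label)  # RINEX VERSION / TYPE
```

Parsing by column position rather than by whitespace is essential here, since header content fields themselves contain spaces.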