Readable reference page
The page lists the competency identifier, label, description, issuer, version, source and status, and remains usable for domain review, search and accessibility.
Developers
This page describes the intended working draft for OSC-compatible data models, export profiles and interface contracts in pilots. Example paths are not released production API addresses.
Integration path
An OSC implementation starts with domain-reviewed content. The same required fields should be reused in exports and interfaces: unique identifiers, labels, versions, sources, status and evidence references.
Each competency receives a persistent identifier. Labels, synonyms and relations may change; the identifier remains the technical reference.
Planned JSON and CSV profiles describe field names, data types, required fields, allowed values and error cases for pilot imports and quality assurance.
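A pilot import check along these lines can be sketched as a required-field validation. The field names follow those named in the working draft; the record values, the identifier scheme and the helper function are illustrative assumptions, not a released profile.

```python
# Required fields named in the working draft for pilot JSON/CSV exports.
REQUIRED_FIELDS = {
    "identifier", "label", "description", "version",
    "source", "language", "status",
}

def missing_required(record: dict) -> set:
    """Return the required fields that are absent or empty in a record."""
    return {f for f in REQUIRED_FIELDS if not record.get(f)}

# Hypothetical sample record; the "osc:skill:" scheme is an assumption.
sample = {
    "identifier": "osc:skill:example-0001",
    "label": "Data literacy",
    "description": "Reads, interprets and questions data-based statements.",
    "version": "0.3",
    "source": "pilot-domain-review",
    "language": "de-DE",
    "status": "draft",
}

print(missing_required(sample))  # empty set: all required fields present
```

A quality-assurance step can reject or flag any record for which `missing_required` returns a non-empty set, before allowed values and error cases are checked.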
Every change needs a status, release, change note and version reference, so that integrations can distinguish draft, pilot and reviewed-recommendation states.
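A change log entry carrying the details named above (status, release, change note, version reference) could be typed like this. All field names and values are pilot assumptions, not a published schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ChangeRecord:
    """Illustrative change entry for skills data; names are assumptions."""
    skill_id: str
    status: str       # e.g. "draft", "pilot", "reviewed"
    release: str      # release the change belongs to
    change_note: str  # human-readable reason for the change
    version: str      # version reference of the changed record
    changed_on: date  # makes the change reproducible and dated

# Hypothetical entry for an illustrative skill record.
entry = ChangeRecord(
    skill_id="osc:skill:example-0001",
    status="pilot",
    release="pilot-release-1",
    change_note="Label narrowed after domain review.",
    version="0.3",
    changed_on=date(2025, 1, 15),
)
```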
Principles
Technical integration must not replace domain review. Ontology, knowledge graph, embeddings and evidence objects remain separate data types; model outputs and similarity scores provide signals, not competency evidence.
Identifier, label, description, source, version and evidence status remain required information.
Every change to skills data, relations or evidence should be reproducible and dated.
Readable pages, planned JSON/CSV exports and API responses use the same core fields.
Developer USP
OSC separates fields that often get mixed in AI projects. The API contract can expose graph relations, embedding metadata, LLM-generated draft signals and evidence references as different data types. That makes integrations easier to test and safer to review.
Expose skill identifiers, relation types, sources, language, version and status so systems can verify what a node means.
Expose model version, input scope, update date and use case for embeddings. A vector without metadata is not a reviewable competency statement.
Keep assessments, credentials, projects and source documents in explicit evidence fields, separate from matching scores or behavioral indicators.
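The three exposure rules above can be sketched as distinct data types, so that node semantics, embedding metadata and evidence references cannot be conflated with a similarity score. All type and field names are illustrative assumptions for a pilot contract.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SkillNode:
    """Domain semantics: what a node means, who said so, in which version."""
    skill_id: str
    label: str
    language: str
    source: str
    version: str
    status: str

@dataclass
class EmbeddingMeta:
    """Context for a technically derived vector; a signal, not a statement."""
    model_version: str
    input_scope: str
    updated: str
    use_case: str

@dataclass
class EvidenceRef:
    """Explicit evidence reference: assessment, credential, project, document."""
    evidence_id: str
    kind: str
    review_status: str

@dataclass
class SkillRecord:
    """Combines the types without mixing them: scores stay outside evidence."""
    node: SkillNode
    embedding: Optional[EmbeddingMeta] = None
    evidence: List[EvidenceRef] = field(default_factory=list)
    similarity_score: Optional[float] = None  # signal only, never evidence
```

Keeping `similarity_score` as a plain optional field outside `evidence` makes the separation testable: a reviewer can assert that no evidence entry is score-derived.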
Data model
The fields form a reference pattern for pilots. They are not a final, published schema and must be reviewed for each integration context, data source and release status.
Separation
An interface should clearly show which data describes domain semantics, which relations are in the graph, which values were technically derived and which evidence is reviewed.
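One way a response could make this separation visible is through distinct top-level blocks: domain semantics, graph relations, technically derived values and reviewed evidence each under their own key. The keys and values below are illustrative assumptions, not a released contract.

```python
# Hypothetical response layout; key names are assumptions for a pilot.
response = {
    "skill": {            # domain semantics, domain-reviewed
        "identifier": "osc:skill:example-0001",
        "label": "Data literacy",
        "status": "reviewed",
    },
    "relations": [        # knowledge-graph relations with typed edges
        {"type": "broader", "target": "osc:skill:example-0002"},
    ],
    "derived": {          # technically derived values, signals only
        "similarity": 0.87,
        "model_version": "emb-example-1",
    },
    "evidence": [         # reviewed evidence, separate from any score
        {"evidence_id": "osc:evidence:example-9", "review_status": "reviewed"},
    ],
}
```

Because each block has its own key, a consumer can process semantics and relations without ever reading derived values, and a reviewer can audit evidence without touching scores.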
Service compliance test
The service compliance test is intended as a pilotable review. It compares sample data, export profiles and interface contracts with agreed required details without anticipating production API addresses or releases.
Compliance here means a documented review against the OSC working draft. It covers data fields, competency evidence, interfaces and governance. It is not certification and not production release.
The review checks whether identifier, record type, label, language, description, status and version are clearly and completely present in sample data.
JSON, CSV or API contracts are checked against field names, data types, required logic, allowed values and documented error cases.
Evidence, source, license and version metadata remain traceably separate. A review status describes the working draft, not automatic recognition.
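The sample-data part of such a review can be sketched as a findings check: each of the fields named above must be present and non-empty, and the status must come from an agreed vocabulary. Both the field list and the status vocabulary are pilot assumptions.

```python
# Fields and status values assumed for a pilot review, not a released spec.
CHECK_FIELDS = ["identifier", "record_type", "label", "language",
                "description", "status", "version"]
ALLOWED_STATUS = {"draft", "pilot", "reviewed"}

def review_sample(record: dict) -> list:
    """Return human-readable findings; an empty list means the sample passes."""
    findings = [f"missing or empty: {f}"
                for f in CHECK_FIELDS if not record.get(f)]
    status = record.get("status")
    if status and status not in ALLOWED_STATUS:
        findings.append(f"unknown status: {status}")
    return findings
```

A review report can then aggregate findings per sample file; passing this check documents conformance with the working draft, not certification or production release.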
Exports and API
The following paths are working examples for pilot integrations. Production addresses, authentication, rate limits, error codes and versioning become binding only in the relevant specification.
GET /osc/api/v1/skills/?language=de-DE
GET /osc/api/v1/skills/{skill_id}/
GET /osc/api/v1/evidence/{evidence_id}/
GET /osc/api/v1/graphs/relations/?skill_id={skill_id}
GET /osc/api/v1/embeddings/similar-skills/?skill_id={skill_id}
POST /osc/api/v1/pilots/evidence-check/
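The paths above can be treated as URL templates in a pilot client. This sketch only builds example request URLs; the host is a placeholder, and (as stated) authentication, rate limits and versioning are not covered here.

```python
from urllib.parse import urlencode

BASE = "https://example.org"  # placeholder host, not a production address

def skills_url(language: str) -> str:
    """List skills filtered by language, per the pilot path above."""
    return f"{BASE}/osc/api/v1/skills/?{urlencode({'language': language})}"

def skill_detail_url(skill_id: str) -> str:
    """Single skill record by persistent identifier."""
    return f"{BASE}/osc/api/v1/skills/{skill_id}/"

def similar_skills_url(skill_id: str) -> str:
    """Embedding-based similarity signal; a signal, not competency evidence."""
    return (f"{BASE}/osc/api/v1/embeddings/similar-skills/"
            f"?{urlencode({'skill_id': skill_id})}")

print(skills_url("de-DE"))
# https://example.org/osc/api/v1/skills/?language=de-DE
```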
HTML pages provide canonical content. JSON and CSV exports should contain the same required fields: identifier, label, description, version, source, language, status, license and documented error cases.
Evidence-review endpoints should be understood as pilot working drafts: they reference evidence and return a review status, but they do not replace domain recognition or governance release.
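What a pilot evidence-check request body and response could look like can be sketched locally. The payload keys, the stand-in function and the status value are assumptions; per the working draft, the returned status never implies recognition.

```python
import json

# Hypothetical request body for POST /osc/api/v1/pilots/evidence-check/.
payload = {
    "skill_id": "osc:skill:example-0001",
    "evidence_refs": ["osc:evidence:example-9"],
}

def pilot_evidence_check(body: dict) -> dict:
    """Local stand-in for the pilot endpoint: echoes a review status only."""
    return {
        "skill_id": body["skill_id"],
        "review_status": "pilot-working-draft",  # never "recognized"
        "checked_refs": len(body.get("evidence_refs", [])),
    }

print(json.dumps(pilot_evidence_check(payload), indent=2))
```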