Interoperability stack

Reference patterns for interoperable competency data.

The OSC reference framework separates key concepts: ontologies describe terms, classes, relations, rules and semantics; knowledge graphs connect instances, sources and evidence objects; embeddings provide derived vector signals for search, matching, clustering, recommendations and forecasting. Model outputs remain supporting signals, not competency evidence.

Orientation

A reference framework for skills data in pilot work and review.

The standards page describes the current OSC working draft. It separates ontology, knowledge graph, embeddings, evidence, interfaces and governance so HR, product management and engineering can review the same requirements.

Semantics

Stabilize terms and relations for domain use

Ontologies describe terms, classes, relations, rules and semantics. Knowledge graphs connect concrete skill instances, roles, learning offers, sources and the evidence objects derived from them.

Evidence

Classify competency signals in a reviewable way

Credentials, projects and assessments are linked with provenance, version, context and evaluation logic. Model values and embeddings remain support signals, not competency evidence.

Pilot integration

Plan exports and APIs as working drafts

Planned export profiles, stable identifiers, required fields and documented error cases make interfaces reviewable in pilots without implying production API commitments.

Knowledge graph ontology

What the model means in plain language.

The reference model does not treat every vector as a standard. Ontology gives terms their meaning. The knowledge graph connects concrete instances and evidence. Embeddings and LLM vectors help discover patterns, but they remain technical signals that need review.

Structure before automation

A skill node needs identifiers, labels, relations, source references, version and status before downstream AI features can use it responsibly.
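The required fields above can be sketched as a minimal data structure with a readiness check. All field names here are illustrative assumptions, not taken from a released OSC specification:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a skill node; field names are illustrative
# and not part of a released OSC specification.
@dataclass
class SkillNode:
    skill_id: str                                  # stable identifier
    preferred_label: str                           # primary display label
    relations: list = field(default_factory=list)  # e.g. ("broader", "osc:data")
    sources: list = field(default_factory=list)    # provenance references
    version: str = ""                              # e.g. "draft-0.3"
    status: str = "draft"                          # draft | reviewed | released

def ready_for_downstream_use(node: SkillNode) -> bool:
    """True only when every structural required field is filled."""
    return bool(node.skill_id and node.preferred_label and node.relations
                and node.sources and node.version and node.status)
```

A node missing any of these fields would simply be excluded from downstream AI features until the gap is documented and closed.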

Behavioral patterns stay contextual

Behavioral indicators may be observed or derived. They must carry source, time reference, context and uncertainty, and they must not replace verified competency evidence.
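One way to make this contract concrete is a record shape that always carries source, time reference, context and uncertainty, plus a type marker that keeps it out of the evidence category. Every name below is an assumption for the sketch:

```python
from dataclasses import dataclass

# Illustrative record shape for a behavioral indicator; all field
# names are assumptions, and signal_type marks it as non-evidence.
@dataclass(frozen=True)
class BehavioralIndicator:
    skill_id: str
    source: str                    # observing system or reviewer
    observed_at: str               # ISO 8601 time reference
    context: str                   # e.g. "project retrospective"
    uncertainty: float             # 0.0 (certain) .. 1.0 (unknown)
    signal_type: str = "support"   # never "evidence"

def contextualized(ind: BehavioralIndicator) -> bool:
    """Usable only with source, time, context and bounded uncertainty."""
    return bool(ind.source and ind.observed_at and ind.context) \
        and 0.0 <= ind.uncertainty <= 1.0
```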

Four levels

Four separate levels for interoperable skills data.

The OSC working draft separates ontology, knowledge graph, embeddings and evidence. API and export profiles plus governance describe how these levels are exchanged, reviewed and versioned in pilots.

Level 1

Ontology

Controlled terms, classes, relations, rules and semantics form the versioned domain language.

  • stable skill IDs and preferred labels
  • classes, relations, synonyms and rules
  • versions, languages and change history
Level 2

Knowledge graph

The knowledge graph connects skill instances, roles, learning offers, sources, relations and evidence objects.

  • concrete instances, not only term catalogs
  • source references, relations and context
  • links to evidence and profiles
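A minimal sketch of this level is a typed edge list connecting instances, sources and evidence. All identifiers and relation names here are illustrative, not standardized:

```python
# Minimal sketch of the graph level as a typed edge list; all
# identifiers and relation names are illustrative, not standardized.
GRAPH = [
    # (subject, relation, object)
    ("person:ada",       "has_skill",    "skill:sql"),
    ("skill:sql",        "taught_by",    "course:db-101"),
    ("person:ada",       "evidenced_by", "evidence:cert-42"),
    ("evidence:cert-42", "source",       "issuer:acme-academy"),
]

def neighbors(node: str, relation: str) -> list:
    """All objects reachable from `node` via `relation`."""
    return [o for s, r, o in GRAPH if s == node and r == relation]
```

Because every statement is an explicit edge, the link from a person to a skill, and from that claim to its evidence and source, stays traceable.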
Level 3

Embeddings

Embeddings are derived vector representations for search, matching, clustering, recommendations and predictions.

  • similarity, search and cluster support signals
  • model version, data basis and update cycle
  • clear separation from evidence and approvals
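The separation can be illustrated by keeping the vector bundled with its model metadata and a signal-type marker, and by using standard cosine similarity for search and matching. Record names and values below are assumptions for the sketch:

```python
import math

# Illustrative embedding record: the vector never travels without its
# model version and data basis, and signal_type keeps it separate from
# evidence. Names and values are assumptions for this sketch.
EMBEDDING = {
    "skill_id": "skill:sql",
    "vector": [0.1, 0.7, 0.2],
    "model_version": "demo-embed-0.1",   # hypothetical model name
    "data_basis": "pilot-corpus-2024",
    "signal_type": "support",            # never "evidence"
}

def cosine_similarity(a: list, b: list) -> float:
    """Standard cosine similarity used for search and matching."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0
```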
Level 4

Evidence

Evidence objects classify evidence from learning, work, assessment and certification with provenance.

  • issuer, source, context and timestamp
  • evaluation logic, level, validity and review status
  • approvals, purposes and documented decisions
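The Level 4 fields can be sketched as an evidence object whose provenance and review fields must all be filled before it enters recognition or audit processes. The field names are working assumptions, not part of a released specification:

```python
from dataclasses import dataclass, asdict

# Illustrative Level 4 evidence object; field names are working
# assumptions, not part of a released OSC specification.
@dataclass
class EvidenceObject:
    evidence_id: str
    issuer: str
    source: str            # e.g. "lms-export"
    context: str           # e.g. "capstone project"
    issued_at: str         # ISO 8601 timestamp
    evaluation_logic: str  # how the level was determined
    valid_until: str
    review_status: str     # e.g. "pending" | "reviewed" | "approved"

def audit_ready(ev: EvidenceObject) -> bool:
    """Every provenance and review field must be non-empty."""
    return all(asdict(ev).values())
```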

Mapping

Artifacts, quality criteria and applications in one matrix.

The table shows what is reviewed in pilot projects: which artifacts exist, how quality is recognized and in which applications the level should be used.

Mapping of interoperability levels to artifacts, quality criteria and typical applications of the OSC reference model.
Ontology
  • Artifact: skill catalog, taxonomy, ontology, mapping table
  • Quality criterion: unique IDs, defined terms, maintained classes, relations, rules and versions
  • Application: domain review, search, role matching and learning-offer mapping
Knowledge graph
  • Artifact: graph reference pattern, relation table, source reference, evidence link
  • Quality criterion: traceable instances, relations, sources, context and update cycle
  • Application: skill gaps, profile matching, portfolio views and system integration
Embeddings
  • Artifact: vector representation, model metadata, similarity score, evaluation finding
  • Quality criterion: documented model version, data basis, update cycle and clear marking as support signal
  • Application: similarity search, matching suggestions, clustering, recommendations and predictions
Evidence
  • Artifact: credential profile, portfolio entry, assessment record, review note
  • Quality criterion: reviewable provenance, context, validity, evaluation logic, purpose limitation and review status
  • Application: recognition processes, talent profiles, qualification paths and audit preparation

API and export note

JSON/CSV profiles, REST paths and error catalogs are planned working drafts for pilot integrations. Binding production endpoints, authentication, rate limits and SLAs are defined only in a released specification.
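A pilot export can already make error cases explicit without implying production commitments. The sketch below shows a JSON export with one documented error case; the profile name, required fields and error code are working-draft assumptions:

```python
import json

# Sketch of a pilot JSON export with one documented error case; the
# profile name, field names and error code are working-draft
# assumptions, not production API commitments.
REQUIRED = ("skill_id", "preferred_label", "version")

def export_skill(record: dict) -> str:
    missing = [f for f in REQUIRED if not record.get(f)]
    if missing:
        # documented error case instead of a silent partial export
        return json.dumps({"error": "missing_required_fields", "fields": missing})
    return json.dumps({"profile": "osc-skill-export/draft", "data": record})
```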

Quality

Quality comes from separation, provenance and pilot findings.

A working draft becomes reliable only when required fields are complete, provenance is traceable and real system data has been reviewed. Pilot results are documented as findings, not as automatic standard release.

Reviewability

Every statement needs provenance

Version, issuer, update, data source and evaluation logic belong to the artifact. In the knowledge graph, instances, relations, evidence objects, sources and links are stored traceably.

Usability

Support signals remain separate

Semantic HTML, tabular references and planned export profiles work together. Derived model values can support matching and search, but remain clearly separate from reviewed evidence and release decisions.

Service compliance test

A practical review for interoperable skill services.

The service compliance test is intended as a pilotable review framework for digital services. It helps teams review data, interfaces and evidence against the OSC working draft. It is not a certificate and does not replace final standard release.

Compliance here means a documented review against the OSC working draft, covering data fields, competency evidence, interfaces and governance. It is neither certification nor production release.

Data model

Trace required fields and terms

The review checks whether skill IDs, labels, relations, languages, versions and sources are documented so a service can process them unambiguously.
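Such a check fits the working-draft review style when it returns findings (open points) rather than a pass/fail verdict. The required field names in this sketch are assumptions for illustration:

```python
# Sketch of the data-model check: it returns findings (open points)
# rather than pass/fail, matching the working-draft review style.
# The required field names are assumptions for illustration.
REQUIRED_FIELDS = ("skill_id", "label", "relations", "language", "version", "source")

def review_record(record: dict) -> list:
    """One finding per missing required field; an empty list means the
    record is documented completely."""
    return [f"missing field: {name}" for name in REQUIRED_FIELDS if name not in record]
```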

Interface

Make export and error cases pilotable

A pilot can show whether sample payloads, field formats, update notes and error descriptions are understandable; it does not create production SLAs or binding API commitments.

Evidence

Separate evidence from support signals

Credentials, assessments and portfolio entries are reviewed with provenance and context. Matching scores or embeddings remain support signals, not conclusive evidence.

Result of the check

The result is a finding with open points, assumptions and possible adjustments for the next pilot. A conformity statement exists only when a released standard and a defined review process are available.

Next step

Request working draft for pilot work or review.

Interested organizations can request artifacts, sample data, required fields and review notes, and bring a concrete need into the next working group.