Developers

Model competency data cleanly and exchange it in pilots.

This page describes the intended working draft for OSC-compatible data models, export profiles and interface contracts in pilots. The example paths shown are not released production API addresses.

Integration path

From readable catalog to reviewable data exchange.

An OSC implementation starts with domain-reviewed content. The same required fields should be reused in exports and interfaces: unique identifiers, labels, versions, sources, status and evidence references.

01

Readable reference page

The page names competency identifier, label, description, issuer, version, source and status. It remains usable for domain review, search and accessibility.

02

Stable identifiers

Each competency receives a persistent identifier. Labels, synonyms and relations may change; the identifier remains the technical reference.
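A minimal sketch of the "identifier stays, label may change" rule. The `osc:skill:` prefix-plus-UUID format is an assumption made for this example only; the draft does not prescribe an identifier scheme.

```python
import re
import uuid

# Assumed illustrative pattern: "osc:skill:" followed by a UUID.
# The point is that the identifier, not the label, is the stable
# technical reference across label and relation changes.
SKILL_ID_PATTERN = re.compile(
    r"^osc:skill:[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}"
    r"-[0-9a-f]{4}-[0-9a-f]{12}$"
)

def new_skill_id() -> str:
    """Mint a persistent identifier for a new competency record."""
    return f"osc:skill:{uuid.uuid4()}"

def is_valid_skill_id(value: str) -> bool:
    """Check that a value matches the assumed identifier pattern."""
    return bool(SKILL_ID_PATTERN.match(value))
```

A label such as "Data Literacy" would never be used as the reference; only the minted identifier is.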

03

Export profiles

Planned JSON and CSV profiles describe field names, data types, required fields, allowed values and error cases for pilot imports and quality assurance.
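An export profile of this kind could be sketched as a field table plus a record check. The field names, types and allowed status values below are assumptions for illustration, not the published profile.

```python
# Illustrative pilot export profile: field name -> type, required flag,
# and (where relevant) allowed values. Not the published schema.
SKILL_EXPORT_PROFILE = {
    "id":          {"type": str,  "required": True},
    "label":       {"type": str,  "required": True},
    "description": {"type": str,  "required": True},
    "version":     {"type": str,  "required": True},
    "source":      {"type": str,  "required": True},
    "language":    {"type": str,  "required": True},
    "status":      {"type": str,  "required": True,
                    "allowed": {"draft", "pilot", "reviewed"}},
    "synonyms":    {"type": list, "required": False},
}

def validate_record(record: dict,
                    profile: dict = SKILL_EXPORT_PROFILE) -> list[str]:
    """Return a list of error cases for one import record; empty = valid."""
    errors = []
    for name, rule in profile.items():
        if name not in record:
            if rule["required"]:
                errors.append(f"missing required field: {name}")
            continue
        value = record[name]
        if not isinstance(value, rule["type"]):
            errors.append(f"wrong type for {name}")
        elif "allowed" in rule and value not in rule["allowed"]:
            errors.append(f"value not allowed for {name}: {value}")
    return errors
```

Documented error cases (missing field, wrong type, disallowed value) come back as plain strings so quality assurance can log them per record.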

04

Release and versioning

Changes require a status, a release, a change note and a version reference. Integrations can distinguish between draft, pilot and reviewed-recommendation states.
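A dated change entry carrying the elements named above could look like the following sketch; the three status values are taken from this draft, the field names are assumptions.

```python
from dataclasses import dataclass
from datetime import date

# Status values distinguished in the working draft.
RELEASE_STATES = ("draft", "pilot", "reviewed")

@dataclass(frozen=True)
class ChangeNote:
    """One dated, reproducible change to a competency record (sketch)."""
    skill_id: str
    version: str      # version reference, e.g. "1.2.0"
    status: str       # one of RELEASE_STATES
    changed_on: date  # makes the change datable and reproducible
    note: str         # human-readable change note

    def __post_init__(self):
        if self.status not in RELEASE_STATES:
            raise ValueError(f"unknown status: {self.status}")
```

Because the record is frozen and validated on creation, an integration can rely on every change being dated and carrying a known release state.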

Principles

Readable, versioned, domain-reviewable.

Technical integration must not replace domain review. Ontology, knowledge graph, embeddings and evidence objects remain separate data types; model outputs and similarity scores provide signals, not competency evidence.

Core fields before model values

Identifier, label, description, source, version and evidence status remain required fields.

Sources and versions

Every change to skills data, relations or evidence should be reproducible and dated.

Consistent HTML, exports and API

Readable pages, planned JSON/CSV exports and API responses use the same core fields.

Developer USP

One contract for graph data, vectors and evidence status.

OSC separates fields that often get mixed in AI projects. The API contract can expose graph relations, embedding metadata, LLM-generated draft signals and evidence references as different data types. That makes integrations easier to test and safer to review.

Graph payload

Expose skill identifiers, relation types, sources, language, version and status so systems can verify what a node means.

Vector metadata

Expose model version, input scope, update date and use case for embeddings. A vector without metadata is not a reviewable competency statement.

Evidence reference

Keep assessments, credentials, projects and source documents in explicit evidence fields, separate from matching scores or behavioral indicators.
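The three payload kinds above can be kept apart as distinct types, so graph relations, model-derived values and evidence never merge into one record. This is a sketch; the field names are assumptions, not the published contract.

```python
from dataclasses import dataclass

@dataclass
class GraphRelation:
    """A reviewable edge in the knowledge graph."""
    source_skill_id: str
    target_skill_id: str
    relation_type: str   # e.g. "broader", "related" (assumed vocabulary)
    source: str          # provenance of the relation
    language: str
    version: str
    status: str

@dataclass
class VectorMetadata:
    """Context that makes an embedding interpretable and reviewable."""
    skill_id: str
    model_version: str
    input_scope: str     # which text the vector was computed from
    updated_on: str      # ISO date string
    use_case: str        # intended purpose of the embedding

@dataclass
class EvidenceReference:
    """A pointer to reviewed evidence, never a similarity score."""
    evidence_id: str
    skill_id: str
    evidence_type: str   # e.g. "assessment", "credential", "project"
    review_status: str
    purpose_limitation: str
```

A consumer that receives a `VectorMetadata` object cannot mistake it for an `EvidenceReference`, which is the point of the separation.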

Data model

Required fields for competency records in pilots.

The fields form a reference pattern for pilots. They are not a final, published schema and must be reviewed depending on integration context, data source and release status.

  • stable competency identifier and record type
  • main label, language and optional synonyms
  • short, domain-reviewable description
  • ontology reference: class, relation, rule or mapping
  • issuer, source, license and terms of use
  • version, change date, status and release state
  • relationships to clusters, roles, learning offers or evidence objects
  • review status and purpose limitation when evidence is referenced
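The field list above can be read as one illustrative record. All values here are invented sample data for the sketch; the exact key names are assumptions.

```python
# Illustrative competency record covering the fields listed above.
# Identifiers, dates and sources are invented sample values.
sample_record = {
    "id": "osc:skill:example-0001",        # stable competency identifier
    "record_type": "competency",
    "label": "Data literacy",
    "language": "en",
    "synonyms": ["data competence"],       # optional
    "description": "Ability to read, interpret and assess data.",
    "ontology_ref": {"class": "Skill", "relations": ["broader"]},
    "issuer": "Example Issuer",
    "source": "https://example.org/catalog",
    "license": "CC-BY-4.0",
    "terms_of_use": "pilot use only",
    "version": "0.1.0",
    "changed_on": "2024-01-01",
    "status": "draft",
    "release_state": "pilot",
    "related": {"clusters": [], "roles": [],
                "learning_offers": [], "evidence": []},
    "review_status": "unreviewed",
    "purpose_limitation": "no evidence referenced",
}

# The minimal required subset every record must carry.
REQUIRED = {"id", "record_type", "label", "language",
            "description", "status", "version"}
```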

Separation

Keep ontology, knowledge graph, embeddings and evidence separate.

An interface should clearly show which data describes domain semantics, which relations are in the graph, which values were technically derived and which evidence is reviewed.

  • Ontology: terms, classes, relations, rules and semantics.
  • Knowledge graph: skill instances, roles, sources, relations and evidence references.
  • Embeddings and model values: similarities, clusters, recommendations and normalization suggestions as support signals.
  • Evidence objects: assessments, credentials, project artifacts and documented sources with review status.
  • Evaluation logic, source, timestamp, validity and purpose limitation belong in dedicated fields.

Service compliance test

Practical check for fields, identifiers and contracts.

The service compliance test is intended as a pilotable review. It compares sample data, export profiles and interface contracts against the agreed required details, without anticipating production API addresses or releases.

Compliance here means a documented review against the OSC working draft. It covers data fields, competency evidence, interfaces and governance. It is not certification and not production release.

Required fields

The review checks whether identifier, record type, label, language, description, status and version are clearly and completely present in sample data.
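That review step could be sketched as a completeness report over sample records; the field names below mirror the seven named in this paragraph, with assumed keys.

```python
# Fields the compliance review checks for presence and non-emptiness
# (assumed key names mirroring the draft's list).
REVIEW_FIELDS = ("id", "record_type", "label", "language",
                 "description", "status", "version")

def completeness_report(samples: list[dict]) -> dict[str, int]:
    """Count, per required field, how many sample records are
    missing it or carry an empty value."""
    missing = {name: 0 for name in REVIEW_FIELDS}
    for record in samples:
        for name in REVIEW_FIELDS:
            if not record.get(name):
                missing[name] += 1
    return missing
```

A report of all zeros means the sample data passes this part of the check; any non-zero count points at the field and how often it fails.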

Contracts and exports

JSON, CSV or API contracts are checked against field names, data types, required-field rules, allowed values and documented error cases.

Evidence and provenance

Evidence, source, license and version metadata remain separate and traceable. A review status describes the working draft, not automatic recognition.

Exports and API

Example paths for planned pilot APIs and exports.

The following paths are working examples for pilot integrations. Production addresses, authentication, rate limits, error codes and versioning become binding only in the relevant specification.

GET /osc/api/v1/skills/?language=de-DE
GET /osc/api/v1/skills/{skill_id}/
GET /osc/api/v1/evidence/{evidence_id}/
GET /osc/api/v1/graphs/relations/?skill_id={skill_id}
GET /osc/api/v1/embeddings/similar-skills/?skill_id={skill_id}
POST /osc/api/v1/pilots/evidence-check/
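A pilot client could build requests against these paths roughly as follows. The base URL is a placeholder, not a production address, and authentication is deliberately left out, since it becomes binding only in the relevant specification.

```python
from urllib.parse import urlencode

# Placeholder base URL for a pilot deployment; not a production address.
BASE = "https://pilot.example.org"

def skills_url(language: str) -> str:
    """List skills filtered by language tag, e.g. 'de-DE'."""
    return f"{BASE}/osc/api/v1/skills/?{urlencode({'language': language})}"

def skill_url(skill_id: str) -> str:
    """Fetch a single competency record by its stable identifier."""
    return f"{BASE}/osc/api/v1/skills/{skill_id}/"

def similar_skills_url(skill_id: str) -> str:
    """Embedding-based similarity: a support signal, not evidence."""
    return (f"{BASE}/osc/api/v1/embeddings/similar-skills/"
            f"?{urlencode({'skill_id': skill_id})}")
```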

API note on working status

HTML pages provide canonical content. JSON and CSV exports should contain the same required fields: identifier, label, description, version, source, language, status, license and documented error cases.

Endpoints for evidence review should be understood as pilot working drafts: they reference evidence and return a review status, but they do not replace domain recognition or governance release.