Open Skills Consortium
OSC brings together working drafts for skill profiles, competency evidence and interoperable skills data. The goal is a reliable reference pattern for HR, education, technical teams and product teams.
Starting point
Many organizations work with course data, job profiles, certificates and internal terms. Without a shared structure, matching, learning, product features and evidence remain hard to review.
Profiles, learning paths, courses and evidence sit in separate systems. Decisions then rely on incomplete data. Transparent skill profiles and reviewable evidence are needed.
Skill names change. IDs are missing. Versions and exports are not clear enough for reliable integrations. Clear data contracts, IDs and export profiles are needed.
Governance, audiences and interoperability often become visible only after the pilot. That makes scaling harder. Bounded pilots with clear acceptance criteria are needed.
Why OSC
The OSC approach combines a knowledge graph, ontology rules, evidence objects and optional embeddings. This makes skill definitions understandable for HR, implementable for development teams and governable for product decisions.
A knowledge graph connects skill instances with roles, learning offers, evidence objects, sources and relations. The ontology defines the terms, classes, relations, rules and semantics behind those links.
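The graph-plus-ontology idea can be sketched as typed nodes connected by relation triples, with the ontology supplying rules about which node types a relation may link. All names below (node types, relation labels, IDs) are illustrative assumptions, not an OSC schema:

```python
# Minimal sketch: a skills knowledge graph as typed nodes plus
# (subject, relation, object) triples, with one ontology-style rule
# check. Every identifier here is a made-up example.

nodes = {
    "skill:sql-basics": {"type": "Skill", "label": "SQL basics"},
    "role:data-analyst": {"type": "Role", "label": "Data analyst"},
    "evidence:cert-001": {"type": "Evidence", "label": "SQL course certificate"},
}

triples = [
    ("role:data-analyst", "requires", "skill:sql-basics"),
    ("evidence:cert-001", "demonstrates", "skill:sql-basics"),
]

# Ontology rules: each relation constrains the types it may connect.
RULES = {"requires": ("Role", "Skill"), "demonstrates": ("Evidence", "Skill")}

def validate(triples, nodes, rules):
    """Return a list of rule violations; empty means consistent."""
    errors = []
    for s, rel, o in triples:
        expected = rules.get(rel)
        if expected is None:
            errors.append(f"unknown relation: {rel}")
        elif (nodes[s]["type"], nodes[o]["type"]) != expected:
            errors.append(f"rule violation in ({s}, {rel}, {o})")
    return errors

print(validate(triples, nodes, RULES))  # [] -> graph satisfies the rules
```

The point of the sketch is the separation the text describes: the triples carry the links, the rules carry the semantics, and a reviewer can check one against the other.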
Embeddings and LLM-generated vectors can support search, matching, clustering and draft recommendations. They remain derived support signals and do not prove that a person has a competency.
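As a rough illustration of embeddings as a support signal, cosine similarity can rank candidate skills against a profile vector. The vectors below are toy values standing in for model output; the ranking only proposes candidates for review and, as the text says, proves nothing about competency:

```python
import math

# Sketch: rank skill candidates by cosine similarity to a profile
# embedding. Vectors are toy values; real ones would come from an
# embedding model. The score is a derived support signal only.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

profile_vec = [0.9, 0.1, 0.3]
candidates = {
    "skill:sql-basics": [0.8, 0.2, 0.35],
    "skill:public-speaking": [0.1, 0.9, 0.0],
}

ranked = sorted(candidates, key=lambda k: cosine(profile_vec, candidates[k]),
                reverse=True)
print(ranked[0])  # skill:sql-basics -- closest match, still needs evidence
```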
Observed or derived behavioral indicators need a source, timestamp, context and uncertainty. They can inform skill-definition work, but final competency statements still need evidence, assessments, credentials or documented projects.
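The four attributes the text requires of an indicator can be made explicit in a record type. Field names here are assumptions for illustration, not a defined OSC format:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Sketch: a behavioral indicator carrying source, timestamp, context
# and uncertainty, as the text requires. Field names are illustrative.

@dataclass(frozen=True)
class Indicator:
    skill_id: str
    source: str            # system that observed or derived the behavior
    observed_at: datetime  # when the observation happened
    context: str           # circumstances of the observation
    uncertainty: float     # 0.0 (high confidence) .. 1.0 (no confidence)

ind = Indicator(
    skill_id="skill:sql-basics",
    source="lms:query-exercises",
    observed_at=datetime(2024, 5, 2, tzinfo=timezone.utc),
    context="completed 12 of 15 graded exercises",
    uncertainty=0.4,
)

# An indicator like this can inform skill-definition work; it is not a
# competency statement on its own.
print(ind.skill_id)  # skill:sql-basics
```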
Value
The consortium works on reference patterns that help organizations describe, exchange and review competency data. Value emerges where data from HR, education, engineering, product and evidence systems has to be connected.
Skill profiles should show which competencies exist, which evidence belongs to them and where learning or recognition can start usefully.
A skills graph needs unambiguous nodes, stable relationships and traceable changes. OSC outlines contract surfaces for that work.
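Traceable changes can be sketched as an append-only change log per node, so every relabeling stays reviewable. The log structure and field names are assumptions, not an OSC contract:

```python
# Sketch: traceable changes to a skill node via an append-only log.
# Structure and field names are illustrative assumptions.

changes = []

def update_label(node_id: str, new_label: str, author: str, version: int):
    """Record a label change instead of overwriting it in place."""
    changes.append({
        "node": node_id,
        "field": "label",
        "value": new_label,
        "author": author,
        "version": version,
    })

update_label("skill:sql-basics", "SQL fundamentals", "hr-team", 2)

history = [c for c in changes if c["node"] == "skill:sql-basics"]
print(len(history), history[-1]["value"])  # 1 SQL fundamentals
```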
Product teams can start with bounded data spaces, define acceptance criteria and clarify connection points for later rollouts.
Next step
A bounded pilot shows which data is already usable and which standards are still missing.
Mechanics
OSC does not describe a loose taxonomy. The focus is on data, evidence and exchange formats that remain readable to domain experts and can be technically integrated into existing systems.
Governance
A shared standard must be readable to domain experts and technically reviewable. OSC therefore separates the data model, evidence logic and operating questions.
Every skill record needs provenance, meaning and validity. Without these details, matching remains unreliable.
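One way to enforce this is a gate that rejects records missing provenance, meaning or validity before they enter matching. The required field names below are assumptions chosen to mirror the sentence above:

```python
# Sketch: only records with provenance, meaning and a validity window
# are admitted to matching. Field names are illustrative assumptions.

REQUIRED = ("provenance", "meaning", "valid_from", "valid_until")

def is_matchable(record: dict) -> bool:
    """True only if every required field is present and non-empty."""
    return all(record.get(field) for field in REQUIRED)

record = {
    "id": "skill:sql-basics",
    "provenance": "imported from course catalog v3",
    "meaning": "Can write basic SELECT queries with joins",
    "valid_from": "2024-01-01",
    "valid_until": "2026-01-01",
}

print(is_matchable(record))                    # True
print(is_matchable({"id": "skill:unlabeled"})) # False -- excluded from matching
```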
OSC works toward open standards, APIs and export profiles. Existing HR and learning systems should not have to be replaced.
Organizations can start with a clear use case and derive requirements for data, roles and operations.
Competency evidence and micro-credentials must be machine-readable, while still being understandable to HR and audit teams.
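A micro-credential that serves both audiences can be a flat, plainly named record that serializes cleanly. The fields below are illustrative, loosely in the style of verifiable-credential records, not a fixed OSC format:

```python
import json

# Sketch: a micro-credential that is machine-readable (plain JSON)
# while staying legible for HR and audit teams. Field names are
# illustrative assumptions.

credential = {
    "id": "cred:2024-0815",
    "type": "MicroCredential",
    "skill": "skill:sql-basics",
    "holder": "person:jane-doe",
    "issuer": "org:example-academy",
    "issued_on": "2024-08-15",
    "evidence": "evidence:cert-001",
}

serialized = json.dumps(credential, indent=2)  # machine exchange format
restored = json.loads(serialized)              # round-trips without loss
print(restored["skill"])  # skill:sql-basics
```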
Pilot patterns
The following patterns describe possible pilots for HR, education, engineering and product teams. They are intentionally bounded so value, data quality and interoperability become visible early.
Pilot review
A good pilot starts with a few skill profiles, clear evidence and a concretely reviewable process.
Working method
OSC formulates statements as working drafts. This creates room for tests without turning early patterns into finished standards.
HR, education and domain teams need to understand why a skill, evidence item or matching result is used.
Data model, IDs, versions and interfaces are described so development teams can test them.
Roles, approvals, sources and change processes are documented early so a pilot does not fail in operations.
Contact
Briefly describe which competency data, evidence or interfaces matter. We will assess whether a bounded pilot or an expert workshop is useful.