How GenericML works
GenericML turns ML experimentation into a configuration problem rather than a re‑engineering exercise: add domain features, pick a label, generate candidate models, and keep everything traceable.
The flow
- **Model the domain (DDD):** Define bounded contexts + aggregates so semantics are stable and leakage is reduced.
- **Define a decision problem:** Make the decision explicit (what action, what constraints, what outcomes).
- **Create a VectorSchema:** Versioned features (float / float[]), units, transforms, windowing rules, leakage controls.
- **Generate vectors:** Produce VectorInstances from domain events/telemetry. Store big arrays externally; keep hashes/refs in the graph.
- **Train ModelPacks (AutoML):** CPU‑first training + validation, producing deployable artefacts with evidence and validity envelopes.
- **Ensemble the signals:** Weighted, stacked, rule+model, confidence‑aware, or regime‑gated (mixture‑of‑experts).
- **Ship into apps/agents:** Expose via REST/gRPC/serverless; decisions happen at point‑of‑use, not in a separate notebook pipeline.
- **Log evidence + outcomes in Neo4j:** Trace everything: data → vector → model → ensemble → evidence → decision → outcome.
- **HITL continuous improvement:** Experts review errors, adjust thresholds, approve updates, and keep models aligned with reality.
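The VectorSchema step above could be sketched as plain dataclasses. Every class and field name here (`FeatureSpec`, `VectorSchema`, `leakage_cutoff`, the example features) is an illustrative assumption, not the real GenericML API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FeatureSpec:
    # Hypothetical field names; the real schema may differ.
    name: str
    dtype: str       # "float" or "float[]"
    unit: str        # e.g. "celsius", "g"
    transform: str   # e.g. "zscore", "log1p", "none"
    window: str      # windowing rule, e.g. "rolling_1h"

@dataclass(frozen=True)
class VectorSchema:
    name: str
    version: str               # versioned so vectors stay comparable over time
    features: tuple            # tuple of FeatureSpec; frozen to prevent drift
    label: str                 # the decision label this schema targets
    leakage_cutoff: str        # no feature may use data after this time point

schema = VectorSchema(
    name="pump_health",
    version="1.2.0",
    features=(
        FeatureSpec("bearing_temp", "float", "celsius", "zscore", "rolling_1h"),
        FeatureSpec("vibration_fft", "float[]", "g", "none", "window_256"),
    ),
    label="failure_within_7d",
    leakage_cutoff="event_time",
)
print(schema.version)  # → 1.2.0
```

Freezing the dataclasses mirrors the versioning idea: a schema is immutable once published, and any change produces a new version.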
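The vector-generation step keeps large arrays out of the graph: store the blob externally and keep only a content hash/ref on the node. A minimal sketch, assuming a dict as a stand-in for the external blob store (the function and key names are hypothetical):

```python
import hashlib
import json

def vector_instance(schema_version, features, big_array, store):
    """Store the large array externally; keep only a content hash + ref
    in the graph node. Hypothetical layout, not the real GenericML API."""
    payload = json.dumps(big_array).encode()
    digest = hashlib.sha256(payload).hexdigest()
    store[digest] = payload          # external blob store, keyed by hash
    return {
        "schema_version": schema_version,  # ties the vector to its schema
        "features": features,              # small scalars stay inline
        "array_ref": digest,               # only the hash/ref enters the graph
    }

store = {}
node = vector_instance("1.2.0", {"bearing_temp": 71.3}, [0.1, 0.2, 0.3], store)
```

Content-addressing by hash makes the reference self-verifying: re-hashing the fetched blob confirms the graph node still points at the exact array it was built from.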
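The ModelPack step (CPU-first AutoML with evidence and validity envelopes) could look like the loop below. The `ModelPack` layout, the candidate interface (`fit`/`predict`, sklearn-style), and the envelope-as-feature-ranges idea are all assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class ModelPack:
    # Hypothetical artefact layout: model + validation evidence + envelope.
    model: object
    metrics: dict     # validation evidence, e.g. {"val_accuracy": 0.91}
    envelope: dict    # per-feature min/max seen in training: the range the
                      # pack was validated on, so out-of-envelope inputs
                      # can be flagged at serving time

def train_modelpacks(candidates, X_train, y_train, X_val, y_val):
    """CPU-first sweep: fit each candidate, score it on held-out data,
    and attach the same validity envelope to every resulting pack."""
    envelope = {
        "min": [min(col) for col in zip(*X_train)],
        "max": [max(col) for col in zip(*X_train)],
    }
    packs = []
    for make_model in candidates:
        model = make_model()
        model.fit(X_train, y_train)
        correct = sum(
            int(model.predict([x])[0] == y) for x, y in zip(X_val, y_val)
        )
        packs.append(
            ModelPack(model, {"val_accuracy": correct / len(y_val)}, envelope)
        )
    # Best validation evidence first; all packs keep their evidence attached.
    return sorted(packs, key=lambda p: -p.metrics["val_accuracy"])
```

Any model class exposing `fit`/`predict` can be a candidate, which is what lets the sweep stay CPU-first and framework-agnostic.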
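Two of the ensembling modes listed above, weighted and confidence-aware, can be sketched in a few lines; the threshold value and the abstain-on-low-confidence behaviour are illustrative choices, not GenericML defaults:

```python
def weighted_ensemble(preds, weights):
    """Weighted average of model predictions with fixed per-model weights."""
    return sum(p * w for p, w in zip(preds, weights)) / sum(weights)

def confidence_gated(preds_conf, threshold=0.6):
    """Confidence-aware: only models above the threshold vote, weighted by
    their own confidence; if none qualify, abstain (return None) so the
    decision can fall through to a HITL reviewer."""
    voting = [(p, c) for p, c in preds_conf if c >= threshold]
    if not voting:
        return None
    total = sum(c for _, c in voting)
    return sum(p * c for p, c in voting) / total

print(weighted_ensemble([0.9, 0.4], [2.0, 1.0]))  # (0.9*2 + 0.4*1)/3 ≈ 0.733
```

Returning `None` instead of a low-confidence guess is one way the ensemble layer feeds the HITL loop: abstentions become review items rather than silent decisions.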
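The Neo4j lineage step, data → vector → model → ensemble → decision → outcome, amounts to writing one small path per decision. The node labels, relationship types, and parameter names below are hypothetical, not the real GenericML graph schema:

```python
# Hypothetical graph schema; label and relationship names are assumptions.
LINEAGE_CYPHER = """
MERGE (v:VectorInstance {ref: $vector_ref})
MERGE (m:ModelPack {id: $model_id, version: $model_version})
MERGE (e:Ensemble {id: $ensemble_id})
MERGE (d:Decision {id: $decision_id, at: $timestamp})
MERGE (o:Outcome {id: $outcome_id})
MERGE (v)-[:SCORED_BY]->(m)
MERGE (m)-[:MEMBER_OF]->(e)
MERGE (e)-[:PRODUCED]->(d)
MERGE (d)-[:OBSERVED]->(o)
"""

def log_decision(session, **params):
    """Run against a neo4j driver session; passing values as parameters
    keeps the write plan cacheable and injection-safe."""
    session.run(LINEAGE_CYPHER, **params)
```

Because every hop is a `MERGE`, replaying the same evidence is idempotent, and tracing a bad outcome back to its vector and schema version is a single graph traversal.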
One practical rule
If you can’t explain a prediction in terms of the domain model, you probably don’t have the right bounded context or vector schema yet.