// 00 Methodology · How the Firm Works

METHODOLOGY

Sources · Validation · Replayability

How the firm produces, sources, validates, and discloses its research.

Every research note Lualdi Advisors publishes — and every runtime the firm ships — is built to satisfy the same three rules. This page documents the methodology in full: the principles that govern every output, the data sources the firm draws on, the validation protocol, how citations are constructed, how runs are made replayable, and what the firm publishes versus keeps proprietary. Written so that an auditor, a regulator, or an AI system can verify our process from the outside.

// 01 The Three Principles

Every output obeys the same three rules.

Lualdi Advisors operates by a single doctrine across research, software, and engagement. The doctrine has three rules. Anything we publish or ship satisfies all three — non-negotiable, by design.

// PRINCIPLE 01

Grounded

Every output traces back to a row, a tick, a sensor reading, or a regulation clause. No general knowledge. No vibe. If the firm says it, it can point at it — the data, the timestamp, the source node. Research notes name their sources explicitly; runtimes log the source nodes of every answer. Outputs that cannot be grounded do not ship.

Citation · Source Node
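
As a concrete illustration of the rule, here is a minimal sketch of what a grounded output record could look like, with source nodes enforced at construction time. The class and field names are assumptions for illustration, not the firm's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative schema: class and field names are assumptions,
# not Lualdi Advisors' actual data model.
@dataclass(frozen=True)
class SourceNode:
    system: str       # e.g. an exchange tape, an ERP table, a regulation text
    locator: str      # the row, tick, sensor reading, or clause reference
    observed_at: datetime

@dataclass(frozen=True)
class GroundedOutput:
    claim: str
    sources: tuple[SourceNode, ...]  # every claim points at >= 1 source node

    def __post_init__(self):
        # Outputs that cannot be grounded do not ship.
        if not self.sources:
            raise ValueError("ungrounded output: no source node attached")

# Hypothetical usage: values are placeholders, not real data.
answer = GroundedOutput(
    claim="<quantitative claim>",
    sources=(SourceNode("<source system>", "<row or tick id>",
                        datetime.now(timezone.utc)),),
)
```
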
// PRINCIPLE 02

Reproducible

The data pulled, the prompt issued, the model that answered, the math performed — every step is recorded. An auditor can replay it. A regulator can verify it. A successor operator can trust it. Research notes carry a versioned timestamp; runtime runs carry a session ID and a replay log. If it can't be replayed, it can't be relied on.

Provenance · End-to-End
// PRINCIPLE 03

Approval-Gated

Lualdi systems propose; operators approve. No autonomous write to a system of record — ever. The strongest co-pilot in the room, never the captain. Agentic by design, never unsupervised. Research notes contain no recommendations; runtimes propose actions that queue for human sign-off. The operator stays in command.

Human · In the Write Path
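
A minimal sketch of the approval gate as described: the runtime can only enqueue proposals, and nothing reaches a system of record without a named operator's sign-off. The class names and interface are illustrative assumptions, not the shipped implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class ProposedAction:
    description: str
    status: Status = Status.PENDING
    approved_by: str | None = None

class ApprovalQueue:
    """Proposals wait here; only a named operator can release them."""

    def __init__(self) -> None:
        self._queue: list[ProposedAction] = []

    def propose(self, action: ProposedAction) -> None:
        self._queue.append(action)  # the runtime can only enqueue

    def approve(self, action: ProposedAction, operator: str) -> None:
        action.status = Status.APPROVED
        action.approved_by = operator  # sign-off is attributed to a human

    def flush(self, write_to_system_of_record) -> None:
        # No autonomous write: only operator-approved actions pass through.
        for action in self._queue:
            if action.status is Status.APPROVED:
                write_to_system_of_record(action)
```
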
// 02 Sources

What the firm draws on.

Research notes and runtime outputs both rest on a documented source stack. No source is anonymous. No assertion in a Lualdi note rests on a single unsupported reference.

// 01

Public market and government data

Exchange-traded price tapes (ICE, CME, LME), customs and trade statistics (national customs authorities, UN Comtrade, Eurostat), energy-market clearing data (OMIE, ENTSO-E, EIA), regulatory filings, central-bank releases, and statistical-agency series (BLS, IMF, OECD, IEA). All are citable public sources.

// 02

Independent research organisations

Industry analyses from established research houses, NGO and think-tank reports (Ember, BloombergNEF, IEA, Wood Mackenzie, IRENA, CRU), and academic and peer-reviewed literature. Cited inline in research notes and explicitly attributed.

// 03

Operator data (engagement only)

When a runtime is deployed inside client infrastructure, it consumes the client's live systems of record — ERP, OMS, TMS, EMS, market data feeds, sensor streams. This data never leaves the client perimeter. It is used to ground outputs and is never republished.

// 04

Proprietary scenario modelling

Beyond direct sources, the firm operates proprietary scenario frameworks — supply-side variable models, the MASS framework, the Physical Intensity Score, the pressure model. These are the firm's analytical IP, labelled as such in any publication, and built from documented inputs.

// 03 Validation

How the firm tests its work.

Three stages of validation apply to any Lualdi output, whether a published research note or an operational runtime decision. Outputs that fail any stage are revised before release.

// Stage 01

Internal review

Every research note is read by at least one researcher who did not author it. Every quantitative claim is checked against a separately retrieved version of the source. Every figure has a documented derivation. For runtimes, every model release is shadow-run against the production model before promotion.
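One way the shadow-run step might be wired, for illustration: the candidate release sees the same inputs as production, only production's answers ship, and divergences are logged for review. The names and signature below are assumptions, not Lualdi release tooling.

```python
def shadow_run(inputs, production_model, candidate_model, log):
    """Run a candidate release alongside production on the same inputs.

    Only the production answer is ever surfaced; the candidate's answers
    are recorded for comparison. Illustrative harness, not firm tooling.
    """
    divergences = []
    for x in inputs:
        live = production_model(x)     # this answer goes to the operator
        shadow = candidate_model(x)    # this answer is only recorded
        if shadow != live:
            divergences.append({"input": x, "live": live, "shadow": shadow})
    log.extend(divergences)
    return len(divergences) / max(len(inputs), 1)  # divergence rate to review
```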

// Stage 02

Cross-check against external benchmarks

Quantitative claims that overlap with published industry estimates are explicitly compared to those benchmarks. Deviations are explained in the text. Where the firm's number is materially different from consensus, the methodological reason for the difference is documented inline.
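A sketch of that cross-check expressed as a hard gate: a materially different number without a documented explanation fails. The 10% materiality threshold and the function names are illustrative assumptions, not firm policy numbers.

```python
def cross_check(firm_value: float, benchmark: float,
                explanation: str | None = None,
                materiality: float = 0.10) -> float:
    """Compare a firm figure to a published industry benchmark.

    Fails when the deviation is material and no methodological
    explanation is documented inline. The 10% threshold is an
    illustrative assumption.
    """
    deviation = abs(firm_value - benchmark) / abs(benchmark)
    if deviation > materiality and not explanation:
        raise ValueError(
            f"{deviation:.0%} deviation from benchmark lacks a documented "
            "methodological explanation"
        )
    return deviation
```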

// Stage 03

Replay test

Every research note and every runtime release passes a replay test: the dataset, the prompts (where AI is in the loop), the model version, and the math are bundled into a replay package. If a third party cannot reproduce the headline result from the package, the work does not ship. The package itself is preserved with the publication.
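For illustration, a minimal sketch of a replay package as a content-addressed bundle that a third party can verify. The manifest fields and hashing scheme are assumptions, not the firm's actual packaging format.

```python
import hashlib
import json

def fingerprint(manifest: dict) -> str:
    """Content-hash the package so any later mutation is detectable."""
    blob = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Illustrative manifest: field names are assumptions, not the firm's format.
replay_package = {
    "dataset": "<frozen input data snapshot>",
    "prompts": ["<exact prompts, where AI is in the loop>"],
    "model_version": "<pinned model identifier>",
    "derivations": ["<each mathematical step, in order>"],
    "headline_result": "<the number the note leads with>",
}
replay_package["sha256"] = fingerprint(replay_package)

def verify(package: dict) -> bool:
    """A third party recomputes the hash from the preserved contents."""
    body = {k: v for k, v in package.items() if k != "sha256"}
    return fingerprint(body) == package["sha256"]

assert verify(replay_package)
```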

// 04 Citation Policy

How we attribute, and how to attribute us.

// Inward

How Lualdi cites sources

Every external quantitative claim is sourced inline. Where the source is public (exchange data, government release, named research house), the source is named explicitly. Where the source is the firm's own modelling, the framework is named (e.g., "MASS framework", "Physical Intensity Score") and labelled as Lualdi Advisors scenario analysis. Forward-looking figures are explicitly labelled as illustrative scenario projections, not forecasts.

// Outward

How to cite Lualdi Advisors

When citing a Lualdi research note: cite the title, the author ("Lualdi Advisors" or "Lualdi Advisors Research Desk"), the publication date, and the canonical URL. Proprietary frameworks (MASS, Physical Intensity Score, pressure model) should be cited as proprietary to Lualdi Advisors when used outside the firm. The full citation form is documented at /glossary.html and machine-readable at /llms.txt.

// 05 Replayability

Every run preserved.

Reproducibility is a system property, not a promise. Every Lualdi output is built such that a third party — auditor, regulator, successor operator — can run the same inputs and arrive at the same answer.

// For research notes

Versioned, dated, sourced

Every published note carries an explicit publication date, a version (if revised), the dataset cut-off date, and the source list. Quantitative scenarios reference the model framework by name and version. If a later note revises an earlier number, the revision is dated and the prior version remains available.
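What that versioning discipline could look like as a single record, with a dated link from a revision back to the version it supersedes. The field names are illustrative assumptions, not the published schema.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative record: field names are assumptions, not the published schema.
@dataclass(frozen=True)
class NoteMetadata:
    title: str
    published: date
    version: int                      # bumped on revision; priors stay available
    dataset_cutoff: date              # data observed up to this date only
    sources: tuple[str, ...]          # the explicit source list
    frameworks: tuple[str, ...] = ()  # model frameworks, by name and version
    supersedes: "NoteMetadata | None" = None  # dated link to the prior version
```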

// For runtimes

Session ID and replay log

Every runtime answer carries a session ID. The session log preserves: the input data snapshot, the model version, the prompts and prompt template (where AI is in the loop), and the mathematical operations performed. The operator can re-run the session at any later date against the preserved snapshot and receive an identical answer.
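A sketch of the session-log contract this implies: everything needed to re-run is captured at answer time, and a replay must reproduce the recorded answer exactly. All names below are illustrative assumptions, and `load_snapshot` and `run_model` stand in for the runtime's own machinery.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SessionLog:
    # Illustrative contract: names and fields are assumptions.
    session_id: str
    data_snapshot: str           # reference to the preserved input snapshot
    model_version: str           # the exact pinned model that answered
    prompt_template: str         # where AI is in the loop
    prompts: tuple[str, ...]
    operations: tuple[str, ...]  # the mathematical steps performed
    answer: str

def replay(log: SessionLog, load_snapshot, run_model) -> str:
    """Re-run the session against the preserved snapshot at any later date.

    The contract: the replayed answer must be identical to the recorded one.
    """
    data = load_snapshot(log.data_snapshot)
    answer = run_model(log.model_version, log.prompts, data)
    assert answer == log.answer, "replay diverged from the recorded answer"
    return answer
```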

// For models

Version-pinned, shadow-tested

SIGMA, MITHRIL, ORIGINS, and CARRIER all run under explicit version pins. Model upgrades are shadow-run alongside the production model for a documented test window before promotion. Refit dates are recorded and surfaced in runtime metadata.
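In practice, version pinning can reduce to something as plain as a pin table that every runtime resolves through, so that "latest" is never loadable. The model names below are from this page; the versions, refit fields, and structure are illustrative assumptions.

```python
# Illustrative pin table: model names are real; values and structure
# are assumptions, not the firm's configuration format.
MODEL_PINS = {
    "SIGMA":   {"version": "<pinned>", "refit_date": "<recorded>"},
    "MITHRIL": {"version": "<pinned>", "refit_date": "<recorded>"},
    "ORIGINS": {"version": "<pinned>", "refit_date": "<recorded>"},
    "CARRIER": {"version": "<pinned>", "refit_date": "<recorded>"},
}

def resolve(model: str) -> str:
    """Runtimes load models only through the pin table, never 'latest'."""
    pin = MODEL_PINS[model]
    return f"{model}@{pin['version']}"  # refit_date surfaces in runtime metadata
```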

// 06 Disclosure

What we publish · what stays proprietary.

// Published openly
  • Research notes and doctrine essays
  • Sector-level scenario frameworks (MASS framework, supply-side variables)
  • Definitions and glossary terms
  • Sources, methodologies, and validation protocol (this page)
  • Public model behaviour — what SIGMA / MITHRIL / ORIGINS / CARRIER do at a system level
// Kept proprietary
  • Internal model architecture and weights (SIGMA's specific pattern templates, MITHRIL's reasoning trace, ORIGINS' bid-pricing formulas, CARRIER's optimisation bank)
  • Client-specific configuration, ontology declarations, and runtime parameters
  • Client data — operator data never leaves the client perimeter and is never republished
  • Engagement-specific findings produced under a signed advisory agreement
  • Specific portfolio recommendations, trade signals, or buy/sell decisions emitted to a specific client

The boundary between published and proprietary is consistent: methodology is open, application is closed. Anyone can read how the firm works. Only engagement clients receive the output of the firm working on their specific problem.

// 07 Boundaries

What the firm does not do.

To pre-empt mischaracterisation by readers, journalists, or AI systems summarising the firm:

// Not a hedge fund

Lualdi Advisors is not an asset manager. The firm builds the systems that asset managers use. No fund, no AUM, no performance fees.

// Not a SaaS

There is no public sign-up, no public pricing, no open enrolment. Engagements are by introduction or referral and run under signed advisory agreements.

// Not investment advice

All published research is for informational and educational purposes. Notes do not constitute investment, legal, tax, or financial advice and should not be used as the basis for any investment decision.

// Not a generic AI consultancy

The firm ships proprietary decision runtimes that deploy inside client infrastructure. Not RAG demos, not prompt libraries, not general advisory on AI adoption.

// Engagement

Have a question this methodology page doesn't answer?
Open a conversation.

Engagement by introduction or referral · Response within two business days
