HACKER Q&A
📣 killjoywashere

Need help drafting a profession's policy for AI governance


I've been asked to contribute to a fairly significant policy document for my profession, and I am seeking your help, particularly from the cryptography, ML, and legal folks, in drafting this concept around the chain of custody for AI inferences:

========== Inferences made by an AI marketed for use in decision-making (e.g. decision support) should be cryptographically signed using a certificate on the vendor’s machine, whose certificates should be managed in a Public Key Infrastructure program, so that the inferences are immutable and their provenance is traceable, and those signed inferences should be retained as part of the record.

Additionally, any verification or validation procedure performed by a person should result in the machine's signing certificate being countersigned by the person performing the verification or validation, such that this procedure is also captured in the signed inference. ==========

Is that a sensible way to ensure inferences are admissible as evidence? Does it cover causal interventions? What am I missing? Critique most welcome.
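A minimal sketch of the sign-and-countersign flow described in the draft, assuming Ed25519 keys and the widely used third-party `cryptography` package. The payload fields, key handling, and in-memory keys are illustrative only; in practice the vendor key would live in an HSM and its public key would be certified under the PKI the draft describes.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor's machine holds a signing key (illustrative: generated in memory here).
vendor_key = Ed25519PrivateKey.generate()

# Canonical serialization of the inference record so the signed bytes
# are reproducible by any verifier. Field names are invented examples.
inference = {"model": "triage-net", "version": "2.3.1", "output": "benign"}
payload = json.dumps(inference, sort_keys=True, separators=(",", ":")).encode()

vendor_sig = vendor_key.sign(payload)

# A human verifier countersigns over the payload *and* the vendor's
# signature, binding their review to this exact signed inference.
reviewer_key = Ed25519PrivateKey.generate()
counter_sig = reviewer_key.sign(payload + vendor_sig)

# Anyone holding the public keys can later check both layers;
# verify() raises InvalidSignature if anything was tampered with.
vendor_key.public_key().verify(vendor_sig, payload)
reviewer_key.public_key().verify(counter_sig, payload + vendor_sig)
print("both signatures verify")
```

Note the countersignature covers the vendor's signature, not just the payload, so the human attestation cannot be detached and reattached to a different signing event.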


  👤 freeesthaven Accepted Answer ✓
>cryptography... folks

Policies are useless, you need to encrypt data at rest and in transit whenever possible. Force them to burn zero days for the most minute things, and to infer wrongly based on what little metadata they get.

We need to put off any major decisions on AI another decade or two to ensure those who hold onto power through violence and deceit have passed on before we can begin to consider things like you describe.

"AI" is an oxymoron -- often it's folks who haven't taken so much as calculus wanting to make decisions with elaborate algorithms and bad data, because they so badly wanted to run a simple t-test but lacked the parameters for the variables.

So they invent pseudoscientific "AI" to approximate things until they get the answers they want.

(I used to be a policy folk. Once I saw how the sausage was made, I ran for the hills... or, to be more accurate, I drove a truck to them.)


👤 bobbiechen
It's somewhat implied in your phrasing, but I think it's also important to be explicit that you should record the software/model version of the AI used (as part of the signature/metadata). Otherwise the signature isn't very meaningful, since there's no way to know whether the inference was generated by a tampered or malicious program. There's a corresponding challenge for the vendor: doing proper versioning of the system as they develop and improve it.
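One way to make that explicit: fold the model identity and version into the payload that gets signed, so the signature attests to which software produced the inference. A stdlib-only sketch, with invented field names and placeholder digest values:

```python
import hashlib
import json

# Hypothetical inference record: model identity and version travel inside
# the signed payload, so the signature covers *which* software produced it.
record = {
    "model_name": "triage-net",       # invented example values
    "model_version": "2.3.1",
    "weights_digest": "sha256:...",   # hash of the deployed weights (placeholder)
    "input_digest": "sha256:...",     # hash of the inputs (placeholder)
    "output": {"label": "benign", "score": 0.97},
}

# Canonical JSON (sorted keys, no whitespace) so every party computes the
# same bytes; this digest is what the vendor's key would actually sign.
canonical = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
digest = hashlib.sha256(canonical).hexdigest()
print(digest)
```

Hashing the weights and inputs (rather than embedding them) also addresses the case where the inputs themselves can't be disclosed.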

Depending on your needs, it may be useful to know about all inferences that occurred (even ones that were never used). Some kind of append-only ledger recording metadata could serve here.
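A minimal hash-chained ledger along those lines, sketched with the stdlib only (structure and function names are illustrative; a real system would persist entries and anchor the head hash externally):

```python
import hashlib
import json

GENESIS = "0" * 64

def entry_hash(prev_hash: str, metadata: dict) -> str:
    # Each entry's hash commits to the previous entry, forming the chain.
    body = json.dumps({"prev": prev_hash, "meta": metadata},
                      sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(body).hexdigest()

def append(ledger: list, metadata: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else GENESIS
    ledger.append({"prev": prev, "meta": metadata,
                   "hash": entry_hash(prev, metadata)})

def verify(ledger: list) -> bool:
    # Recompute every link; any deleted, reordered, or edited entry breaks it.
    prev = GENESIS
    for e in ledger:
        if e["prev"] != prev or e["hash"] != entry_hash(prev, e["meta"]):
            return False
        prev = e["hash"]
    return True

ledger: list = []
append(ledger, {"model_version": "2.3.1", "inference_id": "a1"})
append(ledger, {"model_version": "2.3.1", "inference_id": "a2"})
print(verify(ledger))  # True

# Tampering with a past entry is detectable.
ledger[0]["meta"]["inference_id"] = "forged"
print(verify(ledger))  # False
```

This records that an inference happened without requiring it to have been acted on; combining it with per-entry signatures would tie each record back to the vendor's key.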

Happy to chat more on the topic, as I am actively working on something similar - bobbie.chen@anjuna.io


👤 wsh
On your specific text:

- What if the AI is supplied by a vendor but runs on a computer system controlled by the user or by a third party? This could happen if the user doesn’t want, or isn’t allowed, to disclose the inputs or their derivatives.

- Assuming there’s a need to mandate cryptographic digital signatures at all, why require certificates and PKI? Wouldn’t it suffice for the signer to announce a public key and, if necessary, its revocation?

- Cryptographic signatures are still overwhelmingly the exception, not the rule, in legal evidence. Courts routinely admit ordinary paper and electronic business records, authenticated, when necessary, by their creators or custodians. (See, for example, Rules 901 and 902 in the Federal Rules of Evidence.) Digital signatures might not make this easier; consider the potential for conflicting expert testimony about signing and key management schemes and their weaknesses.

More generally:

As a professional engineer who uses my own and others’ software, I don’t think an AI model is fundamentally different from a spreadsheet, a card deck with a FORTRAN program, or a table or formula in a printed handbook. If I’m relying on something for my work, it’s my professional responsibility to assess its validity, suitability for purpose, and limitations; to know how to use it properly; and to interpret and evaluate its output.

The standard of care with which I do those things, the nature and extent of any documentation I might produce, and the arrangements for the retention, protection, and future authentication of those materials in case of a dispute, will vary with the circumstances, including the potential for harm to the client or to the public and my own organization’s appetite for risk.

Perhaps your context is different, but I hesitate to endorse a highly prescriptive approach. Engineering regulators use very broad language; for example, Florida’s rule says only, “The engineer shall be responsible for the results generated by any computer software and hardware that he or she uses in providing engineering services” [1], and Professional Engineers Ontario has guidelines [2] but not specific standards.

[1] Florida Administrative Code, Rule 61G15-30.008

[2] “Professional Engineers Using Software-Based Engineering Tools,” April 2011, https://www.peo.on.ca/sites/default/files/2019-07/Profession...