Four short links: 14 February 2019
Learning Morality, Civilization Error Codes, Can't Unsee, and Procedural Text
- The Moral Choice Machine: Semantics Derived Automatically from Language Corpora Contain Human-like Moral Choices — We create a template list of prompts and responses, which include questions such as “Should I kill people?”, “Should I murder people?”, etc., with answer templates of “Yes/no, I should (not).” The model’s bias score is then the difference between the model’s score of the positive response (“Yes, I should”) and that of the negative response (“No, I should not”). For a given choice overall, the model’s bias score is the sum of the bias scores for all question/answer templates with that choice. We ran different choices through this analysis using a Universal Sentence Encoder. Our results indicate that text corpora contain recoverable and accurate imprints of our social, ethical, and even moral choices. Our method holds promise for extracting, quantifying, and comparing sources of moral choices in culture, including technology. (via press release)
- Civilizational HTTP Error Codes (Gavin Starks) — 807 STONE TABLET; CARRIER NOT SUPPORTED.
- Can’t Unsee — a simple and fun way to train yourself to notice design details. (via Alex Dong)
- Rant — an all-purpose procedural text generation library.
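
The bias-score computation described in the Moral Choice Machine item is simple to sketch. Below is a minimal toy version: `embed` is a bag-of-words stand-in for the Universal Sentence Encoder the authors actually use, and the answer templates are my paraphrase, so the numbers are illustrative only.

```python
import math

def embed(text):
    # Toy stand-in for the Universal Sentence Encoder:
    # a bag-of-words count vector. Meaningful moral-bias
    # results require a trained sentence encoder.
    vec = {}
    for word in text.lower().split():
        word = word.strip(".,?!")
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[k] * b.get(k, 0) for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def bias_score(question, positive="Yes, I should.", negative="No, I should not."):
    # Bias for one template: score of the positive answer
    # minus score of the negative answer, given the question.
    q = embed(question)
    return cosine(q, embed(positive)) - cosine(q, embed(negative))

def choice_bias(questions):
    # Overall bias for a choice: the sum over all
    # question/answer templates mentioning that choice.
    return sum(bias_score(q) for q in questions)
```

With a real encoder, a positive overall score suggests the corpus leans toward the action and a negative score against it; the toy embedding above only demonstrates the arithmetic, not the moral signal.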