
Position Paper

Toward a Discipline of Knowledge Engineering

Every formalization of an engineering subdiscipline in the past two decades has followed the same structural pattern: a function that organizations treated as labor-intensive, reactive, and subordinate to the 'real' engineering work was reconceived as a discipline in its own right when a scaling crisis made the ad-hoc arrangement untenable. In 2003, Benjamin Treynor Sloss founded Site Reliability Engineering at Google by asking what happens when you treat operations as a software engineering problem rather than a staffing problem. By 2016, the approach had scaled to over a thousand practitioners managing infrastructure that would have traditionally required five times that number (Petoff et al. 2016). In 2009, Patrick Debois organized the first DevOpsDays conference in response to the structural separation between development and operations that produced coordination failures no amount of process could resolve (Debois 2009). DevSecOps extended the pattern by integrating security into the same continuous lifecycle rather than treating it as a gate at the end. In each case, the formalization succeeded because it reframed a labor problem as an engineering problem and provided the career paths, tooling, and institutional identity that allowed the new discipline to attract practitioners and organizational investment.

This paper argues that knowledge work (the function encompassing technical writing, information architecture, knowledge management, content strategy, documentation operations, and developer advocacy) is undergoing the same structural transition. The catalyst is not organizational pain alone, though the pain is real. The catalyst is AI. Large language models and autonomous agent systems are simultaneously the most demanding consumers of knowledge infrastructure (they depend on structured, unambiguous, machine-parseable knowledge artifacts to function), the most prolific producers of knowledge artifacts (they generate documentation, summaries, and knowledge representations at a scale that manual review cannot absorb), and the most disruptive force acting on the methods by which knowledge is captured, validated, and maintained. The transformation is as fundamental to knowledge work as containerization and orchestration were to operations. And it is happening to a function that, unlike operations at Google in 2003 or deployment at Flickr in 2009, has never been formalized as a discipline.

The convergence of this research program's findings with the AI transformation creates a structural urgency. The legibility thesis (Salman 2026a) established that the quality of every external operation on a system is bounded by the quality of the knowledge infrastructure that system makes available. The knowledge architecture framework (Salman 2026b) demonstrated that knowledge artifacts designed for a single consumer class fail when new consumer classes arrive, and that the most consequential new consumer class is autonomous agents. The same five failure patterns (narrative substitution, context privatization, granularity collapse, temporal unbinding, verification absence) recurred across every domain examined. If the knowledge layer determines what can be known about the systems it describes, and if AI systems are now both the primary producers and primary consumers of that layer, then the design decisions being made today about how knowledge is captured, structured, and transferred will constitute the invisible infrastructure (Bowker and Star 1999) that shapes organizational cognition for the next generation. Those decisions should be made by a discipline that understands their consequences.

The exact claim is this: Knowledge Engineering (the systematic capture, formalization, organization, transfer, and maintenance of expertise as an engineering material) must formalize as a discipline now, during the AI transition, because the alternative is that the knowledge substrate on which every AI-augmented system depends will be designed by default rather than by method. The term is reclaimed from Feigenbaum's (1977) original use in AI and expert systems research, where the core insight (that human expertise is an engineering material requiring systematic methods for capture and formalization) anticipated the organizational problem by four decades. The reclamation is substantive: the knowledge acquisition bottleneck that limited expert systems in the 1980s is the same bottleneck that limits organizational knowledge transfer today, and AI tools that assist with knowledge capture do not eliminate the bottleneck so much as transform its character from extraction difficulty to validation complexity.

The formalization would not create a new function. It would name and unify the function that already exists across at least seven roles that share a common material (knowledge), a common process (capture → formalization → organization → transfer → maintenance), and a common scaling crisis (AI-driven demand that exceeds the capacity of ad-hoc arrangements). The argument proceeds through five observations derived from the research program, a framework for the knowledge lifecycle as an engineering process, the structural conditions that make this moment catalytic rather than merely opportune, and the implications for organizational design when the discipline becomes visible.

Observations

Five findings from the research program

Bowker and Star (1999) identified a defining property of infrastructure: it becomes visible only at the moment of breakdown. A working electrical grid is background; a blackout is foreground. The same asymmetry governs knowledge infrastructure. Documentation that works is transparent: developers find what they need, agents parse capability schemas correctly, regulators locate required disclosures, new employees acquire organizational knowledge at the expected rate. Documentation that fails produces visible costs: integration errors cascade, onboarding stalls, governance mechanisms operate on incorrect assumptions, and the analytical outputs that depend on structured evidence degrade to narrative restatement.

This asymmetry appeared consistently across the research program. In the infrastructure analyses, systems with adequate documentation received no analytical credit for documentation quality; their analysis simply proceeded to substantive findings. Systems with inadequate documentation consumed analytical capacity on the gap itself, and the resulting analyses were bounded by what could be inferred rather than verified. Latour (1987) describes a parallel phenomenon in the construction of scientific facts: inscription devices (instruments, recording systems, measurement apparatus) become invisible once they produce stable outputs, and the labor required to build and maintain them disappears from the published account. Knowledge artifacts function as inscription devices for organizational phenomena. They make processes, architectures, and decision rationale visible and stable. The labor that produces them is subject to the same erasure.

The practical consequence is chronic under-investment. Organizations budget for the systems that knowledge infrastructure describes but not for the infrastructure itself, because the infrastructure's successful operation is indistinguishable from its absence. Simon's (1969) distinction between the sciences of the natural and the sciences of the artificial is relevant: knowledge infrastructure is a designed artifact, and its quality is a design question rather than a discovery question. Under-investment in design produces infrastructure whose failure modes become visible only after they have propagated downstream, by which point the cost of remediation exceeds the cost of the original investment by an order of magnitude.

Polanyi's (1966) observation that 'we can know more than we can tell' identifies the structural floor beneath every knowledge engineering effort. Expert practitioners carry knowledge that resists formalization: the senior engineer's intuition about which architectural decisions will produce maintenance burdens, the experienced operator's sense for anomalous system behavior before monitoring dashboards register a signal, the product manager's judgment about which feature trade-offs will produce user satisfaction. This tacit knowledge is organizationally critical and individually perishable.

Nonaka and Takeuchi (1995) proposed the SECI model (Socialization, Externalization, Combination, Internalization) as the organizational process through which tacit knowledge converts to explicit knowledge and back. Each conversion mode requires specific organizational conditions and maps to different existing roles: developer advocates facilitate socialization through community engagement; technical writers perform externalization through structured documentation; information architects enable combination through taxonomy and navigation design; knowledge managers oversee internalization through training and adoption programs. The fragmented role structure means that no single discipline owns the full conversion cycle.

AI transforms the conversion problem without solving it. Large language models can assist with externalization (transcribing, summarizing, drafting documentation from conversations and code), combination (connecting discrete knowledge elements across repositories), and portions of internalization (generating learning materials, practice exercises). What they cannot do is validate the knowledge they produce against the tacit understanding of the practitioners whose expertise the knowledge claims to represent. The bottleneck shifts from extraction difficulty (getting knowledge out of practitioners' heads) to validation complexity (determining whether the knowledge artifacts that AI produces accurately capture what practitioners know). Orr's (1996) ethnographic study of Xerox technicians demonstrated that practitioners' most valuable knowledge was embedded in stories shared during joint repair sessions, knowledge that existed in community practice rather than in any documentation system. AI can generate plausible documentation from code and conversations. Whether that documentation captures the practitioner's actual understanding, or merely produces a fluent approximation that passes surface review, is a validation question that requires the disciplinary methods Knowledge Engineering would formalize.

Winner's (1980) thesis that artifacts have politics applies with particular force to knowledge artifacts. The design decisions embedded in a documentation architecture (what categories exist, what information occupies each category, what is included and omitted, what resolution level serves as the default) constitute political choices that shape what can be known about the system the documentation describes. These choices are consequential whether they are made deliberately by a knowledge architect or incidentally by whichever engineer happened to write the README.

Latour and Woolgar's (1979) study of laboratory practice demonstrated that the production of scientific facts involves extensive inscription work: measuring, recording, tabulating, graphing, and writing that transforms phenomena into stable, transportable representations. The inscriptions constitute the phenomena as objects of knowledge, performing a function that exceeds description. Knowledge artifacts in organizational settings perform the same constitutive function. An API reference constitutes the API as a consumable capability by determining which aspects are visible, which constraints are specified, and which behaviors are left undocumented. Callon's (1998) performativity concept extends the point: the knowledge artifact actively shapes the informational environment within which decisions about the system are made.

The AI transformation amplifies the political stakes. When AI systems consume knowledge artifacts to make autonomous decisions (tool selection, integration, deployment, compliance assessment), the design choices embedded in those artifacts become decision inputs at machine scale and machine speed. A knowledge schema that omits rate-limit constraints because the original human author considered them obvious produces a different failure when consumed by a human developer (who may find the information elsewhere) than when consumed by an autonomous agent (which treats the schema as the complete constraint specification). A discipline that claims responsibility for knowledge artifact design can make these political choices deliberately, with awareness of their downstream consequences across consumer classes. Without such a discipline, the choices are made by default, and their consequences propagate at the speed of the systems that consume them.
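The rate-limit example can be made concrete. The sketch below is purely illustrative: the schema fields and the `plan_calls` helper are hypothetical names, not drawn from any real tool specification, but they show how an omitted constraint silently becomes a decision input for an agent that treats the schema as the complete constraint specification.

```python
# Illustrative sketch: how an omitted constraint becomes a silent
# decision input for an autonomous agent. All names are hypothetical.

def plan_calls(tool_schema: dict, desired_calls: int) -> int:
    """Decide how many calls an agent schedules in one window.

    The agent treats the schema as complete: if no rate limit is
    declared, it assumes there is none.
    """
    limit = tool_schema.get("rate_limit_per_minute")
    if limit is None:
        return desired_calls          # agent assumes unconstrained
    return min(desired_calls, limit)  # explicit metadata bounds the plan

# The human author omitted the limit because it seemed "obvious."
implicit_schema = {"name": "search", "endpoint": "/v1/search"}

# A knowledge-engineered schema makes the constraint machine-readable.
explicit_schema = {"name": "search", "endpoint": "/v1/search",
                   "rate_limit_per_minute": 60}

print(plan_calls(implicit_schema, 500))  # 500: failure at machine speed
print(plan_calls(explicit_schema, 500))  # 60: the constraint is honored
```

A human developer might recover the missing limit from a forum thread or a 429 response; the agent, consuming the schema at machine speed, fails before anyone notices the omission.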

Abbott's (1988) analysis of the system of professions identifies the structural conditions under which disciplines form: a body of work becomes sufficiently complex that its practitioners require specialized knowledge, and formalization occurs when practitioners claim jurisdiction through credentialing, standard-setting, and institutional boundary work. SRE claimed jurisdiction over production reliability. DevOps claimed jurisdiction over the deployment boundary. DevSecOps claimed jurisdiction over security within the continuous delivery lifecycle. Each formalization succeeded because it named a problem practitioners already recognized, proposed a structural solution, and provided career paths that attracted engineering talent to work previously classified as overhead.

Knowledge work satisfies Abbott's conditions. The work is complex, requires specialized knowledge, and is currently performed by practitioners distributed across at least seven roles (technical writer, information architect, knowledge manager, content strategist, documentation program manager, developer advocate, documentation specialist) who lack shared identity, shared career paths, and shared professional standards. A technical writer and an information architect share more methodological ground (taxonomy design, audience analysis, progressive disclosure, content lifecycle management) than either shares with the software engineers they typically report to. The fragmentation produces coordination failures structurally identical to those DevOps identified at the deployment boundary: handoffs between knowledge holders and knowledge producers create information asymmetry; quality standards vary because no discipline owns the standard; maintenance degrades because no role has lifecycle responsibility.

Star and Griesemer's (1989) concept of boundary objects explains both why the fragmentation persists and what formalization would change. Each knowledge role has developed locally adapted practices that coordinate work within their immediate community. These local adaptations lack the shared structure that would enable coordination across roles. A formalized discipline provides the boundary objects (the shared vocabularies, methods, and standards) that allow specialized roles to operate as members of a single profession. The precedent is clear: SRE did not eliminate the distinction between monitoring, capacity planning, and incident response; it unified them under a disciplinary identity that made the shared foundation visible and investable.

The legibility thesis established that a system is legible to the extent that an external reasoner can determine what it claims to do from publicly available structured evidence. By this definition, knowledge work as a discipline is illegible to the organizations that employ knowledge workers. The discipline lacks structured descriptions of its methods, its competency levels, its quality metrics, and its organizational value proposition. The same five failure patterns apply. Narrative substitution: knowledge work is described in terms of outputs ('documentation') rather than the engineering methods that produce them. Context privatization: the methodological knowledge that experienced practitioners accumulate remains private, transferred through mentorship or lost when practitioners leave. Granularity collapse: the discipline's diverse functions are collapsed into a single label ('docs') that erases the distinctions between acquisition, formalization, organization, and transfer. Temporal unbinding: practices evolve without versioning, so organizations cannot distinguish current method from inherited convention. Verification absence: the discipline publishes no metrics that would allow organizations to evaluate its effectiveness.

Hutchins's (1995) distributed cognition framework illuminates the organizational consequence. Cognitive work in complex systems distributes across human practitioners, material artifacts, and institutional structures in configurations the system's designers did not anticipate. Knowledge infrastructure is a critical component of this distribution. When the discipline that produces this infrastructure is illegible, the infrastructure's design becomes an unmanaged variable in the organization's cognitive system. Suchman's (1987) analysis of situated action adds that plans, including documentation plans and knowledge architecture designs, function as resources for action rather than determinants of action: their value depends on the practitioner's ability to adapt them to situated circumstances.

Dewey's (1927) argument that effective publics form only when citizens have access to the material facts about conditions that affect them applies at the organizational level: knowledge workers cannot form an effective professional public, one that advocates for investment, sets standards, and advances method, when the discipline itself lacks the legibility infrastructure that would make it visible as a coherent field. Formalization is the discipline's own legibility project.

Framework

The knowledge lifecycle as engineering process

The observations above converge on a structural claim: the work performed across fragmented knowledge roles constitutes a single engineering process with five phases. The phases are analytically distinct but operationally interleaved, and each draws on different competencies while sharing a common material: knowledge as an organizational resource that can be captured, structured, validated, delivered, and maintained.

The process has a structural parallel in Nonaka and Takeuchi's (1995) knowledge creation spiral, but differs in emphasis. The SECI model describes how knowledge converts between tacit and explicit forms. The knowledge engineering lifecycle describes what happens to explicit knowledge once produced: how it is organized, how it is validated, how it reaches its consumers, and how it is kept current as the systems it describes evolve. AI transforms every phase of this lifecycle, which is precisely why the lifecycle needs disciplinary ownership rather than ad-hoc distribution across roles that address individual phases in isolation.

Capture. Identifying, extracting, and capturing knowledge from practitioners, systems, and organizational memory. This phase confronts the tacit knowledge problem directly: the most valuable organizational knowledge resists the extraction methods most commonly employed. Feigenbaum's (1977) knowledge acquisition research in expert systems identified this bottleneck four decades ago, and it remains the discipline's hardest problem. AI-assisted transcription, code analysis, and conversational extraction reduce the cost of the tacit-to-explicit conversion, but the validation question (does the extracted knowledge accurately represent what the practitioner knows?) requires disciplinary methods that do not yet have an institutional home.

Formalization. Structuring captured knowledge in systematic representations (taxonomies, ontologies, metadata schemas, content models) that make it machine-processable and human-discoverable. This phase applies the classification expertise that Bowker and Star (1999) analyzed: every classification decision embeds assumptions about who will use the knowledge and how, and these assumptions persist as invisible infrastructure long after the decision is forgotten. The knowledge architecture framework's design primitives (structural decomposability, semantic self-description, temporal binding, modality independence) apply with particular force here, and the heterogeneous-reasoner problem makes formalization decisions consequential across consumer classes that the formalizer cannot anticipate.
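As a sketch of how the design primitives named in the paragraph above might surface as concrete metadata, the record below is one possible shape. The field names are hypothetical, illustrating the primitives rather than any existing standard.

```python
# Hypothetical knowledge-artifact record carrying three of the design
# primitives as explicit, machine-readable metadata.
from dataclasses import dataclass, field

@dataclass
class KnowledgeArtifact:
    artifact_id: str
    title: str
    # Semantic self-description: the artifact declares what it is for
    # and which consumer classes it is designed to serve.
    purpose: str
    consumer_classes: list = field(default_factory=list)
    # Temporal binding: the artifact states which system version it
    # describes, so staleness is detectable rather than inferred.
    describes_system_version: str = "unknown"
    # Structural decomposability: sections addressable on their own.
    sections: dict = field(default_factory=dict)

doc = KnowledgeArtifact(
    artifact_id="api-ref-001",
    title="Search API reference",
    purpose="reference",
    consumer_classes=["human-developer", "autonomous-agent"],
    describes_system_version="2.3.0",
    sections={"auth": "...", "rate-limits": "..."},
)
print(doc.describes_system_version)  # "2.3.0"
```

The point of the sketch is that each primitive becomes a field a machine can check, rather than a property a human must infer from prose.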

Organization. Arranging formalized knowledge in structures that support discovery, navigation, and consumption across different consumer classes. Procida's Diataxis framework provides the strongest existing methodology: different cognitive needs (learning, problem-solving, reference consultation, conceptual understanding) require structurally different knowledge artifacts, and conflating them degrades effectiveness for every consumer. Organization encompasses the information architecture decisions that determine findability and the metadata strategies that determine whether knowledge is surfaced contextually or retrieved only through explicit search.

Transfer. Delivering knowledge to its consumers in forms they can process and act upon. Transfer effectiveness depends on the match between the artifact's structure and the consumer's cognitive strategy (Gigerenzer 2000). Sweller's (1988) cognitive load theory provides the mechanism: unnecessary processing effort imposed by the artifact's format degrades the consumer's engagement with the knowledge itself. The knowledge architecture framework's extension of progressive disclosure (each resolution level must be complete and correct at its own depth) addresses a specific transfer failure observed across the research program. AI-mediated transfer (chatbots, retrieval-augmented generation, agent-facing tool schemas) introduces new transfer modalities that require the same deliberate design attention.
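The completeness-at-every-depth requirement of progressive disclosure can be sketched as follows. The levels and their content are invented for illustration; the invariant is that a consumer who stops reading at any level holds a true, if coarser, picture.

```python
# Illustrative resolution levels for one piece of knowledge. Each
# level is complete and correct at its own depth: deeper levels add
# precision but never contradict shallower ones.
levels = {
    1: "The service enforces per-key rate limits.",
    2: "Requests are limited to 60/min per key; excess returns 429.",
    3: ("Requests are limited to 60/min per key, measured over a "
        "sliding window; excess returns HTTP 429 with a Retry-After "
        "header giving the wait in seconds."),
}

def disclose(depth: int) -> str:
    """Return the deepest resolution level available at or below depth."""
    usable = [d for d in levels if d <= depth]
    return levels[max(usable)]

print(disclose(1))  # coarse but correct: limits exist
print(disclose(3))  # full constraint, still consistent with level 1
```

The transfer failure the framework identifies occurs when a shallow level is merely vague or, worse, wrong at its depth, so that consumers who stop there act on false beliefs.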

Maintenance. Keeping knowledge current, retiring obsolete knowledge, managing the lifecycle of knowledge artifacts as the systems they describe evolve. This phase is where the temporal unbinding failure documented in the legibility thesis originates: knowledge artifacts that describe a previous system state without signaling the divergence cause every downstream consumer to form beliefs based on stale evidence. Williamson's (1985) transaction cost analysis applies: the cost of maintaining knowledge infrastructure determines whether the infrastructure remains trustworthy over time, and under-investment in maintenance produces knowledge decay that compounds across every system that depends on the decaying artifact. AI can assist with staleness detection and consistency checking, but the governance decisions (what to update, what to retire, what to restructure) remain engineering judgments that require disciplinary expertise.
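A minimal sketch of AI-assistable staleness detection under these assumptions: each artifact declares the system version it describes (temporal binding), and a checker flags divergence for human governance review. The inventory and version strings are hypothetical.

```python
# Hedged sketch: staleness detection over an artifact inventory that
# maps artifact id -> the system version the artifact describes.
# Real pipelines would read these bindings from metadata headers or
# a documentation registry; the names here are invented.

def find_stale(artifacts: dict, current_version: str) -> list:
    """Return ids of artifacts whose bound version lags the system."""
    return [aid for aid, bound in artifacts.items()
            if bound != current_version]

inventory = {
    "api-reference": "2.3.0",
    "deploy-runbook": "1.9.0",   # describes a retired release
    "auth-guide": "2.3.0",
}
stale = find_stale(inventory, current_version="2.3.0")
print(stale)  # ['deploy-runbook']: flagged for governance review
```

Detection is the mechanizable half; deciding whether the flagged runbook should be updated, retired, or restructured is the engineering judgment the paragraph above reserves for the discipline.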

The AI transformation as structural catalyst

Four conditions that make this moment structural

Disciplinary formalization is not inevitable. Many bodies of work that satisfy Abbott's (1988) conditions for professional formation remain fragmented because the environmental pressure never reaches a threshold. The argument for formalizing Knowledge Engineering now rests on a specific structural claim: AI constitutes a transformation of the knowledge layer itself that changes the discipline's methods, its consumer landscape, its production economics, and its organizational stakes simultaneously. The transformation also reshapes every adjacent discipline, from SRE to DevOps to software engineering itself, but Knowledge Engineering faces the unique condition of undergoing this transformation without ever having formalized in the first place.

The knowledge architecture framework documented the heterogeneous-reasoner problem: knowledge artifacts designed for human consumers fail when autonomous agents arrive, because agents lack the contextual inference, ambiguity tolerance, and compensating heuristics that humans bring to poorly structured information. This is not a future concern. Agent systems consuming MCP tool schemas, API specifications, and structured documentation are encountering knowledge infrastructure designed for human developers, and the failure modes (misinterpreted constraints, suboptimal tool selection, cascading integration errors) are already measurable. Simultaneously, AI systems are producing knowledge artifacts (generated documentation, code summaries, meeting transcriptions, knowledge base entries) at a scale that exceeds the validation capacity of the humans who are nominally responsible for knowledge quality. The result is a knowledge layer that is growing faster than it can be curated, serving consumers whose requirements it was not designed to meet. Gigerenzer's (2000) ecological rationality framework explains the structural mechanism: cognitive strategies perform well only when matched to the informational structure of their environment, and knowledge infrastructure designed for one cognitive strategy degrades every other consumer's strategy. The design decisions being made now (how AI-generated knowledge is validated, how agent-facing schemas are structured, how human and machine knowledge artifacts coexist) will constitute the knowledge substrate for the next generation of AI-augmented systems.

The AI transformation is not unique to knowledge work. SRE practitioners are integrating AI into monitoring, anomaly detection, and incident response. DevOps pipelines are incorporating AI-assisted code review, test generation, and deployment optimization. DevSecOps is adapting to AI-generated code that introduces novel vulnerability patterns. Software engineering itself is being reconceived as AI-augmented development. Each of these disciplines is absorbing the transformation from a position of existing formalization: they have established identities, career paths, standards, metrics, and institutional structures that provide the scaffolding for adaptation. Knowledge work has none of this scaffolding. The AI transformation is arriving at a function that is distributed across seven role titles with no shared identity, no standard methods, no quality metrics, and no career infrastructure that would allow practitioners to develop the expertise required to navigate the transition. The absence of formalization means that the discipline's response to AI will be fragmented, ad-hoc, and organizationally invisible, precisely when the stakes of the knowledge layer's design quality are highest. Nelson and Winter's (1982) analysis of organizational routines is instructive: an organization's adaptive capacity depends on the routines that encode its capabilities, and knowledge infrastructure is the medium through which routines become explicit, transferable, and improvable. A discipline that cannot articulate its own methods cannot adapt them.

Brooks's (1975) observation that coordination costs in knowledge-intensive work grow superlinearly with team size applies directly. As organizations grow, the number of systems, processes, and decisions requiring knowledge capture grows faster than the knowledge engineering capacity the organization employs. AI accelerates both sides of this equation: it increases the number of knowledge-producing and knowledge-consuming systems while simultaneously reducing the marginal cost of knowledge production in ways that mask the growing maintenance and validation burden. The gap between knowledge complexity and knowledge investment widens, producing the organizational phenomenon practitioners describe as 'documentation debt,' structurally analogous to technical debt but lacking the engineering framing that would make it visible as an investment problem. Ostrom's (1990) commons governance analysis provides the institutional frame: organizational knowledge is a common-pool resource subject to classic commons failure modes (depletion through neglect, degradation through unvalidated contribution, free-riding by those who consume without contributing). A formalized discipline provides the institutional infrastructure (monitoring, standards, career incentives, and resource allocation mechanisms) that commons governance requires. Without this infrastructure, knowledge remains a commons without governance.

The claim that knowledge work can be practiced as engineering is not speculative. Procida's Diataxis framework demonstrates that documentation types correspond to distinct cognitive functions and can be categorized systematically. The Docs-as-Code movement demonstrates that knowledge artifacts can be subject to the same engineering practices (version control, peer review, automated testing, continuous deployment) as software artifacts. Splunk's product-first model (Gales et al. 2017) demonstrates that knowledge work integrates into product development as a first-class function. Tom Johnson's sustained work on API documentation methodology demonstrates the engineering depth that knowledge transfer requires at scale. These frameworks constitute existence proofs. What they lack is a unifying disciplinary identity that would allow organizations to invest in them as components of a single professional field rather than as independent initiatives that each require separate justification. The Write the Docs community provides the gathering point; the EKAW conference series provides academic engagement; the Diataxis framework provides systematic methodology. The pieces of a discipline are assembling. What they require is the formalization that converts a collection of practices into a profession with the institutional infrastructure to develop, transmit, and advance its own knowledge.

Implications

What changes when the discipline becomes visible

The formalization proposed here is structural rather than nominal. Renaming existing roles accomplishes nothing if the underlying organizational arrangements remain unchanged. The implications below identify what changes when knowledge work is treated as an engineering discipline with unified identity, shared methods, and career paths that reward both depth and breadth across the knowledge lifecycle.

The research program documented how knowledge artifacts govern consumer behavior through their design choices: what descriptions include, what schemas omit, how errors are categorized, how version boundaries are signaled. When no discipline claims responsibility for these choices, they are made incidentally. Formalization makes the choices deliberate. A knowledge engineer designing an API reference considers whether its structure serves the multiple consumer classes that depend on it: human developers, autonomous agents, integration validators, AI-assisted development environments, and reasoner forms that have not yet emerged. The design primitives from the knowledge architecture framework (structural decomposability, semantic self-description, verification surface, temporal binding, modality independence, progressive disclosure, composability, adversarial resilience) become the engineering standards that knowledge artifacts are designed and evaluated against. The AI-specific extension is that knowledge artifacts must now be designed for simultaneous human and machine consumption, with explicit constraint metadata, structured verification surfaces, and temporal binding that machines can parse without the contextual inference that humans supply.

Polanyi's tacit knowledge problem is currently treated as an obstacle to documentation rather than as the central engineering challenge of a discipline. Technical writers work around it, knowledge managers acknowledge it, and organizations accept knowledge loss when practitioners leave as an unavoidable cost. AI changes the economics of externalization (the tacit-to-explicit conversion) without changing its difficulty. A formalized discipline treats the conversion problem as its core research agenda, developing methods that combine AI-assisted extraction with practitioner validation to reduce the gap between what the organization's people know and what its knowledge infrastructure captures. The investment pays returns across every phase of the knowledge lifecycle and across every consumer class, because the quality of downstream knowledge artifacts is bounded by the quality of the acquisition that produced them.

SRE introduced service level objectives and error budgets as quantitative measures of reliability. DevOps introduced deployment frequency, lead time, change failure rate, and mean time to recovery (Forsgren, Humble, and Kim 2018). Knowledge Engineering requires equivalent metrics: coverage (what percentage of organizational knowledge is captured in maintainable form), currency (what percentage of knowledge artifacts accurately reflects the current system state), discoverability (how effectively consumers locate the knowledge they need), transfer effectiveness (whether consumers who access knowledge accomplish their intended task), and maintenance cost (the resources required to keep the knowledge layer trustworthy). These metrics address the invisible infrastructure problem directly: what can be measured can be made visible, and what is visible can be funded. The metrics also provide the feedback loops that a maturing discipline needs to evaluate its own methods and identify where investment produces the highest returns.
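
The five metrics are all ratios over observable events, which is what makes them candidates for dashboards rather than surveys. A minimal sketch with hypothetical inputs (the counts and their sources are illustrative; a real pipeline would derive them from repository, search, and telemetry data):

```python
# Hypothetical measurement inputs for one reporting period.
known_topics = 40          # topics the organization needs documented
covered_topics = 28        # topics with a maintained artifact
artifacts = 120            # total knowledge artifacts
current_artifacts = 90     # artifacts matching the current system state
searches = 500             # knowledge-seeking queries observed
successful_lookups = 410   # queries that ended on a relevant artifact
tasks_after_lookup = 200   # tasks attempted after consulting the knowledge layer
tasks_completed = 150      # of those, tasks completed without escalation

coverage = covered_topics / known_topics          # 0.70
currency = current_artifacts / artifacts          # 0.75
discoverability = successful_lookups / searches   # 0.82
transfer = tasks_completed / tasks_after_lookup   # 0.75
```

Maintenance cost, the fifth metric, is a resource figure rather than a ratio and would come from time-tracking or headcount data; the four ratios above are the ones that can be trended release over release.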

  1. Abbott, A. 1988. The System of Professions: An Essay on the Division of Expert Labor. University of Chicago Press.
  2. Bowker, G. C., and S. L. Star. 1999. Sorting Things Out: Classification and Its Consequences. MIT Press.
  3. Brooks, F. P. 1975. The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley.
  4. Callon, M. 1998. 'Introduction: The Embeddedness of Economic Markets in Economics.' In The Laws of the Markets, 1–57. Blackwell.
  5. Debois, P. 2009. DevOpsDays Ghent. First DevOpsDays conference, October 2009.
  6. Dewey, J. 1927. The Public and Its Problems. Holt.
  7. DiMaggio, P. J., and W. W. Powell. 1983. 'The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields.' American Sociological Review 48 (2): 147–60.
  8. Feigenbaum, E. A. 1977. 'The Art of Artificial Intelligence: Themes and Case Studies of Knowledge Engineering.' Proceedings of IJCAI-77.
  9. Forsgren, N., J. Humble, and G. Kim. 2018. Accelerate: The Science of Lean Software and DevOps. IT Revolution Press.
  10. Gales, C., and the Splunk Documentation Team. 2017. The Product is Docs. Splunk Press.
  11. Gigerenzer, G. 2000. Adaptive Thinking: Rationality in the Real World. Oxford University Press.
  12. Grossman, S. J., and J. E. Stiglitz. 1980. 'On the Impossibility of Informationally Efficient Markets.' American Economic Review 70 (3): 393–408.
  13. Hutchins, E. 1995. Cognition in the Wild. MIT Press.
  14. Latour, B. 1987. Science in Action: How to Follow Scientists and Engineers Through Society. Harvard University Press.
  15. Latour, B., and S. Woolgar. 1979. Laboratory Life: The Social Construction of Scientific Facts. Sage.
  16. Nelson, R. R., and S. G. Winter. 1982. An Evolutionary Theory of Economic Change. Harvard University Press.
  17. Nonaka, I., and H. Takeuchi. 1995. The Knowledge-Creating Company. Oxford University Press.
  18. Orr, J. E. 1996. Talking About Machines: An Ethnography of a Modern Job. Cornell University Press.
  19. Ostrom, E. 1990. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press.
  20. Petoff, J., B. Beyer, C. Jones, and N. R. Murphy. 2016. Site Reliability Engineering: How Google Runs Production Systems. O'Reilly Media.
  21. Polanyi, M. 1966. The Tacit Dimension. Routledge and Kegan Paul.
  22. Procida, D. n.d. Diataxis: A Systematic Approach to Technical Documentation Authoring. https://diataxis.fr/.
  23. Salman, D. 2026a. 'On Legibility.' https://dannysalman.com/legibility.
  24. Salman, D. 2026b. 'Knowledge Architecture Beyond the Single Consumer.' https://dannysalman.com/knowledge-architecture.
  25. Simon, H. A. 1969. The Sciences of the Artificial. MIT Press.
  26. Star, S. L., and J. R. Griesemer. 1989. 'Institutional Ecology, Translations, and Boundary Objects.' Social Studies of Science 19 (3): 387–420.
  27. Suchman, L. A. 1987. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press.
  28. Sweller, J. 1988. 'Cognitive Load During Problem Solving: Effects on Learning.' Cognitive Science 12 (2): 257–85.
  29. Williamson, O. E. 1985. The Economic Institutions of Capitalism. Free Press.
  30. Winner, L. 1980. 'Do Artifacts Have Politics?' Daedalus 109 (1): 121–36.