
Familiarity is the enemy

Posted on April 24, 2026 by safdargal12


My thoughts on why enterprise knowledge systems have failed for sixty years, and what might finally replace them.


A couple of weeks ago I demoed one part of what I have been building to a senior executive at a global enterprise, someone who had been asked to lead and guide AI adoption in their part of this billion-dollar company. Our conversation was off the record, but what they told me, and why they could not buy my product, is the basis of this essay.

First, they told me that what I had shown them was the first time they had seen an AI system for complex enterprise work that looked ready to deploy. Yes, they had familiar reservations: their data had to stay under their control (no problem, my architecture is designed around exactly that).

Next, they told me what the large consulting firms had been pitching them: quotes in the hundreds of thousands, spread across a roughly threefold range. The high end carried a gold-standard 99.5% accuracy promise, the low end was priced as a deliberate foot in the door, and the common thread was that these firms were selling their own learning curve.

I had demoed a product that worked, while these behemoths – my competitors – were asking to be paid to build a product that worked.

Next I was told that they could not buy from me. Why?

Risk

And they put it succinctly: buying from a small innovative company is brave, buying from a big, well-recognised name is an insurance policy, and the risk-averse buyer must have the insurance.

That insurance – more than price and more than product – is what enterprise software has always traded on.

My conversation was not a one-off – of course – it is the shape of a sixty-year failure the industry has learned to call “prudent”.

I’m writing this today as a reminder of that failure, and as a public declaration to keep building anyway.


It’s 2011. Hewlett-Packard acquires Autonomy for US $11.1 billion, then a year later writes off $8.8 billion of Autonomy’s value (eighty percent!), blames fraud and sues Autonomy’s founder Mike Lynch.

Fast forward to June 2024 – after a thirteen-year legal battle – a US jury acquits Lynch on all counts. Lynch’s lawyers established that HP executives spent roughly six hours on conference calls with Autonomy before the $11.1 billion decision. Two months after his acquittal, Lynch died when his yacht – the Bayesian – sank off the coast of Sicily.

HP, one of the largest enterprise IT customers on earth, paid $11.1 billion for a knowledge-management product after six hours of phone calls with its founders yet a year later it could not tell what it had bought.

This is the category I want to talk about – enterprise knowledge management – the software that promises to capture what an organisation knows and make it usable. It has existed for forty years or more, yet it has never delivered the intelligence it pretended to offer.

I estimate it has cost something north of a quarter of a trillion dollars US in write-offs, opportunity cost, and honestly-counted productivity losses. And its 2026 incarnation – “just add AI to your wiki” – is the worst iteration thus far.

The reason is not that the technology is bad (although there are certainly examples of that too). The reason is that the buyers select on the wrong axis.

They select on familiarity. They have always selected on familiarity.

Familiarity is the enemy.

The enemy, named

Twelve years ago I wrote a post arguing that the realised value of an information asset is a function of the technology used to transform it. That the gap between an asset’s potential value and what the business actually extracts from it should be the whole economics of the enterprise information management industry. The technology choice isn’t decoration on the outcome – it’s causal.

Of course, nobody read it – though it wouldn’t really have mattered if the whole world had – the industry kept buying the same things. The potential-versus-realised gap widened and at some point over the last three years (coincident with ChatGPT) – enterprise knowledge management started collapsing into a final embarrassment so complete it can no longer be hidden.

This is my pre-mortem. It is also, at the end, my proposal.

In 2011, Rich Hickey – the creator of the Clojure programming language, and arguably one of the most important computer scientists of the last twenty years – gave a talk called Simple Made Easy. He drew the distinction that most of this industry still ignores.

Simple is objective: two things are simple if they are not intertwined, if they do not interlock, if removing one does not collapse the other.

Easy is relative: something is easy to whom?

Easy is near-at-hand. Easy is familiar. Easy is what your team already knows, what your CIO has heard of, what the analyst quadrant showed you last year and will show you next year.

Enterprise software has spent decades confusing the two.

Hickey’s witty rejoinder is “Incidental is Latin for your fault.”

The entire apparatus of enterprise technology selection – the analyst reports, the RFP scoring rubrics, the CTO dinners, the Gartner quadrants, the AI world tours, the reference-customer asks, the preferred-supplier panels – is a machine for rewarding familiarity. It is not a machine for rewarding correctness.

The two are not the same.

Every one of the failures catalogued below, and the forty-year graveyard they sit on top of, has the same structural cause: the buyer bought what was familiar to them, not what was right. The vendor who looked safe beat the vendor who was innovative. The language the hiring committee recognised beat the language that would have made the system maintainable. The architecture that appeared on the last three analyst reports beat the architecture that would have actually solved the problem, at a fraction of the cost.

Familiarity is the selection criterion that matters, and has been since before I was born. It has cost – by my back-of-the-envelope estimate – hundreds of billions of dollars.

This is my essay about why.


Five ways familiarity kills enterprise intelligence

1. The familiar vendor

Microsoft proudly announced in 2020 that SharePoint had over two hundred million monthly active users. They have every right to be proud of that. SharePoint is deployed in effectively every Fortune 1000 company.

It is also, by the testimony of its own users, one of the worst products ever.

Forrester’s 2012 SharePoint survey measured IT satisfaction at 73% and business-manager satisfaction at 62%. The eleven-point gap is the whole story. Enterprise IT bought SharePoint because it was bundled with Office, but the business tolerates it because the business has no choice.

A SharePoint consultant – writing in 2014 – called it “where documents come to die.” He meant it affectionately.

This is how a product with 200 million users can be universally described as a place documents go to die.

In any case, the actual product does not determine the sale – the familiarity of the vendor does. The product is an artefact of the buying signal, not the other way around.

2. The familiar language and the familiar architecture

Look at any large enterprise software vendor’s technology recruitment page. Count the mentions of Java, .NET, Azure, Oracle, SAP, ServiceNow. Then look for anything that is not one of those. The distribution is not an accident. It is a policy.

The language stacks that appear on those job ads are the language stacks that recruitment templates can process, that can be defended in hiring committee meetings, that the Big Four consulting firms are organised around billing for. Java monoliths have made tens of billions of dollars of enterprise revenue not because Java is the right tool (the JVM is impressive as a platform) – but because “Java” is a word that an internal promotions committee, an external auditor, and a departmental procurement officer can all pretend to understand. “Clojure” or “Datomic”?

Hah. Instant disqualification. Totally unfamiliar.

There is a commonly stated reason for all of this: “we can’t hire Clojure developers.”

That was actually the first thing I was told when I took an engineering leadership role at Qantas, leading a team of Clojure software engineers.

I found the opposite to be true.

Clojure was not a hiring barrier – it was a hiring filter.

Engineers who answered a Clojure job ad had self-selected for thinking in data rather than ceremony – it was a small pool, but one with exceptional talent. While I was leading that team, I hired a Python engineer who had never written a line of Lisp, but he thought in maps and reductions, and three years later he is still writing Clojure (now in his own startup).

Familiar-language hiring is not a hedge against key-person risk. It is a larger pool bought at the cost of the single best signal you had.

Fred Brooks drew a line in 1986: In No Silver Bullet he separated the difficulty of software into essential complexity (fundamental to the problem) and accidental complexity (imposed by our tools).

Hickey calls it incidental. Same thing. Enterprise software has spent forty years buying accidental complexity wholesale as part of the easy language and architecture decisions.

AI brings a new irony to this particular familiarity. When the software is being written by agents as much as by humans, the familiar-language argument is the weakest it has ever been – an LLM does not care whether your codebase is Java or Clojure. It cares about the token efficiency of the code, the structural regularity of the data, the stability of the language’s semantics across releases.

On every one of those axes, the languages the industry selected for human convenience are worse choices for the machine than the languages it rejected as unfamiliar. It is worth its own essay – I have written about this in detail.

For this one, the point is simpler: the familiarity that underwrote the Java decades is evaporating under the industry’s feet, but the industry is still buying as if it weren’t.

3. The familiar buyer motion

Enterprise software is not sold on outcomes. To be fair, most things aren’t.

The economics of software though – a zero marginal cost of reproduction – actually lends itself to selling on outcomes: imagine if software vendors sold their products with no upfront cost but an ongoing entitlement to just 10% of the cost savings their software generated? If their software fails to generate savings, they lose no additional capital. But if it succeeds, the upside is infinite.
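
The thought experiment above can be made concrete with toy numbers (all figures invented): under licence pricing the vendor is paid regardless of outcome, while under outcome pricing a failed project pays the vendor nothing and a successful one pays out of realised savings.

```python
# Toy comparison of the two pricing models described above. All figures
# are hypothetical; the point is only who carries the implementation risk.

def license_revenue(upfront_fee: float) -> float:
    # Vendor is paid upfront, regardless of whether the project succeeds.
    return upfront_fee

def outcome_revenue(annual_savings: float, years: int, share: float = 0.10) -> float:
    # Vendor earns a share of realised savings; no savings, no revenue.
    return max(annual_savings, 0.0) * share * years

# A failed project: the licence model still pays; the outcome model pays zero.
failed = (license_revenue(500_000), outcome_revenue(0, 5))
# A successful project saving $2M/year over 5 years: outcome pricing pays more.
success = (license_revenue(500_000), outcome_revenue(2_000_000, 5))

print(failed)   # (500000, 0.0)
print(success)  # (500000, 1000000.0)
```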

So why isn’t that how software is sold? There’s a confluence of really strong economic reasons why.

One is that enterprise software procurement is a market in which the buyer cannot verify quality before purchase, cannot switch after purchase, and has no mechanism to measure outcomes during the contract.

Another is that there’s significant information asymmetry: the vendor knows something the buyer doesn’t.

The vendor knows that enterprise software implementation is notoriously difficult, requires massive behavioral change from the buyer’s employees, and has a staggeringly high failure rate. The software, in many cases, is “a lemon” – not because the code is broken, but because the promised organisational transformation is a mirage.

If vendors sold on outcomes, they would be shifting the implementation risk onto their own balance sheets. Instead, they sell a license to use the tool. They get paid their 90% gross margins upfront, effectively washing their hands of whether the tool actually achieves the business outcome. They extract the value of the promise while shifting the risk of the execution entirely onto the buyer.

George Akerlof described this dynamic in 1970 and won the Nobel Prize for it: a market for lemons. The vendors who win the lemon market are not the vendors with the best product. They are the vendors with the best lemon-market signals. Which is to say: the familiar ones.

Another reason the buyers don’t want to buy on outcomes is that it would require defining the outcome, establishing a baseline, measuring it, and holding people accountable. It introduces complexity and, perhaps more importantly, career risk. If a procurement officer stakes their reputation on a novel outcome-based contract with an innovative startup, and it fails, their reputation is ruined and they might need to start a new career.

Instead, the procurement officer acts rationally to protect themselves. They seek “lemon-market signals” – familiarity, Gartner Magic Quadrant placement, and massive brand presence.

They buy the enterprise equivalent of IBM. If it fails (which it often does), the officer is safe. They can say to the Estimates committee, “We bought the industry standard.”

Software is sold on the perception of safety by people who are rewarded for choosing safe options and penalised for choosing ones that turn out badly, regardless of whether those options actually were safer.

The rational strategy is not to buy the best product. It is to buy whatever product other officers at other agencies also bought. The safest decision is the most familiar decision. Career risk is minimised. Organisational risk is maximised. And the vendor who spent the marketing budget to make themselves the familiar choice wins regardless of whether their product works.

4. The familiar failure

In 1984, Doug Lenat launched a project called Cyc. Super ambitious, he believed common-sense intelligence was a function of hand-encoding the roughly ten million rules an average adult knows about the world.

Cyc ran for forty years, consumed approximately two thousand person-years of work, cost approximately two hundred million US dollars, shipped approximately thirty million hand-encoded assertions and completely failed.

At Digital Equipment Corporation an expert system called XCON began configuring VAX orders in 1980 and grew to 6,200 rules by 1987. It was reported that 40–50% of those rules churned every year, and eight dedicated engineers were required just to stay still. XCON was successful to some extent, saving $40 million per year, but it was too brittle to be useful outside of its narrow use-case, and it was retired in the early 1990s as the VAX business collapsed.

Other expert systems of the era – MYCIN for antibiotics, CADUCEUS for internal medicine, PROSPECTOR for mineral exploration – were technically successful and commercially never deployed. Symbolics, the Lisp-machine company that hosted many of them, went from $101.6M in revenue to Chapter 11 in six years. Intellicorp, Teknowledge, and Inference Corporation were variously acquired, absorbed, or extinct. DARPA’s Strategic Computing Initiative and Japan’s Fifth Generation project together burned around $1.4 billion in the same era with no commercial product between them.

By 1997 the “Good Old Fashioned AI” industry – one built around expert systems and symbolic logic rather than the Generative AI industry that we have today – had been dead long enough for Booz Allen Hamilton to study what had replaced it: enterprise Knowledge Management.

Lucier and Torsilieri found that 84% of knowledge-management programs produced no significant impact on the adopting organisations. This was before Confluence. Before SharePoint. This 84% was Lotus Notes, corporate portals, document management, the Plumtree-and-Vignette generation of intranet middleware, the brief romance with Autonomy IDOL.

These failures do not rhyme. They are identical. They are failures of time and entropy. They required systems to be kept up to date by the most expert humans in the organisation – and by definition, the highest value use of their time is not keeping an IT system updated.

All of these failures share the same structural cost curve: the encoded knowledge ages out faster than the knowledge workers can update it. Each encoding outlived its relevance because the expert whose knowledge it was meant to codify was busy doing actual work.

Each of these systems was quietly retired, or violently acquired, or allowed to accumulate inside a running enterprise as a cost centre nobody had the political courage to shut down.

Forty years after XCON, Lenat – weeks before his death in August 2023 – warned that LLMs ‘train on CONVINCINGNESS rather than CORRECTNESS.’

His point is that Generative AI is a new attempt to bypass the same old problem of time and entropy while trading the empty, outdated databases of the 90s for a system that simply invents plausible answers so the expert doesn’t have to.

This is the same failure mode as all of the previous generation of enterprise knowledge management systems, expressed in a different language, in a different decade, at a different price.

The industry keeps buying the same shape of thing that – based on simple first year economics principles – cannot work, keeps getting the same result, yet cannot seem to learn those lessons.

Familiarity of failure is itself a variety of familiarity that is our enemy.

5. The familiar AI stack

In early 2023, a new architecture appeared in every Fortune 1000 Enterprise Architecture diagram: Retrieval Augmented Generation. Four steps: chunk your documents; generate a vector embedding of each chunk; when you need intelligence, retrieve the chunks that are semantically similar to your query; shove them all into the context window of a large language model. Ask your documents.
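
As a sketch only – with a bag-of-words counter standing in for a real embedding model, and the final LLM call left as a comment – the four steps look like this:

```python
# A toy sketch of the four-step RAG pipeline described above. Nothing
# here is a production recipe; it only makes the data flow concrete.
import math
from collections import Counter

def chunk(document: str, size: int = 12) -> list[str]:
    # Step 1: split the document into fixed-size word windows.
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    # Step 2: a stand-in "embedding" (word counts, not a learned vector).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Step 3: rank chunks by similarity to the query, keep the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

doc = ("The VAX configuration system used 6200 rules. "
       "Expert systems required dedicated engineers to maintain the rules. "
       "SharePoint stores documents but does not model entities.")
top = retrieve("how many rules did the expert system use", chunk(doc))
# Step 4 would shove `top` into an LLM prompt; here we just inspect it.
print(top[0])
```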

For about eighteen months Retrieval Augmented Generation was hailed as the definitive answer to enterprise knowledge.

It was not.

In June 2024, Stanford’s Regulation, Evaluation, and Governance Lab benchmarked Lexis+ AI and Thomson Reuters’ Westlaw AI-Assisted Research – two of the most expensive commercial legal-RAG systems in the world. Hallucination rates were 17% for Lexis+ AI and over 34% for Westlaw, on authoritative corpora, with citations attached.

The citations – it turned out – pointed to the chunk that was retrieved, not to the evidence for the claim. The language model, under context load, would ignore the retrieved chunk and fabricate from its parametric memory instead. Sometimes the citation was real and the summary was invented. Sometimes the citation was invented and the summary was internally coherent. Sometimes the whole thing was right. But there was no reliable way to tell which was which.

Of course, state of the art models have improved in leaps and bounds since then. They’ll continue to improve, no doubt, but hallucination is a feature, not a bug. Systems that work in high stakes environments – like our Legal system – need to be built to account for that, rather than pretending it’s not there. But it’s a difficult problem to solve.

Pinecone, the category-defining vector database (vectors being the foundation of RAG) raised VC money at a $750 million valuation in April 2023 yet in 2025 it was reported to be exploring a sale – revenue for 2024 was $26.6 million. The popular consensus is that pgvector won – that Postgres extended to swallow the category and make the specialist vector database redundant.

Popular consensus is wrong.

Pgvector is capped by an 8KB page-size limit on dimensionality. It supports only HNSW and IVFFlat indexes – no DiskANN, no GPU. Its HNSW index draws memory from the live production database. It has no distributed-workload story. Performance collapses past roughly ten million vectors, which is where production actually begins. Pinecone did not lose to a better product. It lost to the more familiar of two inadequate ones – to a buyer whose ops team already had Postgres running somewhere and whose question was priced wrong from the start.

Microsoft published a paper in 2024 called GraphRAG arguing – correctly – that chunk-based retrieval cannot answer multi-hop questions because chunks don’t know about one another. Seven months later, Microsoft published LazyGraphRAG admitting that their own original GraphRAG was 1,000× too expensive to index to be usable in production. Microsoft, publicly, admitting that the product it had been selling as the enterprise AI answer had indexing economics so broken it had to be rebuilt from scratch.

In June 2025, Gartner – a firm whose professional output is the legitimisation of enterprise IT spending – predicted that more than 40% of agentic AI projects will be cancelled by the end of 2027. “Most agentic AI projects right now are early-stage experiments or proof of concepts that are mostly driven by hype.”

The category error under all of this is the assumption that you can take a document library or a wiki – an unstructured, untyped, un-audit-traced pile of documents – and make it intelligent by attaching a language model to it.

But you cannot. A wiki is not intelligent because a wiki does not know what an entity is or what relationships an entity might possess or how these encodings have evolved over time.

Nor does chunking the wiki add them. Embedding the chunks does not add them. Asking the chunks a question in natural language does not add them. The information the intelligent answer needs was never in the wiki in the first place.
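
To make the category error concrete, here is a toy example (all names invented): a two-hop question that is trivial over typed entity relationships and unanswerable from any single chunk of prose.

```python
# A toy illustration of the multi-hop gap: "which suppliers does the
# CEO's company depend on?" needs typed edges between entities. No one
# chunk of prose holds the whole chain, but a graph traverses it in two
# hops. All entities here are invented.
graph = {
    ("Alice Ng", "ceo_of"): ["Acme Corp"],
    ("Acme Corp", "supplied_by"): ["Widget Pty", "Gadget GmbH"],
}

def hop(entity: str, relation: str) -> list[str]:
    # Follow one typed edge out of an entity.
    return graph.get((entity, relation), [])

# Hop 1: resolve the person to their company; hop 2: follow supply edges.
companies = hop("Alice Ng", "ceo_of")
suppliers = [s for c in companies for s in hop(c, "supplied_by")]
print(suppliers)  # ['Widget Pty', 'Gadget GmbH']
```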

Every wiki vendor on earth is now trying to add AI to their wiki. It is the category error of the decade. The wiki is not the thing you add AI to. The wiki is the thing AI replaces.

I am seeing this play out in slow motion. Yesterday I ran an experiential learning workshop for public and private sector teams; we pitted the teams against each other in a Battle Royale-style game under time pressure – each team had a different set of AI tools (one team had my product).

The team with the largest legacy vendor’s offering? They spent about half the allocated time just trying to log in to the right instance of the product – the one that wasn’t completely lobotomised by their organisation’s enterprise IT policies.

This is how large bureaucracies can be given the potential for the most transformative technology of the last 30 years and then make it completely unusable for any serious work. They require it to be familiar.


The sixty-year graveyard

The five failure modes above are contemporary – the graveyard of failures under them?

Not contemporary. Extraordinary.

Expert systems (1965–95). Knowledge management 1.0 (1995–2010). The Semantic Web (2000–15). Modern knowledge graphs and Palantir (2010–present). Retrieval-augmented generation and its vector-database substrate (2022–present). Five waves. Same wave. Each time, the industry promised to close the gap between information’s potential and realised value. Each time, the technology underperformed the promise. Each time, the buyer bought the most familiar solution in the category and got what they paid for.

Cyc’s $200 million. HP-Autonomy’s $8.8 billion. The Lisp-machine industry’s collapse from half a billion in revenue to zero. Japan’s Fifth Generation at $400 million. DARPA Strategic Computing at $1 billion. SharePoint’s opportunity cost at 200 million users. God I can’t even imagine what that must be. Freebase’s closure. Every Pfizer, Equinor, and FIBO semantic-web programme that quietly wound down. The vector-database category’s dissolution into Postgres. The 40% of agentic-AI projects Gartner has just told us will be cancelled.

The industry has, cumulatively, spent somewhere north of a quarter of a trillion dollars on technology that failed to do what it claimed.

And yet the industry’s 2026 response is to buy the same shape of thing one more time, rebranded. Familiarity of failure is still familiarity. The buyers will still select on the demonstrably wrong axis.

The category has never once, in sixty years, produced a product that reliably made good on the promise printed on its marketing. And the 2026 answer – from every wiki vendor, every search vendor, every Microsoft-aligned systems integrator on the planet – is to add AI to the pile and bill the customer for another generation.

We have been here five times. This is the sixth.


The two-option trap

Here is one thing I think the sixty-year graveyard should actually tell us.

For forty years the industry has had two options, and only two.

Option one: encode the structure by hand. Expert systems, ontologies, the Semantic Web, Cyc, Palantir-style implementations. You get intelligence – if you can afford the PhD army to build it and keep it alive. The economics dictate that you cannot.

Option two: skip the structure. Wikis, Confluence, SharePoint. You get adoption because nobody has to do anything hard, but you get no intelligence because nobody did anything hard.

The buyers who chose option two were not wrong; they were choosing the only option that would survive contact with a workforce that hates documenting, hates structuring, and will not hand-encode anything for anyone. Enterprise knowledge has always been as much a human problem as a technology one. Nobody wants to do the structuring work, and every prior architecture demanded that somebody do the structuring work rather than their actual job.

For the first time, I believe, there is a third option.

A language model, with the right harness, can read an unstructured PDF and propose the entities, the relationships, and the typed facts inside it. The harness catches the model’s mistakes and logs the work in an immutable ledger for A/B testing, improvement, cost optimisation and audit. The result is a resolved knowledge graph. The human dropped a file or forwarded an email – the exhaust of their actual, value add work. They didn’t have to do anything. The PhD army is not needed. The easy adoption curve applies because from the user’s side, the action is “do nothing extra”, not “fill in an ontology.”
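
A minimal sketch of that loop, with invented names and a stubbed-out model call: the stochastic extractor proposes, the deterministic harness verifies, and an append-only ledger records every decision.

```python
# A minimal sketch of the propose/verify/log loop described above. A
# real system would call an LLM where `propose_facts` is stubbed; the
# schema, relations, and entities here are all invented.
from dataclasses import dataclass

SCHEMA = {"employs": 2, "located_in": 2}  # relation -> expected arity

@dataclass(frozen=True)  # frozen: ledger entries cannot be mutated
class LedgerEntry:
    fact: tuple
    accepted: bool
    reason: str

def propose_facts(document: str) -> list[tuple]:
    # Stub standing in for the language model's extraction step.
    return [("employs", "Acme Corp", "Alice Ng"),
            ("acquired", "Acme Corp", "Widget Pty")]  # not in schema

def verify(fact: tuple) -> tuple[bool, str]:
    # The deterministic part: the harness, not the model, decides.
    rel, *args = fact
    if rel not in SCHEMA:
        return False, f"unknown relation {rel!r}"
    if len(args) != SCHEMA[rel]:
        return False, "wrong arity"
    return True, "ok"

ledger: list[LedgerEntry] = []  # append-only by convention
for fact in propose_facts("...dropped PDF..."):
    ok, why = verify(fact)
    ledger.append(LedgerEntry(fact, ok, why))

graph = [e.fact for e in ledger if e.accepted]
print(graph)   # only the verified fact reaches the knowledge graph
print(ledger)  # but every proposal, accepted or not, is auditable
```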

This is the generational unlock.

Structure, for the first time, can be produced from content instead of demanded from people. It is what makes graph-native intelligence viable outside the five-person specialist-extraction teams it used to require – and what finally breaks the two-option trap the industry has been stuck in for – I’m not even sure – how many decades.


What unfamiliar looks like

In 2013, three founders – David Vélez, Cristina Junqueira, and an American engineer named Edward Wible – sat down with a blank piece of paper in São Paulo. These crazy bastards were going to build a bank. One of the questions they asked themselves was: if we were going to make a bank today, from scratch, what would the ideal platform look like?

Wible had read a paper called Out of the Tar Pit by Ben Moseley and Peter Marks (many of you will have heard of it, but it’s worth another read if you have time) which argued that mutable state is the accidental complexity that makes enterprise systems monstrously complex to debug and change.

Wible then made two choices nobody with any sense would have made.

He found Datomic, and decided it sounded like the right fit for a new bank that didn’t want to be like the old banks. Datomic is a commercial database built on immutable facts and temporal queries, from a small American company called Cognitect, founded by Rich Hickey. Datomic had a total user base at the time that could have fit in a small auditorium. It could not, in principle, scale to a retail bank. But it encoded and made real the philosophy Rich had brought to bear in Clojure. Simple, not Easy. Correct, not Familiar.

Datomic then led Wible and his team to Clojure, which they chose for the foundations of their new bank. A Lisp dialect, hosted on the JVM, with a community so small that two people meeting in an IRC channel counted as a conversation. For a bank. Not familiar.

Wible’s description of what happened next is worth quoting in full, because it is the opposite of what every familiar-vendor architecture diagram predicted:

“The scaling was, I would say, violent. It was so fast. It almost poured concrete over whatever we had started with in the beginning. Had to be good enough because there was no time to go revisit that. So, it just scaled up and up and up.”

Nubank is now the biggest independent digital bank in the world. Over 100 million customers. Nearly 90 million in Brazil (40% of the population), 3,000-plus Datomic databases, some with over 100 billion datoms (facts), 4,000 microservices on Kubernetes. 72 billion daily events through Kafka, millions of requests per second, public listing on the New York Stock Exchange. Nubank acquired Cognitect from Rich Hickey in 2020, and the language and database that nobody with any sense would have chosen now run the largest retail bank in the Southern Hemisphere by customer count. Nubank now has many hundreds of Clojure developers, and it continues to train and hire more every year.

The reason the founding team picked what they picked – and the reason it worked – is that they refused to pick what was familiar. As Wible later put it: “We tried to think carefully about what we wanted to build and how we wanted to build it before we launched into typing.”

They did not buy what the analyst reports showed, they built what the problem required. The technology choice was causal. The success of the platform at scale was downstream of choices that everyone around them, in 2013, would have told them were wrong.

That is what unfamiliar looks like, it is not a vibe, it is not marketing, it is not a brave contrarian pose, it is an engineering decision – taken seriously – about what the problem actually requires – made by people who were willing to be wrong about it, and therefore had the chance to be right.

This is also the way I have been building software for the past 10 years.

The architecture the category requires

This is the point at which my essay becomes a pitch. The safe option would have been waiting another year to write it – waiting until what I have built had enough customers that my argument would not need to carry its own weight.

I decided against the safe option. Things are moving too fast and a polemic that waits for its pitch to be unarguable is a polemic that arrives too late to be genuine. So, here is what I have built – please, judge my argument separately from the implementation if you would.

What I have built is an Australian, self-hosted intelligence platform in which every key architectural choice is an anti-familiarity choice. I will name them all, because the architecture is my argument against familiarity.

It’s built on Clojure. I watched Simple Made Easy in 2013 or 2014 – it changed my life because it changed my career significantly – and since then I’ve made career choices that keep me working with Clojure. I believe in Clojure in the enterprise because incidental complexity in enterprise systems is a liability that compounds for decades, and Clojure’s immutability and stability are the only answer at that time horizon. I also chose to build a business on it because I know the benefits of building on Clojure will accrue to me and my customers, and my competitors would never choose Clojure.

It is built on Datomic, because the audit is not a bolt-on, it is the architecture – and Datomic is the only graph database whose data model is already an audit ledger.
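
The “data model as audit ledger” idea can be sketched like this (the tuple shape mimics Datomic’s datoms, but this is not its API): facts are immutable, updates are retraction-plus-assertion, and any past state is a query.

```python
# A rough sketch of the "database as audit ledger" idea: every fact is
# an immutable (entity, attribute, value, tx, added) tuple, and the
# state at any transaction is derived by replay, never by overwrite.
facts = [
    ("acct-1", "owner", "Alice", 1, True),
    ("acct-1", "balance", 100, 1, True),
    ("acct-1", "balance", 100, 2, False),  # retraction, not deletion
    ("acct-1", "balance", 250, 2, True),
]

def as_of(tx: int, entity: str, attribute: str):
    # Replay the ledger up to tx; the last surviving assertion wins.
    value = None
    for e, a, v, t, added in facts:
        if t <= tx and e == entity and a == attribute:
            value = v if added else None
    return value

print(as_of(1, "acct-1", "balance"))  # 100  (state before the update)
print(as_of(2, "acct-1", "balance"))  # 250  (state after)
```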

It is graph-native – not a vector database with graph features bolted on, not a document store with a graph view, but a graph at its core – because the multi-hop questions intelligent systems actually have to answer cannot be answered by cosine similarity over chunked text, no matter how much AI you paste on top.

Its entity-resolution layer leverages multiple independent signals – because resolving “JPMorgan Chase & Co.” and “Chase Manhattan” to a single canonical entity by evidence rather than by guess is what intelligence actually means, and one signal is never sufficient.
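
A hedged sketch of that idea follows; the signals, weights, and threshold here are invented for illustration, not taken from my implementation.

```python
# Multi-signal entity resolution, sketched: combine several independent,
# individually weak signals into one score before merging two records.
# All names, identifiers, and weights below are invented.
def name_overlap(a: str, b: str) -> float:
    # Jaccard overlap of name tokens: weak on aliases, cheap to compute.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def same_identifier(a: dict, b: dict) -> float:
    # A shared registration number is a near-conclusive signal.
    return 1.0 if a.get("reg_id") and a.get("reg_id") == b.get("reg_id") else 0.0

def alias_signal(a: dict, b: dict, aliases: set) -> float:
    # Known alias pairs (e.g. from a curated table) are strong evidence.
    return 1.0 if frozenset({a["name"], b["name"]}) in aliases else 0.0

def resolve(a: dict, b: dict, aliases: set) -> bool:
    score = (0.3 * name_overlap(a["name"], b["name"])
             + 0.5 * same_identifier(a, b)
             + 0.2 * alias_signal(a, b, aliases))
    return score >= 0.5  # threshold would be tuned against labelled pairs

aliases = {frozenset({"JPMorgan Chase & Co.", "Chase Manhattan"})}
a = {"name": "JPMorgan Chase & Co.", "reg_id": "ID-123"}
b = {"name": "Chase Manhattan", "reg_id": "ID-123"}
print(resolve(a, b, aliases))  # True: id + alias outweigh the name mismatch
```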

It has a deterministic harness around its stochastic components. The language model proposes but the scaffolding verifies. Every inference, every tool call, every state change is captured in an immutable ledger as first-class data and this is what makes non-deterministic components safe to deploy where determinism is required.

It is sovereign by design – not as a deployment topology, but as a question of whose jurisdiction, whose law, and whose commercial decisions govern the operation. It is self-hosted where the jurisdiction demands it, and customer-controlled cloud where that suffices. In today’s world, the difference between an accreditation label attached to someone else’s cloud and a deployment the customer actually controls is the difference between a box-tick and sovereign control.

It is built in Australia, which is one of the least familiar places in the world to build an intelligence platform. This is deliberate: the customers for this product are governments, large enterprises, and regulated industries. Being based where the customers are – rather than where the venture capital is – is another anti-familiarity choice. It is also, for what it’s worth, not a choice I’d be willing to trade.

None of these choices is the familiar one, and none of them, on its own, is a silver bullet (Brooks was right about that).

The difficult part of enterprise knowledge is essential complexity: capturing tacit knowledge, earning institutional trust, getting humans to agree on what it is they think is true, and surviving the political climate long enough to accumulate a useful corpus.

No architecture dissolves that, but what the foundation above does is make those problems reachable. Every alternative foundation shipped in the last forty years has made them structurally unreachable. That is the full claim. Execution, adoption, support, and the slow political work of getting an organisation to write down what it actually thinks – those are the work of the next decade, not the elegance of the next diagram.

If I have one ask (apart from reading my long essay!), it’s for you to be honest about what a familiar choice is actually buying you. The buyer who picked Microsoft was not stupid. They were buying six things in one purchase: a solvent vendor who would still exist in three years; a deployment that would not fight the existing stack; an adoption path their users would not resist; a hiring pool for the team that would operate it; an answer they could defend at audit; and a mental model their organisation already spoke.

That bundle was rational, but I’m betting the right architecture makes it separable.

Customer-controlled deployment answers the solvency question – your operation does not depend on the vendor’s continuity of strategy, jurisdiction, or ownership. A graph-native conversational interface answers the friction, adoption, and mental-model questions in one move – analysts already think in entities and relationships. An immutable ledger answers the audit question more completely than any familiar vendor ever will. A smaller, evidence-based hiring process answers the hiring question with a better left tail than the familiar-language process ever has.

What familiarity bundled, the right architecture may separate. The category error was never the buyer’s rationality; it was paying for the whole bundle when only one part – the product – was load-bearing for the organisation, and then accepting whatever product the vendor happened to ship and the individuals in the organisation happened to be familiar with.


Four tests for your current stack

Before I close, I offer a diagnostic: four tests. If more than one comes back as a fail, the failure modes in Sections 1–5 are load-bearing for your organisation, and your knowledge infrastructure is an artefact of familiarity, not correctness.

  1. The gap-analysis test. Ask your current system a question like: “Which of our strategic risks have no linked mitigation across any portfolio?” A vector database cannot answer this as it cannot reason with certainty. It is incapable of querying for absence – an empty relationship, a missing link, a white space. You can ask it what resembles a risk. You cannot ask it what is missing. If your honest answer is “I would need an analyst to read every document and compile the list,” your system is not an intelligence tool, it is a compression primitive with a chat interface on top. Reasoning about absence – negative reasoning, formally – requires a typed graph. Almost nothing in your current stack clears that bar, and no amount of retrieval-augmentation will put it there.
  2. The entity resolution test. Your corpus refers to the same entity as “JPMorgan Chase & Co.,” “JPMC,” “J.P. Morgan,” and “Chase Manhattan” – across eight reports, from four authors, over three years. Does your system resolve all four to a single canonical node, and attach to the resolution structured, multi-dimensional evidence – each dimension named, typed, queryable, and auditable? A dense vector has hundreds of dimensions and not one of them is named, typed, or queryable; it can tell you the cosine was close, not why the resolution was made. If the best your system can offer is a similarity score, a language model’s guess, or a proprietary heuristic with no exposed structure, it is not resolution. It is string-matching dressed up as reasoning. A system without structured entity resolution cannot trace an exposure, cannot count a network, cannot detect a collision, cannot produce a defensible dossier. It is a search bar with ambitions.
  3. The time-travel test. Can you query your system as it existed one year ago today? Can you tell me what it thought was true in April 2025? If no, you do not have an audit ledger, you have a running log that forgets. When a regulator, auditor, board or Jerry from Accounting asks what you knew and when, you will not be able to answer.
  4. The sovereignty test. If a foreign jurisdiction changes its posture tomorrow – export controls, data residency, an AI licensing regime, a successor to the CLOUD Act – does your access to your own intelligence infrastructure change with it? For most enterprise buyers relying on cross-border cloud in April 2026, the honest answer is yes, regardless of which border they sit on. How much longer will that be an acceptable answer?
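The gap-analysis test above is worth making mechanical, because it shows why querying for absence needs a typed graph at all. The sketch below – hypothetical entities and relations throughout – answers “which risks have no linked mitigation?” as a closed-world set difference, a question that has no expression in similarity space:

```python
# Hypothetical typed graph: entities carry a type, edges carry a relation.
entities = {"risk-1": "risk", "risk-2": "risk", "risk-3": "risk",
            "plan-A": "mitigation"}
edges = [("plan-A", "mitigates", "risk-1")]

def unmitigated_risks():
    """Negative reasoning: the answer is what is *absent* from the graph.
    Types tell us the full set of risks; edges tell us which are covered;
    the difference is the white space."""
    risks = {e for e, t in entities.items() if t == "risk"}
    mitigated = {dst for src, rel, dst in edges if rel == "mitigates"}
    return sorted(risks - mitigated)

# risk-1 is mitigated by plan-A; risk-2 and risk-3 are the gaps.
```

A vector index can only rank what exists; it has no set of “all risks” to subtract from, which is why the empty relationship is invisible to it.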

The close

Most of my essay is unflattering. I am not blaming anyone or anything in particular, but I do believe in understanding the underlying economics and incentives, and for me those leave no doubt about this category.

Enterprise knowledge management has had forty years, a quarter of a trillion dollars, and five distinct technological generations to solve the problem it exists to solve. It has not, and the 2026 answer – add AI to the wiki – repeats the same category error that has defined every prior wave.

The builders who understand this will build what replaces the wiki, while the wiki vendors who understand this will acquire them or be replaced by them. That is how categories end: at best, a polite transition from one architecture to the next; at worst, the impolite imposition a dying category leaves on its customers.

Twelve years ago I wrote that the realised value of an information asset is a function of the technology used to transform it – in hindsight, I feel like I was right about the economics.

What I did not realise was that the answer would require a new generation of primitives – language models that can read text and propose structure, graph databases that can hold that structure safely, immutable ledgers that can prove where the structure came from – before the technology could realistically catch up to the theory.

It has now – or at least, I have shipped my answer. I may be wrong, and if I am, I hope someone else will build the right answer. That would be fine, because the category needs the right answer more than it needs me to have it.

Rich Hickey – in his 2010 Clojure Conj talk Hammock Driven Development – puts the ethos I am trying to describe more cleanly than I can:

“If I could advocate anything, do not be afraid. Especially do not be afraid of being wrong.”

So, as much as to myself as to you: “Familiarity is the enemy”.

It seems like we have gotten used to calling familiarity safety.

We were wrong.


