Legal Tech Tools Shouldn’t Be Evaluated in Isolation
Start with your foundational stack to understand which tools will actually stick
👋 Hey there, I’m Hadassah. Each month, I share my take on how legal tech can support in-house teams—enabling the business, managing risk, and freeing up time for the work they enjoy most. Grounded in real-world experiences from across legal teams, I cover what works, what doesn’t, and the quick wins that make all the difference.
Before we dive in, a quick note: in this post, I discuss just a couple of examples of how legal teams solve operational bottlenecks. There are plenty of ways to approach these kinds of problems, and the right approach will always depend on your specific needs and context. My goal is to give you some food for thought as you define what this might look like for your team.

Legal tech funding has grown nearly tenfold over the past decade—from roughly $600 million in 2016 to $6 billion in 2025. Alongside that growth, the number of products has expanded just as rapidly. Today, there are hundreds of vendors and close to a thousand AI solutions, all trying to find a place inside how legal teams work.
For a while, much of the conversation around this wave has been driven by hype and momentum. But that’s starting to shift. Legal teams are becoming more pragmatic, moving away from what these products could hypothetically do toward a more grounded question of where they actually create value.
That shift brings us closer to reality, but it doesn’t necessarily make things easier. If anything, it introduces a different kind of complexity. As teams begin to evaluate products more seriously, a familiar set of questions tends to surface: How accurate is it? Does it meet our security standards? How much does it cost? These are all valid, and in many cases necessary. But focusing on these alone shifts our attention away from some of the other—perhaps more important—factors that determine whether a new solution will actually deliver value in practice.
What tends to matter more is something that’s discussed far less explicitly: how well a solution fits into the systems where work is already happening.
This is particularly true for in-house legal teams. Unlike law firms, they’re not optimising for the business of law. They’re embedded within a broader organisation, expected to support and enable the business as a whole. Their tools don’t exist in isolation. They sit alongside systems used across multiple business functions, and often depend on them.
And yet, when new products are evaluated, that broader context is often treated as secondary. Conversations, especially with vendors, tend to focus on features, benchmarks, and high-level integrations. Meanwhile, a more fundamental question is left under-explored: where does this solution actually fit within the way our business works today?
Because in practice, no solution is evaluated in a vacuum. It’s evaluated relative to what’s already in place—the productivity tools, document management systems, communication layers, and workflows that shape how work gets done. Those starting points vary widely. One business may be deeply embedded in Microsoft, another fully Google-native. Some operate through structured systems like CLMs, others rely more on informally structured shared drives and email threads. Even when two teams are looking at the same solution, they’re often evaluating it against completely different environments.
This is something many legal teams intuitively understand, but it’s rarely made explicit. Which is where it becomes useful to take a step back and look more closely at the environment in which these products are expected to operate.
Start With The Stack You Already Have
Before evaluating any new product, there is a step that most teams tend to skip—not because it’s unimportant, but because it feels almost too obvious to examine. It’s understanding the stack that’s already in place.
Not the products being demoed or considered, but the ones where legal work already happens. The systems where documents are handled, where conversations unfold, where knowledge accumulates, where permissions are managed, and where workflows are carried out. Together, these form what can be described as a foundational stack.
Over time, this stack stops feeling like a set of choices and begins to function as a default environment. People no longer think about where they work or how they move between systems. They follow the paths that feel most natural. What emerges is not just a collection of tools, but a behavioural system that shapes how work flows, where friction appears, and what feels intuitive versus disruptive.
This is also why these systems are so resistant to change. They’re not only technically embedded, but cognitively embedded as well. They define habits, expectations, and what “normal” looks like across the business. As a result, workflows that may appear inefficient from the outside can feel entirely natural to those operating within them, simply because they align with the surrounding environment.
One way to make this more visible is to think of the stack in layers.
At the bottom sit the foundational infrastructure layers—identity and access management, single sign-on, security controls, and compliance requirements—that define what is permissible: how systems connect, how data moves, and who can access what. Above that are the systems where work actually happens—productivity environments, communication tools, and document or knowledge management systems—which determine where legal work lives and how it’s carried out day to day. On top of those sit workflow tools that orchestrate how work moves across the organisation. And only then do we arrive at the layer most teams now focus on during evaluations: specialist tools, including AI.
What this layered view highlights is a simple but often overlooked reality. Legal teams tend to evaluate tools at the top of the stack, while the longer-term viability of those tools is largely shaped by the layers underneath. The lower layers determine where work happens, how that work is performed, and what constraints are already in place. Every new tool has to operate within those boundaries, whether that’s explicitly acknowledged or not.
Find Where Work Actually Gravitates
Once you start looking at your stack as a layered system, the next step is to understand how work actually moves within it. Not all layers—and not all tools—carry the same weight. Some sit at the periphery, used occasionally and easily replaced. Others become central to how work gets done. Over time, a pattern emerges: certain systems act as the default starting point for most activities. This is where your team’s stack gravity sits.
Over time, most teams develop a kind of centre of gravity—a place where work naturally begins and where it tends to converge. This is not always formally defined, but it shows up clearly in behaviour. It’s where people instinctively go when something needs to get done, especially under time pressure. It’s where documents are actually handled, where conversations happen in practice, and where people turn when they need context.
For some teams, this centre of gravity sits in Word and email. For others, it has shifted toward collaboration environments like Slack. In more structured setups, it may sit within a CLM or another workflow system. What matters is not which tools sit at the centre, but that one or more exist—and that they shape how work is organised around them in different contexts.
This has practical consequences. Tools that sit close to that centre tend to feel intuitive. They align with existing habits and require little conscious effort to adopt. Tools that sit further away, even when they’re highly capable, tend to introduce friction. They ask users to step outside their natural environment, switch contexts, or maintain parallel workflows. Even small amounts of friction at this level tend to compound over time, often resulting in partial or inconsistent adoption.
A quick example of this in practice
I recently spoke to the Legal Ops Manager at a global healthtech company. Their legal team ran into a familiar issue: they were spending close to a full working day each week answering the same basic questions from across the business—requests coming in through Slack DMs, emails, and informal conversations.
The obvious solution might have been to introduce a formal intake tool. Instead, they took a different approach. They looked at where work was already happening—and built around that.
Slack was already the company’s operational centre. So rather than pulling people into a new system, they embedded the entire intake and triage flow directly into existing Slack channels. A Wordsmith AI agent handled routine questions in real time, while more complex requests were routed through a lightweight workflow using tools already in the stack, including Google Forms and Jira.
What made this work was not the technology itself, but the way it aligned with the team’s existing environment. There were no new logins, no major process shifts, and no expectation that the business would change how it asked for help. Even some of the limitations—like restricted Jira licenses—were worked around creatively rather than solved by introducing new tools.
As a result, the team reduced a significant portion of repetitive questions and, more importantly, repositioned legal closer to where the business already operated.
The importance of this dynamic is easy to underestimate because it’s difficult to measure. A tool may technically integrate with a core system, but still feel distant in practice. If it requires users to leave their primary workspace, reformat information, or duplicate steps, it’s already operating against the direction of gravity.
Why Starting Points Matter More Than Features
This is where many evaluations start getting foggy. New products are typically assessed based on what they can do, while far less attention is paid to what they require in return. Every new solution implicitly asks for some degree of behavioural change. It may introduce a new interface, shift where work takes place, or require users to step outside their existing flow. These trade-offs are rarely examined explicitly, yet they often determine whether a solution will stick in the long run.
Every workflow starts and ends somewhere. A contract is opened somewhere. A question is asked in a particular channel. A request enters legal through a specific path. From there, work moves across systems, people, and layers of the stack until it reaches some form of resolution. These entry and exit points are easy to overlook, but they’re where much of the leverage sits. They determine what information is immediately available, how much context is preserved, and how much effort it takes to move work forward.
Solutions that align with these connection points tend to feel intuitive, even when they’re relatively simple. They meet users where they already are and integrate into workflows almost by default. Solutions that don’t align, no matter how advanced, tend to feel heavier. They require an extra step at the very beginning or the very end of a workflow, right at the point of critical interaction and collaboration, and that small shift might be just enough to create resistance. Because changing where work begins or ends is fundamentally different from changing how it is performed.
This becomes even more pronounced in the context of AI. AI tools, more than others, depend on being embedded early in the workflow, close to where work actually happens, and used frequently enough for trust to build over time. When those conditions are met, their value compounds quickly. When they’re not, even strong capabilities tend to remain underused.
What You Should Look For During Evaluations
Once you bring these dynamics into focus, the way you approach evaluation starts to change. The question is no longer just what a tool can do in isolation, but how it fits into the environment you already operate in. Vendor conversations become less about exploring features in the abstract and more about understanding how those features show up in your actual workflows.
In practice, this means looking more closely at how a solution integrates into the systems your team and the broader business already rely on. Not just whether an integration exists, but how it works. Whether it allows work to happen in place, or whether it requires additional steps, workarounds, or context switching.
Let’s explore another illustration of this in practice
I spoke to the Head of Legal at a global technology company. Their legal team faced a set of challenging circumstances while implementing a new CLM. The goal was clear: centralise contracts and enable the business to self-serve. The team selected Ironclad, which delivered strongly on both functionality and implementation support.
The real challenge wasn’t the tool itself—it was the environment it had to fit into.
At the same time as the CLM rollout, the business was undergoing a major Salesforce implementation. Sales teams were already adjusting to a new system, and the prospect of introducing another standalone platform quickly became a barrier. Adoption slowed, not because the CLM lacked capability, but because it sat too far from where work was now happening.
The initial plan was to rely on Ironclad’s native Salesforce integration. But in practice, that integration required additional budget and resources that weren’t available.
Rather than forcing the workflow to adapt to the tool, the team reversed the approach. IT built a custom connector that allowed Sales to trigger contract workflows directly from within Salesforce—removing the need to switch systems altogether.
That shift—bringing the tool closer to the business’s centre of gravity—made the difference. Adoption improved, and Legal was no longer seen as introducing friction, but as enabling the business.
This becomes even clearer during pilots. What often appears seamless in a demo can break down once real workflows are introduced. Documents may not move cleanly between systems, outputs may need to be copied rather than returned, and small interruptions begin to accumulate. That’s why it’s so important to treat any such trial or pilot as a simulation of your reality, using your documents and common business scenarios. Because each of these breaks in flow may seem minor in isolation, but together they determine whether a tool becomes part of daily work or remains something that is used only occasionally.
Across all stages of evaluation, the underlying question remains the same. Not what this tool can do, but where it sits in relation to how you already work.
Most legal tech products are evaluated as if they exist on their own. In reality, they’re always entering an environment that’s already shaped by existing systems, habits, and constraints. Understanding that environment—your foundational stack, where work begins and ends, as well as where it naturally gravitates—is what allows you to evaluate products more clearly.
Because in the end, adoption is not driven by capability alone. It’s driven by how naturally a solution fits into the flow of work that already exists. And the closer a solution is to that flow, the more likely it is to become part of it.
Many thanks to the in-house professionals who shared their experiences and helped ground this piece in the practical reality of implementing legal tech. And a big thank you to the awesome Laura Jeffords Greenberg, whose recent LinkedIn post inspired me to shape my own thoughts on this topic. You can connect with and follow her on LinkedIn.
Want to dig deeper into how stack gravity plays out for your team? I’ll be back soon with a subscriber-only special—a simulation exploring how different foundational stacks shape technology decisions in practice.
Wishing you all a productive start to the spring season 🌻

