In the previous post in the Contract Review Software Buyer’s Guide, we explored the human element of building contract provision extraction software. The core point of that post: if you wouldn’t trust the people who instructed a contract review system to review contracts accurately themselves, can you trust results from the system they built? Problematically, most automated contract provision extraction systems are instructed by under-qualified people. This post covers some ancillary issues.
Do Good People Plus Rules-Based Tech Work?
The extra investment in high-end reviewers might be less important for vendors who use rules-based extraction technology: having good people build provision models on the wrong tech doesn’t go very far, because rules-based systems are unlikely to be accurate on unfamiliar agreements and poor-quality scans no matter how well their rules are crafted.

This is demonstrated by Mumboe’s experience with rules-based contract provision extraction. As discussed in an earlier Guide post, Mumboe (a now-defunct contract management provider) had a graduate of a top university’s linguistics Ph.D. program (who had previous experience in information extraction) leading its rules-based natural language processing efforts. This linguistics Ph.D. grad may not have known how to spot a subtle change of control provision, but it’s hard to imagine a more qualified person to write provision extraction rules. Yet, based on our discussion with someone who would know and had nothing to gain from saying anything negative about Mumboe, Mumboe’s provision models apparently worked poorly on unfamiliar documents. So even a well-suited person instructing a rules-based system did not lead to robust provision models.

That said, the “garbage in, garbage out” logic still applies: Mumboe’s rules-based system would presumably have been even lower quality with a less-skilled instructor. We know of two current rules-based contract provision extraction vendors. Our sense is that they are not hiring linguistics Ph.D. grads, experienced lawyers, or the like to write their rules.
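To make the brittleness concrete, here is a minimal, hypothetical sketch of a keyword-style extraction rule. The pattern and sample clauses are invented for illustration; they are not Mumboe’s or any vendor’s actual rules:

```python
import re

# Hypothetical rule: flag clauses containing market-standard
# change of control phrasing. Real rules-based systems use far
# larger rule sets, but the failure mode is the same.
CHANGE_OF_CONTROL_RULE = re.compile(
    r"change\s+(?:of|in)\s+control", re.IGNORECASE
)

standard_clause = (
    "This Agreement may be terminated by either party upon a "
    "Change of Control of the other party."
)

# The same substance, atypically drafted: the trigger phrase
# never appears.
atypical_clause = (
    "If any person acquires beneficial ownership of more than "
    "fifty percent (50%) of the voting securities of a party, "
    "the other party may terminate this Agreement."
)

for clause in (standard_clause, atypical_clause):
    fired = bool(CHANGE_OF_CONTROL_RULE.search(clause))
    print(f"rule fired: {fired} | {clause[:55]}...")
```

The rule fires on the standard clause and misses the atypical one. A skilled rule writer can keep adding patterns for every variant they foresee, but drafting they did not anticipate keeps slipping through, which is consistent with what we heard about Mumboe’s models on unfamiliar documents.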
Client-Built Contract Provision Models
Contract provision extraction systems generally come pre-loaded with a number of provision models, and tend to offer a mechanism for clients to build new models. A vendor who offers a system built using provisions found by less-qualified contract reviewers might respond to the “Garbage In, Garbage Out” post by arguing:
“We build our provision models based on examples our clients give us.”
This is a fine answer, as long as their other clients (1) trust the review quality that generated the client-given examples and (2) do not expect the provision models their system comes pre-loaded with to work accurately on atypically drafted provisions (since the source data may not include atypically drafted provisions, or the reviewers may not have found them). This is a good time to remember that atypically drafted provisions are common yet sometimes hard to find, and they are just as valid as ones drafted using market-standard language.
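To see why training-data coverage matters, here is a minimal, hypothetical sketch (using scikit-learn and invented example clauses, not any vendor’s actual data or pipeline). A bag-of-words model can only weigh tokens it has seen; if every client-given example uses market-standard wording, an atypically drafted provision contributes almost nothing the model recognizes:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical client-given positive examples: each one happens
# to use the market-standard trigger phrase.
positives = [
    "termination upon a change of control of either party",
    "assignment is prohibited following any change of control",
]

vectorizer = CountVectorizer().fit(positives)

# An atypically drafted provision with the same legal effect,
# absent from the examples:
atypical = "terminable if any person acquires a majority of the voting securities"

features = vectorizer.transform([atypical])
overlap = [token for token, idx in vectorizer.vocabulary_.items()
           if features[0, idx] > 0]
print(overlap)  # only generic tokens ('any', 'of') survive
```

A classifier built on these features has essentially nothing distinctive to score the atypical provision on. The model is not wrong so much as blind: the gap was baked in when the examples were collected.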
Perhaps, instead, a vendor using under-qualified reviewers might respond:
“It doesn’t matter how we build our provision models, since our clients are able to use our system to build and supplement their own extraction policies.”
We think clients building and modifying their own provision models is great, and we will have more to say on this point in a forthcoming post. That said, it is an unsatisfactory answer for a system’s standard contract provisions. A contract provision extraction system that is moderately accurate on its out-of-the-box provisions would be okay in a world where there are no better options. But there are. Why bother with a contract metadata abstraction system that has mediocre accuracy on unfamiliar agreements and poor-quality scans?
Experienced Reviewers Have Been Shown to Make an Impact in eDiscovery
There are very significant differences between eDiscovery document review software and contract review software (which we will try to cover in a later post). Nonetheless, eDiscovery is also an information classification task, and, helpfully, it attracts a lot more outside research.
A recent study examines performance on an already-settled Sun Microsystems/Oracle matter, where outside counsel had supervised a “rigorous” document review process “that included a thorough review with multiple quality checks.” Sixteen or more technology-assisted and human teams are seeing how they perform against this already-completed work (note: some teams submitted multiple entries). While the study remains underway, Phase I takeaways include that “Software is only as good as its operators.” Here is more detail:
Tech Team 19’s human input was a single senior-level attorney who spent 64.5 hours on review and analysis. Tech 19 performed best at finding both responsive documents and privileged documents. “If the gold standard is to replicate the mind of the senior attorney who certifies he or she has conducted a reasonable inquiry in Federal Rule of Civil Procedure 26(g), then it appears that both the original review and Tech Team 19 used a favorable methodology. Coincidentally, Team 15 used similar technology as Tech 19, with multiple contract reviewers [note: here “contract reviewers” likely refers to the low-cost temporary attorneys often deployed on litigation document review projects], and did not achieve similar results.”
…
Tech 5 also engaged experienced attorneys and was the second most effective at identifying privileged information.
Experienced reviewers are a critical foundation for instructing software.
Contract Review Work is Hard
The real promise of automated contract provision detection software lies in how it can enhance tired, overworked, and novice reviewers, giving them the benefit of an experienced reviewer’s guidance on unfamiliar text, as applied through software. Many contract review software systems were built by people whom our users would not trust to review contracts accurately. Without appropriate human expertise as a foundation, how can these systems ever perform at a level where quality-obsessed users come to trust their results?
Contract Review Software Buyer’s Guide Series:
- Part 1 - An Introduction to the Contract Review Software Buyer’s Guide
- Part 2 - What is Contract Review Software & Why Does it Exist?
- Part 3 - How Automated Contract Provision Extraction Systems Find Relevant Provisions, And Why “How” Matters
- Part 4 - No Rules: Problems With Rules-Based Contract Provision Extraction
- Part 5 - Manual Rule-Based Automated Provision Extraction Software Case Study: Mumboe
- Part 6 - Comparison- and Header Detection-Based Automated Contract Provision Extraction
- Part 7 - Foundations of Machine Learning-Based Contract Review Software
- Part 8 - Machine Learning-Based Automated Contract Provision Extraction
- Part 9 - Machine Learning-Based Contract Provision Extraction on Poor Quality Scans
- Part 10 - Garbage In, Garbage Out: Why Who Instructs An Automated Contract Provision Extraction System Matters
- Part 11 - Further Information on Why Who Instructs An Automated Contract Provision Extraction System Matters
- Part 12 - How to Build an Automated Contract Provision Extraction System
- Part 13 - How to Add Non-Standard Clause Detection to Your Contract Metadata Extraction System
- Part 14 - Non-Standard Contract Clause Detection is Easy to Build, Hard to Get Right
- Part 15 - What is the difference between contract analysis and eDiscovery software?