In a previous post, we looked at some measures of patent attorney (or firm) success:
- Low cost;
- Minimal mistakes;
- Timely actions; and
- High legal success rate.
In this post, we will look at how we can measure these.
Let’s start with legal success. For legal success rate we identified the following:
- Case grants (with the caveat that the claims need to be of a good breadth);
- Cases upheld on opposition (if defending);
- Cases revoked on opposition (if opposing);
- Oral hearings won; and
- Court cases won.
When looking to measure these we come across the following problems:
- It may be easy to obtain the grant of a severely limited patent claim (e.g. a long claim with many limiting features) but difficult to obtain the grant of a more valuable broader claim (e.g. a short claim with few limiting features).
- Different technical fields may have different grant rates, e.g. a well-defined niche mechanical field may have higher grant rates than digital data processing fields (some “business method” areas have grant rates < 5 %).
- Cases are often transferred between firms or in-house counsel. More difficult cases are normally assigned to outside counsel. A drafting attorney may not necessarily be a prosecuting attorney.
- During opposition or an oral hearing, a claim set may be amended before the patent is maintained (e.g. based on newly cited art). Is this a “win”? Or a “loss”? If an opponent avoids infringement by forcing a limitation to a dependent claim, that may be a win. What if there are multiple opponents?
- In court, certain claims may be held invalid, certain claims held infringed. How do you reconcile this with “wins” and “losses”?
One way to address some of the above problems is to use a heuristic that assigns a score based on a set of outcomes or outcome ranges. For example, we can categorise an outcome and assign each category of outcome a “success” score. To start this we can brainstorm possible outcomes of each legal event.
To deal with the problem of determining claim scope, we can start with crude proxies such as claim length. If claim length is measured as string length, (1 / claim_length) may be used as a scoring factor. As automated claim analysis develops this may be replaced or supplemented by claim feature or limiting phrase count.
Both these approaches could also be used together, e.g. outcomes may be categorised, assigned a score, then weighted by a measure of claim scope.
For example, in prosecution, we could have the following outcomes:
- Application granted;
- Application abandoned; and
- Application refused.
Application refused is assigned the lowest or a negative score (e.g. -5). Abandoning an application is often a way to limit costs on cases that would otherwise be refused; however, applications may also be abandoned for strategic reasons. This category may be assigned the next lowest or a neutral score (e.g. 0). Getting an application granted is a “success” and so needs a positive score. It may be weighted by claim breadth (e.g. constant / claim_length for the shortest independent claim).
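As a concrete sketch, the prosecution scoring above might look like the following. The score values and the weighting constant are illustrative assumptions, not a settled scheme:

```python
# Illustrative prosecution scoring: outcome categories mapped to scores,
# with grants weighted by a crude claim-breadth proxy (constant / claim_length).
OUTCOME_SCORES = {"granted": 5, "abandoned": 0, "refused": -5}

def prosecution_score(outcome, shortest_claim_length=None, constant=100.0):
    base = OUTCOME_SCORES[outcome]
    if outcome == "granted" and shortest_claim_length:
        # Shorter independent claims (a proxy for broader scope) score higher.
        return base * constant / shortest_claim_length
    return base
```

A granted case with a 50-character shortest claim would then score twice as highly as one with a 100-character claim, in line with the inverse claim-length weighting suggested above.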
In opposition or contentious proceedings we need to know whether the attorney is working for, or against, the patent owner. One option may be to set the sign of the score based on this information (e.g. a positive score for the patentee is a negative score for the opponent / challenger). Possible outcomes for opposition are:
- Patent maintained (generally positive for patentee, and negative for opponent);
- Patent refused (negative for patentee, positive for opponent).
A patent can be maintained with the claims as granted (a “good” result) or with amended claims (possibly good, possibly bad). As with prosecution we can capture this by weighting a score by the scope of the broadest maintained independent claim (e.g. claim_length_as_granted / claim_length_as_maintained).
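Putting the sign flip and the scope weighting together, an opposition score might be sketched as follows (the base score of 5 is an illustrative assumption):

```python
def opposition_score(maintained, for_patentee,
                     claim_length_as_granted=None,
                     claim_length_as_maintained=None,
                     base=5.0):
    """Score an opposition outcome. The sign flips depending on which
    side the attorney acts for; maintenance is weighted by how much
    claim scope survived (claim_length_as_granted / claim_length_as_maintained)."""
    if maintained:
        score = base
        if claim_length_as_granted and claim_length_as_maintained:
            score *= claim_length_as_granted / claim_length_as_maintained
    else:
        score = -base
    return score if for_patentee else -score
```

A patent maintained with claims twice as long as granted then scores half the full amount for the patentee's attorney, while an opponent's attorney receives the full positive score when the patent is revoked.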
Oral hearings (e.g. at the UK Intellectual Property Office or the European Patent Office) may be considered a “bonus” to a score or a separate metric, as any outcome would be taken into account by the above legal result.
For UK court cases, we again need to consider whether the attorney is working for or against the patentee. We could have the following outcomes:
- Patent is valid (all claims or some claims);
- Patent is invalid (all claims or some claims);
- Patent is infringed (all claims or some claims);
- Patent is not infringed (all claims or some claims);
- Case is settled out of court.
A case that is settled out of court provides little information: it typically reflects a position where both sides have some ground. It is likely better for the patentee than having the patent found invalid, but not as good as having the patent found valid and infringed. Similarly, it may be better for a claimant than the patent being found valid but not infringed, but worse than the patent being found invalid and not infringed.
One option for scoring partial validity or infringement (e.g. some claims valid/invalid, some claims infringed/not infringed) is to determine a score for each claim individually. For example, dependent claims may be treated using the shallowest dependency – effectively considering a new independent claim comprising the features of the independent claim and the dependents. A final score may be computed by summing the individual scores.
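The per-claim summation could be sketched as below. The individual claim scores (-2, 1, 2) are illustrative assumptions that order the outcomes as discussed above:

```python
def claim_score(valid, infringed):
    # From the patentee's perspective: an invalid claim scores negatively,
    # a valid-and-infringed claim scores highest, and a valid but
    # not-infringed claim sits in between.
    if not valid:
        return -2
    return 2 if infringed else 1

def case_score(claim_outcomes, for_patentee=True):
    """Sum per-claim scores over (valid, infringed) pairs; the sign
    flips when the attorney acts against the patentee."""
    total = sum(claim_score(v, i) for v, i in claim_outcomes)
    return total if for_patentee else -total
```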
So this could work as a framework to score legal success based on legal outcomes. These legal outcomes may be parsed from patent register data, claim data and/or court reports. There is thus scope for automation.
We still haven’t dealt with the issues of case transfers or different technical fields. One way to do this is to normalise or further weight scores developed using the above framework.
For technical fields, scores could be normalised based on average legal outcomes or scores for given classification groupings. There is a question of whether this data exists (I think it does for US art units, it may be buried in an EP report somewhere, I don’t think it exists for the UK). A proxy normalisation could be used where data is not available (e.g. based on internal average firm or company grant rates) or based on other public data, such as public hearing results.
Transferred cases could be taken into account by weighting by: time case held / time since case filing.
These may be measured by looking at the dates of event actions. These are often stored in patent firm record systems, or are available in patent register data.
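The transfer weighting above is a simple ratio of date differences. A minimal sketch, assuming the relevant dates are available from a record system or register:

```python
from datetime import date

def transfer_weight(date_received, date_transferred, filing_date, as_of):
    """Fraction of the case's lifetime that the attorney actually held
    the case: time_case_held / time_since_case_filing."""
    held = (date_transferred - date_received).days
    lifetime = (as_of - filing_date).days
    return held / lifetime if lifetime else 0.0
```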
It is worth noting that there are many factors outside the control of an individual attorney. For example, instructions may always be received near a deadline for a particular client, or a company may prefer to keep a patent pending by using all available extensions. The hope is that, as a first crude measure, these should average out over a range of applicants or cases.
For official responses, a score could be assigned based on the difference between the official due date and the date the action was completed. This could be summed over all cases and normalised. This can be calculated from at least EP patent register data (and could possibly be scraped from UKIPO website data).
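A simple version of this, assuming due dates and completion dates have been extracted from register data, might compute the margin in days and average it over a set of actions:

```python
from datetime import date

def timeliness_margin(due_date, completed_date):
    """Days of margin: positive if completed before the official due date,
    negative if completed late."""
    return (due_date - completed_date).days

def average_timeliness(actions):
    """Mean margin over (due_date, completed_date) pairs, as a crude
    normalised timeliness score across cases."""
    margins = [timeliness_margin(d, c) for d, c in actions]
    return sum(margins) / len(margins) if margins else 0.0
```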
For internal timeliness, benchmarks could be set, and a negative score assigned based on deviations from these. Example benchmarks could be:
- Acknowledgements / initial short responses sent within 1 working day of receipt;
- Office actions reported within 5 working days of receipt;
- Small tasks or non-substantive work (e.g. updating a document based on comments, replying to questions etc.) performed within 5 working days of receipt / instruction; and
- Substantive office-action and drafting work (e.g. reviews / draft responses) performed within 4 weeks of instruction.
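The benchmark deviations could be scored as below. The benchmark values mirror the list above (with 4 weeks taken as roughly 20 working days); the one-point-per-day penalty scheme is an assumption:

```python
# Benchmarks in working days, mirroring the example list.
BENCHMARKS = {
    "acknowledgement": 1,
    "office_action_report": 5,
    "small_task": 5,
    "substantive_work": 20,  # ~4 weeks of working days
}

def benchmark_penalty(task_type, working_days_taken):
    """0 if within the benchmark, otherwise minus the number of
    working days over it."""
    return -max(working_days_taken - BENCHMARKS[task_type], 0)
```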
Turning to mistakes, these could be measured, across a set of cases, as a function of:
- a number of official communications issued to correct deviations;
- a number of requests to correct deficiencies (for cases where no official communication was issued); and/or
- a number of newly-raised objections (e.g. following the filing of amended claims or other documents).
This information could be obtained by parsing document management system names (to determine communication type / requests), from patent record systems, online registers and/or by parsing examination communications.
One issue with cost is that it is often relative: a complex technology may take more time to analyse, and a case with 50 claims will cost more to process than a case with 5. Different companies may also have different charging structures. Costs of individual acts further need to be taken in context – a patent office response may seem expensive in isolation, but if it secures the grant of a broad claim, it may be better value than a series of responses charged at a lower amount.
One proxy for cost is time, especially in a billable hours system. An attorney that obtains the same result in a shorter time would be deemed a better attorney. They would either cost less (if charged by the hour) or be able to do more (if working on a fixed fee basis).
In my post on pricing patent work, we discussed methods for estimating the time needed to perform a task. This involved considering a function of claim number and length, as well as citation number and length. One option for evaluating cost is to calculate the ratio: actual_time_spent / predicted_time_spent and then sum this over all cases.
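The ratio-and-sum evaluation described above could be sketched as follows (how `predicted_time` is estimated – e.g. from claim and citation counts – is left to the pricing model):

```python
def cost_ratio(actual_time, predicted_time):
    """Ratio of actual to predicted time: below 1 means the work
    was completed faster than estimated."""
    return actual_time / predicted_time

def total_cost_metric(cases):
    """Sum of per-case ratios over (actual_time, predicted_time) pairs."""
    return sum(cost_ratio(a, p) for a, p in cases)
```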
Another approach is to look at the average number of office actions issued in prosecution – a higher number would indicate a higher lifetime cost. This number could be normalised per classification grouping (e.g. to counter the fact that certain technologies tend to get more objections).
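Normalisation per classification grouping could be sketched as below, dividing each case's office-action count by the average for its grouping. The pairing of classification code and count is an assumed input format:

```python
from collections import defaultdict

def normalise_by_classification(cases):
    """cases: iterable of (classification, office_action_count) pairs.
    Divides each count by the average for its classification grouping,
    so technologies that attract more objections are not unfairly
    penalised."""
    counts = defaultdict(list)
    for cls, n in cases:
        counts[cls].append(n)
    averages = {cls: sum(ns) / len(ns) for cls, ns in counts.items()}
    return [(cls, n / averages[cls]) for cls, n in cases]
```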
The time taken would need to be normalised by the legal success measures discussed above. Spending no time on any cases would typically lead to very high refusal rates, and so even though a time metric would be low, this would not be indicative of a good attorney. Similarly, doing twice the amount of work may lead to a (small?) increase in legal success but may not be practically affordable. It may be that metrics for legal success are divided by a time spent factor.
Patent billing or record systems often keep track of attorney time. This would be the first place to look for data extraction.
An interesting result of this delve into detail is that legal success and cost need to be evaluated together, but that these can be measured independently of timeliness and error, which in turn may be measured independently of each other. Indeed, timeliness and error avoidance may be seen as baseline competences, where deviations are to be minimised.
It would also seem possible, in theory at least, to determine these measures of success automatically, some from public data sources and others from existing internal data. Those that can be determined from public data sources raise the tantalising (and, for some, scary?) possibility of comparing patent firm performance, with measures grouped by firm or attorney. It is hard to see how a legal ranking based on actual legal performance (as opposed to an ability to wine and dine legal publishers) would be bad for those paying for legal services.
It is also worth raising the old caveat that measurements are not the underlying thing (in a Kantian mode). There are many reasonable arguments about the dangers of metrics, e.g. from the UK health, railways or school systems. These include:
- the burden of measurement (e.g. added bureaucracy);
- modifying behaviour to enhance the metrics (e.g. at the cost of that which is not measured or difficult to measure);
- complex behaviour is difficult to measure, any measurement is a necessarily simplified snapshot of one aspect; and
- misuse by those in power (e.g. to discriminate or as an excuse or to provide backing for a particular point of view).
These, and more, need to be borne in mind when designing the measures. However, I believe the value of relatively objective measurement in an industry that is far too subjective is worth the risk.