This article looks at how the process of obtaining a patent could be automated using deep learning. It discusses a possible pipeline for processing a patent application and shows how current state-of-the-art natural language processing techniques could be applied.
Brief Overview of Patent Prosecution
First, let’s briefly look at how a patent is obtained. A patent application is filed. The patent application includes a detailed description of the invention, a set of figures, and a set of patent claims. The patent claims define the proposed legal scope of protection. A patent application is searched and examined by a patent office. Relevant documents are located and cited against the patent application. If an applicant can show that their claimed invention is different from each citation, and that any differences are also not obvious over the group of citations, then they can obtain a granted patent. Often, patent claims will be amended by adding extra features to clearly show a difference over the citations.
For a deep learning practitioner the first question is always: what data do I have? If you are lucky enough to have labelled datasets then you can look at applying supervised learning approaches.
It turns out that the large public database of patent publications is such a dataset. All patent applications need to be published in order to proceed to grant. This may be seen as a serendipitous gift for future generations.
In particular, a patent search report can be thought of as the following processes:
A patent searcher locates a set of citations based on the language of a particular claim.
Each located citation is labelled as being in one of three categories:
– X: relevant to the novelty of the patent claim.
– Y: relevant to the inventive step of the patent claim. (This typically means the citation is relevant in combination with another Y citation.)
– A: relevant to the background of the patent claim. (These documents are typically not cited in an examination report.)
In reality, these two processes often occur together. For our purposes, we may wish to add a further category: N – not cited.
Thinking as a data scientist, we have data records of the form (claim text, citation detailed description text, search classification).
This data may be retrieved (for free) from public patent databases. This may need some intelligent data wrangling. The first process may be subsumed into the second process by adding the “not cited” category. If we move to a slightly more mathematical notation, we have as data:
(c, d, s)
where c and d are each based on a (long) string of text and s is a label with four possible values. We then want to construct a model for:
P(s | c, d)
I.e. a probability model for the search classifications given the claim text and citation detailed description. If we have this we can do many cool things. For example, for a set c, we can iterate over a set of d and select the documents with the highest X and Y probabilities.
Representations for c and d
Machine learning algorithms operate on real-valued tensors (n×m-dimensional arrays). More than that, the framework for many discriminative models maps data in the form of a large tensor X to a set of labels in the form of a tensor Y. For example, each row in X and Y may relate to a different data sample. The question then becomes: how do we map (c, d, s) to (X, Y)?
Mapping s to Y is relatively easy. Each row of Y may be an integer value corresponding to one of the four labels (e.g. 0 to 3). In some cases, each row may need to represent the integer label as a “one hot” encoding, e.g. a value of 2 → [0, 0, 1, 0].
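As a minimal sketch of that mapping, assuming numpy and an illustrative label ordering (the X/Y/A/N order here is my own convention, not an official one):

```python
import numpy as np

# Map the four search categories to integer indices
# (this ordering is an illustrative assumption)
LABELS = {"X": 0, "Y": 1, "A": 2, "N": 3}

def one_hot(labels):
    """Convert a list of category strings into a one-hot matrix Y."""
    indices = [LABELS[label] for label in labels]
    y = np.zeros((len(indices), len(LABELS)))
    y[np.arange(len(indices)), indices] = 1
    return y

# e.g. category "A" (index 2) becomes the row [0, 0, 1, 0]
print(one_hot(["A"]))
```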
Mapping c and d to X is harder. There are two sub-problems: 1) how do we combine c and d? and 2) how do we represent each of c and d as sets of real numbers?
There is an emerging consensus on sub-problem 2). A great explanation may be found in Matthew Honnibal’s post Embed, Encode, Attend, Predict. Briefly summarised, we embed words from the text using a word embedding (e.g. based on Word2Vec or GloVe). This outputs a sequence of real-valued float vectors for each word (e.g. vectors of length ~300). We then encode this sequence of vectors into a document matrix, e.g. where each row of the matrix represents a sentence encoding. One common way to do this is to apply a bidirectional recurrent neural network (RNN – such as an LSTM or GRU), where the outputs of a forward and a backward network are concatenated. An attention mechanism is then applied to reduce the matrix to a vector. The vector then represents the document.
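The attention step at the end can be sketched in a few lines of numpy. This is a bare-bones dot-product attention; in a real system the query vector would be learned (and the sentence encodings would come from the bidirectional RNN rather than a random number generator):

```python
import numpy as np

def attend(encoded, query):
    """Reduce an (n_sentences, dim) encoding matrix to a single
    document vector via dot-product attention against a query vector."""
    scores = encoded @ query                 # one score per sentence row
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over the sentences
    return weights @ encoded                 # weighted sum of the rows

rng = np.random.default_rng(0)
doc_matrix = rng.normal(size=(5, 300))  # 5 sentence encodings, dim 300
query = rng.normal(size=300)            # learned in practice; random here
doc_vector = attend(doc_matrix, query)
print(doc_vector.shape)                 # (300,)
```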
A simple way to address sub-problem 1) is to simply concatenate c and d (in a similar manner to the forward and backward passes of the RNN). A more advanced approach might use c as an input to the attention mechanism for the generation of the document vector for d.
Obtain the Data
To get our initial data records – (Claim text, citation detailed description text, search classification) – we have several options. For a list of patent publications, we can obtain details of citation numbers and search classifications using the European Patent Office’s Open Patent Services RESTful API. We can also obtain a claim 1 for each publication. We can then use the citation numbers to look up the detailed descriptions, either using another call to the OPS API or using the USPTO bulk downloads.
I haven’t looked in detail at the USPTO examination datasets but the information may be available there as well. I know that the citations are listed in the XML for a US grant (but without the search classifications). Most International (PCT / WO) publications include the search report, so at a push you could OCR and regex the search report text to extract a (claim number, citation number, search category) tuple.
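At a push, the regex step might look something like the sketch below. The line format here is my guess at an OCRed search-report layout (category, citation number, claim numbers); real reports vary, so the pattern would need tuning:

```python
import re

# Hypothetical line from an OCRed search report, e.g.
# "X  US 2010/0001234 A1  1-5"
# The exact layout varies by report; this pattern is an assumption.
LINE = re.compile(
    r"^(?P<category>[XYA])\s+"
    r"(?P<citation>[A-Z]{2}\s?[\d/]+\s?[A-Z]\d?)\s+"
    r"(?P<claims>[\d,\-]+)"
)

def parse_line(line):
    """Return a (claim numbers, citation number, category) tuple or None."""
    m = LINE.match(line.strip())
    if not m:
        return None
    return (m.group("claims"), m.group("citation"), m.group("category"))

print(parse_line("X  US 2010/0001234 A1  1-5"))
```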
Once you have a dataset consisting of X and Y from c, d, s, the process then just becomes designing, training and evaluating different deep learning architectures. You can start with a simple feed forward network and work up in complexity.
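A simple feed-forward starting point can be sketched with numpy alone. This shows only the forward pass mapping a batch of concatenated (c, d) vectors to probabilities over the four categories; the dimensions and initialisation are illustrative, and in practice you would train the parameters with a framework such as TensorFlow:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(x, params):
    """One hidden layer: X -> ReLU(X W1 + b1) -> softmax(H W2 + b2)."""
    w1, b1, w2, b2 = params
    hidden = np.maximum(0, x @ w1 + b1)
    return softmax(hidden @ w2 + b2)

rng = np.random.default_rng(0)
dim, hidden_dim, n_classes = 600, 64, 4  # 600 = concatenated c and d vectors
params = (rng.normal(0, 0.1, (dim, hidden_dim)), np.zeros(hidden_dim),
          rng.normal(0, 0.1, (hidden_dim, n_classes)), np.zeros(n_classes))

x = rng.normal(size=(8, dim))   # a batch of 8 (c, d) pairs
probs = forward(x, params)      # shape (8, 4); each row sums to 1
```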
I cannot guarantee your results will be great or useful, but hey if you don’t try you will never know!
In recent years there has been a resurgence of interest in machine learning and so-called “artificial intelligence” systems. Much of this resurgence is based on advances in so-called “deep learning”, neural networks with multiple layers of connections. For example, convolutional neural networks now provide state-of-the-art performance in many image recognition tasks and recurrent neural networks have been used to increase the accuracy of many commercial machine translation systems. Machine learning may be considered a subdiscipline of “artificial intelligence” that deals with algorithms that are trained to perform tasks such as classification based on collections of data. This recent resurgence has meant that more companies wish to protect innovations in this field. This quickly brings them into the realm of computer-implemented inventions, and the nuances of protection at the European Patent Office.
“Computer-implemented invention” is the European Patent Office term for a software invention. Claims that specify machine learning and artificial intelligence systems are almost certain to be considered “computer-implemented inventions”. The innovation in such systems occurs in the design of the algorithms and/or software architectures. Claims for new hardware to implement machine learning and artificial intelligence systems, such as new graphical processing unit configurations, would not be classed as computer-implemented inventions and would be considered in the same manner as conventional computer devices.
What Do We Have To Go On?
As key advances in the field have only been seen since 2010, there are few Board of Appeal cases that explicitly consider these inventions. It is likely we will see many Board of Appeal decisions in this field, but it is unlikely these will filter through the system much before 2020. However, applications in the field are being filed and examined. The following review is based on knowledge of these applications, evaluated in the context of existing Board of Appeal cases.
A first issue regarding machine learning and artificial intelligence systems is that many of the underlying techniques are public knowledge, given the rapid turn-over of publications and repositories of electronic pre-prints such as arXiv. Hence, many applicants may face novelty and inventive step objections if the invention involves the application of known techniques to new domains or problems. For patent attorneys who are drafting new applications, it is recommended to perform a pre-filing search of such publication sources and ensure that the inventors provide a full appraisal of what is public knowledge.
Domain of Invention
A second issue is the domain of the invention. This may be seen as the context of the invention as presented in the claims and patent description.
Inventions that apply machine learning approaches to fields in engineering are generally considered more positively by the European Patent Office. These fields will typically either operate on low-level data that represents physical properties or have some form of actuation or change in the physical world. For example, the following domains are less likely to have features excluded from an inventive step evaluation for being “non-technical”: navigating a robot within a three-dimensional space; dynamic adaptive change of a Field Programmable Gate Array; audio signal analysis in speech processing; and controlling a power supply to a data centre.
On the other hand, inventions that apply machine learning approaches within a business or “enterprise” domain are likely to be analysed more closely. These inventions have a greater chance of claim features being excluded for being “non-technical”. These domains typically have an aim of increasing profit. The more this aim is explicit in the patent application, the more likely a “non-technical” objection will be raised. For example, the following inventions are more likely to have features excluded from an inventive step evaluation for being “non-technical”: intelligent organisation of playlists in a music streaming service; adaptive electronic trading of securities; automated provision of electronic information in a company hierarchy; and automated negotiation of online advertising auctions.
Exclusions from Patentability
A third issue that arises is that individual features of the claims fall within the exclusions of Article 52(2) EPC. In the field of machine learning and artificial intelligence systems, there is an increased risk of claim features being considered to fall into one of the following categories: mathematical methods; schemes, rules and methods for performing mental acts or doing business; and presentations of information. These will briefly be considered in turn below.
The field of machine learning is closely linked to the field of statistics. Indeed many machine learning algorithms are an application of statistical methods. Academic researchers in the field are trained to describe their contributions mathematically, and this is required for publication in an academic journal. However, the practice of the European Patent Office, as directed by the Boards of Appeal, typically regards statistical methods as mathematical methods. In their pure, unapplied form they are considered “non-technical”.
Schemes, Rules and Methods for Performing Mental Acts
A claim feature is likely to be considered part of schemes, rules and methods for performing mental acts when the scope of the feature is too broad or abstract. For example, if a claimed method step also covers a human being performing the step manually, it is likely that the scope is too broad.
Schemes, Rules and Methods for Doing Business
Claim features are likely to be considered schemes, rules and methods for doing business when the information processing relates to a business aim or goal. This is especially the case where the information processing is dependent on the content of the data being processed, and that content does not relate to a low-level recording or capture of a physical phenomenon.
For example, processing of a digital sound recording to clean the recording of noise would be considered “technical”; processing row entries in a database of information technology assets to remove duplicates for licensing purposes would likely be considered “non-technical”.
Presentation of Information
Objections that features relate to the presentation of information may occur when the innovation relates to user experience (UX) or user interface (UI) features.
For example, a machine learning algorithm that adaptively arranges icons on a smartphone according to use may receive objections on the grounds that features relate to mathematical methods (the algorithm) and presentation of information (the arrangement of icons on the graphical user interface). As per Guideline G-II, 3.7.1, grant is unlikely if information is simply displayed to a user and any improvement occurs in the mind of the user. However, it is possible to argue for a technical effect if the output provides information on an internal state of operation of a device (at the operating system level or below, e.g. battery level, processing unit utilisation etc.) or if the output improves a sequence of interactions with a user (e.g. provides a new way of operating a device). Again, a technical problem needs to be demonstrated and the machine learning algorithm needs to be a tool to solve this problem.
Subfields of ML and AI
In certain subfields of machine learning and artificial intelligence, there is a tendency for Boards of Appeal and Examining Divisions to consider inventions more or less “technical”. This is often for a combination of factors, including field of operation of appellants, the history of research and traditional applications, and the background and public policy preferences of staff of the European Patent Office.
For example, machine learning and artificial intelligence systems in the field of image, video and audio processing are more likely to be found to have “technical” features that can contribute to an inventive step under Article 56 EPC. A convolutional neural network architecture applied to image processing is more likely to be considered a “technical” contribution than the same architecture applied to text processing. Similarly, it may be argued that machine learning and artificial intelligence systems in the field of medicine and biochemistry have “technical” characteristics, e.g. if they operate on data originating from mass spectrometry or medical imaging.
However, advances in search, classification and natural language processing are more likely to be found to have “non-technical” features that cannot contribute to an inventive step under Article 56 EPC. These areas of machine learning and artificial intelligence systems are often felt to be “technical” by the engineers and developers building such systems. However, it is a nuance of European case law that these areas are often deemed to have claim features that fall into an excluded “business”, “mathematical” or “administrative” category.
A recent example may be found in case T 1358/09. The claim in this case comprised “text documents, which are digitally represented in a computer, by a vector of n dimensions, said n dimensions forming a vector space, whereas the value of each dimension of said vector corresponds to the frequency of occurrence of a certain term in the document”. The Board agreed with the appellant that the steps in the claim were different to those applied by a human being performing classification. However, the Board concluded that the algorithm underlying the method of the claim did not “go beyond a particular mathematical formulation of the task of classifying documents”. They were of the opinion that the skilled person would have been given the (“non-technical”) text classification algorithm and simply be tasked with implementing it on a computer.
What Should We Not Do?
Managers and executives of commercial enterprises are often habituated into selling innovations to a non-technical audience. This means that invention disclosures often describe the invention at an abstract “marketing” level. When an invention is described in a patent application at this level, inventive step objections are likely.
The fact that mathematical formulae may comprise excluded “non-technical” features is difficult for inventors and practitioners to grasp. Often equations at an academic-publication level are included in patent specifications in an attempt to add technical character. This often backfires. While such equations may be deemed “technical” according to a standard definition of the term, they are often not deemed “technical” according to the definition applied by European case law.
In general, objections are more likely in this area when the scope of the claim is broad and attempts to cover applications of a particular algorithm in all industries. Applicants should be advised that trying to cover everything will likely lead to refusal.
What Should We Do?
Chances of grant may be increased by ensuring an examiner or Board of Appeal member can clearly see the practical application of the algorithm to a specific field or low-level technical area.
Patent attorneys drafting patent applications for machine learning and artificial intelligence systems should carefully consider the framing and description of the invention in the patent specification. In-depth discussions with the engineers and developers that are implementing the systems often enable innovations to be described more precisely. Given this precision, innovations may be framed as a “technical” or engineering innovation, i.e. a technical solution to a technical problem. This increases the chance of a positive opinion from the European Patent Office.
Often features of an invention will have both a business advantage and a “technical” advantage. For example, a machine learning system that learns how to dynamically route data over a network may help an online merchant more successfully route traffic to their website; however, this improved method may involve manipulation of data packets within a router that also improves network security. A patent specification describing the latter advantage will have a greater chance of grant than the former, regardless of the actual provenance of the invention. A practitioner may work with an inventor to ensure that initial business advantages are distilled to their proximate “technical” advantages and effects. For cases where data does not relate to a low-level recording or capture of a physical phenomenon, it is recommended to ensure that any described technical effect applies regardless of the content of the data.
When considering exclusion for “mental acts”, a risk of a “non-technical” objection may be reduced by ensuring that your method steps exclude a manual implementation. Note that this exclusion does not necessarily prevent other objections being raised (see T 1358/09 above).
When drafting patent applications, it is also important to describe the implementation of any mathematical method. In this manner, pseudo-code is often more useful than equations. It is also important to clearly define how attributes of the physical world are represented within the computer. Good questions to ask include: “What data structures and function routines are used to implement the elements of any equation?”, “How is data initially recorded, e.g. are documents a scanned image such as a bitmap or a markup file using a Unicode encoding?”, “What programming languages and libraries are being used?”, or “What application programming interfaces are important?”.
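As a minimal illustration of the level of detail that helps, the term-frequency document vector considered in T 1358/09 could be described with concrete data structures rather than just an equation (the vocabulary and whitespace tokenisation here are deliberate simplifications):

```python
from collections import Counter

def tf_vector(document, vocabulary):
    """Represent a document as term frequencies over a fixed vocabulary,
    one dimension per term (the representation discussed in T 1358/09)."""
    counts = Counter(document.lower().split())
    return [counts[term] for term in vocabulary]

vocab = ["patent", "claim", "network"]  # toy vocabulary for illustration
print(tf_vector("A patent claim about a patent network", vocab))  # [2, 1, 1]
```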
Practitioners do need to be wary of including overly limiting definitions within the claims; however, a positive opinion is more likely when specific implementation examples are described in the patent specification, followed by possible generalisations, than when specific implementation examples are omitted and the description only presents a generalised form of the invention along with more detailed mathematical equations.
To be successful in search, classification and natural language processing, one approach is to determine whether features relating to a non-obvious technical implementation may be claimed. This approach often goes hand in hand with a knowledge of academic publications in the field. While such publications may disclose a version of an algorithm being used, they often gloss over the technical implementation (unless the underlying source code is released on GitHub). For example, is there any feature of the data, ignoring its content, which makes implementation of a given equation problematic? If inventors have managed to reduce the dimensionality of a neural network using clever string pre-processing or quantisation then there may be an argument that the resultant solution is implementable on mobile and embedded devices. Reducing the size of a model from 3 GB to 300 KB by intelligent selection of pipeline stages may enable you to argue for a technical effect.
Do Not Believe The Hype?
Despite the hype, machine learning and artificial intelligence systems are just another form of software solution. As such, all the general guidance and case law on computer-implemented inventions continues to apply. A benefit of the longer timescales of patent prosecution is that you ride out the waves of Gartner’s hype cycle. In fact, I still sometimes prosecute cases from the end of the dotcom boom…
Natural Language Processing and Deep Learning have the potential to overhaul patent operations for large patent departments. Jobs that used to cost hundreds of dollars / pounds per hour may cost cents / pence. This post looks at where I would be investing research funds.
The Path to Automation
In law, the path to automation is typically as follows:
Group of Patent Documents > Summary Clusters (Text or Image) (Landscaping)
Official Communication > Response Letter Text (Prosecution)
I know there is a lot of hype out there and I don’t particularly want to be responsible for pouring oil on the flames of ignorance. I have tried to base these thoughts on widely reviewed research papers. The aim is to provide more of a piece of informed science fiction and to act as a guide as to what may be. (I did originally call it “Your Patent Department 2020” :).
Many of the things discussed below are still a long way off, and will require a lot of hard work. However, the same was said 10 years ago of many amazing technologies we now have in production (such as facial tagging, machine translation, virtual assistants, etc.).
Let’s dive into some examples.
At the moment, patent drafting typically starts as follows: receive invention disclosure, commission search (in-house or external), receive search results, review by attorney, commission patent draft. This can take weeks.
Instead, imagine a world where your inventors submit an invention disclosure and within minutes or hours you receive a report that tells you the most relevant existing patent publication, highlights potentially novel and inventive features and tells you whether you should proceed with drafting or not.
The techniques already exist to do this. You can download all US patent publications onto a hard disk that costs $75. You can convert high-dimensionality documents into lower-dimensionality real vectors (see https://radimrehurek.com/gensim/wiki.html or https://explosion.ai/blog/deep-learning-formula-nlp). You can then compute distance metrics between your decomposed invention disclosure and the corpus of US patent publications. Results can be ranked. You can use a Long Short Term Memory (LSTM) decoder (see https://www.tensorflow.org/tutorials/seq2seq) on any difference vector to indicate novel and possibly inventive features. A neural network classifier trained on previous drafting decisions can provide a probability of proceeding based on the difference results.
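The distance-metric step can be sketched with numpy alone. Here the document vectors are random stand-ins; in practice they would come from a dimensionality-reduction model such as gensim’s LSI or doc2vec, trained on the patent corpus:

```python
import numpy as np

def rank_by_similarity(disclosure_vec, corpus_matrix):
    """Rank corpus documents by cosine similarity to a disclosure vector."""
    corpus_norm = corpus_matrix / np.linalg.norm(corpus_matrix, axis=1,
                                                 keepdims=True)
    query_norm = disclosure_vec / np.linalg.norm(disclosure_vec)
    scores = corpus_norm @ query_norm
    return np.argsort(scores)[::-1], scores  # indices, most similar first

rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 100))  # 1000 publications, 100-dim vectors
# A disclosure vector lying close to publication 42, for illustration
disclosure = corpus[42] + 0.1 * rng.normal(size=100)

order, scores = rank_by_similarity(disclosure, corpus)
print(order[0])  # 42 — the closest publication in the corpus
```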
A draft patent application in a complicated field such as computing or electronics may take a qualified patent attorney 20 hours to complete (including iterations with inventors). This process can take 4-6 weeks.
Now imagine a world where you can generate draft independent claims from your invention disclosure and cited prior art at the click of a button. This is not pie-in-the-sky science fiction. State of the art systems that combine natural language processing, reinforcement learning and deep learning can already generate fairly fluid document summaries (see https://metamind.io/research/your-tldr-by-an-ai-a-deep-reinforced-model-for-abstractive-summarization). Seeding a summary based on located prior art, and the difference vector discussed above, would generate a short set of text with similar language to that art. Even if the process wasn’t able to generate a perfect claim off the bat, it could provide a rough first draft to an attorney who could quickly iterate a much improved version. The system could learn from this iteration (https://deepmind.com/blog/learning-through-human-feedback/) allowing it to improve over time.
In the old days, patent prosecution involved receiving a letter from the patent office and a bundle of printed citations. These would be processed, stamped, filed, carried around on an internal mail wagon and placed on a desk. More letters would be written culminating in, say, a written response and a set of amendments.
From this, imagine that your patent office post is received electronically, then automatically filed and docketed. Citations are also automatically retrieved and filed. Objection categories are extracted automatically from the text of the office action and the office action is categorised with a percentage indicating the chance of obtaining a granted patent. Additionally, the text of the citations is read and a score is generated indicating whether the citations remove novelty from your current claims (this is similar to the search process described above, only this time you know what documents you are comparing). If the score is lower than a given threshold, a set of amendment options are presented, along with a percentage chance of success for each. You select an option, maybe iterate the amendment, and then the system generates your response letter. This includes inserting details of the office action you are replying to (specifically addressing each objection that is raised), automatically generating passages indicating basis in the text of your application, explaining the novel features, generating a problem-solution argument that has a basis in the text of your application, and providing pointers for why the novel features are not obvious. Again you iterate then file online.
Parts of this are already in place at major law firms (e.g. electronically filing and docketing). I have played with systems that can extract the text from an office action PDF and automatically retrieve and file documents via our document management application programming interface. With a set of labelled training data, it is easy to build an objection classification system that takes as input a simple bag of words. Companies such as Lex Machina (see https://lexmachina.com/) already crunch legal data to provide chances of litigation success; parsing legal data from say the USPTO and EPO would enable you to build a classification system that maps the full text of your application, and bibliographic data, to a chance of prosecution success based on historic trends (e.g. in your field since the 1970s). Vector-space representations of documents allow distance measures in n-dimensional space to be calculated, and decoder systems can translate these into the language of your specification. The lecture here explains how to create a question answering system using natural language processing and deep learning (http://media.podcasts.ox.ac.uk/comlab/deep_learning_NLP/2017-01_deep_NLP_11_question_answering.mp4). You could adapt this to generate technical problems based on document text, where the answer is bound to the vector-space distance metric. Indeed, patent claim space is relatively restricted (it is, at heart, a long sentence, where amendments are often additional sub-phrases of the sentence that are consistent with the language of the claimset); the nature of patent prosecution and added subject matter, naturally produces a closed-form style problem.
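A bag-of-words objection classifier really is simple to prototype. The sketch below is pure Python with a toy scoring rule and an invented three-example training set; a real system would use a proper classifier (e.g. naive Bayes or logistic regression) over thousands of labelled office actions:

```python
from collections import Counter

def train(examples):
    """examples: list of (office_action_text, objection_label) pairs.
    Returns per-label word counts (a minimal bag-of-words model)."""
    model = {}
    for text, label in examples:
        model.setdefault(label, Counter()).update(text.lower().split())
    return model

def classify(model, text):
    """Score each label by normalised word overlap; return the best label."""
    words = text.lower().split()
    scores = {label: sum(counts[w] for w in words) / sum(counts.values())
              for label, counts in model.items()}
    return max(scores, key=scores.get)

# Tiny illustrative training set; real labels would come from past actions
examples = [
    ("the claims lack novelty over document d1", "novelty"),
    ("claim 1 does not involve an inventive step", "inventive step"),
    ("the amendment adds subject matter beyond the application as filed",
     "added matter"),
]
model = train(examples)
print(classify(model, "claim 1 lacks novelty over d1"))
```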
Imagining Reality is the First Stage to Getting There
There is no doubt that some of these scenarios will be devilishly hard to implement. It took nearly two decades to go from paper to properly online filing systems. However, prototypes of some of these solutions could be hacked up in a few months using existing technology. The low hanging fruit alone offers the potential to shave hundreds of thousands of dollars from patent prosecution budgets.
I also hope that others are aiming to get there too. If you are please get in touch!
Playing around with natural language processing has given me the confidence to attempt some claim language modelling. This may be used as a claim drafting tool or to process patent publication data. Here is a short post describing the work in progress.
Here, a caveat: this modelling will be imperfect. There will be claims that cannot be modelled. However, our aim is not a “perfect” model but a model whose utility outweighs its failings. For example, a model may be used to present suggestions to a human being. If useful output is provided 70% of the time, then this may prove beneficial to the user.
To start we will keep it simple. We will look at system or apparatus claims. As an example we can take Square’s payment dongle:
1. A decoding system, comprising:
a decoding engine running on a mobile device, the decoding engine in operation decoding signals produced from a read of a buyer’s financial transaction card, the decoding engine in operation accepting and initializing incoming signals from the read of the buyer’s financial transaction card until the signals reach a steady state, detecting the read of the buyer’s financial transaction card once the incoming signals are in a steady state, identifying peaks in the incoming signals and digitizing the identified peaks in the incoming signals into bits;
a transaction engine running on the mobile device and coupled to the decoding engine, the transaction engine in operation receiving as its input decoded buyer’s financial transaction card information from the decoding engine and serving as an intermediary between the buyer and a merchant, so that the buyer does not have to share his/her financial transaction card information with the merchant.
Let’s say a claim consists of “entities”. These are roughly the subjects of claim clauses, i.e. the things in our claim. They may appear as noun phrases, where the head word of the phrase is modelled as the core “entity”. They may be thought of as “objects” from an object-oriented perspective, or “nodes” in a graph-based approach.
In the above claim, we have core entities of:
“a decoding system”
“a decoding engine”
“a transaction engine”
An entity may have “properties” (i.e. “is” something) or may have other entities (i.e. “have” something).
In our example, the “decoding system” has the “decoding engine” and the “transaction engine” as child entities. Or put another way, the “decoding engine” and the “transaction engine” have the “decoding system” as a parent entity.
In the example, the properties of the entities are more complex. The “decoding system” does not have any. It just has the child entities. The “decoding engine” “is”:
“running on a mobile device”
“in operation decoding signals produced from a read of a buyer’s financial transaction card”
“in operation accepting and initializing incoming signals from the read of the buyer’s financial transaction card until the signals reach a steady state”
“detecting the read of the buyer’s financial transaction card once the incoming signals are in a steady state”
“identifying peaks in the incoming signals and digitizing the identified peaks in the incoming signals into bits”
In these “is” properties, we have a number of implicit entities. These are not in our claim but are referred to by the claim. They are basically the other nouns in our claim. They include:
“buyer’s financial transaction card”
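This entity/property/parent-child model can be sketched directly in code. The following is a minimal illustration using Python dataclasses (the class and field names are my own, not from any library), populated with the entities from the example claim:

```python
from dataclasses import dataclass, field

# Minimal model of the claim structure discussed above: an entity has
# "is" properties (strings) and "has" child entities.
@dataclass
class Entity:
    name: str
    properties: list = field(default_factory=list)   # "is" something
    children: list = field(default_factory=list)     # "has" something

system = Entity("decoding system")
decoder = Entity("decoding engine",
                 properties=["running on a mobile device"])
transaction = Entity("transaction engine")
system.children += [decoder, transaction]

# Each child implicitly has the "decoding system" as its parent entity.
parents = {child.name: system.name for child in system.children}
```

From here, traversing the tree gives you the graph-based view of the claim mentioned above.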
[When modelling, the part-of-speech tagger gets most of the way there but will probably require human tweaking and confirmation.]
Mapping to Natural Language Processing
To extract noun phrases, we need the following processing pipeline:
claim_text > [1. Word Tokenisation] > list_of_words > [2. Part of Speech Tagging] > labelled_words > [3. Chunking] > tree_of_noun_phrases
Now, the NLTK toolkit provides default functions for 1) and 2). For 3) we have the option of a RegexpParser, for which we need to supply noun phrase patterns, or a classifier-based chunker. Both need a little extra work, but there are tutorials on the Net.
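As a toy illustration of stage 3, here is a pure-Python chunker run over a hand-tagged fragment of the example claim. The tags and the noun-phrase pattern (optional determiner, any adjectives, one or more nouns) are illustrative; in practice NLTK's word_tokenize, pos_tag and RegexpParser handle stages 1 to 3, and a real tagger may tag words like "decoding" differently:

```python
# Hand-tagged fragment of the example claim (tags chosen by hand for
# illustration; a real part-of-speech tagger may differ).
tagged = [("a", "DT"), ("decoding", "JJ"), ("engine", "NN"),
          ("running", "VBG"), ("on", "IN"),
          ("a", "DT"), ("mobile", "JJ"), ("device", "NN")]

def noun_phrases(tagged_words):
    """Collect runs of DT/JJ/NN* tags that contain at least one noun."""
    phrases, buf = [], []

    def flush():
        # Only emit the buffered run if it actually contains a noun.
        if any(tag.startswith("NN") for _, tag in buf):
            phrases.append(" ".join(word for word, _ in buf))
        buf.clear()

    for word, tag in tagged_words:
        if tag in ("DT", "JJ") or tag.startswith("NN"):
            buf.append((word, tag))
        else:
            flush()
    flush()
    return phrases
```

Here `noun_phrases(tagged)` returns `["a decoding engine", "a mobile device"]`, i.e. a core entity and an implicit entity as discussed above.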
Noun phrases should be used consistently throughout claim sentences – this can be used to resolve ambiguity.
This post sets out a number of resources to get you started with deep learning, with a focus on natural language processing for legal applications.
A Bit of Background
Deep learning is a bit of a buzzword. Basically, it relates to recent advances in neural networks, and in particular to the number of layers that can be used in these networks. Each layer can be thought of as a mathematical operation. In many cases, it involves a multidimensional extension of drawing a line, y = ax + b, to separate a space into multiple parts.
I find it strange that when I studied machine learning in 2003/4, neural networks had gone out of fashion. The craze then was for support vector machines. Neural networks were seen as a bit of a dead end. While there was nothing wrong theoretically, in practice it wasn’t possible to train a network with more than a couple of layers. This limited their application.
Computers and software improved. Memory increased. Researchers realised they could co-opt the graphical processing units of beefy graphics cards of hardcore gamers to perform matrix and vector multiplication. The Internet improved access to large-scale data sets and enabled the fast propagation of results. Software toolkits and standard libraries arrived. You could now program in Python for free rather than pay large licence fees for Matlab. Python made it easy to combine functionality from many different areas. Software became good at differentiating and incorporating advanced mathematical optimisation techniques. Google and Facebook poured money into the field. Etc.
This all led to researchers being able to build neural networks with more and more layers that could be trained efficiently. Hence, “deep” means more than two layers and “learning” refers to neural network approaches.
Deep Natural Language Processing
Deep learning has a number of different application areas. One big split is between image processing and natural language processing. The former has seen big success with the use of convolutional neural networks (CNNs), while natural language processing has tended to focus on recurrent neural networks (RNNs), which operate on sequences over time.
Image processing has also typically considered supervised learning problems. These are problems where you have a corpus of labelled data (e.g. ‘ImageX’ – ‘cat’) and you want a neural network to learn the classifications.
Natural language processing on the other hand tends to work with unsupervised learning problems. In this case, we have a large body of unlabelled data (see the data sources below) and we want to build models that provide some understanding of the data, e.g. that model in some way syntactic or semantic properties of text.
That said, there are crossovers: there are several highly-cited papers that apply CNNs to sentence structures, and document classification can be performed on the basis of a corpus of labelled documents.
Introductory Blog Posts
A good place to start are these blog posts and tutorials. I’m rather envious of the ability of these folks to write so clearly about such a complex topic.
After you’ve read those blog articles, a next step is to dive into the Udacity free Deep Learning course. This is taught in collaboration with Google Brain and is a great introduction to Logistic Regression, Neural Networks, Data Wrangling, CNNs and a form of RNN called Long Short-Term Memory (LSTM). It includes a number of interactive Jupyter/IPython Notebooks, which follow a similar path to the Tensorflow tutorials.
Once you’ve got your head around the theory, and have played around with some simple examples, the next step is to get building on some legal data. Here’s a selection of useful text sources with a patent slant:
The file you probably want here is enwiki-latest-pages-articles.xml.bz2. This clocks in at 13 GB compressed and ~58 GB uncompressed. It is supplied as a single XML file. Again I need to work on some Python helper functions to access the XML and return text.
(Note: this is the same format as recent USPTO grant data – a good XML parser that doesn’t read the whole file into memory would be useful.)
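One way to stream such a file with only the standard library is xml.etree.ElementTree.iterparse, which yields elements as they are read rather than loading the whole dump. This is a sketch under assumptions: the default tag name here matches the Wikipedia dump's "page" elements and would need adjusting for the USPTO schema.

```python
import xml.etree.ElementTree as ET

def iter_elements(source, tag="page"):
    # Stream a large XML file: iterparse yields each element as its end
    # tag is read; clearing each element after use keeps memory flat.
    for _, elem in ET.iterparse(source, events=("end",)):
        if elem.tag.endswith(tag):   # endswith() tolerates XML namespaces
            yield elem
            elem.clear()
```

You would then call `iter_elements("enwiki-latest-pages-articles.xml")` (after decompressing, or by wrapping the file in `bz2.open`) and pull the text out of each yielded element.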
Although there is no API or bulk download option as of yet, it is possible to set up an RSS feed link based on search parameters. This RSS feed link can be processed to access links to each decision page. These pages can then be accessed and converted into text using a few Python functions (I have some scripts to do this I will share soon).
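A minimal standard-library sketch of pulling the decision links out of such a feed is below; it assumes a standard RSS 2.0 structure with `<item><link>` elements, which may need adjusting for the actual feed:

```python
import xml.etree.ElementTree as ET

def item_links(rss_xml):
    # Return the <link> text of every <item> in an RSS 2.0 feed string.
    root = ET.fromstring(rss_xml)
    return [item.findtext("link") for item in root.iter("item")]
```

The feed string itself can be fetched with `urllib.request.urlopen(feed_url).read()`, and each returned link then fetched and converted to text.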
Again a human accessible resource. However, the decisions are accessible by year in fairly easy to parse tables of data (I again have some scripts to do this that I will share with you soon).
Your Document / Case Management System.
Many law firms use some kind of document and/or case management system. If available online, there may be an API to access documents and data stored in these systems. Tools like Textract (see below) can be used to extract text from these documents. If available as some form of SQL database, you can often access the data using ODBC drivers.
Once you have some data the hard work begins. Ideally what you want is a nice text string per document or article. However, none of the data sources listed above enable you to access this easily. Hence, you need to start building some wrappers in Python to access and parse the data and return an output that can be easily processed by machine learning libraries. Here are some tools for doing this, and then to build your deep learning networks. For more details just Google the name.
– brilliant for many natural language processing functions such as stemming, tokenisation, part of speech tagging and many more.
– an advanced set of NLP functions.
– another brilliant library for processing big document libraries – particularly good for lazy functions that do not store all the data in memory.
– for building your neural networks.
– a wrapper for Tensorflow or Theano that allows rapid prototyping.
– provides implementations for most of the major machine learning techniques, such as Bayesian inference, clustering, regression and more.
– great for easy parsing of semi-structured data such as websites (HTML) or patent documents (XML).
– a very simple wrapper over a number of different Linux libraries to extract text from a large variety of files.
– think of this as a command line Excel, great for manipulating large lists of data.
– numerical analysis in Python, used, amongst other things, for multidimensional arrays.
– great for prototyping and research, the engineers squared paper notebook of the 21st century, plus they can be easily shared on GitHub.
– many modern toolkits require a bundle of libraries; it can be easier to set up a Docker image (a form of virtualised container).
– for building web servers and APIs.
Now go build, share on GitHub and let me know what you come up with.
Often you are faced with the question: should I patent my invention? A quick, back-of-the-envelope calculation can help with this decision.
CAVEAT: these are all roughly sketched out figures. This post is written in my spare time between cooking, cleaning, childcare and work. It does not constitute legal or financial advice. The figures are rough generalisations that allow you to work out whether it’s worth investigating further but may vary considerably for each individual case. Always get professional help with the details.
Obtaining a patent is not a cheap process. As of 2017, my very rough rule-of-thumb is to budget £50k per country over the 20 year lifetime (excluding taxes – ~$75k).
This is based on, for a typical case:
~£10k for initial work (e.g. searching), drafting an application and the costs of first (i.e. priority) filing.
~£10k for developing strategy after an initial patent office search (e.g. UKIPO or in the International phase) and for filing an International patent application within a year of the first filing.
~£5k per country to enter the national or regional phase after the end of the International phase for the International application. This is about right for a simple US and European entry; countries requiring translations may be up to £10k per country.
~£15k per country for prosecution and grant. This is likely the most variable figure, with variance typically being on the upside (i.e. more expensive) if you are unlucky with prior art or a particularly obstinate examiner.
~£10k per country for renewal fees over 20 years. Again, this varies per country.
In terms of the distribution with time, this breaks down to:
~£10k / year for first 3-4 years.
~£0.5-1k / year for next 16-17 years.
Hence, most of the costs are front-loaded to the first 3-4 years: you need ~£30k over this period to properly take part in the patenting process.
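The per-country rule of thumb can be sanity-checked in a few lines (all numbers are the rough 2017 estimates from the breakdown above):

```python
# Rough per-country budget (GBP, excluding taxes), from the breakdown above.
costs = {
    "drafting and first filing": 10_000,
    "strategy and International filing": 10_000,
    "national/regional phase entry": 5_000,
    "prosecution and grant": 15_000,
    "renewals over 20 years": 10_000,
}
total = sum(costs.values())
assert total == 50_000   # matches the £50k-per-country rule of thumb
```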
Return on Investment
For a decent return, you want the patent’s value over its 20 year life to be at least 3x its cost (excluding inflation). Say this is £150k.
This works out as a real return of at least 5-6% per year over the lifetime of the patent.
The value of a patent is unlikely to be gained evenly over its lifetime. Statistics show that much of a patent’s value is realised towards the end of its life, e.g. 10-15 or 15-20 years post filing.
Anything less than this and your business would be better off just investing in the stock market.
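Checking the 5-6% figure: a 3x multiple over 20 years corresponds to an annualised return of 3^(1/20) - 1, roughly 5.6% per year:

```python
# The 3x-over-20-years target expressed as an annualised real return.
cost, multiple, years = 50_000, 3, 20
target_value = cost * multiple              # the £150k figure above
annual_return = multiple ** (1 / years) - 1
assert target_value == 150_000
assert 0.05 < annual_return < 0.06          # ~5.6% real return per year
```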
How to Determine Value
This is normally the hard part. However, there are a few short-cuts.
Most of the claims, understandably, were made by large companies. As such, the £500k / year average claim may include a number of different patented products. However, small businesses often only have one or two patents or products. Hence, the small business claim may be closer to a lower bound on yearly value per patent.
Of course, you can perform your own calculations. For a very rough upper bound on the benefit, simply add up the profits derived from each of your main products or services and multiply by 0.1. (This does assume you are making a profit.) For a lower bound, multiply this 10% saving by 0.5.
Now remember this is a yearly saving. The total saving will thus depend on the lifetime of your product.
Assuming a rough product lifetime of 10 years, and a lower bound on the tax claim of £15k / year, this means that an average UK patent provides a saving of £150k over its lifetime. This just happens to be the number we came up with above for a decent return.
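This arithmetic can be sketched as follows; the function name is my own and the 10% rate is the rough upper bound from above, not an official Patent Box calculation (halve the rate for the lower bound):

```python
def lifetime_saving(annual_profit, rate=0.10, years=10):
    # Rough bound: a fraction of yearly product profit, over product life.
    return annual_profit * rate * years

# £150k/year profit over a 10-year product life hits the £150k target.
assert round(lifetime_saving(150_000)) == 150_000
assert round(lifetime_saving(150_000, rate=0.05)) == 75_000  # lower bound
```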
From these rough calculations we see a couple of things:
To justify a UK patent’s value based on a Patent Box claim, you need to be making around £150k / year in profit for at least one product or service.
If this applies, a UK patent covering the product or service will pay for its costs and make a decent return.
Patenting can thus be economically justified in this case.
Another way a patent can provide a return is through licensing. (Someone pays you for your permission to use the technology of the patent.)
Looking at our rough figures, you would need licence fees of ~£150k over 20 years, or approximately £7.5k / year.
Hence, if you feel that you can get one or more companies to pay £10k / year for the technology, patenting is worthwhile.
In this case, an average worldwide FRAND licence rate for major markets for mobile equipment and infrastructure for a portfolio of 2G, 3G and 4G patents was deemed to be 0.05%. Now Unwired Planet have around 2,500 patents. Some googling indicates total infrastructure and handset sales to be around $150 billion (split 1:2). If everyone licensed at this rate, the annual licensing revenue would be $75 million; divided by 2,500 patents, this gives an average licensing income of $30,000 (~£24,000) per patent per year.
Of course this is an upper-bound estimate: you won’t get a licensing fee from each sale, and the income may be time-limited (e.g. the value of 2G technology not used in current handsets is falling). However, it does show that a licensing revenue of several hundred thousand pounds per patent over its lifetime is not completely pie-in-the-sky, and may be relevant if you are lucky and patent a subsequent core technology.
In this case, IBM covers its patenting costs, but there is only a small real return from licensing alone. Hence, for IBM licensing is a useful aspect to cover costs, but must form only a portion of the value of a given patent.
Valuing individual patents is tricky. This article here is interesting – http://www.hayes-soloway.com/patent-valuation . It suggests a lower bound on patent transactions of around $90,000 (£70k), a median of around $200k (£150k) and an average of around $400k (£300k). Each of Kodak’s patents was valued at around $500k when sold in 2012.
These valuations are consistent with the numbers discussed so far. The lower bound on the value of patents when sold is a little above cost (but not below cost). The median amount provides the magical £150k figure discussed above, i.e. a real return of around 5-6%. If you are lucky and/or skilled (delete depending on your political persuasion), a value of around £300k provides a decent market-beating return of around 10%. The higher figures also compensate for the fact that average patent grant rates are around 50% – hence, there is a certain amount of survivor bias, and each of these sales would need to factor in the sunk costs of their unsuccessful brethren.
Another caveat here – patents tend to be very illiquid and most patent transactions involve large companies with large patent portfolios. Hence, while these figures may be applicable to similar sized entities, they may not apply as much to small and medium sized businesses. The distribution of values is also likely to be a power law distribution, with a few patents having astronomical valuations, and a long tail of patents with low valuations.
Here, we see that if you are a large company, it is worth patenting for the value you realise if you sell your patents.
Access to Market
We now move into the more hand-wavy aspects of patent valuation.
Underlying all this discussion is the fact that patents allow you to sue those who are providing products without your permission that fall within your patent claims . Licensing is one way to realise this value by providing permission for cash.
Another way patents can provide value is by allowing you access to a market at a low cost through cross-licensing. This is where another entity has at least one patent that covers your product or service. They could thus prevent you from accessing the market by either refusing permission or demanding high licensing fees. However, you have a patent that covers their product or service. Hence, each side has a potential weapon they can deploy and the sensible outcome is to come to an agreement to provide permission to use each other’s technology.
The problem with cross-licensing is that these deals are typically performed in confidence. There is thus little data to quantify the transaction. Standard public licensing rates provide some indication of the value. Hence, the licensing figures from above may be used here.
Average licensing rates can vary from 0.01% to 30% depending on the technology, product and market. Most are probably below 5-10%, with higher rates for low volume, high profit products (e.g. software services) and lower rates for commodity items (e.g. phone handsets).
One (very rough) way you can value access to a market is thus to:
determine the size of the potential market for your product;
determine an average revenue for you for this market over a 20-year period; and
multiply this by 10%.
Working backwards from our figures above, this gives us an average revenue of £150k / 0.1 = £1.5 million over 20 years (which may be £300k / year for a 5 year lifespan, £150k / year for a 10 year lifespan, and £75k / year for a 20 year lifespan etc.).
If you are not selling your product yet, you can look at figures for the size of the potential market by dividing these figures by an estimated, percentage market share. For example, if you believe you can gain 10% of a market, the market needs to be worth £15 million over the 20 years (e.g. £3 million / year for a 5 year lifespan, £1.5 million / year for a 10 year lifespan, and £750k / year for a 20 year lifespan etc.).
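Working the steps above as code (the function and parameter names are illustrative), inverted to find the revenue and market size you need for the £150k target:

```python
def required_figures(target_value=150_000, rate=0.10, market_share=0.10):
    # Invert the three steps above: revenue needed over 20 years for a
    # target patent value at a ~10% rate, and the market size implied
    # by your estimated share of it.
    revenue_20y = target_value / rate
    market_20y = revenue_20y / market_share
    return revenue_20y, market_20y

revenue, market = required_figures()
assert round(revenue) == 1_500_000    # £1.5m revenue over 20 years
assert round(market) == 15_000_000    # £15m market at a 10% share
```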
The other flip-side to this is to look at the cost of litigation. If cross-licensing can avoid the costs of litigation then this also provides value. If we say an average court case costs between £1-3 million, then the value of your patent depends on the likelihood of litigation. In this case, if a chance of litigation is above 15%, patenting is cost effective. Here, you can also ask for a quote for litigation insurance in your market and use that to determine the value of any patent on a competitor’s product or service.
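The litigation argument is a simple expected-value calculation:

```python
def expected_saving(p_litigation, litigation_cost):
    # Value of avoiding litigation = probability x cost avoided.
    return p_litigation * litigation_cost

# A 15% litigation risk at a ~£1m case cost hits the £150k return target.
assert round(expected_saving(0.15, 1_000_000)) == 150_000
```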
These simple calculations mean that, for a product with a 5 year lifespan and a potential market of only £100k per year, patenting may not be cost effective if looking at access to market.
One reason why small businesses obtain patents is to gain investment.
Likewise, one reason venture capitalists invest in small businesses with patents is because they perform similar calculations to those above (although with prettier and more accurate spreadsheets) and realise they can obtain an above market return (or a market return for a given risk – 90% of small businesses fail folks).
Now venture capitalists have requests for funding from many small startups (understatement). Most of these will be refused. One way you can cut through the noise as a company is to show you have at least a strong chance of obtaining a patent. Hence, a patent application may provide an immediate effect by enabling leverage – i.e. the patenting costs may facilitate a much larger amount of funding.
Of course, there are many different factors that influence funding, and most of these may be more important than a patent portfolio (such as founders / founder experience, market proposition, existing capital raised, and existing profit). Let’s say, conservatively, that having a patent increases your chance of funding from 0% to 10%. In this case, funding of £200k plus would justify an initial £20k patent spend (e.g. initial filing and International application).
Another way of looking at this may be to compare patenting costs and engineer costs. Say an engineer costs £50k / year, where on-costs are £75k (i.e. actual cost to company is 1.5x salary). The question to then ask is: what would increase your chances of funding more: 4 months of that engineer’s time or having a patent application?
If the answer is that, at your current stage of development, 4 months engineer time would greatly enhance your offering and increase your chance of funding by 50%, then limited funds may be better spent on that rather than patenting.
If you are at a stage where development has been kept confidential, and 4 months of engineer time would make only small incremental improvements to attract funding, then patenting becomes a better choice.
You can also run similar arguments with consultant costs and other areas such as marketing.
Patented products make for good marketing.
This may only be a small proportion of a patent’s value but should not be overlooked.
For example, an average marketing budget may be 10% of sales. If a patent has the same effect as a 1% increase in sales, then a patent could start to make a decent return if revenues are £15 million or more over 10 years (i.e. £1.5 million / year).
What Have We Learnt
Often it is difficult to provide an answer to the question: should I get a patent?
Patent attorneys typically err on the side of saying “yes”, as that is what they do day-in-day-out. It can be like asking a decorator: should I paint my house? (I decided not to say it may be like asking a car salesman: should I buy a car? :))
In certain businesses the answer is often “yes”, but the reason is “because that’s what we do”. Similarly, in other businesses (I’m looking at you, software), the answer is often “no”, with the reason being “because we don’t do that here”.
In the discussion above, I have tried to explain some of the areas and conditions where there may be an economic justification for obtaining a patent.
In particular, assuming a product with a 10 year lifespan, patenting may be cost effective:
if you are paying UK corporation tax and your product will earn £150k / year in profit;
if your market is worth more than £1.5 million per year and you can capture at least 10% of this;
if the patented technology is of interest to one or more acquirers;
if the chance of litigation is above 15% in your market;
if it increases your chance of funding from 0 to 10%; or
if it increases sales by 1% of products with revenues of more than £1.5 million / year.
Some of these value factors may be gained independently. For example, a patent may allow you to reduce UK corporation tax, increase sales, provide access to a market and reduce litigation risk. The more the factors apply cumulatively, the lower the figures above need to be.
By sketching these numbers out on the back of an envelope, say over 30 minutes, you can get a feel for how relevant patenting is for your company.
If you look at these figures and gasp, then patenting may not be right for you. Although patenting is open to anyone, practically you need to be a business with actual or projected revenues of hundreds of thousands of pounds for the system to work properly.
If you are close to break-even thresholds, there need to be other good reasons to patent, or prospects for future growth need to be good, otherwise patenting may not be worthwhile economically.
If you are way over the thresholds, and you do not have a patenting strategy, then this provides a strong basis for an argument to your Board of Directors to get one. It may justify spending a few thousand pounds on professional advice to fill in the details of feasibility.
If you have an existing patenting strategy, running these calculations once a year or so may enable you to make decisions on maintaining patents and patent applications, and provide justification to support existing budgets (or even to ask for more funds).
What is uncertain, what is unknown and what can be modelled?
We can know, for a given classification, past grant rates. This gives a rough a priori probability.
We can also know abandonment and withdrawal rates.
We cannot know how an examiner is going to approach the case.
One of the biggest unknowns is the prior art that is cited. Prefiling searches enable a general view of the level and type of art that may be cited. However, in my experience, prefiling search art is rarely cited in subsequent search and examination reports; everyone has a different set of preferred art to cite.
Generally there will be a link between claim length and novelty / inventive step objections: shorter claims are more likely to receive objections on these grounds.
We also cannot always know how valuable a patent will be. This depends on commercial context that is constantly changing.
We cannot know the outcome of litigation.
We can, though, update our probabilities based on events. For example, comments from an examiner, opposition board, Board of Appeal, or other party to proceedings can change our knowledge. A positive opinion can increase our estimate of the probability of success and a negative opinion can decrease the same.
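This updating can be made concrete with Bayes' rule. All the probabilities below are invented for illustration: a prior grant rate for the classification, and assumed likelihoods of seeing a positive written opinion for cases that eventually grant versus cases that are eventually refused:

```python
# Illustrative Bayesian update of a grant probability (numbers made up).
prior = 0.5                  # rough a priori grant rate
p_pos_given_grant = 0.8      # positive opinions common for granted cases
p_pos_given_refusal = 0.3    # but also seen for eventually refused cases

# P(grant | positive opinion) via Bayes' rule.
evidence = p_pos_given_grant * prior + p_pos_given_refusal * (1 - prior)
posterior = p_pos_given_grant * prior / evidence

assert posterior > prior     # a positive opinion raises the grant estimate
```

A negative opinion would be handled the same way with the complementary likelihoods, pulling the estimate down.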
Professionals that appear, externally, to be able to control uncertainty will attract more business. No one likes uncertainty, especially in business. However, even a rudimentary history or background knowledge would indicate that, although it is possible to be lucky, uncertainty can never be banished or controlled. Any offer of certainty is thus false.
Obtaining a patent is an uncertain process. It is difficult, if not impossible, to predict the prior art that may be located or the examiner you are assigned. Grant rates often vary from 5 to 50%, and it is rare for patent claims to be allowed without limitations during prosecution. However, there are techniques to manage this uncertainty. Some of these are discussed below.
The International Patent Application
For many businesses, the US and Europe are core markets. To obtain patent protection in these markets, many patent attorneys advise filing an International patent application (also called a Patent Cooperation Treaty – PCT – application). An International patent application only needs to be converted into specific national or regional applications 30 months from an initial filing or priority date. This provides time for a product or service to develop in parallel with a pending application and leaves open the possibility of obtaining protection in states such as Japan, China, Korea and Australia.
An International patent application is searched and a written opinion is drawn up by an examiner. The written opinion resembles an examination report. For applicants from Europe, the European Patent Office prepare these documents. The European Patent Office is seen internationally as one of the tougher patent offices; I often see cases with favourable opinions from examiners in the US, Korea and China hit objections when the case is examined by the European Patent Office.
There are also costs to consider. Patenting is not cheap. Depending on length and scope, a patent application will likely cost between £5-10k (all figures are excluding taxes and at 2017 rates) to be drafted. Filing costs for an International patent application are £4-5k (most of this being official fees). Filing costs for national or regional applications at the end of the International phase will cost between £5-10k (a chunk of this being official fees and/or translation costs). Then it may cost between £5-15k to prosecute an application and pay grant fees. A good rule of thumb is £30k per country over a 3-7 year period.
Faced with this, a strategy I often suggest is set out below:
Initial UK Patent Filing
First, it is worth noting that I would not attempt the patenting process unless I could budget around £10k per year over the first 3 years.
Second, it is good to take advantage of the ease and low cost of the UK Intellectual Property Office for a first filing. Official fees are only £230 for filing, search and examination (a bargain really – European Patent Office fees are 10x this). Unlike in the US, there is no need for assignments and declarations to be filed. You can also register this first filing with the priority document access service, which makes supply of a certified copy of the priority document a doddle.
UK Combined Search and Examination Report
The UK Intellectual Property Office provides a combined search and examination report within 4-6 months. You can ask to accelerate this, and if you have a good reason the request is often granted, shortening the time to 4-8 weeks. While a UK search is often not quite as thorough as a European search, it is quick and cheap (e.g. as compared with Europe or the US). It is thus a useful way of identifying any “low-hanging” prior art that may be problematic.
For example, if “knock-out” prior art is located you can choose to withdraw the application within the first 12 months before publication. This helps to cap your loss at the £10k or so of initial costs; it prevents you spending another £20k only to get a refusal on subsequent national or regional applications (or even to have a patent that would easily get knocked out in court). Withdrawing before publication means the content of the patent application will not become public and count against future applications you may make. This is useful if the patent application relates to a product in development; you may come up with an improvement after a year that could support a further patent application that can reuse much of the initial material.
Even if “knock out” prior art is not found, the UK combined search and examination report can help you strengthen your patent claims. For example, prior art may be cited that anticipates your main claims but an amendment is possible that renders the claims novel and inventive over the cited documents. It is definitely better to work this out over a leisurely 4-month period (e.g. iterating with the inventors, who may still remember the case) than to rush it just before priority-claiming applications need to be filed at the 12-month point. While you can never be sure that subsequent searches by other patent offices will not find other, more relevant prior art, an amendment at this stage is often going to take your application in the right direction and make favourable opinions more likely. Engineers may like to see this as a first “stress test” for the patent application.
The UK combined search and examination report may also flag other issues such as clarity or support that are best dealt with early on. For example, a term you and your inventors thought was well-known may be considered by the UK examiner to be unclear; the specification may then be amended to provide a more in-depth definition from text-books or Wikipedia.
If you do need to amend the claims at this 6-12 month stage, another advantage is that you can make sure that you maximise the scope of positions covered by your patent claims. For example, your first filing may have 20 patent claims. If a number of these claims are deemed obvious over the general knowledge, or certain claims need to be added to the main independent claims, then claims may be deleted and other improved fall-back positions added.
Typically, it is good to set aside some inventor time, and a budget of £1-2k, to review the UK combined search and examination report and cited art. I often see those who choose not to make this investment at this stage incur avoidable higher costs later on in prosecution.
If you have a set of patent claims that are novel and inventive over the prior art cited by the UK Intellectual Property Office, the next stage is to file an International patent application within 12 months of the initial filing date.
If you are a UK company, the European Patent Office will perform another search and issue a written opinion setting out any objections. They are pretty good at issuing this within 4-6 months of filing the International patent application. The European search and written opinion provides the second “stress test” of the claims.
Often the European examiner will locate new prior art. One way to reduce this risk is to amend the background of the patent specification before filing the International patent application to make reference to the prior art located in the UK search. In 25-50% of cases, if the UK-cited prior art is relevant and reasonable, the European examiner will (understandably) take the easier option of citing it again. At the very least, referencing the UK-cited art can help you “seed” the European search towards areas you have had time to analyse.
If the European examiner does locate new prior art, then again it is recommended to repeat the same analysis that was performed for the UK combined search and examination report. Often you still have over a year before choices regarding national or regional applications need to be made. A relatively leisurely 4-8 week review cycle, incorporating comments from inventors or other engineers, at an attorney cost of £1-2k, can again reap cost savings later on in prosecution.
For example, if the European cited art is “knock out”, costs can be capped at around £15k (e.g. drafting, UK filing, PCT filing and review costs). It may not be possible to have the European search results in time to stop publication (which is why the UK search is useful). This may seem like a lot, but it prevents additional spend of £15-20k per country (e.g. £30k < spend < £80k) only for you to receive multiple refusals 2-3 years later.
If amendment is possible, then this can be determined following a review of the prior art, and a claim set prepared for national and regional applications. At this stage you may have more confidence in the claims, as you know they have been through both UK and European examination. This may make it easier to justify patent applications in multiple countries to a company board or budget committee.
This process represents an additional spend of up to £4k in attorney time. However, this easily pays for itself:
It can avoid spending up to £40k+ on patent applications worldwide that are unlikely to be granted.
It can avoid long and protracted European Patent prosecution.
It often simply represents front-loading of costs that would be incurred in normal prosecution.
It allows leisurely review while the case may still be fresh in inventors’ minds (with touch points at 6 and 18 months following the filing process). This can also promote inventor engagement with the patent process.
The possibility of using Patent Prosecution Highways could avoid long and protracted prosecution in multiple countries.
If you do obtain a patent it is likely to be stronger and hence of more value.
Obtaining a strong, enforceable patent that protects your software invention is often difficult. Here I will touch on some approaches to stack the odds in your favour.
Why is it difficult to patent software?
There are a number of hurdles that must be overcome to obtain a patent for a software invention. These include:
Being new: at least one aspect of your invention must differ from other solutions available to the public. This includes solutions described in other patent applications, blog posts, manuals, online documentation and white papers.
Being inventive: not only must your invention have a differing feature, that differing feature also needs to be non-obvious. If the differing feature is common knowledge, e.g. a common feature described in textbooks or on Wikipedia, and it is straightforward to use it in the context of the other known features, then your invention will be deemed obvious. Likewise, if the differing feature is described in another document, and it would be obvious to combine that other document with the pre-existing solution providing the other features, then your invention will be said to lack an inventive step.
Being patentable: your software invention must meet requirements set by law for patentable subject matter. Each jurisdiction has slightly different rules. Normally, statute sets some very broad categories of excluded subject matter. Individual cases and hearings then provide a body of case law that says which areas are allowable and which areas are not. For example, in Europe you need to show that the differing feature provides a ‘technical’ effect, which is often an engineering improvement.
Patenting software also taxes patent attorneys and patent examiners. With mechanical products, you can often see and feel the invention. Similarly, pharmaceutical inventions may be defined through sets of well-defined chemical formulae. Software is harder to visualise – there may be multiple technology layers in an implementation stack and many non-essential interoperating parts. This can often lead to poor patent specifications and misunderstandings.
Also if a patent claim is too specific then it will be easy for a software developer to work around. Most inventions will need to transcend a particular programming language or technology to cover ports to different platforms and to future-proof a patent’s value. However, if a patent claim is too broad, it is often deemed too abstract to be patentable and may also run afoul of clarity provisions.
What do these difficulties mean in practice?
In practice these difficulties often lead to:
Low grant rates; and
High prosecution spend.
These factors often interact to form a vicious cycle of mutual distrust: too many poor-quality patent specifications are filed, leading to cynicism from patent examiners and the public, which leads to knee-jerk rejections and lobbying, which in turn undermines confidence in the system from businesses.
What can we do?
The first thing software companies can do is to find the right patent attorney or attorney firm. There are a few attorneys who deal with software day-in-day-out. These need to be sought out. Look for an attorney with experience of working for a large software company, e.g. Microsoft, IBM, Hewlett-Packard, Oracle, SAS, Amazon, Google. The European Patent Office allows you to search by representative to see example applicants.
The second thing software companies can do is to set high standards for their patent specifications. The recent change in practice in the US will hopefully catalyse this. Technical or engineering features should be defined in detail; any high-level marketing terms or IT jargon should be jettisoned. A strong technical problem should be alluded to, and there should be a good set of tiered fall-back positions, each with its own defined engineering advantages.
The third thing software companies can do is to keep on top of the case law in different jurisdictions. Your patent attorney may offer to help you with this. At a simple level, a one page table can show what kind of inventions have been allowed and what kind of inventions have been refused. For example, UK hearing officers often find that database management improvements are not allowed, whereas European examiners find these are technical.
Fast Track is an organisation that provides reports on unlisted private UK companies.
Useful lists are the Top Track 250 and the Tech Track 100 (the reports are in PDF format; the end of each report has the data in tabular form). These reports are also published in The Sunday Times every year.
From these lists you can collate a large list of companies that may or may not require intellectual property services. I prefer a long CSV list with no fancy formatting.
Matching by Technology
Most companies specialise in particular areas of technology. Likewise, most patent attorneys have specific experience in certain technologies. A good technology match saves time and money.
If you have lots of time (or a work-experience student, or a Mechanical Turk) you can take each company from your list, one by one, and perform a search on EspaceNet. You can then look through the results and make a note of the classifications of the patent applications returned from the search. Alternatively, the process can be automated with a script that will:
Iterate through a large list of companies / applicants;
Clean the company / applicant name to ensure relevant search results;
Process the search results to extract the classifications;
Process the search results to determine the patent agent of record;
Process the classifications to build up a technology profile for each company / applicant; and
Process the classifications to rank companies / applicants within a particular technology area.
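The steps above can be sketched in Python. To be clear, this is an illustrative sketch, not production code: the helper names (`clean_applicant_name`, `technology_profile`, `rank_companies`) are my own invention, and real search results would come from an EspaceNet/EPO search rather than the hard-coded dictionaries assumed here.

```python
import re
from collections import Counter

# Common legal suffixes that pollute applicant-name searches.
COMPANY_SUFFIXES = re.compile(r"\b(ltd|limited|plc|llp|inc|gmbh)\b\.?", re.IGNORECASE)


def clean_applicant_name(name: str) -> str:
    """Strip legal suffixes and punctuation so searches match more records."""
    name = COMPANY_SUFFIXES.sub("", name)
    name = re.sub(r"[^\w\s]", "", name)
    return " ".join(name.split()).upper()


def technology_profile(results: list) -> Counter:
    """Count classifications across a company's search results,
    truncated to the subclass level (e.g. 'G06F')."""
    profile = Counter()
    for record in results:
        for cls in record.get("classifications", []):
            profile[cls[:4]] += 1
    return profile


def rank_companies(profiles: dict, tech_area: str) -> list:
    """Rank companies by how many of their publications fall in a
    given technology area (a Counter returns 0 for unseen areas)."""
    return sorted(profiles, key=lambda c: profiles[c][tech_area], reverse=True)
```

For example, `clean_applicant_name("Acme Widgets Ltd.")` gives `"ACME WIDGETS"`, and feeding each company's (assumed, pre-fetched) search results through `technology_profile` builds the per-company profiles that `rank_companies` then sorts for a chosen classification subclass.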
For example, say you are a patent attorney with 20 years’ worth of experience in organic macromolecular compounds or centrifugal apparatus. Who would you look at helping? How about:
Or say you wanted to know what technology areas Company X worked in? How about:
(* Quiz: any idea who this may be? Guesses in the comments…)
Or say you work for Company X and you wonder which patent attorneys work for your competitors or in a particular technology area. How about:
By improving matching, e.g. between companies and patent attorneys, we can open up legal services. As the potential of technology grows, legal service provision need not be limited to a small pool of ad-hoc connections. Companies can get a better price by looking outside of expensive traditional patent centres. Work product can be improved as those with the experience and passion for a particular area of technology can be matched with companies that feel the same.