Connecting the Pipes (in Windows)

One thing I have been trying to do recently is to connect together a variety of information sources. This has inevitably involved Python.

Estonian Snake Pipe by Diego Delso, Wikimedia Commons, License CC-BY-SA 3.0

Due to the Windows-centric nature of business software, I have also needed to set up Python on a Windows machine. Although setting up Python is easy on a Linux machine, it is a little more involved on Windows (understatement). Here is how I did it.

  • First, download and install one of the Python Windows installers from here. As I am using several older modules I like to work with version 2.7 (the latest release is 2.7.8).
  • Second, if connecting to a Microsoft SQL database, install the Python ODBC module. I downloaded the 32-bit version for Python 2.7 from here.
  • Third, I want to install IPython, as I find a notebook is the best way to experiment. This is a little long-winded. Download the ez_setup.py script as described and found here; I downloaded it into my Python directory. Next, run the script from that directory (e.g. python ez_setup.py). Then add the Python scripts directory to your environment variables as per here. Finally, install IPython using the command: easy_install ipython[all].
  • Fourth, download a Windows installer for Numpy and Pandas from here. I downloaded the 32-bit versions for Python 2.7. Run the installers.

Doing this I can now run an IPython notebook (via the command: ipython notebook – this will open a window in your default browser). I found Pandas gave me an error on the initial import as dateutil was missing – this was fixed by running the command: easy_install python-dateutil.
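To check the ODBC module is working, here is a minimal pyodbc sketch for querying a Microsoft SQL database – the driver string, server, database, credentials and table name are all placeholders that will vary with your setup:

import pyodbc

# Connection string values are placeholders - adjust for your own server
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;UID=myuser;PWD=mypassword"
)
cursor = conn.cursor()

# Run a simple test query against a placeholder table
cursor.execute("SELECT TOP 5 * FROM cases")
for row in cursor.fetchall():
    print row

conn.close()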

Now the aim is to connect the European Patent Office’s databases of patent and legal information to internal SQL databases, and possibly to external web services such as the DueDil API.

Amateur Hour: A Patent Workflow Web App

As you may remember, a while back I posted some ideas for a patent workflow tool. It is taking a while, what with actual work and family commitments. However, I finally have a rough-and-ready prototype covering at least the initial review stage.

The application* is built in Flask. It generates an XML document containing the entered data. Fields are rendered based on the XML document (making use of XSLT). To avoid file system headaches, XML data is stored as string data in an SQLite3 database. The data is indexed using a hash masquerading as a key identifier. The key identifier can then be passed as a URL parameter to retrieve a particular XML document. Although nowhere near a fully working “thing”, the code is here if you want a look: https://github.com/benhoyle/attass .
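As a rough sketch of that storage pattern (the table and column names here are invented for illustration – see the repository for the actual code):

import sqlite3
import hashlib
from flask import Flask, Response, request

app = Flask(__name__)

def store_xml(xml_string):
    # Index the XML using a hash masquerading as a key identifier
    key = hashlib.sha1(xml_string).hexdigest()
    conn = sqlite3.connect("cases.db")
    conn.execute(
        "INSERT INTO documents (key_id, xml_data) VALUES (?, ?)",
        (key, xml_string))
    conn.commit()
    conn.close()
    return key

@app.route("/case")
def get_case():
    # The key identifier is passed as a URL parameter, e.g. /case?key=abc123
    key = request.args.get("key")
    conn = sqlite3.connect("cases.db")
    row = conn.execute(
        "SELECT xml_data FROM documents WHERE key_id = ?", (key,)).fetchone()
    conn.close()
    if row is None:
        return "Not found", 404
    # Return the stored XML string for rendering (e.g. via XSLT)
    return Response(row[0], mimetype="text/xml")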

Initial Review: Process Overview in Pictures

First we enter our case reference:

Enter Case Reference

Then we enter the communication details and the objections raised:

Communication Overview

Then we briefly enter salient details of each objection raised in the communication. This can be used for reporting and as a reference for a later, more detailed review:

Enter First Objection

There is an option to enter further objections under the same category (see the lower checkbox). This adds an additional XML element and populates it with data from a template. Once “Submit” is pressed, fields for the next objection will load:

Enter Second Objection

The result is then a populated XML document that forms the starting point for a response:

XML Document

Where from here?

I have similar review workflows in progress for novelty and inventive step. They follow a similar pattern: an XML template defines data entry and walks the user through a number of review steps. I have JavaScript functions that break down a claim into features – I can use these as a front-end for my novelty review. Inventive step has a different process for each of the UK, Europe and the US, each incorporating current practice from case law.

The aim is that objections entered in the initial review will be addressed through a detailed review and/or instructions. As we initially enter the objections, we do not have to worry about missing objections or approaching things in a less efficient order. The workflow also allows a response to be split into a number of modular processes. These are then ripe for outsourcing, e.g. to paralegal staff or trainees, allowing an attorney to concentrate their time on the “meat” of the objections and thus saving money for clients. The workflow also provides mental scaffolding that is perfect for trainees and/or sleep-deprived attorneys with young children/dogs.

I use a combination of Evernote, Remember the Milk and Trello to jot down ideas, plan and set out to-do lists. Currently pending are:

  • Map XML to more user-friendly form fields;
  • Sort the loading of existing data;
  • Sort the CSS for that tiny textarea;
  • Add some JavaScript time-savers to the front-end (e.g. that allow a user to click “same communication” for multiple objections);
  • Build an XSL file that transforms the result of the initial review to text for storage or reporting;
  • Work out how to use cloud storage APIs to automatically save a copy of the above to a document management system;
  • Add detailed review workflow, including bespoke processes for novelty, inventive step and patentability/excluded subject matter;
  • Add easy “report bug/suggest feature” reporting for iterative updates; and
  • Host on a £30 Raspberry Pi in the office.

* Aside: ‘web-site’/’web-app’/’app’/’application’ are all kind of the same thing. A “web-site” was traditionally a static site that hosted HTML documents. No-one really does that any more though; nearly all sites are built dynamically, making them more like a traditional client-server application (especially with JavaScript on the front-end and Python or similar on the back-end).

Patents: Another Language?

I have been playing with natural language processing.

Now I have a body of patent data (see here), I can do some interesting things. For example, most people would say that patents have a pretty specific terminology. I say: show me the data.

Taking all patent publications in 2001 as an example, I programmed a little routine that:

  • Extracted the text data of each patent publication;
  • Split the text data into words;
  • Filtered out non-words (e.g. punctuation);
  • Applied a stemming algorithm (from 1979!); and
  • Recorded the frequency distribution of the results.
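For the curious, the routine looked roughly like the sketch below (simplified here: publications stands in for an iterable of extracted publication texts, and the stemmer comes from NLTK):

import re
from collections import Counter
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()  # Porter's stemming algorithm
counts = Counter()

for text in publications:  # placeholder: iterable of publication texts
    words = text.lower().split()
    # Filter out punctuation and other non-words
    words = [w for w in words if re.match(r"^[a-z]+$", w)]
    # Stem each word and record it in the frequency count
    counts.update(stemmer.stem(w) for w in words)

print sum(counts.values()), "occurrences of", len(counts), "unique word stems"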

In total I counted 277,493,492 occurrences of 287,455 unique word stems.

In common with most written material, the top 100 word stems accounted for 50% of the published material. Amazing when you think about it.

(Next time you get a drafting bill from a patent attorney, complain that half their work is shuffling 100 words around :)).

Here is the graph (click to zoom for full glory).

Cumulative Percentage of Top 100 Words (click for full-size)

Patent Stopwords

There is more.

“Stopwords” are common words that are often filtered out when analysing documents. The Natural Language Toolkit (NLTK) provides a set based on a general analysis of written English. These include words such as:

…’did’, ‘doing’, ‘a’, ‘an’, ‘the’, ‘and’, ‘but’, ‘if’, ‘or’, ‘because’, ‘as’, ‘until’, ‘while’, ‘of’, ‘at’, ‘by’, ‘for’…

In total there are 127 stopwords in this collection, representing high-frequency words that carry little lexical content.

I thought it would be interesting to compare these stopwords with the 127 most frequent stems in my frequency count.

Words that occur frequently in (US) patent publications but are not regular stopwords include:

said use first one form invent thi may second data claim wherein accord control signal present devic provid portion includ embodi compris method layer surfac system process exampl step ha shown connect posit prefer oper gener mean inform circuit imag unit time materi also end wa member line film side least select apparatu output element refer receiv describ direct base light section set show substrat contain display view valu part cell two plural group structur number optic electrod input result abov respect region memori plate case differ user

These words will be familiar to most patent professionals. The result of the stemming operation can be seen in certain words, e.g. “oper” – these should be read as “oper*”: “operates”, “operating”, “operate” etc. You can see that stemming is not perfect (“thi” may relate to “this”, which has been taken to be a plural form) but it is generally good enough. Without stemming there would be many different variations of the same word in the counts.

Now this list of “patent stopwords” is useful. Firstly, these words are probably not useful for searching in isolation (we may move on to n-grams later). Secondly, they can be used as a dictionary of sorts for claim drafting. Thirdly, they could be used to distinguish patent text from non-patent text (e.g. as the basis for a feature vector for this classification).
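As a trivial sketch of the first use (the word list here is truncated, and strictly the input should be stemmed first):

# A few of the patent stopword stems listed above
patent_stopwords = ["said", "use", "first", "one", "form", "devic", "wherein"]

query = "said first device wherein the sensor".lower().split()
# Drop patent stopwords to leave the terms worth searching on
search_terms = [w for w in query if w not in patent_stopwords]
print search_terms  # ['device', 'the', 'sensor'] - stemming would map 'device' to 'devic'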

The words that occur in patent specifications but also occur in “the real world” are also interesting:

the of a to and in is for be an as by with or are that from which at on can it have such each not when between other into through further more about than will so if then

These can be used as universal stopwords.

Further Fun

There are a number of paths for further analysis:

  • Extend across the whole US patent publication corpus from 2001 to 2014. I may need to optimise my code to do this!
  • Perform a similar analysis for different classification levels – e.g. do patents classified as G have a different vocabulary from those classified as H?
  • Look at infrequent or unique words – How many are there? Are they useful for searching clusters?

A Dip in the Patent Information Seas (An EPO Online Patent Services Tutorial)

Over Christmas I had a chance to experiment with the European Patent Office’s Online Patent Services. This is a web service / application programming interface (API) for accessing the large patent databases administered by the European Patent Office. It has enormous potential.

To get to grips with the system I set myself a simple task: taking a text file of patent publication numbers (my cases), generate a pie chart of the resulting classifications. In true Blue Peter-style, here is one I made earlier (it’s actually better in full SVG glory, but WordPress.com do not support the format):

Classifications for Cases (in %)

Here is how to do it:

Step 1 – Get Input

Obtain a text file of publication numbers. Most patent management systems (e.g. Inprotech) will allow you to export to Excel. I copied and pasted from an Excel column into a text file, which resulted in a list of publication numbers separated by new line (“\n”) elements.

Step 2 – Register

Register for a free EPO OPS account here: http://www.epo.org/searching/free/ops.html . About a day later the account was approved.

Step 3 – Add an App

Set up an “app” at the EPO Developer Portal. After registering you will receive an email with a link to do this. Generally the link is something like: https://developers.epo.org/user/[your no.]/apps. You will be asked to log in.

Set up the “app” as something like “myapp” or “testing” etc. You will then have access to a key and a secret for this “app”. Make a note of these. I copied and pasted them into a “config.ini” file of the form:

[Login Parameters]
C_KEY="[Copied key value]"
C_SECRET="[Copied secret value]"

Step 4 – Read the Docs

Read the documentation, especially ‘OPS version 3.1 documentation – version 1.2.10’. Also see this document for a description of the XML Schema (it may be easier than looking at the schema itself).

Step 5 – Authenticate

Now onto some code. First we need to use that key and secret to authenticate ourselves using OAuth.

I first of all tried urllib2 in Python but this was not rendering the POST payload correctly, so I fell back on urllib and httplib, which worked. I found it easier to store the host and authentication URL as variables in my “config.ini” file. Hence, this file now looked like:

[Login Parameters]
C_KEY="[Copied key value]"
C_SECRET="[Copied secret value]"

[URLs]
HOST=ops.epo.org
AUTH_URL=/3.1/auth/accesstoken

Although object-oriented purists will burn me at the stake, I created a little class wrapper to store the various parameters. This was initialised with the following code:

import ConfigParser
import urllib, urllib2
import httplib
import json
import base64
from xml.dom.minidom import Document, parseString
import logging
import time

class EPOops():

	def __init__(self, filename):
		#filename is the filename of the list of publication numbers

		#Load Settings
		parser = ConfigParser.SafeConfigParser()
		parser.read('config.ini')
		self.consumer_key = parser.get('Login Parameters', 'C_KEY')
		self.consumer_secret = parser.get('Login Parameters', 'C_SECRET')
		self.host = parser.get('URLs', 'HOST')
		self.auth_url = parser.get('URLs', 'AUTH_URL')

		#Set filename
		self.filename = filename

		#Initialise list for classification strings
		self.c_list = []

		#Initialise new dom document for classification XML
		self.save_doc = Document()

		root = self.save_doc.createElement('classifications')
		self.save_doc.appendChild(root)

The authentication method was then as follows:

def authorise(self):
		b64string = base64.b64encode(":".join([self.consumer_key, self.consumer_secret]))
		logging.error(self.consumer_key + self.consumer_secret + "\n" + b64string)
		#urllib2 method was not working - returning an error that grant_type was missing
		#request = urllib2.Request(AUTH_URL)
		#request.add_header("Authorization", "Basic %s" % b64string)
		#request.add_header("Content-Type", "application/x-www-form-urlencoded")
		#result = urllib2.urlopen(request, data="grant_type=client_credentials")
		logging.error(self.host + ":" + self.auth_url)

		#Use urllib method instead - this works
		params = urllib.urlencode({'grant_type' : 'client_credentials'})
		req = httplib.HTTPSConnection(self.host)
		req.putrequest("POST", self.auth_url)
		req.putheader("Host", self.host)
		req.putheader("User-Agent", "Python urllib")
		req.putheader("Authorization", "Basic %s" % b64string)
		req.putheader("Content-Type" ,"application/x-www-form-urlencoded;charset=UTF-8")
		req.putheader("Content-Length", "29")
		req.putheader("Accept-Encoding", "utf-8")

		req.endheaders()
		req.send(params)

		resp = req.getresponse()
		params = resp.read()
		logging.error(params)
		params_dict = json.loads(params)
		self.access_token = params_dict['access_token']

This results in an access token you can use to access the API for 20 minutes.

Step 6 – Get the Data

Once authentication is sorted, getting the data is pretty easy.

This time I used the urllib2 library. The URL was built as a concatenation of a static look-up string and the publication number as a variable.

The request uses an “Authorization” header with a “Bearer” value containing the access token. You also need to add some error handling for when your allotted 20 minutes runs out – I looked for an error message mentioning an invalid access token and re-performed the authentication if this was detected.

I was looking at “Biblio” data. This returned the classifications without the added overhead of the full-text and claims. The response is XML constructed according to the schema described in the Docs above.

The code for this is as follows:

def get_data(self, number):
		data_url = "/3.1/rest-services/published-data/publication/epodoc/"
		request_type = "/biblio"
		request = urllib2.Request("https://ops.epo.org" + data_url + number + request_type)
		request.add_header("Authorization", "Bearer %s" % self.access_token)
		try:
			resp = urllib2.urlopen(request)
		except urllib2.HTTPError, error:
			error_msg = error.read()
			if "invalid_access_token" in error_msg:
				#Access token has expired - re-authenticate and retry
				self.authorise()
				resp = urllib2.urlopen(request)
			else:
				#Some other HTTP error - re-raise it
				raise

		#parse returned XML in resp
		XML_data = resp.read()
		return XML_data

Step 7 – Parse the XML

We now need to play around with the returned XML. Python offers a couple of libraries to do this, including Minidom and ElementTree. ElementTree is preferred for memory-management reasons but I found the iter()/getiterator() methods to be a bit dodgy in the version I was using, so I fell back on Minidom.

As the “Biblio” data includes all publications (e.g. A1, A2, A3, B1 etc.), I selected the first publication in the data for my purposes (otherwise the classifications would be duplicated). To do this I selected the first “<exchange-document>” tag and its child tags.

As I was experimenting, I actually extracted the classification data as two separate types: text and XML. Text data for each classification, simply a string such as “G11B  27/    00            A I”, can be found in the “<classification-ipcr>” tag. However, when looking at different levels of classification this single string was a bit cumbersome. I thus also dumped an XML tag – “<patent-classification>” – containing a structured form of the classification, with child tags for “<section>”, “<class>”, “<subclass>”, “<main-group>” and “<subgroup>”.

My function saved the text data in a list and the extracted XML in a new XML document. This allowed me to save these structures to disk, mainly so I could pick up at a later date without continually hitting the EPO data servers.

The code is here:

def extract_classification(self, xml_str):
		#extract the classification elements from the returned XML
		dom = parseString(xml_str)
		#Select first publication for classification extraction
		first_pub = dom.getElementsByTagName('exchange-document')[0]
		self.c_list = self.c_list + [node.childNodes[1].childNodes[0].nodeValue for node in first_pub.getElementsByTagName('classification-ipcr')]

		for node in first_pub.getElementsByTagName('patent-classification'):
			self.save_doc.firstChild.appendChild(node)

Step 8 – Wrap It All Up

The above code needed a bit of wrapping to load the publication numbers from the text file and to save the text list and XML containing the classifications. This is straightforward and shown below:

def total_classifications(self):
		number_list = []

		#Get list of publication numbers from the input file
		with open(self.filename, "r") as f:
			for line in f:
				number_list.append(line.replace("/","")) #This gets rid of the slash in PCT publication numbers

		for number in number_list:
			XML_data = self.get_data(number.strip())
			#time.sleep(1) - might want this to be nice to EPO 🙂
			self.extract_classification(XML_data)

		#Save list to file
		with open("classification_list.txt", "wb") as f:
			f.write("\n".join(str(x) for x in self.c_list))

		#Save xmldoc to file
		with open("save_doc.xml", "wb") as f:
			self.save_doc.writexml(f)
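Pulling it all together, usage of the class is then simply:

epo = EPOops("cases.txt")  # the text file of publication numbers
epo.authorise()
epo.total_classifications()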

Step 9 – Counting

Once I had the XML data containing the classifications, I wrote a little script to count the various classifications at each level for charting. This involved parsing the XML and counting unique occurrences of strings representing different levels of classification. For example, level “section” has values such as “G” and “H”. The next level, “class”, was counted by looking at a string made up of “section” + “class”, e.g. “G11”. The code is here:

from xml.dom.minidom import parse
import logging, pickle, pygal
from pygal.style import CleanStyle

#create list of acceptable tags - tag_group - then do if child.tagName in tag_group

#initialise upper counting dict
upper_dict = {}

#initialise list of tags we are interested in
tags = ['section', 'class', 'subclass', 'main-group', 'subgroup']

with open("save_doc.xml", "r") as f:
	dom = parse(f)

#Get each patent-classification element
for node in dom.getElementsByTagName('patent-classification'):
	#Initialise classification string to nothing
	class_level_val = ""
	logging.error(node)
	#for each component of the classification
	for child in node.childNodes:
		logging.error(child)
		#Filter out "text nodes" with newlines
		if child.nodeType != 3 and len(child.childNodes) > 0: #nodeType 3 = text node

			#Check for required tagNames - only works if element has a tagName
			if child.tagName in tags:

				#if no dict for selected component
				if child.tagName not in upper_dict:
					#make one
					upper_dict[child.tagName] = {}
				logging.error(child.childNodes)

				#Get current component value as catenation of previous values
				class_level_val = class_level_val + child.childNodes[0].nodeValue

				#If value is in current component dict
				if class_level_val in upper_dict[child.tagName]:
					#Increment
					upper_dict[child.tagName][class_level_val] += 1
				else:
					#Create a new entry
					upper_dict[child.tagName][class_level_val] = 1

print upper_dict
#Need to save results
with open("results.pkl", "wb") as f:
	pickle.dump(upper_dict, f)

The last lines print the resulting dictionary and then save it in a file for later use. After looking at the results it was clear that past the “class” level the data was not that useful for a high-level pie chart: there were many counts of ‘1’ and only a few larger clusters.

Step 10 – Charting

I stumbled across Pygal a while ago. It is a simple little charting library that produces some nice-looking SVG charts. Another alternative is ‘matplotlib’.

The methods are straightforward. The code below puts a rim on the pie-chart with a breakdown of the class data.

#Draw pie chart
pie_chart = pygal.Pie(style=CleanStyle)
pie_chart.title = 'Classifications for Cases (in %)'

#Get names of different sections for pie-chart labels
sections = upper_dict['section']

#Get values from second level - class
classes = upper_dict['class']
class_values = classes.keys() #list of different class values

#Iterate over keys in our section results dictionary
for k in sections.keys():
    #Initialise list to store values for each section
    count_values = []
    #Check if the section key is a prefix of the class key; if so add value for section
    for class_value in class_values:
        if k in class_value:
            #Add to list for k
            #append_tuple = (class_value, classes[class_value]) - doesn't work
            count_values.append(classes[class_value])
            #count_values.append(append_tuple)
    pie_chart.add(k, count_values)

pie_chart.render_to_file('class_graph.svg')

That’s it. We now have a file called “class_graph.svg” that we can open in our browser. The result is shown in the pie chart above, which shows the subject areas where I work – mainly split between G and H. The complete code can be found on GitHub: https://github.com/benhoyle/EPOops.

Going Forward

The code is a bit hacky, but it is fairly easy to refine into a production-ready method. Options and possibilities are:

  • Getting the data from a patent management system directly (e.g. via an SQL connection in Python).
  • Adding the routine as a dynamic look-up on a patent attorney website – e.g. on a Django or Flask-based site.
  • Looking up classification names using the classification API.
  • Caching the data: the make-up of a representative’s cases changes fairly slowly (e.g. one update a week would do), so you could easily cache most of the data, requiring few look-ups of EPO data (the limit is 2.5GB/week for a free account).
  • Doing other charting – for example, you could plot countries on Pygal’s world map.
  • Adapting the routine for applicants/representatives, using EPO OPS queries to retrieve the publication numbers or XML to process.
  • Looking at more complex requests – full-text data could be retrieved and imported into natural language processing libraries.

Can You Mine Patent [Big] Data at Home?

Possibly. Let’s give it a go.

Big data - from DARPA

Data

In my experience, no one has quite realised how amazing this link is. It is a Google-hosted set of bulk downloads of patent and trademark data from the US Patent and Trademark Office.

Just think about this for a second.

Here you can download images of most commercial logos used between 1870(!) and the present day. Back in the day, doing image processing and machine learning, I would have given my right arm for such a data set.

Moreover, you get access (eventually) to the text of most US patent publications. Considering there are over 8 million of these, and considering that most new and exciting technologies are the subject of a patent application, this represents a treasure trove of information on human innovation.

Although we are limited to US-based patent publications, this is not a problem. The US is the world’s primary patent jurisdiction – many companies only patent in the US and most inventions of importance (in modern times) will be protected there. At this point we are also not looking at precise legal data – the accuracy of these downloads is not ideal. Instead, we are looking at “Big Data” (buzzword cringe) – general patterns and statistical gists from “messy” and incomplete datasets.

Storage

Initially, I started with 10 years’ worth of patent publications: 2001 to 2011. The data from 2001 onwards is pretty reliable; I have been told the OCR data from earlier patent publications is near useless.

An average year is around 60 GB of data (zipped!). Hence, we need a large hard drive.

You can pick up a 2TB external drive for about £60. I have heard they crash a lot. You might want to get two and mirror the contents using rsync.

[Update: command for rsync I am using is:

rsync -ruv /media/EXTHDD1/'Patent Downloads' /media/EXTHDD2/'Patent Downloads'

where EXTHDD1 and EXTHDD2 are the two USB disk drives.]

FlashGot

Download

I have an unlimited package on BT Infinity (hurray!). A great help in downloading the data is a little Firefox plugin called FlashGot. Install it, select the links of the files you want to download, right-click and choose the “FlashGot Selection” option. This basically sets off a little wget script that gets each of the links. I set it going just before bed – when I wake up the files are on my hard drive.

The two sets of files that look the most useful are the 2001+ full-text archives or the 2001+ full-text and embedded images. I went for 10 years’ worth of the latter.

Folders (cc: Shoplet Office Supplies)

Data Structure

The structure of the downloaded data is as follows:

  • Directory: Patent Downloads
    • Directory: [Year e.g. 2001] – Size ~ 65GB
      • [ZIP File – one per week – name format is date e.g. 20010607.ZIP] – Size ~ 0.5GB
        • Directory: DTDS [Does what it says on the tin – maybe useful for validation but we can generally ignore for now]
        • Directory: ENTITIES [Ditto – XML entities]
        • Directories: UTIL[XXXX] [e.g. UTIL0002, UTIL0003 – these contain the data] – Size ~ 50-100MB
          • [ZIP Files – one per publication – name format is [Publication Number]-[Date].ZIP e.g. US20010002518A1-20010607.ZIP] – Size ~ 50-350KB
            • [XML File for the patent publication data – name format is [Publication Number]-[Date].XML e.g. US20010002518A1-20010607.XML] – Size ~ 100KB
            • [TIF Files for the drawings – name format is [Publication Number]-[Date]-D[XXXXX].TIF where XXXXX is the drawing number e.g. US20010002518A1-20010607-D00012.TIF] – Size ~ 20KB

[Update: this structure varies a little from 2004 onwards – there are a few extra layers of directories between the original zipped folder and the actual XML.]

The original ZIPs

ZIP Files & Python

Python is my programming language of choice. It is simple and powerful. Any speed disadvantage is not really felt for large-scale, overnight batch processing (and most modern machines are well up to the task).

Ideally I would like to work with the ZIP files directly without unzipping the data. For one-level ZIP files (e.g. the 20010607.ZIP files above) we can use ‘zipfile‘, a built-in Python module. For example, the following short script ‘walks‘ through our ‘Patent Downloads’ directory above and prints out information about each first-level ZIP file.

import os
import zipfile
import logging
logging.basicConfig(filename="processing.log", format='%(asctime)s %(message)s')

exten = '.zip'
top = "/YOURPATH/Patent Downloads"

def print_zip(filename):
	print filename
	try:
		zip_file = zipfile.ZipFile(filename, "r")
		# list filenames

		for name in zip_file.namelist():
			print name,
		print

		# list file information
		for info in zip_file.infolist():
			print info.filename, info.date_time, info.file_size

	except Exception, ex:
		#Log error
		logging.exception("Exception opening file %s", filename)
		return

def step(ext, dirname, names):
	ext = ext.lower()

	for name in names:
		if name.lower().endswith(ext):
			print_zip(str(os.path.join(dirname, name)))

# Start the walk
os.path.walk(top, step, exten)

This code is based on that helpfully provided at PythonCentral.io. It lists all the files within each ZIP file. Now we have the start of a way to access the patent data files.

However, more work is needed. We come up against a problem when we hit the second level of ZIP files (e.g. US20010002518A1-20010607.ZIP): these nested files cannot be opened directly with zipfile in the same way. We need to think of a way around this so we can actually access the XML.

As a rough example of the scale we are talking about – a scan through 2001 to 2009 listing the second-level ZIP file names took about 2 minutes and created a plain-text document 121.9 MB long.
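My first idea for getting around the nested ZIP problem – untested as yet – is to read the inner ZIP into memory and wrap it in a file-like object, which zipfile will accept in place of a filename:

import zipfile
import StringIO

# Open the outer weekly ZIP file (filename is an example from above)
outer = zipfile.ZipFile("20010607.ZIP", "r")

for name in outer.namelist():
    if name.lower().endswith(".zip"):
        # Read the nested ZIP into memory and wrap it as a file-like object
        inner_data = StringIO.StringIO(outer.read(name))
        inner = zipfile.ZipFile(inner_data, "r")
        for inner_name in inner.namelist():
            if inner_name.lower().endswith(".xml"):
                xml_data = inner.read(inner_name)
                # ...parse xml_data with minidom or ElementTree here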

Next Time

Unfortunately, this is all for now as my washing machine is leaking and the kids are screaming.

Next time, I will be looking into whether zip_open works to access second-level (nested) ZIP files or whether we need to automate an unzip operation (if our hard drive can take it).

We will also get started on the XML processing within Python using either minidom or ElementTree.

Until then…

Automated Law: Simple Claim Breakdown Function

Patent attorneys: we care about the independent claims. An independent claim is a paragraph of text that defines an invention. Each invention has a number of discrete features. Can I build a function to split a claim into its component features?

The answer is possibly. Here is one way I could go about doing it.

First I would start with a JavaScript file: claimAnalysis.js. I would link this to an HTML page: claimAnalysis.html. This HTML page would have a large text box to copy and paste the text of an independent claim.

On a keyup() or onchange() event I would then run the following algorithm:

  • Get the text from the text box as a string.
  • Set a character placemarker to 0.
  • From the placemarker, find the next character from the set [“,”, “:”, “;”, “-”] or a new line.
  • Store the characters from the placemarker to the found character index as a string in an array.
  • Repeat the last two steps until “.” or the end of the text is reached.

From this we should have a rough breakdown of a claim into an array of feature strings. It will not be perfect but it makes a good start.
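As the rest of this blog's code is Python, here is the algorithm sketched in Python (a JavaScript version would follow the same logic):

import re

def split_claim(claim_text):
    # Split on ",", ":", ";", "-" or a new line; stop at the final "."
    features = []
    start = 0
    for match in re.finditer(r"[,:;\-\n.]", claim_text):
        features.append(claim_text[start:match.end()].strip())
        start = match.end()
        if match.group() == ".":
            break
    else:
        # Reached the end of the text without a "." - keep any tail
        features.append(claim_text[start:].strip())
    return [f for f in features if f]

print split_claim("A method, comprising: receiving data; processing the data; and outputting a result.")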

We can then show each located string portion in the array to a user. For example, with JavaScript we can add a table within a form containing input text boxes in rows. Each text box can contain a string portion. We can also add a checkbox to each portion or table row.

The user can then be offered “split” or “join” option buttons.

  • “Split” requires only one selection.
  • The user is told to place the cursor or select text in the box where they want the split to occur (using the selectionStart property?).
  • Two features are then created based on the cursor position or selected text.
  • “Join” requires more than one feature to be selected via the checkboxes.
  • All selected features are combined into one string portion in one text box, which replaces the previous text boxes (possibly by redrawing the table).

Once any splitting or joining is complete the user can confirm the features. A confirm button could use the POST method to input the features to a PHP script that saves them as XML on the server.

<claim><number>1</number><feature id="1">A method for doing something comprising:</feature>...</claim>
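The post above suggests PHP for the server side; an equivalent sketch using Python and Flask (to match the rest of the code on this blog – the form field names are invented) would be:

from flask import Flask, request
from xml.dom.minidom import Document

app = Flask(__name__)

@app.route("/save_features", methods=["POST"])
def save_features():
    # Build the <claim> XML document from the POSTed form fields
    doc = Document()
    claim = doc.createElement("claim")
    doc.appendChild(claim)
    number = doc.createElement("number")
    number.appendChild(doc.createTextNode(request.form["number"]))
    claim.appendChild(number)
    # The confirmed features arrive as repeated "feature" form fields
    for i, text in enumerate(request.form.getlist("feature"), start=1):
        feature = doc.createElement("feature")
        feature.setAttribute("id", str(i))
        feature.appendChild(doc.createTextNode(text))
        claim.appendChild(feature)
    # Save the XML on the server
    with open("claim.xml", "w") as f:
        doc.writexml(f)
    return "Saved"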

Automating a Legal Workflow: Dealing with Patent Communications

Last time we looked at how certain legal processes could benefit from automation. Today we will identify some patent process examples.

This post (and the others in the series) may be useful for:

  • Patent attorneys wishing to automate their processes;
  • Software developers looking to develop legal products; or
  • Those who are interested in how patents work.

First let us have a look at a “vanilla” patent application. The process of applying for a patent looks something like this:

[Flow diagram: the patent application process]
The colour-coding is as follows:

  • Grey process blocks are performed in relation to a patent office (e.g. European Patent Office).
  • The blue documents are typically prepared by a patent attorney.
  • The green documents are prepared and issued by the patent office.

For now we will concentrate on the central blocks. Much of a patent attorney’s day-to-day work consists of reporting documents issued by patent offices and of preparing documents to file in response.

The search and each iteration of the examination go something like this:

[Flow diagram: the search and examination cycle]

In my mind there are three areas in the above workflow where we could build an automated framework:

  • Initial brief review of objections by an attorney;
  • Detailed review of objections by attorney and development of a strategy to address the objections; and
  • The building of a response letter.

The reasons are:

  • To improve consistency, both for a single attorney and between attorneys;
  • To increase the chances of grant, by ensuring all outstanding objections are addressed;
  • To improve quality, by prompting an attorney for reasoning and basis;
  • To improve training, by embedding knowledge within the system and providing a guided path;
  • To reduce stress, by providing a framework that ensures consistency and quality without requiring the attorney to remember to provide it; and
  • Last but not least, to reduce attorney costs for clients by concentrating attorney time appropriately.

To start small, my aim is to build a web-based system for each of the three areas set out above. This system involves:

  • Creation of an XML document to store data;
  • Use of HTML forms to obtain data entered by an attorney;
  • Use of PHP to save obtained data as said XML document; and
  • Use of a letter-generation engine to generate letter text based on the XML document.

For example, a first stage may resemble the following flow diagram:

[Flow diagram: the initial review stage]

The aim is that this should take no more than 10 minutes. Information is gathered that may serve as a framework for a detailed review. Also a brief review highlights anything that may need to be emphasised to an applicant (and avoids an attorney being negligent).

A web-form gathers data, which is stored as an XML document. A suggested DTD is as follows:

<!ELEMENT REVIEW (CASEREF, COMMUNICATION, COMM_DATE, OBJECTIONS)> <!-- ROOT ELEMENT-->
<!ELEMENT CASEREF (#PCDATA)>
<!ELEMENT COMMUNICATION (#PCDATA)>
<!ELEMENT COMM_DATE (DATE)>
<!ELEMENT OBJECTIONS (OBJECTION*)>
<!ELEMENT OBJECTION (TYPE, COMMUNICATION, COMM_SECTION, APPLICATION_SECTION, LEGAL_PROVISION, REASON)>

<!ELEMENT COMM_SECTION (MAIN_NUMBER, PARA_NUMBER*)>

<!ELEMENT MAIN_NUMBER (NUMBER)>
<!ELEMENT PARA_NUMBER (NUMBER)>

<!ELEMENT APPLICATION_SECTION (CLAIM | DESCRIPTION)>

<!ELEMENT CLAIM (NUMBER)>
<!ELEMENT DESCRIPTION (PARAGRAPH+ | PORTION+)>
<!ELEMENT PORTION (START_LINE_NUMBER, END_LINE_NUMBER?, PAGE)>
<!ELEMENT START_LINE_NUMBER (NUMBER)>
<!ELEMENT END_LINE_NUMBER (NUMBER)>
<!ELEMENT PAGE (NUMBER)>

<!ELEMENT DATE (DAY, MONTH, YEAR)>
<!ELEMENT DAY (NUMBER)>
<!ELEMENT MONTH (NUMBER)>
<!ELEMENT YEAR (NUMBER)>

<!ELEMENT NUMBER (#PCDATA)>
<!ELEMENT REASON (#PCDATA)>
<!ELEMENT PARAGRAPH (#PCDATA)>
<!ELEMENT TYPE (#PCDATA)>

This results in an XML document similar to this:

<?xml version="1.0"?>
<review>
  <caseref>130929Test7</caseref>
  <communication>Article 94(3) EPC</communication>
  <comm_date>2013-09-29</comm_date>
  <objections>
    <objection entered="True">
      <type>unity</type>
      <comm_section>O</comm_section>
      <application_section>O</application_section>
      <legal_provision>O</legal_provision>
      <reason>Claims 1 and 5 do not share special technical features.</reason>
    </objection>
    <objection entered="True">
      <type>sufficiency</type>
      <comm_section/>
      <application_section/>
      <legal_provision/>
      <reason/>
    </objection>
  </objections>
</review>

(Both are works in progress so forgive any inaccuracies.)

This generated XML document can then be used to produce a short sentence or two for a reporting letter or email. For example, something like:

The Examiner raises [No. of <objection> tags] objections: [list <objection><type> for each <objection>].

Under [first <objection><type>] ([<objection><legal_provision>]), the Examiner objects to [<objection><application_section>] on the grounds of [<objection><reason>].

Etc.
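A first pass at that generation step in Python (minidom again; “review.xml” and the sentence templates are illustrative) might look like:

from xml.dom.minidom import parse

def text_of(element, tag):
    # Helper: return the text content of the first child element named 'tag'
    nodes = element.getElementsByTagName(tag)
    if nodes and nodes[0].childNodes:
        return nodes[0].childNodes[0].nodeValue
    return ""

dom = parse("review.xml")
objections = dom.getElementsByTagName("objection")

report = "The Examiner raises %d objections: %s.\n" % (
    len(objections),
    ", ".join(text_of(o, "type") for o in objections))

for o in objections:
    report += "Under %s (%s), the Examiner objects to %s on the grounds of %s.\n" % (
        text_of(o, "type"), text_of(o, "legal_provision"),
        text_of(o, "application_section"), text_of(o, "reason"))

print report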

If and when instructions are received to review the communication in more detail, the XML document can be extended during the process below:
Detailed Review

Finally, the extended XML document can be processed by a letter generation engine (e.g. something built in PHP or Python) to generate text for a response letter:
Letter Generation

This is all a work in progress so I will update you as I develop more. Any additional ideas or comments, please add them below.

(PS: if you like the charts see my previous post – Drawing Patent Figures on the iPad – they were drawn quickly in Grafio while giving the kids breakfast.)

Automating a Legal Workflow – First Thoughts

Being an engineer by training (and at heart) I have always been a bit sceptical of those who say that the law cannot be automated. Yes, there will always be special cases in need of bespoke thinking, but these can be acknowledged within a system. As law involves the application of written rules plus historical information, there is actually a large overlap with information and language theory.

Here are a few thoughts as I start looking at this myself. They may not all be “right” and may change as I go along but, hey, they might somehow be useful to someone.

Where to start?

As with most things, a first step is conscious reflection. Attempt to look with fresh eyes and then perform a brain dump; this provides a good starting point. I normally do not worry about documenting everything extensively; this is impractical if not impossible. If you build any system in an agile, iterative and extendible manner you can refine later.

A great person to do this is an intern or new starter – most if not all within an organisation will assume that the way things are done is the way things can be done. This is why people hire management consultants (at large cost).

At this stage bullet points are good – they allow hierarchy to be captured. This naturally translates into most data structures.

Law: meet the Internet

I may be a bit Web 2.0 (or even Web 1.0) but web technologies are a good starting point for automation. HTML, XML, CSS, JavaScript and PHP are mature technologies specifically designed for document processing. They also have, at their heart, the separation of content from presentation – something every lawyer who uses Word needs to learn.

Another advantage of web technologies is that you can quickly build prototypes and have everyone access them. They are also cheap, well-documented and easy – you can set up a web server on a mothballed PC using the LAMP framework (Linux, Apache, MySQL and PHP) with a few lines of command-line code. Hell, you can set up a system for a small business on a Raspberry Pi for £50 – silent and drawing 3-5 Watts. This is how the Googles and Facebooks became so large so quickly.

Model Process in a Code-Friendly Manner

The next step is to iterate through your process brain dump, adding some structure.

Pick out key events and triggers – model in flow diagrams.

Pick out key documents requiring input – model as bullet point hierarchies. Make a note of where input is actually required, what information may be stored already and how long each input takes to generate.

Think modules – try to separate processes and documents into relatively independent elements – ideally each module or level should have 3-7 elements (reflecting the number we can hold in our minds at any one time). If you go above 10 elements, add another layer of hierarchy. It is easier for our brains to work at different layers of hierarchy, each layer having a few elements, and “drill down” than it is to have few layers with many elements per layer.

Pick the easiest modular subprocess. Start with this and prototype it independently. Big systems hardly ever succeed (think of any public IT infrastructure). Document extensively – blogging internally or externally is a great way to do this, and you get the marketing for free!

Go from the model of the subprocess and any documents to a web workflow. Documents can become XML, then HTML. Presentation can be dealt with later via CSS – concentrate first on functionality. Take key process functions and code up computer equivalents (e.g. in PHP or another server back-end – Python is good for quickly prototyping back-end functions). I would avoid extensive database use at this stage – databases have relatively high inertia – iterate at a flat-file level until things stabilise, at which point you can look at a database.

Anyway I will try to provide some examples shortly to make this all a little more concrete.