The amazing @PatentSecretary has made my day by sending me a link on how to remove multiple authors from Track Changes Word documents.

This has been a pain for a while now. Firstly, Word sometimes suffers from bouts of multiple personality disorder, imagining me to be several individuals with the same name but with different Track Changes colours. Secondly, it is a pain when working in teams on a document for external use or review. It also doesn’t help that useful features are shuffled around with each version update of Word.

The advice itself comes from this very useful article by Shauna Kelly. The bit about removing author information is set out below:

Q: I want to send my document outside the company. I want to leave tracked changes in the document, but I don’t want anyone to see who made the tracked changes or when they were made. How do I do that?

Word 2002 and earlier

In Word 2002 and earlier, you can’t. The author (or reviewer) information and the date information are permanently attached to the revision when the revision was tracked. You can’t change them, even in macro code.

Word 2003

In Word 2003, Tools > Options > Security. Tick the box “Remove personal information from file properties on save.” In spite of the name, this does more than just remove information in the file properties. If this box is ticked, Word removes the name of the author of a tracked change, and it removes the date and time that the change was made when you save your document. But it leaves the tracked change itself. All tracked changes and comments will now be attributed to an anonymous “Author”.

Word 2007 and Word 2010

For one document at a time, you can remove the personal information about tracked changes. To do that:

  • In Word 2007: Round Office button > Prepare > Inspect Document > Inspect.
  • In Word 2010: File > Info > Check for Issues > Inspect Document > Inspect.

After the Inspector does its thing, you will see several ‘Remove All’ buttons.

  • The Remove All button for Comments, Revisions etc removes comments and accepts all tracked changes.
  • The Remove All button for Document Properties and Personal Information just assigns the name “Author” to your tracked changes, and removes the date and time the tracked change was made. This is the one you need if you want to retain the tracked changes, but remove the author’s name and the date and time the tracked change was made.

The Remove All button for Document Properties and Personal Information sets the ‘Remove personal information from file properties on save’ option for the document. So next time you save, your name will again be removed from tracked changes. If you don’t want that, then:

  • In Word 2010 do File > Info. In the ‘Prepare for Sharing’ section you will now see a note telling you that personal information will be removed on save. Click ‘Allow this information to be saved in your file’ to turn the setting off.
  • In Word 2007 and Word 2010 you can turn off this option in the Privacy Settings in the Trust Center. The option is greyed out and disabled unless (a) you have a document created in an earlier version of Word that used this setting or (b) you run the Document Inspector from the File (or Office Button) menu and choose to remove Document Properties and Personal Information.

A presentation given as a CIPA Webinar on 25 February 2014.

Provides an introduction to software as it relates to patenting and an overview of current practice in the UK and Europe. Details of relevant legislation and case law are provided, together with some tips for drafting.

Provided according to the terms set out here: http://www.eip.com/legal.php – i.e. it does not constitute legal advice and should be taken as guidance only.

As you may remember, a while back I posted some ideas for a patent workflow tool. It is taking a while, what with actual work and family commitments. However, I finally have a rough-and-ready prototype covering at least the initial review stage.

The application* is built in Flask. It generates an XML document containing the entered data. Fields are rendered based on the XML document (making use of XSLT). To avoid file system headaches, XML data is stored as string data in an SQLite3 database. The data is indexed using a hash masquerading as a key identifier. The key identifier can then be passed as a URL parameter to retrieve a particular XML document. Although nowhere near a fully working “thing”, the code is here if you want a look: https://github.com/benhoyle/attass .
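
To give a flavour of the storage side, here is a minimal sketch of the hash-keyed XML store (the names and routes are illustrative only, not the actual code in the repository):

import hashlib
import sqlite3

from flask import Flask, request

app = Flask(__name__)
DB = "attass.db"

def init_db():
	#One table: key identifier (a hash) mapped to the XML stored as a string
	with sqlite3.connect(DB) as conn:
		conn.execute("CREATE TABLE IF NOT EXISTS cases (key_id TEXT PRIMARY KEY, xml TEXT)")

def save_xml(xml_string):
	#A hash of the XML content doubles as the key identifier
	key_id = hashlib.sha1(xml_string.encode("utf-8")).hexdigest()
	with sqlite3.connect(DB) as conn:
		conn.execute("INSERT OR REPLACE INTO cases (key_id, xml) VALUES (?, ?)", (key_id, xml_string))
	return key_id

@app.route("/case")
def load_case():
	#The key identifier is passed as a URL parameter, e.g. /case?key=abc123
	key_id = request.args.get("key")
	with sqlite3.connect(DB) as conn:
		row = conn.execute("SELECT xml FROM cases WHERE key_id = ?", (key_id,)).fetchone()
	return row[0] if row else ("Not found", 404)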

Initial Review: Process Overview in Pictures

First we enter our case reference:

Enter Case Reference

Then we enter the communication details and the objections raised:

Communication Overview

Then we briefly enter salient details of each objection raised in the communication. This can be used for reporting and as a reference for a later, more detailed review:

Enter First Objection

There is an option to enter further objections under the same category (see the lower checkbox). This adds an additional XML element and populates it with data from a template. Once submit is pressed, fields for a next objection will load:
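
Under the hood this is just a copy-and-append on the XML tree. A rough illustration using ElementTree (the element names here are mine, not necessarily those used in the actual tool):

import copy
import xml.etree.ElementTree as ET

#Hypothetical template for a single objection
OBJECTION_TEMPLATE = ET.fromstring("<objection><category/><summary/></objection>")

def add_objection(doc_root, category):
	#Deep-copy the template so each objection is an independent element
	new_objection = copy.deepcopy(OBJECTION_TEMPLATE)
	new_objection.find("category").text = category
	doc_root.append(new_objection)
	return new_objection

#Usage: append a second clarity objection to an existing communication document
root = ET.fromstring("<communication><objection><category>clarity</category><summary/></objection></communication>")
add_objection(root, "clarity")
print(ET.tostring(root))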

Enter Second Objection

The result is then a populated XML document that forms the starting point for a response:

XML Document

Where from here?

I have similar review workflows in progress for novelty and inventive step. They follow a similar pattern: an XML template defines data entry and walks the user through a number of review steps. I have JavaScript functions that break down a claim into features – I can use these as a front-end for my novelty review. Inventive step has a different process for each of the UK, Europe and the US, wherein each process incorporates the current practice from case law.

The aim is that objections entered in the initial review will be addressed through a detailed review and/or instructions. As we initially enter the objections, we do not have to worry about missing objections or approaching things in a less efficient order. The workflow also allows a response to be split into a number of modular processes. These are then ripe for outsourcing, e.g. to paralegal staff or trainees, allowing an attorney to concentrate their time on the “meat” of the objections and thus saving money for clients. The workflow also provides mental scaffolding that is perfect for trainees and/or sleep-deprived attorneys with young children/dogs.

I use a combination of Evernote, Remember the Milk and Trello to jot down ideas, plan and set out to-do lists. Currently pending are:

  • Map XML to more user-friendly form fields;
  • Sort the loading of existing data;
  • Sort the CSS for that tiny textarea;
  • Add some JavaScript time-savers to the front-end (e.g. that allow a user to click “same communication” for multiple objections);
  • Build an XSL file that transforms the result of the initial review to text for storage or reporting;
  • Work out how to use cloud storage APIs to automatically save a copy of the above to a document management system;
  • Add detailed review workflow, including bespoke processes for novelty, inventive step and patentability/excluded subject matter;
  • Add easy “report bug/suggest feature” reporting for iterative updates; and
  • Host on a £30 Raspberry Pi in the office.

* Aside: ‘web-site’/’web-app’/’app’/’application’ are all kind of the same thing. A “web-site” was traditionally a static site that hosted HTML documents. No-one really does that any more though; nearly all sites are built dynamically, making them more like a traditional client-server application (especially with JavaScript on the front end and Python or similar on the back-end).

I have been playing with natural language processing.

Now I have a body of patent data (see here), I can do some interesting things. For example, most people would say that patents have a pretty specific terminology. I say: show me the data.

Taking all patent publications in 2001 as an example, I programmed a little routine (sketched in code after the list) that:

  • Extracted the text data of each patent publication;
  • Split the text data into words;
  • Filtered the words for non-words (e.g. punctuation etc.);
  • Applied a stemming algorithm (from 1979!); and
  • Recorded the frequency distribution of the results.
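
A minimal sketch of that routine, using NLTK’s tokeniser and Porter stemmer (as a stand-in for the stemmer mentioned above) and a plain Counter for the frequency distribution – illustrative only, my actual script differs in the details:

from collections import Counter

import nltk
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
counts = Counter()

def count_stems(text):
	#Split the text data into word tokens (requires the NLTK 'punkt' data)
	tokens = nltk.word_tokenize(text)
	#Filter out punctuation and other non-words, and lowercase the rest
	words = [token.lower() for token in tokens if token.isalpha()]
	#Stem each word and add it to the running frequency distribution
	counts.update(stemmer.stem(word) for word in words)

#After calling count_stems() on the text of every publication:
#top_100 = counts.most_common(100)
#coverage = sum(n for _, n in top_100) / float(sum(counts.values()))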

In total I counted 277,493,492 occurrences of 287,455 unique word stems.

In common with most written material, 100 words accounted for 50% of the published material. Amazing when you think about it.

(Next time you get a drafting bill from a patent attorney, complain that half their work is shuffling 100 words around :)).

Here is the graph (click to zoom for full glory).

Cumulative Percentage of Top 100 Words (click for full-size)

Patent Stopwords

There is more.

“Stopwords” are common words that are often filtered out when analysing documents. The Natural Language Tool Kit provides a set based on a general analysis of written English. These include words such as:

…’did’, ‘doing’, ‘a’, ‘an’, ‘the’, ‘and’,  ‘but’, ‘if’, ‘or’, ‘because’,  ‘as’,  ‘until’,  ‘while’,  ‘of’,  ‘at’,  ‘by’,  ‘for’…

In total there are 127 stopwords in this collection, representing high-frequency words that carry little lexical meaning.

I thought it would be interesting to compare these stopwords with the 127 most frequent stems in our frequency count.
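
The comparison itself is a couple of set operations (a sketch, reusing the Counter of stems from the earlier snippet):

from nltk.corpus import stopwords

#NLTK's general English stopwords (run nltk.download('stopwords') first if needed)
english_stopwords = set(stopwords.words("english"))

#The 127 most frequent stems from the patent frequency count (the 'counts' Counter above)
top_patent_stems = set(stem for stem, _ in counts.most_common(127))

#Frequent in patents but not ordinary English stopwords - the "patent stopwords"
patent_only = top_patent_stems - english_stopwords

#Frequent in patents and also ordinary English stopwords - the "universal stopwords"
shared = top_patent_stems & english_stopwords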

Words that occur frequently in (US) patent publications but are not regular English stopwords include:

said use first one form invent thi may second data claim wherein accord control signal present devic provid portion includ embodi compris method layer surfac system process exampl step ha shown connect posit prefer oper gener mean inform circuit imag unit time materi also end wa member line film side least select apparatu output element refer receiv describ direct base light section set show substrat contain display view valu part cell two plural group structur number optic electrod input result abov respect region memori plate case differ user

These words will be familiar to most patent professionals. The result of the stemming operation can be seen in certain words, e.g. “oper” – these should be treated as “oper*”, covering “operates”, “operating”, “operate” etc. You can see that stemming is not perfect (“thi” relates to “this”, where the final “s” has been stripped as if it were a plural ending) but it is generally good enough. Without the stemming there would be many different variations of the same word in our counts.

Now this list of “patent stopwords” is useful. Firstly, these words are probably not useful for searching in isolation (we may move onto n-grams later). Secondly, they can be used as a dictionary of sorts for claim drafting. Thirdly, they could be used to distinguish patent text from non-patent text (e.g. as the basis for a feature vector for this classification).
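
By way of illustration only, the third idea could be as simple as a vector of patent-stopword frequencies for a passage of text (a hedged sketch, not something I have tested):

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def patent_stopword_features(text, patent_stems):
	#patent_stems: an ordered list of "patent stopword" stems, e.g. ["claim", "wherein", ...]
	tokens = [token.lower() for token in text.split() if token.isalpha()]
	stems = [stemmer.stem(token) for token in tokens]
	total = float(len(stems)) or 1.0
	#One feature per patent stopword stem: its relative frequency in the passage
	return [stems.count(stem) / total for stem in patent_stems]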

The words that occur in patent specifications but also occur in “the real world” are also interesting:

the of a to and in is for be an as by with or are that from which at on can it have such each not when between other into through further more about than will so if then

These can be used as universal stopwords.

Further Fun

There are a number of paths for further analysis:

  • Extend across the whole US patent publication corpus from 2001 to 2014. I may need to optimise my code to do this!
  • Perform a similar analysis for different classification levels – e.g. do patents classified as G have a different vocabulary from those classified as H?
  • Look at infrequent or unique words – How many are there? Are they useful for searching clusters?

Over Christmas I had a chance to experiment with the European Patent Office’s Online Patent Services. This is a web service / application programming interface (API) for accessing the large patent databases administered by the European Patent Office. It has enormous potential.

To get to grips with the system I set myself a simple task: taking a text file of patent publication numbers (my cases), generate a pie chart of the resulting classifications. In true Blue Peter-style, here is one I made earlier (it’s actually better in full SVG glory, but WordPress.com does not support the format):

Classifications for Cases (in %)

Here is how to do it:

Step 1 – Get Input

Obtain a text file of publication numbers. Most patent management systems (e.g. Inprotech) will allow you to export to Excel. I copied and pasted from an Excel column into a text file, which resulted in a list of publication numbers separated by new line (“\n”) elements.

Step 2 – Register

Register for a free EPO OPS account here: http://www.epo.org/searching/free/ops.html. My account was approved about a day later.

Step 3 – Add an App

Set up an “app” at the EPO Developer Portal. After registering you will receive an email with a link to do this. Generally the link is something like: https://developers.epo.org/user/[your no.]/apps. You will be asked to log in.

Set up the “app” as something like “myapp” or “testing” etc. You will then have access to a key and a secret for this “app”. Make a note of these. I copied and pasted them into a “config.ini” file of the form:

[Login Parameters]
C_KEY="[Copied key value]"
C_SECRET="[Copied secret value]"

Step 4 – Read the Docs

Read the documentation, especially ‘OPS version 3.1 documentation – version 1.2.10’. Also see this document for a description of the XML Schema (it may be easier than looking at the schema itself).

Step 5 – Authenticate

Now onto some code. First we need to use that key and secret to authenticate ourselves using OAuth.

I first tried urllib2 in Python, but it was not sending the POST payload correctly, so I reverted to urllib and httplib, which worked. When doing this I found it easier to store the host and authentication URL as variables in my “config.ini” file. Hence, this file now looked like:

[Login Parameters]
C_KEY="[Copied key value]"
C_SECRET="[Copied secret value]"

[URLs]
HOST=ops.epo.org
AUTH_URL=/3.1/auth/accesstoken

Although object-oriented purists will burn me at the stake, I created a little class wrapper to store the various parameters. This was initialised with the following code:

import ConfigParser
import urllib, urllib2
import httplib
import json
import base64
from xml.dom.minidom import Document, parseString
import logging
import time

class EPOops():

	def __init__(self, filename):
		#filename is the filename of the list of publication numbers

		#Load Settings
		parser = ConfigParser.SafeConfigParser()
		parser.read('config.ini')
		self.consumer_key = parser.get('Login Parameters', 'C_KEY')
		self.consumer_secret = parser.get('Login Parameters', 'C_SECRET')
		self.host = parser.get('URLs', 'HOST')
		self.auth_url = parser.get('URLs', 'AUTH_URL')

		#Set filename
		self.filename = filename

		#Initialise list for classification strings
		self.c_list = []

		#Initialise new dom document for classification XML
		self.save_doc = Document()

		root = self.save_doc.createElement('classifications')
		self.save_doc.appendChild(root)

The authentication method was then as follows:

	def authorise(self):
		b64string = base64.b64encode(":".join([self.consumer_key, self.consumer_secret]))
		logging.error(self.consumer_key + self.consumer_secret + "\n" + b64string)
		#urllib2 method was not working - returning an error that grant_type was missing
		#request = urllib2.Request(AUTH_URL)
		#request.add_header("Authorization", "Basic %s" % b64string)
		#request.add_header("Content-Type", "application/x-www-form-urlencoded")
		#result = urllib2.urlopen(request, data="grant_type=client_credentials")
		logging.error(self.host + ":" + self.auth_url)

		#Use urllib method instead - this works
		params = urllib.urlencode({'grant_type' : 'client_credentials'})
		req = httplib.HTTPSConnection(self.host)
		req.putrequest("POST", self.auth_url)
		req.putheader("Host", self.host)
		req.putheader("User-Agent", "Python urllib")
		req.putheader("Authorization", "Basic %s" % b64string)
		req.putheader("Content-Type" ,"application/x-www-form-urlencoded;charset=UTF-8")
		req.putheader("Content-Length", str(len(params)))  #length of the urlencoded payload
		req.putheader("Accept-Encoding", "utf-8")

		req.endheaders()
		req.send(params)

		resp = req.getresponse()
		params = resp.read()
		logging.error(params)
		params_dict = json.loads(params)
		self.access_token = params_dict['access_token']

This results in an access token you can use to access the API for 20 minutes.

Step 6 – Get the Data

Once authentication is sorted, getting the data is pretty easy.

This time I used the newer urllib2 library. The URL was built as a concatenation of a static look-up string and the publication number as a variable.

The request uses an “Authorization” header with a “Bearer” value containing the access token. You also need to add some error handling for when your allotted 20 minutes runs out – I looked for an error message mentioning an invalid access token and re-performed the authentication if this was detected.

I was looking at “Biblio” data. This returned the classifications without the added overhead of the full-text and claims. The response is XML constructed according to the schema described in the Docs above.

The code for this is as follows:

	def get_data(self, number):
		data_url = "/3.1/rest-services/published-data/publication/epodoc/"
		request_type = "/biblio"
		request = urllib2.Request("https://ops.epo.org" + data_url + number + request_type)
		request.add_header("Authorization", "Bearer %s" % self.access_token)
		try:
			resp = urllib2.urlopen(request)
		except urllib2.HTTPError, error:
			error_msg = error.read()
			if "invalid_access_token" in error_msg:
				#Access token has expired - re-authenticate and retry
				self.authorise()
				resp = urllib2.urlopen(request)
			else:
				raise

		#parse returned XML in resp
		XML_data = resp.read()
		return XML_data
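
For reference, the class so far can be used along the following lines (the publication number is just an illustrative epodoc-format example):

epo = EPOops("cases.txt")
epo.authorise()
xml_data = epo.get_data("EP1000000")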

Step 7 – Parse the XML

We now need to play around with the returned XML. Python offers a couple of libraries to do this, including Minidom and ElementTree. ElementTree is preferred for memory-management reasons but I found the iter() / getiterator() methods to be a bit dodgy in the version I was using, so I fell back on using Minidom.

As the “Biblio” data includes all publications (e.g. A1, A2, A3, B1 etc), I selected the first publication in the data for my purposes (otherwise there would be a duplication of classifications). To do this I selected the first “<exchange-document>” tag and its child tags.

As I was experimenting, I actually extracted the classification data as two separate types: text and XML. Text data for each classification, simply a string such as “G11B  27/    00            A I”, can be found in the  “<classification-ipcr>” tag. However, when looking at different levels of classification this single string was a bit cumbersome. I thus also dumped an XML tag – “<patent-classification>” – containing a structured form of the classification, with child tags for “<section>”, “<class>”, “<subclass>”, “<main-group>” and “<subgroup>”.

My function saved the text data in a list and the extracted XML in a new XML string. This allowed me to save these structures to disk, mainly so that I could pick up at a later date without continually hitting the EPO data servers.

The code is here:

	def extract_classification(self, xml_str):
		#extract the classification elements from the returned XML
		dom = parseString(xml_str)
		#Select first publication for classification extraction
		first_pub = dom.getElementsByTagName('exchange-document')[0]
		self.c_list = self.c_list + [node.childNodes[1].childNodes[0].nodeValue for node in first_pub.getElementsByTagName('classification-ipcr')]

		for node in first_pub.getElementsByTagName('patent-classification'):
			self.save_doc.firstChild.appendChild(node)

Step 8 – Wrap It All Up

The above code needed a bit of wrapping to load the publication numbers from the text file and to save the text list and XML containing the classifications. This is straightforward and shown below:

	def total_classifications(self):
		number_list = []

		#Get list of publication numbers
		with open(self.filename, "r") as f:
			for line in f:
				number_list.append(line.replace("/","")) #This gets rid of the slash in PCT publication numbers

		for number in number_list:
			XML_data = self.get_data(number.strip())
			#time.sleep(1) - might want this to be nice to EPO :)
			self.extract_classification(XML_data)

		#Save list to file
		with open("classification_list.txt", "wb") as f:
			f.write("\n".join(str(x) for x in self.c_list))

		#Save xmldoc to file
		with open("save_doc.xml", "wb") as f:
			self.save_doc.writexml(f)

Step 9 – Counting

Once I had the XML data containing the classifications, I wrote a little script to count the various classifications at each level for charting. This involved parsing the XML and counting unique occurrences of strings representing different levels of classification. For example, the “section” level has values such as “G” and “H”. The next level, “class”, was counted by looking at a string made up of “section” + “class”, e.g. “G11B”. The code is here:

from xml.dom.minidom import parse
import logging, pickle, pygal
from pygal.style import CleanStyle

#create list of acceptable tags - tag_group - then do if child.tagName in tag_group

#initialise upper counting dict
upper_dict = {}

#initialise list of tags we are interested in
tags = ['section', 'class', 'subclass', 'main-group', 'subgroup']

with open("save_doc.xml", "r") as f:
	dom = parse(f)

#Get each patent-classification element
for node in dom.getElementsByTagName('patent-classification'):
	#Initialise classification string to nothing
	class_level_val = ""
	logging.error(node)
	#for each component of the classification
	for child in node.childNodes:
		logging.error(child)
		#Filter out "text nodes" with newlines
		if child.nodeType != child.TEXT_NODE and len(child.childNodes) > 0:

			#Check for required tagNames - only works if element has a tagName
			if child.tagName in tags:

				#if no dict for selected component
				if child.tagName not in upper_dict:
					#make one
					upper_dict[child.tagName] = {}
				logging.error(child.childNodes)

				#Get current component value as catenation of previous values
				class_level_val = class_level_val + child.childNodes[0].nodeValue

				#If value is in current component dict
				if class_level_val in upper_dict[child.tagName]:
					#Increment
					upper_dict[child.tagName][class_level_val] += 1
				else:
					#Create a new entry
					upper_dict[child.tagName][class_level_val] = 1

print upper_dict
#Need to save results
with open("results.pkl", "wb") as f:
	pickle.dump(upper_dict, f)

The last lines print the resulting dictionary and then save it to a file for later use. After looking at the results it was clear that, past the “class” level, the data was not that useful for a high-level pie-chart: there were many counts of “1” and a few larger clusters.

Step 10 – Charting

I stumbled across Pygal a while ago. It is a simple little charting library that produces some nice-looking SVG charts. Another alternative is ‘matplotlib’.

The methods are straightforward. The code below puts a rim on the pie-chart with a breakdown of the class data.

#Draw pie chart
pie_chart = pygal.Pie(style=CleanStyle)
pie_chart.title = 'Classifications for Cases (in %)'

#Get names of different sections for pie-chart labels
sections = upper_dict['section']

#Get values from second level - class
classes = upper_dict['class']
class_values = classes.keys() #list of different class values

#Iterate over keys in our section results dictionary
for k in sections.keys():
	#check if section key is in class key, if so add value to set for section

	#Initialise list to store values for each section
	count_values = []
	for class_value in class_values:
		if k in class_value:
			#Add count for this class to the list for section k
			count_values.append(classes[class_value])
	pie_chart.add(k, count_values)

pie_chart.render_to_file('class_graph.svg')

That’s it. We now have a file called “class_graph.svg” that we can open in our browser. The result is shown in the pie-chart above, which shows the subject-areas where I work, mainly split between G and H. The complete code can be found on GitHub: https://github.com/benhoyle/EPOops.

Going Forward

The code is a bit hacky, but it is fairly easy to refine into a production-ready method. Options and possibilities are:

  • Getting the data from a patent management system directly (e.g. via an SQL connection in Python).
  • Adding the routine as a dynamic look-up on a patent attorney website – e.g. on a Django or Flask-based site.
  • Looking up classification names using the classification API.
  • The make-up of a representative’s cases would change fairly slowly (e.g. once a week for an update). Hence, you could easily cache most of the data, requiring few look-ups of EPO data (the limit is 2.5GB/week for a free account).
  • Doing other charting – for example you could plot countries on Pygal’s world map.
  • Adapting the approach for applicants / representatives, using EPO OPS queries to retrieve the publication numbers or XML to process.
  • Looking at more complex requests, full-text data could be retrieved and imported into natural language processing libraries.

Possibly. Let’s give it a go.

Big data - from DARPA

Data

In my experience, no one has quite realised how amazing this link is. It is a hosting (by Google) of bulk downloads of patent and trademark data from the US Patent and Trademark Office.

Just think about this for a second.

Here you can download images of most commercial logos used between 1870(!) and the present day. Back in the day, doing image processing and machine learning, I would have given my right arm for such a data set.

Moreover, you get access (eventually) to the text of most US patent publications. Considering there are over 8 million of these, and considering that most new and exciting technologies are the subject of a patent application, this represents a treasure trove of information on human innovation.

Although we are limited to US-based patent publications, this is not a problem. The US is the world’s primary patent jurisdiction – many companies only patent in the US and most inventions of importance (in modern times) will be protected there. At this point we are also not looking at precise legal data – the accuracy of these downloads is not ideal. Instead, we are looking at “Big Data” (buzzword cringe) – general patterns and statistical gists from “messy” and incomplete datasets.

Storage

Initially, I started with 10 years’ worth of patent publications: 2001 to 2011. The data from 2001 onwards is pretty reliable; I have been told the OCR data from earlier patent publications is near useless.

An average year is around 60 GBytes of data (zipped!). Hence, we need a large hard drive.

You can pick up a 2TB external drive for about £60. I have heard they crash a lot. You might want to get two and mirror the contents using rsync.

[Update: command for rsync I am using is:

rsync -ruv /media/EXTHDD1/'Patent Downloads' /media/EXTHDD2/'Patent Downloads'

where EXTHDD1 and EXTHDD2 are the two USB disk drives.]

FlashGot

Download

I have an unlimited package on BT Infinity (hurray!). A great help to download the data is a little Firefox plugin called FlashGot. Install it, select the links of the files you want to download, right-click and choose “Flashgot” selection. This basically sets off a little wget script that gets each of the links. I set it going just before bed – when I wake up the files are on my hard-drive.

The two sets of files that look the most useful are the 2001+ full-text archives or the 2001+ full-text and embedded images. I went for 10 years worth of the latter.

Folders (cc: Shoplet Office Supplies)

Data Structure

The structure of the downloaded data is as follows:

  • Directory: Patent Downloads
    • Directory: [Year e.g. 2001] – Size ~ 65GB
      • [ZIP File - one per week - name format is date e.g. 20010607.ZIP] – Size ~ 0.5GB
        • Directory: DTDS [Does what it says on the tin - maybe useful for validation but we can generally ignore for now]
        • Directory: ENTITIES [Ditto - XML entities]
        • Directories: UTIL[XXXX] [e.g. UTIL0002, UTIL0003 - these contain the data] – Size ~ 50-100MB
          • [ZIP Files - one per publication - name format is [Publication Number]-[Date].ZIP e.g. US20010002518A1-20010607.ZIP] – Size ~ 50-350KB
            • [XML File for the patent publication data - name format is [Publication Number]-[Date].XML e.g. US20010002518A1-20010607.XML] – Size ~100 KB
            • [TIF Files for the drawings - name format is [Publication Number]-[Date]-D[XXXXX].TIF where XXXXX is the drawing number e.g. US20010002518A1-20010607-D00012.TIF] – Size ~20 KB

[Update: this structure varies a little from 2004 onwards - there are a few extra layers of directories between the original zipped folder and the actual XML.]

The original ZIPs

ZIP Files & Python

Python is my programming language of choice. It is simple and powerful. Any speed disadvantage is not really felt for large-scale, overnight batch processing (and most modern machines are well up to the task).

Ideally I would like to work with the ZIP files directly without unzipping the data. For one-level ZIP files (e.g. the 20010607.ZIP files above) we can use ‘zipfile‘, a built-in Python module. For example, the following short script ‘walks‘ through our ‘Patent Downloads’ directory above and prints out information about each first-level ZIP file.

import os
import zipfile
import logging
logging.basicConfig(filename="processing.log", format='%(asctime)s %(message)s')

exten = '.zip'
top = "/YOURPATH/Patent Downloads"

def print_zip(filename):
	print filename
	try:
		zip_file = zipfile.ZipFile(filename, "r")
		# list filenames

		for name in zip_file.namelist():
			print name,
		print

		# list file information
		for info in zip_file.infolist():
			print info.filename, info.date_time, info.file_size

	except Exception, ex:
		#Log error
		logging.exception("Exception opening file %s" % filename)
		return

def step(ext, dirname, names):
	ext = ext.lower()

	for name in names:
		if name.lower().endswith(ext):
			print_zip(str(os.path.join(dirname, name)))

# Start the walk
os.path.walk(top, step, exten)

This code is based on that helpfully provided at PythonCentral.io. It lists all the files in the ZIP file. Now we have a start at a way to access the patent data files.

However, more work is needed. We come up against a problem when we hit the second-level of ZIP files (e.g. US20010002518A1-20010607.ZIP). These cannot be manipulated again recursively with zipfile. We need to think of a way around this so we can actually access the XML.
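
One route to investigate (a sketch only – I have not yet tried it against the full dataset) is to read the inner ZIP into memory and hand zipfile a file-like object:

import zipfile
from StringIO import StringIO

def open_nested_zip(outer_zip_path, inner_name):
	#Return a ZipFile object for a ZIP stored inside another ZIP
	outer = zipfile.ZipFile(outer_zip_path, "r")
	#Read the inner ZIP into memory and wrap it in a file-like object
	inner_data = outer.read(inner_name)
	return zipfile.ZipFile(StringIO(inner_data), "r")

#e.g. inner = open_nested_zip("2001/20010607.ZIP", "UTIL0002/US20010002518A1-20010607.ZIP")
#     xml_string = inner.read("US20010002518A1-20010607.XML")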

As a rough example of the scale we are talking about – a scan through 2001 to 2009 listing the second-level ZIP file names took about 2 minutes and created a plain-text document 121.9 MB long.

Next Time

Unfortunately, this is all for now as my washing machine is leaking and the kids are screaming.

Next time, I will be looking into whether zip_open works to access second-level (nested) ZIP files or whether we need to automate an unzip operation (if our harddrive can take it).

We will also get started on the XML processing within Python using either minidom or ElementTree.

Until then…

Every so often you get a case that needs to be filed on the last day of the one-year priority period. However, when this happens you need to know how long a year is.

“FOOL!” You may shout.

But no: does a one-year period include or exclude the day of the starting event? I.e. if you file a first application on 1 January 2013, do you have until 1 January 2014 INCLUSIVE to file a priority-claiming application? Or must the priority-claiming application be filed BY 1 January 2014 EXCLUSIVE, i.e. by 31 December 2013? Trainees may stumble here.

Confusingly, the patent legislation in Europe and the UK is not entirely helpful. To get an answer you need to go old skool: back to 1883 and the Paris Convention for the Protection of Industrial Property.

More precisely, Article 4, paragraph C, clause 2 averts your crisis:

C.

(1) The periods of priority referred to above shall be twelve months for patents and utility models, and six months for industrial designs and trademarks.

(2) These periods shall start from the date of filing of the first application; the day of filing shall not be included in the period.

Hurray! We can file on 1 January 2014.
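
In code terms (purely for illustration), the deadline is the anniversary of the filing date precisely because the filing day itself is excluded:

from datetime import date
from dateutil.relativedelta import relativedelta

first_filing = date(2013, 1, 1)
#The day of filing is not included, so the twelve-month period runs to the anniversary
priority_deadline = first_filing + relativedelta(years=1)
print(priority_deadline)  # 2014-01-01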

The link-bait title is only half tongue-in-cheek.

Last night I attended a great little seminar on improving business-to-business social media use run by Bath and Bristol Marketing Network [I cheated a little - it's a network for "marketing professionals" rather than "marketing amateurs"]. The speakers were Noisy Little Monkey - a digital marketing agency [who I now respect even more knowing they have an office in Shepton Mallet].

The main points that filtered through my fatigued post-5.30pm brain were:

  1. Identify your audience.
  2. Use images/graphics as well as text.
  3. Plan, test, measure, evaluate, repeat.
  4. Social media is not about conversion.
  5. Identify the Twitter geeks who are going to push your content.
  6. Use editorial and event calendars to generate a content plan for a year.

Social Media Drives Growth! (CC: mkhmarketing)


Here’s some more detail:

Identify your audience

  • Even better, categorise it.
  • Identify 5-10 groups and write a half-page “persona” for each group.
  • E.g. Michael Smith – manager of a software company – 45 – lives in Hereford with 2 kids.
  • Bear these “personas” in mind when writing content.

Use images/graphics as well as text

Plan, test, measure, evaluate, repeat

  • The tools are there – e.g. X Analytics, Twitter analysis tools like FollowerWonk etc. – build evidence and base strategy on it.
  • Prepare a monthly report that gives traffic/demographic/content statistics.
  • Systematically experiment with variations on format and content and use the above statistics to evaluate. E.g. What topics pique interest? Do images actually make a difference to engagement and sharing?

Social media is not about conversion

  • Sales come from phone calls, website visits, face-to-face encounters. Social media is the noise that pushes people into the sales funnel. It does work.
  • That said, the pressure to make pushy sales is removed.
  • Educating and entertaining become more important.

Identify the Twitter geeks who are going to push your content

  • As in most things, only 1-5% of a group actually drives conversations.
  • For example, on Twitter there are key individuals that are followed by many – if you were looking to get exposure work out what they like and what makes them tick. Find out what their interests are to aim content at them for retweets, comments and blog conversations.
  • You can identify these individuals using tools – for example, you can sort for individuals who have a large number of followers in the areas you operate in and who are likely to retweet URLs.

Use editorial and event calendars to generate a content plan for a year

  • You might know when IP events are going to be held. You might know when technology events are to be held. You can plan your content (e.g. blog posts) around these.
  • Also you can find out magazine and newspaper editorial calendars (just google “magazine name” + “editorial calendar”) – you can have a yearly plan of when articles are published and fit blog articles into this.

Patent attorneys: we care about the independent claims. An independent claim is a paragraph of text that defines an invention. Each invention has a number of discrete features. Can I build a function to split a claim into its component features?

The answer is possibly. Here is one way I could go about doing it.

First I would start with a JavaScript file: claimAnalysis.js. I would link this to an HTML page: claimAnalysis.html. This HTML page would have a large text box to copy and paste the text of an independent claim.

On a keyup() or onchange() event I would then run the following algorithm:

  • Get the text from the text box as a string.
  • Set a character placemarker to 0.
  • From the placemarker, find the next character from the set [",", ":", ";", "-" or new line].
  • Store the characters from the placemarker to the found character index as a string in an array.
  • Repeat the last two steps (moving the placemarker forward) until a “.” or the end of the text is reached.

From this we should have a rough breakdown of a claim into an array of feature strings (sketched in code below). It will not be perfect but it would make a good start.
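
Although the post imagines this in JavaScript on the front-end, the same idea as a short Python sketch (to match the other code on this blog) would be:

import re

def split_claim(claim_text):
	#Stop at the first full stop (taken as the end of the claim), if there is one
	body = claim_text.split(".")[0]
	#Split on the delimiter characters listed in the algorithm above
	features = re.split(r"[,:;\-\n]", body)
	#Drop empty fragments and surrounding whitespace
	return [feature.strip() for feature in features if feature.strip()]

claim = ("A method for doing something, comprising: "
	"receiving data; processing the data; and outputting a result.")
print(split_claim(claim))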

We can then show each located string portion in the array to a user. For example, with JavaScript we can add a table within a form containing input text boxes in rows. Each text box can contain a string portion. We can also add a checkbox to each portion or table row.

The user can then be offered “split” or “join” option buttons.

  • “Split” requires only one selection.
  • The user is told to place the cursor/select text in the box where they want the split to occur (using selectionStart property?).
  • Two features are then created based on the cursor position or selected text.
  • “Join” requires > 1 features to be selected via the checkboxes.
  • All selected features are combined into one string portion in one text box which replaces the previous text boxes (possibly by redrawing the table).

Once any splitting or joining is complete the user can confirm the features. A confirm button could use the POST method to input the features to a PHP script that saves them as XML on the server.

<claim><number>1</number><feature id="1">A method for doing something comprising:</feature>...</claim>

A while back we looked at using “Assigned Tasks” to send tasks to other people.

This previous technique required the recipient to manage their own tasks. This may not be great if the recipient is over-loaded. It also does not allow the sender of the task to change the task properties (e.g. change priority to urgent or move to another date).

There is another way to manage people using Outlook tasks. This is by using shared tasks. How to do this is explained below.


Setup a Shared Folder – Managee Computer

We will assume the person you want to manage is a “managee”. These steps need to be performed on the managee’s computer.

  1. Click on “Tasks” at the bottom of Outlook.
  2. Click on the “Tasks” entry in the left-hand-side menu.
  3. Click on the “Folder” tab at the top of the tasks view.
  4. Click on “Folder Permissions” (second to last entry).
  5. Click “Add”.
  6. Select everyone you want as a “manager” and click “OK”.
  7. Select the “Author” permission from the dropdown list and click “OK”.

Setup a Shared Folder – Manager Computer

You need to perform the following steps on the computer(s) of those who want to manage the managee.

  1. Click on “Tasks” at the bottom of Outlook.
  2. Click on the “Folder” tab at the top of the tasks view.
  3. Click “Open Shared Tasks” (third to last entry).
  4. Type the name of the managee or select from the list that appears when you select the “Name…” button.
  5. The managee’s tasks should then appear in a folder with their name under a “Shared Folders” heading on the left-hand-side.

Managing

Adding tasks for the managee:

  1. On the manager’s computer, go to “Tasks” in Outlook.
  2. Select the folder with the managee’s name.
  3. Then select “New Task” from the top.
  4. The added task will now appear in the “Tasks” list on the managee’s computer.
  5. It is recommended to add a “Category” that says who added the task – this will help the managee filter by sender.

On the managee’s side:

  1. If they go to “Tasks” in Outlook and select the “To-Do List” view (red flag) from the “Home” top menu they can see all tasks due in the future and past in a handy to-do list.
  2. The managee can then concentrate on doing the tasks due under the “Today” section (or those in the past).

The manager can now, via Outlook on their computer, edit existing tasks. For example:

  1. On the manager’s computer, go to “Tasks” in Outlook.
  2. Select the folder with the managee’s name.
  3. View the “To-Do List” for the selected folder.
  4. Double click a task to edit or delete (this will only work for tasks created by the manager).

Tasks can be reassigned to a different date, can have their priority changed, can have notes added, etc.

Hide Private Tasks

If the managee is using tasks and does not want these viewable by everyone (e.g. “walk dog”, “pick up crack pipe” etc.) we need to create a private folder.

  1. Click on “Tasks” at the bottom of Outlook.
  2. Click on the “Folder” tab at the top of the tasks view.
  3. Click “New Folder” and call this “Private Tasks”.
  4. On the “Home” tab select “Simple List” in the “Current View”, select all existing tasks (using SHIFT), then click the “Move” button (to the right) and select the “Private Tasks” folder.
  5. New private tasks should then be added to the “Private Tasks” folder (by selecting it on the left-hand-side before adding a task).

Let me know if you find any tricks or alternatives.
