Can robots have needs?

A patent attorney digresses.

I recently read an article by Professor Margaret Boden on “Robot Needs”. While I agree with much of what Professor Boden says, I feel we can be more precise with our questions and understanding. The answer is more “maybe” than “no”.

Warning: this is only vaguely IP related.

Definitions & Embodiment

First, some definitions (patent attorneys love debating words). The terms “robot”, “AI” and “computer” are used interchangeably in the article. This is one of the problems of the piece, especially when discussing “needs”. If a “computer” is simply a processor, some memory and a few other bits, then yes, a “computer” does not have “needs” as commonly understood. However, it is more of an open question as to whether a computing system, containing hardware and software, could have those same “needs”.

AI

This brings us to “AI”. The meaning of this term has changed in the last few years, best seen perhaps in recent references to an “AI” rather than “AI” per se.

  • In the latter half of the twentieth century, “AI” was mainly used in a theoretical sense to refer to non-organic intelligence. The ambiguity arises with the latter half of the term. “Intelligence” means many different things to many different people. Is playing chess or Go “intelligent”? Is picking up a cup “intelligent”? I think the closest we come to agreement is that it generally relates to higher cortical functions, especially those demonstrated by human beings.
  • Since the “deep learning” revival broke into public consciousness (2015+?), “AI” has taken on a second meaning: an implementation of a multi-layer neural network architecture. You can download an “AI” from GitHub. “AI” here could be used interchangeably with “chatbot” or a control system for a driverless car. On the other hand, I don’t see many people referring to SQL or DBpedia as an “AI”.
  • “AI” tends to be used to refer more to the software aspects of “intelligent” applications rather than a combined system of server and software. There is a whiff of Descartes: “AI” is the soul to the server’s “body”.

Based on that understanding, do I believe an “AI” as exemplified by today’s latest neural network architecture on GitHub has “needs”? No. This is where I agree with Professor Boden. However, do I believe that a non-organic intelligence could ever have “needs”? I think the answer is: Yes.

Robots

This leads us to robots. A robot is more likely to be seen as having “needs” than “AI” or a “computer”. Why is this?


Robots have a presence in the physical world – they are “embodied”. They have power supplies, motors, cameras, little robotic arms, etc. (Although many forget that your normal rack servers share a fair few components.) They clearly act within the world. They make demands on this world; they need to meet certain requirements in order to operate. A simple one is power: no battery, no active robot. I think most people could understand that, in a very simple way, the robot “needs” power.

Let’s take the case where a robot is powered by a software control system. Now we have a “full house”: a “robot” includes a “computer” that executes an “AI”. But where does the “need” reside? Again, it feels wrong to locate it in the “computer” – my laptop doesn’t really “need” anything. Saying an “AI” “needs” something is like saying a soul “needs” food (regardless of whether you believe in souls). We then fall back on the “robot”. Why does the robot feel right? Because it is the most inclusive abstract entity that encompasses an independent agent that acts in the world.

Needs, Goals & Motivation

Before we take things further, let’s go on a detour to look at “needs” in more detail. In the article, “needs” are described together with “goals” and “motivation”. Maslow’s famous pyramid features. In this way, a lot is packaged into the term.

Maslow’s Pyramid – By Factoryjoe on WikiCommons

Can we have “needs” without “goals”? Possibly. A quick google shows several articles on “What Bacteria Need to Live” (clue: raw chicken and your kitchen). I think we can relatively safely say that bacteria “need” food and water and a benign environment. Do bacteria have “goals”? Most would say: No. “Goals”, especially as used to describe human behaviour, suggests the complex planning and goal-seeking machinery of the human brain (e.g., as a crude generalisation: the frontal lobes and corpus striatum amongst others). So we need to be careful mixing these – we have a term that may be applied to the lowest level of life, and a term that possibly only applies to the highest levels of life. While robots could relatively easily have “needs”, it would be much more difficult to construct one with “goals”. We would also stumble into “motivation” – how does a robot transform a “need” into a “goal” to pursue?
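To make that gap concrete, here is a minimal, purely illustrative Python sketch of one way a robot’s “need” could be turned into a “goal” via a crude “motivation” signal. Everything in it – the signal names, the set-points, the need-to-behaviour mapping – is an assumption written for illustration, not a description of any real system.

```python
from dataclasses import dataclass


@dataclass
class Need:
    """A homeostatic 'need': an internal variable with a desired set-point."""
    name: str
    level: float      # current value, e.g. battery charge in [0, 1]
    set_point: float  # value the robot 'wants' to maintain

    @property
    def urgency(self) -> float:
        """Crude 'motivation' signal: how far we are below the set-point."""
        return max(0.0, self.set_point - self.level)


def select_goal(needs: list[Need]) -> str | None:
    """Turn the most urgent unmet need into a 'goal' (a named behaviour)."""
    behaviours = {"power": "seek_charging_station", "temperature": "move_to_shade"}
    unmet = [n for n in needs if n.urgency > 0.0]
    if not unmet:
        return None  # the environment currently satisfies all needs
    most_urgent = max(unmet, key=lambda n: n.urgency)
    return behaviours.get(most_urgent.name)


if __name__ == "__main__":
    needs = [Need("power", level=0.2, set_point=0.5),
             Need("temperature", level=0.7, set_point=0.6)]
    print(select_goal(needs))  # -> "seek_charging_station"
```

Note that the interesting parts – where the urgency signal and the need-to-goal mapping come from – are exactly the bits hand-written here, which is rather the point: in this sketch the “motivation” belongs to the designer, not the robot.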

Now, as human beings we instinctively know what “motivation” feels like. It is that feeling in the bladder that drives you off your chair to the toilet; it is the itchy uneasiness and dull empty abdominal ache that propels you to the crisp packet before lunch; it is the parched feeling in the throat and the awareness that your eyes are scanning for a chiller cabinet. It is harder to put it into words, or even to know where it starts or ends. Often we just do. Asked why we are doing what we do, the brain makes up a story. Sometimes there is a vague correlation between the two.

Now this is interesting. Let’s have a look at brains for more insight.

Brains

Nature is great. She has evolved what is, at least on Earth, the most efficient data processing device (ignore that the “she” here also doesn’t really exist). Looking at how she has done this allows us to cheat a little when building robots.

A first thing to note is that nature is lazy and stupid (hurray!). She recycles, duplicates, always takes the easy option. This paradoxically means we have arrived at efficiency through inefficiency. Brains started out as chemical gradients, then rudimentary cellular architecture to control these gradients, then multi-cellular architectures, nervous passageways, spinal cords, brain stems, medullas, pons, mid-brains, limbic structures and cortex. Structures are built on top of structures and wired up in ways that would give an electrician a heart attack. Plus structures are living – they change and grow over time within an environment.

In the brain “needs”, at least those near the bottom of the Maslowian pyramid, map fairly nicely onto lower brain structures: the brain stem, medulla, pons, and mid-brain. The thalamus helps to bridge the gap between body and cortex. The cortex then stores representations of these “needs”, and maps them to and from sensory representations. Another crude and incorrect generalisation, but those lower structures are often called the “lizard brain”, as those bits of neural hardware are shared with our reptilian cousins. The raw feeling of “needs” such as hunger, thirst, sexual desire, escape and attack is possibly similar across many animals. What does differ is the behaviour and representations triggered in response to those needs, as well as the top down triggering (e.g. what makes a human being fear abstract nouns).

Lower Brain Structure – Cancer Research UK / Wikimedia Commons

To quote from this article on cortical evolution:

Comparative studies of brain structure and development have revealed a general bauplan that describes the fundamental large-scale architecture of the vertebrate brain and provides insight into its basic functional organization. The telencephalon not only integrates and stores multimodal information but is also the higher center of action selection and motor control (basal ganglia). The hypothalamus is a conserved area controlling homeostasis and behaviors essential for survival, such as feeding and reproduction. Furthermore, in all vertebrates, behavioral states are controlled by common brainstem neuromodulatory circuits, such as the serotonergic system. Finally, vertebrates harbor a diverse set of sense organs, and their brains share pathways for processing incoming sensory inputs. For example, in all vertebrates, visual information from the retina is relayed and processed to the pallium through the tectum and the thalamus, whereas olfactory input from the nose first reaches the olfactory bulb (OB) and then the pallium.

“Needs” near the middle or even the top of Maslow’s pyramid are generally mammalian needs. These include love, companionship, acceptance and social standing. Consensus is forming that nature hijacked parental bonds, especially those that arise from and encourage breast feeding, to build societies. An interesting question is whether this requires the increase in cortical complexity that is seen in mammals. These “needs” mainly arise from the structures that surround the thalamus and basal ganglia, as well as mediators such as oxytocin. So that pyramid does actually have a vague neural correlate; we build our social lives on top of a background of other more essential drives.

The Limbic Lobe – Illustration from Anatomy & Physiology, Connexions Web site. http://cnx.org/content/col11496/1.6/, Jun 19, 2013.

The top of Maslow’s pyramid is contentious. What the hell is self-actualisation? Being the best you you can be? What does that mean? The realisation of talents and potentialities? What if my talent is organising people to commit genocide? Rants aside, Wikipedia gives us something like:

Expressing one’s creativity, quest for spiritual enlightenment, pursuit of knowledge, and the desire to give to and/or positively transform society are examples of self-actualization.

What these seem to be are human qualities that are generally not shared with other animals. Creativity, spirituality, knowledge and morality are all enabled by the more developed cortical areas found in human beings, as coordinated by the frontal lobes, where these cortical areas feed back to both the mammalian and lower brain structures.

A person may thus be likened to a song. The beat and bass are provided by the lower brain structures, the lead guitar and vocals by the mammalian structures, and the song itself (in terms of how these are combined in time) by the enlarged cortex.

Back to Needs

We can now understand some of the problems that arise when Professor Boden refers to “needs”. Human “needs” arise at a variety of levels, where higher levels are interconnected with and feed back to lower levels. Hence, you can talk about “needs” such as hunger relatively independently of social needs, but social needs only arise in systems that experience hunger. There is thus a question of whether we can talk about social needs independently of lower needs.

We can also see how a blanket answer to the question “can robots ever have needs?” ignores this hierarchy. It is easier to see how a robot could experience a “need” equivalent to hunger than it is to see it experience a “need” equivalent to acceptance within a social group. It is extremely difficult to see how we could have a “self-actualised” robot.

Environment

Before we look at whether robots care, we also need to introduce “the environment”. Not even human beings have “needs” in isolation. Indeed, a “need” implies something is missing: if an environment fulfils the requirement of a need, is it still a “need”?

Additionally, behaviour that is not suited to an environment would fall outside most lay definitions of “intelligence”. “Intelligence” is thus to a certain extent a modelling of the world that enables environmental adaptation.


The environment comes into play in two areas: 1) human “needs” have evolved within a particular “environment”; and 2) a “need” is often expressed as behaviour that obtains a requirement from the “environment” that is not immediately present.

Food, water, a reasonable temperature range (10 to 40 degrees Celsius), and an absence of harmful substances are fairly fundamental for most life; but these are actually a mirror image of the physical reality in which life on Earth evolved. If our planet had an ambient temperature of 50 to 100 degrees Celsius, would we require warmth? Can non-hydrogen-based life exist without water? Could you feed off cosmic rays?

These are not ancillary points. If we do create complex information processing devices that act in the world, where behaviour is statistical and environment-dependent, would their low-level needs overlap with ours? At present it appears that a source of electrical power is a fairly fundamental “robot” or “AI” need. If that electrical power is generated from urine, do we have a “need” for power or for urine? If urine is correlated with over-indulging on cider at a festival, does the “AI” have a “need” for inebriated festival goers?

The sensory environment of robots also differs from human beings. Animals share evolutionary pathways for sensory apparatus. We have similar neuronal structure to process smell, sight, sound, motor-feedback, touch and visceral sensations, at least at lower levels of processing complexity. In comparison, robots often have simple ultrasonic transceivers, infra-red signalling, cameras and microphones. Raw data is processed using a stack of libraries and drivers. What would evolution in this computing “environment” look like? Can robots evolve in this environment?

Do robots have “needs”?

So back to “robots”. It is easier to think about “robots” than “AI”, as they are embodied in a way that provides an implicit reference to the environment. “AI” in this sense may be used much as we use “brain” and “mind” (it being difficult with deep learning to separate software structure from function).

Do robots have “needs”? Possibly. Could robots have “needs”? Yes, fairly plausibly.

Given a device with a range of sensory apparatus, a range of actuators such as motors, and modern reinforcement learning algorithms (see here and here) you could build a fairly autonomous self-learning system.

The main problem would not be “needs” but “goals”. All the reinforcement learning algorithms I am aware of require an explicit representation of “good”, normally in the form of a “score”. What is missing is a mapping between the environment and the inner state of the “AI”. This is similar to the old delineation between supervised and unsupervised learning. It doesn’t help that roboticists skilled at representing physical hardware states tend to be mechanical engineers, whereas AI researchers tend to be software engineers. What is needed is a mirroring of the current approach, so that we can remove explicit scores altogether (this is an aim of “inverse reinforcement learning”). While this appears to be a lacuna in most major research efforts, it does not appear insurmountable. I think the right way to go is for more AI researchers to build physical robots. Physical robots are hard.
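To show what an explicit “score” means in practice, here is a deliberately tiny tabular Q-learning sketch in Python. The states, actions and reward function are invented for illustration; the thing to notice is that the reward() function – the “score” – is written by the designer rather than derived from the machine’s own internal state.

```python
import random
from collections import defaultdict

# Illustrative, hand-specified pieces: a coarse battery state and two actions.
ACTIONS = ["explore", "recharge"]


def reward(state: str, action: str) -> float:
    """The explicit 'score': entirely hand-written by the designer."""
    if state == "battery_low" and action == "recharge":
        return 1.0
    if state == "battery_low" and action == "explore":
        return -1.0
    return 0.1  # mild reward for exploring while the battery is ok


def step(state: str, action: str) -> str:
    """Toy environment dynamics: exploring sometimes drains the battery."""
    if action == "recharge":
        return "battery_ok"
    return "battery_low" if random.random() < 0.3 else state


q = defaultdict(float)           # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

state = "battery_ok"
for _ in range(5000):
    # Epsilon-greedy action selection over the learned values.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    r = reward(state, action)
    next_state = step(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
    state = next_state

print({k: round(v, 2) for k, v in sorted(q.items())})
```

Replacing that hand-written reward() with something grounded in the machine’s own internal state – or inferred from demonstrations, as inverse reinforcement learning attempts – is the missing mapping described above.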

Do robots care?

Do most “robots” as currently constructed “care”? I’d agree with Professor Boden and say: No.


“Care” suggests a level of social processing that the majority of robot and AI implementations currently lack. Being the self-regarding species that we are, most “social” robots as currently discussed are robots designed to interact with human beings. Expecting this to naturally result in some form of social awareness or behaviour is nonsensical: it is similar to asking why flies don’t care about dogs. One reason human beings are successful at being social is that we have a fairly sophisticated model of human beings to go on: ourselves. This model isn’t exact (or even accurate), and is largely implemented below our conscious awareness. But it is one up from the robots.

A better question is possibly: do ants care? I don’t know the answer. On one hand: No, it is difficult to locate compassion or sympathy within an ant. On the other hand: Yes, they have complex societies where different ants take on different roles, and they often act in a way that benefits those societies, even to the detriment of themselves. Similarly, it is easier to design a swarm of social robots that could be argued to “care” about each other than it is to design a robot that “cares” about a human being.

Also, I would hazard a guess that a caring robot would first need to have some form of autonomy; it would need to “care” about itself first. An ant that cannot acquire its own food and water is not an ant that can help in the colony.

Could future “robots” “care”? Yes – I’d argue that it is not impossible. It would likely require a complex representation of human social needs but maybe not the complete range of higher human capabilities. There would always be the question of: does the robot *truly* care? But then this question can be raised of any human being. It is also a fairly pertinent question for psychopaths.

Getting Practical

Despite the hype, I agree with Professor Boden that we are a long way from any “robot” or “AI” operating in a way that is seen as nearing human. Much of the recent deep learning success involves models that appear cortical; we seem to have ignored the mammalian areas and the lower brain structures. In effect, our rationality is trying to build perfectly rational machines. But because they skip the lower levels that tie things together, and ignore the submerged subconscious processes that mainly drive us, they fall short. If “needs” are seen as an expression of these lower structures and processes, then Professor Boden is right that we are not producing “robots” with “needs”.

As explained above though, I don’t think creating robots with “needs” is impossible. There may even be some research projects where this is the case. We do face the problem that so far we are coming at things backwards, from the top down instead of the bottom up. Using neural network architectures to generate representations of low-level internal states is a first step. These states may be battery levels, voltages, currents, processor cycles, memory usage, and other sensor signals. We may need to evolve structural frameworks in simulated space and then build upon these. The results will only work if they are messy.
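As a sketch of what that first step might look like, here is a toy example in plain NumPy: a tiny linear autoencoder compressing a handful of simulated low-level signals (battery level, voltages, load and so on) into a small “internal state” representation. The signals are faked and the architecture is deliberately trivial; this is an assumption-laden illustration of the idea, not a recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake low-level telemetry: battery level, supply voltage, motor current,
# CPU load, memory use. In a real robot these would be read from drivers;
# here they are random but correlated, which is all the toy model needs.
n, d, h = 2000, 5, 2
base = rng.normal(size=(n, 1))
telemetry = np.hstack([base + 0.1 * rng.normal(size=(n, 1)) for _ in range(d)])

# Tiny linear autoencoder: compress 5 raw signals into a 2-dimensional
# "internal state" and reconstruct them again.
W_enc = rng.normal(scale=0.1, size=(d, h))
W_dec = rng.normal(scale=0.1, size=(h, d))
lr = 0.05

for epoch in range(500):
    z = telemetry @ W_enc          # compressed internal-state representation
    recon = z @ W_dec              # reconstruction of the raw signals
    err = recon - telemetry
    loss = np.mean(err ** 2)
    # Gradient descent on the mean squared reconstruction error.
    grad_dec = (z.T @ err) / n
    grad_enc = (telemetry.T @ (err @ W_dec.T)) / n
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"final reconstruction error: {loss:.4f}")
```

The point is not the architecture but the direction of travel: the “need”-relevant signals live inside the machine, and a learned representation of them would then have to be wired back into behaviour – the bottom-up step argued for above.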

European Divisional Applications: Sanity Prevails

Huzzah.

The Administrative Council of the European Patent Office has decided to accept a change to Rule 36 EPC to remove the 24-month time limit for filing divisional applications.

Update: The actual decision has now been published and can be found here: http://www.epo.org/law-practice/legal-texts/official-journal/ac-decisions/archive/20131024a.html .

A draft of the decision is set out below:

Article 1
The Implementing Regulations to the EPC shall be amended as follows:

1. Rule 36(1) shall read as follows:

“(1) The applicant may file a divisional application relating to any pending earlier European patent application.”

2. The following paragraph 4 shall be added to Rule 38:

“(4) The Rules relating to Fees may provide for an additional fee as part of the filing fee in the case of a divisional application filed in respect of any earlier application which is itself a divisional application.”

3. Rule 135(2) shall read as follows:

“(2) Further processing shall be ruled out in respect of the periods referred to in Article 121, paragraph 4, and of the periods under Rule 6, paragraph 1, Rule 16, paragraph 1(a), Rule 31, paragraph 2, Rule 36, paragraph 2, Rule 40, paragraph 3, Rule 51, paragraphs 2 to 5, Rule 52, paragraphs 2 and 3, Rules 55, 56, 58, 59, 62a, 63, 64 and Rule 112, paragraph 2.”
Article 2

1. This decision shall enter into force on 1 April 2014.
2. It shall apply to divisional applications filed on or after that date.

The proposals are described in more detail in this document kindly circulated by the President of the European Patent Institute.

Basically, Rule 36(1) EPC reverts to its old form – from 1 April 2014 you will be able to file divisional applications as long as the parent application is still pending. [Subject to the caveat that an official version of the decision is yet to be published.]

Extra charges are being introduced, but these will only apply to divisionals of divisionals (i.e. “second generation divisional applications”). The exact charges have not been decided yet.

Although some will cry “u-turn” (not European Patent Attorneys though, we are all too polite), it is good to see the Administrative Council listen to feedback from users of the current system and act accordingly.

Feedback

Out of 302 responses received to a survey in March 2013, only about 7% sympathised with the current system. The negative consequences of the current system are clear:

It requires applicants to decide too early whether to file divisional applications (146 responses), e.g. before being sure of their interest in the inventions or their viability, prior to the possible emergence of late prior art, before having had the opportunity to dispute a non-unity objection, or even before being sure of the subject-matter for which (unitary) patent protection will be sought. Thus, the applicant is forced to file precautionary divisionals, thereby increasing the costs associated with prosecution (143 responses).

The time limits have not met their objectives (102 responses), since there has been no reduction in the number of divisionals, legal certainty has not increased, long sequences of divisionals are still possible, or there has been no acceleration of examination.

The time limits are complex and difficult to monitor, creating an additional burden and further costs for applicants and representatives (89 responses).

The negative effects of the introduced time limits are increased by the slow pace of examination (82 responses).

To address these consequences, the message from users was also clear: 65% voted for the reinstatement of the previous Rule 36 EPC. To their credit, it appears that the Administrative Council has done just that.

Reading between the lines it seems that enquiries from users of the system regarding the 24-month time limit and its operation were also causing a headache for the European Patent Office.

More Divisionals After Refusal?

An interesting aside is that it appears G1/09 is gaining ground. The document prepared for the Administrative Council stated:

This practice [of filing divisional applications before oral proceedings], though, has lost most of its basis since the Enlarged Board of Appeal issued its decision G 1/09 on 27 September 2010, in which it came to the conclusion that a European patent application which has been refused by a decision of the Examining Division is thereafter still pending within the meaning of Rule 25 EPC 1973 (current Rule 36(1) EPC) until the expiry of the time limit for filing a notice of appeal.

Consequently, applicants may file divisional applications after refusal of the parent application, without the need to resort to precautionary filings before oral proceedings.

These “zombie” divisional applications always gave me the creeps – I would prefer to file before refusal. It will be interesting to see whether the practice of filing divisional applications in the notice of appeal period will now increase.

Amazon’s “One Click”: Still Not Inventive [T 1244/07]

Often, when discussing my job in public, I receive a number of pontifications on the various merits, or more usually demerits, of “the patent system”.  It is said that “the patent system” is “broken” and that there are far too many “dubious patents”. Oft cited is Amazon’s “1-Click” patent, typically described as a monopoly on anything “1-Click” related.

In reply, I politely ask which “patent system” is being discussed, explaining the system of national territorial rights. The eyes then glaze over as I launch my defence.

Helpfully for me the European Patent Office has recently published Board of Appeal decision T 1244/07. This case relates to a divisional European patent application within the “1-Click” family. It is worth a read, especially for those with negative views of “the patent system”, as it rather nicely shows the European Patent system operating smoothly with not a breakage in sight.

Firstly,  the claimed invention does not consist of the words “1-Click”. Instead, all of the following features are required:

         “A method for ordering an item using a client system, the method comprising:

receiving from a server system a client identifier of the client system when the client system first interacts with a server system;

persistently storing the client identifier at the client system, wherein the client identifier is from then on included in messages sent from the client system to the server system and retrieved by the server system each time a message with an identifier is received from the client system by the server system;

storing at the server system for that client and other clients a customer table containing a mapping from each client identifier identifying a client system to a purchaser last associated to said client system;

storing at the server system customer information for various purchasers or potential purchasers, said customer information containing purchaser-specific order information, including sensitive information related to the purchaser;

connecting at a later point in time, when a purchase is intended, the client system to the server system, comprising the steps of:

sending from the client system a request for information describing an item to be ordered along with the client identifier;

determining at the server system whether single-action ordering is enabled for that purchaser at the client system;

if enabled sending from the server system the requested information to the client system along with an indication to perform a single action to place the order for the item;

displaying at the client system information identifying the item and displaying an indication of a single action that is to be performed to order the identified item,

performing at the client system that single action and in response to that indicated single action being performed, sending to a server a single action order to order the identified item and automatically sending the client identifier whereby a purchaser does not input identification information when ordering the item, and

completing at the server system the order by adding the purchaser-specific order information including said sensitive information that is mapped to the client identifier received from the client system.”
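For readers more at home with code than claim language, here is a rough, purely illustrative Python sketch of the flow those features describe: a persistent client identifier (in practice a cookie) is mapped server-side to stored purchaser details so that a single action can place an order. Every name and data structure below is invented for illustration and says nothing about how the system was actually implemented, or about the claim’s legal scope.

```python
# Illustrative sketch only: a persistent client identifier mapped to stored
# purchaser details, enabling a single-action ("one click") order.
import uuid

customer_table = {}          # client_id -> purchaser last associated with it
customer_info = {}           # purchaser -> stored (sensitive) order details
single_action_enabled = set()


def first_interaction(purchaser: str, payment_details: str) -> str:
    """Server issues a persistent client identifier (a cookie, in practice)."""
    client_id = str(uuid.uuid4())
    customer_table[client_id] = purchaser
    customer_info[purchaser] = payment_details
    single_action_enabled.add(purchaser)
    return client_id


def request_item_page(client_id: str, item: str) -> dict:
    """Client requests an item page, sending its stored identifier along."""
    purchaser = customer_table.get(client_id)
    return {"item": item, "single_action": purchaser in single_action_enabled}


def single_action_order(client_id: str, item: str) -> str:
    """One click: the server completes the order from stored details."""
    purchaser = customer_table[client_id]
    details = customer_info[purchaser]
    return f"Ordered {item} for {purchaser} using stored details ({details})."


cookie = first_interaction("alice", "card ending 4242")
page = request_item_page(cookie, "a book")
if page["single_action"]:
    print(single_action_order(cookie, page["item"]))
```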

Secondly, even with this long list of features the Board of Appeal found the claim to lack an inventive step over previously published documents. In particular, a journal article called “Implementing a Web Shopping Cart” by Baron C. et al. was cited.

Thirdly, I enjoyed the thinly-veiled dig at US and Canadian law:

         27. It is interesting to observe the outcome of this application in other jurisdictions.

In the US, where there is no specific exclusion for business methods, the validity of the equivalent claims was never decided in court, but a decision by the Court of Appeal of the Federal Circuit (D6) lifted an injunction on the basis that the alleged infringer had “raised substantial questions as to the validity of the … patent”. The patent was also re-examined and allowed in essentially the same form albeit limited with additional features of a shopping cart. The office action in the re-examination did not discuss D1, or go into details of cookie technology and the skilled person’s appreciation of it.

In Canada, the examiner had considered equivalent claims to be obvious over D5 and cookie technology. The review (D7) by the Commissioner of Patents found that the use of a cookie to retrieve purchaser-specific information was obvious (point 87), and the single-action ordering aspect not obvious (point 102), but an unallowable business method (point 181) and not technical (point 186). On appeal, the Federal Court overturned the latter findings for having no basis under Canadian law for such exclusions. D1 was not discussed in either of these decisions.

Sustainability, Innovation and Creativity at TEDxBristol

On 8 September 2011 I attended TEDxBristol, an independently organised TED conference showcasing local leaders in the fields of sustainability, innovation and creativity. The event was held at Bristol’s newly-opened Mshed in a waterfront area that is quickly becoming a creative hub for digital industries.

The theme for the event was the World Around Us. A particular strength of Bristol enterprise is the ability to work both at a global and local level.

  • Wendy Stephenson, a renewable energy engineer, described how The Converging World helped a small Somerset village, Chew Magna, build wind turbines in India to offset their carbon emissions.
  • Tony Bury, a philanthropist and serial entrepreneur, explained the difference a mentor can make. His charity, The Mowgli Foundation, matches mentors with entrepreneurs in South West UK, Jordan, Lebanon and Syria.

The Innovation session saw talks from, amongst others, Bloodhound SSC, the Nanoscience & Quantum Information (NSQI) Centre at the University of Bristol and inventor Tom Lawton.

  • Richard Noble of Bloodhound SSC entertainingly explained how they found themselves inspiring the next generation of engineers as part of their efforts to obtain a Eurofighter jet engine. The Bloodhound Education Programme involves over 2,410 primary and secondary schools, 176 further education colleges and 33 universities. Their journey to build a 1000mph car is now gathering pace, with tests due to begin in just over a year. They are aptly based just behind the SS Great Britain in Bristol Docklands.
  • Professor Mervyn Miles presented some of the Centre’s current research, including some mind-bending work on a holoassembler. This device uses optical traps of focused near infra-red radiation, positioned in space via a dynamic hologram, to assemble microscopic, and even nanoscopic, structures. Researchers have combined this technology with a multi-touch interface to create a system that would not look out of place in a science fiction film.
  • Tom’s talk provided a fascinating insight into the ups and downs of a private inventor. Over the last 10 years Tom has worked on a 360 degree camera for capturing immersive images (a “BubbleScope”). Tom explained his journey from his initial inspiration while travelling to his current iPhone pre-production accessory.

The event also featured performances that built upon another of the West Country’s strengths: an ability to combine technology and the arts to produce truly original creations.

  • nu desine, a young start-up from Bristol, showed off their AlphaSphere musical instrument. Their presentation also produced one of my favourite quotes from the event: “I don’t rap, I’m an electronic engineer”.
  • David Glowacki, a theoretical chemist at the University of Bristol, demonstrated his Danceroom Spectroscopy project. This fuses theoretical Feynman-Hibbs molecular dynamics simulations with a 3D imaging camera to allow the motion of dancers to warp the external forcefields felt by the simulated particles. The result is projected onto a screen with collisions mapped onto a musical output. View it here.
  • Tom Mitchell and Imogen Heap ended the day with a demonstration of Tom’s musical gloves. These gloves allow wearers to manipulate music using just hand gestures. The result is unbelievable; you can watch the performance here.

In all, the day was a resounding success. A big thank you to Karl Belizaire, the event organiser, and his team. Hopefully those that attended were inspired to create their own impact in Bristol and the wider World Around Us.

PS: I was very impressed with the designs and doodles from the event – Nat Al-Tahhan led the design work; check out her blog and event doodles here.

Copyright in the Digital Age – Infopaq Revisited

A couple of recent copyright cases, one in the European Court of Justice (ECJ) and one in the England & Wales High Court, have me worried about the balance between copyright and technology.

Before I begin, let me emphasise that I believe copyright is necessary to provide a reward for intellectual endeavour. There is nothing better to promote and pay for artistic and journalistic works. The problem is ascertaining an appropriate balance between the rights of copyright owners and the rights of the public and technologists. I also have a natural dislike of the use of copyright as a control mechanism, rather than as a (temporary) way to recoup commercial investment. With these recent cases I think the balance has tipped too far in the direction of rights holders. I wish to point out that this is my amateur interpretation of the cases; feel free to comment with corrections, and do not rely on it as legal opinion.

Infopaq

The first case is C-5/08 –  Infopaq International A/S v Danske Dagblades Forening (“Infopaq”) at the ECJ. The case considers a number of questions referred by the Danish Courts (Højesteret). The background is set out in the judgement, but to crudely summarise: Infopaq digitised newspapers (scan and OCR) in order to provide a searchable database of articles. A search term could then be entered and the results printed, together with an extract of 11 words indicating the context in which the search term appeared. The ECJ found that those 11 words could be subject to copyright protection if “the elements thus reproduced are the expression of the intellectual creation of their author”.   The comments of the ECJ, for example in paragraphs 40 and 47, suggest that this test, in most cases, would likely be satisfied.

Meltwater

In the second case – The Newspaper Licensing Agency Ltd & Ors v Meltwater Holding BV & Ors [2010] EWHC 3099 (Ch) (26 November 2010)  (“Meltwater”) – the English High Court cites Infopaq and considers similar issues. Both cases rely on an interpretation of Directive 2001/29/EC (“InfoSoc Directive”). Meltwater runs a Media Monitoring Service, which involves monitoring newspaper websites to create a database of articles. Website content is scraped to create the database, which can be searched for a particular term. Like Infopaq, the service may be used by commercial organisations who wish to monitor references to their brands in the media. An email report is sent out containing a hyperlink to each relevant article, the opening words of the article after the headline, and, like Infopaq’s service, an extract showing the context in which a search term appears. One of the questions was whether an End User infringes publisher’s copyright in the articles by receiving the email, and thus requires a licence. The Court found that copyright could exist in a headline and text extract. Receipt of the email constituted making a copy and thus there was infringement (see paragraph 102).

Meltwater then continued to suggest that supply of a hyperlink may infringe copyright (in this case an End User clicking on a link, and thus creating a copy would infringe) and that forwarding the hyperlink would also infringe (see paragraph 104). The exceptions were considered and found not to apply.

Consequences

Both cases appear to introduce an additional layer of legalistic worry on day-to-day digital activities. For example, the ECJ state that:

23. According to the Højesteret, it is not disputed in this case that consent from the rightholders is not required to engage in press monitoring activity and the writing of summaries consisting in manual reading of each publication, selection of the relevant articles on the basis of predetermined search words, and production of a manually prepared cover sheet for the summary writers, giving an identified search word in an article and its position in the newspaper. Similarly, the parties in the main proceedings do not dispute that genuinely independent summary writing per se is lawful and does not require consent from the rightholders.

This seems to suggest that the automation of this activity introduces civil liability. Automation necessarily involves storage of data, which in most cases is synonymous  with copying. This has been considered and the InfoSoc Directive provides exemption for technological use. The exemption requires:

  1. the act is temporary;
  2. it is transient or incidental;
  3. it is an integral and essential part of a technological process;
  4. the sole purpose of that process is to enable a transmission in a network between third parties by an intermediary of a lawful use of a work or protected subject-matter; and
  5. the act has no independent economic significance.

However, in both cases the exemption was found not to hold. In my opinion, conditions 4 and 5 are too onerous; for example, processing thousands of articles to determine sentiment for commercial ends would not comply with 4 or 5 but in my mind could be exempt.

Printing

It is interesting that reference is repeatedly made in Infopaq to the “printing” of the search results. If a hardcopy was not made, for example the search results were viewed on a computer screen or iPad, would the finding of infringement have been the same?

Small Parts and Twitter

I have generally been impressed with the common sense decisions in cases such as Francis Day & Hunter Limited v. Twentieth Century Fox Corp Limited [1940] AC 112, wherein copyright was found not to subsist in the title “the man who broke the bank at Monte Carlo”. More cases are set out in paragraph 61 of Meltwater. I think it is very dangerous to extend protection to short passages, titles, names and headlines, especially without any straightforward exemptions for citation. Indeed, one of the reasons titles were denied copyright was to allow the free citation of books and articles.

While Infopaq introduces the condition of “the elements thus reproduced [being] the expression of the intellectual creation of their author”, I think it would be difficult to prove this did not apply; every headline, even if mostly factual, has at least a scintilla of creation, e.g. word choice, placement etc. I think the Court in Meltwater accepted this was a likely result of following Infopaq.

The result of Infopaq and Meltwater is the situation that a Tweet, or even a newsagent’s sandwich-board, containing a newspaper headline is likely a prima facie infringement of copyright. Moreover, a Tweet with a hyperlink accompanying the headline is a further infringement, if permission has not been obtained. Clicking on the link is also an infringement. If you RT a Guardian, New York Times, Washington Post or Telegraph tweet linking to an article you may be infringing copyright. I cannot see this interpretation as being practical or commercially desirable. I would thus expect an appeal from Meltwater in due course and some clarification from the higher courts.

 

UPDATE: The Court of Appeal has now dismissed Meltwater’s appeal. Read the judgement here. The concerns set out above remain. While, in this particular case, the Courts have come to the right conclusion commercially (i.e. Meltwater’s customers required a licence), some guidance is required on use and exclusions to avoid losing the trust of ordinary Internet users.

Bath: Software City?

Bath. Well known for this kind of thing:

Not, however, so well known for this kind of thing:


While the professional, scientific and technical, and information and communications, industries make up 13% of Bath’s workforce, they contribute 27% of the total gross-value-added (GVA) contributions (see page 24 of BANES’s Economic Strategy publication). The future of this area was the subject of a talk at the Octagon, Bath as part of the Treasure & Transform exhibition. The Chimp was in attendance (any factual errors in the reportage are my own, for which I and my hastily iPhone-typed notes apologise).

The Panel

The talk was in the form of a panel discussion chaired by Simon Bond, Director of Bath Ventures Innovation Centre (University of Bath). The panel featured the cream of Bath’s software industry: Paul Kane, CEO of CommunityDNS, Richard Godfrey, Director and Founder of i-Principles and Shaun Davey, CEO of IPL.

What Makes Bath Good for Software?

First up was a discussion of the attributes that make Bath attractive for software and high-tech companies. This included:

  • Excellent local universities (e.g. Bath & Bath Spa) providing a stream of talented people.
  • Great quality of life, especially for families.
  • Unique concentration of creative skills, e.g. in publishing, design and the arts.
  • A culture of quality, of building things well.
  • A good (if not affordable) transport link into London.
  • A concentration of affluent people, e.g. for investment.

What Difficulties do Businesses in Bath Face?

Next, the talk moved on to the difficulties faced by software and high-tech companies in Bath. These included:

  • Transport problems: e.g. high cost of train travel; difficulty getting into the centre of the city; unpredictability of traffic; congestion and pollution.
  • Lack of office space suitable for software firms. Many offices were converted houses which were not suitable for the kind of open plan offices needed to allow productive brainstorming and idea sharing.
  • Communications infrastructure: outside of Bath University’s JANET link high-bandwidth connections were rare (although we were told BT are looking to install fibre in Kingsmead, mainly for residential customers). Also poor wi-fi (and 3G) coverage.
  • Cost of housing (I wholeheartedly agree with this one!): this prices out talented young people, e.g. those between 18 and 40, who find other areas more affordable.

How Can Bath Improve?

Having looked at the difficulties, ways in which Bath could improve its position were discussed:

  • Stop non-city-centre traffic from needing to pass through the city centre (this would need lobbying of the Highways Agency on a national level).
  • Lobby BT/Virgin to upgrade communications infrastructure. Innovative ideas such as laying fibre along the river/canal tow-path or in existing drains and/or providing wi-fi from the hills were put forward.
  • Develop world-class conference venues. International conferences were a great way for Bath to attract talented people and become a thought leader (this was thanks to Muir Macdonald, MD of BMT Defence Services).
  • Continue the recent co-operation between the public and private sectors; consultations were always welcomed.
  • Use SMEs in Bath for local services; there are many companies in Bath that outsource services to London-based firms when local equivalents exist.
  • Increase promotion of high-tech industries in Bath. High-tech industry has often been the poor relation to the tourism industry when it comes to press coverage and marketing, which needs to change.

My Two Cents

Comparisons were made with Old Street (so-called “Silicon Roundabout”) and Cambridge (so-called “Silicon Fen”). I have worked in (and with) software and high-tech companies in both these locations. Bath can learn certain lessons from these areas:

  • Bath needs more innovation centres. The current Innovation Centre is at capacity and the awkward location in the bow of the river in the city centre is not great for commuting. BANES has earmarked the Bath City Riverside (where I will soon live) as an area with potential for development. These industrial areas on the edge of the city (Twerton / Newbridge), with easy road access, would make ideal (and cheap) sites for incubation centres that would solve the office space problems discussed above. Cambridge has recently opened its Hauser Forum in a similar edge-of-city location, which complements sites such as St Johns Innovation Centre (where I have worked). These buildings provide more than office space; they enable like-minded talented young people to mingle and connect, building a network effect that results in real growth and GVA.
  • The panel discussed attracting highly-educated, young 18-30-somethings to Bath to work in the high-tech industries. It was amusing listening to the ideas put forward, because the Chimp falls exactly within that demographic. For the Chimp the greatest difficulty coming to Bath is a lack of affordable family housing; if you were employed in Bath on a good yet (business-wise) affordable salary you would not be able to afford any 3-bed property outside of the estates in the south-west of the city. There are more affordable properties outside of Bath, but this means you will need to drive into Bath, and thus face the traffic problems discussed above. Here, real bravery of thought and leadership is required, not only from the public sector, but from the more conservative and outspoken Bath public who often put a blanket ban on talk of development.

Summary

I will leave you with a corruption of Paul Kane’s ad-libbed marketing soundbite:

High-tech businesses in Bath thrive.

That they continue to thrive will be down to a continuation of the brave public-private partnerships and investments exemplified by the Treasure and Transform exhibition and seminars.


Some Interesting Euro-Moroccan Developments

I have always been a fan of the progressive nature of the European Patent Office, for example, including Turkey, which is (at present) a non-EU state.

I was thus glad to see signs of a new partnership between the Moroccan Patent Office and the EPO. An agreement has been signed that appears similar to existing extension state agreements (wherein a European patent can be registered as a national patent in Balkan states that are not yet signatories to the European Patent Convention).

Whereas the extension state agreements became a stepping stone for becoming a full EPC signatory, the new Moroccan agreement appears to be a looser co-operation agreement that could pave the way for easy conversion of a European patent into a Moroccan national patent. Exact implementation details will follow amendments to Moroccan national law.

More details can be found here:

http://www.epo.org/topics/news/2010/20101220.html

http://www.eplawpatentblog.com/eplaw/2010/12/epo-european-patents-may-become-valid-in-morocco.html

“EU Patent” Rumour Mill: Saga Update

Plans have been afoot since the dawn of time (well at least the 70s) for a unitary patent right across the whole of the EU (plus special guests like Turkey). See Wikipedia for background.

EU Patent?

As many may know, the existing “European Patent” is a bit of a hydra; on grant a centrally administered application is converted into a number of national rights which need to be managed and enforced nationally. For years European politicians have been labouring away to sort out this messy back-end to make it easier for patent owners to enforce their rights Europe-wide. However, the proposition covers a minefield of incendiary issues: language (the Italians and Spanish get particularly fired up over this); forum-shopping (the British and Germans do not like the thought of national rights being litigated in back-water jurisdictions); replacement of sovereign rights; integration with the European Patent Office; the disconnect between EU members and European Patent Convention (EPC) states (the latter being the more progressive grouping); etc.

Things had been put back on the front burner after several years in the wilderness. However, the language issue recently torpedoed any forward progress.

Interestingly though, this weekend a number of rumours have been circulating that several EU countries are making another stab at it, this time in the form of a private arrangement under the Lisbon Treaty. This quite cleverly side-steps the language issue by leaving the Spanish and Italians on the outside of the private arrangement. For example, PatLit reports on an article in the Financial Times and Axel Horns picks up on a tweet from Mr Vincent Van Quickenborne (great name), Belgian Minister for enterprise and streamlining policy.

Quick Post: IP Enforcement in the Age of Austerity – The PCC

Without much publicity (both inside and outside of the world of IP), fundamental reforms to the procedure for IP enforcement at the Patents County Court (PCC) have come into effect.

Basically, the PCC offers a more streamlined and cost effective forum for low value/complexity claims. Designed for small to medium sized enterprises (SMEs), costs are capped at £50,000. The form of proceedings sits somewhere between proceedings before the UKIPO, the EPO and the High Court.  Damages are due to be limited to £500,000 by legislation coming into force in April 2011.

The changes are summarised by the UKIPO here.

The reforms are discussed by Field Fisher Waterhouse here. [Update: details of the first case management conference have been published; see here]

Patlit has a useful two-part guide here and here.

The Spark comments on the transfer of cases from the High Court here (also discussed by Patlit here).

British Photographer v. Texan Pornographer

I read about this case last week. Although pretty straightforward from a legal point of view it made a good story: hard-working teenage amateur – check; evil Texan pornographer – check; lawyer-done-good – check; copyright issues – check; amusing email exchanges – check; and justice served – check.

I will leave the details of the story to Plagiarism Today and Eric Goldman (follow links). The photographer in question (Lara Jade Coton – who is now a bit older) writes about winning the case here.

An interesting point that arose was the separation in the US between copyright and publicity rights (see Eric Goldman’s report). Even if a photograph on the Internet appears to have a permissive (copyright) Creative Commons licence, if it features or references one or more people, and is used in advertising, you will also need to obtain permission from those people with regard to their publicity rights. This was apparently poorly considered by the court.

Another interesting aspect is the registration of copyright works in the US. If the photograph in question had been registered in the US, damages would have been a lot higher (in the end they were around $4k) and costs for attorneys’ fees may have been available. However, is it reasonable to expect a 14-year-old British amateur photographer to register her photos in the US? (Or even to know to do this?) What about a Flickr account with hundreds of photos? A quick glance at the US Copyright Office website shows that there is room for improvement in terms of usability and clarity; even the material for Teachers and Students is unclear as to the registration process (i.e. What exactly can be registered? What fees are required? How does registration function for works uploaded to Internet sites?)

In the present case, if there was no additional defamation action, it would likely not have been worth pursuing a copyright action. The issues with registration and enforcement raised by the case are not unique to the US; however, they again demonstrate that some form of global copyright is needed (e.g. to protect artists and their works) but that the present system(s) need revision in the Internet age.