On 22 February 2019, a Board of Appeal of the European Patent Office referred a set of questions relating to computer simulations to the Enlarged Board of Appeal (the European patent version of the Supreme Court). This is being considered as pending referral G1/19. This is only the second time that the Enlarged Board of Appeal have considered questions relating to computer-related inventions. If the Enlarged Board of Appeal choose to answer the questions, the result could influence how machine learning and artificial intelligence inventions are examined in Europe.
The full interlocutory decision that led to the referral can be found here: G1/19.
All the People, So Many People
The original case on appeal related to modelling pedestrian crowd movement. Claim 1 of the main request considered a model of a pedestrian that included a “provisional path” and “an inconvenience function”, where movement was analysed around various obstacles. The models could be used to design buildings such as train stations or stadiums.
The Board of Appeal found that claim 1 avoided the exclusions of Articles 52(2) and (3) EPC as it related to a “computer-implemented method”. However, the Board considered that claim 1 was straightforward to implement on a computer “requiring only basic knowledge of data structures and algorithms”. The Board also deemed that the design of the method was not motivated by technical considerations concerning the internal functioning of the computer. The question of inventive step thus revolved around whether claim 1 provided further technical aspects that go beyond the mere functioning of a conventionally programmed computer.
Electrons are People too?
The Appellant argued that the claim provided a further technical effect in the form of “a more accurate simulation of crowd movement”. They also argued that modelling a crowd was no different from modelling a set of electrons. The case on appeal thus considered what forms of activity could be seen as a technical task producing a technical effect. The Board was not convinced that numerically calculating the trajectory of an object as determined by the laws of physics was always a technical task; for example, they believed that a technical effect requires, at a minimum, a direct link with physical reality, such as a change in or a measurement of a physical entity.
The problem for the Board of Appeal was that the Appellant cited T 1227/05, which is discussed here. In that case, a numerical simulation of a noise-affected circuit was deemed to provide a technical effect as it related to “an adequately defined class of technical items”. The Board agreed that the reasoning of T 1227/05 could equally be applied to the modelling of pedestrians, as the laws of physics apply to both people and electrons. However, the Board appeared minded to go against T 1227/05, on the grounds that the benefits were either in the brains of the engineers using the simulations or were the common benefits of using computers to implement testing methods.
Is Software 2.0 Patentable in Europe?
The Board appreciated that numerical development tools and computer simulations play an important role in the development of new products. Many of the points under discussion could also apply to the much larger field of machine learning, where the lines between measurement and simulation are often blurred. Indeed, the Board note as much, saying that there may be no ground to distinguish between simulating a system and using a model to predict its function. The Board believed that a decision on the patentability of simulation methods needs to be made, and that guidance on the interpretation of Articles 52(2) and (3) and 56 EPC would be useful.
We have Questions
The Board have thus referred three questions to the Enlarged Board of Appeal:
1. In the assessment of inventive step, can the computer-implemented simulation of a technical system or process solve a technical problem by producing a technical effect which goes beyond the simulation’s implementation on a computer, if the computer-implemented simulation is claimed as such?
2. If the answer to the first question is yes, what are the relevant criteria for assessing whether a computer-implemented simulation claimed as such solves a technical problem? In particular, is it a sufficient condition that the simulation is based, at least in part, on technical principles underlying the simulated system or process?
3. What are the answers to the first and second questions if the computer-implemented simulation is claimed as part of a design process, in particular for verifying a design?
The referral raises a number of interesting points for computer-related inventions. In the interlocutory decision the Board go back to VICOM (T 208/84) to argue that a direct link with physical reality seems necessary. However, they cited different cases to suggest that it was not clear whether a direct or “real-world” effect needed to be present to provide a technical effect. With simulations there is also a question of whether non-claimed features, such as a future use, could be taken into account when assessing inventive step.
The Board of Appeal have done well to summarise the issues in the area of simulation and to highlight where further clarification would be useful. As is often the case, the actual claimed invention appears to be a straw person for consideration of broader policy questions.
The referral is timely. Cases relating to machine learning and “artificial intelligence” are increasing rapidly. Last time the Enlarged Board of Appeal had a chance to clarify the law for computer-implemented inventions, in G3/08, they dodged the bullet, arguing that the referral was inadmissible. The well-formed points by the Board of Appeal, and the general zeitgeist, mean that they may not be able to do this again.
First, some definitions (patent attorneys love debating words). The terms “robot”, “AI” and “computer” are used interchangeably in the article. This is one of the problems of the piece, especially when discussing “needs”. If a “computer” is simply a processor, some memory and a few other bits, then yes, a “computer” does not have “needs” as commonly understood. However, it is more of an open question as to whether a computing system, containing hardware and software, could have those same “needs”.
This brings us to “AI”. The meaning of this term has changed in the last few years, best seen perhaps in recent references to an “AI” rather than “AI” per se.
In the latter half of the twentieth century, “AI” was mainly used in a theoretical sense to refer to non-organic intelligence. The ambiguity arises with the latter half of the term. “Intelligence” means many different things to many different people. Is playing chess or Go “intelligent”? Is picking up a cup “intelligent”? I think the closest we come to agreement is that it generally relates to higher cortical functions, especially those demonstrated by human beings.
Since the “deep learning” revival broke into public consciousness (2015+?), “AI” has taken on a second meaning: an implementation of a multi-layer neural network architecture. You can download an “AI” from GitHub. “AI” here could be used interchangeably with “chatbot” or a control system for a driverless car. On the other hand, I don’t see many people referring to SQL or DBpedia as an “AI”.
“AI” tends to be used to refer more to the software aspects of “intelligent” applications rather than a combined system of server and software. There is a whiff of Descartes: “AI” is the soul to the server “body”.
Based on that understanding, do I believe an “AI” as exemplified by today’s latest neural network architecture on GitHub has “needs”? No. This is where I agree with Professor Boden. However, do I believe that a non-organic intelligence could ever have “needs”? I think the answer is: Yes.
This leads us to robots. A robot is more likely to be seen as having “needs” than an “AI” or a “computer”. Why is this?
Robots have a presence in the physical world – they are “embodied”. They have power supplies, motors, cameras, little robotic arms, etc. (Although many forget that your normal rack servers share a fair few components.) They clearly act within the world. They make demands on this world; they need to meet certain requirements in order to operate. A simple one is power: no battery, no active robot. I think most people could understand that, in a very simple way, the robot “needs” power.
Let’s take the case where a robot is powered by a software control system. Now we have a “full house”: a “robot” includes a “computer” that executes an “AI”. But where does the “need” reside? Again, it feels wrong to locate it in the “computer” – my laptop doesn’t really “need” anything. Saying an “AI” “needs” something is like saying a soul “needs” food (regardless of whether you believe in souls). We then fall back on the “robot”. Why does the robot feel right? Because it is the most inclusive abstract entity that encompasses an independent agent that acts in the world.
Needs, Goals & Motivation
Before we take things further, let’s go on a detour to look at “needs” in more detail. In the article, “needs” are described together with “goals” and “motivation”. Maslow’s famous pyramid features. In this way, a lot is packaged into the term.
Can we have “needs” without “goals”? Possibly. A quick google shows several articles on “What Bacteria Need to Live” (clue: raw chicken and your kitchen). I think we can relatively safely say that bacteria “need” food and water and a benign environment. Do bacteria have “goals”? Most would say: No. “Goals”, especially as used to describe human behaviour, suggest the complex planning and goal-seeking machinery of the human brain (e.g., as a crude generalisation: the frontal lobes and corpus striatum, amongst others). So we need to be careful mixing these terms – we have one that may be applied to the lowest level of life, and one that possibly only applies to the highest levels of life. While robots could relatively easily have “needs”, it would be much more difficult to construct one with “goals”. We would also stumble into “motivation” – how does a robot transform a “need” into a “goal” and pursue it?
Now, as human beings we instinctively know what “motivation” feels like. It is that feeling in the bladder that drives you off your chair to the toilet; it is the itchy uneasiness and dull empty abdominal ache that propels you to the crisp packet before lunch; it is the parched feeling in the throat and the awareness that your eyes are scanning for a chiller cabinet. It is harder to put into words, or even to know where it starts or ends. Often we just do. Asked why we are doing what we do, the brain makes up a story. Sometimes there is a vague correlation between the two.
Now this is interesting. Let’s have a look at brains for more insight.
Nature is great. She has evolved at least the Earth’s most efficient data processing device (ignore that the “she” here also doesn’t really exist). Looking at how she has done this allows us to cheat a little when building robots.
A first thing to note is that nature is lazy and stupid (hurray!). She recycles, duplicates, always takes the easy option. This paradoxically means we have arrived at efficiency through inefficiency. Brains started out as chemical gradients, then rudimentary cellular architecture to control these gradients, then multi-cellular architectures, nervous passageways, spinal cords, brain stems, medullas, pons, mid-brains, limbic structures and cortex. Structures are built on top of structures and wired up in ways that would give an electrician a heart attack. Plus structures are living – they change and grow over time within an environment.
In the brain, “needs”, at least those near the bottom of the Maslowian pyramid, map fairly nicely onto lower brain structures: the brain stem, medulla, pons, and mid-brain. The thalamus helps to bridge the gap between body and cortex. The cortex then stores representations of these “needs”, and maps them to and from sensory representations. Another crude and incorrect generalisation, but those lower structures are often called the “lizard brain”, as those bits of neural hardware are shared with our reptilian cousins. The raw feeling of “needs” such as hunger, thirst, sexual desire, escape and attack is possibly similar across many animals. What does differ is the behaviour and representations triggered in response to those needs, as well as the top-down triggering (e.g. what makes a human being fear abstract nouns).
Comparative studies of brain structure and development have revealed a general bauplan that describes the fundamental large-scale architecture of the vertebrate brain and provides insight into its basic functional organization. The telencephalon not only integrates and stores multimodal information but is also the higher center of action selection and motor control (basal ganglia). The hypothalamus is a conserved area controlling homeostasis and behaviors essential for survival, such as feeding and reproduction. Furthermore, in all vertebrates, behavioral states are controlled by common brainstem neuromodulatory circuits, such as the serotonergic system. Finally, vertebrates harbor a diverse set of sense organs, and their brains share pathways for processing incoming sensory inputs. For example, in all vertebrates, visual information from the retina is relayed and processed to the pallium through the tectum and the thalamus, whereas olfactory input from the nose first reaches the olfactory bulb (OB) and then the pallium.
“Needs” near the middle or even the top of Maslow’s pyramid are generally mammalian needs. These include love, companionship, acceptance and social standing. Consensus is forming that nature hijacked parental bonds, especially those that arise from and encourage breast feeding, to build societies. An interesting question is whether this requires the increase in cortical complexity that is seen in mammals. These “needs” mainly arise from the structures that surround the thalamus and basal ganglia, as well as mediators such as oxytocin. So that pyramid does actually have a vague neural correlate; we build our social lives on top of a background of other more essential drives.
The top of Maslow’s pyramid is contentious. What the hell is self-actualisation? Being the best you you can be? What does that mean? The realisation of talents and potentialities? What if my talent is organising people to commit genocide? Rants aside, Wikipedia gives us something like:
Expressing one’s creativity, quest for spiritual enlightenment, pursuit of knowledge, and the desire to give to and/or positively transform society are examples of self-actualization.
What these seem to be are human qualities that are generally not shared with other animals. Creativity, spirituality, knowledge and morality are all enabled by the more developed cortical areas found in human beings, as coordinated by the frontal lobes, where these cortical areas feed back to both the mammalian and lower brain structures.
A person may thus be likened to a song. The beat and bass provided by the lower brain structures, lead guitar and vocals by the mammalian structures, and the song itself (in terms of how these are combined in time) by the enlarged cortex.
Back to Needs
We can now understand some of the problems that arise when Professor Boden refers to “needs”. Human “needs” arise at a variety of levels, where higher levels are interconnected with and feed back to lower levels. Hence, you can talk about “needs” such as hunger relatively independently of social needs, but social needs only arise in systems that experience hunger. There is thus a question of whether we can talk about social needs independent of lower needs.
We can also see how the answer to the question: “can robots ever have needs?” ignores this hierarchy. It is easier to see how a robot could experience a “need” equivalent to hunger than it is to see it experience a “need” equivalent to acceptance within a social group. It is extremely difficult to see how we could have a “self-actualised” robot.
Before we look at whether robots care, we also need to introduce “the environment”. Not even human beings have “needs” in isolation. Indeed, a “need” implies something is missing; if an environment fulfils the requirement of a need, is it still a “need”?
Additionally, behaviour that is not suited to an environment would fall outside most lay definitions of “intelligence”. “Intelligence” is thus to a certain extent a modelling of the world that enables environmental adaptation.
The environment comes into play in two areas: 1) human “needs” have evolved within a particular “environment”; and 2) a “need” is often expressed as behaviour that obtains a requirement from the “environment” that is not immediately present.
Food, water, a reasonable temperature range (10 to 40 degrees Celsius), and an absence of harmful substances are fairly fundamental for most life; but these are actually a mirror image of the physical reality in which life on Earth evolved. If our planet had an ambient temperature of 50 to 100 degrees Celsius, would we require warmth? Can non-hydrogen-based life exist without water? Could you feed off cosmic rays?
These are not ancillary points. If we do create complex information processing devices that act in the world, where behaviour is statistical and environment-dependent, would their low-level needs overlap with ours? At present, it appears that a source of electrical power is a fairly fundamental “robot” or “AI” need. If that electrical power is generated from urine, do we have a “need” for power or for urine? If urine is correlated with over-indulging on cider at a festival, does the “AI” have a “need” for inebriated festival goers?
The sensory environment of robots also differs from human beings. Animals share evolutionary pathways for sensory apparatus. We have similar neuronal structure to process smell, sight, sound, motor-feedback, touch and visceral sensations, at least at lower levels of processing complexity. In comparison, robots often have simple ultrasonic transceivers, infra-red signalling, cameras and microphones. Raw data is processed using a stack of libraries and drivers. What would evolution in this computing “environment” look like? Can robots evolve in this environment?
Do robots have “needs”?
So back to “robots”. It is easier to think about “robots” than “AI”, as they are embodied in a way that provides an implicit reference to the environment. “AI” in this sense may be used much as we use “brain” and “mind” (it being difficult with deep learning to separate software structure from function).
Do robots have “needs”? Possibly. Could robots have “needs”? Yes, fairly plausibly.
Given a device with a range of sensory apparatus, a range of actuators such as motors, and modern reinforcement learning algorithms (see here and here) you could build a fairly autonomous self-learning system.
The main problem would not be “needs” but “goals”. All the reinforcement learning algorithms I am aware of require an explicit representation of “good”, normally in the form of a “score”. What is missing is a mapping between the environment and the inner state of the “AI”. This is similar to the old delineation between supervised and unsupervised learning. It doesn’t help that roboticists skilled at representing physical hardware states tend to be mechanical engineers, whereas AI researchers tend to be software engineers. Solving this requires a mirroring of the current approach, so that we can remove scores altogether (this is an aim of “inverse reinforcement learning”). While this appears to be a lacuna in most major research efforts, it does not appear insurmountable. I think the right way to go is for more AI researchers to build physical robots. Physical robots are hard.
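To make the point about explicit “scores” concrete, here is a minimal sketch under invented assumptions: a tabular Q-learning loop for a hypothetical robot whose only states are battery levels. The reward numbers are hand-written by the designer – the algorithm itself has no notion of “good”:

```python
import random

# Toy tabular Q-learning. The robot, its battery states (0-4) and the
# reward values are all invented for illustration.
ACTIONS = ["seek_charger", "explore"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

def step(battery, action):
    """Hypothetical environment: returns (next_battery, reward)."""
    if action == "seek_charger":
        return min(battery + 1, 4), 0.0
    if battery - 1 < 0:
        return 0, -10.0        # flat battery: the designer calls this "bad"
    return battery - 1, 1.0    # exploring scores a point: designer-defined "good"

q = {(b, a): 0.0 for b in range(5) for a in ACTIONS}

random.seed(0)
for _ in range(500):                      # episodes
    battery = random.randrange(5)
    for _ in range(20):                   # steps per episode
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(battery, a)])
        nxt, reward = step(battery, action)
        target = reward + GAMMA * max(q[(nxt, a)] for a in ACTIONS)
        q[(battery, action)] += ALPHA * (target - q[(battery, action)])
        battery = nxt

# With a flat battery, charging should now look better than exploring.
print(q[(0, "seek_charger")] > q[(0, "explore")])
```

The learned behaviour looks vaguely need-like, but only because a human chose the numbers `-10.0` and `1.0`; remove them and the algorithm has nothing to optimise.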
Do robots care?
Do most “robots” as currently constructed “care”? I’d agree with Professor Boden and say: No.
“Care” suggests a level of social processing that the majority of robot and AI implementations currently lack. Being the self-regarding species that we are, most talk of “social” robots at present refers to robots designed to interact with human beings. Expecting this to naturally result in some form of social awareness or behaviour is nonsensical: it is similar to asking why flies don’t care about dogs. One reason human beings are successful at being social is that we have a fairly sophisticated model of human beings to go on: ourselves. This model isn’t exact (or even accurate), and is largely implemented below our conscious awareness. But it is one up from the robots.
A better question is possibly: do ants care? I don’t know the answer. On one hand: No, it is difficult to locate compassion or sympathy within an ant. On the other hand: Yes, they have complex societies where different ants take on different roles, and they often act in a way that benefits those societies, even to the detriment of themselves. Similarly, it is easier to design a swarm of social robots that could be argued to “care” about each other than it is to design a robot that “cares” about a human being.
Also, I would hazard to guess that a caring robot would first need to have some form of autonomy; it would need to “care” about itself first. An ant that cannot acquire its own food and water is not an ant that can help in the colony.
Could future “robots” “care”? Yes – I’d argue that it is not impossible. It would likely require a complex representation of human social needs, but maybe not the complete range of higher human capabilities. There would always be the question: does the robot *truly* care? But then this question can be raised of any human being. It is also a fairly pertinent question for psychopaths.
Despite the hype, I agree with Professor Boden that we are a long way away from any “robot” or “AI” operating in a way that is seen as nearing human. Much of the recent deep learning success involves models that appear cortical; we seem to have ignored the mammalian areas and the lower brain structures. In effect, our rationality is trying to build perfectly rational machines. But because they skip the lower levels that tie things together, and ignore the submerged subconscious processes that mainly drive us, they fall short. If “needs” are seen as an expression of these lower structures and processes, then Professor Boden is right that we are not producing “robots” with “needs”.
As explained above, though, I don’t think creating robots with “needs” is impossible. There may even be some research projects where this is the case. We do face the problem that so far we are coming at things backwards, from the top-down instead of the bottom-up. Using neural network architectures to generate representations of low-level internal states is a first step. These states may include battery levels, voltages, currents, processor cycles, memory usage and other sensor signals. We may need to evolve structural frameworks in simulated space and then build upon these. The results will only work if they are messy.
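As a very rough sketch of that first step, under stated assumptions: the telemetry below is invented, and a toy linear autoencoder stands in for whatever architecture would actually be used. The point is only that a handful of latent numbers can summarise correlated internal signals:

```python
import numpy as np

# Hypothetical robot telemetry: 500 samples of 8 internal signals (battery,
# voltage, current, CPU cycles, memory use, ...), generated with shared
# structure so there is something to compress.
rng = np.random.default_rng(0)
latent_true = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 8))
X = latent_true @ mixing + 0.05 * rng.normal(size=(500, 8))

# Tiny linear autoencoder: 8 signals -> 3-dim internal "self state" -> 8.
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))
lr = 0.01
for _ in range(2000):
    Z = X @ W_enc                 # encode
    err = Z @ W_dec - X           # reconstruction error
    # Plain gradient descent on mean squared reconstruction error.
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
# A small error means three latent numbers now summarise the eight signals.
print(mse < float(np.mean(X ** 2)))
```

Nothing here amounts to a “need”, of course; it only gives the system a compact self-representation on which drives might later be built.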
What is uncertain, what is unknown and what can be modelled?
We can know, for a given classification, past grant rates. This gives a rough a priori probability.
We can also know abandonment and withdrawal rates.
We cannot know how an examiner is going to approach the case.
One of the biggest unknowns is the prior art that is cited. Pre-filing searches give a general view of the level and type of art that may be cited. However, in my experience, pre-filing search art is rarely cited in subsequent search and examination reports; everyone has a different set of preferred art to cite.
Generally there will be a link between claim length and novelty / inventive step objections: shorter claims are more likely to receive objections on these grounds.
We also cannot always know how valuable a patent will be. This depends on commercial context that is constantly changing.
We cannot know the outcome of litigation.
We can, though, update our probabilities based on events. For example, comments from an examiner, opposition board, Board of Appeal, or other party to proceedings can change our knowledge. A positive opinion can increase our estimate of the probability of success and a negative opinion can decrease the same.
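This kind of updating can be sketched as a Bayes’ rule calculation. All of the numbers below are hypothetical – an assumed prior grant rate and assumed likelihoods of a positive first opinion – chosen purely to illustrate the mechanics:

```python
# Hypothetical prior: historic grant rate for this classification.
prior_grant = 0.6
# Assumed likelihoods: how often eventually-granted vs. eventually-refused
# cases receive a positive first opinion. Invented for illustration.
p_positive_given_grant = 0.7
p_positive_given_refuse = 0.2

def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(grant | evidence) via Bayes' rule."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

posterior = update(prior_grant, p_positive_given_grant, p_positive_given_refuse)
print(round(posterior, 2))  # → 0.84: a positive opinion raises 0.6 to about 0.84
```

A negative opinion works the same way with the complementary likelihoods, dragging the estimate down rather than up.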
Professionals that appear, externally, to be able to control uncertainty will attract more business. No one likes uncertainty, especially in business. However, even a rudimentary knowledge of history would indicate that, although it is possible to be lucky, uncertainty can never be banished or controlled. Any offer of certainty is thus false.
It’s the kind of question that you answer in a covering letter for a job or in an interview. You often answer it when you have no experience of the job. You tend to forget the question more than a decade later.
So: why do I work as a patent attorney?
I love technology. Growing up my favourite possessions were a box of Lego, a BBC Micro, a cheap Bush walkman and my Casio calculator watch. In conversation I get excited about machine learning and natural language processing. Blade Runner and Terminator 2 are my favourite films. I find the Promethean ability to breathe life into inert matter fascinating. Working as a patent attorney means I am immersed in technology of all kinds every day.
I like helping inventors and innovative companies. As a patent attorney you get to work with some of the smartest, most creative engineers on the planet. You also work in the real commercial world, as opposed to the more artificial confines of academia.
I enjoy diving deep into new subject matter and linking it to existing understanding. I have a “systematic” mind; I enjoy figuring out what makes things work. As a kid, I devoured encyclopaedias and practically slept with a copy of the Usborne Book of Knowledge. I studied hard, partly through sheer curiosity. I always find how we know what we know fascinating. I may be the only one of my school and university peers who uses their subject knowledge every day. Each new invention builds upon strata of past learning in a way that is deeply satisfying.
I like an intellectual challenge (the flip-side to being easily bored by the surface of things). I like wrestling an idea into language.
And the more quotidian reasons: I like being able to pay the bills; I like working in a place with free Nespresso and apples; I like having good colleagues and leadership.
Why you need the why
You need these reasons to keep going through the day-to-day work and the ups-and-downs of commercial reality.
Nine out of ten small businesses fail, typically despite great inventions and people.
Human contact is often lost beneath the required bureaucratic machinery of large organisations.
Cases can be granted or refused based solely on the luck of the examiner draw.
The gap between the hyperbole needed to sell a product and the prosaic hard work needed to get the product working.
The size of the body of previously-published materials.
Cases you work on for years are left behind as companies pivot and cost-cut.
There’s always a deadline or five.
Every other patent attorney is just as driven and smart and is competing against you.
If you align why you are doing something with what you are doing then things become a lot easier.
* Caveat: I understand that this can seem a little MBA-gimmicky, and I do share your scepticism, but the underlying question is a sound one. Reflection is also a good thing, and is needed now more than ever with the iPhone buzzing and blinking.
[This is a somewhat reflective piece that only has a tangential relation to patent work. Feel free to ignore until more patent-centric posts come along. It may be of help for others considering different working patterns while looking after young children.]
I have been working part time since May as my partner and I share the childcare for our three children. I am now coming up to the three month mark. Here are my reflections on the experience.
I currently work Monday, Tuesday and Friday. Salary, holiday and other benefits are prorated on a three-fifths basis. On a Wednesday and Thursday I am responsible for the childcare while my partner works. The older two children will be in school from September, while the youngest is under one. Working part time is temporary; we will reassess our options when the demands of the youngest tail off a little.
Before number three came along, I worked for a period full-time with the older two children in nursery. Compared to this arrangement there are a number of advantages in working part-time.
Part-time working leads to better supervisory work.
For example, I work with a number of pre- and post-qualified associates, supervising and guiding their work. I can set up a task at the beginning of the week and then check this at the end of the week. Not being there makes me a better teacher and manager – I have to issue clear and concise guidance, and I am prevented from micromanaging. I also feel it improves the learning and initiative of our associates – they need to work out things for themselves and prepare materials for easy review and comprehension.
Days at work pass more rapidly – “flow” is easier.
As time is limited there is always something to do and no space for procrastination. The feeling that lunch or 5:30pm has crept up on me happens more often. This is also because I have more mental energy for work tasks, and I appreciate the silence and room to think after days of childcare.
Clients get a good deal.
Having more mental energy and less time for procrastination leads to high quality work for a higher proportion of the working week.
More housework gets done.
Being home for two extra days means time for jobs such as tidying and washing. These jobs used to be relegated to the weekend when often we were too tired to spend much time doing them.
We eat better.
On the days when I am off I have time to whip up a batch of food in the slow cooker or do some baking. This saves us money and is healthier, especially compared to buying prepared food or ready meals. It means there is normally a batch of leftovers in the fridge. I also do the weekly shop on a Wednesday morning when it is relatively quiet.
I have more practice at parenting.
I am not the best parent in the world. I get angry. I have a relatively low tolerance for messing about. I do not plan amazing educational activities. However, by just being around, my bond with the children is improving. Anecdotally, I also think their behaviour, at least outside the home, is improving.
The children also benefit from having two different parents look after them. I think this makes them more robust and more open, as they are not tied to any one individual’s behavioural patterns. They also experience both a male and female perspective.
More equality, less resentment.
As we are both working (approximately) 50% of the time (60-40 for pedants), it becomes easier to share things like bills, housework and random expenses on the same basis. Hence, no one feels resentful at being the breadwinner/homemaker while the other party lives the life of Riley as the homemaker/breadwinner. Decisions are also easier as no one plays a breadwinner/homemaker trump card.
More energy and motivation to live better.
Not sitting behind a desk for most of the week has health benefits. Looking after children definitely involves more physical exercise, so as a result I am healthier than I was working full time. Coupled with eating better, this results in more energy and motivation.
Career on hold.
Practically, going part-time has parked any available career progression until I return to my full-time position. This is a somewhat fair trade-off: there is not the (over)time to dedicate to career building and progression at the moment. However, I think this will become harder to bear as more of my contemporaries progress. There is also a lack of permanent options for part-time employees.
Weeks pass quickly.
When working your brain can often assume that you have a full week to get things done. This is not the case. Combined with the increased “flow” documented above, it is easy for time to fly by. This requires extra diligence and planning.
Missing seminar sessions.
For some reason, most continuing professional development sessions are scheduled for a Wednesday or Thursday. Watching a recorded session does not count as full session time.
Looking after young children full-time, four days a week, means that attention comes in 30 second segments. Turn away for more than this and a four-year-old is balancing a totem pole on the baby’s head, or the baby is doing a stunt roll down the stairs, or someone is eating sequins. This can be mentally draining. A good couple of hours of relatively silent, contemplative time is needed to let my brain return to normal.
No time for side projects.
I have a number of web development, robotics and artificial intelligence ideas I enjoy playing around with (e.g. via Lego Mindstorms, Flask projects, or Raspberry Pi GPIO or machine-vision test rigs). Although I would love to sit down and work on these during the days looking after the children, the reality is this is just not possible. Instead, discipline is needed to carve out 30 minutes before work or an hour before bed or at the weekend. This is hard to enact when you are tired.
Stuff That Works Both Ways
Although working a three-fifths week means a two-fifths hit to gross income, this rather surprisingly does not equal a two-fifths hit to net income.
In the UK, income is taxed progressively, meaning that if you earn more you are often taxed at a higher marginal rate. There are also arbitrary single-income cut-offs for benefits such as child benefit. Childcare costs have also risen at a much higher rate than income. For us, this means that the amount we lose in income approximately equals the amount we would have to pay out in childcare. So we are about even overall.
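A toy calculation shows the mechanics. The tax bands below are invented round numbers, not actual UK rates; the point is only that under progressive taxation a pro-rata cut in gross pay produces a smaller proportional cut in net pay:

```python
def tax(gross):
    """Toy progressive tax: 0% below 12k, 20% to 50k, 40% above."""
    t = 0.20 * max(0, min(gross, 50_000) - 12_000)
    t += 0.40 * max(0, gross - 50_000)
    return t

full_time = 60_000                 # hypothetical full-time salary
part_time = full_time * 3 / 5      # 36,000 on a three-fifths basis

net_full = full_time - tax(full_time)
net_part = part_time - tax(part_time)

print(round(1 - part_time / full_time, 3))  # → 0.4: gross drops by 40%
print(round(1 - net_part / net_full, 3))    # → 0.355: net drops by less
```

Because the lost earnings would have been taxed at the highest marginal rates, the net reduction is smaller than the gross one; means-tested benefit cut-offs and avoided childcare costs narrow the gap further.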
Mentally flipping between two very different environments can lead to cognitive dissonance. However, it can help in seeing more of the context both at home and at work, as neither completely takes over your life. For example, the differences between household and commercial accounts may be orders of magnitude, but a frugal approach to home finances can help prevent profligate policies at work, and increase client value. Also the soap opera of company mergers, acquisitions and bankruptcies, and the lack of control in these situations, may be approached in the same stoic manner as a screaming toddler. In the end, the balance helps strip out some of the needless noise and concentrate on long term value.