Mike Slinn

What Does ‘Control’ Mean in 2022?

Published 2022-10-05.
Time to read: 19 minutes.

Mike Slinn has been working in the tech industry for more than four decades. His passion for technology coupled with his keen interest in unraveling puzzles led him to seek work as an expert witness in technology legal disputes. To date, he has been retained 17 times for his expert opinion and looks forward to building on that foundation.

Contact Mike to discuss your needs.

This article is from the series entitled Technology Expert Articles for Attorneys. You might find the articles of interest if you are looking for a software expert witness, a technology expert witness or a computer expert witness.

I have never been disqualified.
All my opinions have been accepted.

I was asked by some attorneys recently how I would define ‘control’, specifically over software.


That is an interesting question. After a literature search, it appears that the normal definitions for the word ‘control’ have not been updated since before the advent of computing machinery, approximately 80 years ago. While the previous definitions still apply and are valid, they do not provide a nuanced definition suitable for discussing the control of current technology.

These changes evolved over several generations; they did not all happen at once. One could look back at any moment in time and describe how the situation stood then.

The short answer is “it depends on what is being controlled, and the context”. Read on for more detail.

This introductory article, while rather long compared to the other articles in this series, is only meant to discuss the major aspects of the concept of control, as applied to systems, and used by lay people. Engineers specializing in classical control systems would employ a more technical definition, without considering many of the aspects discussed here.

I wrote this article over a period of 4 weeks, and have revised it extensively as my thinking developed. As with all the articles on this website, I will continue to update this article as new information becomes available. Also, any such definition would be well served by vigorous discussions.

... it depends on what is being controlled, and the context ...

Control, Defined

Let's start off the discussion with a few dictionary definitions for ‘control’, as a noun and a verb. This is not meant to be exhaustive.

  1. to direct the behavior of (a person or animal) : to cause (a person or animal) to do what you want
  2. to have power over (something)
  3. to direct the actions or function of (something) / to cause (something) to act or function in a certain way
  4. to exercise restraining or directing influence over : REGULATE
  5. power or authority to guide or manage
  6. a device or mechanism used to regulate or guide the operation of a machine, apparatus, or system
 – Paraphrased from the Merriam-Webster Dictionary
  1. the power to influence or direct people's behavior or the course of events
  2. determine the behavior or supervise the running of
 – From the Oxford Dictionary
  1. n. the power to direct, manage, oversee and/or restrict the affairs, business or assets of a person or entity.
  2. v. to exercise the power of control.
 – From the Law.com Dictionary

Lawinsider.com has some definitions based on analyzing a corpus of documents; however, their corpus appears to have been drawn from corporate law, so the definitions offered are rather narrow in scope. More than a dozen definitions are offered. The one that seems most applicable to software is:

Control or "controls" for purposes hereof means that a person or entity has the power, direct or indirect, to conduct or govern the policies of another person or entity.

 – From LawInsider.com

Control v. Ownership


In a nutshell, control is influence or authority over something, while ownership is the state of holding legal title, which confers legal control over the status of something. Ownership might carry a legal responsibility to provide some measure of control. Responsibility for outcomes might derive from exercising, or failing to exercise, proper control. Further examination of specific examples might be instructive, and might require a restatement of this short definition. Control is possible without ownership; multiple scenarios are possible, some of which are mentioned in this article.


No discussion these days of the word ‘control’ in a software context would be complete without mentioning the word ‘pwned’. The word is widely believed to have originated as a misspelling of ‘owned’, since the p and o keys are adjacent on a keyboard. For video gamers, and increasingly in the trade press, the word means an opponent has been defeated, or ‘owned’. Note the conflation of the concepts of ownership, control and dominance. Listen to this short video to hear how the word is pronounced.

Pwned – typically used to imply that someone has been controlled or compromised.  – From haveibeenpwned.com
If your email has been pwned, it means that the security of your account has been compromised. ... It could mean your passwords and email addresses have ended up in the hands of cyber criminals. Hacking an account using your email address is possibly the first step of identity theft.

 – From F-Secure, a company that provides enterprise-level security products.

Assessing Control: Examples


A person generally considers themselves to control a thing or a person when they receive feedback appropriate for the action that they took, including their inaction. For example, flicking a light switch should cause lights that are connected to shine. If a light did not shine, the person might consider the switch or the light bulb to be broken, or perhaps the circuit might be suspect.

If for some reason feedback is suppressed, or imperceptible, then the person would also likely believe that they were not in control, unless they believed that plausible explanations existed for the lack of feedback. For example, a person with complete color blindness would be aware of their inability to perceive color. Similarly, a person with a hearing defect that prevented them from hearing sounds that might indicate a device is responding to their actions would not expect to hear those sounds.

Control systems theory is taught to electrical engineers, usually starting in the third year and beyond. I'm not going to get into that stuff with you now, but in case you want to know more, here is a technical diagram that suggests some basic concepts, including feedback, for closed-loop control systems. The course it links to has more detail.

From ‘Feedback Systems’ lecture in ElectronicsTutorials by AspenCore, Inc
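The feedback idea at the heart of a closed-loop control system can be sketched in a few lines of Python. This is a minimal illustration only; the toy plant model, the gain, and the setpoint are my own assumptions, not taken from the lecture.

```python
# Minimal closed-loop (proportional) feedback controller.
# Each step, the controller measures the plant output, compares it
# to the desired setpoint, and applies a correction proportional
# to the error -- the "feedback" in a closed-loop system.

def simulate(setpoint=20.0, gain=0.5, steps=50):
    """Drive a toy plant (which simply accumulates its input)
    toward `setpoint` and return the output history."""
    output = 0.0  # e.g., a temperature starting at zero
    history = []
    for _ in range(steps):
        error = setpoint - output    # feedback: measured error
        output += gain * error       # proportional corrective action
        history.append(output)
    return history

history = simulate()
print(round(history[-1], 3))  # converges on the setpoint: 20.0
```

The point for this discussion is that ‘control’ here means continuously correcting toward a goal based on feedback, not issuing a single command.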

Learning New Skills

Learning a skill can enhance one’s ability to control something. For example, learning to ride a bicycle, or learning to fly a kite. Regarding flying kites, complete control is impossible because the wind cannot be controlled, but learning how to respond to wind changes is an essential part of learning to fly a kite.

Thus, absolute control over some types of things might not be possible, even if some measure of control can be attained or enhanced through practice.

Inert Objects

Scissors are useful for cutting paper. As inert objects, scissors do nothing on their own. The person holding the scissors decides what to cut, when to cut it, and how and where to perform the action. Sharp scissors can be dangerous when handled improperly, or when handled by children, or if a properly trained adult attempts to use them when inebriated.

It is easy to determine who controls inert objects like scissors: whoever is holding the scissors has control of them, provided the inert object is not attached to anything else, or is subject to physical forces such as magnetism that might affect the object to a greater degree than the person can reliably overcome.

Someone holding the scissors might be conforming to directions provided by another person, or perhaps the scissors were left out in easy reach of children or the public. Any discussion of liability is out of scope for my expertise.

Losing Control

Someone at the top of a tall ski hill covered in several inches of slightly wet snow could make a large snowball. They completely control the large snowball at this point. However, if they push hard, and start rolling it downhill, the snowball could grow rapidly as it accelerates, causing injury, death and destruction. The snowball would be out of control for most of its downward journey. This example shows that someone could have control over an object at one moment in time, and then lose control.

Devices With Computational Capability

One major consideration for devices that contain computational elements stems from the engineering practice of layering successively higher functionality, from hardware, to firmware, and software; even the software itself is built in layers. Another term, similar to ‘layers’ that is equally applicable for defining control, is ‘hierarchy’. Devices that contain general-purpose computational elements have a hierarchy of control sub-categories.


The type of device, and whether it contains a general-purpose computational element, provides a contextually dependent meaning for the word ‘control’; this is true whether the word is used as a noun or a verb.

Following are two examples. These examples are drawn from the physical world, so most people can find them relatable; however, close analogs exist in the fields of enterprise software, particularly e-commerce, enterprise resource planning (ERP), and software-as-a-service (SaaS). Examples include recommender systems, image recognition and generation, speech recognition and generation, traffic prediction, weather prediction, email filtering, security, fraud detection, dynamic pricing, and much more.

Vacuum Cleaners

The concept of controlling a classically constructed device, such as a circa 1995 vacuum cleaner, differs from controlling a device with a general-purpose computational element, such as the iRobot Scooba® floor washing robot, first available in 2005.

1995 Dyson DC01 Dual Cyclonic Vacuum Cleaner
2005 iRobot Scooba® wet vacuum cleaner

The older vacuum cleaner would only have an on/off switch, perhaps a power selector, and perhaps the ability to adjust various optional attachments. The operator would be able to control where they place the vacuum head by lifting and placing the head in the location where they want to clean.

A robotic cleaner, in contrast, finds its own path to clean, and might have sensors that cause it to employ various cleaning strategies when it encounters something that triggers a specific cleaning algorithm. Clearly, the finer points of the concept of controlling a vacuum cleaner differ depending on the nature of the device.


Cars

The Ford Model T was named the most influential car of the 20th century by the 1999 Car of the Century competition. Virtually no one alive today was born when the Model T was first offered for sale in 1908, so here are a few reminders of what technology was like at that time.

The Model T had no battery. The only use of electricity by early versions of this car was for the 4 spark plugs, and they received power produced by a flywheel magneto system. Acetylene gas lamps were used for headlights, and kerosene lamps were used for side and tail lights.

“A friend of mine came out and used one lamp to light his cigarette”

Controlling a Ford Model T was somewhat different from controlling a modern car: once the Model T was started, drivers could set the throttle with their right hand, press the brake, clutch and reverse pedals, turn the steering wheel, and operate the hand brake. This diagram is from the Model T Ford Club of America:

Cadillac began offering cars with push-button starters in 1912, but few people could afford such a luxury car; instead, the Model T was started with a hand crank. Power steering would not be invented for several years, so the steering wheel was mechanically connected to the steering mechanism for the front wheels. There were no power windows, in fact, early Model Ts did not even have windows in the doors; this could be uncomfortable in bad weather.

Transistors were invented decades later, in 1947, and vacuum tubes were still exotic and expensive in 1918. Consequently, car radios did not appear for another decade, and they would use vacuum tubes until the mid-1960s. Cars would not be mass-produced with automatic transmissions until General Motors introduced the Hydramatic three-speed hydraulic automatic in 1939.

BMW ConnectedDrive Driver Assistance
BMW ConnectedDrive Driver Assistance

In contrast, the average vehicle in 2022 contains about 50 microprocessors, all interconnected, and most of those vehicles also have at least one camera. Vehicles from various manufacturers use multiple cameras to park themselves, for both parallel and regular perpendicular parking. Some vehicles in 2022 also have assisted reversing, which assumes steering control to mirror the path the vehicle most recently took going forward. This system makes backing out of a confined parking place easy. All the driver has to do is operate the accelerator and brakes and monitor the surrounding area, while the steering follows the exact path the car took to enter the space.
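The assisted-reversing behaviour described above amounts to recording the path most recently driven forward and replaying it backwards. Here is a heavily simplified sketch, representing the path as a list of steering angles; that representation is my own illustrative assumption, not any manufacturer's actual design.

```python
# Toy sketch of assisted reversing: record the forward steering
# path, then replay it in reverse order to back out along the
# same track. Real systems fuse many sensors; this is conceptual.

forward_path = [0.0, 5.0, 12.0, 12.0, 3.0]  # recorded steering angles

def reverse_plan(path):
    """Return the steering plan for backing out along the
    recorded path. The driver retains control of speed and
    can pause or abort at any time."""
    return list(reversed(path))

print(reverse_plan(forward_path))  # [3.0, 12.0, 12.0, 5.0, 0.0]
```

Even in this sketch, the division of control is visible: the system owns the steering plan, while the human retains speed and veto authority.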

When using either of these modern features, what controls the car when parking or reversing? The car continually uses its many sensors and internal guidance system to make course corrections. All the driver does is indicate their intent; the only actions they can take are to control the speed, or to pause or abort the procedure. This seems rather similar to how a rider controls a horse, yet differences do exist. Clearly, the finer points of the concept of controlling a car differ depending on the nature of the car, and the last word has yet to be spoken on this matter.

From horseandrider.com: Out-of-Control Trail Horse!
What should you do on the trail when your horse just won’t settle down—and even tries to bolt?

Six Levels of Vehicle Autonomy

The Society of Automotive Engineers (SAE) defines six levels of driver assistance technology. These levels have been adopted by the U.S. Department of Transportation (US DOT). The following summary table is taken from the US National Highway Traffic Safety Administration (NHTSA), which is a division of the US DOT:

Level 0: The human driver does all the driving.
Level 1: An advanced driver assistance system (ADAS) on the vehicle can sometimes assist the human driver with either steering or braking/accelerating, but not both simultaneously.
Level 2: An advanced driver assistance system (ADAS) on the vehicle can itself actually control both steering and braking/accelerating simultaneously under some circumstances. The human driver must continue to pay full attention (“monitor the driving environment”) at all times and perform the rest of the driving task.
Level 3: An automated driving system (ADS) on the vehicle can itself perform all aspects of the driving task under some circumstances. In those circumstances, the human driver must be ready to take back control at any time when the ADS requests the human driver to do so. In all other circumstances, the human driver performs the driving task.
Level 4: An automated driving system (ADS) on the vehicle can itself perform all driving tasks and monitor the driving environment – essentially, do all the driving – in certain circumstances. The human need not pay attention in those circumstances.
Level 5: An automated driving system (ADS) on the vehicle can do all the driving in all circumstances. The human occupants are just passengers and need never be involved in driving.

Tesla’s Autopilot feature

Tesla cars were designed to provide self-driving features, although some of those features are not enabled yet for all customers. Those advanced features include autopilot, autosteer, smart summon, full self-driving, taking direction from a calendar instead of a human, and self-parking. The concept of controlling such a vehicle is highly nuanced, and goes far beyond the concept of controlling an animal.

Tesla’s Autopilot feature is classified as “Level 2” vehicle autonomy, which means the vehicle can control steering and acceleration, but a human in the driver’s seat can take control at any time.

Autopilot Full Self-Driving Hardware from Tesla.

“Whether a [Level 2] automated driving system is engaged or not, every available vehicle requires the human driver to be in control at all times, and all state laws hold the human driver responsible for the operation of their vehicles,” an NHTSA spokesperson said. “Certain advanced driving assistance features can promote safety by helping drivers avoid crashes and mitigate the severity of crashes that occur, but as with all technologies and equipment on motor vehicles, drivers must use them correctly and responsibly.”

 – From A Tesla on autopilot killed two people in Gardena. Is the driver guilty of manslaughter?, Los Angeles Times, 2022-01-19.
“Mercedes-Benz is the first manufacturer to put a Level 3 system with international valid certification into series production.”

 – From Mercedes-Benz Says Self-Driving Option Ready to Roll, published in The Detroit Bureau

More could be said regarding context, but this serves to introduce the topic.

Artificial Intelligence and Machine Learning

Software capable of learning has become commonplace in business software. Machine Learning (ML) learns and predicts based on passive observations by applying sophisticated statistical methods, whereas Artificial Intelligence (AI) implies an agent interacting with the environment to learn and take actions that maximize its chance of successfully achieving its goals.

There is controversy in the software community over whether ML is a subset of AI or a separate field. For the purposes of this introductory discussion, this is a distinction without a difference; however, one should be aware of the inconsistency of terminology when reading the literature. The distinction might take on more significance in a more advanced discussion, depending on the topic. By late 2021, most AI installations were, in fact, ML installations.

In 2020, “50% of respondents reported that their companies have adopted AI in at least one business function”. Note that McKinsey’s report uses the definition that ML is a subset of AI, and in fact their cited usages of AI are almost exclusively examples of ML.
 – From McKinsey & Company: The state of AI in 2020

Magical and Biased Results

Although many research papers from 2021 discuss ways to show how results by ML systems could be explained, the reality is that most of these systems currently have no way to explain their results; they operate as a black box, and they are vulnerable to learning unstated bias introduced during initial training.

Bias is one of the major issues that AI suffers from, considering that it is embedded in the AI system we design and employed by governments and businesses to make decisions using biased-embedded AI models and data.

 – From Artificial intelligence: Explainability, ethical issues and bias, published Aug 3, 2021 in the Annals of Robotics and Automation by Dr. Alaa Marshan, Department of Computer Science, College of Engineering, Design and Physical Sciences, Brunel University London, Uxbridge, England.
“Companies increasingly manage risks related to AI explainability... AI high performers remain more likely than others to recognize and mitigate most risks. For example, respondents at high performers are 2.6 times more likely than others to say their organizations are managing equity and fairness risks such as unwanted bias in AI-driven decisions.” –From McKinsey & Company: The state of AI in 2020

Magicians seem to perform magic because they do not explain their amazing results. For the ML systems which cannot explain the rationale behind an output or state change, could anyone be said to have complete control over them?
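To make the black-box and bias concerns concrete, here is a deliberately tiny sketch of a model that ‘learns’ from a biased historical record. Every name and number below is an invented illustration; real ML systems are vastly more complex, but the principle is the same: the model reproduces whatever correlations its training data contains, and offers no rationale for its outputs.

```python
# Illustrative only: a toy "hiring" model trained on biased
# historical data. The model never explains its own behavior; it
# simply reproduces the correlations its training set contains.

from collections import Counter

# Hypothetical training data: (group, hired) pairs reflecting a
# biased historical record, not real applicants.
training = [("A", True)] * 80 + [("A", False)] * 20 \
         + [("B", True)] * 30 + [("B", False)] * 70

def train(data):
    """Learn P(hired | group) by simple counting."""
    totals, hires = Counter(), Counter()
    for group, hired in data:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

model = train(training)

def predict(group):
    """Black box to the user: a decision with no rationale attached."""
    return model[group] >= 0.5

print(predict("A"), predict("B"))  # the learned bias, reproduced
```

A user of `predict` sees only a yes/no decision; nothing in the interface explains, or even hints, that the decision merely echoes historical bias.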

Again, more could be said on this topic, but I will save that discussion for another time.

Would A Superintelligent AI Be Impossible To Control?

Completely autonomous AI is upon us, and many well-informed technologists are gravely concerned that such a thing could not be controlled.

Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do.

 –Alan Turing (1950), Computing Machinery and Intelligence, Mind.
We are unable to specify [these] objectives completely and correctly. In fact, defining the other objectives of self-driving cars, such as how to balance speed, passenger safety, sheep safety, legality, comfort, politeness, has turned out to be extraordinarily difficult.

 –From Living With Artificial Intelligence, Lecture 1 by Prof. Stuart Russell, University of California at Berkeley, 2021.
We are currently experiencing a revival in the discussion of AI as a potential catastrophic risk. These risks range from machines causing significant disruptions to labor markets, to drones and other weaponized machines literally making autonomous kill-decisions...

Asimov’s first law of robotics has been proved to be incomputable, and therefore unfeasible...

Total containment is, in principle, impossible, due to fundamental limits inherent to computing itself...

A superintelligent machine is containable if there is a control strategy that prevents its acting on the external world when there is a reason to predict that will harm humans, and allows it otherwise.

 –From Superintelligence Cannot be Contained: Lessons from Computability Theory by Manuel Alfonseca et al, published in the Journal of Artificial Intelligence Research on Jan 5, 2021.

Given the above, I was surprised to learn that on March 10, 2022 the US eliminated the human controls requirement for fully automated vehicles. Who is in control of the vehicle? Can a thing be held accountable for its actions? Oh, boy...

Control Sub-Categories

It would seem to be useful to distinguish between the following sub-categories of control regarding software. This is not an exhaustive list. The preceding dictionary definitions appear to encompass the subcategories listed here; the following information enhances the definitions shown earlier without contradicting them.

  1. Manipulating user interface controls (users manipulating the user interface or providing data, as designed, such that the software provides a benefit to the user). Perception varies between devices, depending on the larger context. This sub-category distinguishes normal usage of software from the ability to modify the operational parameters of the software.
    1. Car drivers and airplane pilots are tasked with controlling their vehicles, even though they just manipulate the user interface controls. Many or most modern vehicles use software to interpret user actions; physical connections between user interface elements and control surfaces are increasingly rare, especially in large vehicles and electric vehicles. Pilots and drivers are required to exercise judgement and maintain situational awareness while operating the vehicle, such that it remains under their control.
    2. Data entry clerks also manipulate the user interface controls in order to type in data, but few would argue that they are in control. These people operate in a very narrow context, where situational awareness is not a factor, and no judgement beyond how to interpret the written data is required. There is nothing that a data entry clerk could normally do during their work that might affect the state of the entire system that they interact with.
    3. More examples would likely be instructive.
  2. Administrative control: Higher privileged users changing the status of regular users and the data contained in the system.
  3. Operational control: Installing and maintaining the software in a physical or virtual system, including physically (re)locating the system.
  4. Malicious control: Bad actors altering the access privileges of authorized users, granting access to unauthorized users, altering the data in the system, or suppressing or changing the inputs to the system. From the point of view of those responsible for the proper operation of the system, malicious control would be perceived as the system being out of control.
  5. Social Control: Affecting the perception of a system, such that the behavior of the users of that system are influenced while interacting with it, or the timing of their interaction is influenced, or their desire to interact with the system is suppressed. Perception is reality, in some sense; in fact, controlling people’s perception of a device is as significant as controlling access to the physical device. For example, if a person believes that an angry software god will strike them dead if they touch a sacred keyboard, the priest advocating such nonsense effectively controls the device.
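The difference between the first two sub-categories above can be sketched as a simple permission check. The roles, actions, and function names below are illustrative assumptions only:

```python
# Sketch of the distinction between manipulating user-interface
# controls (sub-category 1) and administrative control
# (sub-category 2). Roles and permissions are invented examples.

PERMISSIONS = {
    "user":  {"enter_data"},                        # UI-level control only
    "admin": {"enter_data", "change_user_status"},  # administrative control
}

def authorize(role, action):
    """Return True if `role` is allowed to perform `action`."""
    return action in PERMISSIONS.get(role, set())

# A data-entry clerk can use the interface as designed...
print(authorize("user", "enter_data"))           # True
# ...but cannot change the state of the system itself.
print(authorize("user", "change_user_status"))   # False
# An administrator holds that higher level of control.
print(authorize("admin", "change_user_status"))  # True
```

The clerk's control is real but shallow: it never reaches the state of the system itself, which is the essence of the distinction drawn in sub-category 1.2 above.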

Denying Control

If someone can prevent another from controlling something, is this not itself a form of control? This could be accomplished by the following:

  • Breaking or damaging the device, or otherwise rendering it inoperative
  • Blocking access to the device
  • Masking the device’s responses
  • Suppressing the effects of the user’s actions, or redirecting those actions elsewhere
  • Turning off the device
  • Associating attempts to control the device with negative consequences, for example electrically shocking anyone who touches the control surface
  • ... and oh, so many more ways!

European AI Act and Liability Directive

Europe is leading the way towards a legal framework for AI implementations. The European Artificial Intelligence Act has its own website (artificialintelligenceact.eu).

The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.

 – From European Artificial Intelligence Act

“The AI Liability Directive is just a proposal for now, and has to be debated, edited, and passed by the European Parliament and Council of the European Union before it can become law”, as reported by The Register September 29, 2022 in Europe just might make it easier for people to sue for damage caused by AI tech

The new AI Liability Directive makes a targeted reform of national fault-based liability regimes and will apply to claims against any person for fault that influenced the AI system which caused the damage; any type of damage covered under national law (including resulting from discrimination or breach of fundamental rights like privacy); and claims made by any natural or legal person.

 – From Questions & Answers: AI Liability Directive

US Blueprint for an AI Bill of Rights

The White House Office of Science and Technology Policy has released the Blueprint for an AI Bill of Rights public policy document. It sets out five protections, expressed as the following five core ethical principles, and a call to action to protect the American public’s rights in an automated world.

  • Safe and Effective Systems: people should be protected from unsafe or ineffective systems.
  • Algorithmic Discrimination Protections: people should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
  • Data Privacy: people should be protected from abusive data practices via built-in protections, and should have agency over how their data is used. Big tech companies will fight this every way they can.
  • Notice and Explanation: people should know that an automated system is being used, and understand how and why it contributes to outcomes that impact them. Good luck with that; today's technology is not designed with that capability in mind.
  • Alternative Options: people should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems.

Given the unrelenting erosion of people's rights in the USA in recent decades, one wonders how relevant the above will be.

The Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or international instrument. It does not constitute binding guidance for the public or Federal agencies and therefore does not require compliance with the principles described herein. It also is not determinative of what the U.S. government’s position will be in any international negotiation. Adoption of these principles may not meet the requirements of existing statutes, regulations, policies, or international instruments, or the requirements of the Federal agencies that enforce them. These principles are not intended to, and do not, prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or intelligence activities.

 – From About the US Blueprint for an AI Bill of Rights


This article is meant to stimulate discussion of a more modern and contextually aware definition for the word ‘control’. Devices that employ computational capability may require a more nuanced definition of control, while devices that go beyond general computational capability and employ machine learning and/or artificial intelligence may require an even more specialized definition. I have not offered any such definitions; an entire book could be dedicated to deriving them. However, for a specific circumstance, a nuanced and contextually relevant definition could be derived.

Given the similarity between the concept of controlling a sentient being such as a horse, and controlling an autonomous device such as a self-driving vehicle or robotic vacuum cleaner, we may soon see court cases that cite horse-and-buggy precedents from 100 years or more ago.

On a related note, in 2012, Kyle Graham, Assistant Professor of Law, Santa Clara University, discussed how new technology is gradually included into stare decisis (the doctrine of precedent) in Of Frightened Horses and Autonomous Vehicles: Tort Law and its Assimilation of Innovations.

A horse and buggy, circa 1910, Oklahoma
A horse and buggy, circa 1910, Oklahoma
Just as improperly trained animals might incur legal liability when used for certain purposes, improperly trained AI/ML systems might also incur liability.

 – An imaginary attorney at some time in the not-too-distant future.
Plus ça change, plus c'est la même chose.
The more things change, the more they stay the same.

 – Jean-Baptiste Alphonse Karr

Also briefly explored in this article is the subjective nature of reality, and consequently the subjective nature of an individual's belief or feeling of control, which might shape a world view that is not consistent with the actual physical world; consequences might arise from taking actions based on erroneous beliefs. Again, an entire phenomenological treatise could be written on this topic.

Perception is not reality, but, admittedly, perception can become a person's reality.

 – Dr. Jim Taylor, from “Perception Is Not Reality”, published in Psychology Today, August 5, 2019.

Contact Mike Slinn

No technical recruiters for contract work or employment please.

  • Email
  • Direct: 514-418-0156
  • Mobile: 650-678-2285


The content on this website is provided for general information purposes only and does not constitute legal or other professional advice or an opinion of any kind. Users of this website are advised to seek specific legal advice by contacting their own legal counsel regarding any specific legal issues. Michael Slinn does not warrant or guarantee the quality, accuracy or completeness of any information on this website. The articles published on this website are current as of their original date of publication, but should not be relied upon as accurate, timely or fit for any particular purpose.

Accessing or using this website does not create a client relationship. Although your use of the website may facilitate access to or communications with Michael Slinn via e-mail or otherwise via the website, receipt of any such communications or transmissions does not create a client relationship. Michael Slinn does not guarantee the security or confidentiality of any communications made by e-mail or otherwise through this website.

This website may contain links to third-party websites. Monitoring the vast information disseminated and accessible through those links is beyond Michael Slinn's resources, and he does not attempt to do so. Links are provided for convenience only and Michael Slinn does not endorse the information contained in linked websites nor guarantee its accuracy, timeliness or fitness for a particular purpose.