Mike Slinn

What Does ‘Control’ Mean in 2024?

Published 2022-10-05. Last modified 2023-12-17.
Time to read: 21 minutes.

Mike Slinn has been working in the tech industry for more than four decades. His passion for technology, coupled with his keen interest in unraveling puzzles, has allowed Mr. Slinn to excel as an expert witness in technology legal disputes. To date, he has been retained 19 times for his expert opinion.

Contact Mike Slinn to discuss how he might provide value.

This article is from the series entitled Technology Expert Articles for Attorneys. You might find the articles of interest if you are looking for a software expert witness, a technology expert witness, or a computer expert witness.

I have never been disqualified.
All my opinions have been accepted.

This article introduces topics relating to the control of software, software systems, and software-driven devices. The control of devices that incorporate artificial intelligence and machine learning is also discussed.

While the examples given in this article include home appliances and cars, the concepts and principles also apply to enterprise software, and software in general. The article concludes with an historical perspective, yielding an unexpected insight.

Addendums are included that discuss proposals for new legal frameworks in the US and Europe, and a proposed new framework for hazard analysis.

Introduction

The dictionary definitions for the word ‘control’ have not been updated since before the advent of computing machinery, approximately 80 years ago. While those definitions remain valid, they are not nuanced enough for discussing the control of current technology.

The changes in technology took several human generations to evolve; they did not all happen at once. While writing this document, I came to realize that legal precedent from 150 years ago is becoming increasingly relevant. The advent of new technology has brought us full circle. Read on to learn more.

It depends on what is being controlled, and the context

This introductory article surveys the major aspects of the concept of control, as applied to systems and as understood by lay people. Engineers specializing in classical control systems would employ a more technical definition, without considering many of the aspects discussed here. Because I am an electrical engineer with a systems orientation and decades of software experience, I have a broader mandate.

I wrote this article over a period of four weeks, and have revised it extensively as my thinking developed. As with all the articles on this website, I will continue to update this article as new information becomes available. Furthermore, any updated definition of control would be well served by vigorous discussion.

Control, Defined

Let's start the discussion with a few dictionary definitions for ‘control’, as a noun and a verb. This is not meant to be exhaustive.

  1. to direct the behavior of (a person or animal) : to cause (a person or animal) to do what you want
  2. to have power over (something)
  3. to direct the actions or function of (something) / to cause (something) to act or function in a certain way
  4. to exercise restraining or directing influence over : REGULATE
  5. power or authority to guide or manage
  6. a device or mechanism used to regulate or guide the operation of a machine, apparatus, or system
 – Paraphrased from the Merriam-Webster Dictionary
  1. the power to influence or direct people's behavior or the course of events
  2. determine the behavior or supervise the running of
 – From the Oxford Dictionary
  1. n. the power to direct, manage, oversee and/or restrict the affairs, business, or assets of a person or entity.
  2. v. to exercise the power of control.
 – From the Law.com Dictionary

Lawinsider.com has some definitions, based on analyzing a corpus of documents; however, their corpus appears to have been drawn only from corporate law, and the definitions offered are correspondingly narrow in scope. More than a dozen definitions are offered. The one that seems most applicable to software is:

Control or "controls" for purposes hereof means that a person or entity has the power, direct or indirect, to conduct or govern the policies of another person or entity.

 – From LawInsider.com

Control v. Ownership

In a nutshell, the traditional definition of control is influence or authority over, while ownership is defined as the state of having legal title, which means legal control of the status of something. Ownership might require a legal responsibility to provide some measure of control. Responsibility for outcomes might derive from exercising or failing to exercise proper control. Further examination of specific examples might be instructive, and might require a restatement of this short definition.

Control is possible without ownership; multiple scenarios are possible, some of which are mentioned in this article.

Pwned

No discussion these days of the word ‘control’ in a software context would be complete without mentioning the word ‘pwned’. Pwned derives from ‘owned’; the most common explanation is a keyboard typo of ‘owned’ that stuck, although folk etymologies such as ‘perfectly owned’ also circulate. For video gamers, and increasingly in the trade press, the word means an opponent has been defeated, or ‘owned’. Listen to this short video to hear how the word is pronounced.

Note the conflation of the concepts of ownership, control, and dominance.

Pwned – typically used to imply that someone has been controlled or compromised.  – From haveibeenpwned.com
If your email has been pwned, it means that the security of your account has been compromised. … It could mean your passwords and email addresses have ended up in the hands of cyber criminals. Hacking an account using your email address is possibly the first step of identity theft.

 – From F-Secure, a company that provides enterprise-level security products.

Assessing Control

Feedback

A person generally considers themselves to control a thing, an animal, or a person when they receive feedback appropriate to the action they took, including inaction. For example, flicking a light switch should cause the connected lights to shine. If a light does not shine, the person might consider the switch or the light bulb to be broken, or perhaps the circuit might be suspect.

If for some reason feedback is suppressed, or imperceptible, then the person would also likely believe that they were not in control, unless they believed that plausible explanations existed for the lack of feedback. For example, a person with complete color blindness would likely be aware of their inability to perceive color. Similarly, a person with a hearing defect that prevented them from hearing sounds that might indicate a device is responding to their actions would not expect to hear those sounds.

Control systems theory is usually taught to electrical engineers starting in the third year. I'm not going to get into that material here. In case you would like to know more, below is a technical diagram that suggests some basic concepts, including feedback, for closed-loop control systems. The course linked to has more detail.
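
For readers who prefer code to diagrams, here is a minimal sketch of a closed-loop (feedback) controller. The thermostat scenario, the numbers, and the simple proportional rule are all invented for illustration; real control systems are considerably more involved.

```python
# Minimal sketch of a closed-loop (proportional) controller.
# Assumptions: a hypothetical room that loses heat to a 15 °C exterior,
# and a heater whose output can be varied continuously. Illustrative only.

def simulate(setpoint_c: float = 21.0, steps: int = 60) -> None:
    room_temp = 15.0      # current room temperature (°C)
    gain = 0.4            # proportional gain: how aggressively we react to error
    for minute in range(steps):
        error = setpoint_c - room_temp          # feedback: measure, then compare to the goal
        heater_output = max(0.0, gain * error)  # actuate in proportion to the error
        room_temp += heater_output - 0.1 * (room_temp - 15.0)  # heat added minus heat lost
        if minute % 10 == 0:
            print(f"minute {minute:2d}: temp={room_temp:5.2f} °C, error={error:+.2f}")

if __name__ == "__main__":
    simulate()
```

The essential point is the loop itself: act, observe the result, adjust. Remove the observation, and the operator can no longer meaningfully claim to be in control.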

Learning New Skills

Learning a skill can enhance one’s ability to control something. For example, learning to ride a bicycle, or learning to fly a kite. Regarding flying kites, complete control is impossible because the wind cannot be controlled, but learning how to respond to wind changes is an essential part of learning to fly a kite.

Thus, absolute control over some types of things might not be possible, even if some measure of control can be attained or enhanced through practice.

Inert Objects

Scissors are useful for cutting paper. As inert objects, scissors do nothing on their own. The person holding the scissors decides what to cut, when to cut it, and how and where to perform the action. Sharp scissors can be dangerous when handled improperly, or when handled by children, or if a properly trained adult attempts to use them when inebriated.

It is easy to determine who controls inert objects like scissors: whoever is holding the scissors has control of them, provided the object is not attached to anything else, nor subject to physical forces, such as magnetism, that might affect the object more strongly than the person can reliably overcome.

Someone holding the scissors might be following directions provided by another person, or perhaps the scissors were left out within easy reach of children or the public. Any discussion of liability is beyond the scope of my expertise.

Losing Control

Someone at the top of a tall ski hill covered in several inches of slightly wet snow could make a large snowball. They completely control the large snowball at this point. However, if they push it hard and start it rolling downhill, the snowball could grow rapidly as it accelerates, causing injury, death, and destruction. The snowball would be out of control for most of its downward journey.

This example shows that someone could have control over something, and then lose control.

Devices With Computational Capability

One major consideration for devices that contain computational elements stems from the engineering practice of layering successively higher functionality, from hardware to firmware to software; even the software itself is built in layers. Another term, similar to ‘layers’ and equally applicable when defining control, is ‘hierarchy’.

Devices that contain general-purpose computational elements have a hierarchy of control categories.
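
As a rough illustration of that layering, the following sketch uses hypothetical Firmware, OperatingSystem, and Application classes; the point is only that a command issued at the top is always mediated, and possibly limited, by the layers beneath it.

```python
# Minimal sketch of layered control in a computational device.
# The class names (Firmware, OperatingSystem, Application) are hypothetical
# and only illustrate that each layer delegates to the one beneath it.

class Firmware:
    def set_motor_speed(self, rpm: int) -> str:
        return f"firmware: motor driver set to {rpm} RPM"

class OperatingSystem:
    def __init__(self, firmware: Firmware) -> None:
        self.firmware = firmware

    def request_speed(self, rpm: int) -> str:
        rpm = min(rpm, 3000)                 # a lower-layer safety limit
        return self.firmware.set_motor_speed(rpm)

class Application:
    def __init__(self, os: OperatingSystem) -> None:
        self.os = os

    def user_presses_turbo_button(self) -> str:
        return self.os.request_speed(5000)   # user intent, clamped by lower layers

print(Application(OperatingSystem(Firmware())).user_presses_turbo_button())
# -> firmware: motor driver set to 3000 RPM: the user's 'control' was mediated
```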

Context

The type of device, and whether it contains a general-purpose computational element, provide the context that determines the meaning of the word ‘control’. This is true whether the word is used as a noun or a verb.

Control Categories

It is useful to distinguish between the following categories of control over software; this is not an exhaustive list. The preceding dictionary definitions encompass the categories listed below; the list refines those definitions without contradicting them. A short sketch following the list illustrates the difference between the first two categories.

  1. Manipulating user interface controls (users manipulating the user interface or providing data, as designed, such that the software provides a benefit to the user). Perception varies between devices, depending on the larger context. This category distinguishes normal usage of software from the ability to modify the operational parameters of the software.
    1. Car drivers and airplane pilots are tasked with controlling their vehicles, even though they just manipulate the user interface controls. Many or most modern vehicles use software to interpret user actions; physical connections between user interface elements and control surfaces are increasingly rare, especially in large vehicles and electric vehicles. Pilots and drivers are required to exercise judgement and maintain situational awareness while operating the vehicle, such that it remains under their control.
    2. Data entry clerks also manipulate the user interface controls in order to type in data, but few would argue that they are in control. These people operate in a very narrow context, where situational awareness is not a factor, and no judgement beyond how to interpret the written data is required. There is nothing that a data entry clerk could normally do during their work that might affect the state of the entire system that they interact with.
    3. More examples would likely be instructive.
  2. Administrative control: Higher-privileged users changing the status of regular users and the data contained in the system.
  3. Operational control: Installing and maintaining the software in a physical or virtual system, including physically (re)locating the system.
  4. Malicious control: Bad actors altering the access privileges of authorized users, granting access to unauthorized users, altering the data in the system, or suppressing or changing the inputs to the system. Those responsible for the proper operation of the system would perceive malicious control as the system being out of control.
  5. Social control: Affecting the perception of a system such that the behavior of its users is influenced while they interact with it, the timing of their interaction is influenced, or their desire to interact with the system is suppressed. Perception is reality, in some sense; in fact, controlling people’s perception of a device is as significant as controlling access to the physical device. For example, if a person believes that an angry software god will strike them dead if they touch a sacred keyboard, the priest advocating such nonsense effectively controls the device.
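
To make the distinction between the first two categories concrete, here is a minimal sketch of a privilege check. The role names and functions are hypothetical and stand in for whatever access-control mechanism a real system would use.

```python
# Minimal sketch distinguishing ordinary use (category 1) from
# administrative control (category 2). Role names are hypothetical.

USERS = {"alice": "user", "bob": "admin"}

def enter_data(actor: str, record: dict, field: str, value: str) -> None:
    """Category 1: any authenticated user may manipulate the interface."""
    record[field] = value

def suspend_account(actor: str, target: str) -> None:
    """Category 2: only an administrator may change another user's status."""
    if USERS.get(actor) != "admin":
        raise PermissionError(f"{actor} lacks administrative control")
    print(f"{actor} suspended {target}")

record = {}
enter_data("alice", record, "phone", "555-0100")   # ordinary use succeeds
suspend_account("bob", "alice")                     # administrative control succeeds
# suspend_account("alice", "bob")  # would raise PermissionError
```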

Denying Control

If someone can prevent another from controlling something, is this not itself a form of control? This could be accomplished by the following:

  • Breaking or damaging the device, or otherwise rendering it inoperative
  • Blocking access to the device
  • Masking the device’s responses
  • Suppressing the effects of the user’s actions, or redirecting those actions elsewhere
  • Turning off the device
  • Associating attempts to control the device with negative consequences, for example electrically shocking anyone who touches the control surface
  • … and oh, so many more ways!

Artificial Intelligence and Machine Learning

Software capable of learning has become commonplace in business. Machine Learning (ML) learns and predicts based on passive observations by applying sophisticated statistical methods, whereas Artificial Intelligence (AI) implies an agent interacting with its environment to learn and take actions that maximize its chance of successfully achieving its goals.

There is controversy in the software community over whether ML is a subset of AI or a separate field. For the purposes of this introductory discussion, this is a distinction without a difference; however, one should be aware of the inconsistent terminology when reading the literature. The distinction might take on more significance in a more advanced discussion, depending on the topic. By late 2021, most AI installations were, in fact, ML installations.
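
The distinction can be sketched in a few lines of code. Both fragments below use made-up data and are deliberately simplistic; the first passively fits a statistic to observations (ML), while the second acts on an environment and adapts to the consequences of its own actions (AI, in the agent sense used above).

```python
import random

# Machine learning, schematically: fit a statistic to passively observed data.
observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]        # made-up (x, y) pairs
slope = sum(x * y for x, y in observations) / sum(x * x for x, _ in observations)
print(f"ML: learned slope of {slope:.2f} from passive observations")

# Artificial intelligence, schematically: an agent acts on an environment
# and adjusts its behaviour to maximise reward (a trivial two-armed bandit).
estimates = [0.0, 0.0]
counts = [0, 0]
true_payoffs = [0.3, 0.7]                                   # hidden from the agent
for _ in range(500):
    explore = random.random() < 0.1
    arm = random.randrange(2) if explore else max(range(2), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payoffs[arm] else 0.0   # environment responds
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]      # learn from the action taken
print(f"AI: agent prefers arm {estimates.index(max(estimates))} after interacting with its environment")
```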

In 2020, “50% of respondents reported that their companies have adopted AI in at least one business function”. Note that McKinsey’s report uses the definition that ML is a subset of AI, and in fact their cited usages of AI are almost exclusively examples of ML.
 –From McKinsey & Company: The state of AI in 2020

Magical Results

ML systems must be trained before they can be used. The training data fed into an ML system defines its future responses.

Although many research papers from 2021 discuss ways in which the results produced by ML systems could be explained, the reality is that most of these systems currently have no way to explain their results; they operate as a black box, and they are vulnerable to learning unstated bias introduced during initial training.
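
A toy example may help show what "the training data defines its future responses" means in practice. The loan data and the nearest-neighbour rule below are fabricated for illustration and do not represent any real system; the point is that a bias present in the training data is silently reproduced at prediction time, with no explanation offered.

```python
# Toy illustration: a model's behaviour is determined by its training data.
# The data below is fabricated; the historical bias in it (postcode B
# applicants were mostly refused) is reproduced by the model without the
# model ever "explaining" itself.

training_data = [
    # (income in $k, postcode, approved?)
    (80, "A", True), (75, "A", True), (60, "A", True),
    (80, "B", False), (75, "B", False), (60, "B", False),
]

def predict(income: int, postcode: str) -> bool:
    """1-nearest-neighbour on (income, postcode match)."""
    def distance(row):
        r_income, r_postcode, _ = row
        return abs(income - r_income) + (0 if postcode == r_postcode else 1000)
    return min(training_data, key=distance)[2]

print(predict(78, "A"))   # True  -- mirrors the approvals in the training data
print(predict(78, "B"))   # False -- the historical bias is reproduced, unexplained
```

Asking this toy model why it refused the second applicant yields nothing; all it can do is point back at its training data.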

“Companies increasingly manage risks related to AI explainability… AI high performers remain more likely than others to recognize and mitigate most risks. For example, respondents at high performers are 2.6 times more likely than others to say their organizations are managing equity and fairness risks such as unwanted bias in AI-driven decisions.”  –From McKinsey & Company: The state of AI in 2020

Magicians seem to perform magic because they do not explain their amazing results. For ML systems that cannot explain the rationale behind an output or state change, can anyone be said to have complete control over them?

Again, more could be said on this topic, but I will save that discussion for another time.

Unacknowledged Bias

Bias is one of the major issues that AI suffers from, considering that it is embedded in the AI system we design and employed by governments and businesses to make decisions using biased-embedded AI models and data.

 –From Artificial intelligence: Explainability, ethical issues and bias, published Aug 3, 2021 in the Annals of Robotics and Automation by Dr. Alaa Marshan, Department of Computer Science, College of Engineering, Design and Physical Sciences, Brunel University London, Uxbridge, England.

Is AI Impossible To Control?

Completely autonomous AI is upon us, and many well-informed technologists are gravely concerned that such a thing cannot be controlled.

It is instructive to consider a parent’s often futile desire to control their teenage offspring. When teenagers are alone with their friends, the best parents can hope for is that their children were provided good examples while growing up, and were shown how to deal with peer pressure effectively.

ML systems are much the same; the results they produce are due to their training. ML systems cannot be micromanaged to produce correct results.

Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do.

 –Alan Turing (1950), widely considered to be the father of theoretical computer science and artificial intelligence, published in Computing Machinery and Intelligence, Mind.

On March 10, 2022, the US eliminated the human controls requirement for fully automated vehicles. Who or what controls the vehicle? Can a thing be held accountable for its actions?

Examples

Close analogs exist in the fields of enterprise software, particularly e-commerce, enterprise resource planning (ERP), and software-as-a-service (SaaS).

Examples include recommender systems, image recognition and generation, speech recognition and generation, traffic prediction, weather prediction, email filtering, security, fraud detection, dynamic pricing, and much more.

Following are two examples, drawn from the physical world, with the intention that most readers would find them relatable.

Vacuum Cleaners

The concept of controlling a classically constructed device, such as a traditional vacuum cleaner, differs from controlling a device with a general-purpose computational element, such as the iRobot Scooba® floor washing robot, first available in 2005.

1995 Dyson DC01 Dual Cyclonic Vacuum Cleaner
2005 iRobot Scooba® wet vacuum cleaner

The older vacuum cleaner would only have an on/off switch, perhaps a power selector, and perhaps the ability to adjust various optional attachments. The operator would be able to control where they place the vacuum head by lifting and placing the head in the location where they want to clean.

In contrast, consider a robotic cleaner, such as the 2005 iRobot Scooba® wet vacuum cleaner pictured above. Robotic cleaners find their cleaning path autonomously. This type of device usually has sensors; when the device encounters something that triggers a specific cleaning algorithm, it switches to the corresponding cleaning strategy.
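
As a rough sketch of that behaviour, the following fragment maps hypothetical sensor readings to cleaning strategies. Real robotic cleaners use proprietary and far more sophisticated logic; this only illustrates the idea of sensor-triggered control.

```python
# Rough sketch of sensor-triggered cleaning strategies in a robotic cleaner.
# Sensor names and strategies are hypothetical, for illustration only.

def choose_strategy(sensors: dict) -> str:
    if sensors.get("cliff_detected"):
        return "back away from the drop"
    if sensors.get("bumper_pressed"):
        return "turn away from the obstacle"
    if sensors.get("dirt_level", 0) > 7:
        return "spot-clean in a tight spiral"
    if sensors.get("wall_following"):
        return "hug the wall edge"
    return "continue the default coverage pattern"

print(choose_strategy({"dirt_level": 9}))          # spot-clean in a tight spiral
print(choose_strategy({"bumper_pressed": True}))   # turn away from the obstacle
```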

Clearly, the finer points of the concept of ‘controlling’ a vacuum cleaner vary depending on the nature of the device.

Transportation

The Ford Model T was named the most influential car of the 20th century in the 1999 Car of the Century competition. Hardly anyone alive today was born before the Model T was first offered for sale in 1908, so here are a few reminders of what technology was like at that time.

The Model T had no battery. The only use of electricity by early versions of this car was for the 4 spark plugs, and they received power produced by a flywheel magneto system. Acetylene gas lamps were used for headlights, and kerosene lamps were used for side and tail lights.

Controlling a Ford Model T was somewhat different from controlling a modern car: once the Model T was started, drivers could set the throttle with their right hand, press the brake, clutch, and reverse pedals, turn the steering wheel, and operate the hand brake. This diagram is from the Model T Ford Club of America:

Cadillac began offering cars with push-button starters in 1912, but few people could afford such a luxury car; instead, the Model T was started with a hand crank. Power steering would not appear in production cars for decades, so the steering wheel was mechanically connected to the steering mechanism for the front wheels. There were no power windows; in fact, early Model Ts did not even have windows in the doors, which could be uncomfortable in bad weather.

Transistors were invented decades later, in 1947, and vacuum tubes were still exotic and expensive in 1918. Consequently, car radios did not appear for another decade, and they would use vacuum tubes until the mid-1960s. Cars would not be mass-produced with automatic transmissions until General Motors introduced the Hydramatic four-speed hydraulic automatic in 1939.

In contrast, the average vehicle in 2022 contains about 50 interconnected microprocessors, and most of those vehicles also have at least one camera. Vehicles from various manufacturers use multiple cameras to park themselves autonomously, for both parallel and regular perpendicular parking.

Some vehicles in 2023 also have assisted reversing, which assumes steering control to mirror the path the vehicle most recently took going forward. This system makes backing out of a confined parking place easy. All the driver has to do is operate the accelerator and brakes and monitor the surrounding area, while the steering follows the exact path the car took to enter the space.

When using either of these modern features, what controls the car when parking or reversing? The car continually uses its many sensors and internal guidance system to make course corrections. All the driver does is indicate their intent; the only actions they can take are to control the speed, or to pause or abort the procedure. This seems rather similar to how a rider controls a horse, yet differences do exist.
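
That division of labour, in which the machine makes continuous course corrections while the human merely supervises, can be sketched as a supervisory control loop. The sensor values, steering rule, and command names below are invented; the point is that the human's inputs are limited to speed, pause, and abort.

```python
# Sketch of supervisory control during automated parking.
# Sensor readings and steering logic are invented for illustration.

def automated_parking(driver_commands):
    steering_angle = 0.0
    for step, command in enumerate(driver_commands):
        if command == "abort":
            return f"step {step}: driver aborted; full control returned to driver"
        if command == "pause":
            print(f"step {step}: paused, holding position")
            continue
        # The vehicle, not the driver, computes each course correction.
        lateral_error = sensed_lateral_error(step)
        steering_angle -= 0.5 * lateral_error
        print(f"step {step}: speed={command}, steering set to {steering_angle:+.2f}")
    return "parked"

def sensed_lateral_error(step: int) -> float:
    return 0.4 / (step + 1)        # stand-in for real sensor fusion

print(automated_parking(["slow", "slow", "pause", "slow", "abort"]))
```

Pause and abort are the driver's only levers; everything else is decided by the machine.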

Clearly, the finer points of the concept of controlling a car differ depending on the nature of the car, and the last word has yet to be spoken on this matter.

Six Levels of Vehicle Autonomy

The Society of Automotive Engineers (SAE) defines six levels of driver assistance technology. These levels have been adopted by the U.S. Department of Transportation (US DOT). The following is taken from the US National Highway Traffic Safety Administration (NHTSA), which is a division of the US DOT:

Level 0 The human driver does all the driving.
Level 1 An advanced driver assistance system (ADAS) on the vehicle can sometimes assist the human driver with either steering or braking/accelerating, but not both simultaneously.
Level 2 An advanced driver assistance system (ADAS) on the vehicle can itself actually control both steering and braking/accelerating simultaneously under some circumstances. The human driver must continue to pay full attention (“monitor the driving environment”) at all times and perform the rest of the driving task.
Level 3 An automated driving system (ADS) on the vehicle can itself perform all aspects of the driving task under some circumstances. In those circumstances, the human driver must be ready to take back control at any time when the ADS requests the human driver to do so. In all other circumstances, the human driver performs the driving task.
Level 4 An automated driving system (ADS) on the vehicle can itself perform all driving tasks and monitor the driving environment – essentially, do all the driving – in certain circumstances. The human need not pay attention in those circumstances.
Level 5 An automated driving system (ADS) on the vehicle can do all the driving in all circumstances. The human occupants are just passengers, and never need to be involved in driving.
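
One way to make the taxonomy concrete is to encode it as data. The following sketch is illustrative only, not an official SAE or NHTSA artifact; it simply records, for each level, who does the driving and who must monitor the driving environment.

```python
# Minimal sketch encoding the SAE levels described above.
# The dictionary is illustrative, not an official SAE or NHTSA artifact.

SAE_LEVELS = {
    0: ("human drives", "human monitors"),
    1: ("system assists with steering or speed, not both", "human monitors"),
    2: ("system controls steering and speed in some circumstances", "human monitors"),
    3: ("system drives in some circumstances", "human must be ready to take over"),
    4: ("system drives in certain circumstances", "no human attention needed then"),
    5: ("system drives in all circumstances", "occupants are passengers only"),
}

def who_is_in_control(level: int) -> str:
    driving, monitoring = SAE_LEVELS[level]
    return f"Level {level}: {driving}; {monitoring}."

print(who_is_in_control(2))
print(who_is_in_control(5))
```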

Autonomous Cars

Self-driving cars have analogs in machine learning systems, which are increasingly incorporated into enterprise software.

Tesla cars were advertised as being designed to provide self-driving features; however, when this was written, many of those features were not yet enabled for all customers, and it seemed uncertain whether all of them would ever be enabled. Those advanced features include autopilot, autosteer, smart summon, full self-driving, taking direction from a calendar instead of a human, and self-parking.

Tesla’s Autopilot feature is classified as Level 2 vehicle autonomy, which means the vehicle can control steering and acceleration, but a human in the driver’s seat must be able to take control at any time.

Regardless of whether the claims made by Tesla are supportable, the concept of controlling such a vehicle is nuanced. When I read articles like Tesla Recalls 362,758 Vehicles Over Full Self-Driving Software Safety Concerns, I wonder if litigation will arise.

Tesla Autopilot Full Self-Driving Hardware from Tesla.

“Whether a [Level 2] automated driving system is engaged or not, every available vehicle requires the human driver to be in control at all times, and all state laws hold the human driver responsible for the operation of their vehicles,” an NHTSA spokesperson said. “Certain advanced driving assistance features can promote safety by helping drivers avoid crashes and mitigate the severity of crashes that occur, but as with all technologies and equipment on motor vehicles, drivers must use them correctly and responsibly.”

 – From A Tesla on autopilot killed two people in Gardena. Is the driver guilty of manslaughter?, Los Angeles Times, 2022-01-19.
Mercedes-Benz is the first manufacturer to put a Level 3 system with internationally valid certification into series production.

Mercedes-Benz Says Self-Driving Option Ready to Roll, published in The Detroit Bureau.
We are unable to specify [these] objectives completely and correctly. In fact, defining the other objectives of self-driving cars, such as how to balance speed, passenger safety, sheep safety, legality, comfort, politeness, has turned out to be extraordinarily difficult.

 –From Living With Artificial Intelligence, Lecture 1 by Prof. Stuart Russell, University of California at Berkeley, 2021.

More could be said regarding autonomy and context, but this serves to introduce the topic.

Update 2023-11-01 Tesla Wins Another Lawsuit

Tesla wins first US Autopilot trial involving fatal crash. I am sure we have not heard the last of this issue.

Update 2023-12-06 GM’s Cruise Dismisses 900 Employees, Cutting 24% of Workforce

After several high-profile accidents, California regulators suspended Cruise’s license to operate as authorities accused the company of hiding details. BNN Bloomberg has more details.

Update 2023-12-14 Massive Tesla Recall

Tesla forced to recall just about every car it has ever built – Regulators are fuming over the Autopilot driver assist system

Conclusion

This article is meant to stimulate discussion of a more modern and contextually aware definition for the word ‘control’. Devices that employ computational capability may require a more nuanced definition of control, while devices that go beyond general computational capability and employ machine learning and/or artificial intelligence may require an even more specialized definition.

I have not offered any such definitions; an entire book could be dedicated to deriving them. However, for a specific circumstance, a nuanced and contextually relevant definition could be derived.

Implications

We have discussed the similarity between the concept of controlling a sentient being such as a horse, and controlling an autonomous device such as a self-driving vehicle or robotic vacuum cleaner. Court cases that cite horse-and-buggy precedents from 100 years or more ago may soon arise.

Doctrine Of Precedent

On a related note, in 2012, Kyle Graham, Assistant Professor of Law, Santa Clara University, discussed how new technology is gradually included into stare decisis (the doctrine of precedent) in Of Frightened Horses and Autonomous Vehicles: Tort Law and its Assimilation of Innovations.

A horse and buggy, circa 1910, Oklahoma

Just as improperly trained animals might incur legal liability when used for certain purposes, improperly trained AI/ML systems might also incur liability.

 – An imaginary attorney at some time in the not-too-distant future.
Plus ça change, plus c'est la même chose.
The more things change, the more they stay the same.

 – Jean-Baptiste Alphonse Karr

Addendums

US Blueprint for an AI Bill of Rights

The White House Office of Science and Technology Policy has released the Blueprint for an AI Bill of Rights public policy document. It includes the following five core principles and a call to action to protect the American public’s rights in an automated world.

  • Safe and Effective Systems: protection from unsafe or ineffective systems.
  • Algorithmic Discrimination Protections: people should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
  • Data Privacy: people should be protected from abusive data practices via built-in protections and should have agency over how their data is used. Big tech companies will fight this every way they can.
  • Notice and Explanation: people should know that an automated system is being used and understand how and why it contributes to outcomes that impact them. Good luck with that; today's technology is not designed with that capability in mind.
  • Alternative Options: people should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy any problems they encounter.
The Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or international instrument. It does not constitute binding guidance for the public or Federal agencies and therefore does not require compliance with the principles described herein. It also is not determinative of what the U.S. government’s position will be in any international negotiation. Adoption of these principles may not meet the requirements of existing statutes, regulations, policies, or international instruments, or the requirements of the Federal agencies that enforce them. These principles are not intended to, and do not, prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or intelligence activities.
 – From About the US Blueprint for an AI Bill of Rights

European AI Act and Liability Directive

Europe is leading the way towards a legal framework for AI implementations with the European Artificial Intelligence Act.

The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.
 – From European Artificial Intelligence Act

“The AI Liability Directive is just a proposal for now, and has to be debated, edited, and passed by the European Parliament and Council of the European Union before it can become law”, as reported by The Register September 29, 2022 in Europe just might make it easier for people to sue for damage caused by AI tech.

The new AI Liability Directive makes a targeted reform of national fault-based liability regimes and will apply to claims against any person for fault that influenced the AI system which caused the damage; any type of damage covered under national law (including resulting from discrimination or breach of fundamental rights like privacy); and claims made by any natural or legal person.
 – From Questions & Answers: AI Liability Directive

Update 2024-01-22

The final draft of the European AI Act was unofficially published.

TAIHA: Proposed Hazard Analysis

Missy Cummings, Professor and Director of Mason Autonomy and Robotics Center at George Mason University, has written A Taxonomy for AI Hazard Analysis. This paper will be published in the Journal of Cognitive Engineering and Decision Making.

With the rise of artificial intelligence in safety-critical systems like surface transportation, there is a commensurate need for new hazard analysis approaches to determine if and how AI contributes to accidents, which are also increasing in number and severity. The original Swiss Cheese model widely used for hazard analyses focuses uniquely on human activities that lead to accidents, but cannot address accidents where AI is a possible causal factor. To this end, the Taxonomy for AI Hazard Analysis (TAIHA) is proposed that introduces layers focusing on the oversight, design, maintenance and testing of AI. TAIHA is illustrated with real-world accidents. TAIHA does not replace the traditional Swiss cheese model, which should be used in concert when a joint human-AI system exists, such as when people are driving a car with AI-based advanced driving assist features.

References

  1. ‘Control’ from Merriam-Webster Dictionary
  2. ‘Control’ from Oxford Dictionary
  3. ‘Control’ from Law.com Dictionary
  4. ‘Control’ from LawInsider.com
  5. Have I Been Pwned goes open source, bags help from FBI
  6. haveibeenpwned.com
  7. What steps should you take when your email has been pwned?
  8. ‘Context’ from Lexico.com
  9. McKinsey & Company: The state of AI in 2020
  10. ‘Black box’ from Merriam-Webster Dictionary
  11. Artificial intelligence: Explainability, ethical issues and bias
  12. Computing Machinery and Intelligence, Mind
  13. U.S. eliminates human controls requirement for fully automated vehicles
  14. 1999 Car of the Century
  15. Acetylene gas lamps
  16. Model T Ford Club of America
  17. BMW driver assistance
  18. Society of Automotive Engineers
  19. NHTSA: Automated Vehicles for Safety
  20. US National Highway Traffic Safety Administration
  21. Tesla Autopilot
  22. Tesla Recalls 362,758 Vehicles Over Full Self-Driving Software Safety Concerns
  23. Tesla Autopilot Full Self-Driving Hardware
  24. Tesla videos on Vimeo
  25. A Tesla on autopilot killed two people in Gardena. Is the driver guilty of manslaughter?
  26. Mercedes-Benz Says Self-Driving Option Ready to Roll
  27. Living With Artificial Intelligence, Lecture 1
  28. Tesla wins first US Autopilot trial involving fatal crash
  29. BNN Bloomberg: GM’s Cruise Dismisses 900 Employees, Cutting 24% of Workforce
  30. Tesla forced to recall just about every car it has ever built – Regulators are fuming over the Autopilot driver assist system
  31. Of Frightened Horses and Autonomous Vehicles: Tort Law and its Assimilation of Innovations
  32. White House Office of Science and Technology Policy
  33. Blueprint for an AI Bill of Rights
  34. European Artificial Intelligence Act
  35. Europe just might make it easier for people to sue for damage caused by AI tech
  36. Unofficially published
  37. Missy Cummings
  38. A Taxonomy for AI Hazard Analysis


