How does AI relate to the workplace, and to employers and employees? That is the topic of this lecture, which provides an introduction to the relationship between AI and labor law. Labor law regulates working life and the relationship between employer and employee, as well as the relationship between trade union and employer. Broadly speaking, the purpose of labor law is to protect the weaker party to the employment contract, the employee.

The European Commission has suggested a definition of AI that is useful for this lecture. AI "refers to systems that display intelligent behavior by analyzing their environment and taking actions with some degree of autonomy to achieve specific goals". Also, "AI-based systems can be purely software-based, acting in the virtual world, or AI can be embedded in hardware devices". This is important in relation to the regulation of the workplace. At work, AI is present both in the form of algorithmic processes in computers and in the form of robots, linking AI and algorithms to robotics and the Internet of Things.

As of today, no legislation explicitly regulates AI at work. Instead, the challenge is to fit the new technology of AI into the pre-existing legal framework, and courts have not yet produced any case law on AI at work. The standing of the law regarding AI at work is therefore not entirely clear.

I will break this lecture down into five parts: employment protection when AI replaces humans; AI as an employer; workplace safety issues when working alongside robots; equal treatment, or whether robots can discriminate; and lastly, data protection and surveillance issues.

Let's start with employment protection. The introduction of AI and robotics into working life implies that some jobs will disappear and new ones will be created. AI can make workers redundant, and labor law does not hinder an employer from replacing workers with robots and AI.
The employer's decision to implement AI and robots results in workers being redundant for economic, technological, and structural reasons, and this constitutes just cause for termination of the employment contract. In some jurisdictions, a seniority principle governs the order in which employees are terminated, so that workers with shorter periods of employment are terminated before those with longer periods of employment. This is usually referred to as the last-in, first-out principle.

Since the decision to implement AI and robots in the workplace falls within the employer's managerial prerogative, workers must accept working alongside robots. Refusing to do so would provide the employer with just cause for termination of employment based on reasons pertaining to the individual worker. A worker must stay up to date with technological changes to work processes. A key policy goal is therefore to retrain workers in jobs that are likely to disappear and to help them transition into other professions.

Let's now move on to AI as an employer. Is it possible for an AI to represent the employer at work? An algorithm or robot can perform the role of a manager at work to the extent that its actions can be construed as emanating from a human who legally holds the power to allot and direct work. The employee and the employer can stipulate in the employment contract that an algorithm will represent the will of the employer and that the employee is to receive binding instructions from it. The legal responsibility for the actions of the algorithm is borne by the employer, and the instructions given must respect labor law as well as the terms of the contract, for example the boundaries of the duty to perform work. AI systems in a management role must not breach data protection law, such as the right to transparency regarding processing and the right not to be profiled or subjected to decisions based solely on automated means.
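Returning for a moment to the last-in, first-out principle described under employment protection: at bottom it is a simple ordering rule on length of service. The following minimal sketch illustrates the idea; the `Employee` record, field names, and example dates are invented for illustration, and real seniority rules involve exceptions and negotiated departures that this ignores.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Employee:
    name: str
    hired: date  # start of the current period of employment

def lifo_termination_order(staff: list[Employee]) -> list[Employee]:
    """Order employees for redundancy under a last-in, first-out rule:
    the most recently hired come first in the termination order."""
    return sorted(staff, key=lambda e: e.hired, reverse=True)

staff = [
    Employee("Ada", date(2015, 3, 1)),
    Employee("Ben", date(2021, 9, 15)),
    Employee("Cleo", date(2018, 6, 1)),
]
order = lifo_termination_order(staff)
# Ben (hired 2021) is first in line; Ada (hired 2015) is last.
```

The point of the sketch is only that seniority is an objective, mechanically checkable criterion, which is one reason it can be applied by an algorithmic system at all.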
Because of the power imbalance between employer and employee, it is possible that employees, legally speaking, cannot freely consent to every type of data processing.

Let's move on to health and safety when working alongside algorithms and robots. Robots and AI in the workplace present both challenges and opportunities for health and safety at work. To the extent it is reasonably practicable, the employer is required to ensure that the workplace, machinery, equipment, and processes under his or her control are safe and without risk to health. Firstly, algorithms and robots can be useful for workers engaged in dangerous work; it might very well be reasonably practicable to demand that the employer implement the assistance of this type of new technology at work. Secondly, because of the autonomous and possibly unpredictable behavior of robots and algorithms, humans working alongside thinking machines might experience new forms of stress and mental health risks. Employers are obliged to take measures to reduce these novel risks. Health and safety law provides workers with the right to training on new machinery and algorithms, and should a worker be injured by a robot, it would count as an occupational injury. Most existing legislation on health and safety at work operates under the assumption that machines and robots present dangers to workers and that there should be a safe distance between the two. Health and safety law must therefore be updated to take into account the implications of humans working closely with robots and AI.

Let's move on to equal treatment and questions regarding AI and discrimination. AI systems might be involved in the hiring and firing of workers and in the management of the workforce. The use of AI can present new problems regarding both direct and indirect discrimination. Applicants for a position, as well as workers, are protected against direct discrimination.
That is, being treated less favorably in a comparable situation because they have a protected characteristic, for example race or gender. An algorithm engaged in management must be instructed not to discriminate in this way. Indirect discrimination is also prohibited. This means that it is not allowed to implement a policy that applies in the same way to everybody but in effect disadvantages a group of people who share a protected characteristic; a policy that applies equally can still be discriminatory. Requirements concerning height or language proficiency might constitute indirect discrimination on the grounds of sex and ethnicity, respectively.

AI must not be allowed to reproduce prejudices possibly held by the people who constructed the system. The algorithm must be instructed not to ask questions that are irrelevant to the particular context, for example a hiring process or the setting of wages. Since AI and machine learning collect and process data on historical events, it is key that algorithms are programmed in a way that does not perpetuate historical biases and exclusionary practices. A company's previous recruitment practices might have favored a particular category of candidates, and the algorithm must not be allowed to carry this practice into future recruitment. This is particularly important because of the widely held notion that machines always operate in an objective and neutral manner.

Now, for the last part, data protection and surveillance issues. AI, algorithms, and robots must, in order to operate and learn, collect and process vast amounts of data, and in the context of the workplace this information is personal data pertaining to the employees: their personnel records, their past work performance, and so on. AI systems must respect data protection legislation, which prescribes rights and duties on the part of the employer and the employee.
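One common way to audit a hiring algorithm for the kind of indirect discrimination discussed above is to compare selection rates across groups: a facially neutral rule that selects one group at a much lower rate than another is a warning sign. The sketch below is illustrative only; the group labels and counts are invented, and where to draw the line (the 0.8 threshold sometimes used in practice, for instance in the US "four-fifths" guideline) is a matter of the applicable law, not of the code.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, applicants); returns the
    selection rate for each group."""
    return {g: sel / apps for g, (sel, apps) in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest group selection rate to the highest.
    Values well below 1.0 suggest the apparently neutral rule
    disadvantages one group in practice."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes of an automated screening step
outcomes = {"group_a": (40, 100), "group_b": (20, 100)}
ratio = disparate_impact_ratio(outcomes)  # 0.2 / 0.4 = 0.5
```

A low ratio does not by itself establish indirect discrimination, since the employer may be able to justify the rule; it only flags that a legal assessment is needed.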
At the workplace, AI and robotics often presuppose that employees are subjected to different kinds of surveillance while working. An employer is allowed to implement surveillance systems at work, but these must respect employee privacy and be proportional, in the individual instance, to a legitimate overriding interest on the part of the employer, and employees must be informed of the surveillance in advance. AI systems must not breach employees' right to privacy at work.

To sum up: everything that labor law prohibits an ordinary human employer from doing is also not allowed for an algorithm. The employer is, legally speaking, responsible for the actions of algorithms and robots. AI and robotics must be implemented in the workplace in a way that complies with health and safety law, anti-discrimination legislation, data protection legislation, and workers' rights to personal integrity. AI reaches into many areas of labor protection and the regulation of the workplace. Labor lawyers must continue to engage with the topic of AI to ensure that the goal of labor protection can be realized also in a future context of big data, robotics, the Internet of Things, and AI. It is also important that labor law responds to the call for a human-centered vision for AI put forward by, for example, international organizations such as the OECD, which asks that governments work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers and the quality of jobs, to foster entrepreneurship and productivity, and to aim to ensure that the benefits from AI are broadly and fairly shared.