Machine Learning Tutorial: The Multinomial Logistic Regression (Softmax Regression)

In the previous two machine learning tutorials, we examined the Naive Bayes and Max Entropy classifiers. In this tutorial we will discuss Multinomial Logistic Regression, also known as Softmax Regression. Implementing Multinomial Logistic Regression in a conventional programming language such as C++, PHP or Java is fairly straightforward, even though an iterative algorithm is required to estimate the parameters of the model.

Update: The Datumbox Machine Learning Framework is now open-source and free to download. Check out the package com.datumbox.framework.machinelearning.classification to see the implementation of SoftMax Regression Classifier in Java.

What is the Multinomial Logistic Regression?

The Multinomial Logistic Regression, also known as SoftMax Regression due to the hypothesis function that it uses, is a supervised learning algorithm which can be used in several problems, including text classification. It is a regression model which generalizes logistic regression to classification problems where the output can take more than two possible values. We should note that Multinomial Logistic Regression is closely related to the MaxEnt algorithm because it uses the same activation function. Nevertheless, in this article we will present the method in a different context than we did in the Max Entropy tutorial.

When to use Multinomial Logistic Regression?

Multinomial Logistic Regression requires significantly more time to train compared to Naive Bayes, because it uses an iterative algorithm to estimate the parameters of the model. Once these parameters have been computed, SoftMax Regression is competitive in terms of CPU and memory consumption. Softmax Regression is preferred when we have features of different types (continuous, discrete, dummy variables etc.); nevertheless, since it is a regression model, it is more vulnerable to multicollinearity problems and thus should be avoided when our features are highly correlated.

Theoretical Background of Softmax Regression

As with Max Entropy, we will present the algorithm in the context of document classification, using the contextual information of the document to assign it to a certain class. Let our training dataset consist of m (x_i, y_i) pairs and let k be the number of all possible classes. Also, using the bag-of-words framework, let {w_1, …, w_n} be the set of n words that can appear within our texts.

The SoftMax Regression model requires the estimation of a coefficient θ for every word-and-category combination. The sign and magnitude of this coefficient indicate whether the presence of the particular word within a document has a positive or negative effect on its classification to the category. In order to build our model we need to estimate these parameters. (Note that the θ_i vector stores the coefficients of the ith category for each of the n words, plus one coefficient for the intercept term.)

In accordance with what we did previously for Max Entropy, all documents within our training dataset will be represented as vectors of 0s and 1s that indicate whether each word of our vocabulary exists within the document. In addition, every vector will include an extra “1” element for the intercept term.
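As a minimal sketch of this representation (the helper function and the toy vocabulary below are hypothetical, for illustration only, and not part of the Datumbox framework), a document can be converted into such a binary vector as follows:

```python
def to_feature_vector(document, vocabulary):
    """Represent a document as 1/0 indicators, one per vocabulary word,
    prepended with a constant 1 for the intercept term."""
    words = set(document.lower().split())
    return [1] + [1 if w in words else 0 for w in vocabulary]

vocab = ["machine", "learning", "pizza"]  # toy vocabulary
vec = to_feature_vector("Machine learning is fun", vocab)
# vec -> [1, 1, 1, 0]: intercept, "machine" present, "learning" present, "pizza" absent
```

Note that word counts are deliberately discarded; only presence or absence is recorded, matching the 0/1 encoding described above.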

In Softmax Regression, the probability that a given document x is classified as y is equal to:

P(y^{(i)} = j \mid x^{(i)}; \theta) = \frac{\exp(\theta_j^T x^{(i)})}{\sum_{l=1}^{k} \exp(\theta_l^T x^{(i)})}    [1]

Given that we have estimated the aforementioned θ parameters, for a new document x our hypothesis function needs to estimate the above probability for each of the k possible classes. Thus the hypothesis function returns a k-dimensional vector with the estimated probabilities:

h_\theta(x^{(i)}) = \begin{bmatrix} P(y^{(i)}=1 \mid x^{(i)};\theta) \\ \vdots \\ P(y^{(i)}=k \mid x^{(i)};\theta) \end{bmatrix} = \frac{1}{\sum_{l=1}^{k} \exp(\theta_l^T x^{(i)})} \begin{bmatrix} \exp(\theta_1^T x^{(i)}) \\ \vdots \\ \exp(\theta_k^T x^{(i)}) \end{bmatrix}    [2]

By using the “maximum a posteriori” decision rule, when we classify a new document we will select the category with the highest probability.
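To make the decision rule concrete, here is a minimal Python sketch (the scores are hypothetical values standing in for θ_j^T x; this is an illustration, not the framework's implementation) of turning raw class scores into softmax probabilities and picking the most probable class:

```python
import math

def softmax(scores):
    """Convert raw class scores (theta_j^T x) into probabilities.
    Subtracting the max score first keeps exp() from overflowing."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for k = 3 classes
probs = softmax([2.0, 1.0, 0.1])
predicted_class = probs.index(max(probs))  # maximum a posteriori rule
```

The probabilities always sum to 1, and the class with the largest raw score is always the one selected, which is why in practice the argmax can be taken over the scores directly.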

In our Multinomial Logistic Regression model we will use the following cost function and try to find the θ parameters that minimize it:

J(\theta) = -\frac{1}{m} \left[ \sum_{i=1}^{m} \sum_{j=1}^{k} 1\{y^{(i)} = j\} \log \frac{\exp(\theta_j^T x^{(i)})}{\sum_{l=1}^{k} \exp(\theta_l^T x^{(i)})} \right]    [3]

Unfortunately, there is no known closed-form way to estimate the parameters that minimize the cost function, so we need to use an iterative algorithm such as gradient descent. The iterative algorithm requires estimating the partial derivative of the cost function, which is equal to:

\nabla_{\theta_j} J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ x^{(i)} \left( 1\{y^{(i)} = j\} - P(y^{(i)} = j \mid x^{(i)}; \theta) \right) \right]    [4]

By using the batch gradient descent algorithm we estimate the θ parameters as follows:

1. Initialize vector θ_j with 0 in all elements
2. Repeat until convergence {
       θ_j ← θ_j − α ∇_{θ_j} J(θ)    (for every j)
   }
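The procedure above can be sketched in plain Python (a minimal illustration under the article's binary bag-of-words setup; the function names and the toy dataset are hypothetical, and this is not the Datumbox implementation):

```python
import math

def softmax(scores):
    """Probabilities from raw class scores; shift by max for numerical stability."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def train_softmax(X, y, k, alpha=0.5, iterations=200):
    """Batch gradient descent for softmax regression.

    X: feature vectors, each starting with a constant 1 (intercept term).
    y: class labels in {0, ..., k-1}.
    Returns theta: one coefficient vector per class."""
    n = len(X[0])
    theta = [[0.0] * n for _ in range(k)]          # step 1: initialize with zeros
    for _ in range(iterations):                    # step 2: repeat until convergence
        grads = [[0.0] * n for _ in range(k)]
        for xi, yi in zip(X, y):
            scores = [sum(t * v for t, v in zip(theta[j], xi)) for j in range(k)]
            probs = softmax(scores)
            for j in range(k):
                err = (1.0 if yi == j else 0.0) - probs[j]   # 1{y=j} - P(y=j|x)
                for d in range(n):
                    grads[j][d] += xi[d] * err
        # theta_j <- theta_j - alpha * grad_j; the gradient of J carries a minus
        # sign, so the update adds the accumulated error term divided by m.
        for j in range(k):
            for d in range(n):
                theta[j][d] += alpha * grads[j][d] / len(X)
    return theta

def predict(theta, x):
    """Maximum a posteriori rule: pick the class with the highest score."""
    scores = [sum(t * v for t, v in zip(tj, x)) for tj in theta]
    return scores.index(max(scores))

# Toy training set: binary word indicators plus intercept, two classes
X = [[1, 1, 0], [1, 0, 1], [1, 1, 0], [1, 0, 1]]
y = [0, 1, 0, 1]
theta = train_softmax(X, y, k=2)
```

A fixed iteration count stands in for a real convergence test here; in practice one would stop when the change in J(θ) falls below a tolerance.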

After estimating the θ parameters, we can use our model to classify new documents. Finally, we should note that, as we discussed in a previous article, using Gradient Descent “as is” is never a good idea. Adapting the learning rate and normalizing the feature values before performing the iterations are ways to improve convergence and speed up execution.


My name is Vasilis Vryniotis. I’m a Data Scientist, a Software Engineer, author of the Datumbox Machine Learning Framework and a proud geek.

Flexible expressions could lift 3D-generated faces out of the uncanny valley


3D-rendered faces are a big part of any major movie or game now, but the task of capturing and animating them in a natural way can be a tough one. Disney Research is working on ways to smooth out this process, among them a machine learning tool that makes it much easier to generate and manipulate 3D faces without dipping into the uncanny valley.

Of course this technology has come a long way from the wooden expressions and limited details of earlier days. High-resolution, convincing 3D faces can be animated quickly and well, but the subtleties of human expression are not just limitless in variety, they’re very easy to get wrong.

Think of how someone’s entire face changes when they smile — it’s different for everyone, but there are enough similarities that we fancy we can tell when someone is “really” smiling or just faking it. How can you achieve that level of detail in an artificial face?

Existing “linear” models simplify the subtlety of expression, making “happiness” or “anger” minutely adjustable, but at the cost of accuracy — they can’t express every possible face, but can easily result in impossible faces. Newer neural models learn complexity from watching the interconnectedness of expressions, but like other such models their workings are obscure and difficult to control, and perhaps not generalizable beyond the faces they learned from. They don’t enable the level of control an artist working on a movie or game needs, or result in faces that (humans are remarkably good at detecting this) are just off somehow.

A team at Disney Research proposes a new model with the best of both worlds — what it calls a “semantic deep face model.” Without getting into the exact technical execution, the basic improvement is that it’s a neural model that learns how a facial expression affects the whole face, but is not specific to a single face — and moreover is nonlinear, allowing flexibility in how expressions interact with a face’s geometry and each other.

Think of it this way: A linear model lets you take an expression (a smile, or kiss, say) from 0-100 on any 3D face, but the results may be unrealistic. A neural model lets you take a learned expression from 0-100 realistically, but only on the face it learned it from. This model can take an expression from 0-100 smoothly on any 3D face. That’s something of an over-simplification, but you get the idea.

Image Credits: Disney Research

The results are powerful: You could generate a thousand faces with different shapes and tones, and then animate all of them with the same expressions without any extra work. Think how that could result in diverse CG crowds you can summon with a couple clicks, or characters in games that have realistic facial expressions regardless of whether they were hand-crafted or not.

It’s not a silver bullet, and it’s only part of a huge set of improvements artists and engineers are making in the various industries where this technology is employed — markerless face tracking, better skin deformation, realistic eye movements and dozens more areas of interest are also important parts of this process.

The Disney Research paper was presented at the International Conference on 3D Vision; you can read the full thing here.

Europe sets out the rules of the road for its data reuse plan


European Union lawmakers have laid out a major legislative proposal today to encourage the reuse of industrial data across the Single Market by creating a standardized framework of trusted tools and techniques to ensure what they describe as “secure and privacy-compliant conditions” for sharing data.

Enabling a network of trusted and neutral data intermediaries, and an oversight regime comprised of national monitoring authorities and a pan-EU coordinating body, are core components of the plan.

The move follows the European Commission’s data strategy announcement in February, when it said it wanted to boost data reuse to support a new generation of data-driven services powered by data-hungry artificial intelligence, as well as encouraging the notion of using “tech for good” by enabling “more data and good quality data” to fuel innovation with a common public good (like better disease diagnostics) and improve public services.

The wider context is that personal data is already regulated in the bloc (such as under the General Data Protection Regulation, or GDPR), which restricts reuse, while commercial considerations can limit how industrial data is shared.

The EU’s executive believes harmonised requirements that set technical and/or legal conditions for data reuse are needed to foster legal certainty and trust — delivered via a framework that promises to maintain rights and protections and thus get more data usefully flowing.

The Commission sees major business benefits flowing from the proposed data governance regime. “Businesses, both small and large, will benefit from new business opportunities as well as from a reduction in costs for acquiring, integrating and processing data, from lower barriers to enter markets, and from a reduction in time-to-market for novel products and services,” it writes in a press release.

It has further data-related proposals incoming in 2021, in addition to a package of digital services legislation it’s due to lay out early next month — as part of a wider reboot of industrial strategy which prioritises digitalization and a green new deal.

All legislative components of the strategy will need to gain the backing of the European Council and parliament so there’s a long road ahead for implementing the plan.

Data Governance Act

EU lawmakers often talk in shorthand about the data strategy being intended to encourage the sharing and reuse of “industrial data” — although the Data Governance Act (DGA) unveiled today has a wider remit.

The Commission envisages the framework enabling the sharing of data that’s subject to data protection legislation — which means personal data; where privacy considerations may (currently) restrain reuse — as well as industrial data subject to intellectual property, or which contains trade secrets or other commercially sensitive information (and is thus not typically shared by its creators primarily for commercial reasons).

In a press conference on the data governance proposals, internal market commissioner Thierry Breton floated the notion of “data altruism” — saying the Commission wants to provide citizens with an organized way to share their own personal data for a common/public good, such as aiding research into rare diseases or helping cities map mobility for purposes like monitoring urban air quality.

“Through personal data spaces, which are novel personal information management tools and services, Europeans will gain more control over their data and decide on a detailed level who will get access to their data and for what purpose,” the Commission writes in a Q&A on the proposal.

It’s planning a public register where entities will be able to register as a “data altruism organisation” — provided they have a not-for-profit character; meet transparency requirements; and implement certain safeguards to “protect the rights and interests of citizens and companies” — with the aim of providing “maximum trust with minimum administrative burden”, as it puts it.

The DGA envisages different tools, techniques and requirements governing how public sector bodies share data versus private companies.

For public sector bodies there may be technical requirements (such as encryption or anonymization) attached to the data itself or further processing limitations (such as requiring it to take place in “dedicated infrastructures operated and supervised by the public sector”), as well as legally binding confidentiality agreements that must be signed by the reuser.

“Whenever data is being transferred to a reuser, mechanisms will be in place that ensure compliance with the GDPR and preserve the commercial confidentiality of the data,” the Commission’s PR says.

To encourage businesses to get on board with pooling their own data sets — for the promise of a collective economic upside via access to bigger volumes of pooled data — the plan is for regulated data intermediaries/marketplaces to provide “neutral” data-sharing services, acting as the “trusted” go-between/repository so data can flow between businesses.

“To ensure this neutrality, the data-sharing intermediary cannot exchange the data for its own interest (e.g. by selling it to another company or using it to develop their own product based on this data) and will have to comply with strict requirements to ensure this neutrality,” the Commission writes on this.

Under the plan, intermediaries’ compliance with data handling requirements would be monitored by public authorities at a national level.

But the Commission is also proposing the creation of a new pan-EU body, called the European Data Innovation Board, that would try to knit together best practices across Member States — in what looks like a mirror of the steering/coordinating role undertaken by the European Data Protection Board (which links up the EU’s patchwork of data protection supervisory authorities).

“These data brokers or intermediaries that will provide for data sharing will do that in a way that your rights are protected and that you have choices,” said EVP Margrethe Vestager, who heads up the bloc’s digital strategy, also speaking at today’s press conference.

“So that you can also have personal data spaces where your data is managed. Because, initially, when you ask people they say well actually we do want to share but we don’t really know how to do it. And this is not only the technicalities — it’s also the legal certainty that’s missing. And this proposal will provide that,” she added.

Data localization requirements — or not?

The commissioners faced a number of questions over the hot button issue of international data transfers.

Breton was asked whether the DGA will include any data localization requirements. He responded by saying — essentially — that the rules will bake in a series of conditions which, depending on the data itself and the intended destination, may mean that storing and processing the data in the EU is the only viable option.

“On data localization — what we do is to set a GDPR-type of approach, through adequacy decisions and standard contractual clauses for only sensitive data through a cascading of conditions to allow the international transfer under conditions and in full respect of the protected nature of the data. That’s really the philosophy behind it,” Breton said. “And of course for highly sensitive data [such as] in the public health domain it is necessary to be able to set further conditions, depending on the sensitivity, otherwise… Member States will not share them.”

“For instance it could be possible to limit the reuse of this data into public secure infrastructures so that companies will come to use the data but not keep them. It could be also about restricting the number of access in third countries, restricting the possibility to further transfer the data and if necessary also prohibiting the transfer to a third country,” he went on, adding that such conditions would be “in full respect” of the EU’s WTO obligations.

In a section of its Q&A that deals with data localization requirements, the Commission similarly dances around the question, writing: “There is no obligation to store and process data in the EU. Nobody will be prohibited from dealing with the partner of their choice. At the same time, the EU must ensure that any access to EU citizen’s personal data and certain sensitive data is in compliance with its values and legislative framework.”

At the presser, Breton also noted that companies that want to gain access to EU data that’s been made available for reuse will need to have legal representation in the region. “This is important of course to ensure the enforceability of the rules we are setting,” he said. “It is very important for us — maybe not for other continents but for us — to be fully compliant.”

The commissioners also faced questions about how the planned data reuse rules would be enforced — given ongoing criticism over the lack of uniformly vigorous enforcement of Europe’s data protection framework, GDPR.

“No rule is any good if not enforced,” agreed Vestager. “What we are suggesting here is that if you have a data-sharing service provider and they have notified themselves it’s then up to the authority with whom they have notified actually to monitor and to supervise the compliance with the different things that they have to live up to in order to preserve the protection of these legitimate interests — could be business confidentiality, could be intellectual property rights.

“This is a thing that we will keep on working on also in the future proposals that are upcoming — the Digital Services Act and the Digital Markets Act — but here you have sort of a precursor that the ones who receive the notification in Member States they will also have to supervise that things are actually in order.”

Also responding on the enforcement point, Breton suggested enforcement would be baked in up front, such as by careful control of who could become a data reuse broker.

“[Firstly] we are putting forward common rules and harmonized rules… We are creating a large internal market for data. The second thing is that we are asking Member States to create specific authorities to monitor. The third thing is that we will ensure coherence and enforcement through the European Data Innovation Board,” he said. “Just to give you an example… enforcement is embedded. To be a data broker you will need to fulfil a certain number of obligations and if you fulfil these obligations you can be a neutral data broker — if you don’t

Alongside the DGA, the Commission also announced an Intellectual Property Action Plan.

Vestager said this aims to build on the EU’s existing IP framework with a number of supportive actions — including financial support for SMEs involved in the Horizon Europe R&D program to file patents.

The Commission is also considering whether to reform the framework for filing standards essential patents. But in the short term Vestager said it would aim to encourage industry to engage in forums aimed at reducing litigation.

“One example could be that the Commission could set up an independent system of third party essentiality checks in view of improving legal certainty and reducing litigation costs,” she added of the potential reform, noting that protecting IP is an important component of the bloc’s industrial strategy.

How Do You Differentiate AI From Automation?


Many of us use the terms artificial intelligence (AI) and automation interchangeably to describe the technological takeover of human-operated processes, and many of us would stare blankly at anyone who asked the difference between the two. I know I did when I was asked.

It has become common, even in professional settings, to use these words interchangeably to describe innovative advances in regular processes. In actuality, however, these terms are not as similar as you might think; there are huge differences in the intricacy levels of the two.

While automation means making software or hardware that can get things done automatically with minimal human intervention, artificial intelligence refers to making machines intelligent. Automation is suitable for repetitive, daily tasks and may require minimal or no human intervention.

It is based on specific programming and rules. If an organization wishes to turn this automation into AI, it needs to power it with data. Such data is termed big data, and it is processed with techniques such as machine learning, graphs, and neural networks. The output of automation is deterministic; AI, however, carries the risk of uncertainty, just like a human brain.

AI and automation both play a vital role in the modern workplace thanks to the availability of vast data and rapid technological development. Although a Gartner survey claims that more than one-third (37%) of organizations use artificial intelligence in some form, these figures do not account for the complexities of implementation.

While it is true that both of these advancements make our work easier, many employees believe that AI and automation are here to take over their jobs. Job loss is an unavoidable phenomenon and is going to take place, automation or not. However, the evolved jobs replacing traditional ones are going to be more engaging and productive than the outdated ones.

Stephen Hawking said, “The development of full artificial intelligence could spell the end of the human race.”

Automation

Look around. You’ll find yourself surrounded by automated systems. The reason you don’t have to wait long hours at the bank, or rewrite the same mail a thousand times, is automation. Automation’s sole purpose is to let machines take over repetitive, tedious, and monotonous tasks.

The primary benefit of employing automation in your business processes is that it frees up the employees’ time, enabling them to focus on more critical tasks. These tasks are those that require personal skill or human judgment. The secondary benefit is the efficiency of business with reduced cost and a productive workforce.

Organizations are more open to adopting automated machinery despite its high installation charges because the machinery never requires a sick leave or a holiday. It always gets the work done on time, without a break.

The point that differentiates it from AI is that these machines are all piloted by manual configuration. That is, you have to configure your automation system to suit your organization’s needs and requirements. At bottom, it is nothing more than a machine with just enough smarts to follow orders.

Artificial Intelligence

We want machinery that can replicate the human thought process, but we do not wish to experience real-life versions of Interstellar, The Matrix, or Wall-E. That is a precise summation of AI: assisting human life without taking control of it. It is a technology that mimics what a human can say, do, and think, but is not constrained by natural limitations like age and death.

Unlike automation, AI is not compatible with repetitive tasks or following orders. Its purpose is to seek patterns, learn from experience, and select the appropriate responses in certain situations without depending on human intervention or guidance.

“According to Market and Markets, by 2025 the AI industry will grow into a $190 billion industry.”

Differences In AI And Automation

As discussed above, people use the terms – AI and Automation – interchangeably. These terminologies have different objectives. AI’s main objective is to create brilliant machines to carry out the tasks that require intelligent thinking. It is the engineering and science of making devices so smart that they can mimic human behaviour and intelligence.

AI creates technology that enables computers and machines to think and behave like humans and learn from them. On the contrary, automation focuses on simplifying and speeding up the routine, repetitive tasks to increase the efficiency and quality of the output with minimum to no human intervention. Besides their terminology and objectives, AI and automation differ on the following basis.

1. Meaning: Automation is a pre-set program that runs on its own to perform specific tasks; AI is the engineering of systems with human-like thinking capability.
2. Purpose: Automation helps employees by automating routine, repetitive, and monotonous processes to save time; AI helps employees by providing machines that can carry out tasks requiring human-like intelligent thinking and decision making.
3. Nature of tasks performed: Automation performs repetitive, routine, and monotonous tasks; AI performs more intelligent and critical tasks that require the thinking and judgment of the human brain.
4. Added features: Automation has no highly exclusive added features; AI involves self-learning and development from experience.
5. Human intervention: Automation may require a little human intervention (e.g. to switch the system on/off); AI requires no human intervention, as it takes the necessary information from the data and self-learns from experience or data feeds.

How Are AI And Automation Connected?

Now that we have seen what differentiates them from each other and have understood the meaning of each individually, let’s see what similarities they hold.

There is one single thing that drives both AI and automation, and that is data. The automated devices collect and combine the data while the systems with artificial intelligence understand it. Indeed, the success or failure of a company depends on numerous factors like increased productivity, employees’ ability to contribute to the organization’s expansion, and business efficiency.

However, the factor more significant than any of these is data. The automated machinery relentlessly and obsessively feeds on the data. With artificial intelligence employed in the systems and automated machines chewing on the data, the companies can make smarter business decisions than before.

The two systems are highly compatible, and businesses flourish at an entirely different level when the two are combined. Take the example of a modern cloud-based payroll solution. The software calculates and allocates payroll with the help of programming. AI comes into the picture by taking the data of individual employees, passing it to the automated calculation system, and then transferring the calculated amounts into individual employees’ accounts.

It coordinates with software such as leave management or attendance management to maintain accuracy in the calculation process. The program calculates as it is programmed to, without checking whether the data is correct. It is AI that sorts the information and feeds relevant data to the program to calculate the payroll, thereby becoming the “brain” of the software.

A combination of AI and automation can produce software that requires no human intervention and delivers 100% accuracy and lawful compliance of the process. That, however, is just the tip of the iceberg. Imagine how powerful organizations could become in the future by coupling machines capable of collecting massive amounts of data with systems that can brilliantly make that information meaningful.

The famous futurologist, visionary and CEO of Tesla, Elon Musk, said that robots and AI will be able to do everything better than us, creating the biggest risk that we face as a civilization.

Final Thoughts

When we think about how technology has changed our lives, we can see that technologies such as automation and AI are becoming dominant forms of brilliance and embedding themselves in our environments. AI has transformed from being our assistant into something far more powerful, while automated systems are swiftly outperforming us in many pursuits.

Technology is all about exploring possibilities, and that is precisely what AI and automation do: they explore new opportunities to outsmart the human brain. That being said, artificial intelligence is about making machines smart enough to supersede human brilliance and behaviour, while automation simplifies and speeds up processes with minimal or zero human intervention.

MIT Study: Effects of Automation on the Future of Work Challenges Policymakers


By John P. Desmond, AI Trends Editor

Rising productivity brought on by automation has not led to an increase in income for workers. This is among the conclusions of the 2020 report from the MIT Task Force on the Work of the Future, founded in 2018 to study the relationship between emerging technologies and work, shape public discourse, and explore strategies to enable shared prosperity.

“Wages have stagnated,” said Dr. Elisabeth Reynolds, Executive Director, MIT Task Force on the Work of the Future, who shared results of the new task force report at the AI and the Work of the Future Congress 2020 held virtually last week.

The report made recommendations in three areas, the first around translating productivity gains from advances in automation into better quality jobs. “The quality of jobs in this country has been falling and not keeping up with those in other countries,” she said. “Among rich countries, the US is among the worst places for less educated and low-paid workers.” For example, the average hourly wage for low-paid workers in the US is $10/hour, compared to $14/hour for similar workers in Canada, who also have health care benefits from national insurance.

“Our workers are falling behind,” she said.

The second area of recommendation was to invest and innovate in education and skills training. “This is a pillar of our strategy going forward,” Reynolds said. The report focuses on workers between high school and a four-year degree. “We focus on multiple examples to help workers find the right path to skilled jobs,” she said.

Many opportunities are emerging in health care, for example, specifically around health information technicians. She cited the IBM P-TECH program, which provides public high school students from underserved backgrounds with skills they need for competitive STEM jobs, as a good example of education innovation. P-TECH schools enable students to earn both their high school diploma and a two year associate degree linked to growing STEM fields.

The third area of recommendation is to shape and expand innovation.

“Innovation creates jobs and will help the US meet competitive challenges from abroad,” Reynolds said. R&D funding as a percent of GDP in the US has stayed fairly steady from 1953 to 2015, but support from the federal government has declined over that time. “We want to see greater activity by the US government,” she said.

In a country that is politically divided and economically polarized, many have a fear of technology. Deploying new technology into the existing labor market has the potential to make such divisions worse, continuing downward pressure on wages, skills and benefits, and widening income inequality. “We reject the false tradeoffs between economic growth and having a strong labor market,” Dr. Reynolds said. “Other countries have done it better and the US can do it as well,” she said, noting many jobs that exist today did not exist 40 years ago.

The COVID-19 crisis has exacerbated the different realities between low-paid workers deemed “essential” needing to be physically present to earn their livings, and higher-paid workers able to work remotely via computers, the report noted.

The Task Force is co-chaired by MIT Professors David Autor, Economics, and David Mindell, Engineering, in addition to Dr. Reynolds. Members of the task force include more than 20 faculty members drawn from 12 departments at MIT, as well as over 20 graduate students. The 2020 Report can be found here.

Low-Wage Workers in US Fare Less Well Than Those in Other Advanced Countries

In a discussion on the state of low-wage jobs, James Manyika, Senior Partner, McKinsey & Co., said low-wage workers have not fared well across the 37 countries of the Organization for Economic Cooperation and Development (OECD), “and in the US, they have fared far worse than in other advanced countries,” he said. Jobs are available, but the wages are lower and, “Work has become a lot more fragile,” with many jobs in the gig worker economy (Uber, Lyft for example) and not full-time jobs with some level of benefits.

Addressing cost of living, Manyika said the cost of products such as cars and TVs have declined as a percentage of income, but costs of housing, education and health care have increased dramatically and are not affordable for many. The growth in the low-wage gig-worker type of job has coincided with “the disappearance of labor market protections and worker voice,” he said, noting, “The power of workers has declined dramatically.”

Geographically, two-thirds of US job growth has happened in 25 metropolitan areas. “Other parts of the country have fared far worse,” he said. “This is a profound challenge.”

In a session on Labor Market Dynamics, Susan Houseman, VP and Director of Research, W.E. Upjohn Institute for Employment Research, drew a comparison to Denmark for some contrasts. Denmark has a strong safety net of benefits for the unemployed, while the US has “one of the least generous unemployment systems in the world,” she said. “This will be more important in the future with the growing displacement caused by new technology.”

Another contrast between the US and Denmark is the relationship of labor to management. “The Danish system has a long history of labor management cooperation, with two-thirds of Danish workers in a union,” she said. “In the US, unionization rates have dropped to 10%.”

“We have a long history of labor management confrontation and not cooperation,” Houseman said. “Unions have really been weakened in the US.”

As for recommendations, she suggested that the US strengthen its unemployment systems, help labor organizations to build, raise the federal minimum wage (set at $7.25/hour since 2009), and provide universal health insurance, “to take it out of the employment market.”

She suspects the number of workers designated as independent contractors is “likely understated” in the data.

Jayaraman of One Fair Wage Recommends Sectoral Bargaining

Later in the day, Saru Jayaraman, President of One Fair Wage and Director of the Food Labor Research Center at the University of California, Berkeley, spoke about her work with employees and business owners. One Fair Wage is a non-profit organization that advocates for a fair minimum wage, including, for example, a suggestion that tips be counted as a supplement to minimum wage for restaurant workers.

“We fight for higher wages and better working conditions, but it’s more akin to sectoral bargaining in other parts of the world,” she said. Sectoral collective bargaining is an effort to reach an agreement covering all workers in a sector of the economy, rather than agreements between workers and individual firms. “It is a social contract,” Jayaraman said.

In France, 98% of workers were covered by sectoral bargaining as of 2015. “The traditional models for improving wages and working conditions workplace by workplace do not work,” she said. She spoke of the need to maintain a “consumer base” of workers who put money back into the economy.

With the pandemic causing many restaurants to scale back or close, more restaurant owners have reached out to her organization in an effort to get workers back with the help of more cooperative agreements. “We have been approached by hundreds of restaurants in the last six months who are saying it’s time to change to a minimum wage,” she said. “Many were moved that so many workers were not getting unemployment insurance. They are rethinking every aspect of their businesses. They want a more functional system where everybody gets paid, and we move away from slavery. It’s a sea change among employers.”

She said nearly 800 restaurants are now members of her organization.

For the future, “We don’t have time for each workplace to be organized. We need to be innovating with sectoral bargaining to raise wages and working conditions across sectors. That is the future combined with workplace organizing,” she said.

Read the 2020 report from the MIT Task Force on the Future of Work; learn about IBM P-TECH and about One Fair Wage.