Dark Energy: Map Gives Clue About What It Is—but Deepens Dispute About the Cosmic Expansion Rate

Dark energy is one of the greatest mysteries in science today. We know very little about it, other than it is invisible, it fills the whole universe, and it pushes galaxies away from each other. This is making our cosmos expand at an accelerated rate. But what is it? One of the simplest explanations is that it is a “cosmological constant”—a result of the energy of empty space itself—an idea introduced by Albert Einstein.

Many physicists aren’t satisfied with this explanation, though. They want a more fundamental description of its nature. Is it some new type of energy field or exotic fluid? Or is it a sign that Einstein’s equations of gravity are somehow incomplete? What’s more, we don’t really understand the universe’s current rate of expansion.

Now our project, the extended Baryon Oscillation Spectroscopic Survey (eBOSS), has come up with some answers. Our work has been released as a series of 23 publications, some of which are still being peer reviewed, describing the largest three-dimensional cosmological map ever created.

Currently, the only way we can detect the presence of dark energy is through observations of the distant universe. The farther away galaxies are, the younger they appear to us, because the light they emit took millions or even billions of years to reach our telescopes. Thanks to this sort of time machine, we can measure distances in space at different cosmic times, helping us work out how quickly the universe is expanding.

Using the Sloan Digital Sky Survey telescope, we measured more than two million galaxies and quasars—extremely bright and distant objects that are powered by black holes—over the last two decades. This new map covers around 11 billion years of cosmic history that was essentially unexplored, teaching us about dark energy like never before.

The SDSS telescope. Image credit: Sloan Digital Sky Survey/Wikipedia, CC BY-SA

Our results show that about 69 percent of our universe’s energy is dark energy. They also demonstrate, once again, that Einstein’s simplest form of dark energy—the cosmological constant—best agrees with our observations.

When the information from our map is combined with other cosmological probes, such as the cosmic microwave background—the light left over from the big bang—they all seem to prefer the cosmological constant over more exotic explanations of dark energy.

Cosmic Expansion in Dispute

The results also provide a better insight into some recent controversies about the expansion rate of the universe today and about the geometry of space.

Combining our observations with studies of the universe in its infancy reveals cracks in our description of its evolution. In particular, our measurement of the current rate of expansion of the universe is about 10 percent lower than the value found using direct methods of measuring distances to nearby galaxies. Both methods claim high precision, so the difference cannot simply be a statistical fluke.
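
To put rough numbers on that gap (these are widely reported values from early-universe fits and local distance-ladder measurements, not figures taken from the eBOSS papers themselves):

$$
H_0^{\mathrm{early}} \approx 67\ \mathrm{km\,s^{-1}\,Mpc^{-1}}, \qquad
H_0^{\mathrm{local}} \approx 74\ \mathrm{km\,s^{-1}\,Mpc^{-1}}, \qquad
\frac{74 - 67}{74} \approx 10\%.
$$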

The precision of eBOSS sharpens this crisis. There is no broadly accepted explanation for the discrepancy. It may be that someone made a subtle mistake in one of these studies. Or it may be a sign that we need new physics. One exciting possibility is that a previously unknown component of the early universe left a trace on its history. This hypothetical component, known as “early dark energy,” would have been present when the universe was young and could have modified the cosmic expansion rate.

Recent studies of the cosmic microwave background have suggested that the geometry of space may be curved rather than flat, even though flatness is what the most widely accepted theory of the big bang predicts. Our study, however, concluded that space is indeed flat.

Even after these important advances, cosmologists around the world will remain puzzled by the apparent simplicity of dark energy, the flatness of space, and the disputed value of today’s expansion rate. There is only one way forward in the quest for answers: making larger and more detailed maps of the universe. Several projects are aiming to measure at least ten times more galaxies than we did.

If the maps from eBOSS were the first to explore a previously missing gap of 11 billion years of our history, the new generation of telescopes will make a high-resolution version of the same period of time. It is exciting to think that future surveys may resolve the remaining mysteries about the universe’s expansion in the next decade or so. But it would be equally exciting if they revealed more surprises.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA

Source: https://singularityhub.com/2020/07/31/dark-energy-map-gives-clue-about-what-it-is-but-deepens-dispute-about-the-cosmic-expansion-rate/


Flexible expressions could lift 3D-generated faces out of the uncanny valley


3D-rendered faces are a big part of any major movie or game now, but the task of capturing and animating them in a natural way can be a tough one. Disney Research is working on ways to smooth out this process, among them a machine learning tool that makes it much easier to generate and manipulate 3D faces without dipping into the uncanny valley.

Of course this technology has come a long way from the wooden expressions and limited details of earlier days. High-resolution, convincing 3D faces can be animated quickly and well, but the subtleties of human expression are not just limitless in variety, they’re very easy to get wrong.

Think of how someone’s entire face changes when they smile — it’s different for everyone, but there are enough similarities that we fancy we can tell when someone is “really” smiling or just faking it. How can you achieve that level of detail in an artificial face?

Existing “linear” models simplify the subtlety of expression, making “happiness” or “anger” minutely adjustable, but at the cost of accuracy — they can’t express every possible face, but can easily result in impossible faces. Newer neural models learn complexity from watching the interconnectedness of expressions, but like other such models their workings are obscure and difficult to control, and perhaps not generalizable beyond the faces they learned from. They don’t enable the level of control an artist working on a movie or game needs, or result in faces that (humans are remarkably good at detecting this) are just off somehow.

A team at Disney Research proposes a new model with the best of both worlds — what it calls a “semantic deep face model.” Without getting into the exact technical execution, the basic improvement is that it’s a neural model that learns how a facial expression affects the whole face, but is not specific to a single face — and moreover is nonlinear, allowing flexibility in how expressions interact with a face’s geometry and each other.

Think of it this way: A linear model lets you take an expression (a smile, or kiss, say) from 0-100 on any 3D face, but the results may be unrealistic. A neural model lets you take a learned expression from 0-100 realistically, but only on the face it learned it from. This model can take an expression from 0-100 smoothly on any 3D face. That’s something of an over-simplification, but you get the idea.
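
To make the contrast concrete, here is a minimal sketch of the “linear” baseline described above, written in Python. It is illustrative only: the mesh size, expression names, and random stand-in data are assumptions, and this is not Disney’s semantic deep face model.

```python
import numpy as np

N_VERTICES = 5000  # vertices in a hypothetical face mesh

# A neutral face: one xyz position per vertex (zeros as a stand-in).
neutral = np.zeros((N_VERTICES, 3))

# Each expression is a per-vertex offset from neutral (a "delta
# blendshape"). Random noise here stands in for captured data.
rng = np.random.default_rng(0)
expressions = {
    "smile": rng.normal(scale=0.01, size=(N_VERTICES, 3)),
    "kiss": rng.normal(scale=0.01, size=(N_VERTICES, 3)),
}

def blend(neutral, expressions, weights):
    """Linear blend: neutral + sum of weighted expression deltas.

    `weights` maps expression name -> slider value in [0, 1]
    (the article's 0-100 slider, rescaled).
    """
    face = neutral.copy()
    for name, w in weights.items():
        face += w * expressions[name]
    return face

# Dial the smile to 70% and the kiss to 20%. Because the model is purely
# additive, nothing stops weight combinations from producing the
# geometrically impossible faces the article mentions.
face = blend(neutral, expressions, {"smile": 0.7, "kiss": 0.2})
print(face.shape)  # (5000, 3)
```

Per the description above, the semantic deep model replaces this naive addition with a learned, nonlinear function that is not tied to a single identity, which is what lets the same slider behave sensibly on faces it was never trained on.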

Image Credits: Disney Research

The results are powerful: You could generate a thousand faces with different shapes and tones, and then animate all of them with the same expressions without any extra work. Think how that could result in diverse CG crowds you can summon with a couple clicks, or characters in games that have realistic facial expressions regardless of whether they were hand-crafted or not.

It’s not a silver bullet, and it’s only part of a huge set of improvements artists and engineers are making in the various industries where this technology is employed — markerless face tracking, better skin deformation, realistic eye movements and dozens more areas of interest are also important parts of this process.

The Disney Research paper was presented at the International Conference on 3D Vision; you can read the full thing here.

Source: https://techcrunch.com/2020/11/25/flexible-expressions-could-lift-3d-generated-faces-out-of-the-uncanny-valley/


Europe sets out the rules of the road for its data reuse plan


European Union lawmakers have laid out a major legislative proposal today to encourage the reuse of industrial data across the Single Market by creating a standardized framework of trusted tools and techniques to ensure what they describe as “secure and privacy-compliant conditions” for sharing data.

Enabling a network of trusted and neutral data intermediaries, and an oversight regime comprised of national monitoring authorities and a pan-EU coordinating body, are core components of the plan.

The move follows the European Commission’s data strategy announcement in February, when it said it wanted to boost data reuse to support a new generation of data-driven services powered by data-hungry artificial intelligence, as well as encouraging the notion of using “tech for good” by enabling “more data and good quality data” to fuel innovation with a common public good (like better disease diagnostics) and improve public services.

The wider context is that personal data is already regulated in the bloc (such as under the General Data Protection Regulation, or GDPR), which restricts reuse, while commercial considerations can limit how industrial data is shared.

The EU’s executive believes harmonized requirements that set technical and/or legal conditions for data reuse are needed to foster legal certainty and trust — delivered via a framework that promises to maintain rights and protections and thus get more data usefully flowing.

The Commission sees major business benefits flowing from the proposed data governance regime. “Businesses, both small and large, will benefit from new business opportunities as well as from a reduction in costs for acquiring, integrating and processing data, from lower barriers to enter markets, and from a reduction in time-to-market for novel products and services,” it writes in a press release.

It has further data-related proposals incoming in 2021, in addition to a package of digital services legislation it’s due to lay out early next month — as part of a wider reboot of industrial strategy which prioritises digitalization and a green new deal.

All legislative components of the strategy will need to gain the backing of the European Council and parliament so there’s a long road ahead for implementing the plan.

Data Governance Act

EU lawmakers often talk in shorthand about the data strategy being intended to encourage the sharing and reuse of “industrial data” — although the Data Governance Act (DGA) unveiled today has a wider remit.

The Commission envisages the framework enabling the sharing of data that’s subject to data protection legislation — which means personal data; where privacy considerations may (currently) restrain reuse — as well as industrial data subject to intellectual property, or which contains trade secrets or other commercially sensitive information (and is thus not typically shared by its creators primarily for commercial reasons). 

In a press conference on the data governance proposals, internal market commissioner Thierry Breton floated the notion of “data altruism” — saying the Commission wants to provide citizens with an organized way to share their own personal data for a common/public good, such as aiding research into rare diseases or helping cities map mobility for purposes like monitoring urban air quality.

“Through personal data spaces, which are novel personal information management tools and services, Europeans will gain more control over their data and decide on a detailed level who will get access to their data and for what purpose,” the Commission writes in a Q&A on the proposal.

It’s planning a public register where entities will be able to register as a “data altruism organisation” — provided they have a not-for-profit character; meet transparency requirements; and implement certain safeguards to “protect the rights and interests of citizens and companies” — with the aim of providing “maximum trust with minimum administrative burden”, as it puts it.

The DGA envisages different tools, techniques and requirements governing how public sector bodies share data versus private companies.

For public sector bodies there may be technical requirements (such as encryption or anonymization) attached to the data itself or further processing limitations (such as requiring it to take place in “dedicated infrastructures operated and supervised by the public sector”), as well as legally binding confidentiality agreements that must be signed by the reuser.

“Whenever data is being transferred to a reuser, mechanisms will be in place that ensure compliance with the GDPR and preserve the commercial confidentiality of the data,” the Commission’s PR says.

To encourage businesses to get on board with pooling their own data sets — for the promise of a collective economic upside via access to bigger volumes of pooled data — the plan is for regulated data intermediaries/marketplaces to provide “neutral” data-sharing services, acting as the “trusted” go-between/repository so data can flow between businesses.

“To ensure this neutrality, the data-sharing intermediary cannot exchange the data for its own interest (e.g. by selling it to another company or using it to develop their own product based on this data) and will have to comply with strict requirements to ensure this neutrality,” the Commission writes on this.

Under the plan, intermediaries’ compliance with data handling requirements would be monitored by public authorities at a national level.

But the Commission is also proposing the creation of a new pan-EU body, called the European Data Innovation Board, that would try to knit together best practices across Member States — in what looks like a mirror of the steering/coordinating role undertaken by the European Data Protection Board (which links up the EU’s patchwork of data protection supervisory authorities).

“These data brokers or intermediaries that will provide for data sharing will do that in a way that your rights are protected and that you have choices,” said EVP Margrethe Vestager, who heads up the bloc’s digital strategy, also speaking at today’s press conference.

“So that you can also have personal data spaces where your data is managed. Because, initially, when you ask people they say well actually we do want to share but we don’t really know how to do it. And this is not only the technicalities — it’s also the legal certainty that’s missing. And this proposal will provide that,” she added.

Data localization requirements — or not?

The commissioners faced a number of questions over the hot button issue of international data transfers.

Breton was asked whether the DGA will include any data localization requirements. He responded by saying — essentially — that the rules will bake in a series of conditions which, depending on the data itself and the intended destination, may mean that storing and processing the data in the EU is the only viable option.

“On data localization — what we do is to set a GDPR-type of approach, through adequacy decisions and standard contractual clauses for only sensitive data through a cascading of conditions to allow the international transfer under conditions and in full respect of the protected nature of the data. That’s really the philosophy behind it,” Breton said. “And of course for highly sensitive data [such as] in the public health domain it is necessary to be able to set further conditions, depending on the sensitivity, otherwise… Member States will not share them.”

“For instance it could be possible to limit the reuse of this data into public secure infrastructures so that companies will come to use the data but not keep them. It could be also about restricting the number of access in third countries, restricting the possibility to further transfer the data and if necessary also prohibiting the transfer to a third country,” he went on, adding that such conditions would be “in full respect” of the EU’s WTO obligations.

In a section of its Q&A that deals with data localization requirements, the Commission similarly dances around the question, writing: “There is no obligation to store and process data in the EU. Nobody will be prohibited from dealing with the partner of their choice. At the same time, the EU must ensure that any access to EU citizen’s personal data and certain sensitive data is in compliance with its values and legislative framework.”

At the presser, Breton also noted that companies that want to gain access to EU data that’s been made available for reuse will need to have legal representation in the region. “This is important of course to ensure the enforceability of the rules we are setting,” he said. “It is very important for us — maybe not for other continents but for us — to be fully compliant.”

The commissioners also faced questions about how the planned data reuse rules would be enforced — given ongoing criticism over the lack of uniformly vigorous enforcement of Europe’s data protection framework, GDPR.

“No rule is any good if not enforced,” agreed Vestager. “What we are suggesting here is that if you have a data-sharing service provider and they have notified themselves it’s then up to the authority with whom they have notified actually to monitor and to supervise the compliance with the different things that they have to live up to in order to preserve the protection of these legitimate interests — could be business confidentiality, could be intellectual property rights.

“This is a thing that we will keep on working on also in the future proposals that are upcoming — the Digital Services Act and the Digital Markets Act — but here you have sort of a precursor that the ones who receive the notification in Member States they will also have to supervise that things are actually in order.”

Also responding on the enforcement point, Breton suggested enforcement would be baked in up front, such as by careful control of who could become a data reuse broker.

“[Firstly] we are putting forward common rules and harmonized rules… We are creating a large internal market for data. The second thing is that we are asking Member States to create specific authorities to monitor. The third thing is that we will ensure coherence and enforcement through the European Data Innovation Board,” he said. “Just to give you an example… enforcement is embedded. To be a data broker you will need to fulfil a certain number of obligations and if you fulfil these obligations you can be a neutral data broker — if you don’t…”

Alongside the DGA, the Commission also announced an Intellectual Property Action Plan.

Vestager said this aims to build on the EU’s existing IP framework with a number of supportive actions — including financial support for SMEs involved in the Horizon Europe R&D program to file patents.

The Commission is also considering whether to reform the framework for filing standards essential patents. But in the short term Vestager said it would aim to encourage industry to engage in forums aimed at reducing litigation.

“One example could be that the Commission could set up an independent system of third party essentiality checks in view of improving legal certainty and reducing litigation costs,” she added of the potential reform, noting that protecting IP is an important component of the bloc’s industrial strategy.

Source: https://techcrunch.com/2020/11/25/europe-sets-out-the-rules-of-the-road-for-its-data-reuse-plan/


How Do You Differentiate AI From Automation?


A lot of us use the terms artificial intelligence (AI) and automation interchangeably to describe the technological takeover of human-operated processes, and a lot of us would stare blankly at anyone who asked about the difference between the two. I know I did when I was asked.

It has become common to use these words interchangeably, even in professional settings, to describe innovative advances in routine processes. In actuality, though, the terms are not as similar as you might think. There are big differences in the level of sophistication between the two.

While automation means building software or hardware that gets things done automatically, artificial intelligence refers to making machines intelligent. Automation is suited to repetitive, everyday tasks and may require minimal or no human intervention.

Automation is based on explicit programming and rules. If an organization wishes to turn automation into AI, it needs to power it with data (big data) and with techniques such as machine learning, graphs, and neural networks. The output of automation is deterministic; AI, like a human brain, carries a degree of uncertainty.

AI and automation both play a vital role in the modern workplace thanks to the availability of vast data and rapid technological development. Although a Gartner survey claims that more than 37 percent (over a third) of organizations use artificial intelligence in some form, those figures do not account for the complexities of implementation.

While it is true that both of these advances make our work easier, many employees believe AI and automation are here to take their jobs. Job loss is unavoidable and will happen, automation or not. However, the evolved jobs that replace traditional ones will be more engaging and productive than the outdated ones they displace.

Stephen Hawking said, “The development of full artificial intelligence could spell the end of the human race.”

Automation

Look around. You’ll find yourself surrounded by automated systems. The reason you don’t have to wait for hours at the bank, or rewrite the same email a thousand times, is automation. Automation’s sole purpose is to let machines take over repetitive, tedious, and monotonous tasks.

The primary benefit of employing automation in your business processes is that it frees up employees’ time, enabling them to focus on more critical tasks that require personal skill or human judgment. The secondary benefit is a more efficient business, with reduced costs and a more productive workforce.

Organizations are open to adopting automated machinery despite its high installation cost because the machinery never needs sick leave or a holiday. It always gets the work done on time, without a break.

The point that differentiates automation from AI is that automated machines are piloted entirely by manual configuration: you have to configure your automation system to suit your organization’s needs and requirements. In the end, it is nothing more than a machine with just enough smarts to follow orders.
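
To see how literal that order-following is, here is a minimal sketch of rule-based automation in Python. Every rule and action name here is hypothetical, invented purely for illustration:

```python
# Rule-based automation: every behavior is an explicit, hand-written rule.
# All rules and action names are hypothetical.

RULES = [
    # (condition on the message, action to take)
    (lambda msg: "invoice" in msg.lower(), "forward_to_accounting"),
    (lambda msg: "unsubscribe" in msg.lower(), "remove_from_mailing_list"),
    (lambda msg: "password reset" in msg.lower(), "send_reset_link"),
]

def handle(msg: str) -> str:
    """Apply the first matching rule; do nothing if none match.

    The system never does anything it was not explicitly told to do.
    If requirements change, a human must edit the rules.
    """
    for condition, action in RULES:
        if condition(msg):
            return action
    return "no_action"

print(handle("Please find the attached invoice for October"))
# -> forward_to_accounting
```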

Artificial Intelligence

We want machinery that can replicate the human thought process, but we do not wish to experience real-life versions of Interstellar, The Matrix, or Wall-E. That is a precise summation of AI: assisting human life without taking control of it. It is a technology that mimics what a human can say, do, and think, but isn’t constrained by natural limitations like age and death.

Unlike automation, AI is not about repetitive tasks or following orders. Its purpose is to seek out patterns, learn from experience, and select the appropriate response in a given situation without depending on human intervention or guidance.
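
In code, the contrast with the rule-based sketch above is that nobody writes the rules; they are inferred from labeled examples. A minimal sketch using scikit-learn, with toy data that is purely illustrative:

```python
# Learning-based approach: the model infers patterns from labeled
# examples instead of following hand-written rules. The four training
# messages are toy data; a real system needs far more.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_messages = [
    "please find the attached invoice",
    "your invoice payment is overdue",
    "lunch on friday?",
    "are you free for coffee",
]
train_labels = ["accounting", "accounting", "personal", "personal"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_messages, train_labels)

# No rule mapping "invoice" to accounting was ever written; the
# association was learned from the examples above.
print(model.predict(["question about my invoice"]))  # -> ['accounting']
```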

“According to MarketsandMarkets, the AI industry will grow into a $190 billion industry by 2025.”

Differences In AI And Automation

As discussed above, people use the terms AI and automation interchangeably, but the two have different objectives. AI’s main objective is to create intelligent machines that carry out tasks requiring intelligent thinking. It is the science and engineering of making devices smart enough to mimic human behaviour and intelligence.

AI creates technology that enables computers and machines to think and behave like humans, and to learn from them. Automation, on the contrary, focuses on simplifying and speeding up routine, repetitive tasks to increase the efficiency and quality of the output with minimal to no human intervention. Beyond terminology and objectives, AI and automation differ in the following ways.

Sr. No. | Basis of Differentiation | Automation | Artificial Intelligence
1. | Meaning | A pre-set program that runs on its own to perform specific tasks. | Engineering systems to have human-like thinking capability.
2. | Purpose | To help employees by automating routine, repetitive, and monotonous processes to save time. | To help employees by building machines that carry out tasks requiring human-like thinking and decision making.
3. | Nature of tasks performed | Repetitive, routine, and monotonous tasks. | More intelligent, critical tasks that require the thinking and judgment of a human brain.
4. | Added features | No notable self-improving features. | Self-learning and development from experience.
5. | Human intervention | May require minimal human intervention (e.g., to switch the system on or off). | Requires no routine human intervention; it takes the necessary information from data and learns from experience or data feeds.

How Are AI And Automation Connected?

Now that we have seen what differentiates them and understood what each means on its own, let’s look at the similarities they share.

One single thing drives both AI and automation: data. Automated devices collect and combine data, while artificially intelligent systems make sense of it. The success or failure of a company certainly depends on many factors, such as increased productivity, employees’ ability to contribute to the organization’s growth, and business efficiency.

However, a factor more significant than any of these is data. Automated machinery relentlessly feeds on it, and with artificial intelligence layered onto systems whose automated machines are chewing through that data, companies can make smarter business decisions than ever before.

The two systems are highly compatible, and businesses flourish at an entirely different level when they are combined. Take the example of a modern cloud-based payroll solution. The software calculates and allocates payroll through programming. AI comes into the picture by taking each employee’s data and passing it to the automated calculation system, then taking the calculated amount and transferring it into that employee’s account.

It coordinates with software such as leave management or attendance management systems to keep the calculation accurate. The program calculates exactly as it is programmed to, without checking whether the data is correct. It is AI that sorts the information and feeds the relevant data to the program to calculate the payroll, thereby acting as the “brain” of the software.
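
A toy sketch of that division of labor might look like the following. Everything here (names, rates, the anomaly threshold) is hypothetical, and the “brain” is reduced to a simple screening check standing in for a learned model:

```python
# Toy sketch of the payroll example above. The "automation" half is a
# fixed, deterministic calculation; the "AI" half is reduced to a simple
# anomaly screen standing in for a learned model. All names, rates, and
# thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Timesheet:
    employee_id: str
    hours_worked: float
    hourly_rate: float

def looks_anomalous(ts: Timesheet, typical_hours: float = 160.0) -> bool:
    """Stand-in 'brain': flag inputs that deviate wildly from the norm.

    A real system might use a model learned from historical attendance
    and leave data instead of this fixed threshold.
    """
    return ts.hours_worked <= 0 or ts.hours_worked > 1.5 * typical_hours

def calculate_pay(ts: Timesheet) -> float:
    """Automated half: computes exactly what it is programmed to compute,
    whether or not the input makes sense."""
    return round(ts.hours_worked * ts.hourly_rate, 2)

for ts in [Timesheet("E001", 160, 25.0), Timesheet("E002", 900, 25.0)]:
    if looks_anomalous(ts):
        print(f"{ts.employee_id}: flagged for review, not paid automatically")
    else:
        print(f"{ts.employee_id}: pay {calculate_pay(ts)}")
```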

A combination of AI and automation can produce software that requires no human intervention and delivers full accuracy and legal compliance. That, however, is just the tip of the iceberg. Imagine how powerful organizations could become by coupling machines capable of collecting massive amounts of data with systems that can brilliantly make that information meaningful.

“The famous futurologist, visionary, and CEO of Tesla, Elon Musk, said that robots and AI will be able to do everything better than us, creating the biggest risk that we face as a civilization.”

Final Thoughts

When we think about how technology has changed our lives, we see that technologies such as automation and AI are becoming dominant forms of intelligence and embedding themselves in our environments. It is no exaggeration that AI has gone from being our assistant to something far more powerful, while automated systems swiftly outperform us in many pursuits.

Technology is all about exploring possibilities, and that is precisely what AI and automation do. They explore new opportunities to outsmart the human brain. That being said, artificial intelligence is about making machines smart enough to match human intelligence and behaviour, while automation simplifies and speeds up processes with minimal or zero human intervention.

Source: https://www.aiiottalk.com/artificial-intelligence/differentiate-ai-from-automation/


MIT Study: Effects of Automation on the Future of Work Challenges Policymakers  


The 2020 MIT Task Force on the Future of Work suggests how the productivity gains of automation can coexist with opportunity for low-wage workers. (Credit: Getty Images)

By John P. Desmond, AI Trends Editor  

Rising productivity brought on by automation has not led to an increase in income for workers. This is among the conclusions of the 2020 report from the MIT Task Force on the Future of Work, founded in 2018 to study the relationship between emerging technologies and work, shape public discourse, and explore strategies for sharing prosperity.

Dr. Elisabeth Reynolds, Executive Director, MIT Task Force on the Work of the Future

“Wages have stagnated,” said Dr. Elisabeth Reynolds, Executive Director, MIT Task Force on the Work of the Future, who shared results of the new task force report at the AI and the Work of the Future Congress 2020 held virtually last week.   

The report made recommendations in three areas, the first around translating productivity gains from advances in automation into better-quality jobs. “The quality of jobs in this country has been falling and not keeping up with those in other countries,” she said. Among rich countries, the US is one of the worst places for less educated and low-paid workers. For example, the average hourly wage for low-paid workers in the US is $10/hour, compared to $14/hour for similar workers in Canada, who also have health care benefits from national insurance.

“Our workers are falling behind,” she said.  

The second area of recommendation was to invest and innovate in education and skills training. “This is a pillar of our strategy going forward,” Reynolds said. The report focuses on workers between high school and a four-year degree. “We focus on multiple examples to help workers find the right path to skilled jobs,” she said.  

Many opportunities are emerging in health care, for example, specifically around health information technicians. She cited the IBM P-TECH program, which provides public high school students from underserved backgrounds with skills they need for competitive STEM jobs, as a good example of education innovation. P-TECH schools enable students to earn both their high school diploma and a two year associate degree linked to growing STEM fields.  

The third area of recommendation was to shape and expand innovation.

“Innovation creates jobs and will help the US meet competitive challenges from abroad,” Reynolds said. Total R&D funding as a percentage of GDP in the US stayed fairly steady from 1953 to 2015, but support from the federal government has declined over that time. “We want to see greater activity by the US government,” she said.

In a country that is politically divided and economically polarized, many have a fear of technology. Deploying new technology into the existing labor market has the potential to make such divisions worse, continuing downward pressure on wages, skills and benefits, and widening income inequality. “We reject the false tradeoffs between economic growth and having a strong labor market,” Dr. Reynolds said. “Other countries have done it better and the US can do it as well,” she said, noting many jobs that exist today did not exist 40 years ago. 

The COVID-19 crisis has exacerbated the different realities between low-paid workers deemed “essential” needing to be physically present to earn their livings, and higher-paid workers able to work remotely via computers, the report noted.   

The Task Force is co-chaired by MIT Professors David Autor, Economics, and David Mindell, Engineering, in addition to Dr. Reynolds. Members of the task force include more than 20 faculty members drawn from 12 departments at MIT, as well as over 20 graduate students. The 2020 Report can be found here.   

Low-Wage Workers in US Fare Less Well Than Those in Other Advanced Countries  

James Manyika, Senior Partner, McKinsey & Co.

In a discussion on the state of low-wage jobs, James Manyika, Senior Partner, McKinsey & Co., said low-wage workers have not fared well across the 37 countries of the Organization for Economic Cooperation and Development (OECD), “and in the US, they have fared far worse than in other advanced countries,” he said. Jobs are available, but the wages are lower and, “Work has become a lot more fragile,” with many jobs in the gig worker economy (Uber, Lyft for example) and not full-time jobs with some level of benefits. 

Addressing cost of living, Manyika said the cost of products such as cars and TVs have declined as a percentage of income, but costs of housing, education and health care have increased dramatically and are not affordable for many. The growth in the low-wage gig-worker type of job has coincided with “the disappearance of labor market protections and worker voice,” he said, noting, “The power of workers has declined dramatically.” 

Geographically, two-thirds of US job growth has happened in 25 metropolitan areas. “Other parts of the country have fared far worse,” he said. “This is a profound challenge.”  

In a session on Labor Market Dynamics, Susan Houseman, VP and Director of Research, W.E. Upjohn Institute for Employment Research, drew a comparison to Denmark for some contrasts. Denmark has a strong safety net of benefits for the unemployed, while the US has “one of the least generous unemployment systems in the world,” she said. “This will be more important in the future with the growing displacement caused by new technology.”  

Another contrast between the US and Denmark is the relationship of labor to management. “The Danish system has a long history of labor management cooperation, with two-thirds of Danish workers in a union,” she said. “In the US, unionization rates have dropped to 10%.”  

“We have a long history of labor management confrontation and not cooperation,” Houseman said. “Unions have really been weakened in the US.”  

As for recommendations, she suggested that the US strengthen its unemployment systems, help labor organizations to build, raise the federal minimum wage [Ed. note: the federal minimum wage has been $7.25/hour since 2009], and provide universal health insurance, “to take it out of the employment market.”

She suspects the number of workers designated as independent contractors is “likely understated” in the data.   

Jayaraman of One Fair Wage Recommends Sectoral Bargaining 

Saru Jayaraman, President One Fair Wage and Director, Food Labor Research Center, University of California, Berkeley

Later in the day, Saru Jayaraman, President One Fair Wage and Director, Food Labor Research Center, at the University of California, Berkeley, spoke about her work with employees and business owners. One Fair Wage is a non-profit organization that advocates for a fair minimum wage, including for example, a suggestion that tips be counted as a supplement to minimum wage for restaurant workers.  

“We fight for higher wages and better working conditions, but it’s more akin to sectoral bargaining in other parts of the world,” she said. Sectoral collective bargaining is an effort to reach an agreement covering all workers in a sector of the economy, as opposed to between workers for individual firms. “It is a social contract,” Jayaraman said.  

In France, 98% of workers were covered by sectoral bargaining as of 2015. “The traditional models for improving wages and working conditions workplace by workplace do not work,” she said. She spoke of the need to maintain a “consumer base” of workers who put money back into the economy.

With the pandemic causing many restaurants to scale back or close, more restaurant owners have reached out to her organization in an effort to get workers back with the help of more cooperative agreements. “We have been approached by hundreds of restaurants in the last six months who are saying it’s time to change to a minimum wage,” she said. “Many were moved that so many workers were not getting unemployment insurance. They are rethinking every aspect of their businesses. They want a more functional system where everybody gets paid, and we move away from slavery. It’s a sea change among employers.”   

She said nearly 800 restaurants are now members of her organization. 

For the future, “We don’t have time for each workplace to be organized. We need to be innovating with sectoral bargaining to raise wages and working conditions across sectors. That is the future combined with workplace organizing,” she said.  

Read the 2020 report from the MIT Task Force on the Future of Work; learn about IBM P-TECH and about One Fair Wage. 

Source: https://www.aitrends.com/workforce/mit-study-effects-of-automation-on-the-future-of-work-challenges-policymakers/
