

Executive Interview: Brian Gattoni, CTO, Cybersecurity & Infrastructure Security Agency 




As CTO of the Cybersecurity & Infrastructure Security Agency of the DHS, Brian Gattoni is charged with understanding and advising on cyber and physical risks to the nation’s critical infrastructure. 

Understanding and Advising on Cyber and Physical Risks to the Nation’s Critical Infrastructure 

Brian Gattoni, CTO, Cybersecurity & Infrastructure Security Agency

Brian R. Gattoni is the Chief Technology Officer for the Cybersecurity and Infrastructure Security Agency (CISA) of the Department of Homeland Security. CISA is the nation’s risk advisor, working with partners to defend against today’s threats and collaborating to build a secure and resilient infrastructure for the future. Gattoni sets the technical vision and strategic alignment of CISA data and mission services. Previously, he was the Chief of Mission Engineering & Technology, developing analytic techniques and new approaches to increase the value of DHS cyber mission capabilities. Prior to joining DHS in 2010, Gattoni served in various positions at the Defense Information Systems Agency and the United States Army Test & Evaluation Command. He holds a Master of Science Degree in Cyber Systems & Operations from the Naval Postgraduate School in Monterey, California, and is a Certified Information Systems Security Professional (CISSP).  

AI Trends: What is the technical vision for CISA to manage risk to federal networks and critical infrastructure? 

Brian Gattoni: Our technology vision is built in support of our overall strategy. We are the nation’s risk advisor. It’s our job to stay abreast of incoming threats and opportunities for general risk to the nation. Our efforts are to understand and advise on cyber and physical risks to the nation’s critical infrastructure.  

It’s all about bringing in the data, understanding what decisions need to be made and can be made from the data, and what insights are useful to our stakeholders. The potential of AI and machine learning is to expand on operational insights with additional data sets to make better use of the information we have.  

What are the most prominent threats? 

The sources of threats we frequently discuss are the adversarial actions of nation-state actors, and those aligned with nation-state actors and their interests, in disrupting national critical functions here in the U.S. Just in the past month, we’ve seen increased activity from elements supporting what we refer to in the government as Hidden Cobra [malicious cyber activity by the North Korean government]. We’ve issued joint alerts with our partners overseas and the FBI and the DoD, highlighting activity associated with Chinese actors. On the CISA website, people can find CISA Insights, which are documents that provide background information on particular cyber threats and the vulnerabilities they exploit, as well as a ready-made set of mitigation activities that non-federal partners can implement.

What role does AI play in the plan? 

Artificial intelligence has a great role to play in the support of the decisions we make as an agency. Fundamentally, AI is going to allow us to apply our decision processes to a scale of data that humans just cannot keep up with. And that’s especially prevalent in the cyber mission. We remain cognizant of how we make decisions in the first place and target artificial intelligence and machine learning algorithms that augment and support that decision-making process. We’ll be able to use AI to provide operational insights at a greater scale or across a greater breadth of our mission space.  

How far along are you in the implementation of AI at the CISA? 

Implementing AI is not as simple as putting in a new business intelligence tool or a new email capability. Truly augmenting your current operations with artificial intelligence is a mix of culture change, so that humans understand how the AI is supposed to augment their operations; technology change, to make sure you have the scalable compute and the right tools in place to do the math you’re talking about implementing; and process change. We want to deliver artificial intelligence algorithms that augment our operators’ decisions as a support mechanism.

Where we are in the implementation is closer to understanding those three things. We’re working with partners in federally funded research and development centers, national labs and the department’s own Science and Technology Data Analytics Tech Center to develop capability in this area. We’ve developed an analytics meta-process which helps us systemize the way we take in data and puts us in a position to apply artificial intelligence to expand our use of that data.

Do you have any interesting examples of how AI is being applied in CISA and the federal government today? Or what you are working toward, if that’s more appropriate. 

I have a recent use case. We’ve been working with some partners over the past couple of months to apply AI to a humanitarian assistance and disaster relief type of mission. So, within CISA, we also have responsibilities for critical infrastructure. During hurricane season, we always have a role to play in helping advise what the potential impacts are to critical infrastructure sites in the affected path of a hurricane.  

We prepared to conduct an experiment leveraging AI algorithms and overhead imagery to figure out if we could analyze the data from a National Oceanic and Atmospheric Administration flight over the affected area. We compared that imagery with the base imagery from Google Earth or ArcGIS and used AI to identify any affected critical infrastructure. We could see the extent to which certain assets, such as oil refineries, were physically flooded. We could make an assessment as to whether they hit a threshold of damage that would warrant additional scrutiny, or we didn’t have to apply resources because their resilience was intact, and their functions could continue.   

That is a nice use case, a simple example of letting a computer do the comparisons and make a recommendation to our human operators. We found that it was very good at telling us which critical infrastructure sites did not need any additional intervention. To use a needle in a haystack analogy, one of the useful things AI can help us do is blow hay off the stack in pursuit of the needle. And that’s a win also. The experiment was very promising in that sense.  
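The comparison step described here can be caricatured in a few lines of Python. This is a toy sketch under heavy assumptions, not CISA's actual pipeline: "imagery" is reduced to nested lists of dry/water cells, and the 25% damage threshold is invented purely for illustration.

```python
# Toy sketch of the triage idea: compare a baseline raster of a site with a
# post-storm raster and flag the site only if new flooding crosses a damage
# threshold. Real pipelines use georeferenced imagery and trained models;
# here "imagery" is just nested lists of 0 (dry) and 1 (water) cells.

def flooded_fraction(baseline, post_storm):
    """Fraction of cells that are newly water-covered after the storm."""
    total = sum(len(row) for row in baseline)
    newly_wet = sum(
        1
        for b_row, p_row in zip(baseline, post_storm)
        for b, p in zip(b_row, p_row)
        if b == 0 and p == 1
    )
    return newly_wet / total

def needs_review(baseline, post_storm, threshold=0.25):
    """True if flooding crosses the (invented) threshold and warrants an analyst."""
    return flooded_fraction(baseline, post_storm) >= threshold

# A site where only one corner cell flooded: resilience likely intact.
before = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
after_storm = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
print(needs_review(before, after_storm))  # False
```

Automatically clearing the obviously unaffected sites, as in the last line, is the "blow hay off the stack" win the interview describes.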

How does CISA work with private industry, and do you have any examples of that?  

We have an entire division dedicated to stakeholder engagement. Private industry owns over 80% of the critical infrastructure in the nation. So CISA sits at the intersection of the private sector and the government to share information, to ensure we have resilience in place for both the government entities and the private entities, in the pursuit of resilience for those national critical functions. Over the past year we’ve defined a set of 55 functions that are critical for the nation.  

When we work with private industry in those areas we try to share the best insights and make decisions to ensure those function areas will continue unabated in the face of a physical or cyber threat. 

Cloud computing is growing rapidly. We see different strategies, including using multiple vendors of the public cloud, and a mix of private and public cloud in a hybrid strategy. What do you see is the best approach for the federal government? 

In my experience the best approach is to provide guidance to the CIOs and CISOs across the federal government and allow them the flexibility to make risk-based determinations about their own computing infrastructure, as opposed to a one-size-fits-all approach.

We issue a series of use cases that describe, at a very high level, a reference architecture for a type of cloud implementation, where security controls should be implemented, and where telemetry and instrumentation should be applied. You have departments and agencies with a very forward-facing public citizen services portfolio, which means access to information is one of their primary responsibilities. Public clouds and ease of access are most appropriate for those. And then there are agencies with more sensitive missions. Those have critical high-value data assets that need to be protected in a specific way. Giving each the guidance they need to handle all of their use cases is what we’re focused on here.

I wanted to talk a little bit about job roles. How are you defining the job roles around AI in CISA, as in data scientists, data engineers, and other important job titles and new job titles?  

I could spend the remainder of our time on this concept of job roles for artificial intelligence; it’s a favorite topic for me. I am a big proponent of the discipline of data science being a team sport. We currently have our engineers and our analysts and our operators. The roles and disciplines around data science and data engineering have been morphing out of an additional duty on analysts and engineers into their own subsector, their own discipline. We’re looking at a cadre of data professionals who serve almost as a logistics function to our operators who are doing the mission-level analysis. If you treat data as an asset that has to be moved and prepared and cleaned and readied (all terms in the data science and data engineering world now), you start to realize that it requires logistics functions similar to any other asset that has to be moved.

If you get professionals dedicated to that end, you will be able to scale to the data problems you have without overburdening your current engineers who are building the compute platforms, or your current mission analysts who are trying to interpret the data and apply the insights to your stakeholders. You will have more team members moving data to the right places, making data-driven decisions. 

Are you able to hire the help you need to do the job? Are you able to find qualified people? Where are the gaps? 

As the domain continues to mature and we understand more about the different roles, we begin to see gaps: education programs and training programs that need to be developed. Maybe three to five years ago, you would see certificates from higher education in data science. Now we’re starting to see full-fledged degrees as concentrations out of computer science or mathematics. Those graduates are the pipeline to help us fill the gaps we currently have. As for our current problems, there are never enough people. It’s always hard to get the good ones and then keep them, because the competition is so high.

Here at CISA, we continue to invest not only in our own folks that are re-training, but in the development of a cyber education and training group, which is looking at the partnerships with academia to help shore up that pipeline. It continually improves. 

Do you have a message for high school or college students interested in pursuing a career in AI, either in the government or in business, as to what they should study? 

Yes, and it’s similar to the message I give to the high schoolers who live in my house. That is: don’t give up on math so easily. Math and science, the STEM subjects, provide foundational skills that may be applicable to your future career. That is not to discount the diversity and variety of thought processes that come from other disciplines. I tell my kids they need the mathematical foundation to be able to apply the thought processes they learn from studying music or art or literature, and the different ways those disciplines help you make connections. But you have to have the mathematical foundation to represent those connections to a computer.

One of the fallacies around machine learning is that it will just learn [by itself]. That’s not true. You have to be able to teach it, and you can only talk to computers with math, at the base level.  

So if you have the mathematical skills to relay your complicated human thought processes to the computer, and it can then replicate those patterns and identify what you’re asking it to do, you will have success in this field. But math is a progressive discipline; if you give up on it too early, if you drop algebra two and then come back years later and jump straight into calculus, success is going to be difficult, though not impossible.

You sound like a math teacher.  

A simpler way to say it is: if you say no to math now, it’s harder to say yes later. But if you say yes now, you can always say no later, if data science ends up not being your thing.  

Are there any incentives for young people, let’s say a student just out of college, to go to work for the government? Is there any kind of loan forgiveness for instance?  

We have a variety of programs. The one that I really like, and have had a lot of success with as a hiring manager in the federal government, especially here at DHS over the past 10 years, is a program called Scholarship for Service. It’s a CyberCorps program where interested students who pass the acceptance process can get a degree in exchange for some service time. It used to be two years; it might be more now, but they owe some time in service to the federal government after the completion of their degree.

I have seen many successful candidates come out of that program and go on to fantastic careers, contributing in cyberspace all over. I have interns that I hired nine years ago that are now senior leaders in this organization or have departed for private industry and are making their difference out there. It’s a fantastic program for young folks to know about.  

What advice do you have for other government agencies just getting started in pursuing AI to help them meet their goals? 

My advice for my peers and partners and anybody who’s willing to listen to it is, when you’re pursuing AI, be very specific about what it can do for you.   

I go back to the decisions you make, what people are counting on you to do. You bear some responsibility to know how you make those decisions if you’re really going to leverage AI and machine learning to make decisions faster or better or with some other quality of goodness. The speed at which you make decisions will go both ways. You have to identify the benefit of that decision being made if it’s positive, and define your regret if that decision is made and it’s negative. And then do yourself a simple high/low matrix; the quadrant of high-benefit, low-regret decisions is the target. Those are the ones I would like to automate as much as possible. And if artificial intelligence and machine learning can help, that would be great. If not, that’s a decision you have to make.

I have two examples I use in our cyber mission to illustrate the extremes here. One is incident triage. If a cyber incident is detected, we have a triage process to make sure that it’s real, which presents information to an analyst. If that’s done correctly, it has a high benefit because it can take a lot of work off our analysts. It has low-to-medium regret if it’s done incorrectly, because the decision is to present information to an analyst, who can then provide that additional filter. So that’s high benefit, low regret. That’s a no-brainer for automating as much as possible.

On the other side of the spectrum is protecting next-generation 911 call centers from a potential telephony denial-of-service attack. One of the potential automated responses could be to cut off the incoming traffic to the 911 call center to stunt the attack. Benefit: you may have prevented the attack. Regret: you are potentially cutting off legitimate traffic to a 911 call center, and that has life and safety implications. That is unacceptable, so automation is probably not the right approach there. Those are two extreme examples, which are easy for people to understand, and they help illustrate how the benefit-regret matrix can work. How you make decisions is really the key to understanding whether to implement AI and machine learning to help automate those decisions using the full breadth of data.
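The benefit-regret triage behind these two examples can be sketched in a few lines. The quadrant labels and the rule that any high-regret decision stays with a human are illustrative assumptions drawn from the interview, not an official CISA model.

```python
# Minimal sketch of the high/low benefit-regret matrix for deciding what to
# automate. Inputs are coarse 'high'/'low' ratings assigned by the mission
# owner; the returned recommendations are illustrative labels only.

def automation_quadrant(benefit, regret):
    """Classify a decision by its benefit if correct and its regret if wrong."""
    if regret == "high":
        # Life-and-safety-style downsides: automation is the wrong approach.
        return "keep a human in the loop"
    if benefit == "high":
        # The target quadrant: high benefit, low regret.
        return "automate as much as possible"
    return "review case by case"

# The two extremes from the interview:
print(automation_quadrant("high", "low"))   # incident triage
print(automation_quadrant("high", "high"))  # cutting off 911 call traffic
```

Incident triage lands in the automate quadrant; the 911 example stays with a human no matter how large the potential benefit.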

Learn more about the Cybersecurity & Infrastructure Security Agency.  



Europe sets out the rules of the road for its data reuse plan




European Union lawmakers have laid out a major legislative proposal today to encourage the reuse of industrial data across the Single Market by creating a standardized framework of trusted tools and techniques to ensure what they describe as “secure and privacy-compliant conditions” for sharing data.

A network of trusted and neutral data intermediaries, and an oversight regime comprising national monitoring authorities and a pan-EU coordinating body, are core components of the plan.

The move follows the European Commission’s data strategy announcement in February, when it said it wanted to boost data reuse to support a new generation of data-driven services powered by data-hungry artificial intelligence, as well as encouraging the notion of using “tech for good” by enabling “more data and good quality data” to fuel innovation with a common public good (like better disease diagnostics) and improve public services.

The wider context is that personal data is already regulated in the bloc (such as under the General Data Protection Regulation, GDPR), which restricts reuse, while commercial considerations can limit how industrial data is shared.

The EU’s executive believes harmonized requirements that set technical and/or legal conditions for data reuse are needed to foster legal certainty and trust, delivered via a framework that promises to maintain rights and protections and thus get more data usefully flowing.

The Commission sees major business benefits flowing from the proposed data governance regime. “Businesses, both small and large, will benefit from new business opportunities as well as from a reduction in costs for acquiring, integrating and processing data, from lower barriers to enter markets, and from a reduction in time-to-market for novel products and services,” it writes in a press release.

It has further data-related proposals incoming in 2021, in addition to a package of digital services legislation it’s due to lay out early next month — as part of a wider reboot of industrial strategy which prioritises digitalization and a green new deal.

All legislative components of the strategy will need to gain the backing of the European Council and parliament so there’s a long road ahead for implementing the plan.

Data Governance Act

EU lawmakers often talk in shorthand about the data strategy being intended to encourage the sharing and reuse of “industrial data” — although the Data Governance Act (DGA) unveiled today has a wider remit.

The Commission envisages the framework enabling the sharing of data that’s subject to data protection legislation — which means personal data; where privacy considerations may (currently) restrain reuse — as well as industrial data subject to intellectual property, or which contains trade secrets or other commercially sensitive information (and is thus not typically shared by its creators primarily for commercial reasons). 

In a press conference on the data governance proposals, internal market commissioner Thierry Breton floated the notion of “data altruism” — saying the Commission wants to provide citizens with an organized way to share their own personal data for a common/public good, such as aiding research into rare diseases or helping cities map mobility for purposes like monitoring urban air quality.

“Through personal data spaces, which are novel personal information management tools and services, Europeans will gain more control over their data and decide on a detailed level who will get access to their data and for what purpose,” the Commission writes in a Q&A on the proposal.

It’s planning a public register where entities will be able to register as a “data altruism organisation” — provided they have a not-for-profit character; meet transparency requirements; and implement certain safeguards to “protect the rights and interests of citizens and companies” — with the aim of providing “maximum trust with minimum administrative burden”, as it puts it.

The DGA envisages different tools, techniques and requirements governing how public sector bodies share data versus how private companies do.

For public sector bodies there may be technical requirements (such as encryption or anonymization) attached to the data itself or further processing limitations (such as requiring it to take place in “dedicated infrastructures operated and supervised by the public sector”), as well as legally binding confidentiality agreements that must be signed by the reuser.

“Whenever data is being transferred to a reuser, mechanisms will be in place that ensure compliance with the GDPR and preserve the commercial confidentiality of the data,” the Commission’s PR says.

To encourage businesses to get on board with pooling their own data sets — for the promise of a collective economic upside via access to bigger volumes of pooled data — the plan is for regulated data intermediaries/marketplaces to provide “neutral” data-sharing services, acting as the “trusted” go-between/repository so data can flow between businesses.

“To ensure this neutrality, the data-sharing intermediary cannot exchange the data for its own interest (e.g. by selling it to another company or using it to develop their own product based on this data) and will have to comply with strict requirements to ensure this neutrality,” the Commission writes on this.

Under the plan, intermediaries’ compliance with data handling requirements would be monitored by public authorities at a national level.

But the Commission is also proposing the creation of a new pan-EU body, called the European Data Innovation Board, that would try to knit together best practices across Member States — in what looks like a mirror of the steering/coordinating role undertaken by the European Data Protection Board (which links up the EU’s patchwork of data protection supervisory authorities).

“These data brokers or intermediaries that will provide for data sharing will do that in a way that your rights are protected and that you have choices,” said EVP Margrethe Vestager, who heads up the bloc’s digital strategy, also speaking at today’s press conference.

“So that you can also have personal data spaces where your data is managed. Because, initially, when you ask people they say well actually we do want to share but we don’t really know how to do it. And this is not only the technicalities — it’s also the legal certainty that’s missing. And this proposal will provide that,” she added.

Data localization requirements — or not?

The commissioners faced a number of questions over the hot button issue of international data transfers.

Breton was asked whether the DGA will include any data localization requirements. He responded by saying — essentially — that the rules will bake in a series of conditions which, depending on the data itself and the intended destination, may mean that storing and processing the data in the EU is the only viable option.

“On data localization — what we do is to set a GDPR-type of approach, through adequacy decisions and standard contractual clauses for only sensitive data through a cascading of conditions to allow the international transfer under conditions and in full respect of the protected nature of the data. That’s really the philosophy behind it,” Breton said. “And of course for highly sensitive data [such as] in the public health domain it is necessary to be able to set further conditions, depending on the sensitivity, otherwise… Member States will not share them.”

“For instance it could be possible to limit the reuse of this data into public secure infrastructures so that companies will come to use the data but not keep them. It could be also about restricting the number of access in third countries, restricting the possibility to further transfer the data and if necessary also prohibiting the transfer to a third country,” he went on, adding that such conditions would be “in full respect” of the EU’s WTO obligations.

In a section of its Q&A that deals with data localization requirements, the Commission similarly dances around the question, writing: “There is no obligation to store and process data in the EU. Nobody will be prohibited from dealing with the partner of their choice. At the same time, the EU must ensure that any access to EU citizens’ personal data and certain sensitive data is in compliance with its values and legislative framework.”

At the presser, Breton also noted that companies that want to gain access to EU data that’s been made available for reuse will need to have legal representation in the region. “This is important of course to ensure the enforceability of the rules we are setting,” he said. “It is very important for us — maybe not for other continents but for us — to be fully compliant.”

The commissioners also faced questions about how the planned data reuse rules would be enforced — given ongoing criticism over the lack of uniformly vigorous enforcement of Europe’s data protection framework, GDPR.

“No rule is any good if not enforced,” agreed Vestager. “What we are suggesting here is that if you have a data-sharing service provider and they have notified themselves it’s then up to the authority with whom they have notified actually to monitor and to supervise the compliance with the different things that they have to live up to in order to preserve the protection of these legitimate interests — could be business confidentiality, could be intellectual property rights.

“This is a thing that we will keep on working on also in the future proposals that are upcoming — the Digital Services Act and the Digital Markets Act — but here you have sort of a precursor that the ones who receive the notification in Member States they will also have to supervise that things are actually in order.”

Also responding on the enforcement point, Breton suggested enforcement would be baked in up front, such as by careful control of who could become a data reuse broker.

“[Firstly] we are putting forward common rules and harmonized rules… We are creating a large internal market for data. The second thing is that we are asking Member States to create specific authorities to monitor. The third thing is that we will ensure coherence and enforcement through the European Data Innovation Board,” he said. “Just to give you an example… enforcement is embedded. To be a data broker you will need to fulfil a certain number of obligations and if you fulfil these obligations you can be a neutral data broker — if you don’t…

Alongside the DGA, the Commission also announced an Intellectual Property Action Plan.

Vestager said this aims to build on the EU’s existing IP framework with a number of supportive actions — including financial support for SMEs involved in the Horizon Europe R&D program to file patents.

The Commission is also considering whether to reform the framework for filing standard-essential patents. But in the short term, Vestager said it would aim to encourage industry to engage in forums aimed at reducing litigation.

“One example could be that the Commission could set up an independent system of third party essentiality checks in view of improving legal certainty and reducing litigation costs,” she added of the potential reform, noting that protecting IP is an important component of the bloc’s industrial strategy.




How Do You Differentiate AI From Automation?




A lot of us use the terms artificial intelligence (AI) and automation interchangeably to describe the technological takeover of human-operated processes, and a lot of us would stare blankly at anyone who asks the difference between the two. I know I did when I was asked.

It has become common to use these words interchangeably, even in professional settings, to describe innovative advances in everyday processes. In actuality, however, the terms are not as similar as you might think: there are large differences in the intricacy of the two.

While automation means building software or hardware that gets things done automatically with minimal human intervention, artificial intelligence refers to making machines intelligent. Automation is suited to repetitive, everyday tasks that require minimal or no human judgment.

It is based on specific programming and rules. If an organization wishes to evolve such automation into AI, it will need to power it with data: large volumes of it (big data), processed with techniques such as machine learning, graphs, and neural networks. The output of automation is deterministic; AI, like a human brain, carries the risk of uncertainty.
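One illustrative contrast (the task, keywords, and training data below are all invented for the example): the same message-flagging job done as a fixed, hand-written rule versus a rule inferred from labelled examples.

```python
# Automation: a fixed rule, written by a human, always produces the same
# output for the same input.
def rule_based_flag(subject):
    return "free" in subject.lower() or "winner" in subject.lower()

# "AI" (a deliberately tiny stand-in for learning): the keywords are not
# hand-written; they are inferred from labelled examples.
def learn_keywords(examples):
    """Pick words seen in spam subjects but never in legitimate ones."""
    spam_words, ham_words = set(), set()
    for subject, is_spam in examples:
        words = set(subject.lower().split())
        (spam_words if is_spam else ham_words).update(words)
    return spam_words - ham_words

training = [
    ("claim your free prize", True),
    ("free shipping on your order", False),
    ("meeting moved to 3pm", False),
    ("prize winner announced", True),
]
learned = learn_keywords(training)  # {'claim', 'prize', 'winner', 'announced'}

def learned_flag(subject, keywords=learned):
    return any(w in keywords for w in subject.lower().split())

print(rule_based_flag("Free tickets"))  # True: the fixed rule fires
print(learned_flag("prize inside"))     # True: inferred from the data
```

The rule-based version will behave identically forever; the learned version changes its behavior whenever the training data changes, which is the essential difference the article describes.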

AI and automation both play a vital role in the modern workplace, thanks to the availability of vast data and rapid technological development. Although a Gartner survey claims that more than 37% (over one-third) of organizations use artificial intelligence in some form, these figures do not account for implementation complexities.

While it is true that both of these advancements make our work easier, many employees believe that AI and automation are here to take their jobs. Some job loss is unavoidable and will take place, automation or not. However, the evolved jobs replacing traditional ones are going to be more engaging and productive than the outdated ones.

Stephen Hawking said, “The development of full artificial intelligence could spell the end of the human race.”


Automation

Look around. You’ll find yourself surrounded by automated systems. The reason you don’t have to wait long hours at the bank, or rewrite the same email a thousand times, is automation. Automation’s sole purpose is to let machines take over repetitive, tedious, and monotonous tasks.

The primary benefit of employing automation in your business processes is that it frees up employees’ time, enabling them to focus on more critical tasks: those that require personal skill or human judgment. The secondary benefit is business efficiency, with reduced cost and a more productive workforce.

Organizations are more open to adopting automated machinery despite its high installation charges because the machinery never requires a sick leave or a holiday. It always gets the work done on time, without a break.

The point that differentiates it from AI is that automated machinery is piloted entirely by manual configuration. That is a fancy way of saying you have to configure your automation system to suit your organization’s needs and requirements. It is nothing more than a machine with just enough smarts to follow orders.

Artificial Intelligence

We want machinery that can replicate the human thought process, but we do not wish to experience real-life versions of Interstellar, The Matrix, or WALL-E. That is a precise summation of AI: assisting human life without taking control of it. It is a technology that mimics what a human can say, do, and think, but is not bound by natural limitations like age and death.

Unlike automation, AI is not about repetitive tasks or simply following orders. Its purpose is to seek patterns, learn from experience, and select appropriate responses in a given situation without depending on human intervention or guidance.
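A minimal sketch of "learning from experience" rather than following hand-written rules (all data below is invented): a one-nearest-neighbour classifier answers by recalling the most similar past case.

```python
# One-nearest-neighbour classification: no rules are written down; the
# system's behavior is entirely determined by the examples it has seen.

def nearest_label(experience, query):
    """Return the label of the stored reading closest to the query."""
    return min(experience, key=lambda pt: abs(pt[0] - query))[1]

# Past experience: (sensor reading, outcome) pairs, invented for illustration.
experience = [(2.0, "normal"), (2.4, "normal"), (9.1, "fault"), (8.7, "fault")]

print(nearest_label(experience, 2.1))  # "normal": closest past case wins
print(nearest_label(experience, 9.0))  # "fault"
```

Add new (reading, outcome) pairs to `experience` and the answers shift accordingly, with no reprogramming, which is the distinction from a fixed automated rule.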

“According to Market and Markets, the AI industry will grow into a $190 billion industry by 2025.”

Differences In AI And Automation

As discussed above, people use the terms AI and automation interchangeably, but the two have different objectives. AI’s main objective is to create brilliant machines that carry out tasks requiring intelligent thinking. It is the science and engineering of making devices so smart that they can mimic human behaviour and intelligence.

AI creates technology that enables computers and machines to learn from humans and to think and behave like them. Automation, in contrast, focuses on simplifying and speeding up routine, repetitive tasks to increase the efficiency and quality of the output with minimal to no human intervention. Beyond their objectives, AI and automation differ on the following points.

1. Meaning: Automation is a pre-set program that runs on its own to perform specific tasks. AI is the engineering of systems with human-like thinking capability.
2. Purpose: Automation helps employees by automating routine, repetitive, and monotonous processes to save time. AI helps employees by building machines that can carry out tasks requiring human-like intelligent thinking and decision making.
3. Nature of tasks performed: Automation handles repetitive, routine, and monotonous tasks. AI handles more intelligent and critical tasks that require the thinking and judgment of a human brain.
4. Added features: Automation has no notable added features. AI involves self-learning and development from experience.
5. Human intervention: Automation may require a little human intervention (for example, switching the system on or off). AI requires no human intervention, since it takes the necessary information from data and learns from experience or data feeds.
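The contrast can be illustrated with a small, hypothetical Python sketch. The discount rule, the data, and the learn_threshold helper are all invented for this example; the point is only that automation executes a rule a person configured, while an AI-style component infers the rule from data:

```python
def automated_discount(order_total):
    """Automation: a pre-set rule, configured once by a person."""
    return order_total * 0.10 if order_total > 100 else 0.0

def learn_threshold(examples):
    """A toy 'AI' step: infer the discount threshold from labeled
    examples instead of hard-coding it."""
    eligible = [total for total, got_discount in examples if got_discount]
    return min(eligible)  # simplest rule consistent with the data

# (order_total, received_discount) pairs observed in past orders
history = [(50, False), (120, True), (90, False), (150, True)]
threshold = learn_threshold(history)

print(automated_discount(150))  # fixed rule: 15.0
print(threshold)                # inferred from data: 120
```

Reconfiguring the automated rule requires a person to edit it; the learned rule updates itself whenever new examples arrive.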

How Are AI And Automation Connected?

Now that we have seen what differentiates them from each other and have understood each individually, let’s look at the similarities they share.

One single thing drives both AI and automation: data. Automated devices collect and combine the data, while systems with artificial intelligence understand it. Indeed, the success or failure of a company depends on numerous factors, such as productivity, employees’ ability to contribute to the organization’s growth, and business efficiency.

However, the most significant factor of all is data. Automated machinery feeds on data relentlessly. With automated machines collecting the data and artificial intelligence making sense of it, companies can make smarter business decisions than before. 

The two technologies are highly compatible, and businesses flourish at an entirely different level when they are combined. Take the example of a modern cloud-based payroll solution. The software performs the calculation and allocation of payroll through programming. AI comes into the picture by taking individual employees’ data, passing it to the automated calculation system, and then transferring the calculated amounts into each employee’s account.

It coordinates with software such as leave management or attendance management to keep the calculation accurate. The program calculates as it is programmed to, without checking whether the data is correct. It is AI that sorts the information and feeds relevant data to the program that calculates the payroll, thereby acting as the “brain” of the software.
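The payroll scenario can be sketched in a few lines of Python. Everything here is hypothetical (the pay rate, the records, and the validate_record heuristic are invented for illustration): the automated part is a fixed calculation, while the AI-like part screens the incoming data for anomalies before the program sees it.

```python
def validate_record(record, history):
    """A stand-in for the pattern-recognition role AI plays: flag
    entries that deviate sharply from an employee's past hours."""
    past = history.get(record["employee"], [])
    if past and abs(record["hours"] - sum(past) / len(past)) > 20:
        return False  # suspicious entry; route to a human
    return True

def calculate_pay(record, rate=25.0):
    """Automation: a pre-programmed rule that runs the same way every time."""
    return record["hours"] * rate

history = {"alice": [40, 38, 41]}          # hours from prior pay periods
records = [{"employee": "alice", "hours": 39},
           {"employee": "alice", "hours": 90}]  # likely a data-entry error

for r in records:
    if validate_record(r, history):
        print(r["employee"], calculate_pay(r))
    else:
        print(r["employee"], "flagged for review")
```

The calculation itself never changes; the value added by the AI-style step is that bad data is caught before the fixed rule runs on it.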

A combination of AI and automation can produce software that requires little human intervention and delivers a high degree of accuracy and legal compliance. That is, however, just the tip of the iceberg. Imagine how powerful organizations could become by coupling machines that collect massive amounts of data with systems that can make that information meaningful.

“Futurist, visionary, and Tesla CEO Elon Musk has said that robots and AI will be able to do everything better than us, creating the biggest risk we face as a civilization.” 

Final Thoughts

When we consider how technology has changed our lives, we see that technologies such as automation and AI are becoming dominant and embedding themselves in our environments. AI has moved from being our assistant to something far more powerful, while automated systems are swiftly outperforming us in many pursuits.

Technology is about exploring possibilities, and that is precisely what AI and automation do. They open new opportunities that extend beyond the human brain. In short, artificial intelligence is about making machines smart enough to replicate human intelligence and behaviour, while automation simplifies and speeds up processes with minimal or zero human intervention.




MIT Study: Effects of Automation on the Future of Work Challenges Policymakers  




The 2020 MIT Task Force on the Future of Work suggests how the productivity gains of automation can coexist with opportunity for low-wage workers. (Credit: Getty Images)

By John P. Desmond, AI Trends Editor  

Rising productivity brought on by automation has not led to an increase in income for workers. This is among the conclusions of the 2020 report from the MIT Task Force on the Future of Work, founded in 2018 to study the relationship between emerging technologies and work, to shape public discourse, and to explore strategies that enable shared prosperity.  

Dr. Elisabeth Reynolds, Executive Director, MIT Task Force on the Work of the Future

“Wages have stagnated,” said Dr. Elisabeth Reynolds, Executive Director, MIT Task Force on the Work of the Future, who shared results of the new task force report at the AI and the Work of the Future Congress 2020 held virtually last week.   

The report made three areas of recommendations, the first around translating productivity gains from advances in automation into better-quality jobs. “The quality of jobs in this country has been falling and not keeping up with those in other countries,” she said. “Among rich countries, the US is among the worst places for less educated and low-paid workers.” For example, the average hourly wage for low-paid workers in the US is $10/hour, compared with $14/hour for similar workers in Canada, who also have health care benefits from national insurance. 

“Our workers are falling behind,” she said.  

The second area of recommendation was to invest and innovate in education and skills training. “This is a pillar of our strategy going forward,” Reynolds said. The report focuses on workers between high school and a four-year degree. “We focus on multiple examples to help workers find the right path to skilled jobs,” she said.  

Many opportunities are emerging in health care, for example, specifically around health information technicians. She cited the IBM P-TECH program, which provides public high school students from underserved backgrounds with the skills they need for competitive STEM jobs, as a good example of education innovation. P-TECH schools enable students to earn both their high school diploma and a two-year associate degree linked to growing STEM fields.  

The third area of recommendation was to shape and expand innovation.  

“Innovation creates jobs and will help the US meet competitive challenges from abroad,” Reynolds said. R&D funding as a percentage of GDP in the US has stayed fairly steady from 1953 to 2015, but support from the federal government has declined over that time. “We want to see greater activity by the US government,” she said. 

In a country that is politically divided and economically polarized, many have a fear of technology. Deploying new technology into the existing labor market has the potential to make such divisions worse, continuing downward pressure on wages, skills and benefits, and widening income inequality. “We reject the false tradeoffs between economic growth and having a strong labor market,” Dr. Reynolds said. “Other countries have done it better and the US can do it as well,” she said, noting many jobs that exist today did not exist 40 years ago. 

The COVID-19 crisis has exacerbated the different realities between low-paid workers deemed “essential” needing to be physically present to earn their livings, and higher-paid workers able to work remotely via computers, the report noted.   

The Task Force is co-chaired by MIT Professors David Autor, Economics, and David Mindell, Engineering, in addition to Dr. Reynolds. Members of the task force include more than 20 faculty members drawn from 12 departments at MIT, as well as over 20 graduate students. The 2020 Report can be found here.   

Low-Wage Workers in US Fare Less Well Than Those in Other Advanced Countries  

James Manyika, Senior Partner, McKinsey & Co.

In a discussion on the state of low-wage jobs, James Manyika, Senior Partner, McKinsey & Co., said low-wage workers have not fared well across the 37 countries of the Organization for Economic Cooperation and Development (OECD), “and in the US, they have fared far worse than in other advanced countries,” he said. Jobs are available, but the wages are lower and, “Work has become a lot more fragile,” with many jobs in the gig worker economy (Uber, Lyft for example) and not full-time jobs with some level of benefits. 

Addressing cost of living, Manyika said the cost of products such as cars and TVs have declined as a percentage of income, but costs of housing, education and health care have increased dramatically and are not affordable for many. The growth in the low-wage gig-worker type of job has coincided with “the disappearance of labor market protections and worker voice,” he said, noting, “The power of workers has declined dramatically.” 

Geographically, two-thirds of US job growth has happened in 25 metropolitan areas. “Other parts of the country have fared far worse,” he said. “This is a profound challenge.”  

In a session on Labor Market Dynamics, Susan Houseman, VP and Director of Research, W.E. Upjohn Institute for Employment Research, drew a comparison to Denmark for some contrasts. Denmark has a strong safety net of benefits for the unemployed, while the US has “one of the least generous unemployment systems in the world,” she said. “This will be more important in the future with the growing displacement caused by new technology.”  

Another contrast between the US and Denmark is the relationship of labor to management. “The Danish system has a long history of labor management cooperation, with two-thirds of Danish workers in a union,” she said. “In the US, unionization rates have dropped to 10%.”  

“We have a long history of labor management confrontation and not cooperation,” Houseman said. “Unions have really been weakened in the US.”  

As for recommendations, she suggested that the US strengthen its unemployment systems, help labor organizations to build, raise the federal minimum wage [Ed. Note: Federal minimum wage increased to $10/hour on Jan. 2, 2020, raised from $7.25/hour, which was set in 2009.], and provide universal health insurance, “to take it out of the employment market.” 

She suspects the number of workers designated as independent contractors is “likely understated” in the data.   

Jayaraman of One Fair Wage Recommends Sectoral Bargaining 

Saru Jayaraman, President One Fair Wage and Director, Food Labor Research Center, University of California, Berkeley

Later in the day, Saru Jayaraman, President One Fair Wage and Director, Food Labor Research Center, at the University of California, Berkeley, spoke about her work with employees and business owners. One Fair Wage is a non-profit organization that advocates for a fair minimum wage, including for example, a suggestion that tips be counted as a supplement to minimum wage for restaurant workers.  

“We fight for higher wages and better working conditions, but it’s more akin to sectoral bargaining in other parts of the world,” she said. Sectoral collective bargaining is an effort to reach an agreement covering all workers in a sector of the economy, as opposed to between workers for individual firms. “It is a social contract,” Jayaraman said.  

In France, 98% of workers were covered by sectoral bargaining as of 2015. “The traditional models for improving wages and working conditions workplace by workplace do not work,” she said. She spoke of the need to maintain a “consumer base” of workers who put money back into the economy.   

With the pandemic causing many restaurants to scale back or close, more restaurant owners have reached out to her organization in an effort to get workers back with the help of more cooperative agreements. “We have been approached by hundreds of restaurants in the last six months who are saying it’s time to change to a minimum wage,” she said. “Many were moved that so many workers were not getting unemployment insurance. They are rethinking every aspect of their businesses. They want a more functional system where everybody gets paid, and we move away from slavery. It’s a sea change among employers.”   

She said nearly 800 restaurants are now members of her organization. 

For the future, “We don’t have time for each workplace to be organized. We need to be innovating with sectoral bargaining to raise wages and working conditions across sectors. That is the future combined with workplace organizing,” she said.  

Read the 2020 report from the MIT Task Force on the Future of Work; learn about IBM P-TECH and about One Fair Wage. 




Power of AI With Cloud Computing is “Stunning” to Microsoft’s Nadella 




Microsoft CEO Satya Nadella said at the AI and the Future of Work Conference from MIT that the ability of cloud computing to harness massive computing power is ‘transformative.’ (Photo by Mohammad Rezaie on Unsplash.)

By AI Trends Staff  

Asked what in the march of technology he is most impressed with, Microsoft CEO Satya Nadella said at MIT’s AI and the Work of the Future Congress 2020 held virtually last week that he is struck by the ability of cloud computing to provision massive computing power.   

Satya Nadella, CEO, Microsoft

“The computing available to do AI is transformative,” Nadella said to David Autor, the Ford Professor of Economics at MIT, who conducted the Fireside Chat session.   

Nadella mentioned the GPT-3 general purpose language model from OpenAI, an AI lab searching for a commercial business model. GPT-3 is an autoregressive language model with 175 billion parameters. OpenAI agreed to license GPT-3 to Microsoft for their own products and services, while continuing to offer OpenAI’s API to the market. Today the API is in a limited beta as OpenAI and academic partners test and assess its capabilities.  
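As a toy illustration of what “autoregressive” means (each new token is predicted from the tokens generated so far), here is a minimal sketch. The bigram lookup table stands in for a real model and is invented for this example; GPT-3 itself uses a 175-billion-parameter neural network, not a table.

```python
# A tiny stand-in for a language model: maps the last token to the next one.
bigram_model = {
    "the": "cloud",
    "cloud": "computes",
    "computes": "fast",
}

def generate(prompt, steps):
    """Autoregressive generation: each step conditions on prior output."""
    tokens = prompt.split()
    for _ in range(steps):
        nxt = bigram_model.get(tokens[-1])  # predict from what came before
        if nxt is None:
            break  # no continuation known; stop generating
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the", 3))  # -> "the cloud computes fast"
```

The loop structure, not the lookup table, is the point: real autoregressive models replace the table with a learned probability distribution over the next token, but they feed their own output back in the same way.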

The Microsoft license is exclusive however, meaning Microsoft’s cloud computing competitors cannot access it in the same way. The agreement was seen as important to helping OpenAI with the expense of getting GPT-3 up and running and maintaining it, according to an account in TechTalks. These include an estimated $10 million in expenses to research GPT-3 and train the model, tens of thousands of dollars in monthly cloud computing and electricity costs to run the models, an estimated one million dollars annually to retrain the model to prevent decay, and additional costs of customer support, marketing, IT, legal and other requirements to put a software product on the market.  

Earlier this year at its Build developers conference, Microsoft announced it worked with OpenAI to assemble what Microsoft said was “one of the top five publicly disclosed supercomputers in the world,” according to an account on the Microsoft AI blog. The infrastructure will be available in Azure, Microsoft’s cloud computing offering, to train “extremely large” AI models.   

The partnership between Microsoft and OpenAI aims to “jointly create new supercomputing technologies in Azure,” the blog post stated.  

“And it’s not just happening in the cloud, it’s happening on the edge,” Nadella said.  

Applications for cloud and edge computing working together—such as natural language generation, image completion, or virtual simulations from wearable sensors that see the work—are very compute-intensive. “It’s stunning to see the capability” of the GPT-3 model applied to this work, Nadella said. “Something in the model architecture gives me confidence we will have more breakthroughs at an accelerating pace,” he said.  

Potential Strategic Advantage in Search, Voice Assistants from GPT-3 Models  

Strategically, it could be that the GPT-3 models will give Microsoft a real advantage, the article in TechTalks suggested. For example, in the search engine market, Microsoft’s Bing has just over a 6% market share, behind Google’s 87%. Whether GPT-3 will enable Microsoft to roll out new features that redefine how search is used remains to be seen.   

Microsoft is also likely to explore potential advantages GPT-3 could bring to the voice assistant market, where Microsoft’s Cortana sees a 22% share, behind Apple’s Siri, which has 35%.  

Nadella does have concerns related to the power of AI and automation. “We need a set of design principles, from ethics to actual engineering and design, and a process to allow us to be accountable, so the models are fair and not biased. We need to ‘de-bias’ the models and that is hard engineering work,” he said. “Unintended consequences” and “bad use cases” are also challenges, he said, without elaborating. [Ed. Note: A “misuse case” or bad use case describes a function the system should not allow, per Wikipedia.]  

Moderator Autor asked Nadella how Microsoft decides which problems to work on using AI. Nadella mentioned “real world small AI” and the company’s Power Platform tools, which enable several products to work well together as part of a business application platform. This foundation is built on what had been called the Common Data Service for apps and, as of this month (November), is called “Dataverse.” Data is stored in tables, which can reside in the cloud. 

Using the tools, “People can take their domain expertise and turn it into automation using AI capabilities,” Nadella said. 

Asked what new job opportunities are being created from the use of AI he anticipates in the future, Nadella compared the transition going on today to the onset of computer spreadsheets and word processors. “The same thing is happening today,” as computing is getting embedded in manufacturing plants, retail settings, hospitals, and farms. “This will shape new jobs and change existing jobs,” he said. 

‘Democratization of AI’ Seen as Having Potential to Lower Barriers  

The two discussed whether the opportunities from AI extend to those workers without abstract skills like programming. Discussion ensued on “democratization of AI” which lowers barriers for individuals and organizations to gain experience with AI, allowing them, for example, to leverage publicly available data and algorithms to build AI models on a cloud infrastructure. 

Relating it to education, Autor wondered if access to education could be “democratized” more. Nadella said, “STEM is important, but we don’t need everyone to get a master’s in computer science. If you can democratize the expertise to help the productivity of the front line worker, that is the problem to solve.” 

Autor asked if technology has anything to do with the growing gap between low-wage and high-wage workers, and what could be done about it. Nadella said Microsoft is committed to making education that leads to credentials available. “We need a real-time feedback loop between the jobs of the future and the skills required,” Nadella said. “To credential those skills, we are seeing more companies invest in corporate training as part of their daily workflow. Microsoft is super focused on that.” 


A tax credit for corporations that invest in training would be a good idea, Nadella suggested. “We need an incentive mechanism,” he said, adding that a feedback loop would help training programs to be successful.  

Will “telepresence” remain after the pandemic is over? Autor asked. Nadella outlined four thoughts: first, the collaboration between front line workers and knowledge workers will continue, since it has proved to be more productive in some ways; second, meetings will change, but collaboration will continue before, during, and after meetings; third, learning and the delivery of training will be better assisted by virtual tools; and fourth, “video fatigue” will be recognized as a real thing.   

“We need to get people out of their square boxes and into a shared sense of presence, to reduce cognitive load,” Nadella said. “One of my worries is that we are burning the social capital that got built up. We need to learn new techniques for building social capital back.”  

Learn more about AI and the Work of the Future Congress 2020, GPT-3 in TechTalks and on the Microsoft AI blog, the Power Platform and Dataverse. 

