
Blockchain Will Be Most In-Demand Hard Skill in 2020: LinkedIn


Blockchain will be the most in-demand hard skill in 2020, according to a new study by the educational subsidiary of professional social network LinkedIn.

A newcomer to LinkedIn’s annual list of the most in-demand hard skills, blockchain tops the ranking for 2020, according to a LinkedIn Learning blog post published on Jan. 1.

Blockchain to surpass cloud computing and AI in 2020

In 2019, blockchain overtook major hard skills including cloud computing, analytical reasoning, artificial intelligence (AI) and user experience (UX) design, becoming the number one hard skill in demand among global employers for 2020, according to LinkedIn Learning.

In contrast, a similar list of skills published by LinkedIn Learning for 2019 did not include blockchain technology at all. Titled “The Skills Companies Need Most in 2019 – And How to Learn Them,” that list ranked cloud computing as the most in-demand skill of the year. In order, the other top hard skills for 2019 were AI, analytical reasoning, people management and UX design.

Top 10 hard skills companies need most in 2020. Source: LinkedIn Learning

Methodology: demand vs. supply

As noted in the post, the ranking was determined by weighing skills in high demand against their supply. Specifically, demand was measured by identifying the skills listed on the LinkedIn profiles of people who were being hired at the highest rates.

The study only analyzed cities with at least 100,000 LinkedIn members, the blog post notes.
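To make that demand-versus-supply framing concrete, here is a minimal, purely illustrative Python sketch of how such a ranking could be computed. The skill names and counts below are invented for the example, and the demand/supply ratio is our assumption — LinkedIn has not published its exact formula.

```python
# Illustrative only: toy data and a guessed demand/supply ratio,
# not LinkedIn's actual dataset or methodology.

# Hypothetical counts per skill: (job postings requesting it = demand proxy,
# members listing it on their profiles = supply proxy).
skills = {
    "blockchain": (12_000, 30_000),
    "cloud computing": (90_000, 400_000),
    "analytical reasoning": (70_000, 350_000),
    "artificial intelligence": (50_000, 220_000),
    "ux design": (40_000, 210_000),
}

def demand_supply_ratio(demand: int, supply: int) -> float:
    """Higher ratio = more employer demand chasing relatively scarce talent."""
    return demand / supply

# Rank skills by how scarce the supply is relative to demand.
ranked = sorted(skills.items(),
                key=lambda item: demand_supply_ratio(*item[1]),
                reverse=True)

for rank, (skill, (demand, supply)) in enumerate(ranked, start=1):
    print(f"{rank}. {skill}: ratio {demand / supply:.2f}")
```

Run as-is, the toy data puts blockchain first simply because its (hypothetical) demand is large relative to its supply — the same scarcity logic the post describes.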

As opposed to soft skills, hard skills refer to an employee’s ability to do a specific task and include specialized knowledge and technical abilities such as software development, tax accounting, or patent law expertise. Meanwhile, soft skills refer more to the way those tasks are done — how employees adapt, collaborate, solve problems and make decisions.

LinkedIn says the promise of blockchain is huge

In the blog post, LinkedIn outlined the immense potential of blockchain technology as a cost- and time-efficient, secure and decentralized method of tracking transactions of all types.

It emphasized that a number of high-profile global firms such as IBM, Oracle, JPMorgan and Amazon, as well as LinkedIn’s parent firm Microsoft, have been actively implementing the technology. The post advises global recruiters to start becoming more aware of blockchain technology:

“Blockchain has emerged from the once shadowy world of cryptocurrency to become a business solution in search of problems. Which means that you don’t have to be in financial services to be seeking new hires who have background and expertise in putting blockchain to use. So, recruiters should start becoming familiar with how blockchain works, what its perceived benefits are, and who are the people best suited to help your company explore where this budding technology might have a role.”

LinkedIn has previously outlined the importance of distributed ledger technologies like blockchain. Earlier in 2019, a LinkedIn Asia Pacific report listed blockchain among the most in-demand skills for the coming years as part of its regular feature “The Future of Skills.”

Source: https://cointelegraph.com/news/blockchain-will-be-most-in-demand-hard-skill-in-2020-linkedin


UN Secretary-General: US-China Tech Divide Could Cause More Havoc Than the Cold War


WIRED recently spoke with António Guterres, the Secretary-General of the United Nations, about a topic of increasingly grave concern to him: the fracturing of the internet and the possibility that a technology meant to bring nations together might drive them apart.

A condensed version of this interview is featured in issue 28.02. The full interview, which originally was published on November 25, 2019, is below. The conversation has been lightly edited for clarity.


Nicholas Thompson: It's an honor to get the opportunity to conduct this interview. Recently you gave a speech in Paris, where you talked about five great threats to the world. And you talked about the technological break. What did you mean? Why is it so on your mind right now?

António Guterres: I think we have three risks of divides: a geostrategic divide, a social divide, and a technological divide. Geostrategically, if you look at today's world, with the two largest economies, the Chinese economy and the American economy, and with the trade and technology confrontation that exists, there is a risk—I'm not saying it will happen—there is a risk of a decoupling in which all of a sudden each of these two areas will have its own market, its own currency, its own rules, its own internet, its own strategy in artificial intelligence. And, inevitably, when that happens, its own military and geostrategic strategies. And then the risks of confrontation increase dramatically.


Then we have a social divide. I mean, today the internet is a fantastic tool. If we're looking at the Sustainable Development Goals or our blueprint for a fair globalization to solve the problems of poverty, hunger, lack of education, lack of health in the world, it's clear that the digital economy, the digital technologies, are a fantastic instrument to allow us to achieve those goals. But at the same time, they have risks, and they have clear possibilities of being used for nefarious objectives. And we have terrorist organizations that use the internet, you have drug trafficking and trafficking of human beings using the internet, you have different kinds of cybercrime, you have problems of cybersecurity at different levels. And I think it's important to have the capacity—and I believe the UN is in a unique position for that, because we have a platform where different sectors can come together and discuss how to make the internet a force for good, how to make cyberspace a force for good. And my deep belief is that the traditional forms of intergovernmental conventions to regulate sectors do not apply to the digital world. Because things move so quickly that the convention that takes five years to discuss and approve and then two years to ratify will come too late. We need to have much more flexible mechanisms in which different stakeholders come together regularly, and they adopt a number of protocols, codes of conduct, define some red lines, and create the conditions to have a flexible mechanism of governance that allows the internet to become a force for good.

And then we still have the other divide that is linked to the divide between rich and poor. Half of the population of the world is not linked to the internet. The capacities of countries are completely different. Artificial intelligence in some countries will, of course, destroy jobs but will also create new jobs and allow for enormous progress and development. But other countries will face a negative impact. So to make sure that we don't increase these divides, these inequalities in the world, we need to transform the digital technologies into an instrument to attenuate the inequality—and not into an instrument that makes more and more inequality prevail in today's world.


And we see the impact of inequality more and more, not only among countries but within each country, and we see the disquiet in so many societies because people feel frustrated that they are left behind.

NT: That was a profound description of the problems on all three levels. Let's start with the first one—the geostrategic level. One of the metaphors that people sometimes use for this fracture between the US internet and the Chinese internet is that we’ll have a new Cold War. And countries will have to choose sides—they’ll have to choose whether they want to build with American or Western technology, or with Chinese technology. Do you think that is an appropriate metaphor? And how does it differ from the Cold War we had before?

AG: The Cold War in the past was more predictable and more well defined. In the end, there were two worlds that were indeed separated. But the risks of confrontation were limited. The main risk was, of course, atomic confrontation. But with time and with wisdom, after some risky situations, mechanisms were created and a disarmament agenda was in place that, in the last decades of the last century, worked. And we have seen remarkable reductions in nuclear arsenals.

When we look at cyberspace, it's much more complicated. First of all, I am convinced that if one day we have a major confrontation, it would start with a massive, massive cyber attack, not only on military installations but also on civilian infrastructure. And we do not have clarity on legal frameworks on this. I mean, there is a general principle that international law applies in cyberspace, but it is not clear how international humanitarian law and the other laws of war apply. The self-defense principle of the UN—how does it apply in this context? When is it war, when is it not war in these situations? And then, of course, artificial intelligence will develop new kinds of weapons.

We are totally against—and this is a position I've been stressing strongly—we are against weapons, autonomous weapons, that can have the right to choose targets and kill people without human interference. And we know that the technology is available for that.

And there is no consensus in the world about how to regulate it. Some countries think that they should be forbidden, as I believe; some countries think that no, that is not justified.

NT: Quick side point: Would you forbid the use of unmanned defensive weapon systems, or just offensive?

AG: It’s very difficult to distinguish what is defensive and what is offensive. Our position is that weapons, autonomous weapons, that have the right to kill people that they choose without human interference, when accountability mechanisms cannot be established, should be banned. But that is our position. There is no consensus in the international community about it. What I'm trying to say is that the Cold War of the past was much more predictable than an environment in which there will be no serious international cooperation in the future if this decoupling takes place—and in which the number of ways in which we can create havoc in the world is much bigger.

So I mean, the level of uncertainty and the unpredictability is bigger. That is the reason why I strongly believe that an effort must be made to address this challenge, and to create the conditions, as I said, to have a universal economy, a universal internet, and to have a number of mechanisms of dialog and coordination and cooperation, to establish a set of rules that allow for these risks to be minimized. So, to use an old expression, it was the rise of Athens, and the fear that rise created in Sparta, that made war inevitable. Now, I don't believe that war is inevitable. On the contrary, history proves that in many situations like these there was no war. But we need to have leadership on both sides and in the international community committed to creating the conditions for this evolution to take place in a harmonious way and to avoid forms of decoupling or separation that might create bigger risks in the future.


WIRED editor Nicholas Thompson and UN Secretary-General António Guterres. Photograph: Laurel Golio

NT: So the decoupling is proceeding relatively quickly right now. We're just seeing, for example, that Huawei is making phones without Android. The United States and China are splitting further and further apart on technology. In the near-term future, what do you want to have happen to reduce the speed of the decoupling or even to reverse the process?

AG: To reverse the process. But you need to build trust. You need to have cooperation. You need to have dialog. You need to understand each other, to understand the differences and to have a serious commitment also in relation to other areas that can be divisive on this. For instance, human rights. We need to make sure that these technologies respect human rights, respect human privacy. We need to make sure that we don't use these systems to fully control human lives, both politically and economically. And we know that today, we are all to a certain extent in the eyes of different entities that are interconnected with us. We have not only all our devices that we use—mobile phones, all the other gadgets, computers—but we have the internet of things that is evolving. So more and more, we need, as I said, not rigid regulatory frameworks that are no longer possible, but to bring the actors together. And some of the actors are governments, and governments need to understand that they need to cooperate.

NT: So is the role of the United Nations to convene and to get people in the same room to talk? Or is it to actually set a new global regulatory framework?

AG: I think first we need to bring people together. That's why we appointed a high-level panel on digital cooperation. And there are a number of recommendations that were made. For each recommendation, we are now creating a group of champions—governments, companies and other entities to try to push for digital cooperation, which means in each area, and these are complex issues, we need to bring together actors. And we can be the platform where they come together, and then of course, we need to move ahead with other instruments, like going to the Internet Governance Forum. It is an institution that can do more, in my opinion, can be enhanced, can be strengthened. We have a lot of other instruments today in the world. We need to create the conditions for this kind of, I say, soft, flexible regulation to progressively be accepted by the different actors, and for all actors to cooperate in defining those protocols that I mentioned, those red lines, those mechanisms of cooperation that will allow us to minimize the risks.


NT: So the UN's role would be to convene, and then soft regulations, protocols, red lines…

AG: And in some aspects, law.

NT: What would be a law?

AG: I would be in favor of banning autonomous weapons. In some aspects if there is consensus in the world, international law. In other aspects, as I said, forms of more flexible governance, that in any case adapt better to something that is changing very quickly, as you know.


NT: Let me ask you a big question that troubles me. If you look at the last five years, maybe even the last 10 years, the number of democracies in the world has been declining. And the number of authoritarian states has been increasing. And there are lots of causes for this. But is it possible that technology is one of the causes? Do you think technology is having the opposite effect of what we all hoped?

AG: First, technology can help democracy.

NT: Absolutely.

AG: It can connect people. And we see that many social movements in favor of democracy have been boosted by technology. But it's also true that the way we are now interlinked is sometimes by tribe, and different tribes tend to have their own systems of interconnection, and that generates divides. And this is not only true about social media, it's also true about, sometimes in some countries, traditional media. And then people not only have different opinions, they see facts differently. And then we have all the discussion about fake news and all those things. So that is a reality we need to take into account.

But, I would say, more dangerous than that are the mechanisms that exist today that allow for the control of people. And we see how they can influence elections, we have seen examples of that. Because of the information they have about me, companies might even be able to try to push for changing my tastes, so that I buy what they want. And there are mechanisms that allow for the control, the political control, of people that are extremely worrying, and that if applied in a society can fully undermine democracy. So indeed I believe that our democratic systems need to be able to evolve to preserve democratic values. We cannot just blindly move ahead as if nothing is happening. Things are happening, and they are real threats to democracy.

I'm not pessimistic about that, I have to say, because let's not forget that today we are seeing an evolution into semi-liberal democracies. But at the same time, in the last decades, we have seen a huge number of countries move from authoritarianism to democracy, so it seems we are not witnessing a long-term trend. And we are seeing reactions of people that are very interesting: We are seeing a disquiet of people; we are seeing people wanting to make sure that their voices are heard, that political systems become more participatory. I have an enormous faith in human beings, and I think that human beings will be able to overcome these difficulties and to preserve the democratic values that are so essential for our societies.

NT: And do you think that access to the internet should be a human right and that there should be international law, for example, forbidding the government of Iran from turning off access to the internet, as they did just recently?

AG: I think the internet should be a right. I mean, there are situations—I'm not talking about any country specifically or any situation specifically. I can imagine, as we have in all constitutions, states of emergency that can be declared in certain circumstances by the democratic bodies of the country. So in the context of a full democracy, that can happen. But we shouldn't, in my opinion, use these technologies as an instrument of political control.


NT: And then last question. You've given some ideas for how the world order can be shaped. But for people watching this or reading us who care about the future of democracy and care about the world not splitting apart, what can they do? What should they be thinking about?

AG: Oh, they're doing. I mean, look at the students in so many parts of the world, people are doing, people are assuming responsibility. People are saying all the voices must be heard. The idea that a very small group of people can decide everything is now being put into question very seriously. There is a very, I mean, when we see everything that's happening, of course, in each country, the trigger is different. In some cases it’s an economic-driven occasion, in others it’s pressure on the political system, in others corruption, and people react. But I see more and more people wanting to assume responsibility, wanting their voices to be heard. And that is the best guarantee we have that political systems will not be corrupted.

NT: And technology is often at their service.

AG: Technology can be used against people, but it can be used by people for a good cause.

NT: Thank you so much, Secretary-General Guterres.



Read more: https://www.wired.com/story/un-secretary-general-antonio-guterres-internet-risks/


Save over $200 with discounted student tickets to Robotics + AI 2020


If you’re a current student and you love robots — and the AI that drives them — you do not want to miss out on TC Sessions: Robotics + AI 2020. Our day-long deep dive into these two life-altering technologies takes place on March 3 at UC Berkeley and features the best and brightest minds, makers and influencers.

We’ve set aside a limited number of deeply discounted tickets for students because, let’s face it, the future of robotics and AI can’t happen without cultivating the next generation. Tickets cost $50, which means you save more than $200. Reserve your student ticket now.

Not a student? No problem, we have a savings deal for you, too. If you register now, you’ll save $150 when you book an early-bird ticket by February 14.

More than 1,000 robotics and AI enthusiasts, experts and visionaries attended last year’s event, and we expect even more this year. Talk about a targeted audience and the perfect place for students to network for an internship, employment or even a future co-founder.

What can you expect this year? For starters, we have an outstanding lineup of speakers and demos — more than 20 presentations — on tap. Let’s take a quick look at just some of the offerings you don’t want to miss:

  • Saving Humanity from AI: Stuart Russell, UC Berkeley professor and AI authority, argues in his acclaimed new book, “Human Compatible,” that AI will doom humanity unless technologists fundamentally reform how they build AI algorithms.
  • Opening the Black Box with Explainable AI: Machine learning and AI models can be found in nearly every aspect of society today, but their inner workings are often as much a mystery to their creators as to those who use them. UC Berkeley’s Trevor Darrell, Krishna Gade of Fiddler Labs and Karen Myers from SRI International will discuss what we’re doing about it and what still needs to be done.
  • Engineering for the Red Planet: Maxar Technologies has been involved with U.S. space efforts for decades and is about to send its fifth robotic arm to Mars aboard NASA’s Mars 2020 rover. Lucy Condakchian, general manager of robotics at Maxar, will speak to the difficulty and exhilaration of designing robotics for use in the harsh environments of space and other planets.

That’s just a sample — take a gander at the event agenda to help you plan your time accordingly. We’ll add even more speakers in the coming weeks, so keep checking back.

TC Sessions: Robotics + AI 2020 takes place on March 3 at UC Berkeley. It’s a full day focused on exploring the future of robotics and a great opportunity for students to connect with leading technologists, founders, researchers and investors. Join us in Berkeley. Buy your student ticket today and get ready to build the future.

Is your company interested in sponsoring or exhibiting at TC Sessions: Robotics + AI 2020? Contact our sponsorship sales team by filling out this form.

Read more: https://techcrunch.com/2020/01/15/save-over-200-with-discounted-student-tickets-to-robotics-ai-2020/


EU lawmakers are eyeing risk-based rules for AI, per leaked white paper


The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euractiv.

Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.

But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.

The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.

“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”

However the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).

The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.

These range from voluntary labelling; to imposing sectoral requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.

The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.

Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 and 5: i.e., mandatory risk-based requirements on developers (of whatever sub-set of AI apps is deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.

Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.

Much of the white paper is also taken up with discussion of strategies for “supporting the development and uptake of AI” and “facilitating access to data”.

“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”

EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI”.

For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.

Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.

If it’s the Commission’s intention to also rely on GDPR to regulate higher risk stuff — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies, it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.

“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.

“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition, however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”

An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.

But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.

In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.

Read more: https://techcrunch.com/2020/01/17/eu-lawmakers-are-eyeing-risk-based-rules-for-ai-per-leaked-white-paper/
