Best video conferencing software in 2021

Research consistently shows that communication is more effective when you can see the person you’re talking to. Seeing the other person’s facial expressions, for example, makes it easy to tell a serious request from an offhand remark or a joke.

But as the business world has learned since the start of the pandemic, you don’t have to brave airports and public spaces for an effective face-to-face meeting. Instead, video conferencing software and collaboration services have become the tool of choice for meetings and classes, with everyone from schoolkids to mortgage brokers to grandparents to international award show hosts learning (sometimes awkwardly) how to unmute themselves.

Using the webcam on a Windows laptop, a Mac, or a mobile device, you can meet one-on-one or with a group, no matter how widely scattered the members of your class/team/family are. We’ve assembled the leading conferencing software platforms, all capable of providing high-quality video and full-featured collaboration tools. While many of these video conferencing platforms also offer live streaming and webinar capabilities, our focus here is primarily on virtual meetings. 

Note that several vendors responded to initial coronavirus concerns with free video conferencing offers. Many of those offers have since expired, but plenty of free and discounted options still exist.

The best-known video conferencing brand, by far

After a successful IPO in 2019, Zoom solidified its status as one of the leaders in the video conferencing industry, although recent security and privacy concerns have tarnished that reputation somewhat. Its conferencing software allows simple one-to-one chat sessions that can escalate into group calls, training sessions and webinars for internal and external audiences, and global video meetings with up to 1,000 participants and as many as 49 HD videos on-screen simultaneously.

Zoom sessions can start from a web browser or in dedicated client apps for every desktop and mobile platform, with end-to-end encryption, role-based user security (including HIPAA compliance), and easy-to-use screen sharing and collaboration tools. Meeting invitations integrate smoothly with popular calendaring systems, and meetings can be recorded as local or cloud-based files, with searchable transcripts.

The free tier allows unlimited 1:1 meetings but limits group sessions to 40 minutes and 100 participants. Paid plans start at $15 per month per host and scale up to full-featured Business and Enterprise plans.

View Now at Zoom

New owner Verizon has cut prices and beefed up security

Billing itself as “the meetings platform for the modern workplace,” BlueJeans Meetings is a video conferencing solution that focuses on instant connections, using a mobile or desktop app or directly from a browser (with no download required). Verizon acquired the company in April 2020 and kept the quirky name, which comes from the founders’ desire to make video conferencing software “as comfortable and casual as your pair of jeans.” After the purchase closed, Verizon quickly lowered prices and added a slew of new features, including support for end-to-end AES-256 GCM encryption. The company plans to “deeply integrate” BlueJeans into its 5G product roadmap.

The meeting technology, powered by Dolby Voice, includes background noise cancellation and integrates with hardware-based conference room systems as well as enterprise applications like Microsoft Teams, Slack, and Facebook Workplace. A full array of whiteboard and screen sharing tools adds collaboration capabilities to any meeting. (For livestreams and large-scale web-based presentations, you’ll need a separate product called BlueJeans Events.)

After an initial free trial of the conferencing software, BlueJeans Meetings requires one of three plans, which can be billed monthly or annually (annual billing works out to roughly a 20% discount). The Standard plan, designed for individuals and small businesses, costs $12.49 per meeting host per month; it supports up to 50 attendees and 5 hours of meeting recordings but doesn’t integrate with messaging apps like Slack. The Pro plan, at $17.49 per host per month or $167.88 per year, supports up to 75 attendees and includes 25 hours of cloud recordings per host. The Enterprise plan, with unlimited cloud recordings and an assortment of enterprise-focused tools, supports up to 200 attendees and requires a custom quote.
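
If you’re weighing monthly against annual billing, here’s a minimal arithmetic check (a Python sketch using the Pro-plan prices quoted above) showing how the annual price works out to about 20% off:

    # Sanity-check the Pro plan's annual discount using the prices above.
    monthly_rate = 17.49          # $ per host per month, billed monthly
    annual_price = 167.88         # $ per host per year, billed annually

    full_year_at_monthly = monthly_rate * 12            # $209.88
    discount = 1 - annual_price / full_year_at_monthly  # ~0.20

    print(f"12 months at monthly billing: ${full_year_at_monthly:.2f}")
    print(f"Annual billing:               ${annual_price:.2f}")
    print(f"Effective discount:           {discount:.0%}")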

View Now at BlueJeans Meetings

Best for businesses and schools that already use Office apps

Microsoft Teams, a successor to Skype for Business, isn’t so much a product as it is a feature of Microsoft 365, which tells you a lot about its design and who its features are best suited for: businesses and educational organizations of all sizes. Anyone can sign up for the free version of Microsoft Teams using a personal email address; that tier supports up to 300 meeting participants, with guest access, one-on-one and group video and audio calls, shared files (2GB per user and 10GB per team), screen sharing, and document collaboration using online Office web apps.

Where Teams begins to deliver its full promise as a video conferencing solution is in an organization that runs on a Business or Enterprise version of Microsoft 365, where Teams is just another feature (and the successor to Lync and Skype for Business). In that environment, administrators have access to a full range of management, security, and compliance tools. Team members can share files (up to 1TB per user), schedule meetings directly from Outlook, record meetings, and collaborate on documents using the desktop Office programs and SharePoint Online. Those paid plans also support online training sessions and webinars.

Microsoft 365 plans start at $5 per user per month, and Redmond has been rolling out new features at a steady clip for the past year. For organizations that aren’t deeply embedded in the Microsoft way of working, the Teams feature set can be baffling. But for anyone who already lives in SharePoint and Outlook, Microsoft’s conferencing software should be a natural fit.

View Now at Microsoft Teams

A pioneer of remote software broadens its horizon

LogMeIn has been on an acquisition tear in recent years, with GoToMeeting and a collection of related collaboration tools acquired from Citrix back in 2016. A major update to the video conferencing software released in late 2019 includes a long list of new features and what LogMeIn calls “a completely reimagined product” that works in a web browser (no download required) or through desktop and mobile apps. After a 14-day free trial, you’ll need to choose a paid plan; options include Professional ($12 per organizer per month, up to 150 participants) and Business ($16 per organizer per month for up to 250 participants). An Enterprise plan supports up to 3,000 participants.

The reworked user experience in LogMeIn’s GoToMeeting conferencing solution is consistent across platforms and integrates with calendar solutions and platforms from Office 365, G Suite, Salesforce, Zoho, and Slack. For each call, you can take notes in real time, which are then embedded and saved in the meeting transcript. Besides the normal option to save to video, you can also capture presentation slides from a meeting and share them as a PDF for later download.

View Now at GoToMeeting

Big-time services from a company that’s not too big

AnyMeeting has been around for nearly a decade, and the video conferencing software’s user base had grown to more than 1 million when the company was acquired in 2017 by Intermedia. Today, AnyMeeting is available as part of Intermedia Unite, a unified communication and collaboration platform that integrates its video conferencing, chat, and screen sharing functions into a cloud-based service that also includes VOIP capabilities and an enterprise-grade PBX system. If that’s overkill for your small business’ video conferencing needs, AnyMeeting is available separately in Lite and Pro plans that cost $10 and $13 per user per month, respectively.

Video conferencing software features are essentially the same between the two plans, with the ability to create custom meeting URLs, schedule recurring meetings, and integrate with productivity tools from Google, Microsoft, Slack, and others. HIPAA compliance and end-to-end encryption are standard features as well. Upgrading to a Pro plan increases the number of web-based participants from 10 to 30 (with a maximum of 12 in Full HD). The Pro plan also adds the ability to record and transcribe meetings, along with unlimited cloud storage for recorded meetings.

We’ve been a big fan of Intermedia for years, precisely because it offers the option to use big-time software and services from a company that’s not too big to care.

View Now at Intermedia AnyMeeting

A completely browser-based alternative

Since its founding nearly a quarter-century ago, Zoho has grown to 50 million users worldwide. Its flagship product is Zoho One, a web-based suite of services and mobile apps designed to tie together sales, marketing, accounting, HR, and operations. Zoho Meeting offers tools for webinars, training, and online meetings, with plans starting at $10 per host per month (or $8 per month if you pay for a full year). The price tag of this video conferencing solution includes support for up to 100 participants and storage for 10 recorded meetings.

On PCs and Macs, Zoho Meeting is a completely browser-based conferencing solution, with no downloads required. For audio, participants can dial in over the phone (toll-free numbers are an extra-cost option), and in-session chat is available as well. Meetings can be recorded from any endpoint, including mobile devices. Zoho says the service is GDPR-compliant and is certified to the Privacy Shield Frameworks; more granular privacy tools include the ability for moderators to lock meetings and mute or eject participants. Although the video conferencing service integrates with Google Calendar, its primary strength is for organizations that are already invested in Zoho’s CRM and Projects tools.

View Now at Zoho Meeting

Nobody ever got fired for choosing Webex

Webex is truly one of the graybeards of the video conferencing software segment, founded in 1995 and acquired by Cisco in 2007. The free conferencing plan (up to three users) is surprisingly full-featured, with HD video, screen sharing on desktop and mobile devices, and limited recording options; it supports up to 50 participants per meeting, with meeting times capped at 40 minutes and online storage limited to 1GB.

If the limitations of the free tier get in your way, three paid plans are available: Starter ($13.50 per host per month, 50 attendees), Plus ($17.95 per month, 100 attendees), and Business ($26.95 per month, with a five-license minimum, supporting up to 200 attendees). Enterprise plans are also available. Each step up includes additional cloud storage and management features; single sign-on and support for Exchange and Active Directory require the Business plan. An interesting add-on, Call Me, allows you to start a meeting by receiving a phone call; you’ll pay $4 per host per month for this feature for domestic calls, with the tariff for international calls going up to a pricey $35.75 per month.

View Now at Cisco WebEx

Best for those on a tight budget

This member of the LogMeIn family should be on the video conferencing software shortlist for businesses on a tight budget. Audio meetings with screen sharing for up to three participants are free, with a unique interface that puts each participant’s face in a bubble that bounces around the screen. Paid conferencing plans start with Lite ($10 per host per month, five meeting participants, no time limits), which offers no webcam streams but supports screen and window sharing. Upgrading to Pro ($20 per month) increases the number of meeting participants to 250 and adds 50GB of cloud storage plus recording options. Go to the $30-per-month Business plan for 1TB of storage, single sign-on support, and Salesforce integration.

It’s unclear whether Join.me will thrive in the shadow of its bigger sibling, GoToMeeting, but for now, at least, it has an identity all its own.

View Now at Join.Me

Designed to work in Chrome browser

Google’s ever-evolving lineup of communications and collaboration apps split in two back in 2017, with the classic version of Google Hangouts video conferencing marked for retirement. Google Hangouts Meet is the business version, enabling video meetings for G Suite subscribers. External participants can also connect.

Naturally, the service is designed to work in the Google Chrome browser (although limited support for Internet Explorer 11 is also available), with mobile apps available on iOS and Android. The exact feature set depends on your G Suite version; the number of participants, for example, is limited to 100 for G Suite Basic, 150 for Business, and 250 for Enterprise. For live streaming (up to 100,000 audience members) and the ability to record meetings and save them to Google Drive, you’ll need G Suite Enterprise.

If your business is standardized on Google’s productivity and email tools, this video conferencing option should be on your shortlist.

View Now at Google Hangouts Meet

Video calls only on desktop platforms

If your organization has a paid workspace that uses Slack’s collaboration tools, you already have access to a handful of limited video calling options that might be good enough for basic meetings and team collaboration needs. Just be aware that video calls are available only on desktop platforms (Mac, Windows, and Linux); the iOS and Android apps are limited to voice calls.

For the full range of screen-sharing features, including options to stream presentations and draw on a shared screen, you’ll need the Slack app. With Google Chrome (the only supported browser), you can view a teammate’s screen, but you can’t start a screen share. The company’s support site warns Mac users to download the Slack app from its website for full access to screen sharing features, which are not available in the App Store version.

View Now at Slack

What to look for when evaluating video conferencing solutions

What should you look for when putting video communication software to the test? After a full year of empty offices and no-travel orders, most companies have realized that remote work and online meetings can be extremely effective and are likely to demand much more from these tools than the basic feature set that might have been sufficient in pre-pandemic times.

We narrowed down the list of contenders in this guide using the same criteria we recommend you take into account when you’re in the market for one of these services. Think of our list as a starting point to help you organize your search: One or more of the products that didn’t make our cut might well deserve to be on the shortlist for your business needs.

Our most important criterion was reputation. Every product on this list has a solid track record in terms of performance and reliability. We were pleasantly surprised, in fact, to find some firms in this business that have been going strong for more than two decades.

In business terms, we know that our readers represent a broad swath of sizes, shapes, and cultures. So we went out of our way to find a mix of products that work for cash-conscious small businesses (three of them are free, in fact, for organizations with three people or fewer). For slightly larger organizations, including schools, we’ve tried to highlight commercial plans that are reasonably priced if you can live with their limits, such as the number of meeting participants and the length of each meeting.

Others offer high-end options ideal for large companies that want control over livestreams and training sessions involving large audiences. The biggest differentiator here is the number of people you can have in the audience. That makes sense for organizations that do webcasts and presentations to large numbers of employees, customers, or members on a worldwide basis. Those plans are where you’re much more likely to find support for advanced features like full recording options and the ability to generate a PDF from the slide deck that powered your online session.

If all you want is the ability to talk face to face, with the occasional bit of screen sharing and whiteboarding, you have plenty of choices. If you want to make those sessions available for replay online, you’ll need to look carefully at the cloud file storage options associated with each plan.

As you’ll see from this list, you might already have access to effective tools because of subscriptions you’re already paying for. We included three popular names from that group: Microsoft 365, Google’s G Suite, and Intermedia’s Office in the Cloud. (A fourth option, Slack, is excellent for messaging and 1:1 calls but doesn’t have the feature set to compete with the other products on this list.) Shops that have standardized on one of those plans might find that the included conferencing and communication features are the ideal way to keep your company connected without extra deployment and training costs.

Finally, we considered the sorts of features that reduce friction in using this type of product. That list includes integration with other software you currently use, such as seamlessly connecting an online meeting to your calendar and your organization’s directory. And then there are ease-of-use features, including the ability to connect from a browser instead of being forced to download a client app or plug-in, and the ability to invite external participants to meetings.

Source: https://www.zdnet.com/article/best-video-conferencing/#ftag=RSSbaffb68

Europe wants to set the rules for AI. Not everyone thinks it’s going to work

The European Commission has published a new legal framework that will regulate the use of AI in the bloc.  

Getty Images/iStockphoto

After years of consulting with experts, a few leaked drafts, and plenty of petitions and open letters from activist groups, the European Union has finally unveiled its new rules on artificial intelligence – a world-first attempt to temper fears that the technology could lead to an Orwellian future in which automated systems will make decisions about the most sensitive aspects of our lives. 

The European Commission has published a new legal framework that will apply to both the public and private sectors, for any AI system deployed within the bloc or affecting EU citizens, whether the technology is imported or developed inside member states. 

At the heart of the framework is a hierarchy comprising four levels of risk, topped by what the Commission describes as “unacceptable risk”: those uses of AI that violate fundamental rights, and which will be banned.  

They include, for example, automated systems that manipulate human behavior to make users act in a way that might cause them harm, as well as systems that allow governments to socially score their citizens. 

But all eyes are on the contentious issue of facial recognition, which has stirred much debate in the past years because of the technology’s potential to enable mass surveillance. The Commission proposes a ban on facial recognition, and more widely on biometric identification systems, when used in public spaces, in real-time, and by law enforcement agencies.  
 
This comes with some exceptions: on a case-by-case basis, law enforcement agencies will still be able to carry out surveillance thanks to technologies like live facial recognition to search for victims of a crime (such as missing children), to prevent a terror attack, or to detect the perpetrator of a criminal offence. 

The rules, therefore, fall short of the blanket ban that many activist groups have been pushing for on the use of facial recognition for mass surveillance, and criticism is already mounting of a proposal that is deemed too narrow, and that allows for too many loopholes. 

“This proposal does not go far enough to ban biometric mass surveillance,” tweeted the European digital rights network EDRi.

For example, biometric identification systems that are not used by law enforcement agencies, or which are not carried out in real-time, will slip from “unacceptable risk” to “high risk” – the second category of AI described by the Commission, and which will be authorized subject to specific requirements. 
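
To make the tiering concrete, here’s a minimal Python sketch (our illustration of the logic described above, not the Commission’s wording) of how the three conditions of the proposed ban combine, and how dropping any one of them moves a biometric identification system down to the high-risk tier:

    # Illustrative only: the categories follow this article's description of
    # the proposal; the function and its names are hypothetical.
    def classify_biometric_id(public_space: bool, real_time: bool,
                              law_enforcement: bool) -> str:
        if public_space and real_time and law_enforcement:
            return "unacceptable risk: banned, with case-by-case exceptions"
        return "high risk: authorized subject to specific requirements"

    print(classify_biometric_id(True, True, True))    # live police use in public
    print(classify_biometric_id(True, False, True))   # after-the-fact analysis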

High-risk systems also include emotion recognition systems, as well as AI models that determine access to education, employment, or essential private and public services such as credit scoring. Algorithms used at the border to manage immigration, to administer justice, or to operate critical infrastructure likewise fall under the umbrella of high-risk systems. 

For those models to be allowed to enter the EU market, strict criteria will have to be met, ranging from carrying out adequate risk assessments and ensuring that algorithms are trained on high-quality datasets to providing high levels of transparency, security, and human oversight. All high-risk systems will have to be registered in a new EU database. 

Crucially, the providers of high-risk AI systems will have to make sure that the technology goes through assessments to certify that the tool complies with legal requirements of trustworthy AI. But this assessment, except in specific cases such as for facial recognition technology, will not have to be carried out by a third party. 

“In effect, what this is going to do is allow AI developers to mark their own homework,” Ella Jakubowska, policy and campaigns officer at EDRi, tells ZDNet. “And of course the ones developing it will be incentivized to say that what they are developing does conform.” 

“It’s a real stretch to call it regulation if it’s being outsourced to the very entities that profit from having their AI in as many places as possible. That’s very worrying.” 

A world-first

Despite its shortcomings, Jakubowska observes that the European Commission’s recognition that some uses of AI should be prohibited is a positive step in a field that is lacking regulation, which has at times caused the industry to be described as a “Wild West”. 

To date, businesses have mostly relied on self-ascribed codes of conduct to drive their AI initiatives – that is, when they weren’t held to account by employee activism voicing concerns at the development of harmful algorithms. 

The evidence suggests that the existing methods, or rather the lack thereof, have shortcomings. From biometric technologies tracking Muslim Uighur minorities in China to policing algorithms unfairly targeting citizens on the basis of race, examples abound of AI systems informing high-stakes decisions with little oversight but often dramatic, life-changing consequences for those affected. 

Calls to develop clear rules to control the technology have multiplied over the years, with a particular focus on restricting AI models that can automatically recognize sensitive characteristics such as gender, sexuality, race and ethnicity, health status or disability.  

This is why facial recognition has been in the spotlight – and in this context, the Commission’s proposed ban is likely to be welcomed by many activist groups. For Jakubowska, however, the rules need to go one step further, with a more extensive list of prohibited uses of AI. 

“Civil society is being listened to, to some extent,” she says. “But the rules absolutely don’t go far enough. We’d like to see, for example, predictive policing, uses of AI at the border for migration, and the automated recognition of people’s protected characteristics, also prohibited – as well as a much stronger stance against all forms of biometric mass surveillance, not just the limited examples covered in the proposal.” 

But while Jakubowska’s stance will be shared by many digital rights groups, it is by no means a position shared by all within the industry.  

In effect, what is seen by some as an attempt to prevent AI from wreaking social havoc can also be perceived as placing barriers in the way of the best-case scenario – one in which innovative businesses are incentivized to develop AI systems in the EU that could hugely benefit society, from improving predictions in healthcare to better combatting climate change. 

The case for AI doesn’t need to be made anymore: the technology is already known to contribute significantly to economic and social growth. In sales and marketing, AI could generate up to $2.6 trillion worldwide, according to analysts, while World Bank reports show that data companies have up to 20% higher operating margins than traditional companies.  

It’s not only about revenue. AI can help local councils deliver more efficient public services, assist doctors in identifying diseases, tell farmers how to optimize crop yields and bring about the future smart city with, among other things, driverless cars. 

For all this and more to happen, businesses have to innovate, and entrepreneurs need a welcoming environment to launch their start-ups. This is why the US, for example, has adopted a lax approach to regulation, with a “do what it takes” position that promotes light-touch rules that won’t come in the way of new ideas. 

It’s easy to argue that the EU Commission is doing the exact opposite. With AI still a young and fast-evolving technology, any attempt at prematurely regulating some use cases could stop many innovative projects from even being given a chance. 

For example, the rules ban algorithms that manipulate users into acting in a way that might cause them harm; but the nuances of what does and does not constitute harm are yet to be defined, even though they could determine whether a system should be allowed on the EU market. 

For Nick Holliman, professor at Newcastle University’s school of computing, the vagueness of the EU’s new rules reflects a lack of understanding of a technology that takes on many different shapes. “There are risks of harm from not regulating AI systems, especially in high-risk areas, but the nature of the field is such that regulations are being drafted onto a moving target,” Holliman tells ZDNet. 

In practice, says Holliman, the regulation seems unworkable, or designed to be defined in detail through case law – and the very idea of having to worry about this type of overhead is likely to drive many businesses away.  

“It seems that it will push EU AI systems development down very different risk-averse directions to that in the UK, US and China,” says Holliman. “While other regions will have flexibility, they will have to account for EU regulations in any products that might be used in the EU.” 

The race for AI 

When it comes to leading in AI, the EU is not winning. In fact, it is falling behind: the bloc is rarely cited as even participating in the race for AI, which is rather more often shown as a competition between the US and China.  

The EU’s tendency to embrace the regulation of new technologies has previously been pointed to as the reason for the bloc’s shortcomings. A recent World Bank report showed that the EU launched 38% of investigations into data compliance in 2019, compared to only 12% in North America. For some economists, this “business-unfriendly” environment is the reason that many companies pick other locations to grow. 

“The EU has wider issues to do with the tech ecosystem: it’s very bureaucratic, it’s hard to get funding, it’s a top-down mentality,” Wolfgang Fengler, lead economist in trade and competitiveness at the World Bank, tells ZDNet. “The challenge is that these new rules can be seen as business-unfriendly – and I’m not talking for Google, but for small start-ups operating in the EU.” 

In its new AI regulation, the Commission lays down the expected costs of compliance. Supplying a high-risk system could cost up to €7,000 ($8,400), with another €7,500 ($9,000) to be spent on verification costs.  

Perhaps more importantly, penalties are prohibitive. Commercializing a banned system could lead to fines of up to €30 million ($36 million), or 6% of turnover; failing to comply with the requirements tied to high-risk systems could cost €20 million ($24 million) or 4% of turnover; and supplying incorrect information about the models could lead to €10 million ($12 million) fines, or 2% of turnover. 
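
As a rough illustration of how those turnover-based penalties scale, here is a short Python sketch. It assumes the higher of the fixed cap and the turnover percentage applies (the convention GDPR uses; the proposal’s exact mechanics may differ), and the turnover figure is hypothetical:

    # Hypothetical example: a company with EUR 2 billion in annual turnover.
    def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
        # Assumes the higher of the fixed cap and the percentage applies.
        return max(cap_eur, turnover_eur * pct)

    turnover = 2_000_000_000
    print(f"Banned system:         EUR {max_fine(turnover, 30e6, 0.06):,.0f}")  # 120,000,000
    print(f"High-risk violation:   EUR {max_fine(turnover, 20e6, 0.04):,.0f}")  #  80,000,000
    print(f"Incorrect information: EUR {max_fine(turnover, 10e6, 0.02):,.0f}")  #  40,000,000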

For Fengler, the lesson is clear: talented AI engineers and entrepreneurs will be put off by the potential costs of compliance, which only add to an existing mentality that stifles innovation. And without talent, Europe will find it hard to compete against the US and China. 

“We don’t want Big Brother societies, and there is a clear danger of that,” says Fengler. “It’s good to protect against fears, but if that’s your main driver, then we’ll never get anywhere. You think you can plan this out exactly, but you can’t. We don’t know how some AI experiments are going to end, and there are many examples where AI will make the world a much better place.” 

Competing at all costs 

For digital rights experts like EDRi’s Jakubowska, however, AI systems have already gone past raising fears, and have already demonstrated tangible harms that need to be addressed.  

Rather than calling for a ban on all forms of AI, she stresses, EDRi is pushing for restrictions on use cases that have been shown to impact fundamental rights. Just as knives are allowed but using a knife as a weapon is illegal, she argues, problematic uses of AI should be banned while the technology itself remains available.  

More importantly, the EU should not seek to compete against other nations for the development of AI systems that might threaten fundamental rights. 

“We shouldn’t be competing at all costs. We don’t want to compete for the exceptionally harmful applications of AI,” says Jakubowska. “And we shouldn’t see boundless innovation as being on the same level as fundamental rights. Of course, we should do whatever we can to make sure that European businesses can flourish, but with the caveat that this has to be within the bounds of fundamental rights protections.” 

This is certainly the narrative adopted by the European Commission, which cites the need to remain human-centric while not unnecessarily constraining businesses. In striving to achieve this delicate, near-impossible balance, however, the bloc seems to have inevitably failed to satisfy either end of the spectrum. 

For Lilian Edwards, professor of law, innovation and society at Newcastle University, the Commission’s new rules are hardly surprising given the EU’s long-established positioning as the regulator of the world. More importantly, like all laws, they will continuously be debated and defied. 

“As an academic, I’ll say: What did you expect?” she tells ZDNet. “The devil is going to be in the detail. That is the nature of the law, and people will fight for years on the wording.” 

Whether the strategy will bear fruit is another question entirely. The European Parliament and the member states will now need to adopt the Commission’s proposals on AI following the ordinary legislative procedure, after which the regulations will be directly applicable across the EU. 

Source: https://www.zdnet.com/article/europe-wants-to-set-the-rules-for-ai-not-everyone-thinks-its-going-to-work/#ftag=RSSbaffb68

Google Workspace revamps Meet UI, aims to address ‘collaboration equity’

Google Workspace is rolling out a more streamlined user interface with improvements to pinning, presenting, customization and controls in an effort to address “collaboration equity.”

The general idea from Google is that these features can enable remote and in-office workers to operate from the same baseline. Collaboration equity refers to the ability of workers to contribute regardless of location, role, experience, language and device. 

Starting in May, Google Workspace will bring the new UI to desktop and laptop users, including new video backgrounds, Autozoom and automatic light adjustments. Google has been retooling Meet in recent weeks; it rebranded G Suite as Google Workspace in October.

Google Workspace is adding a bevy of features designed to address the hybrid workplace as well as accessibility and engagement. Google is also adding more controls to Meet in a bid to reduce meeting fatigue. Settings will give customers more control over how they view themselves via resizing and repositioning of tiles. You will also be able to minimize your feed and hide it from your view.


Here’s a look at some of the feature updates to Meet.

  • Multi-pinning, drag and drop and scrollable overflow. Meet is allowing a presenter to unpin a presentation, so it becomes the same size as other participant tiles. The idea is that the presenter can better gauge reactions.
  • Video feed updates that will enable Google Meet customers to pin multiple video feeds and mix and match people and content. This feature could be handy since it allows you to prioritize people and content on the fly.
  • More streamlined controls. Meet’s controls will be consolidated in the bottom right to create more space.
  • Light adjustments. Meet will automatically detect when a user is underexposed and enhance brightness. With this feature, Google is leveraging AI to enhance video much like it does with its computational photography. (A toy sketch of this kind of adjustment appears after this list.)
  • Data saver, a feature designed to limit data usage on mobile networks. Data saver will launch this month and come in handy in markets with high data costs such as Brazil and Mexico.
  • Autozoom, an AI tool that will position a user in front of the camera. Autozoom will be available to paid Google Workspace subscribers.
  • Video background replacement will come with three options: a classroom, a party, and a forest. Google said it will add more video backgrounds.
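
To give a sense of what automatic light adjustment involves, here’s a toy Python sketch (our illustration, not Google’s implementation): it measures a frame’s mean brightness and applies a simple gain when the image is underexposed. The function name and target value are assumptions for the example.

    import numpy as np

    def auto_brighten(frame: np.ndarray, target_mean: float = 128.0) -> np.ndarray:
        """frame: HxWx3 uint8 RGB image. Returns a gain-corrected copy."""
        mean_luma = float(frame.mean())
        if mean_luma >= target_mean:          # already bright enough; leave as-is
            return frame
        gain = target_mean / max(mean_luma, 1.0)
        brightened = frame.astype(np.float32) * gain
        return np.clip(brightened, 0, 255).astype(np.uint8)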

Source: https://www.zdnet.com/article/google-workspace-revamps-meet-ui-aims-to-address-collaboration-equity/#ftag=RSSbaffb68
