
Google Cloud lands 6-year deal with LVMH



Cloud computing in 2021 has become the go-to model for information technology as companies prioritize as-a-service providers over traditional vendors, accelerate digital transformation projects, and enable the new normal of work following the COVID-19 pandemic. 

And while enterprises are deploying more multicloud arrangements, IT budgets are increasingly going to the cloud giants. According to a recent survey from Flexera on IT budgets for 2021, money is flowing toward Microsoft Azure and its software-as-a-service offerings as well as Amazon Web Services. Google Cloud Platform is also garnering interest for big data and analytics workloads. But hybrid cloud and traditional data center vendors such as IBM, Dell Technologies, Hewlett-Packard Enterprise, and VMware have a role too. 

Meanwhile, Salesforce, ServiceNow, Adobe, and Workday are battling SAP and Oracle for more wallet and corporate data share. Salesforce and ServiceNow launched successful back-to-work enablement suites and cemented positions as major platforms. 

Also: The best web hosting providers: Find the right service for your site   

Key themes for 2021 include:

  • There’s a sales war happening by industry. Cloud providers are going vertical to corner industries. Gartner’s Magic Quadrant report on public cloud providers noted that the “capability gap between hyperscale cloud providers has begun to narrow; however, fierce competition for enterprise workloads extends to secondary markets worldwide.” Indeed, the financials from AWS, Microsoft Azure, and Google Cloud have all been strong.
[Figure: Gartner Magic Quadrant for cloud infrastructure as a service, September 2020. Source: Gartner]

With that backdrop, let’s get to the top cloud computing vendors. 

Infrastructure as a service

AWS was the first to offer cloud computing infrastructure as a service in 2008 and has never looked back. It’s launching new services at a breakneck pace and is creating its own compute stack that aims to be more efficient and pass those savings along. That plan isn’t likely to change as Adam Selipsky returns to become CEO of AWS, with Andy Jassy taking over Amazon from Jeff Bezos.

AWS has expanded well beyond cloud compute and storage. If processors based on Arm become the norm in the data center, the industry can thank the gravitational pull of AWS, which launched a second-generation Graviton processor and instances based on it. If successful, the Graviton and the Nitro abstraction layer can be the differentiator for AWS in the cloud wars. 

AWS re:Invent

At re:Invent 2020, a virtual conference, AWS outlined its custom processor roadmap, database advances, and a bevy of tools that solidify its lead in the cloud market. Jassy also took aim at Microsoft Azure and Oracle in his keynote and touted an AWS annual revenue run rate approaching $48 billion.

While 2020 will be the year known for Amazon’s ability to deliver goods during COVID-19 lockdowns, it’s still worth noting that AWS delivers the majority of Amazon’s operating income. 

The biggest question is whether enterprises are going to worry about AWS’ dominance as a digital transformation enabler. For now, AWS is becoming everything from a key AI and machine learning platform to call center engine to edge compute enabler. 


While AWS growth rates have been slowing relative to rivals, its revenue base is much higher, and there is little evidence that AWS isn’t still gaining a larger share of enterprise IT cloud spend. AWS has hybrid cloud partnerships with the likes of VMware, a broad developer ecosystem, and a large enterprise customer base to help it remain in the lead. 


The cheap and easy storyline is that Microsoft Azure and AWS are on a collision course to be the top cloud service provider. The reality is that the two foes barely rhyme. 

Here’s why:

  • There is still no publicly available data on Azure sales. Azure is the part of Microsoft’s cloud business that most rhymes with AWS, but is buried in the commercial cloud. 
  • Commercial cloud is a roll-up of multiple services from Microsoft. Enterprises are likely to buy a buffet that includes Azure but isn’t totally focused on it. That said, Microsoft’s commercial cloud annual revenue run rate is closing in on $70 billion.
  • Microsoft Azure benefits from its software-as-a-service footprint. The reality is that we could easily take Microsoft out of the IaaS category and put it in the SaaS section, since most of the revenue is derived from Office 365, Dynamics, and a bevy of other cloud services that are software rather than infrastructure. 
  • Nevertheless, Azure and its AI, machine learning, and history in the enterprise make it a formidable player, and it has edge computing efforts as well.

The COVID-19 pandemic provided rocket fuel to Microsoft’s cloud business as a bevy of enterprises used Microsoft Teams for remote work. In addition, Microsoft wrestled with capacity issues due to demand, and those issues continued throughout 2020. Microsoft addressed capacity at its Ignite conference after Gartner gave Azure high marks but raised concerns about outages.

Also: Microsoft Teams: How to master remote work beyond the basics | TechRepublic cheat sheet on Microsoft Teams 

Microsoft CEO Satya Nadella argued that the company’s cloud unit sits in the middle of digital transformation efforts. “We have seen two years’ worth of digital transformation in two months. From remote teamwork and learning, to sales and customer service, to critical cloud infrastructure and security, we are working alongside customers every day to help them stay open for business in a world of remote everything,” said Nadella. 

To understand Azure’s competitive advantage, it helps to know some history, which ZDNet’s Mary Jo Foley has chronicled.

Simply put, Azure enjoys an incumbent role with enterprises as a cloud service provider, but pricing will blend multiple monetization models and bundles. The real battle between AWS and Microsoft will revolve around enterprises that go multi-cloud but want one preferred cloud service vendor. Will AWS or Microsoft be the preferred vendor? In that environment, Microsoft is a known commodity that can plug into Salesforce, which picked Azure for its Marketing Cloud, as well as other incumbents such as SAP, Oracle, and Adobe. In addition, Microsoft can pair its cloud offerings with its Microsoft 365 effort, which is a cloud and enterprise software buffet packaged for various industries but may have hidden costs if not negotiated properly.

Microsoft has also honed its ground game for hybrid deployments, with deep partnerships with server vendors to create integrated stacks that target hybrid cloud and private cloud. Azure Arc, Azure Stack, and Azure Stack Edge are all examples of these hybrid efforts.

In the end, the Microsoft Azure battle with AWS will boil down to a sales war and thousands of foot soldiers pitching enterprises. You may become a Microsoft cloud customer via Teams, Office 365, Dynamics, Azure, or some combination of them all. The reality is that you’ll have both top cloud service providers in your company and neither one will own the whole stack. Multi-cloud efforts will begin with having Microsoft and AWS in your company. The wallet-share trench war begins there. (See: Can AWS be caught? Here’s how its cloud computing rivals can improve their chances)  


Google Cloud Platform is coming off a year where it built out its strategy, sales team, and differentiating services, but also had performance hiccups. However, Google Cloud is getting a lift via COVID-19 and Google Meet and setting up a strategy to manage multi-cloud workloads. In 2021, you can expect Google Cloud to continue to expand its footprint with new regions and data centers.


With an annual revenue run rate approaching $16 billion, Google Cloud Platform has been winning larger deals, has a strong leader with Oracle veteran Thomas Kurian, and is seen as a solid counterweight to AWS and Microsoft Azure. Kurian appears to be building out an Oracle-ish model where it targets industries and use cases where it can win. Think retail, where customers leverage Google ads, as well as cloud compute without worries about Amazon. Think education. Think finance. 


Google CEO Sundar Pichai said COVID-19 was an inflection point for digital shifts. “Ultimately, we’ll see a long-term acceleration of movement from businesses to digital services, including increased online work, education, medicine, shopping, and entertainment. These changes will be significant and lasting,” he said. 


Meanwhile, Google Cloud Platform has been building out partnerships with key enterprise players such as Salesforce, Informatica, VMware, and SAP. The company is also combining its G Suite and Google Cloud sales efforts. 

The Google Cloud Platform strategy requires a team that can sell vertically and compete with the sales know-how of AWS and Microsoft. Kurian has surrounded himself with enterprise software veterans. (See: Former Microsoft exec Javier Soltero to lead the Google G Suite team)

A recent hire is Hamidou Dia as Google Cloud’s vice president of solutions engineering. Dia was most recently Oracle’s chief of sales consulting, enterprise architecture, and customer success. Google Cloud also named John Jester vice president of customer experience. Jester will lead a services team focused on architecture and best practices; he was most recently corporate vice president of worldwide customer success at Microsoft.

Also: What makes Google Cloud Platform unique compared to Azure and Amazon

The primary cloud option in China


Alibaba has scaled rapidly with a bevy of enterprise partners. What remains to be seen is whether Alibaba can expand beyond China. In either case, Alibaba has a lot of runway ahead. 


If your company has operations in China and is looking to go cloud, Alibaba is likely to be a key option.  

Alibaba’s cloud annual revenue run rate is nearly $10 billion exiting its most recent quarter. Perhaps the most notable disclosure was that 59% of the companies listed in China are Alibaba Cloud customers. Meanwhile, Alibaba is building out its next-gen cloud as well as capacity in China, EMEA, and elsewhere.

While Alibaba Cloud flies under the radar for customers that are primarily focused on the EU and US, companies operating in China may use it as a preferred cloud vendor. To that end, Alibaba Cloud is forging alliances with key enterprise vendors and is seen as a leading cloud service provider in Asia. 


The catch with Alibaba Cloud is that US-based customers are likely to run into politics, data concerns, and trade wars, but it’s quite possible that Alibaba Cloud can jump the rankings based on revenue just because the Chinese cloud market will be massive. 

Hybrid/multi-cloud 

With the battle between the hyperscale cloud vendors underway, you’d think that the legacy infrastructure players would recede to the background. Instead, the likes of IBM, Dell Technologies, and HPE aim to become the glue between multicloud deployments that feature a blend of private and public clouds as well as owned data centers. After all, most enterprises are looking at a multicloud strategy.

The two multicloud enablers in this mix are open source pioneer Red Hat, owned by IBM, and VMware, which is owned by Dell Technologies. Toss in Hewlett-Packard Enterprise, Lenovo, and Cisco Systems for solving select issues and you have a vibrant hybrid and multi-cloud space to consider. Here’s a look at the key players that aim to be the point guards of the public cloud and how they’ll connect to the hyperscale providers. 

IBM outlined the rationale for the $34 billion Red Hat purchase and its strategy for turbo-charging its growth in the future. 

In 2020, IBM doubled down on Red Hat and is spinning off its managed services unit in 2021. Here’s the setup for IBM going into 2021:

CEO Arvind Krishna has said IBM’s big bets revolve around hybrid cloud, automation and AI. He has also said that the spin-off of the managed infrastructure unit will give IBM more focus. 

Krishna’s North Star for IBM goes like this:

I want IBM-ers to lead with a more technical approach. I want our teams to showcase the value of our solutions as early as possible. Likewise, there must be a relentless focus on quality. Our products must speak for themselves in terms of user experience, design and ease of use. My approach is straightforward: I am going to focus on growing the value of the company. This includes better aligning our portfolio around hybrid cloud and AI to meet the evolving needs of the market.  

One key item to watch is how IBM blends its cloud and hybrid approach with emerging technologies.


VMware has an incumbent position, key partnership with AWS, and a parent in Dell Technologies that is using the cloud management platform to power its own platform. VMware has a knack for evolving as the cloud ecosystem shifts. For instance, VMware was focused primarily on virtualization and has fully adopted containers. VMware powers legacy enterprise data centers, but has extended to being the connector to public cloud providers after being a leader in private cloud deployments. In addition to its lucrative AWS partnership, VMware also has partnerships with Microsoft Azure and Google Cloud Platform. And for good measure, VMware has integrated system partnerships with multiple hardware vendors. 

But VMware also needs to name a new CEO, given that Pat Gelsinger is now running Intel.

The company’s VMworld 2020 virtual conference also highlighted how the company is eyeing AI workloads via partnerships with Nvidia as well as architectures such as Project Monterey to scale them. 

Recent headlines give a flavor of VMware’s evolution and where it fits into the enterprise mix.

So, where does Dell Technologies fit? Like IBM and Red Hat, Dell Technologies is looking to VMware as the software glue to give it a cloud platform that can span internal and public resources. VMware is the linchpin of Dell Technologies’ cloud effort.

Dell Technologies’ long game for the hybrid cloud revolves around a leadership position in integrated and converged systems, a vast footprint in servers, networking, and storage, and VMware’s ability to bridge clouds. Dell Technologies is also aiming to deliver everything as a service. 

At its Dell Technologies World conference in Las Vegas, the company outlined a hybrid cloud strategy that aims to knit its data center and hybrid cloud technologies together with public cloud providers such as Amazon Web Services and IBM Cloud, with more to come. The effort is dubbed the Dell Technologies Cloud. VMware is also launching VMware Cloud on Dell EMC, which will include vSphere, vSAN, and NSX running on Dell EMC’s infrastructure. 

In addition, Dell Technologies is launching a data-center-as-a-service effort in which it manages infrastructure under one-year and three-year deals that line up with cloud computing consumption models. VMware Cloud on Dell EMC is also designed for companies that run their own data centers but want a cloud operating model. Dell Technologies’ data-center-as-a-service effort is built on a VMware concept highlighted last year called Project Dimension.

Enterprises are likely to be in either the Red Hat or the VMware camp, and both companies have big parents with the scale to reach into private clouds and hybrid data centers. 

Hewlett Packard Enterprise’s hybrid cloud strategy revolves around its stack of hardware — servers, edge compute devices via Aruba, storage and networking gear — and its various software platforms such as Greenlake, SimpliVity, and Synergy. HPE prefers the term “hybrid IT” over multicloud, but its approach rhymes with what IBM and Dell Technologies are trying to do. The catch is that HPE doesn’t have the scale that Red Hat and VMware have. 

Nevertheless, HPE has key partnerships with Red Hat and VMware, as well as integrated and converged systems with cloud providers. HPE’s stated goal is to offer its entire portfolio as a service over time. HPE CEO Antonio Neri outlined the strategy in an interview with ZDNet. Neri said:

We want to be known as the edge-to-cloud platform as-a-service company. And in that there are three major components. One is as-a-service, because obviously customers want to consume their solutions in a more consumption driven, pay only for what you consume. And that experience, at the core is simplicity and automation for all the apps and data, wherever they live.

Obviously, the edge is the next frontier. And we said two years ago that the enterprise of the future will be edge-centric, cloud-enabled and data-driven. Well, guess what? The future is here now. The edge is where we live and work.


Where HPE’s approach to hybrid deployments is differentiated is in its Aruba unit, which provides edge computing platforms. HPE aims to extend its cloud platform to edge networks. That cloud-to-edge approach could pay off in the future, but edge computing is still a developing market. In the meantime, HPE is tapping into Azure for management talent. 


Keith White, a former Microsoft executive, will lead HPE’s Greenlake business, which aims to help transform the company into an as-a-service juggernaut.

HPE is also looking to address container management and sprawl with its BlueData software.

Cisco Systems has a bevy of multi-cloud products and applications, but the headliner is ACI, short for Application Centric Infrastructure. Cisco is also melding AppDynamics, cloud management, and DevOps.

Those parts add up to Cisco pursuing an everything-as-a-service model, starting with an effort called Cisco Plus.

Not surprisingly, Cisco’s approach to multi-cloud is network-centric and ACI focuses on policy, management, and operations for applications deployed across cloud environments. 

Cisco has partnerships with Azure and AWS and has expanded a relationship with Google Cloud. Add in AppDynamics, which specializes in application and container management, and Cisco has the various parts to address hybrid and multi-cloud deployments. In addition, Cisco is a key hyper-converged infrastructure player and its servers and networking gear are staples in data centers. 


Software as a Service

Software as a service is expected to be the largest revenue slice of the cloud pie. According to Gartner, SaaS revenue in 2020 is expected to be $166 billion compared to $61.3 billion for IaaS. 

For large enterprises, there are a few realities. For starters, you’re likely to have Salesforce in your company. You’ll probably have Oracle and SAP, too. And then there may be a dose of Workday as well as Adobe. We’ll focus on those five big vendors and their prospects. It’s also worth noting that some of the previous vendors mentioned are primarily SaaS vendors. Microsoft Dynamics and Office are two software products likely to be delivered as a service. Your roster of software providers is as diverse as ever.

Here’s a look at the leading cloud software vendors.

Salesforce’s ambitions are pretty clear. The company wants to enable its customers to use data to deliver personalized experiences, sell them its portfolio of clouds, and put its Salesforce Customer 360 effort at the center of the tech world. In 2020, Salesforce expanded its reach with Work.com, a suite to enable workers to head back to the office during the COVID-19 pandemic.

Vaccine management is also a hot area for Salesforce. Salesforce said that its vaccine management tools are used by more than 150 government agencies and healthcare organizations. Salesforce’s Vaccine Cloud is being used to build and manage COVID-19 vaccination efforts and track outbreaks.

Recent developments highlight Salesforce’s approach to invest through a downturn. Salesforce also said it will acquire Slack to connect its various clouds. The company outlined its 2021 ambitions at its Dreamforce conference:

Salesforce executives have outlined the road to doubling revenue in fiscal 2025. Indeed, Salesforce has acquired or built out what could be an entire enterprise stack as it pertains to customer data. Its acquisition of Tableau may also be transformative since the analytics company has a broader footprint and gives Salesforce another way to reach the broader market. 

Also: Salesforce launches Salesforce Anywhere, app that embeds collaboration, data across platforms

What remains to be seen is whether Salesforce’s Customer 360 platform can bring all of its clouds together in a way that prods enterprises to buy the entire portfolio in a SaaS buffet. At its analyst meeting, Salesforce noted that it had one customer in its top 25 with five clouds from the company, no customer with six, and a handful with three or four clouds. Slack will also bring more customers and reach to Salesforce.


Salesforce will need its top customers to adopt more clouds if the company is going to get to its $35 billion revenue target in fiscal 2025. 

Salesforce’s current lineup consists of clouds for integration, commerce, analytics, marketing, service platform, and sales. Service and sales clouds are the most mature, but others are growing quickly. Salesforce’s Einstein is an example of AI functionality that’s an upsell to its clouds. In the end, Salesforce sees a $168 billion total addressable market. Work.com could add more to that tally.


Oracle does infrastructure. Oracle does platform. Oracle does database, which is increasingly autonomous. Despite its IaaS and PaaS footprint, Oracle is mostly a software provider when it comes to cloud. With the addition of NetSuite, the company can cover small, mid-sized, and large enterprises. 

While Oracle came into 2020 as an afterthought in IaaS, it has had an eventful second half of the year. Oracle landed Zoom as a reference customer for its cloud and is seeing momentum into 2021.


Edward Screven, Oracle’s chief corporate architect, said in an interview that the company is expanding its hyperscale reach for IaaS and plans to hit 36 facilities by the end of the year. While SaaS is core, Oracle is also landing new users with infrastructure and a free tier. “A lot of conversations we have are about SaaS, but enterprises need to build SaaS using the tools we have so they look at the platform. And everyone is looking for a fast, reliable and cost-effective compute,” said Screven. 

In other words, IaaS players start with compute and storage and move up the stack. Oracle can start at the high end and work back into infrastructure. “AWS was first, but we have a lot of customers with experience already with Oracle Cloud,” he said. Screven said that Oracle Cloud is seeing more developer interest due to a free tier.

The big win for Oracle’s cloud business will be SaaS and autonomous database services. Oracle’s cloud is optimized for its own stack, and that will appeal to its customer base. Oracle’s Cloud at Customer product line is also appealing to hybrid cloud customers. Oracle will put an optimized autonomous database in an enterprise and manage it as if it was its own cloud.

Will Oracle go multi-cloud and partner with frenemies? Yes and no. Microsoft Azure and Oracle are partnered to combine data centers and swap data with speedy network connections. Oracle isn’t likely to partner with Google Cloud given its court battles with the company. Oracle isn’t likely to cozy up to AWS either.

For enterprises, Oracle’s cloud efforts will be powered by SaaS and it will be a player in other areas. It’s unclear whether Oracle’s bet on what it calls Generation 2 Cloud Infrastructure will pay off, but its enterprise resource planning, human capital management, supply chain, sales and service, marketing, and NetSuite clouds will keep it a contender.  

SAP CEO Christian Klein is looking to maintain the company’s cloud momentum, expand HANA and Qualtrics, and battle Salesforce, Oracle, and Workday. Klein is also looking to focus and simplify SAP, and to shift its customer base to the cloud on an accelerated timetable. 

Klein said:

Instead of doing everything ourselves, we are co-innovating. We have always been the leading on-premise application platform. Thousands of partners and customers have built applications and extensions on SAP for almost 50 years. Our intention is to repeat that for the cloud to position SAP as the leading cloud platform to transform and change the way enterprises work in the digital age. To get there, we have put a lot of work into our cloud platform over the past 12 months, and we will continue to invest in innovation. The time when SAP developed and engaged with customers in silos is over.

SAP’s 2021 plan is to migrate its customers to the cloud faster and create one data model. Klein added:

We will bring the full force of our business applications and platform to drive holistic business transformation, by enabling our customers to seamlessly design, evolve or win new business models with agility and speed. To do so, all our main solutions will adopt the cloud platform and share one semantical data model, one AI and analytics layer, one common security and authorization model and the same application business services, such as workflow management, with our cloud platform powered by SAP HANA. Processes can be changed, enabling agile workflows. Innovations and extensions can be developed quickly by customers and partners accessing our open platform, using exactly the same data model and business services as our own SAP apps. We are convinced that the real value driver of intelligent enterprises in the cloud will be the ability to adapt and run new business models holistically, end-to-end, with one consistent data model.  


Workday has more than 3,000 customers and the human capital management software vendor is increasingly adding financial management customers too. As a result, Workday is among the cloud vendors gaining wallet share, according to a Flexera report. 

The company is at an inflection point where it is selling more clouds and has a big market to chase as it courts mid-market companies. While the SaaS menu at Workday is decidedly more limited than what rivals SAP and Oracle offer, the company enjoys tighter focus. 

Workday co-CEO Aneel Bhusri said that his company is entering an expansion phase that rhymes with the Salesforce playbook. Workday ultimately sees its financial platform being the equal of its HR footprint. Planning and procurement are other new areas. Ultimately, Workday’s SaaS challenge will be to sell multiple clouds to customers.

Bhusri said:

“I would point you to the transition that Salesforce went through. They’re 6 years older than us, one of our best partners. They went from being a sales company to a sales and services company to a sales and service and marketing company and platform. Now they’ve got analytics. We’re going through that same journey and growth rates kind of ebb and flow as the different pillars take off.”

Workday is infusing machine learning and automation throughout its platform.  

Adobe has been a well-established cloud vendor among content creators and marketers, but a plan to focus on digital experiences and data management will put it on a collision course with the likes of Salesforce, Oracle, and SAP in areas like marketing. So far, so good.

The company continually expands its addressable market.

For enterprises, Adobe’s plan to dramatically expand its total addressable market can be a good thing — especially if the company can be used as leverage against incumbent providers. 

The company is also looking to be a key part of your data and digital transformation strategies. Adobe has hired former Informatica CEO Anil Chakravarthy as head of its digital experience unit. The move highlights how Adobe sees data integration as key to its expansion. “Every single business is going through the same digital transformation that we were lucky enough to go through almost a decade ago. And if a company cannot engage digitally with the customer, understand how the funnel, all the way from acquiring customers to renewing them, can be done digitally, they’re going to be disadvantaged,” said Adobe CEO Shantanu Narayen. 

ServiceNow had a strong 2020, emerging as a SaaS provider delivering growth and becoming a platform of platforms for various workflows.

Although ServiceNow is best known for its IT service management platform, it has expanded into a bevy of other corporate functions. In addition, CEO Bill McDermott has aimed the ServiceNow platform at industry-specific use cases, including vaccine management as it evolves. McDermott said:

Here are a few trends shaping the overarching environment for ServiceNow. This unprecedented environment is breaking physical supply chains. It is exposing the weak links in the old value chains, illuminating how companies struggle cross-functionally to deliver the workflows that create great experiences for customers, employees and partners. The world is experiencing a seismic shift from the obsolete business process evolution to the new workflow revolution.


The game plan for ServiceNow is to be a digital transformation engine by connecting systems of record to become a system of action. 

One key example is how ServiceNow has aimed its platform at back-to-work management efforts.

Source: https://www.zdnet.com/article/google-cloud-lands-6-year-deal-with-lvmh/#ftag=RSSbaffb68


“Happiness lies in the moment of truth,” Shailesh Singh, chief people officer, Max Life Insurance


Happiness@Work is a regular series in which HRKatha looks at how companies are ensuring happiness at work. With work stress and employees’ mental wellbeing becoming a major cause of concern for many Indian companies, happiness of employees at work is something that can result in better engagement, stronger bonds, improved employee health and overall productivity.

Happiness@Work is powered by Happyness.me, a part of the consulting division of House of Cheer Networks, a full-service people, technology, media and entertainment hub specialising in Creation, Curation and Consultancy, which helps companies reimagine their business and growth strategy.

In this interview, Shailesh Singh, chief people officer, Max Life Insurance, shares that the concept of happiness has shifted from long-term satisfaction to short-term desires; the new generation wants to ‘live in the moment’. He also shares how Max Life Insurance ensures happiness at its workplaces.


Source: https://www.hrkatha.com/special/happiness-work/happiness-lies-in-the-moment-of-truth-shailesh-singh-chief-people-officer-max-life-insurance/



Does hiring a star CHRO impact the employer brand?


Given the name and fame that back star leaders, the organisations that hire them definitely stand to gain. Not only does their reputation get a boost, but even their processes are impacted and they gain a positive outlook overall. It is just like hiring a hot shot CEO or a CMO, where the clients, customers, internal stakeholders and the employees of the company themselves start taking pride in the person hired. So, yes, in the case of CXO roles, the impact is definitely significantly positive, but what about star CHROs?

Like in any other domain, there exist some very popular figures in the HR community in our country. In terms of their performance, media coverage and social-media popularity, these people have a significant following. When a company hires such hot shot profiles, do they experience a positive impact on their employer brand?

“It is easier to make changes with someone with a legendary background and a history of phenomenal work. This is because bringing in changes requires winning over the belief of the organisation, and that is pretty easy for someone who is well regarded and has a following”

Shailesh Singh, chief people officer, Max Life Insurance

Charisma and craftsmanship

As per Abhijit Bhaduri, HR leader and author of Dreamers & Unicorns, it definitely impacts the employer brand. He believes that people are sometimes so impressed and mesmerised by an individual’s work and craftsmanship that they are drawn to them. He cites examples of famous surgeons and doctors who are very popular in their field. As a result, no matter which hospital they work for, people will seek them out. Similarly, in the advertising industry, when creative directors move to another company, the entire account often moves with them, because the clients pay for the creativity and craftsmanship of the creator. “The individual’s craftsmanship, charisma and popularity attract a lot of people, and this applies to the HR world too,” shares Bhaduri.

Limelight can distract

A research paper published in 2007, ‘Superstar CEOs’, studied the growth trajectory of more than 250 award-winning CEOs between 1993 and 2002. It concluded that while such CEOs were doing fine in their personal lives, the firms they were working for had started underperforming after they received such recognition. The companies underperformed both in terms of stock returns and returns on assets over the one-, two- and three-year periods following the award.

One explanation for this phenomenon is that such people tend to get distracted once they become popular ‘stars’ in their domain of work, given the increasing level of outside interest. They start focussing on authoring books and sitting on boards. This can also be the case when an organisation gets a popular CHRO in its leadership team. There can be a reverse effect.

“A celebrity leader’s craftsmanship, charisma and popularity attract a lot of people, and this applies to the HR world too”

Abhijit Bhaduri, HR leader & author, Dreamers & Unicorns

Team work

As per VDV Singh, former VP-HR, JK Cement, hiring a star CHRO may help, but with certain conditions. “Those who possess business acumen and have proved themselves in the field begin to be respected by the board, and the HR function gets a space in the board, which eventually creates an impact internally,” enunciates Singh, the former HR leader from JK Cement.

Singh, however, believes that creating a strong employer brand requires teamwork. There have to be policies in place, an ideal environment and a strong culture to create a strong employer brand, which can only be achieved as a team. “I would say, the face of a popular CHRO with no strong team in place to support him will only result in a short-term impact,” says Singh, the former HR leader from JK Cement.

Track record

On the other hand, Shailesh Singh, chief people officer, Max Life Insurance, sees this phenomenon through two lenses. The first is a short-term lens, through which he does agree that hiring a superstar CHRO gives an employer brand a temporary boost. “It is easier to make changes with someone with a legendary background and a history of phenomenal work. This is because bringing in changes requires winning over the belief of the organisation, and that is pretty easy for someone who is well regarded and has a following,” points out Singh from Max Life Insurance. He cautions, however, that actions speak louder than words. If popular CHROs fail to live up to their reputation through their actions post hiring, the positive impact will remain short-lived.

“The face of a popular CHRO with no strong team in place to support him, will only result in a short-term impact”

VDV Singh, former VP-HR, JK Cement

Visibility

The impact will be different in smaller and bigger brands. The hiring of a star CHRO by a smaller organisation will catch the imagination of people more rapidly than if the hiring is done by a bigger organisation. In the latter case, building an employer brand and culture is a collective call. “Sometimes, in bigger firms, where things are performed in certain ways, there are no individual heroes. It is a collective effort by all. The overall impact of hiring a star CHRO may not be visible because the company’s brand is bigger than the individual,” explains Singh from Max Life Insurance.

Taking on a star CHRO can be rather eye-catching and head-turning, as it will definitely give a company’s employer brand a short-term boost. However, if such celebrity hires do not show real action on the ground, the positive effect will soon wane.


Source: https://www.hrkatha.com/employee-branding/does-hiring-a-star-chro-impact-the-employer-brand/



What is AI? Here’s everything you need to know about artificial intelligence


What is artificial intelligence (AI)?

It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a machine that would have previously been considered to require human intelligence.

That’s obviously a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

Modern definitions of what it means to create intelligence are more specific. Francois Chollet, an AI researcher at Google and creator of the machine-learning software library Keras, has said intelligence is tied to a system’s ability to adapt and improvise in a new environment, to generalise its knowledge and apply it to unfamiliar scenarios.

“Intelligence is the efficiency with which you acquire new skills at tasks you didn’t previously prepare for,” he said.

“Intelligence is not skill itself; it’s not what you can do; it’s how well and how efficiently you can learn new things.”

It’s a definition under which modern AI-powered systems, such as virtual assistants, would be characterised as having demonstrated ‘narrow AI’, the ability to generalise their training when carrying out a limited set of tasks, such as speech recognition or computer vision.

Typically, AI systems demonstrate at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem-solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

What are the different types of AI?

At a very high level, artificial intelligence can be split into two broad types: 

Narrow AI

Narrow AI is what we see all around us in computers today — intelligent systems that have been taught or have learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, or in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do defined tasks, which is why they are called narrow AI.

General AI

General AI is very different and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets or reasoning about a wide variety of topics based on its accumulated experience. 

This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn’t exist today – and AI experts are fiercely divided over how soon it will become a reality.

What can Narrow AI do?

There are a vast number of emerging applications for narrow AI:

  • Interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines.
  • Organizing personal and business calendars.
  • Responding to simple customer-service queries.
  • Coordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location.
  • Helping radiologists to spot potential tumors in X-rays.
  • Flagging inappropriate content online.
  • Detecting wear and tear in elevators from data gathered by IoT devices.
  • Generating a 3D model of the world from satellite imagery… the list goes on and on.

New applications of these learning systems are emerging all the time. Graphics card designer Nvidia recently revealed an AI-based system called Maxine, which allows people to make good-quality video calls almost regardless of the speed of their internet connection. The system reduces the bandwidth needed for such calls by a factor of 10 by not transmitting the full video stream over the internet and instead animating a small number of static images of the caller in a manner designed to reproduce the caller’s facial expressions and movements in real time and to be indistinguishable from the video.

However, as much untapped potential as these systems have, sometimes ambitions for the technology outstrip reality. A case in point is self-driving cars, which are themselves underpinned by AI-powered systems such as computer vision. Electric car company Tesla is lagging some way behind CEO Elon Musk’s original timeline for the car’s Autopilot system being upgraded to “full self-driving” from the system’s more limited assisted-driving capabilities, with the Full Self-Driving option only recently rolled out to a select group of expert drivers as part of a beta testing program.

What can General AI do?

A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C Müller and philosopher Nick Bostrom reported a 50% chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90% by 2075. The group went even further, predicting that so-called ‘superintelligence’ – which Bostrom defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” — was expected some 30 years after the achievement of AGI. 

However, recent assessments by AI experts are more cautious. Pioneers in the field of modern AI research such as Geoffrey Hinton, Demis Hassabis and Yann LeCun say society is nowhere near developing AGI. Given the scepticism of leading lights in the field of modern AI and the very different nature of modern narrow AI systems to AGI, there is perhaps little basis to fears that a general artificial intelligence will disrupt society in the near future.

Some AI experts go even further, believing that such projections are wildly optimistic given our limited understanding of the human brain, and that AGI is still centuries away.

What are recent landmarks in the development of AI?


While modern narrow AI may be limited to performing specific tasks, within their specialisms, these systems are sometimes capable of superhuman performance, in some instances even demonstrating superior creativity, a trait often held up as intrinsically human.

There have been too many breakthroughs to put together a definitive list, but some highlights include: 

  • In 2009 Google showed its self-driving Toyota Prius could complete more than 10 journeys of 100 miles each, setting society on a path towards driverless vehicles.
  • In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win the show, Watson used natural language processing and analytics on vast repositories of data that is processed to answer human-posed questions, often in a fraction of a second.
  • In 2012, another breakthrough heralded AI’s potential to tackle a multitude of new tasks previously thought of as too complex for any machine. That year, the AlexNet system decisively triumphed in the ImageNet Large Scale Visual Recognition Challenge. AlexNet’s accuracy was such that it halved the error rate compared to rival systems in the image-recognition contest.

AlexNet’s performance demonstrated the power of learning systems based on neural networks, a model for machine learning that had existed for decades but that was finally realising its potential due to refinements to architecture and leaps in parallel processing power made possible by Moore’s Law. The prowess of machine-learning systems at carrying out computer vision also hit the headlines that year, with Google training a system to recognise an internet favorite: pictures of cats.

The next demonstration of the efficacy of machine-learning systems that caught the public’s attention was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 possible moves per turn compared to about 20 in chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently, Google refined the training process with AlphaGo Zero, a system that played “completely random” games against itself and then learned from the results. Google DeepMind CEO Demis Hassabis has also unveiled a new version of AlphaGo Zero that has mastered the games of chess and shogi.

And AI continues to sprint past new milestones: a system trained by OpenAI has defeated the world’s top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, followed by Facebook training agents to negotiate and lie.

2020 was the year in which an AI system seemingly gained the ability to write and talk like a human about almost any topic you could think of.

The system in question, known as Generative Pre-trained Transformer 3 or GPT-3 for short, is a neural network trained on billions of English language articles available on the open web.

From soon after it was made available for testing by the not-for-profit organisation OpenAI, the internet was abuzz with GPT-3’s ability to generate articles on almost any topic that was fed to it, articles that at first glance were often hard to distinguish from those written by a human. Similarly impressive results followed in other areas, with its ability to convincingly answer questions on a broad range of topics and even pass for a novice JavaScript coder.

But while many GPT-3 generated articles had an air of verisimilitude, further testing found the sentences generated often didn’t pass muster, offering up superficially plausible but confused statements, as well as sometimes outright nonsense.

There’s still considerable interest in using the model’s natural language understanding as the basis of future services. It is available to select developers to build into software via OpenAI’s beta API. It will also be incorporated into future services available via Microsoft’s Azure cloud platform.

Perhaps the most striking example of AI’s potential came late in 2020 when the Google attention-based neural network AlphaFold 2 demonstrated a result some have called worthy of a Nobel Prize for Chemistry.

The system’s ability to look at a protein’s building blocks, known as amino acids, and derive that protein’s 3D structure could profoundly impact the rate at which diseases are understood, and medicines are developed. In the Critical Assessment of protein Structure Prediction contest, AlphaFold 2 determined the 3D structure of a protein with an accuracy rivaling crystallography, the gold standard for convincingly modelling proteins.

Unlike crystallography, which takes months to return results, AlphaFold 2 can model proteins in hours. With the 3D structure of proteins playing such an important role in human biology and disease, such a speed-up has been heralded as a landmark breakthrough for medical science, not to mention potential applications in other areas where enzymes are used in biotech.

What is machine learning?

Practically all of the achievements mentioned so far stemmed from machine learning, a subset of AI that accounts for the vast majority of achievements in the field in recent years. When people talk about AI today, they are generally talking about machine learning. 

Currently enjoying something of a resurgence, in simple terms, machine learning is where a computer system learns how to perform a task rather than being programmed how to do so. This description of machine learning dates all the way back to 1959 when it was coined by Arthur Samuel, a pioneer of the field who developed one of the world’s first self-learning systems, the Samuel Checkers-playing Program.

To learn, these systems are fed huge amounts of data, which they then use to learn how to carry out a specific task, such as understanding speech or captioning a photograph. The quality and size of this dataset are important for building a system able to carry out its designated task accurately. For example, if you were building a machine-learning system to predict house prices, the training data should include not just the property size but other salient factors such as the number of bedrooms or the size of the garden.
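
To make the house-price example concrete, here is a minimal sketch in Python using scikit-learn. The features and prices are invented placeholder values, not data from this article, and a real model would need far more examples and features.

```python
# Hypothetical house-price example: all feature values and prices are invented.
from sklearn.linear_model import LinearRegression

# Each row: [floor area in square metres, bedrooms, garden size in square metres]
X_train = [
    [70, 2, 0],
    [90, 3, 20],
    [120, 4, 50],
    [150, 4, 100],
    [200, 5, 150],
]
y_train = [180_000, 240_000, 320_000, 400_000, 520_000]  # example sale prices

model = LinearRegression()
model.fit(X_train, y_train)  # learn the relationship from the labelled examples

# Estimate the price of an unseen 100 sq m, 3-bedroom house with a 30 sq m garden
print(model.predict([[100, 3, 30]]))
```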

What are neural networks?

The key to machine learning success is neural networks. These mathematical models are able to tweak internal parameters to change what they output. A neural network is fed datasets that teach it what it should spit out when presented with certain data during training. In concrete terms, the network might be fed greyscale images of the numbers between zero and 9, alongside a string of binary digits — zeroes and ones — that indicate which number is shown in each greyscale image. The network would then be trained, adjusting its internal parameters until it classifies the number shown in each image with a high degree of accuracy. This trained neural network could then be used to classify other greyscale images of numbers between zero and 9. Such a network was used in a seminal paper showing the application of neural networks published by Yann LeCun in 1989 and has been used by the US Postal Service to recognise handwritten zip codes.
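
Here is a minimal sketch of that digit-classification setup, using Keras (the library mentioned earlier in this article) with the standard MNIST dataset of 28x28 greyscale digits standing in for the images described. The layer sizes and number of training epochs are arbitrary choices for illustration, not values from the article.

```python
# Illustrative only: layer sizes and training epochs are arbitrary choices.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale greyscale pixels to 0-1

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),                   # a 28x28 greyscale image in
    tf.keras.layers.Flatten(),                        # ...flattened to 784 pixel values
    tf.keras.layers.Dense(128, activation="relu"),    # internal parameters to adjust
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit, 0 to 9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)   # adjust the weights against the labels
print(model.evaluate(x_test, y_test))   # accuracy on digit images it has not seen
```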

The structure and functioning of neural networks are very loosely based on the connections between neurons in the brain. Neural networks are made up of interconnected layers of algorithms that feed data into each other. They can be trained to carry out specific tasks by modifying the importance attributed to data as it passes between these layers. During the training of these neural networks, the weights attached to data as it passes between layers will continue to be varied until the output from the neural network is very close to what is desired. At that point, the network will have ‘learned’ how to carry out a particular task. The desired output could be anything from correctly labelling fruit in an image to predicting when an elevator might fail based on its sensor data.

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a large number of sizeable layers that are trained using massive amounts of data. These deep neural networks have fuelled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.

There are various types of neural networks with different strengths and weaknesses. Recurrent Neural Networks (RNN) are a type of neural net particularly well suited to Natural Language Processing (NLP) — understanding the meaning of text — and speech recognition, while convolutional neural networks have their roots in image recognition and have uses as diverse as recommender systems and NLP. The design of neural networks is also evolving, with researchers refining a more effective form of deep neural network called long short-term memory or LSTM — a type of RNN architecture used for tasks such as NLP and for stock market predictions – allowing it to operate fast enough to be used in on-demand systems like Google Translate. 
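
As a rough illustration of how such a recurrent network is assembled, the sketch below defines a tiny LSTM-based text classifier in Keras. The vocabulary size, sequence length, and layer sizes are placeholder assumptions; it is a shape sketch rather than a trained, production model.

```python
# Placeholder vocabulary size and sequence length; this shows the structure only.
import tensorflow as tf

vocab_size, seq_len = 10_000, 100  # assumed sizes for a toy text task

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,)),                # a sequence of word ids
    tf.keras.layers.Embedding(vocab_size, 64),       # word ids become dense vectors
    tf.keras.layers.LSTM(64),                        # the recurrent layer reads the sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. positive/negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```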

[Figure: The structure and training of deep neural networks. Image: Nuance]

What are other types of AI?

Another area of AI research is evolutionary computation, which borrows from Darwin’s theory of natural selection. It sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.
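
A toy sketch of that evolutionary idea, assuming a deliberately simple problem (maximising f(x) = -(x - 3)^2): a population of candidate solutions is repeatedly selected, recombined, and mutated. Real genetic algorithms encode far richer candidates, but the loop has the same shape.

```python
# Toy genetic algorithm: evolve a number x that maximises f(x) = -(x - 3)^2.
import random

def fitness(x):
    return -(x - 3.0) ** 2           # best possible value is at x = 3

population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    # select the fitter half of the population as parents
    parents = sorted(population, key=fitness, reverse=True)[:10]
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        child = (a + b) / 2           # "crossover": combine two parents
        child += random.gauss(0, 0.5) # random mutation
        children.append(child)
    population = parents + children

print(max(population, key=fitness))   # should be close to 3.0
```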

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution. It could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.

Finally, there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing that machine to mimic the behaviour of a human expert in a specific domain. An example of these knowledge-based systems might be an autopilot system flying a plane.
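
A minimal, hypothetical illustration of such a rule-based system in Python: each rule maps a condition on the inputs to a decision, mimicking how a human expert in a narrow domain might reason. The rules, thresholds, and field names are invented for illustration and are not taken from any real autopilot.

```python
# Invented rules for illustration only: first matching rule decides the action.
def recommend_action(state):
    rules = [
        (lambda s: s["altitude_ft"] < 1000 and s["descending"], "pull up"),
        (lambda s: s["airspeed_kts"] < 120,                     "increase thrust"),
        (lambda s: s["heading_error_deg"] > 5,                  "adjust heading"),
    ]
    for condition, action in rules:
        if condition(state):
            return action             # first matching rule wins
    return "maintain course"

print(recommend_action({"altitude_ft": 800, "descending": True,
                        "airspeed_kts": 150, "heading_error_deg": 2}))
```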

What is fueling the resurgence in AI?

As outlined above, the biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power, during which time the use of clusters of graphics processing units (GPUs) to train machine-learning systems has become more prevalent. 

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google, Microsoft, and Tesla, have moved to using specialised chips tailored to both running, and more recently, training, machine-learning models.

An example of one of these custom chips is Google’s Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google’s TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are used to train up models for DeepMind and Google Brain, as well as the models that underpin Google Translate, the image recognition in Google Photos, and services that allow the public to build machine-learning models using Google’s TensorFlow Research Cloud. The third generation of these chips was unveiled at Google’s I/O conference in May 2018 and has since been packaged into machine-learning powerhouses called pods that can carry out more than one hundred thousand trillion floating-point operations per second (100 petaflops). These ongoing TPU upgrades have allowed Google to improve its services built on top of machine-learning models, for instance, halving the time taken to train models used in Google Translate.

What are the elements of machine learning?

As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.

Supervised learning

A common technique for teaching AI systems is by training them using many labelled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labelled to indicate whether they contain a dog or written sentences that have footnotes to indicate whether the word ‘bass’ relates to music or a fish. Once trained, the system can then apply these labels to new data, for example, to a dog in a photo that’s just been uploaded.

This process of teaching a machine by example is called supervised learning. Labelling these examples is commonly carried out by online workers employed through platforms like Amazon Mechanical Turk.
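A minimal sketch of supervised learning, assuming the scikit-learn library is installed and using its built-in labelled digits dataset as a stand-in for manually annotated data:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Labelled examples: each 8x8 image of a handwritten digit comes with its label (0-9).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit a simple classifier on the labelled training data...
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# ...then apply the learned labels to data the model has never seen.
predictions = model.predict(X_test)
print(f"Accuracy on unseen examples: {accuracy_score(y_test, predictions):.2%}")
```

In a real project the digits dataset would be replaced with your own annotated examples, but the fit-then-predict pattern stays the same.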

Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively – although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size — Google’s Open Images Dataset has about nine million images, while its labelled video repository YouTube-8M links to seven million labelled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50 000 people — most of whom were recruited through Amazon Mechanical Turk — who checked, sorted, and labelled almost one billion candidate pictures.

Having access to huge labelled datasets may also prove less important than access to large amounts of computing power in the long run.

In recent years, Generative Adversarial Networks (GANs) have been used in machine-learning systems that only require a small amount of labelled data alongside a large amount of unlabelled data, which, as the name suggests, requires less manual work to prepare.

This approach could allow for the increased use of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.
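GANs themselves are a heavier deep-learning technique, but the underlying semi-supervised idea can be sketched with scikit-learn’s self-training wrapper, where unlabelled examples are marked with -1 and the model pseudo-labels them as training proceeds; the 10% labelling budget below is an arbitrary choice for the sketch.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Pretend we could only afford to label 10% of the data: the rest gets the
# "unlabelled" marker -1 that scikit-learn's semi-supervised tools expect.
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) > 0.10] = -1

# Self-training: fit on the few labels, pseudo-label confident predictions,
# and repeat, so the unlabelled pool still contributes to the final model.
model = SelfTrainingClassifier(SVC(probability=True), threshold=0.9)
model.fit(X, y_partial)

print(f"Accuracy against the full label set: {model.score(X, y):.2%}")
```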

Unsupervised learning

In contrast, unsupervised learning tasks algorithms with identifying patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn’t set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities, for example, Google News grouping together stories on similar topics each day.
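A minimal clustering sketch, reusing the fruit-weight example from above with invented numbers; no labels are supplied and k-means simply groups similar weights (it assumes scikit-learn is installed).

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented fruit weights in grams: a handful of cherries, apples and melons.
weights = np.array([[8], [10], [12], [150], [170], [160], [1500], [1700]])

# Ask for three clusters; no labels are ever provided.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(weights)

for weight, cluster in zip(weights.ravel(), kmeans.labels_):
    print(f"{weight:>5} g -> cluster {cluster}")
```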

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick. In reinforcement learning, the system attempts to maximise a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind’s Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on the screen.

By also looking at the score achieved in each game, the system builds a model of which action will maximise the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.

The approach is also used in robotics research, where reinforcement learning can help teach autonomous robots the optimal way to behave in real-world environments.
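A deep Q-network is too large for a short snippet, but the trial-and-error loop at its core can be sketched with tabular Q-learning on a toy corridor environment; the states, rewards and hyperparameters below are invented for illustration and are not DeepMind’s setup.

```python
import random

N_STATES = 6          # a corridor: states 0..5, with a reward at the far end
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.1, 500

q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate towards reward plus discounted future value.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
        state = next_state

# The learned greedy policy: the best action from each non-terminal state.
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)})
```

After training, the policy printed at the end should recommend moving right (+1) from every state, which is the optimal behaviour in this toy corridor.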


Many AI-related technologies are approaching, or have already reached, the “peak of inflated expectations” in Gartner’s Hype Cycle, with the backlash-driven ‘trough of disillusionment’ lying in wait.

Image: Gartner / Annotations: ZDNet

Which are the leading firms in AI?

With AI playing an increasingly major role in modern software and services, each major tech firm is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaFold and AlphaGo systems, that has made the biggest impact on public awareness of AI.

Which AI services are available?

All of the major cloud platforms — Amazon Web Services, Microsoft Azure and Google Cloud Platform — provide access to GPU arrays for training and running machine-learning models, with Google also gearing up to let users use its Tensor Processing Units — custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data and prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google offering a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models without requiring the user to have any machine-learning expertise.

Cloud-based, machine-learning services are constantly evolving. Amazon now offers a host of AWS offerings designed to streamline the process of training up machine-learning models and recently launched Amazon SageMaker Clarify, a tool to help organizations root out biases and imbalances in training data that could lead to skewed predictions by the trained model.

For those firms that don’t want to build their own machine-learning models but instead want to consume AI-powered, on-demand services, such as voice, vision, and language recognition, Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile, IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella. The company invested $2bn in buying The Weather Channel to unlock a trove of data to augment its AI services.

Which of the major tech firms is winning the AI race?


Image: Jason Cipriani/ZDNet

Internally, each tech giant and others such as Facebook use AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam — the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple’s Siri, Amazon’s Alexa, the Google Assistant, and Microsoft Cortana.

A huge amount of tech goes into developing these assistants, which rely heavily on voice recognition and natural-language processing and need an immense corpus to draw upon when answering queries.

But while Apple’s Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space — Google Assistant with its ability to answer a wide range of queries and Amazon’s Alexa with the massive number of ‘Skills’ that third-party devs have created to add to its capabilities.

Over time, these assistants are gaining abilities that make them more responsive and better able to handle the types of questions people ask in regular conversations. For example, Google Assistant now offers a feature called Continued Conversation, where a user can ask follow-up questions to their initial query, such as ‘What’s the weather like today?’, followed by ‘What about tomorrow?’ and the system understands the follow-up question also relates to the weather.

These assistants and associated services can also handle far more than just speech, with the latest incarnation of Google Lens able to translate text in images and allow you to search for clothes or furniture using photos.

Despite being built into Windows 10, Cortana has had a particularly rough time of late, with Amazon’s Alexa now available for free on Windows 10 PCs. At the same time, Microsoft revamped Cortana’s role in the operating system to focus more on productivity tasks, such as managing the user’s schedule, rather than more consumer-focused features found in other assistants, such as playing music.  

Which countries are leading the way in AI?

It’d be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from e-commerce to autonomous driving. As a country, China is pursuing a three-step plan to turn AI into a core industry, one that was due to be worth 150 billion yuan ($22bn) by the end of 2020, on the way to becoming the world’s leading AI power by 2030.

Baidu has invested in developing self-driving cars, powered by its deep-learning algorithm, Baidu AutoBrain. After several years of tests, its Apollo self-driving car has racked up more than three million miles of driving and has carried over 100 000 passengers in 27 cities worldwide.

Baidu launched a fleet of 40 Apollo Go Robotaxis in Beijing this year. The company’s founder has predicted that self-driving vehicles will be common in China’s cities within five years. 

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to 1 in China’s favor.


Baidu’s self-driving car, a modified BMW 3 series.

Image: Baidu

How can I get started with AI?

While you could buy a moderately powerful Nvidia GPU for your PC — somewhere around the Nvidia GeForce RTX 2060 or faster — and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on-demand.
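As one hedged illustration of the on-demand route, the sketch below sends a local photo to AWS Rekognition via the boto3 library and prints the labels it returns; it assumes an AWS account with credentials already configured and the boto3 package installed, and "holiday-photo.jpg" is a placeholder file name.

```python
import boto3

# Assumes AWS credentials are already configured locally (e.g. via `aws configure`).
rekognition = boto3.client("rekognition", region_name="us-east-1")

# "holiday-photo.jpg" is a placeholder: any local JPEG or PNG will do.
with open("holiday-photo.jpg", "rb") as image_file:
    response = rekognition.detect_labels(
        Image={"Bytes": image_file.read()},
        MaxLabels=5,
        MinConfidence=80,
    )

# Print the labels the vision service recognised in the image.
for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```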

How will AI change the world?

Robots and driverless cars

The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, it is helping robots move into new areas such as self-driving cars and delivery robots, and helping them learn new skills. At the start of 2020, General Motors and Honda revealed the Cruise Origin, an electric-powered driverless car, while Waymo, the self-driving group inside Google parent Alphabet, recently opened its robotaxi service to the general public in Phoenix, Arizona, offering a service covering a 50-square-mile area of the city.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone’s voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people’s images, with tools already being created to splice famous faces into adult films convincingly.

Speech and language recognition

Machine-learning systems have helped computers recognise what people are saying with an accuracy of almost 95%. Microsoft’s Artificial Intelligence and Research group also reported it had developed a system that transcribes spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99% accuracy, expect speaking to computers to become increasingly common alongside more traditional forms of human-machine interaction.

Meanwhile, OpenAI’s language prediction model GPT-3 recently caused a stir with its ability to create articles that could pass as being written by a human.

Facial recognition and surveillance

In recent years, the accuracy of facial recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99% accuracy, provided the face is clear enough in the video. While police forces in western countries have generally only trialled facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior, and have also expanded the use of facial-recognition glasses by police.

Although privacy regulations vary globally, it’s likely this more intrusive use of AI technology — including AI that can recognize emotions — will gradually become more widespread. However, a growing backlash and questions about the fairness of facial recognition systems have led to Amazon, IBM and Microsoft pausing or halting the sale of these systems to law enforcement.

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in x-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs. The recent breakthrough by Google’s AlphaFold 2 machine-learning system is expected to reduce the time taken during a key step when developing new drugs from months to hours.

There have been trials of AI-related technology in hospitals across the world. These include IBM’s Watson clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK’s National Health Service, where they will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.

Reinforcing discrimination and bias 

A growing concern is the way that machine-learning systems can codify the human biases and societal inequities reflected in their training data. These fears have been borne out by multiple examples of how a lack of variety in the data used to train such systems has negative real-world consequences. 

In 2018, an MIT and Microsoft research paper found that facial recognition systems sold by major tech companies suffered from error rates that were significantly higher when identifying people with darker skin, an issue attributed to training datasets being composed mainly of white men.

Another study a year later highlighted that Amazon’s Rekognition facial recognition system had issues identifying the gender of individuals with darker skin, a charge that was challenged by Amazon executives, prompting one of the researchers to address the points raised in the Amazon rebuttal.

Since the studies were published, many of the major tech companies have, at least temporarily, ceased selling facial recognition systems to police departments.

Another example of insufficiently varied training data skewing outcomes made headlines in 2018 when Amazon scrapped a machine-learning recruitment tool that identified male applicants as preferable. Today research is ongoing into ways to offset biases in self-learning systems.

AI and global warming

As the size of machine-learning models and the datasets used to train them grows, so does the carbon footprint of the vast compute clusters that shape and run these models. The environmental impact of powering and cooling these compute farms was the subject of a paper by the World Economic Forum in 2018. One 2019 estimate was that the power required by machine-learning systems is doubling every 3.4 months.

The issue of the vast amount of energy needed to train powerful machine-learning models was brought into focus recently by the release of the language prediction model GPT-3, a sprawling neural network with some 175 billion parameters. 

While the resources needed to train such models can be immense, and largely only available to major corporations, once trained the energy needed to run these models is significantly less. However, as demand for services based on these models grows, power consumption and the resulting environmental impact again become an issue.

One argument is that the environmental impact of training and running larger models needs to be weighed against the potential machine learning has to have a significant positive impact, for example, the more rapid advances in healthcare that look likely following the breakthrough made by Google DeepMind’s AlphaFold 2.

Will AI kill us all?

Again, it depends on who you ask. As AI-powered systems have grown more capable, so warnings of the downsides have become more dire.

Tesla and SpaceX CEO Elon Musk has claimed that AI is a “fundamental risk to the existence of human civilization”. As part of his push for stronger regulatory oversight and more responsible research into mitigating the downsides of AI, he set up OpenAI, a non-profit artificial intelligence research company that aims to promote and develop friendly AI that will benefit society as a whole. Similarly, the esteemed physicist Stephen Hawking warned that once a sufficiently advanced AI is created, it will rapidly advance to the point at which it vastly outstrips human capabilities, a phenomenon known as the singularity, which could pose an existential threat to the human race.

Yet, the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to some AI researchers.

Chris Bishop, Microsoft’s director of research in Cambridge, England, stresses how different the narrow intelligence of today’s AI is from the general intelligence of humans, dismissing worries about “Terminator and the rise of the machines and so on” as “utter nonsense”, and adding that, at best, such discussions are decades away.

Will an AI steal your job?

Image: Amazon

Artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-future possibility.

While AI won’t replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn’t have the potential to impact. As AI expert Andrew Ng puts it: “many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at automating routine, repetitive work”, saying he sees a “significant risk of technological unemployment over the next few decades”.

The evidence of which jobs will be supplanted is starting to emerge. There are now 27 Amazon Go stores in the US, cashier-free supermarkets where customers just take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Amazon again is leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves of products to human pickers who select items to be sent out. Amazon has more than 200 000 bots in its fulfilment centers, with plans to add more. But Amazon also stresses that as the number of bots has grown, so has the number of human workers in these warehouses. However, Amazon and small robotics firms are working on automating the remaining manual jobs in the warehouse, so it’s not a given that manual and robotic labor will continue to grow hand-in-hand.

Fully autonomous self-driving vehicles aren’t a reality yet, but by some predictions, the self-driving trucking industry alone is poised to take over 1.7 million jobs in the next decade, even without considering the impact on couriers and taxi drivers.

Yet, some of the easiest jobs to automate won’t even require robotics. At present, millions of people work in administration, entering and copying data between systems, and chasing and booking appointments for companies. As software gets better at automatically updating systems and flagging the important information, the need for administrators will fall.

As with every technological shift, new jobs will be created to replace those lost. However, what’s uncertain is whether these new roles will be created rapidly enough to offer employment to those displaced and whether the newly unemployed will have the necessary skills or temperament to fill these emerging roles.

Not everyone is a pessimist. For some, AI is a technology that will augment rather than replace workers. Not only that, but they argue there will be a commercial imperative to not replace people outright, as an AI-assisted worker — think a human concierge with an AR headset that tells them exactly what a client wants before they ask for it — will be more productive or effective than an AI working on its own.

Among AI experts, there’s a broad range of opinions about how quickly artificially intelligent systems will surpass human capabilities.

Oxford University’s Future of Humanity Institute asked several hundred machine-learning experts to predict AI capabilities over the coming decades.

Notable dates included AI writing essays that could pass for being written by a human by 2026, truck drivers being made redundant by 2027, AI surpassing human capabilities in retail by 2031, writing a best-seller by 2049, and doing a surgeon’s work by 2053.

They estimated there was a relatively high chance that AI beats humans at all tasks within 45 years and automates all human jobs within 120 years.

See More:

IBM adds Watson tools for reading comprehension, FAQ extraction.

Related coverage

How ML and AI will transform business intelligence and analytics
Machine learning and artificial intelligence advances in five areas will ease data prep, discovery, analysis, prediction, and data-driven decision making.

Report: Artificial intelligence is creating jobs, generating economic gains
A new study from Deloitte shows that early adopters of cognitive technologies are positive about their current and future roles.

AI and jobs: Where humans are better than algorithms, and vice versa
It’s easy to get caught up in the doom-and-gloom predictions about artificial intelligence wiping out millions of jobs. Here’s a reality check.

How artificial intelligence is unleashing a new type of cybercrime (TechRepublic)
Rather than hiding behind a mask to rob a bank, criminals are now hiding behind artificial intelligence to make their attack. However, financial institutions can use AI as well to combat these crimes.

Elon Musk: Artificial intelligence may spark World War III (CNET)
The serial CEO is already fighting the science fiction battles of tomorrow, and he remains more concerned about killer robots than anything else.


Source: https://www.zdnet.com/article/what-is-ai-heres-everything-you-need-to-know-about-artificial-intelligence/#ftag=RSSbaffb68


HRTech

Alphabet launches company to make industrial robots more adaptable


Alphabet’s X, its R&D lab, announced Friday morning that its next big bet is in industrial robotics. Its new early-stage company Intrinsic is a robotics software and AI company that wants to help robots sense and learn, thereby making them more adaptable to different environments. 

“The surprisingly manual and bespoke process of teaching robots how to do things, which hasn’t changed much over the last few decades, is currently a cap on their potential to help more businesses,” Wendy Tan-White, Intrinsic’s CEO, wrote in a blog post. “Specialist programmers can spend hundreds of hours hard coding robots to perform specific jobs, like welding two pieces of metal, or gluing together an electronics case. And many dexterous and delicate tasks, like inserting plugs or moving cords, remain unfeasible for robots because they lack the sensors or software needed to understand their physical surroundings.”

After developing its technology for five and a half years at X, Intrinsic is launching as an independent Alphabet company to further build and validate its product. The company is currently looking for partners in the automotive, electronics and healthcare industries that are already using industrial robotics.

So far, Tan-White wrote, the company has been testing software that uses techniques like automated perception, deep learning, reinforcement learning, motion planning, simulation and force control.

Tan-White joined X two and a half years ago after serving as a partner at the capital growth fund BGF Ventures and as a general partner at Entrepreneur First, a global technology talent investor in AI, robotics and biotech. Early in her career, she co-founded and was CEO of Moonfruit, the world’s first SaaS website-builder platform. She also helped launch Zopa.com, the first European peer-to-peer lending site, and Egg.com, the UK’s first internet bank.

Intrinsic’s team also includes leading roboticists and AI experts, such as CTO Torsten Kroeger, Engelberger Award winner Martin Haegele, robotics innovator Rainer Bischoff, and reinforcement learning expert Stefan Schaal.

Other companies that have spun out of Alphabet’s X lab (which was the Google X lab, prior to Alphabet’s formation) include the autonomous car company Waymo and Verily Life Sciences.



Source: https://www.zdnet.com/article/alphabet-launches-company-to-make-industrial-robots-more-adaptable/#ftag=RSSbaffb68
