Cloudflare’s new serverless platform lets its Workers run for 15 minutes before giving them the boot

Interview Cloudflare CEO Matthew Prince doubts developers care all that much about speed.

It’s nice to have, he told The Register in a phone interview about his company’s latest serverless product, Cloudflare Workers Unbound. But, he added, it’s not the most important thing in an edge computing platform.

“I think we were dead wrong in the value proposition of the platform,” Prince said. “We thought that speed was the most important thing. Speed is actually the least important thing. A lot of the world is getting the value proposition of edge computing wrong.”

What matters more? Cost, he said, then ease of use. And finally: compliance.

Compliance, he conceded, may not matter much for individual developers, but for large companies it’s a big deal.

“It’s going to turn out that regulatory compliance is the most important thing of all,” he insisted.

The boss … Matthew Prince. Source: Cloudflare

The reason, Prince explained, is that governments around the world are imposing restrictions on technology companies, for example on where they can store their data.

“If you work at a big bank or insurance company or healthcare company or consumer products brand, if you’re the CIO or general counsel, what you’re terrified by is that increasingly countries are saying the data from their users has to remain local,” said Prince. “If you’re running all your instances from AWS East, that’s a problem.”

Edge computing – where processing happens at the edge of the network, close to the client, instead of in a distant data center that may be in another country – helps address that concern.

“What we’re hearing from our largest customers is this is the real killer app of edge computing,” said Prince. “It’s that it will be able to deal with an increasingly complicated regulatory environment.”

Cloudflare debuted its initial serverless product, Cloudflare Workers, in 2017. It allows developers to run JavaScript or Rust code against the Service Worker API.

Service Workers are essentially proxy servers that mediate between web applications, the network, and the browser. They’re used in Progressive Web Apps (PWAs), for example, to intercept requests to the application server when the PWA is running in the user’s browser without a network connection.
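
In the browser, the pattern looks roughly like this. A minimal sketch of a PWA’s service worker, with the cache fallback and response text invented for illustration: it proxies every request, trying the network first and serving a cached copy when the connection is down.

```typescript
// sw.ts – a hypothetical browser Service Worker for a PWA.
// It sits between the app and the network, answering fetches even when offline.
self.addEventListener('fetch', (event: any) => {
  const fetchEvent = event as FetchEvent;
  fetchEvent.respondWith(
    fetch(fetchEvent.request).catch(async () => {
      // Network unreachable: fall back to whatever was cached earlier.
      const cached = await caches.match(fetchEvent.request);
      return cached ?? new Response('You appear to be offline', { status: 503 });
    })
  );
});
```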

Cloudflare Service Workers run on the network edge rather than in the browser. They’re used by developers to handle HTTP requests in serverless applications, which are designed to start up, respond to requests, then shut down until called upon again.
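
Cloudflare’s edge Workers use the same event model. A minimal sketch, assuming the service-worker-style syntax the platform offered at the time; the route and response are made up for illustration.

```typescript
// A minimal Cloudflare Worker: answer simple requests at the nearest edge
// location and pass everything else through to the origin. It starts, handles
// the request, then idles until it is invoked again.
addEventListener('fetch', (event: any) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (url.pathname === '/ping') {
    // Served entirely at the edge, no origin round trip.
    return new Response('pong', { headers: { 'content-type': 'text/plain' } });
  }
  return fetch(request); // anything else goes to the origin as usual
}
```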

A related product, a key-value store called Workers KV, was introduced in 2018.
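
Workers read and write that store from inside the same request handlers. A sketch, assuming a KV namespace has been bound to the Worker under the hypothetical name VISITS:

```typescript
// VISITS is a hypothetical Workers KV namespace binding, declared in the
// Worker's configuration. Edge reads are fast but eventually consistent.
declare const VISITS: {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
};

async function recordVisit(path: string): Promise<number> {
  const current = await VISITS.get(path);               // null on first visit
  const next = (current ? parseInt(current, 10) : 0) + 1;
  await VISITS.put(path, String(next));                 // replicated globally
  return next;
}
```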

On Monday, the firm plans to announce Cloudflare Workers Unbound as a private beta, meaning developers have to sign up to be considered for admission.

Workers Unbound improves on Cloudflare Service Workers, now renamed Workers Bundled, by vastly expanding the execution time limit from 10ms (Free tier) and 50ms (on the evidently misnamed Unlimited tier) to 15 minutes.

What’s more, Cloudflare is rolling out serverless improvements for both Workers Unbound and Workers Bundled such as instant cold starts: one of the major challenges of serverless platforms is that it generally takes several hundred milliseconds to load application code into memory and get it running.

“We did something pretty clever,” said Prince. “The first thing that has to happen when you connect is the TLS handshake. The very first request as part of the handshake, we use that as a hint there’s going to be a request. During the time that handshake happens, we pre-warm the Worker so it loads instantly. Unless someone invents a time machine, we don’t think anyone will have a faster start time.”

Both services promise unthrottled CPUs – other serverless platforms dial down their CPUs – and rapid updates that go live in 15 seconds rather than minutes. And both are getting expanded programming language support. Instead of just JavaScript, C/C++, and Rust, developers will be able to write Cloudflare Worker code in Python, Go, Scala, Kotlin, and COBOL. There’s also a way for developers to add other preferred languages.

“If you want to add Lua, you can do that,” said Prince.

Then there’s the price, which is broken down by resource consumption (data transfer, execution time, and requests) with Workers Unbound, and combined into a single figure with Workers Bundled.

Cloudflare claims Workers Unbound costs 75 per cent less than AWS Lambda, 52 per cent less than Google Cloud Functions, and 24 per cent less than Microsoft Azure Functions.

Ninety per cent of the savings, said Prince, come from building a sandboxing platform based on Isolates that is more efficient with underlying computing resources than VMs or containers. The other 10 per cent, he said, comes from lower operating costs, a consequence of a symbiotic relationship with ISPs around the world that provide access to their data center infrastructure.

Cloudflare’s serverless sandboxing relies on the V8 JavaScript engine, “one of the most battle-tested, bug-bountied codebases out there,” said Prince, who also noted Cloudflare’s platform had been reviewed by some of the researchers involved in uncovering the Spectre processor flaws.

Cloudflare, he said, has done a series of mitigations to stay in front of Spectre-style timing attacks. “Because we control the timers, we can stop them to make sure code isn’t being used to exfiltrate data,” he explained.
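
As an illustration of what “stopping the timers” would look like from inside an isolate (a hedged sketch of the observable behaviour, not Cloudflare’s published specification): if the runtime only advances the clock on I/O, a busy loop cannot measure its own duration, which denies Spectre-style gadgets the high-resolution timer they need.

```typescript
// Illustrative only: on a runtime that freezes Date.now() during CPU-bound
// work, code in the sandbox cannot build the fine-grained timer that
// speculative-execution side channels rely on.
function canObserveOwnDuration(): boolean {
  const before = Date.now();
  let sum = 0;
  for (let i = 0; i < 10_000_000; i++) { sum += i; } // pure CPU, no I/O
  const after = Date.now();
  return after !== before; // expected to be false if the clock is frozen
}
```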

Prince expects there will be naysayers, just as VM fans once said containers couldn’t be secure. “More and more, platforms will offer an Isolates approach,” he said.

To participate in the Workers Unbound private beta, you can sign up on Cloudflare’s website. ®

Source: https://go.theregister.com/feed/www.theregister.com/2020/07/27/cloudflare_serverless/

Facebook is leaky, creepy, and trashy. Now it wants to host some of your customer data

Facebook wants to host some of your customer data, an offer that hurts its own partner community.

The antisocial giant says it will host data generated by WhatsApp, specifically when used alongside the messaging service’s Business API. That interface lets businesses manage messages to and from customers, and integrate e-commerce and other apps into the messaging platform. Facebook lets partners implement the API and choose where data is stored.

The Social Network™ on Thursday announced “a new way for businesses to store and manage their WhatsApp chats with customers using Facebook’s secure hosting infrastructure which will roll out early next year.”

Facebook says that for customers who take up its offer, it “will make it easier to onboard to WhatsApp Business API … respond to WhatsApp messages faster, keep their inventory up to date and sell products through chats.”

But the Silicon Valley giant also says that using a third party – even Facebook – breaks end-to-end encryption.

“If a business chooses to use a third party vendor to operate the WhatsApp Business API on their behalf, we do not consider that to be end-to-end encrypted since the business you are messaging has chosen to give a third-party vendor access to those messages,” Facebook said. “This will also be the case if that third-party vendor is Facebook.”

Facebook will therefore disclose when it is hosting chats on behalf of a customer, albeit without spelling out that end-to-end encryption no longer applies.

The web goliath said it will also “expand our partnerships with business solution providers we’ve worked with over the last two years,” so while it says it will offer a better on-boarding experience, it’s throwing them another unspecified bone.

The Social Network™ said its hosting services will emerge in coming months, which gives us all plenty of time to ponder whether you want to get into business with a corporation that has failed to suppress misinformation, allowed live-streaming of a racist terror attack, leaked personal data, and taken years to figure out that Holocaust denial has no place in public conversations. ®

Source: https://go.theregister.com/feed/www.theregister.com/2020/10/23/facebook_whatsapp_business_api_hosting/

How to get started with Intel Optane

Sponsored If you take your data centre infrastructure seriously, you’ll have taken pains to construct a balanced architecture of compute, memory and storage precisely tuned to the needs of your most important applications.

You’ll have balanced the processing power per core with the appropriate amount of memory, and ensured that both are fully utilised by doing all you can to get data off your storage subsystems and to the CPU as quickly as possible.

Of course, you’ll have made compromises. Although the proliferation of cores in today’s processors puts an absurd amount of compute power at your disposal, DRAM is expensive, and can only scale so far. Likewise, in recent years you’ll have juiced up your storage with SSDs, possibly going all flash, but there are always going to be bottlenecks en route to those hungry processors. You might have stretched to some NVMe SSDs to get data into compute quicker, but even when we’re pushing against the laws of physics, we are still constrained by the laws of budgets. This is how it’s been for over half a century.

So, if someone told you that there was a technology that could offer the benefits of DRAM, but with persistence, and which was also cheaper than current options, your first response might be a quizzical, even sceptical, “really”. Then you might lean in, and ask “really?”

That is the promise of Intel® Optane™, which can act as memory or as storage, potentially offering massive price-performance boosts on both scores. It can also drastically improve the utilisation of those screamingly fast, and expensive, CPUs.

So, what is Optane™? And where does it fit into your corporate architecture?

Intel describes Optane™ as persistent memory, offering non-volatile high capacity with low latency at near DRAM performance. It’s based on the 3D XPoint™ technology developed by Intel and Micron Technology. It is byte and bit addressable, like DRAM. At the same time, it offers a non-volatile storage medium without the latency and endurance issues associated with regular flash. So, the same media is available in both SSDs, for use as storage on the NVMe bus, and as DIMMs for use as memory, with up to 512GB per module, double that of current conventional memory.

Platform

It’s also important to understand what Intel means when it talks about the Optane™ Technology platform. This encompasses both forms of Optane™ – memory and storage – together with the Intel® advanced memory controller and interface hardware and software IP. This opens up the possibility not just of speeding up hardware operations, but of optimising your software to make the most efficient use of the hardware benefits.

So where will Optane™ help you? Let’s assume that the raw compute issue is covered, given that today’s data centre is running CPUs with multiple cores. The problem is more about ensuring those cores are fully utilised. Invariably they are not, simply because the system cannot get data to them fast enough.

DRAM has not advanced at the same rate as processor technology, as Alex Segeda, Intel’s EMEA business development manager for memory and storage, explains, both in terms of capacity growth and in providing persistency. The semiconductor industry has pretty much exhausted every avenue available when it comes to improving price per GB. When it comes to the massive memory pools needed in powerful systems, he explains, “It’s pretty obvious that DRAM becomes the biggest contributor to the cost of the hardware…in the average server it’s already the biggest single component.”

Meanwhile, flash – specifically NAND – has become the default storage technology in enterprise servers, and manufacturers have tried everything they can to make it cheaper, denser and more affordable. Segeda compares today’s SSDs to tower blocks – great for storing something, whether data or people, but problems arise when you need to get a lot of whatever you’re storing in or out at the same time. While the cost of flash has gone down, its endurance and performance, especially on write operations, mean “it’s not fit for the purpose of solving the challenge of having a very fast, persistent storage layer”.

Moreover, Segeda maintains, many people are not actually aware of these issues. “They’re buying SSDs, often SAS SSDs, and they think it is fast enough. It’s not. You are most likely not utilising your hardware to the full potential. You paid a few thousand dollars for your compute, and you’re just not feeding it with data.”

To highlight where those chokepoints are in typical enterprise workloads, Intel has produced a number of worked examples. For example, when a 375GB Optane™ SSD DC P4800X is substituted for a 2TB Intel® SSD DC P4500 as the storage tier for a MySQL installation running 80 virtual cores, CPU utilisation jumps from 20 per cent to 70 per cent, while transaction throughput per second is tripled, and latency drops from over 120ms to around 20ms.

This latency reduction, says Segeda, “is what matters if you’re doing things like ecommerce, high frequency trading.”

The same happens when running virtual machines, using Optane™ in the caching tier for the disk groups in a VMware vSAN cluster, says Segeda. “We’re getting half of the latency and we’re getting double the IO from storage. It means I can have more virtual machines accessing my storage at the same time. Right on the same hardware. Or maybe I can have less nodes in my cluster, just to deliver the same performance.”

A third example uses Intel® Optane™ DC Persistent Memory as a system memory extension in a Redis installation. The demo compares a machine with 1.5TB of DRAM against a machine with 192GB of DRAM plus 1.5TB of DCPMM. The latter delivered the same degree of CPU utilisation, with up to 90 per cent of the throughput efficiency of the DRAM-only server.

Real-time analytics

These improvements hold out the prospect of cramming more virtual machines or containers onto the same server, says Segeda, or keeping more data closer to the CPU to allow real-time analytics. This is important because while modern applications generate more and more data, only a “small, small fraction” is currently meaningfully analysed, says Segeda. “If you’re not able to do that, and get that insight, what’s the point of capturing the data? For compliance?” Clearly, compliance is important, but it doesn’t help companies monetise the data they’re generating or give them an edge over rivals.

The prospect of opening up storage and memory bottlenecks will obviously appeal, whether your infrastructure is already straining, or because while things are ticking over right this minute, you know that memory and storage demands are only likely to go in one direction in future. So, how do you work out how and where Optane™ will deliver the most real benefit for your own infrastructure?

On a practical level, the first step is to identify where the problems are. Depending on your team’s engineering expertise, this could be something you can do in-house, using your existing monitoring tools. Intel® also provides a utility called Storage Performance Snapshot to run traces on your infrastructure and visualise the data, highlighting where data flow is being choked off.

Either way, you’ll want to ask yourself some fundamental questions, says Segeda: “What’s your network bandwidth? Is it holding you back? What’s your storage workload? What’s your CPU utilisation? Is the CPU waiting for storage? Is the CPU waiting for network? [Then] you can start making very meaningful assumptions.” This should give you an indication of whether expanding the memory pool, or accelerating your storage, or both will help.

Next steps

As for practical next steps, Segeda suggests talking through options with your hardware suppliers, and Intel account manager if you have one, to take a holistic view of the problem.

Simply retrofitting your existing systems can be an option, he says. Add in an Optane™ SSD on NVMe, and you have a very fast storage device. Optane™ memory can be added to the general memory pool, giving memory expansion at a relatively lower cost.

However, Segeda says, “You can have a better outcome if you do some reengineering, and explicit optimization.”

Using Optane™ as persistent memory requires significant modification to the memory controller, something that is currently offered in Intel® Second Generation Xeon® Scalable Gold and Platinum processors. This enables App Direct Mode, which allows suitably modified applications to be aware of memory persistence. So, for example, Segeda explains, an in-memory database like SAP Hana can exploit that persistence, meaning it does not have to constantly reload data.

Clearly, an all-new installation raises the option of a more efficient setup, with software optimised to take full advantage of the infrastructure, and with fewer but more powerful compute nodes. All of which offers the potential to save not just on DRAM and storage, but on electricity, real estate, and software licences.

For years, infrastructure and software engineers and data centre architects have had to delicately balance compute, storage, memory, and network. With vast pools of persistent memory and faster storage now in reach, at lower cost, that juggling act may just be about to get much, much easier.

Sponsored by Intel®

Source: https://go.theregister.com/feed/www.theregister.com/2020/10/22/get_started_with_intel_optane/

The hills are alive with the sound of Azure as Microsoft pledges Austrian bit barns

Microsoft has announced yet another cloud region, this time in Austria.

As is ever the case, Microsoft has not said where the facility will be, detailed its disposition, or revealed when it will open. But it has said that the facility will bring Azure, Microsoft 365, Dynamics 365 and the Power Platform to Austrian soil.

It will be Microsoft’s 64th Azure region.

Local politicians all lauded the decision, suggesting it will bring the land of Mozart, Strauss, Freud, radio pioneer Hedy Lamarr and strudel roaring into the digital age and let a thousand startups bloom.

Microsoft has also committed to work with Austria’s Ministry of Digitalization to launch a “Center of Digital Excellence”, establish a security network with business, academia and government, and train public servants and private citizens alike in cybersecurity.

Here at The Register we think an Austrian cloud also creates a terrific chance for some show tunes, as the new facility will mean the hills are alive with the sound of Azure. The improved resilience that a full Microsoft bit barn brings will mean salespeople can break into a chorus of “You are six nines, I am seven nines.”

If that resilience proves as elusive as an Edelweiss, we can imagine spontaneous outbursts of “So Long, Farewell”.

We’ll leave it to readers to decide how to deal with “The Lonely Goatherd” and its frequent yodeling interjections. ®

Source: https://go.theregister.com/feed/www.theregister.com/2020/10/22/azure_austria/
