TriggerMesh hooks up with AWS EventBridge to connect ‘virtually any application’ with cloudy service

TriggerMesh has introduced an integration with AWS EventBridge, now in preview, that enables virtually any application, on-premises or elsewhere, to fire events in the service for automated workflows.

The way EventBridge works is that it receives events, processes them according to rules the developer defines, and then forwards them to targets such as functions running in AWS Lambda, logs in AWS CloudWatch, or a queue in AWS Simple Queue Service (SQS). The source of the event is either another AWS service, with many predefined events such as calls to the S3 (Simple Storage Service) API, or an integration with a third-party service such as Datadog, MongoDB, or Zendesk. EventBridge also has a PutEvents API that developers can call from custom code.
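
For a sense of what calling that API directly involves, here is a minimal sketch using Python and boto3, assuming AWS credentials are already configured; the bus name, source, detail-type and payload are invented for illustration.

```python
# Minimal sketch of pushing a custom event to EventBridge via the PutEvents API.
# The bus name, source, detail-type and payload below are made-up examples.
import json
import boto3

events = boto3.client("events")  # uses your configured AWS credentials and region

response = events.put_events(
    Entries=[
        {
            "EventBusName": "default",           # or a custom/partner event bus
            "Source": "com.example.orders",      # hypothetical application source
            "DetailType": "OrderPlaced",         # free-form event classification
            "Detail": json.dumps({"orderId": "1234", "total": 42.50}),
        }
    ]
)

# Rejected entries are reported in the response rather than raised as errors.
print(response["FailedEntryCount"], response["Entries"])
```

Rules on the bus then match events on fields such as source and detail-type and route them to the configured targets.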

TriggerMesh, launched in November 2018, is an independent cloud service which is in concept somewhat similar to EventBridge, but instead of being restricted to targeting only AWS services, it can target multiple platforms including Azure Functions, Google Cloud Run, Kubernetes, Apache Kafka and OpenShift, as well as AWS services such as Lambda. TriggerMesh itself is built on Kubernetes, Knative and Istio.

In AWS EventBridge, the list of pre-configured third-party services which can serve as sources for events is relatively short. TriggerMesh has introduced a new integration with its own cloud service, which means that any event source TriggerMesh supports can now be forwarded to EventBridge. This also means that existing TriggerMesh users can now target any AWS service which EventBridge supports.

Why use TriggerMesh, when developers can already call the PutEvents API from any application? “We do believe that many folks end up doing just what you suggest, adding the integration to each piece of code you are executing,” TriggerMesh co-founder and CEO Mark Hinkle told The Register.

“We believe this is redundant and only allows for a hard-coded integration to a single service, in this case EventBridge. However, by using TriggerMesh it would give you the option to trigger workloads on any cloud native architecture: Azure Functions, Google Cloud Functions, OpenShift Serverless, or the Kubernetes flavour of your choice including Rancher, OpenShift Container Engine, Google Kubernetes Engine, and/or Amazon EKS. Or you could create application flows to other services that aren’t part of AWS for example, populating Confluent with event data or triggering a function on Twilio without a rewrite to your code.”

TriggerMesh has particular value if you are working across multiple clouds or diverse services. “For example, a new line added to a database may trigger an ETL [Extract, Transform, Load] function on Amazon and output the results to S3,” said Hinkle. “However, it could be used to notify a supply chain every time a new record is entered in the database. Or it could do both simultaneously.”
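
The kind of flow Hinkle describes boils down to a small event-driven function: a record-change event arrives, gets transformed, and the result is written to S3. The sketch below is illustrative only, not TriggerMesh's or AWS's actual wiring; the bucket, field names and event shape are invented.

```python
# Illustrative Lambda-style handler for the flow described above: an event
# about a new database record arrives, is transformed, and the result is
# written to S3. Bucket name, field names and event shape are hypothetical.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    record = event.get("detail", {})  # payload delivered by the event bus
    transformed = {
        "customer": record.get("customer_id"),
        "amount_usd": round(float(record.get("amount", 0)), 2),
    }
    s3.put_object(
        Bucket="example-etl-output",  # hypothetical bucket
        Key=f"records/{record.get('id', 'unknown')}.json",
        Body=json.dumps(transformed).encode("utf-8"),
    )
    return {"status": "stored"}
```

The same event could just as easily fan out to a second target, such as a supply-chain notification, without touching this code.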

That said, hooking up event sources to TriggerMesh will not always be straightforward. As with EventBridge, there is a list of pre-baked integrations, as well as the ability to connect to on-premises event buses like IBM MQ. In some cases TriggerMesh is still working on “event transformation to make those events into a recognized format so Amazon EventBridge can consume them,” said Hinkle. There is also an offer to create one-off integrations for “users who may not have the expertise to extract and transform events on their own”.

RedMonk analyst James Governor wrote of “the coming SMOKEstack”, a term invented by Hinkle to describe composable services which are “Serviceful, Mashable, Open, K(C)omposable, Event-driven.”

Businesses that embrace the idea of using an event-driven, serverless model for multi-cloud integration need some kind of cloud broker, and TriggerMesh is aiming to be that broker, though it is early days. ®

Source: https://go.theregister.com/feed/www.theregister.com/2020/08/05/triggermesh_hooks_up_with_aws/

Did this airliner land in the North Sea? No. So what happened? El Reg probes flight tracker site oddity

An airliner that appeared to crash into the North Sea earlier this week in fact landed safely. Yet multiple flight tracker websites showed it spiralling into the ocean. Experts have explained to The Register what really happened.

It began when Reg reader Ross noticed that a flight scheduled to land at Aberdeen on Tuesday 15 September had not arrived. Upon looking at several popular flight tracking websites, he found that the aircraft – an Avro RJ / BAe 146 four-engined regional airliner – seemed to have crashed around 75 miles (121km) south of the Scottish airport.

“I wondered even if it was GPS mangling but it still doesn’t look right,” he told El Reg.

Sure enough, the trace on multiple flight-tracking websites showed that flight ENZ212P had taken off from Southend in Essex, flown north and then seemed to have lost height over the east coast. It then turned through 180 degrees at low level, last being recorded a few hundred feet above the sea.

G-JOTR, a BAe 146 operating as ENZ212P, appeared to have landed in the North Sea earlier this week. Pic: Flight Radar 24

Yet there was no public sign that anything was amiss. No sign of coastguard helicopters or lifeboats being scrambled to rescue those aboard the jet from the cold waters of the North Sea.

It was impossible for the jet to have crashed without anyone noticing. As a commercial flight, ENZ212P (link will stop working for non-Flight Radar 24 subscribers after Sunday 20 September) would have been in constant contact with air traffic control.

Failing to respond to radio messages while descending without clearance would have triggered an almost immediate emergency response.

But that’s what it did, the internet said so

Flight tracking websites pick up aeroplanes’ positions by detecting radio signals emitted by the aircraft themselves. Most work through ADS-B: Automatic Dependent Surveillance – Broadcast. ADS-B signals transmitted by airliners include latitude, longitude, height, and speed, among other data.
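
The broadcast is one-way and unauthenticated: a receiver simply decodes whatever the aircraft chooses to send. As a rough illustration only, not the real bit-level message format, a decoded position report amounts to something like the following, with all values invented:

```python
# Rough illustration of the data a decoded ADS-B position report yields.
# This is NOT the real bit-level message format; field names and values
# are invented for clarity.
from dataclasses import dataclass

@dataclass
class AdsbPositionReport:
    icao_address: str       # 24-bit airframe identifier (hex), made-up here
    callsign: str           # flight identifier, e.g. "ENZ212P"
    latitude: float         # degrees, as reported by the aircraft
    longitude: float        # degrees, as reported by the aircraft
    altitude_ft: int        # altitude in feet, as reported by the aircraft
    ground_speed_kt: float  # knots

report = AdsbPositionReport(
    icao_address="4074F5",  # hypothetical hex code
    callsign="ENZ212P",
    latitude=56.25,
    longitude=-1.9,
    altitude_ft=800,
    ground_speed_kt=140,
)
print(report)
```

Whatever the aircraft reports is what the tracking site plots.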

Some sites also track using the MLAT (multilateration) technique, where a network of receiver stations picks up transponder signals. By cross-referencing signals from multiple stations and comparing their precise time of arrival, an operator can triangulate the location of an aircraft that has its transponder turned on. The technique also works for aeroplanes which are outside ADS-B range – or have their ADS-B equipment turned off.
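
The arithmetic behind MLAT is easy to sketch: each pair of receivers yields a time difference of arrival, and the aircraft must lie where those differences are consistent. Below is a toy two-dimensional version; the receiver coordinates, aircraft position and least-squares approach are all illustrative, and real MLAT engines are considerably more sophisticated.

```python
# Toy 2D multilateration sketch: given receiver positions and the time
# differences at which they heard the same transponder pulse, solve for
# the transmitter position. All numbers are invented for illustration.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

receivers = np.array([[0.0, 0.0], [50_000.0, 0.0], [0.0, 50_000.0]])  # metres
true_position = np.array([32_000.0, 21_000.0])

# Simulate the times at which each station hears the same pulse, then keep
# only the differences relative to station 0 -- which is all a real MLAT
# network can actually measure.
toa = np.linalg.norm(receivers - true_position, axis=1) / C
tdoa = toa[1:] - toa[0]

def residuals(pos):
    # Predicted range differences (metres) minus measured ones.
    dist = np.linalg.norm(receivers - pos, axis=1)
    return (dist[1:] - dist[0]) - C * tdoa

solution = least_squares(residuals, x0=np.array([10_000.0, 10_000.0]))
print(solution.x)  # with this geometry and starting guess, lands near [32000, 21000]
```

The key point for this story: that cross-check only exists when multiple independent stations hear the aircraft; a single ADS-B receiver has nothing to cross-check against.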

Ken Munro and Alex Lomas of Pen Test Partners scratched their heads over the cause of the “crash.” Both agreed the aircraft in question, G-JOTR, had not crashed – especially because it was airborne again the following day and being tracked by the very same sites which last (apparently) had it plunging into the sea.

“We are leaning towards GPS spoofing as there are trials going on in other parts of Scotland that are just about in range,” Munro told El Reg as he pointed to an Ofcom page about military GPS-jamming exercises. One exercise was ongoing at the time of the flight: a unit was exercising on the Ministry of Defence’s West Freugh air weapons range, a block of land and airspace set aside for fighter jets to practise the ways of aerial warfare.

Although the West Freugh exercise was only supposed to affect airborne GPS units within 60 miles, such distances are largely guesswork. Commercial aviators who spoke to The Register testified that GPS jamming often plays havoc with navigation well outside notified jamming zones, especially over the Eastern Mediterranean.

ADS-B position signals are fed by several navigation systems aboard most commercial airliners, with onboard Mode S enhanced transponders doubling as the ADS-B signal source.

“It’s possible that whatever was feeding ADS-B was faulty, but wasn’t a source of data used by the pilots for navigation, so they may not have noticed,” said Munro.

Lomas added: “Looking at the Flight Radar 24 playback you get no GPS altitude for most of the flight then it suddenly jumps up and then down, so I’m assuming there’s a fault with their installation maybe?”

The Flyers and the Flustered: Aberdeen Drift

Canadian open-source intelligence bod Steffan Watkins, whose recent flight tracking research revealed that US intelligence-gathering aircraft were switching transponder codes to pose as benign Malaysian flights off the coast of China, looked at ENZ212P’s online tracks and immediately dismissed the idea that something bad had happened.

He also pointed out that the online flight tracks were recorded through ADS-B, ruling out independent MLAT data.

“This is a beautiful example of the aircraft transmitting a false track that would have been properly triangulated with MLAT,” he told The Register. “With ADS-B you only need one receiver, and that receiver trusts whatever it was told by the plane.”

A copy of a BAe 146 flight crew operations manual (FCOM), seen by The Register, states that the jet’s enhanced Mode S transponders “receive data from the IRSs” when transmitting position information. IRS stands for Inertial Reference System, a rather old technology for determining an aeroplane’s position.

In the days before GPS was affordable and available to all, it was difficult to pinpoint an aircraft’s location mid-air unless you were in range of two or more ground radio beacons. IRS, also known as INS (Inertial Navigation System), uses gyro-stabilised accelerometers and a computer to figure out where an aeroplane has flown to from a precisely known starting point, as explained in depth here.

Its USP is that you don’t need any external inputs (like radio beacons or a GPS) to track where you are. Like any gyro instrument, however, INSs tend to drift over time.
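
The drift comes from integration: the system turns sensed accelerations into velocity and then into position, so even a tiny constant sensor error grows quadratically with time. A back-of-the-envelope sketch, with an invented but merely-plausible bias figure for an older unit:

```python
# Back-of-the-envelope illustration of inertial drift: a small constant
# accelerometer bias, integrated twice, becomes a large position error.
# The bias figure is invented, chosen only to be plausible for an older unit.

bias = 0.0001 * 9.81     # 100 micro-g accelerometer bias, in m/s^2
flight_time = 2 * 3600   # a two-hour sector, in seconds

# Position error from double-integrating a constant acceleration error:
# error = 0.5 * bias * t^2
error_m = 0.5 * bias * flight_time ** 2
print(f"{error_m / 1000:.1f} km of drift after two hours")  # roughly 25 km
```

Which is why the further the jet flies from its last alignment point, the further its reported position can wander.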

Flight Radar 24’s Ian Petchenik independently reviewed that website’s stock of data for G-JOTR and concluded that the jet’s INS has an entirely unsurprising habit of drifting: “After reviewing the data, what we’re looking at is an extreme example of inertial navigation drift. This aircraft (and many older aircraft) use inertial navigation to provide their position. The unit is calibrated before takeoff and then reports its position based on travel from that position.”

Aberdeen Airport, ENZ212P’s destination, was around 70 miles (112km) north of its online position. The trace of its descent and turns precisely fit what an airliner approaching from the south to land on Aberdeen’s runway 16 (on a bearing of roughly 160 degrees) would have done.

Petchenik also dug into Flight Radar 24’s archives and found other examples of G-JOTR appearing to land in weird places, supporting the INS drift hypothesis.

G-JOTR is a landplane, and despite this Flight Radar 24 track, it is not capable of sailing up the River Mersey

G-JOTR did not actually crash into a residential street near Southend Airport: its INS had drifted

So the mystery was solved: ENZ212P hadn’t landed in the North Sea at all. Because its onboard IRS had drifted during flight, it appeared to be ditching while in reality it was making a routine, uneventful approach to Aberdeen’s runway 16, around 70 miles north.

Open-source bod Watkins sighed: “All of these systems were developed with the idea everyone wanted everyone else to have accurate data, for safety, and there are few checks and balances in place to validate the authenticity of the data.”

The next time you’re looking at a flight tracker site and wondering why granny’s return from Benidorm has ended in a field instead of gate 6, remember: not everything on the internet is precisely accurate. ®

Source: https://go.theregister.com/feed/www.theregister.com/2020/09/18/flight_tracking_adsb_oddity_ins_drift/

This is how demon.co.uk ends, not with a bang but a blunder: Randomer swipes decommissioning domain

The last vestige of ye olde UK ISP Demon Internet, in the form of the demon.co.uk subdomain, was given its marching orders this year – after internet services outfit Namesco told customers to change their email address by 29 May.

Vodafone extended the licence to September to give Namesco’s customers a little more time to get their affairs in order, but all good things must come to an end… even email addresses that have loyally served users for decades. Except it didn’t quite manage that.

In a final twist of fate, the decommissioning of the sub-domain was swatted by the dread hand of bork.

“As a result of human error,” the company explained in an email to customers, “an incorrect dummy domain name was used to manage the decommissioning process, and this domain was subsequently registered by a third party.”

The result was that between the end of July and first week of August, sending an email to the now-defunct demon address would see both sender and recipient potentially logged by the mystery third-party server. Namesco was at pains to point out that “no email content was ever delivered to the third party, as the server rejected this content.”

Oops. Once the mistake was spotted, Namesco swiftly changed the dummy domain name. And the third party in question submitted an undertaking promising that no shenanigans were intended.

Namesco reported the incident to the UK’s Information Commissioner’s Office (ICO), and just over a month after the cock-up occurred, affected Register readers received the company’s apology email. Exactly how a human managed to do the deed and what will stop something similar happening in the future remains unclear.

An ICO spokesperson told The Register: “People have the right to expect that organisations will handle their personal information securely and responsibly.

“When a data incident occurs, we would expect an organisation to consider whether it is appropriate to contact the people affected, and to consider whether there are steps that can be taken to protect them from any potential adverse effects.

“Names.co.uk has reported an incident to us and we will be making enquiries.”

A spokesperson for Namesco told us the company had “undertaken a full investigation” into the matter, “and have obtained a signed legally binding undertaking from the operator of the third-party server confirming that no personal data, including in the form of email content, was accessed, forwarded, viewed or stored.”

“Additionally,” it said, “we have confirmed through our technical investigations that the logs were never accessed and have been permanently deleted.”

The spokesperson also confirmed that most of the former Demon customers whose sub-domains were decommissioned this year were affected.

Still, those who have followed the fate of those elderly Demon email addresses (some of which were nearing the 30-year mark) will hopefully be pleased that they shuffled into the long night not quietly, or with head bowed, but with one final, human-induced TITSUP*. ®

* Transfer Into Temporary Sub-domain Utter Pants

Source: https://go.theregister.com/feed/www.theregister.com/2020/09/18/demon_decommissioning_oopsie/

Should we all consolidate databases for the storage benefits? Reg vultures deploy DevOps, zoos, haircuts

Register Debate You’d think debating the benefits of database consolidation for storage would be a relatively straightforward affair. Not when it’s a Register Debate.

This week our writers turned their attention to the following motion: Consolidating databases has significant storage benefits, therefore everyone should be doing it. In the process, they conjured up images of hideous chimeras, slated inefficient programming, and drew a straight line between DevOps practices and the perfect barbershop experience.

Maybe it’s not such a surprise. This is an area that encompasses your company’s precious data and whole thickets of thorny hardware and software engineering problems. Make the wrong choice and your company could be subsidising your vendor account manager’s holiday villa for years to come. There’s a lot at stake here.

Database consolidation is a server issue, not a storage game

First to take the floor on Monday was El Reg’s storage supremo Chris Mellor, arguing against the motion because, “The idea that consolidating databases has significant storage benefits and therefore everyone should be doing it is missing the point.” Switching to an all flash array, for example, is not an issue of consolidation, Chris argued, “It’s database acceleration.”

“Database consolidation onto fewer servers saves server cost because you need fewer servers, and also saves database instance licensing expense as you need fewer per-server instance licenses,” he concluded. “There is no storage benefit here but the potentially significant server-based benefits make database consolidation an attractive idea that can serve you right.”

Arguing for the motion was Dave Cartwright, who is a chartered engineer, a chartered IT pro, and a member of the British Computer Society. Dave took a long view of the issue, noting sagely that: “Some of us learned about technology in the days when you had to be mindful of how you used it… you made darned sure that you stored as few copies of your data as you could, because you didn’t have much storage; part of this limitation was the technical limits of the hardware, but most of it was the sheer cost of the stuff.”

These days, he argued, “the technology is so fast, cheap and forgiving that you can use it inefficiently and it’ll save your bacon through raw speed and size … most of the time, anyway.” Because, if we’re honest, we all know that devs have been quietly copying parts of databases, or departments have been spinning up their own stores, often overlapping info with other departments, and no one ever deletes any of this, because… well, just in case.

The result? Wasted storage obviously, but also information audit challenges, data protection issues, and all the other problems that spring from an incontinent approach to data and storage.

Ultimately, Dave said, consolidating databases can address all of those issues, save a “boatload of storage,” and probably improve performance as you will “be making sure you index stuff properly and write queries to access fewer data stores.”
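
As a loose illustration of Dave's point, and nothing more, the sketch below uses Python's built-in sqlite3 to show the shape of the consolidated approach: one shared, indexed table that every department queries, instead of each keeping its own copy. The schema, table and column names are invented.

```python
# Loose illustration of the consolidation argument using Python's built-in
# sqlite3: one shared, indexed customers table instead of per-department
# copies. Schema, table and column names are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE customers (
        id INTEGER PRIMARY KEY,
        region TEXT NOT NULL,
        name TEXT NOT NULL
    )
""")
# The index is what keeps the single consolidated table fast to query,
# rather than each department holding its own filtered copy.
con.execute("CREATE INDEX idx_customers_region ON customers (region)")

con.executemany(
    "INSERT INTO customers (region, name) VALUES (?, ?)",
    [("emea", "Acme Ltd"), ("apac", "Globex Pty"), ("emea", "Initech GmbH")],
)

# Every department runs against the same store, filtered as needed.
for (name,) in con.execute(
    "SELECT name FROM customers WHERE region = ?", ("emea",)
):
    print(name)
```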

And who wouldn’t want all of that? Well, not all Reg readers. You can see some of the most upvoted comments in the box below, but suffice to say the phrase “eggs in one basket” popped up a couple of times, along with Oracle RDB and UNIVACs. And commenter PeterCorless raised a series of points, including the observation that “there’s no way your standard ERP system is keeping up with the raw rate of ingestion and analytics of IIoT. And no way the CFO is going to let quarter close be impacted because someone’s trying to run an ad hoc data query on the ERP system.”

So it was no surprise that Chris weighed back into the fray on Wednesday with a nightmarish vision of just what could happen if you really think through database consolidation.

Trying to consolidate RDBMSes and NoSQL stores – for example – into a single database, on a single storage vault, is “an impractical curiosity” akin to “trying to combine a horse and a fish, and building a noisy crowded zoo” to keep them in. Just think of the mess. Apart from ACID and CAP issues, the poor storage admins face the problems of disparate metadata and log data, as well as sizing and IO processing challenges.

Or, as Chris summed it up, horses can’t live in the sea with fish, or fish on the land with horses. (We now fully expect a database consolidation startup to appear called Seahorse. Or Landfish.)

After this nightmarish image of database chimeras prowling around expanding menageries, it was down to El Reg’s APAC editor Simon Sharwood to tie things up by turning the argument on its head, then giving it a good haircut into the bargain.

That’s because Simon used the example of his barber’s app, which shows a real-time queue, allows him to choose a cut in advance, and book and pay for it. The only part that doesn’t rely on a database – for now at least – is the part where scissors actually meet hair and the client says no, they don’t need something for the weekend.

We don’t normally consider the implications of DevOps for barbering, but, Simon argued: “In this world where software rules and businesses bend over backwards for developers, simplicity is valuable. Which is why database consolidation is a fine thing.”

Yes, this might send the ops team reaching for a hot towel, but “smart organisations don’t let it get to the stage where they are caught in a web of legacy tech…hostage to a shrinking pool of tech and services vendors who can ratchet up prices.”

In the end, and with upwards of 300 readers taking part, it seems the readers came down in favour of the motion. But each vote reflects the state of play in each voter’s own organisation, at least to an extent. So, are customers and/or devs setting the agenda at the majority of organisations? And does this mean a focus on application delivery and customer experience trumps the views – and bitter experience – of storage/ops folks? Sounds like another debate topic. Expect fireworks. ®

Top comments upvoted by you, selected by us

“Err, no. I deal with a couple of legacy databases, Oracle RDB, and a hierarchical database that originated on UNIVACs. Neither of those is going to be on the table for consolidation. …. There are a few reasons for consolidation, but there are many reasons to refrain. Packing your favorite bowl or cup in your attic chest of porcelain means that you will constantly dis/reassemble the contents, and things will likely get broken. Certain architectural aspects become brittle and very difficult to change” – Chasil

“In a hypothetical situation and hear me out on this, having everything in one database makes queries a bitch especially when every person and their dog are doing it. It also leaves you wide open to user errors if you don’t set the permissions right which I know can be an issue with multiple databases and yes I have heard of backups but it’s just too risky. Divide and conquer I say. There is a reason we don’t put our eggs in one basket. Best case is also local duplication for anything that doesn’t require real time access. This is just my opinion on the matter” – Anonymous Coward

“Consolidation is putting all your eggs in one basket Any breakage and nothing works. It also means that the one database/cluster/… does all the work ie a higher workload than when it is distributed. One humongous machine might be more costly than several smaller ones — maybe” – Alain williams

“Have to agree with Chris… Here’s a guy who actually knows what he’s talking about. But here’s the thing… I don’t know if the question is being framed properly. When you say ‘database consolidation, what do you mean exactly. Yes, it’s a strange question, but think about it. You have databases that are OLTP transaction processing systems of truth. Then you have Data Warehouses (OLAP) that are used to drive analytics. Then you have Data Lakes which in itself is a Data Warehouse consolidation by removing the silos. (Here the number of DWs goes down, but the storage requirements go up. ) And it’s not just the CPUs getting better, or storage, but also networking. 40GbE is becoming Cisco’s norm. 100GbE is also there… But at 40GbE you can start to consider data fabric as your storage layer. The issue is cost versus density and performance has to be evaluated on a case by case basis. The networking also allows for a segregation of COTS and specialty hardware to get the most bang for your buck. You can weave a GPU appliance into your data fabric and then consolidate compute servers using K8s to allow distributed OLTP RDBMs to take better advantage of the hardware. (This is where the network can be a bottleneck. )

“What’s interesting and a side note… when you look at this… its in *YOUR* Data Center. Not on the cloud. (Although it could be in the Cloud too.) These advances will spell a down turn in the cloud over the next 5 years. Thats not to say that there won’t be a reason for cloud but more of a hybrid approach. Just some random thought from someone who’s been around this for far too long but too broke to retire. ;-)” – Mike the FlyingRat

“Improving tech is bad? On the argument that improving tech results in poor, lazy coding: Not quite. In current development cycles, delivering something on time (aka AGILE) is important. What is delivered is less so, but to hit these targets, developers take short cuts, write lazy code and rely on the tech to cover for them – who has time to optimise the code when you have to deliver in three days?

“So what’s written is ‘good enough’ in that it works thanks to the faster CPU, and it doesn’t matter that it takes more space ’cause disk is cheap, right? And when does the optimisation happen? When do we get to go back to make sure it’s efficient? When something breaks and we have no choice.” – Helcat

Source: https://go.theregister.com/feed/www.theregister.com/2020/09/18/storage_consolidation_debate_results/
