India has inaugurated a submarine cable between the mainland and the Andaman and Nicobar islands, an archipelago 1,350km from the nation’s east coast.
When the islands make international news it is often because of the inhabitants of North Sentinel Island, a speck of land that it is forbidden to approach because its residents appear not to have left the Stone Age and like it that way. Indeed, the island’s residents are openly hostile to visitors: a missionary who paddled onto the island in 2018 was murdered.
But the islands are also of enormous strategic significance, because they are sovereign Indian territory and closer to vital south-east Asian shipping routes than the rest of India. The nation has therefore built substantial military infrastructure on the islands, expanded it in recent times, and even floated the idea of permitting allies’ navies to access its ports, just to show the world that it can get a lot of guns to the islands in short order.
India’s home minister Amit Shah yesterday inaugurated the 400Gbps submarine cable to the islands, saying: “this will lead to great benefits like e-education, banking facilities, telemedicine and surge employment by providing a major boost to the tourism sector.”
Shah is probably not wrong, because the islands are remote and India is adopting e-government services at such speed that island residents will need good connectivity. It may even help tourism too, because a tropical island holiday just isn’t a tropical island holiday these days unless there’s really good WiFi in your resort.
But India’s military will surely also use the new cable, because lower latency will be very handy for things like working on India’s long-range missiles, assisting its Air Force (which recently acquired five new Rafale fighter jets from France), or managing its Navy, which is one of only 15 capable of blue-water operations and one of seven capable of projecting significant force beyond a nation’s own territory.
So once international travel resumes, by all means visit the Andamans. And as you read The Register while sipping on whatever cocktail takes your fancy, remember that the new submarine cable keeps you connected and perhaps also protected. Or exposed, depending on your view of things. ®
Did this airliner land in the North Sea? No. So what happened? El Reg probes flight tracker site oddity
An airliner that appeared to crash into the North Sea earlier this week in fact landed safely. Yet multiple flight tracker websites showed it spiralling into the ocean. Experts have explained to The Register what really happened.
It began when Reg reader Ross noticed that a flight scheduled to land at Aberdeen on Tuesday 15 September had not arrived. Upon looking at several popular flight tracking websites, he found that the aircraft – an Avro RJ / BAe 146 four-engined regional airliner – seemed to have crashed around 75 miles (121km) south of the Scottish airport.
“I wondered even if it was GPS mangling but it still doesn’t look right,” he told El Reg.
Sure enough, the trace on multiple flight-tracking websites showed that flight ENZ212P had taken off from Southend in Essex, flown north and then seemed to have lost height over the east coast. It then turned through 180 degrees at low level, last being recorded a few hundred feet above the sea.
G-JOTR, a BAe 146 operating as ENZ212P, appeared to have landed in the North Sea earlier this week. Pic: Flight Radar 24
Yet there was no public sign that anything was amiss. No sign of coastguard helicopters or lifeboats being scrambled to rescue those aboard the jet from the cold waters of the North Sea.
It was impossible for the jet to have crashed without anyone noticing. As a commercial flight, ENZ212P (link will stop working for non-Flight Radar 24 subscribers after Sunday 20 September) would have been in constant contact with air traffic control.
Failing to respond to radio messages while descending without clearance would have triggered an almost immediate emergency response.
But that’s what it did, the internet said so
Flight tracking websites pick up aeroplanes’ positions through detecting radio signals emitted by the aircraft themselves. Most work through ADS-B: Automatic Dependent Surveillance – Broadcast. ADS-B signals transmitted by airliners include latitude, longitude, height, and speed among others.
Some sites also track using the MLAT (multilateration) technique, where a network of receiver stations picks up transponder signals. By cross-referencing signals from multiple stations and comparing their precise time of arrival, an operator can triangulate the location of an aircraft that has its transponder turned on. The technique also works for aeroplanes which are outside ADS-B range – or have their ADS-B equipment turned off.
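The cross-referencing step above can be sketched in code. This is a simplified, hypothetical 2D example of time-difference-of-arrival multilateration, not the algorithm any particular tracking site uses: four receivers at assumed positions record when a transponder pulse arrives, and comparing differences in arrival time (which cancels the unknown transmission time) lets a coarse search recover the transmitter's location.

```python
import itertools
import math

# Hypothetical 2D MLAT sketch. Receiver positions in km; signals travel at c.
C = 299_792.458  # speed of light, km/s

receivers = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
true_pos = (40.0, 70.0)  # the aircraft's actual (unknown to us) position

# Each receiver timestamps the same pulse; the transmission time itself
# is unknown, so only *differences* between receivers are usable.
arrivals = [math.dist(rx, true_pos) / C for rx in receivers]

def tdoa_residual(candidate):
    # Predicted arrival times at each receiver if the aircraft were here.
    pred = [math.dist(rx, candidate) / C for rx in receivers]
    # Differencing against receiver 0 cancels the unknown transmit time.
    meas_diff = [t - arrivals[0] for t in arrivals[1:]]
    pred_diff = [p - pred[0] for p in pred[1:]]
    return sum((m - p) ** 2 for m, p in zip(meas_diff, pred_diff))

# Coarse 1km grid search -- enough to show the principle.
best = min(itertools.product(range(0, 101), repeat=2),
           key=lambda p: tdoa_residual((float(p[0]), float(p[1]))))
print(best)  # -> (40, 70): the grid point matching the true position
```

The key point for the story below: this cross-check needs several independent receivers, whereas a single ADS-B receiver simply believes whatever position the aircraft reports.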
Ken Munro and Alex Lomas of Pen Test Partners scratched their heads over the cause of the “crash.” Both agreed the aircraft in question, G-JOTR, had not crashed – especially because it was airborne again the following day and being tracked by the very same sites which last (apparently) had it plunging into the sea.
“We are leaning towards GPS spoofing as there are trials going on in other parts of Scotland that are just about in range,” Munro told El Reg as he pointed to an Ofcom page about military GPS-jamming exercises. One exercise was ongoing at the time of the flight: a unit was exercising on the Ministry of Defence’s West Freugh air weapons range, a block of land and airspace dedicated for fighter jets to practise the ways of aerial warfare.
Although the West Freugh exercise was only supposed to affect airborne GPS units within 60 miles, such distances are largely guesswork. Commercial aviators who spoke to The Register testified that GPS jamming often plays havoc with navigation well outside notified jamming zones, especially over the Eastern Mediterranean.
ADS-B position signals are fed by several navigation systems aboard most commercial airliners, with onboard Mode S enhanced transponders doubling as the ADS-B signal source.
“It’s possible that whatever was feeding ADS-B was faulty, but wasn’t a source of data used by the pilots for navigation, so they may not have noticed,” said Munro.
Lomas added: “Looking at the Flight Radar 24 playback you get no GPS altitude for most of the flight then it suddenly jumps up and then down, so I’m assuming there’s a fault with their installation maybe?”
The Flyers and the Flustered: Aberdeen Drift
Canadian open-source intelligence bod Steffan Watkins, whose recent flight tracking research revealed that US intelligence-gathering aircraft were switching transponder codes to pose as benign Malaysian flights off the coast of China, looked at ENZ212P’s online tracks and immediately dismissed the idea that something bad had happened.
He also pointed out that the online flight tracks were recorded through ADS-B, ruling out independent MLAT data.
“This is a beautiful example of the aircraft transmitting a false track that would have been properly triangulated with MLAT,” he told The Register. “With ADS-B you only need one receiver, and that receiver trusts whatever it was told by the plane.”
A copy of a BAe 146 flight crew operations manual (FCOM), seen by The Register, states that the jet’s enhanced Mode S transponders “receive data from the IRSs” when transmitting position information. IRS stands for Inertial Reference System, a rather old technology for determining an aeroplane’s position.
In the days before GPS was affordable and available to all, it was difficult to pinpoint an aircraft’s location mid-air unless you were in range of two or more ground radio beacons. IRS, also known as INS (Inertial Navigation System), uses gyro-stabilised accelerometers and a computer to figure out where an aeroplane has flown to from a precisely known starting point, as explained in depth here.
Its USP is that you don’t need any external inputs (like radio beacons or a GPS) to track where you are. Like any gyro instrument, however, INSs tend to drift over time.
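Why does drift grow so dramatically over a flight? A back-of-the-envelope sketch (my illustration, not from the article): dead reckoning integrates acceleration once to get velocity and again to get position, so even a tiny constant sensor bias compounds with the square of elapsed time.

```python
# Sketch of inertial drift: a constant accelerometer bias b (m/s^2)
# becomes a velocity error of b*t after one integration, and a position
# error of 0.5 * b * t**2 after the second.

def dead_reckoning_error(bias_ms2: float, seconds: float) -> float:
    """Position error in metres from a constant accelerometer bias."""
    return 0.5 * bias_ms2 * seconds ** 2

# A bias of just 0.001 m/s^2 (roughly 0.1 milli-g) over a two-hour flight:
drift_km = dead_reckoning_error(0.001, 2 * 3600) / 1000
print(f"about {drift_km:.0f}km of drift")  # -> about 26km of drift
```

The numbers here are assumed for illustration, but they show how an uncorrected INS can plausibly end up tens of kilometres adrift by the end of a sector.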
Flight Radar 24’s Ian Petchenik independently reviewed that website’s stock of data for G-JOTR and concluded that the jet’s INS has an entirely unsurprising habit of drifting: “After reviewing the data, what we’re looking at is an extreme example of inertial navigation drift. This aircraft (and many older aircraft) use inertial navigation to provide their position. The unit is calibrated before takeoff and then reports its position based on travel from that position.”
Aberdeen Airport, ENZ212P’s destination, was around 70 miles (112km) north of its online position. The trace of its descent and turns precisely fitted what an airliner approaching from the south to land on Aberdeen’s runway 16 (on a bearing of roughly 160 degrees) would have done.
Petchenik also dug into Flight Radar 24’s archives and found other examples of G-JOTR appearing to land in weird places, supporting the INS drift hypothesis.
G-JOTR is a landplane, and despite this Flight Radar 24 track, it is not capable of sailing up the River Mersey
So the mystery was solved: ENZ212P hadn’t landed in the North Sea at all. Because its onboard IRS had drifted during flight, it appeared to be ditching while in reality it was making a routine, uneventful approach to Aberdeen’s runway 16, around 70 miles north.
Open-source bod Watkins sighed: “All of these systems were developed with the idea everyone wanted everyone else to have accurate data, for safety, and there are few checks and balances in place to validate the authenticity of the data.”
The next time you’re looking at a flight tracker site and wondering why granny’s return from Benidorm has ended in a field instead of gate 6, remember: not everything on the internet is precisely accurate. ®
This is how demon.co.uk ends, not with a bang but a blunder: Randomer swipes decommissioning domain
The last vestige of ye olde UK ISP Demon Internet, in the form of the demon.co.uk subdomain, was given its marching orders this year – after internet services outfit Namesco told customers to change their email address by 29 May.
Vodafone extended the licence to September to give Namesco’s customers a little more time to get their affairs in order, but all good things must come to an end… even email addresses that have loyally served users for decades. Except it didn’t quite manage that.
In a final twist of fate, the decommissioning of the sub-domain was swatted by the dread hand of bork.
“As a result of human error,” the company explained in an email to customers, “an incorrect dummy domain name was used to manage the decommissioning process, and this domain was subsequently registered by a third party.”
The result was that between the end of July and first week of August, sending an email to the now-defunct demon address would see both sender and recipient potentially logged by the mystery third-party server. Namesco was at pains to point out that “no email content was ever delivered to the third party, as the server rejected this content.”
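Why could addresses leak even though content was rejected? In SMTP, the envelope (MAIL FROM / RCPT TO) is transmitted before the message body, so a server that refuses the DATA command has already seen both addresses. The toy state machine below illustrates that general mechanic; it is an assumption about standard SMTP behaviour, not a description of Namesco's or the third party's actual setup.

```python
# Toy SMTP session: the envelope addresses arrive (and can be logged)
# before any message content, which the server then rejects outright.

def toy_smtp_session(commands):
    log, replies = [], []
    for cmd in commands:
        verb = cmd.split(":")[0].upper()
        if verb in ("MAIL FROM", "RCPT TO"):
            log.append(cmd)                      # envelope logged here
            replies.append("250 OK")
        elif verb == "DATA":
            replies.append("554 Transaction failed")  # content refused
        else:
            replies.append("250 OK")
    return log, replies

log, replies = toy_smtp_session([
    "HELO example.org",
    "MAIL FROM:<alice@example.org>",      # hypothetical addresses
    "RCPT TO:<bob@demon.co.uk>",
    "DATA",
])
print(log)          # both addresses captured, no body ever accepted
print(replies[-1])  # -> 554 Transaction failed
```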
Oops. Once the mistake was spotted, Namesco swiftly changed the dummy domain name. And the third party in question submitted an undertaking promising that no shenanigans were intended.
Namesco email ‘scripting error’ has last bastion of Demon Internet holdouts scratching their heads
Namesco reported the incident to the UK’s Information Commissioner’s Office (ICO), and just over a month after the cock-up occurred, affected Register readers received the company’s apology email. Exactly how a human managed to do the deed and what will stop something similar happening in the future remains unclear.
An ICO spokesperson told The Register: “People have the right to expect that organisations will handle their personal information securely and responsibly.
“When a data incident occurs, we would expect an organisation to consider whether it is appropriate to contact the people affected, and to consider whether there are steps that can be taken to protect them from any potential adverse effects.
“Names.co.uk has reported an incident to us and we will be making enquiries.”
A spokesperson for Namesco told us the company had “undertaken a full investigation” into the matter, “and have obtained a signed legally binding undertaking from the operator of the third-party server confirming that no personal data, including in the form of email content, was accessed, forwarded, viewed or stored.”
“Additionally,” it said, “we have confirmed through our technical investigations that the logs were never accessed and have been permanently deleted.”
The spokesperson also confirmed that most of the former Demon customers whose sub-domains were decommissioned this year were affected.
Still, those who have followed the fate of those elderly Demon email addresses (some of which were nearing the 30-year mark) will hopefully be pleased that they shuffled into the long night not quietly, or with head bowed, but with one final, human-induced TITSUP*. ®
* Transfer Into Temporary Sub-domain Utter Pants
Should we all consolidate databases for the storage benefits? Reg vultures deploy DevOps, zoos, haircuts
Register Debate You’d think debating the benefits of database consolidation for storage would be a relatively straightforward affair. Not when it’s a Register Debate.
This week our writers turned their attention to the following motion: Consolidating databases has significant storage benefits, therefore everyone should be doing it. In the process, they conjured up images of hideous chimeras, slated inefficient programming, and drew a straight line between DevOps practices and the perfect barbershop experience.
Maybe it’s not such a surprise. This is an area that encompasses your company’s precious data and whole thickets of thorny hardware and software engineering problems. Make the wrong choice and your company could be subsidising your vendor account manager’s holiday villa for years to come. There’s a lot at stake here.
Database consolidation is a server issue, not a storage game
First to take the floor on Monday was El Reg’s storage supremo Chris Mellor, arguing against the motion because, “The idea that consolidating databases has significant storage benefits and therefore everyone should be doing it is missing the point.” Switching to an all flash array, for example, is not an issue of consolidation, Chris argued, “It’s database acceleration.”
“Database consolidation onto fewer servers saves server cost because you need fewer servers, and also saves database instance licensing expense as you need fewer per-server instance licenses,” he concluded. “There is no storage benefit here but the potentially significant server-based benefits make database consolidation an attractive idea that can serve you right.”
Arguing for the motion was Dave Cartwright, who is a chartered engineer, a chartered IT pro, and a member of the British Computer Society. Dave took a long view of the issue, noting sagely that: “Some of us learned about technology in the days when you had to be mindful of how you used it… you made darned sure that you stored as few copies of your data as you could, because you didn’t have much storage; part of this limitation was the technical limits of the hardware, but most of it was the sheer cost of the stuff.”
These days, he argued, “the technology is so fast, cheap and forgiving that you can use it inefficiently and it’ll save your bacon through raw speed and size … most of the time, anyway.” Because, if we’re honest, we all know that devs have been quietly copying parts of databases, or departments have been spinning up their own stores, often overlapping info with other departments, and no one ever deletes any of this, because… well, just in case.
The result? Wasted storage obviously, but also information audit challenges, data protection issues, and all the other problems that spring from an incontinent approach to data and storage.
Ultimately, Dave said, consolidating databases can address all of those issues, save a “boatload of storage,” and probably improve performance as you will “be making sure you index stuff properly and write queries to access fewer data stores.”
And who wouldn’t want all of that? Well, not all Reg readers. You can see some of the most upvoted comments in the box below, but suffice to say the phrase “eggs in one basket” popped up a couple of times, along with Oracle RDB and UNIVACs. And commenter PeterCorless raised a series of points, including the observation that “there’s no way your standard ERP system is keeping up with the raw rate of ingestion and analytics of IIoT. And no way the CFO is going to let quarter close be impacted because someone’s trying to run an ad hoc data query on the ERP system.”
So it was no surprise that Chris weighed back into the fray on Wednesday with a nightmarish vision of just what could happen if you really think through database consolidation.
Trying to consolidate RDBMSes and NoSQL stores – for example – into a single database, on a single storage vault is “an impractical curiosity” akin to “trying to combine a horse and a fish, and building a noisy crowded zoo” to keep them in. Just think of the mess. Apart from ACID and CAP issues, the poor storage admins face the problems of disparate metadata and log data, as well as sizing and IO processing challenges.
Or, as Chris summed it up, horses can’t live in the sea with fish, or fish on the land with horses. (We now fully expect a database consolidation startup to appear called Seahorse. Or Landfish.)
After this nightmarish image of database chimeras prowling around expanding menageries, it was down to El Reg’s APAC editor Simon Sharwood to tie things up by turning the argument on its head, then giving it a good haircut into the bargain.
In this world where software rules and businesses bend over backwards for developers, simplicity is valuable
That’s because Simon used the example of his barber’s app, which shows a real-time queue, allows him to choose a cut in advance, and book and pay for it. The only part that doesn’t rely on a database – for now at least – is the part where scissors actually meet hair and the client says no, they don’t need something for the weekend.
We don’t normally consider the implications of DevOps for barbering, but, Simon argued: “In this world where software rules and businesses bend over backwards for developers, simplicity is valuable. Which is why database consolidation is a fine thing.”
Yes, this might send the ops team reaching for a hot towel, but “smart organisations don’t let it get to the stage where they are caught in a web of legacy tech…hostage to a shrinking pool of tech and services vendors who can ratchet up prices.”
In the end, and with upwards of 300 readers taking part, it seems the readers came down in favour of the motion. But each vote reflects the state of play in each voter’s own organisation, at least to an extent. So, are customers and/or devs setting the agenda at the majority of organisations? And does this mean a focus on application delivery and customer experience trumps the views – and bitter experience – of storage/ops folks? Sounds like another debate topic. Expect fireworks. ®
Top comments upvoted by you, selected by us
“Err, no. I deal with a couple of legacy databases, Oracle RDB, and a hierarchical database that originated on UNIVACs. Neither of those is going to be on the table for consolidation. …. There are a few reasons for consolidation, but there are many reasons to refrain. Packing your favorite bowl or cup in your attic chest of porcelain means that you will constantly dis/reassemble the contents, and things will likely get broken. Certain architectural aspects become brittle and very difficult to change” – Chasil
“In a hypothetical situation and hear me out on this, having everything in one database makes queries a bitch especially when every person and their dog are doing it. It also leaves you wide open to user errors if you don’t set the permissions right which I know can be an issue with multiple databases and yes I have heard of backups but it’s just too risky. Divide and conquer I say. There is a reason we don’t put our eggs in one basket. Best case is also local duplication for anything that doesn’t require real time access. This is just my opinion on the matter” – Anonymous Coward
“Consolidation is putting all your eggs in one basket Any breakage and nothing works. It also means that the one database/cluster/… does all the work ie a higher workload than when it is distributed. One humongous machine might be more costly than several smaller ones — maybe” – Alain williams
“Have to agree with Chris… Here’s a guy who actually knows what he’s talking about. But here’s the thing… I don’t know if the question is being framed properly. When you say ‘database consolidation, what do you mean exactly. Yes, it’s a strange question, but think about it. You have databases that are OLTP transaction processing systems of truth. Then you have Data Warehouses (OLAP) that are used to drive analytics. Then you have Data Lakes which in itself is a Data Warehouse consolidation by removing the silos. (Here the number of DWs goes down, but the storage requirements go up. ) And it’s not just the CPUs getting better, or storage, but also networking. 40GbE is becoming Cisco’s norm. 100GbE is also there… But at 40GbE you can start to consider data fabric as your storage layer. The issue is cost versus density and performance has to be evaluated on a case by case basis. The networking also allows for a segregation of COTS and specialty hardware to get the most bang for your buck. You can weave a GPU appliance into your data fabric and then consolidate compute servers using K8s to allow distributed OLTP RDBMs to take better advantage of the hardware. (This is where the network can be a bottleneck. )
“What’s interesting and a side note… when you look at this… its in *YOUR* Data Center. Not on the cloud. (Although it could be in the Cloud too.) These advances will spell a down turn in the cloud over the next 5 years. Thats not to say that there won’t be a reason for cloud but more of a hybrid approach. Just some random thought from someone who’s been around this for far too long but too broke to retire. ;-)” – Mike the FlyingRat
“Improving tech is bad? On the argument that improving tech results in poor, lazy coding: Not quite. In current development cycles, delivering something on time (aka AGILE) is important. What is delivered is less so, but to hit these targets, developers take short cuts, write lazy code and rely on the tech to cover for them – who has time to optimise the code when you have to deliver in three days?
“So what’s written is ‘good enough’ in that it works thanks to the faster CPU, and it doesn’t matter that it takes more space ’cause disk is cheap, right? And when does the optimisation happen? When do we get to go back to make sure it’s efficient? When something breaks and we have no choice.” – Helcat