Konsentus today announced the launch of an international infrastructure platform designed to accelerate a country’s implementation of open banking.
The Konsentus Open Banking Hub (OB Hub), a SaaS-based solution, runs in a national cloud infrastructure, provides end-to-end functionality, and helps create, support, and secure vibrant open banking economies across the globe.
The OB Hub removes the complexity involved in setting up a central and secure open banking ecosystem within a country, enabling regulated entities to quickly and easily share data and execute payment transactions with each other. The OB Hub has three core components.
- i) Participant onboarding and registration services
Through identity and verification services, authorised participants (both organisations and individuals) can register and onboard within the open banking ecosystem, enabling them to share data and execute payment transactions with other regulated entities in a safe and secure environment.
- ii) Directory Services
A central directory service that participants can access online, in real time, 24/7 to verify the identity and regulatory status of individual entities when transaction requests are made. The OB Hub is a central repository of the latest available regulatory information on all registered and regulated entities. This includes performance and availability data, contact details, data updates, and revocation information and history for all participants.
- iii) Certificate Authority
The OB Hub issues and manages the digital credentials of all participants in the open banking ecosystem. This enables participants to positively identify themselves to other regulated entities in order to perform open banking transactions such as sharing data and executing payment transactions.
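Taken together, the directory and credential services described above amount to a gating decision before any transaction proceeds. As a loose illustration only (the OB Hub's actual records, field names, and protocols are not public), the logic might look like:

```python
from datetime import datetime, timezone
from typing import Optional

def may_transact(record: dict, now: Optional[datetime] = None) -> bool:
    """Gate a transaction on a counterparty's directory record.

    `record` is a hypothetical directory entry; the `regulated`,
    `revoked`, and `credential_expiry` field names are assumptions for
    illustration, not the OB Hub's actual schema.
    """
    now = now or datetime.now(timezone.utc)
    if not record.get("regulated", False):
        return False  # entity is not currently regulated
    if record.get("revoked", False):
        return False  # credential has been revoked
    # Credential must not have expired (ISO 8601 timestamp with offset).
    return datetime.fromisoformat(record["credential_expiry"]) > now
```

In practice this check would be backed by the real-time directory lookup and the certificate authority's revocation data, rather than a local dictionary.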
The OB Hub provides a rich data management system enabling national authorities to monitor the adoption and success of open banking in their country. An immutable audit log records all transaction requests, providing valuable information for dispute management. Quick and easy to set up, the OB Hub provides a trusted central system, enabling all regulated entities to interact with each other in a safe and secure environment.
Additional elements of the OB Hub include publication of national API standards, a third-party provider (TPP) testing sandbox, a central API and app marketplace, and an API monitoring service that shows real-time national API performance.
All of the above comes with a secure messaging platform for participant-to-participant messaging and reporting, plus a dedicated helpdesk for support services.
In addition to the onboarding, directory and certificate checking services, the complete Konsentus OB Hub solution delivers a programme management forum to educate market participants, alongside testing and support services to enable a fully functioning national open banking environment.
Mike Woods, CEO, Konsentus, commented, “The creation of a central open banking platform enables us to help individual countries realise their open banking ambitions without having to understand, build and execute their own systems. We already deliver first-class technology that’s scalable, resilient and built for maximum availability. Data consolidation, standardisation and formatting in a secure real-time, online environment are why we are global leaders in our field and the trusted partner for safe and secure open banking economies.”
California’s Proposition 24 Confirms the Fate of Data Privacy
By Kyle McNabb.
The rolling thunder of data regulations rumbles on — much to the dismay of companies and the delight of consumers. The latest rainmaker (or taker) is California’s Proposition 24. This consumer privacy ballot initiative, containing the Consumer Privacy Rights Act (CPRA), was passed on November 3, 2020, establishing a new standard for data privacy in the state. The CPRA builds on the California Consumer Privacy Act (CCPA), addressing its predecessor’s shortcomings and expediting California’s legislation on data privacy.
While Proposition 24 has been nicknamed CCPA 2.0, it is much more than another drop in the regulatory bucket. It will enforce new requirements that companies must take note of and prepare for — both with their compliance strategies and long-term approach to data privacy, which is clearly here to stay.
What Does Proposition 24 Mean for Data Privacy?
There is a key difference between the CCPA, which only became enforceable months ago, and Proposition 24 (and the CPRA). Proposition 24 will become state law as written, not legislatively enacted, which means it can’t be amended without more voter action, like another ballot initiative. Why does this matter?
The passage of Proposition 24 in California is further proof that consumers want a say in how they are tracked on the internet and how their data is used by companies. They feel so strongly about these rights that they’ve already improved upon the CCPA and ensured these improvements were more legislatively permanent. That’s telling. Proposition 24 represents more than a surge in regulations: it embodies an awakening of the modern consumer.
With the greater burden placed on businesses to stay on top of cybersecurity audits and risk assessments, it’s increasingly important they have a handle on how much data lives within their organization, how sensitive it is, and how much risk is involved in their handling of that data.
How Will Proposition 24 Change the CCPA?
The new legislation will ultimately strengthen and give new teeth to the existing CCPA by creating new privacy rights for consumers, obligations for businesses, and enforcement mechanisms through a new state agency. Under Proposition 24, consumers gain the right to:
- Correct personal information
- Know the length of data retention
- Opt out of advertisers using precise geolocation data
- Restrict usage of sensitive personal information
While the new legislation does roll back requirements on companies to respond to
individual data requests and provide full data reports, other laws still require
businesses to provide individuals with information about how their data is used.
In other words, companies shouldn’t be thinking about relaxing any data privacy
and security efforts they have in place. Instead, businesses should look out
for four big changes from Proposition 24:
- It defines a new category of “sensitive personal information,” which
is broader and stricter than just “personal information.” For instance, new
stipulations include tripling penalties for violations concerning
consumers younger than 16 years old.
- It creates a new state agency: the California Privacy Protection
Agency (CPPA), the first of its kind in the United States. The CPPA will have
full administrative power and oversight for enforcement, including audits.
- It prohibits precise geolocation tracking to a location within roughly
250 acres. To accommodate this change, companies will have to adjust their data collection practices.
- It allows consumers to limit the use and disclosure of sensitive
personal information based on the broader category.
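To make the geolocation change above concrete, one simple approach (purely a sketch; the initiative does not prescribe an algorithm) is to snap stored coordinates to a grid no finer than the permitted area:

```python
def coarsen(lat: float, lon: float, ndigits: int = 2) -> tuple:
    """Round coordinates so stored locations fall on a coarse grid.

    Two decimal places of latitude is roughly a 1.1 km cell, on the
    order of the ~250-acre floor described above; the precision a
    regulator would actually accept is an assumption, not settled law.
    """
    return (round(lat, ndigits), round(lon, ndigits))
```

A pipeline applying this at ingestion time never retains the precise location, which is a stronger posture than coarsening on demand.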
The takeaway here is that the legislation still gives consumers data rights they didn’t have previously, and companies will need to actively make changes to their data practices to comply.
How Should Companies Prepare for Proposition 24?
While the new legislation won’t go into effect until the start of 2023, consumers’ right
to access their personal information will extend back to data collected by
companies on or after January 1, 2022. That gives businesses just a year to
prepare for these massive changes, so it’s critical they begin their
preparations now. In fact, state-specific legislation like this is likely to push data privacy regulation nationwide. To prepare for the future, businesses must invest
in tools that make it easier to protect the privacy of consumers’ information
and govern that information in compliance with regulations.
Organizations need to build trust with their data — knowing where it lives, where it came from, and who has touched it. For many companies, trust begins with building an automated “as is” data inventory, which collects metadata from sources inside and outside the business. Proposition 24, like other data privacy regulations, requires that companies can quickly locate all sensitive personal information to respond to data consumer requests or opt-outs. A data inventory automates the scanning and identification of sensitive personal data across the entire organization — giving companies a full view of the information they have and where it is.
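As a rough sketch of what the scanning step involves (real inventory tools use far more sophisticated detection than a couple of regular expressions), a minimal scanner might look like:

```python
import re

# Two illustrative patterns only; production inventory tools combine many
# more detectors (checksums, context, ML classifiers) than simple regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(text: str) -> dict:
    """Return all matches of each pattern found in `text`."""
    return {name: rx.findall(text) for name, rx in PATTERNS.items()}
```

Run across every data source, the results of a scanner like this feed the "as is" inventory: which systems hold which categories of sensitive personal information.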
That said, data intelligence is not enough for compliance alone — companies also need visibility into where sensitive personal information resides within their documents, content, and records, too. This is a major roadblock for companies. Most businesses lack the ability both to find sensitive information within content and to associate that information with a specific person — and it’s only getting worse with remote work and content sprawl. Companies must operationalize privacy compliance in order to adhere to consumer requests around their data. They need a governance strategy that can locate personal information anywhere in the enterprise. Having solutions with capabilities such as rules-based retention, redaction, and auditability of access makes this process much easier, especially when responding to consumer questions/requests.
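The rules-based redaction mentioned above can be illustrated minimally, assuming a single hypothetical rule for US Social Security numbers:

```python
import re

# One hypothetical rule: mask anything shaped like a US Social Security
# number. A real engine would drive many such rules from a policy
# catalogue and write each redaction to an audit log.
SSN_RULE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str, mask: str = "[REDACTED]") -> str:
    """Replace every match of the rule's pattern with a mask."""
    return SSN_RULE.sub(mask, text)
```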
By implementing a privacy-aware information management
strategy — for both structured and unstructured data — organizations can
understand their entire ecosystem. Heading into 2021, it will be increasingly
important to proactively seek out dark data, tackle compliance, and prepare for
current and future data privacy regulations like Proposition 24.
It’s no longer enough to simply manage data and content.
As the GDPR, CCPA, and now CPRA have shown, data privacy regulations will only
keep coming — and they will be increasingly targeted, intentional, and perhaps
even stricter. Companies outside of California, or the EU for that matter, must
resist the urge to turn a blind eye while they are not the direct subjects of
data regulations. Because while data privacy laws may sound like distant
thunder today, the lightning is on its way.
Three Reasons the Technical Talent Gap Isn’t to Blame for Failing AI Projects
By David Talby.
A shortage of technical talent has long
been a challenge for getting AI projects off the ground. While research shows
that this may still be the case, it’s not the be-all and end-all and certainly not
the only reason so many AI initiatives are doomed from the start.
Deloitte’s recent State of AI in the Enterprise survey found the type of talent most in-demand — AI developers and engineers, AI researchers, and data scientists — was fairly consistent across all levels of AI proficiency. However, business leaders, domain experts, and project managers fell lower on the list. While there’s no disputing that technical talent is valuable and necessary, the lack of attention on the latter titles should be a bigger part of the conversation.
It’s likely that the technical skills gap will persist for the next few years, as university programs play catch up to real-world applications of AI, and organizations implement internal training or opt for outsourcing entirely. That doesn’t mean businesses can wait for these problems to solve themselves or for the talent pool to grow. In order to avoid being one of the 85 percent of AI projects that fail to deliver on their intended promises, there are three areas organizations can focus on to give their projects a fighting chance.
Organizational Buy-In: AI-Driven Product, Revenue, and Customer Success
Understanding how AI will work within a professional and product environment and how it translates to a better customer experience and new revenue opportunities is critical — and that spans far beyond the IT team. Being able to train and deploy accurate AI models doesn’t address the question of how to most effectively use them to help your customers. Doing this requires educating all organizational disciplines — sales, marketing, product, design, legal, customer success — on why this is useful and how it will impact their job function.
When done well, new capabilities
unlocked by AI enable product teams to completely rethink the user experience.
It’s the difference between adding Netflix or Spotify recommendations as a side
feature versus designing the user interface around content discovery. More
aspirationally, it’s the difference between adding a lane departure alert to
your new car versus building a self-driving vehicle that doesn’t have pedals or
wheels. Cross-functional collaboration and buy-in on AI projects are a vital part of their success and scaling, and should be a priority from the get-go.
Realistic Expectations: The Lab vs. the Real World
We’re at an exciting juncture for AI development, and it’s easy to get caught up in the “new shiny object” mentality. While eagerness to implement new AI-enabled efficiencies is a good thing, jumping in before setting expectations is a sure-fire way to end up disappointed. A real instance of the challenges organizations face when implementing and scaling AI projects comes from a recent Google Research paper about a new deep learning model used to detect diabetic retinopathy from images of patients’ eyes. Diabetic retinopathy, when untreated, causes blindness, but if detected early, it can often be prevented. In response, scientists trained a deep learning model to identify early stages of the disease and accelerate detection and prevention.
Google had access to advanced machines for model training
and data from environments that followed proper protocols for testing. So,
while the technology itself was as accurate as, if not more accurate than, human specialists, this didn’t matter when applied to clinics in rural Thailand.
There, the quality of the machines, lighting in the rooms in the clinic, and
patients’ willingness to participate were, for a host of reasons, quite different from the conditions the model was trained on. The lack of appropriate infrastructure
and understanding of practical limitations is a prime example of the discord
between Data Science success and business success.
The Right Foundation: Tools and Processes to Operate Safely
Successful AI products and services
require applied skills in three layers. First, data scientists must be
available, productively tooled, and have domain expertise and access to
relevant data. While AI technology is becoming better understood, from bias prevention and explainability to concept drift and similar issues, many teams are
still struggling with this first layer of technical issues. Second,
organizations must learn how to deploy and operate AI models in production.
This requires DevOps, SecOps, and newly emerging “AI Ops” tools and processes
to be put in place, so models continue working accurately in production over
time. Third, product managers and business leaders must be involved from the
start in order to redesign new technical capabilities and how they will be
applied to make customers and end-users successful.
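One concrete piece of that second, "AI Ops" layer is monitoring whether production inputs still resemble the training data. A common, though informal, metric is the Population Stability Index; this sketch assumes the distributions have already been binned into proportions:

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions.

    `expected` and `actual` are per-bin proportions (each summing to 1),
    e.g. the training-time and current score distributions of a model.
    A common rule of thumb, not a formal standard, treats PSI > 0.2 as
    drift worth investigating or retraining for.
    """
    eps = 1e-6  # guards against empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )
```

Wiring a check like this into the deployment pipeline is what turns "models continue working accurately over time" from a hope into an alert.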
There’s been tremendous progress in
education and tooling over the past five years, but it’s still early days for
operating AI models in production. Unfortunately, design and product management
are far behind and are becoming one of the most common barriers to AI success.
This is why it might be time for respondents of the aforementioned Deloitte
survey to start putting overall business success and organizational buy-in
before finding the top technical talent to lead the way. The antidote for this
is investing in hands-on education and training, and fortunately, from the
classroom to technical training courses, these are becoming more widely available.
Although a relatively new technology, AI has the power to
change how we work and live for the better. That said, like any technology, AI
success hinges on proper training, education, buy-in, and well-understood
expectations and business value. Aligning all of these factors takes time, so
be patient, and be sure to have a strategy in place to ensure your AI efforts succeed.
Traveling in the Age of COVID-19: Big Data Is Watching
By Bernard Brode.
With news of the first dose of a vaccine successfully administered, it
appears that we might finally be seeing the beginning of the end of the COVID-19
pandemic. However, it’s also clear that the impact of the virus — and the ways
we have responded to it — will last for many years, long after the health and
economic effects have faded.
Those of us who work in technology have been aware of this for some time, of course. Back at the beginning of the pandemic, we were warning that the security of medical devices might become a very real problem this year. Similarly, we warned that the use of big data to fight the pandemic ran the risk of setting a problematic precedent when it came to the right to personal privacy.
We are now living with the consequences of those decisions. Traveling
today means greater privacy intrusion than ever before, and we have the
pandemic to blame for that. In this article, we’ll look at how we ended up in
this position and how we can avoid this becoming the new normal.
Beating the Virus with Big Data
Most of the mainstream analyses of the way that technology has been leveraged to fight the COVID-19 virus have focused on the expansion of data acquisition systems. This was the focus, for instance, of an April article in the New York Times, which set the tone for most of the reporting on the apparent tension between personal privacy and public health surveillance.
That article noted that many countries around the world — from Italy to Israel — have begun to harvest geolocation data from their citizens’ smartphones in order to track their movements. This move was certainly unprecedented and represented a radical expansion of a nation state’s ability to keep track of citizens. In terms of fighting the pandemic, however, it was less than useful.
To understand why, it’s instructive to reflect on this article in HealthITAnalytics, also from April 2020. The interview is with James Hendler, the Tetherless World Professor of Computer, Web, and Cognitive Science at Rensselaer Polytechnic Institute (RPI) and Director of the Rensselaer Institute for Data Exploration and Applications (IDEA). He told the magazine that fighting the virus was not merely a question of being able to collect data; rather, the bottleneck was in being able to manipulate and analyze it in a way that would produce actionable insights.
In other words, Hendler pointed out, fighting the virus is “a big data problem,” and one where “artificial intelligence can play a big role.” And with more than 4.5 billion people already online by the end of 2020, our ability to process and secure this data lags significantly behind our ability to collect it.
This central insight — that analyzing the data produced by large-scale surveillance networks required the deployment of big data tools — is likely to have a remarkable impact on the way that we travel in the next few years.
The biggest impact, for most of us, will be an expansion of the kind of
“intelligent” systems that are used to make personalized recommendations for
products and services to buy. Several of the companies who run such engines
were keen to offer their expertise to public health researchers early in the
pandemic. Amazon Web Services, Google Cloud, and others have all offered
researchers free access to open datasets and analytics tools to help them
develop COVID-19 solutions faster.
Many travelers — indeed, many citizens — should be worried about that. As we noted early this year, asking whether big data can save us from the virus was never really the issue — it was clear that this kind of analysis would be of great utility from a public health perspective. The problem was what would happen to this data after the pandemic and what kind of precedent this surveillance would set.
In other words, most people were happy to have their movements tracked
in order to beat the virus, but will governments ever stop tracking us? Or will
they merely sell this information to advertising companies?
The New Normal?
Consumers are, of course, aware of these issues. Every time there is an expansion in the surveillance infrastructure used by the state and by advertisers, we see a simultaneous rise in search interest related to online privacy tools intended to prevent this kind of tracking.
However, consumers can only go so far when it comes to protecting
themselves and their privacy. Ultimately, in order to prevent our every flight,
drive, and even walk from being tracked, we will need to build a legal framework
that matches the sophistication of the networks used to collect this data.
There are promising signs that this is happening. STAT’s Casey Ross recently wrote about a number of initiatives that seek to put an inherent limit on governmental ability to share location data outside of specific circumstances — such as a global pandemic.
However, most analysts also agree that there is a glaring inconsistency
when it comes to arguments that try to limit governments’ abilities to track
their citizens. This is that many citizens who claim to worry about the privacy
implications of this are happy to share their location data with private
companies who operate under far less stringent protocols and legislation.
As Jack Dunn recently put it on the IAPP website, how can we reasonably evaluate the costs and benefits of Google or Facebook sharing location data with the federal government when it has been perfectly legal for Walgreens to share access to customer data with pharmaceutical advertisers? How does aggregating and anonymizing data safeguard privacy when a user’s personal data can be revealed through other data points?
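Dunn's second question, about re-identification, can be made concrete with k-anonymity: the size of the smallest group of records that share the same quasi-identifiers (here a hypothetical ZIP code, birth year, and gender triple). A minimal check:

```python
from collections import Counter

def k_anonymity(rows):
    """Smallest group size over quasi-identifier tuples in `rows`.

    A result of 1 means at least one record is unique on the
    supposedly anonymous attributes alone, and therefore potentially
    re-identifiable by joining against other data sets.
    """
    return min(Counter(rows).values())
```

This is why "aggregated and anonymized" is a weaker guarantee than it sounds: a handful of coarse attributes is often enough to single a person out.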
This, unfortunately, is the reality of traveling today — that, even if
the government is not tracking your movements, there are plenty of apps on your
phone that probably are. Thus, as it did in many other ways, the pandemic has
done more to exacerbate existing issues with the way we approach technology than to represent a totally unprecedented event.
Not that this makes moving forward after the pandemic any easier, of course. But we should recognize that the issues with big data, and with data acquisition more generally, go much deeper than just the past year.
Slides: Moving from a Relational Model to NoSQL
About the Webinar
Businesses are quickly moving to NoSQL databases to power their modern applications. However, a technology migration involves risk, especially if you have to change your data model. What if you could host a relatively unmodified RDBMS schema on your NoSQL database, then optimize it over time?
We’ll show you how Couchbase makes it easy to:
- Use SQL for JSON to query your data and create joins
- Optimize indexes and perform HashMap queries
- Build applications and analysis with NoSQL
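The webinar's premise, hosting a relatively unmodified relational schema in a document database, can be sketched in a few lines. The `table::id` key and added `type` field are common Couchbase modelling conventions, not requirements:

```python
import json

def row_to_document(table: str, row: dict, pk: str = "id"):
    """Map one relational row to a (key, JSON document) pair.

    The row itself is carried over unmodified apart from the added
    `type` marker, which lets queries filter documents by source table.
    """
    doc = dict(row)
    doc["type"] = table
    return f"{table}::{row[pk]}", json.dumps(doc)
```

Once loaded, a query such as `SELECT * FROM bucket WHERE type = 'orders'` retrieves the migrated rows, and indexes can be added and the model denormalized gradually as optimization demands.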
About the Speaker
Matthew D. Groves, Senior Product Marketing Manager, Couchbase
Matthew D. Groves is a guy who loves to code. It doesn’t matter if it’s C#, jQuery, or PHP: He’ll submit pull requests for anything. He has been coding professionally ever since he wrote a QuickBASIC point-of-sale app for his parents’ pizza shop back in the ’90s. He currently works as a Developer Advocate for Couchbase. His free time is spent with his family, watching the Reds, and getting involved in the developer community. He is the author of AOP in .NET (published by Manning), a Pluralsight author, and a Microsoft MVP.