

macOS Monterey: A cheat sheet



Apple’s powerful new macOS was announced at WWDC 2021. This guide covers everything you need to know about macOS Monterey, including features, requirements and how to get it.

macOS Monterey

Image: Apple

During the keynote of Apple’s WWDC21 conference on June 7, 2021, the company announced its long-awaited new macOS: Monterey. Like WWDC 2020, this year’s week-long conference aimed at developers around the world is virtual and free to the public via Apple’s website, the Apple Developer app, the Apple TV app and YouTube. 

With macOS Monterey, Apple is sticking with the theme of naming its OSs after California locales. This version of macOS offers many new features for users, including Universal Control, Shortcuts and AirPlay for iMac, updated capabilities in FaceTime, a redesign for Safari (including Tab Groups) and more. 

This macOS Monterey cheat sheet details the operating system’s main features, lists which devices support the OS, explains how to get it and more. We’ll update this macOS 12 guide when new features are released.

SEE: How to build a successful developer career (free PDF) (TechRepublic)

What is macOS Monterey?


MacOS Monterey is Apple’s newest operating system, and it’s designed to be completely seamless across all Apple devices. It’s even more customizable than previous OSs, and it allows users to share content with friends and across iPad and iOS like never before. According to Apple, macOS Monterey offers “new ways for users to connect, get more done and work more fluidly across their Apple devices.”

What are the main features of macOS Monterey?

Universal Control
One of the most interesting features Apple announced for macOS Monterey is Universal Control. With this feature, users can utilize a single mouse or keyboard to work between Macs and iPads seamlessly—no setup required. Users can drag and drop files between devices; this is intended to allow users to work with fewer interruptions and increase productivity.  

Improvements to Safari
Safari is getting a makeover. Now, users can use Apple’s new tab design to see more of the page as they scroll. The new tab bar assumes the color of the webpage and combines tabs, the toolbar and search field into a single place.

Apple also introduced Tab Groups for Safari, which is ideal for planning projects and storing tabs that users visit on a regular basis. Tab Groups syncs across Mac, iPhone and iPad to allow for easy sharing. Safari will now offer Privacy Reports and Intelligent Tracking Prevention for a more secure browsing experience.  

Shortcuts for Mac
Shortcuts will now be available on Macs as well. With Shortcuts, users can get more done faster and automate tasks for productivity. As with the iPhone and iPad, Mac users can quickly accomplish tasks with their most-used apps. The Shortcuts Editor on Mac allows users to customize shortcuts to match workflows. It is available via Finder, the menu bar, Spotlight and hands-free with Siri. Automator workflows can also be imported, making it easier to get up and running right away. 

SEE: Mobile device security policy (TechRepublic Premium)

New FaceTime capabilities
FaceTime has new audio and video features that make calls feel more natural and lifelike. With spatial audio, voices in a FaceTime call sound as if they are coming from where each person is positioned on the screen. Voice Isolation uses machine learning to eliminate background noise and ensure the user’s voice comes through clearly, while Wide Spectrum allows all the sound in the area to come through so participants can hear everything. Using the Apple Neural Engine in the M1 chip, Portrait Mode blurs the user’s background for a stunning video effect, and the new Grid View shows participants in same-sized tiles.

SharePlay
With SharePlay, users can have shared experiences while on a FaceTime call; they can share their favorite music, TV shows, movies, projects and more with friends, family or colleagues in real time, and collaborate in apps through screen sharing. SharePlay also has an API built for easy adoption, meaning third-party developers can bring their apps into FaceTime.

Shared with You 
This feature makes it simple to locate content that’s shared through Messages, including photos, videos, articles and more. Users can review shared content via the Shared with You tab within Photos, Safari, Apple Podcasts, Apple News and the Apple TV app.

AirPlay for Mac
Now users can play, present and share anything with whomever they choose across all Apple devices. The high-fidelity sound system on the Mac can also be used as an AirPlay speaker, allowing users to play music and podcasts through it or use the Mac as a secondary speaker for multi-room audio. 

SEE: All of TechRepublic’s cheat sheets and smart person’s guides

Updates to Notes
In Notes, users can work through projects with friends or colleagues, add mentions, see edits in the new Activity View and categorize Notes with tags to quickly and easily find them in the new Tag Browser and in tag-based Smart Folders.

Quick Note
According to Apple, Quick Note is “a new way for users to jot down notes on any app or website systemwide, making it easy to capture thoughts and ideas wherever inspiration strikes.” Links from any app can also be added to Quick Notes.

iCloud+
Apple’s iCloud+ offers users new premium features at no additional cost, including Hide My Email, expanded HomeKit Secure Video support and an innovative new internet privacy service called iCloud Private Relay.

Focus
With the new Focus feature, users can automatically filter out notifications unrelated to their current activity and signal their status to let others know when they are not available. Focus settings sync automatically across a user’s other Apple devices and can be customized based on the current activity.

Privacy updates 
With Mail Privacy Protection, users can choose whether emails can collect information about their Mail activity. The Mac recording indicator now also shows which app may be accessing the Mac’s microphone.

SEE: How to migrate to a new iPad, iPhone, or Mac (TechRepublic Premium)

Accessibility updates 
Now anyone can add alternative image descriptions using Markup. Other updates include improved Full Keyboard Access and new cursor customization options that allow for more flexibility when navigating the Mac.

Updates to Maps
Maps now features a new interactive globe along with an immersive, detailed experience for select cities. 

Live Text
Using on-device machine learning to detect text in photos (phone numbers, websites, addresses, tracking numbers, etc.), Live Text allows users to copy and paste, make phone calls, open websites and find more information. The Visual Lookup feature is intended to help users discover and learn about various topics like animals, art, landmarks, etc. These features work across macOS, including in apps like Photos, Messages and Safari.

AirPods Pro and AirPods Max 
AirPods Pro and AirPods Max now use spatial audio on Macs with the M1 chip to deliver a better listening experience.

SEE: 10 ways to prevent developer burnout (free PDF) (TechRepublic)

Which devices support macOS Monterey?

MacOS Monterey is available on MacBook Pro (2016 and later), MacBook (2016 and later), MacBook Air (2018 and later), iMac (2017 and later), iMac (5K Retina 27-inch, Late 2015), iMac Pro, Mac mini (2018 and later) and Mac Pro (2019). Universal Control is also supported on iPad Pro, iPad Air (3rd generation and later), iPad (6th generation and later) and iPad mini (5th generation and later). 

To use Universal Control between a Mac and an iPad, both devices must be signed in to iCloud with the same Apple ID using two-factor authentication, and they cannot share a cellular and internet connection. To use the feature wirelessly, both devices must have Bluetooth, Wi-Fi and Handoff turned on and must be within 30 feet of each other. To use it over USB, you must trust your Mac on the iPad.

When can I get macOS Monterey?

According to Apple’s site, the developer beta of macOS Monterey is available to Apple Developer Program members starting June 7, 2021. The public beta will be available to Mac users in July 2021, and macOS Monterey will be available in fall 2021 as a free software update. Some features may not be available in all regions or languages.

Also see



Bank of America uses AI to predict business volatility, acquisitions



Bank of America is using artificial intelligence (AI) to reliably predict which companies are likely to be acquired close to a year in advance — just one way the $2.2 trillion bank is employing AI in its trading business. At a North America Fintech Connect virtual conference today, Rajesh Krishnamachari, global head of data science […]



SoftBank Vision Fund 2 leads $140M funding in Vishal Sikka’s Vianai



Vianai Systems, an AI startup founded by Vishal Sikka, former chief executive of Indian IT services giant Infosys, said on Wednesday it has raised $140 million in a round led by SoftBank Vision Fund 2.

The two-year-old startup said a number of industry luminaries also participated in the new round, which brings its total to-date raise to at least $190 million. The startup raised $50 million in its seed financing round, but there’s no word on the size of its Series A round.

Details about what exactly the Palo Alto-headquartered startup does are unclear. In a press statement, Dr. Vishal Sikka said the startup is building a “better AI platform, one that puts human judgment at the center of systems that bring vast AI capabilities to amplify human potential.” Sikka, 54, resigned from the top role at Infosys in 2017 after months of acrimony between the board and a cohort of founders.

“Vianai helps its customers amplify the transformation potential within their organizations using a variety of advanced AI and ML tools with a distinct approach in how it thoughtfully brings together humans with technology. This human-centered approach differentiates Vianai from other platform and product companies and enables its customers to fulfill AI’s true promise,” the startup said.

The startup claims its customers already include many of the world’s largest and most respected businesses, including insurance giant Munich Re.

Its investors include Jim Davidson (co-founder of Silver Lake), Henry Kravis and George Roberts (co-founders of KKR), and Jerry Yang (founding partner of AME and co-founder of Yahoo). Dr. Fei-Fei Li (co-director of the Stanford Institute for Human-Centered AI) has joined Vianai Systems’ advisory board.

“With the AI revolution underway, we believe Vianai’s human-centered AI platform and products provide global enterprises with operational and customer intelligence to make better business decisions,” said Deep Nishar, senior managing partner at SoftBank Investment Advisers, in a statement. “We are pleased to partner with Dr. Sikka and the Vianai team to support their ambition to fulfill AI’s promise to drive fundamental digital transformations.”



An introduction to Explainable AI (XAI) and Explainable Boosting Machines (EBM)



An introduction to Explainable AI (XAI) and Explainable Boosting Machines (EBM)

Understanding why your AI-based models make the decisions they do is crucial for deploying practical solutions in the real-world. Here, we review some techniques in the field of Explainable AI (XAI), why explainability is important, example models of explainable AI using LIME and SHAP, and demonstrate how Explainable Boosting Machines (EBMs) can make explainability even easier.

By Chaitanya Krishna Kasaraneni, Data Science Intern at Predmatic AI.


Photo by Rock’n Roll Monkey on Unsplash.

In recent times, machine learning has become the core of developments in many fields such as sports, medicine, science, and technology. Machines (computers) have become so intelligent that they have even defeated professionals in games like Go. Such developments raise the question of whether machines would also make better drivers (autonomous vehicles) or even better doctors.

In many machine learning applications, the users rely on the model to make decisions. But, a doctor certainly cannot operate on a patient simply because “the model said so.” Even in low-risk situations, such as when choosing a movie to watch from a streaming platform, a certain measure of trust is required before we surrender hours of our time based on a model.

Despite the fact that many machine learning models are black boxes, understanding the rationale behind the model’s predictions would certainly help users decide when to trust or not to trust their predictions. This “understanding the rationale” leads to the concept called Explainable AI (XAI).

What is Explainable AI (XAI)?

Explainable AI refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts. [Wikipedia]

How is Explainable AI different from Artificial Intelligence?

Difference Between AI and XAI.

In general, AI arrives at a result using an ML algorithm, but the architects of the AI systems do not fully understand how the algorithm reached that result.

On the other hand, XAI is a set of processes and methods that allows users to understand and trust the results/output created by a machine learning model/algorithm. XAI is used to describe an AI model, its expected impact, and potential biases. It helps characterize model accuracy, fairness, transparency, and outcomes in AI-powered decision-making. Explainable AI is crucial for an organization in building trust and confidence when putting AI models into production. AI explainability also helps an organization adopt a responsible approach to AI development.

Famous examples of such explainers are Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP).

  • LIME explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
  • SHAP is a game theoretic approach to explain the output of any machine learning model.

Explaining Predictions using SHAP

SHAP is a novel approach to XAI developed by Scott Lundberg at Microsoft and eventually open sourced.

SHAP has a strong mathematical foundation. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see papers for details).

Shapley values

With Shapley values, each prediction can be broken down into individual contributions for every feature.

For example, suppose your input data has four features (x1, x2, x3, x4) and the output of a model is 75. Using Shapley values, you can say that feature x1 contributed 30, feature x2 contributed 20, feature x3 contributed -15, and feature x4 contributed 40. The sum of these four Shapley values is 30 + 20 - 15 + 40 = 75, i.e., the output of your model. This is great in principle, but sadly, these values are extremely hard to calculate.

For a general model, the time taken to compute Shapley values is exponential in the number of features. If your data has 10 features, this might still be okay, but with more features, say 20, it may already be infeasible depending on your hardware. To be fair, if your model consists of trees, there are faster approximations for computing Shapley values, but they can still be slow.
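To make this concrete, here is a brute-force sketch of exact Shapley values in plain Python. It is an illustration rather than the SHAP library: the four-feature `model` and the all-zero baseline are hypothetical, chosen to reproduce the numbers in the worked example above, and the coalition value v(S) is simplified to "evaluate the model with the missing features set to their baseline values." The nested loop over coalitions is exactly the exponential cost just described.

```python
import math
from itertools import combinations

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating every feature coalition.

    v(S) is approximated by evaluating f with features outside S
    replaced by baseline values (a common simplification). The loop
    visits all 2^(n-1) coalitions per feature: exponential cost.
    """
    n = len(x)

    def v(S):
        return f([x[i] if i in S else baseline[i] for i in range(n)])

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                S = set(subset)
                weight = (math.factorial(len(S))
                          * math.factorial(n - len(S) - 1)
                          / math.factorial(n))
                total += weight * (v(S | {i}) - v(S))
        phi.append(total)
    return phi

# Hypothetical linear model matching the worked example: contributions
# of 30, 20, -15 and 40 relative to an all-zero baseline.
model = lambda z: 30 * z[0] + 20 * z[1] - 15 * z[2] + 40 * z[3]
phi = shapley_values(model, [1, 1, 1, 1], [0, 0, 0, 0])
# phi is approximately [30.0, 20.0, -15.0, 40.0], and sum(phi) is the
# model output of 75, illustrating the additivity property.
```

For a linear model, the Shapley values reduce to the individual terms, which is why they match the example exactly.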

SHAP using Python

In this article, we’ll be using red wine quality data to understand SHAP. The target value of this dataset is the quality rating from low to high (0–10). The input variables are the content of each wine sample, including fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, sulfates, and alcohol. There are 1,599 wine samples. The code can be found via this GitHub link.

In this post, we will build a random forest regression model and use the TreeExplainer in SHAP. SHAP provides an explainer for any ML algorithm, whether tree-based or not. If your model is a tree-based machine learning model, you should use TreeExplainer(), which has been optimized to render fast results. If your model is a deep learning model, use DeepExplainer(). For all other types of algorithms (such as KNNs), use the model-agnostic KernelExplainer().

The SHAP value works for either the case of a continuous or binary target variable.

Variable Importance Plot — Global Interpretability

A variable importance plot lists the most significant variables in descending order. The top variables contribute more to the model than the bottom ones and thus have high predictive power. Please refer to this notebook for code.

Variable Importance Plot using SHAP.

Summary Plot

Although SHAP does not have a built-in function for this plot, you can produce it using the matplotlib library.

The SHAP value plot can further show the positive and negative relationships of the predictors with the target variable.

Summary Plot.

This plot is made of all the dots in the train data. It demonstrates the following information:

  • Variables are ranked in descending order.
  • The horizontal location shows whether the effect of that value is associated with a higher or lower prediction.
  • The color shows whether that variable is high (in red) or low (in blue) for that observation.

Using SHAP, we can generate partial dependence plots. The partial dependence plot shows the marginal effect one or two features have on the predicted outcome of a machine learning model (J. H. Friedman 2001). It tells whether the relationship between the target and a feature is linear, monotonic, or more complex.
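The computation behind a one-dimensional partial dependence curve is simple enough to sketch directly. The toy `model`, dataset and grid below are invented for illustration; a real analysis would use the trained random forest and the wine data.

```python
def partial_dependence(f, data, feature, grid):
    """Marginal effect of one feature: for each grid value v, force
    `feature` to v in every row and average the model's predictions
    (Friedman, 2001)."""
    averaged = []
    for v in grid:
        preds = [f([v if j == feature else row[j] for j in range(len(row))])
                 for row in data]
        averaged.append(sum(preds) / len(preds))
    return averaged

# Hypothetical model: linear in feature 0, nonlinear in feature 1.
model = lambda z: 2 * z[0] + z[1] ** 2
data = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]]
pd_curve = partial_dependence(model, data, feature=0, grid=[0.0, 1.0, 2.0])
# Successive differences of pd_curve are about 2.0, exposing the
# linear (and hence monotonic) effect of feature 0 on the output.
```

A flat curve would indicate no marginal effect, while a curve that bends reveals the "more complex" relationships mentioned above.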

Black-box explanations are much better than no explanation at all. However, as we have seen, both LIME and SHAP have some shortcomings. It would be better if the model is performing well and is interpretable at the same time—Explainable Boosting Machine (EBM) is a representative of such a method.

Explainable Boosting Machine (EBM)

EBM is a glassbox model designed to have accuracy comparable to state-of-the-art machine learning methods like Random Forest and BoostedTrees, while being highly intelligible and explainable.

The EBM Algorithm is a fast implementation of the GA²M algorithm. In turn, the GA²M algorithm is an extension of the GAM algorithm. Therefore, let’s start with what the GAM algorithm is.

GAM Algorithm

GAM stands for Generalized Additive Model. It is more flexible than logistic regression but still interpretable. The hypothesis function for GAM is as follows:

g(E[y]) = β₀ + f₁(x₁) + f₂(x₂) + … + fₙ(xₙ)

The key part to notice is that instead of a linear term βᵢxᵢ for a feature, we now have a function fᵢ(xᵢ). We will come back later to how this function is computed in EBM.

One limitation of GAM is that each feature function is learned independently. This prevents the model from capturing interactions between features and pushes the accuracy down.

GA²M algorithm

GA²M seeks to improve on this. To do so, it also considers some pairwise interaction terms in addition to the function learned for each feature. This is not an easy problem to solve because there is a large number of interaction pairs to consider, which increases compute time drastically. GA²M uses the FAST algorithm to pick useful interactions efficiently. The hypothesis function for GA²M is as follows; note the extra pairwise interaction terms:

g(E[y]) = β₀ + Σᵢ fᵢ(xᵢ) + Σᵢ≠ⱼ fᵢⱼ(xᵢ, xⱼ)

By adding pairwise interaction terms, we get a stronger model while still being interpretable. This is because one can use a heatmap and visualize two features in 2D and their effect on the output clearly.

EBM Algorithm

Finally, let us talk about the EBM algorithm. In EBM, we learn each feature function fi(xi) using methods such as bagging and gradient boosting. To make the learning independent of the order of features, the authors use a very low learning rate and cycle through the features in a round robin fashion. The feature function fi for each feature represents how much each feature contributes to the model’s prediction for the problem and is hence directly interpretable. One can plot the individual function for each feature to visualize how it affects the prediction. The pairwise interaction terms can also be visualized on a heatmap as described earlier.

This implementation of EBM is also parallelizable, which is invaluable in large-scale systems. It also has the added advantage of having an extremely fast inference time.

Training the EBM

The EBM training part uses a combination of boosted trees and bagging. A good definition would probably be bagged boosted bagged shallow trees.

Shallow trees are trained in a boosted way. These are tiny trees (with a maximum of 3 leaves by default). Also, the boosting process is specific: Each tree is trained on only one feature. During each boosting round, trees are trained for each feature one after another. It ensures that:

  • The model is additive.
  • Each shape function uses only one feature.

This is the base of the algorithm, but other techniques further improve the performance:

  • Bagging, on top of this base model.
  • Optional bagging, for each boosting step. This step is disabled by default because it increases the training time.
  • Pairwise interactions.

Depending on the task, the third technique can dramatically boost performance. Once a model is trained with individual features, a second pass is done (using the same training procedure) but with pairs of features. The pair selection uses a dedicated algorithm that avoids trying all possible combinations (which would be infeasible when there are many features).

Finally, after all these steps, we have a tree ensemble. These trees are discretized simply by running them with all the possible values of the input features. This is easy since all features are discretized. So the maximum number of values to predict is the number of bins for each feature. In the end, these thousands of trees are simplified to binning and scoring vectors for each feature.
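The round-robin training loop described above can be sketched in a few dozen lines of plain Python. This is a toy illustration, not InterpretML's implementation: it replaces the tiny boosted trees with direct binned-mean updates, uses fixed equal-width binning, and omits bagging and pairwise terms; the synthetic additive dataset is invented for the demo.

```python
def fit_ebm_sketch(X, y, n_bins=10, rounds=300, lr=0.1):
    """Toy EBM-style trainer: one binned shape function (scoring
    vector) per feature, updated round-robin with a small learning
    rate so no single feature dominates because of ordering."""
    n, d = len(X), len(X[0])
    lo = [min(r[j] for r in X) for j in range(d)]
    hi = [max(r[j] for r in X) for j in range(d)]

    def bin_of(j, v):  # equal-width binning (real EBMs bin smarter)
        if hi[j] == lo[j]:
            return 0
        return min(int((v - lo[j]) / (hi[j] - lo[j]) * n_bins), n_bins - 1)

    bins = [[bin_of(j, X[i][j]) for j in range(d)] for i in range(n)]
    intercept = sum(y) / n
    shapes = [[0.0] * n_bins for _ in range(d)]   # scoring vectors
    pred = [intercept] * n

    for _ in range(rounds):
        for j in range(d):                        # round-robin over features
            tot, cnt = [0.0] * n_bins, [0] * n_bins
            for i in range(n):                    # residuals per bin
                tot[bins[i][j]] += y[i] - pred[i]
                cnt[bins[i][j]] += 1
            for b in range(n_bins):               # nudge the shape function
                if cnt[b]:
                    shapes[j][b] += lr * tot[b] / cnt[b]
            for i in range(n):                    # refresh predictions
                pred[i] += lr * tot[bins[i][j]] / max(cnt[bins[i][j]], 1)

    def predict(x):                               # purely additive model
        return intercept + sum(shapes[j][bin_of(j, x[j])] for j in range(d))

    return predict, shapes

# Synthetic additive target: y = 3*x0 - 2*x1 (no noise, no interactions).
X = [[i / 19, (i * 7 % 20) / 19] for i in range(20)]
y = [3 * a - 2 * b for a, b in X]
predict, shapes = fit_ebm_sketch(X, y)
mse = sum((predict(x) - t) ** 2 for x, t in zip(X, y)) / len(y)
```

Because the final model is a sum of per-feature scoring vectors, each learned shape can be plotted and inspected directly, which is the source of EBM's interpretability.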

EBM using Python

We will use the same red wine quality data to understand InterpretML. The code can be found via this GitHub Link.

Exploring Data

The “summary” of training data displays a histogram of the target variable.

Summary displaying a histogram of Target.

When an individual feature (here fixed acidity) is selected, the graph shows the Pearson Correlation of that feature with the target. Also, a histogram of the selected feature is shown in blue color against the histogram of the target variable in red color.

Individual Feature against Target.

Training the Explainable Boosting Machine (EBM)

The ExplainableBoostingRegressor() model of the InterpretML library with the default hyper-parameters is used here. RegressionTree() and LinearRegression() are also trained for comparison.

ExplainableBoostingRegressor Model.

Explaining EBM Performance

RegressionPerf() is used to assess the performance of each model on the test data. The R-squared value of EBM is 0.37, which outperforms the R-squared values of the linear regression and regression tree models.

Performance of (a) EBM, (b) Linear Regression, and (c) Regression Tree.

The global and local interpretability of each model can also be generated using the model.explain_global() and model.explain_local() methods, respectively.

InterpretML also provides a feature to combine everything and generate an interactive dashboard. Please refer to the notebook for graphs and a dashboard.


With the growing demand for explainability and the shortcomings of existing XAI methods, the days when one had to choose between accuracy and explainability are long gone. EBMs can be as efficient as boosted trees while being as easily explainable as logistic regression.

Original. Reposted with permission.




Digitizing Retail with New IoT Chip Adoption



Illustration: © IoT For All

Qualcomm has announced seven new chips designed to support new IoT devices in the retail sector. The line spans entry-level and high-end parts and includes chips with AI and image-processing technology that will help make camera-equipped IoT devices more effective.

The launch is part of a broader trend in the IoT industry towards new applications in the retail sector — where business as usual has been significantly disrupted by the COVID-19 pandemic. These innovations could support major changes in the retail industry — like smart stores, interactive displays, and streamlined payment options.

Qualcomm Expands Chip Options to Support Retail IoT

The new chips, which may help accelerate the adoption of “smart retail,” are also designed to support new IoT applications in the warehousing and manufacturing sectors.

The line includes both entry-level chips, designed to support simpler IoT options for retailers and other businesses, as well as high-end chips that support a new range of devices and IoT features, including some powered by AI.

According to Qualcomm senior director of product management Nagaraju Naik, the high-end chips will support high-resolution video cameras and enable features like electronic pan, tilt, and zoom (or ePTZ).

The highest-end of the new chips accomplishes this with a range of features not present in many existing IoT chips — including reduced latency, triple-image signal processor (ISP) architecture, and an AI engine that supports up to seven concurrent cameras with 4K resolution each.

For several retail IoT applications — like interactive displays or security cameras that assist in smart store operations — these chips could help significantly improve device performance. Naik also said the chips would support new checkout and payment processing options — like “touchless [payment], smart carts, self-checkout, and mobile payments.”

In addition to these retail applications, the high-end chips will enable devices like autonomous picking robots in the manufacturing and warehousing industries.

IoT May Help Retailers Respond to a Changing Market

COVID-19 accelerated several existing trends in retail, and it’s likely that the pandemic significantly altered how consumers shop. According to research from WSL Strategic Retail, 48% of the population say they are now shopping for others they weren’t shopping for before the pandemic.

At the same time, the number of consumers shopping primarily or exclusively online has grown rapidly, and some industry observers believe these consumers will continue to shop online long after it is safe to return to stores.

New practices like Omni-shopping — the practice of consumers shopping in-store and using a retailer’s online storefront — will likely inform the tactics retailers will need to adopt if they want to succeed post-COVID-19.

The potential IoT offers, both in terms of data gathering and streamlining the in-store shopping experience, could be critical for retailers.

IoT devices enable touchless and smart payment options, such as allowing consumers to check out without needing to touch a credit card reader or similar device. In some cases, the new tech may enable checkout processes that do not require interacting with a cashier at all.

This new checkout experience is both streamlined and potentially more hygienic than the conventional experience. As a result, it could be appealing to customers who have left physical stores for convenient online shopping.

Novel IoT applications enabled by hardware like Qualcomm’s new chip line could help accelerate the digitization of retail over the next few years.

As data-gathering store sensors and interactive advertisements become more powerful and cost-effective, they will likely help businesses personalize advertising, optimize store layouts, and improve supply chain management.

These shifts could make in-store shopping a better proposition for customers who can just as easily shop online.

How New IoT Tech May Shape Retail’s Digital Future

The IoT industry has begun to invest in retail technology seriously. New hardware like Qualcomm’s IoT chips will likely help enable more powerful and cost-effective smart retail devices.

As the retail industry digitizes and adapts to the post-COVID-19 world, these devices could prove invaluable. Customers are turning away from in-store retail in favor of online shopping. Still, process changes and personalization made possible by new IoT technology could convince consumers to return to physical stores.
