Big Data

Capitalizing on remote work, U.S. cities draw in tech workers

By Danielle Kaye

NEW YORK (Reuters) – At the start of the COVID-19 pandemic, Jaleesa Garland, a marketing manager at an e-commerce startup, came across an opportunity to relocate from the San Francisco Bay Area to Tulsa, Oklahoma – a city previously not on her radar.

It was an enticing offer: $10,000 in relocation assistance and community-building opportunities provided by a recruitment initiative called Tulsa Remote. When the pandemic hit, her company got rid of its San Francisco office and allowed employees to work from anywhere. Garland moved to the south-central U.S. city with a population of about 400,000 in October 2020.

“The pandemic was fundamental to a lot of people’s decisions to move,” Garland said. “I couldn’t afford Bay Area rent, electricity – all of those things add up. I have so much more wiggle room here in Tulsa.”

About 30% of remote workers plan to move, according to two recent surveys: an April poll of 1,000 tech workers by the nonprofit One America Works, and a June survey for MakeMyMove of 1,006 remote workers nationwide, which asked about moving plans over the next 18 months.

Facebook and Twitter are among the major tech companies allowing employees to work from home if their jobs can be done remotely. According to a Twitter spokesperson, embracing remote work is in part an effort to attract more diverse talent.

Smaller cities typically aim to support dozens or hundreds of remote-worker moves annually. That does not threaten Silicon Valley’s dominance of tech, but it could allow California companies to become more diverse, and it might make them try harder to keep workers.

“The cost-benefit dynamics are quickly shifting away from us, and this must be addressed,” Jason Baker, senior vice president of transportation, health and housing at the tech industry’s Silicon Valley Leadership Group, said in a statement.

And the numbers mean a lot for some towns and cities that have seen “brain drains” to larger metropolitan areas, said Prithwiraj Choudhury, associate professor at the Harvard Business School.

Tulsa Remote, which has helped relocate over 900 people since 2018, has received more than 23,000 applications this year, up from around 15,000 in all of last year, a trend echoed by smaller programs in Alabama and Arkansas. Existing relocation initiatives are redoubling their efforts, and new ones, including one in Chicago, are joining in.

“It’s important that we move quickly and tap into this moment of reconsideration,” said Erin Amico, chief operating officer at P33, a nonprofit that launched a campaign in May to bring tech talent to Chicago.

The number of applicants for the Remote Shoals relocation program in Alabama jumped from fewer than 25 in February 2020 to almost 200 in April 2021. About 50% of program participants have jobs in tech. In January, Remote Shoals successfully petitioned its board to double its funding to $500,000, enough to help relocate 50 people.

More than 90% of first-year Tulsa Remote members have stayed in Tulsa.

Social media company Reddit has seen employees relocate amid the shift to permanent remote work, but Chief People Officer Nellie Peshkov says “it’s still too soon to tell if it’s a long-term shift.” Reddit pays its employees the same regardless of where they’re based, while bigger companies cut pay based on cost of living, a strategy some employees publicly criticize.

Garland herself says that, for all the benefits of Tulsa, her ties to the San Francisco Bay Area are likely to pull her back. “Any move that I make is temporary because at some point in time, I’ll go back home,” she said.

(Reporting by Danielle Kaye; editing by Peter Henderson and Lisa Shumaker)

Image Credit: Reuters

Source: https://datafloq.com/read/capitalizing-remote-work-us-cities-draw-tech-workers/16333

Big Data

If you did not already know

DataOps
DataOps is an automated, process-oriented methodology, used by analytics and data teams, to improve the quality and reduce the cycle time of data analytics. While DataOps began as a set of best practices, it has now matured into a new and independent approach to data analytics. DataOps applies to the entire data lifecycle, from data preparation to reporting, and recognizes the interconnected nature of the data analytics team and information technology operations. From a process and methodology perspective, DataOps applies Agile software development, DevOps practices, and the statistical process control used in lean manufacturing to data analytics.

In DataOps, development of new analytics is streamlined using Agile software development, an iterative project management methodology that replaces the traditional sequential Waterfall methodology. Studies show that software development projects complete significantly faster, and with far fewer defects, when Agile development is used. The Agile methodology is particularly effective in environments where requirements evolve quickly – a situation well known to data analytics professionals. DevOps focuses on continuous delivery by leveraging on-demand IT resources and by automating the testing and deployment of analytics. This merging of software development and IT operations has improved the velocity, quality, predictability, and scale of software engineering and deployment, and DataOps seeks to bring these same improvements to data analytics.

Like lean manufacturing, DataOps uses statistical process control (SPC) to monitor and control the data analytics pipeline. With SPC in place, the data flowing through an operational system is constantly monitored and verified to be working; if an anomaly occurs, the data analytics team can be notified through an automated alert. DataOps is not tied to a particular technology, architecture, tool, language, or framework. Tools that support DataOps promote collaboration, orchestration, agility, quality, security, access, and ease of use. …
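
To make the SPC idea concrete, here is a minimal sketch of monitoring a pipeline metric against Shewhart-style control limits and raising an alert on an out-of-control value. The metric name, values, and threshold are invented for illustration; this is a toy, not any particular DataOps tool.

```python
import statistics

def outside_control_limits(history, new_value, k=3.0):
    """Shewhart-style check: flag new_value if it falls outside
    mean +/- k standard deviations of the historical metric values."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return abs(new_value - mean) > k * sd

# Hypothetical pipeline metric: daily row counts from an ingest job.
row_counts = [10_120, 9_980, 10_240, 10_050, 10_180]
today = 4_300  # a suspiciously small batch

if outside_control_limits(row_counts, today):
    print("ALERT: row count outside control limits; notify the data team")
```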

CoSegNet
We introduce CoSegNet, a deep neural network architecture for co-segmentation of a set of 3D shapes represented as point clouds. CoSegNet takes as input a set of unsegmented shapes, proposes per-shape parts, and then jointly optimizes the part labelings across the set subjected to a novel group consistency loss expressed via matrix rank estimates. The proposals are refined in each iteration by an auxiliary network that acts as a weak regularizing prior, pre-trained to denoise noisy, unlabeled parts from a large collection of segmented 3D shapes, where the part compositions within the same object category can be highly inconsistent. The output is a consistent part labeling for the input set, with each shape segmented into up to K (a user-specified hyperparameter) parts. The overall pipeline is thus weakly supervised, producing consistent segmentations tailored to the test set, without consistent ground-truth segmentations. We show qualitative and quantitative results from CoSegNet and evaluate it via ablation studies and comparisons to state-of-the-art co-segmentation methods. …
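
The "group consistency loss expressed via matrix rank estimates" can be illustrated with a small sketch: stack each shape's part-label usage into a matrix and use the nuclear norm (sum of singular values) as a differentiable low-rank proxy, so that consistent labelings across the set score a lower loss. The tensor shapes and the choice of the nuclear norm as the rank surrogate are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def group_consistency_loss(part_probs):
    """part_probs: (num_shapes, num_parts) tensor, e.g. each shape's
    average soft part-label distribution. The nuclear norm of the
    stacked matrix is a convex surrogate for its rank: if all shapes
    use parts the same way, the matrix is close to rank 1."""
    return torch.linalg.matrix_norm(part_probs, ord="nuc")

# Toy example: 4 shapes, 3 candidate parts.
consistent = torch.tensor([[0.6, 0.3, 0.1]] * 4)
inconsistent = torch.rand(4, 3)
print(group_consistency_loss(consistent))    # low: matrix is rank 1
print(group_consistency_loss(inconsistent))  # higher: labelings disagree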

Stochastic Computation Graph (SCG)
Stochastic computation graphs are directed acyclic graphs that encode the dependency structure of computation to be performed. The graphical notation generalizes directed graphical models. …
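
A small numeric sketch of what such graphs enable: when a stochastic node sits between the parameters and the loss, the score-function (likelihood-ratio) estimator gives an unbiased gradient of the expected loss, which is the kind of estimator SCG frameworks derive automatically. The Bernoulli example below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_expected_loss(theta, n=100_000):
    """Score-function estimate of d/dtheta E_{x~Bernoulli(theta)}[f(x)],
    using grad = E[f(x) * d log p(x; theta) / dtheta]."""
    x = (rng.random(n) < theta).astype(float)
    f = (x - 1.0) ** 2                          # example loss f(x)
    score = x / theta - (1 - x) / (1 - theta)   # d log p / dtheta
    return np.mean(f * score)

# Here E[f] = 1 - theta, so the exact gradient is -1.
print(grad_expected_loss(0.3))  # approximately -1
```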

Smooth Density Spatial Quantile Regression
We derive the properties and demonstrate the desirability of a model-based method for estimating the spatially-varying effects of covariates on the quantile function. By modeling the quantile function as a combination of I-spline basis functions and Pareto tail distributions, we allow for flexible parametric modeling of the extremes while preserving non-parametric flexibility in the center of the distribution. We further establish that the model guarantees the desired degree of differentiability in the density function and enables the estimation of non-stationary covariance functions dependent on the predictors. We demonstrate through a simulation study that the proposed method produces more efficient estimates of the effects of predictors than other methods, particularly in distributions with heavy tails. To illustrate the utility of the model we apply it to measurements of benzene collected around an oil refinery to determine the effect of an emission source within the refinery on the distribution of the fence line measurements. …
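
To make the construction concrete, here is a rough sketch of a quantile function with a flexible center and generalized Pareto tails. Note the simplifications: piecewise-linear interpolation between quantile knots stands in for the paper's I-spline basis, the knot values and tail parameters are invented, and no spatial covariates are modeled.

```python
import numpy as np
from scipy.stats import genpareto

# Invented quantile knots for the central part of the distribution.
tau_knots = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
q_knots = np.array([-1.6, -0.7, 0.0, 0.7, 1.6])  # must be non-decreasing

def quantile_fn(tau, xi=0.2, scale=0.8):
    """Quantile function: monotone interpolation between knots in the
    center, generalized Pareto (GPD) tails beyond the outermost knots."""
    tau = np.atleast_1d(np.asarray(tau, dtype=float))
    out = np.interp(tau, tau_knots, q_knots)      # central region
    hi = tau > tau_knots[-1]
    lo = tau < tau_knots[0]
    # Upper tail: GPD quantile of the exceedance probability.
    p_hi = (tau[hi] - tau_knots[-1]) / (1.0 - tau_knots[-1])
    out[hi] = q_knots[-1] + genpareto.ppf(p_hi, xi, scale=scale)
    # Lower tail: mirrored GPD below the first knot.
    p_lo = (tau_knots[0] - tau[lo]) / tau_knots[0]
    out[lo] = q_knots[0] - genpareto.ppf(p_lo, xi, scale=scale)
    return out

print(quantile_fn([0.01, 0.50, 0.99]))  # heavy tails, smooth center
```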

Source: https://analytixon.com/2021/10/24/if-you-did-not-already-know-1540/

Big Data

If you did not already know

Correntropy
Correntropy is a nonlinear similarity measure between two random variables.
See: “Learning with the Maximum Correntropy Criterion Induced Losses for Regression”.
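
Concretely, the usual sample estimator averages a Gaussian kernel over the differences of paired samples; a minimal sketch (the kernel width sigma is a free choice):

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample correntropy: the mean Gaussian kernel of the differences,
    (1/N) * sum_i exp(-(x_i - y_i)^2 / (2 * sigma^2))."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.mean(np.exp(-d**2 / (2.0 * sigma**2))))

x = np.random.default_rng(0).normal(size=1000)
print(correntropy(x, x))        # 1.0: identical variables
print(correntropy(x, x + 2.0))  # smaller: less similar
```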


Patient Event Graph (PatientEG)
Medical activities, such as diagnoses, medicine treatments, and laboratory tests, as well as the temporal relations between these activities, are basic concepts in clinical research. However, existing relational data models for electronic medical records (EMRs) lack explicit and accurate semantic definitions of these concepts, which makes query construction inconvenient and query execution inefficient, since multi-table join queries are frequently required. In this paper, we propose a patient event graph (PatientEG) model to capture the characteristics of EMRs. We define five types of medical entities, five types of medical events, and five types of temporal relations. Based on the proposed model, we also construct a PatientEG dataset with 191,294 events, 3,429 distinct entities, and 545,993 temporal relations, using EMRs from Shanghai Shuguang Hospital. To help normalize entity values that contain synonyms, hyponyms, and abbreviations, we link them to the Chinese biomedical knowledge graph. With the PatientEG dataset, we can conveniently perform complex queries for clinical research, such as auxiliary diagnosis and therapeutic-effectiveness analysis. In addition, we provide a SPARQL endpoint to access the PatientEG dataset, which is also publicly available online, and we list several illustrative SPARQL queries on our website. …
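
Since the abstract mentions a SPARQL endpoint, a query for, say, all laboratory-test events of a given patient might look like the sketch below. The actual PatientEG vocabulary is not given here, so the namespace, class, and property names are all hypothetical, and rdflib querying a local file stands in for the real endpoint.

```python
from rdflib import Graph

g = Graph()
g.parse("patienteg_sample.ttl", format="turtle")  # hypothetical local dump

query = """
PREFIX peg: <http://example.org/patienteg#>   # hypothetical namespace
SELECT ?event ?test ?time WHERE {
    ?event a peg:LaboratoryTestEvent ;
           peg:hasPatient peg:patient_001 ;
           peg:hasTest    ?test ;
           peg:occursAt   ?time .
}
ORDER BY ?time
"""

for row in g.query(query):
    print(row.event, row.test, row.time)
```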

LogitBoost Autoregressive Networks
Multivariate binary distributions can be decomposed into products of univariate conditional distributions. Recently popular approaches have modeled these conditionals through neural networks with sophisticated weight-sharing structures. It is shown that state-of-the-art performance on several standard benchmark datasets can actually be achieved by training separate probability estimators for each dimension. In that case, model training can be trivially parallelized over data dimensions. On the other hand, complexity control has to be performed for each learned conditional distribution. Three possible methods are considered and experimentally compared. The estimator that is employed for each conditional is LogitBoost. Similarities and differences between the proposed approach and autoregressive models based on neural networks are discussed in detail. …
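
The per-dimension scheme is easy to sketch: for each dimension d, fit a separate boosted classifier predicting x_d from x_{<d}, and score a vector by summing the conditional log-probabilities. Note the substitution: scikit-learn ships no LogitBoost estimator, so GradientBoostingClassifier stands in, and the random toy data is illustrative only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy multivariate binary data: 500 samples, 5 dimensions.
rng = np.random.default_rng(0)
X = (rng.random((500, 5)) < 0.5).astype(int)

# One estimator per dimension d, predicting x_d from x_{<d}.
models = []
for d in range(1, X.shape[1]):
    clf = GradientBoostingClassifier(n_estimators=50)
    clf.fit(X[:, :d], X[:, d])
    models.append(clf)

def log_likelihood(x):
    """Log-probability of one binary vector under the product of the
    learned univariate conditionals (first dimension: empirical rate)."""
    p1 = X[:, 0].mean()
    ll = np.log(p1 if x[0] == 1 else 1.0 - p1)
    for d, clf in enumerate(models, start=1):
        p = clf.predict_proba(x[:d].reshape(1, -1))[0, 1]
        ll += np.log(p if x[d] == 1 else 1.0 - p)
    return ll

print(log_likelihood(X[0]))
```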

Discretification
‘Discretification’ is the mechanism of making continuous data discrete. If you really grasp the concept, you may be thinking ‘Wait a minute, the type of data we are collecting is discrete in and of itself! Data can EITHER be discrete OR continuous; it can’t be both!’ You would be correct. But what if we manually selected values along that continuous measurement and declared them to be in a specific category? For instance, if we declare 72.0 degrees and greater to be ‘Hot’, 35.0-71.9 degrees to be ‘Moderate’, and anything lower than 35.0 degrees to be ‘Cold’, we have ‘discretified’ temperature! Our readings that were once continuous now fit into distinct categories. So, where do we draw the boundaries for these categories? What makes 34.9 degrees ‘Cold’ and 35.0 degrees ‘Moderate’? It is at this juncture that the TRUE decision is being made. The beauty of approaching the challenge in this manner is that it is data-centric, not concept-centric. Let’s walk through our marketing example first without using discretification, then with it. …
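
Using the temperature boundaries above, the discretification step is a one-liner with pandas; a minimal sketch:

```python
import pandas as pd

temps = pd.Series([28.4, 35.0, 35.1, 60.2, 71.9, 72.0, 88.5])

# Discretify: map continuous readings into the three categories above,
# with boundaries at 35.0 and 72.0 degrees.
labels = pd.cut(
    temps,
    bins=[float("-inf"), 35.0, 72.0, float("inf")],
    labels=["Cold", "Moderate", "Hot"],
    right=False,  # so 35.0 falls in 'Moderate' and 72.0 in 'Hot'
)
print(pd.DataFrame({"temp": temps, "category": labels}))
```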

Source: https://analytixon.com/2021/10/23/if-you-did-not-already-know-1539/

Big Data

Capturing the signal of weak electricigens: a worthy endeavour

Recently, several non-traditional electroactive microorganisms have been discovered. These can be considered weak electricigens: microorganisms that typically rely on soluble electron acceptors and donors in their life cycle but are also capable of extracellular electron transfer (EET), resulting in a low, unreliable, or otherwise unexpected current. These unanticipated electroactive microorganisms represent a new chapter in electromicrobiology and have important medical, environmental, and biotechnological relevance.

Source: https://www.cell.com/trends/biotechnology/fulltext/S0167-7799(21)00229-8?rss=yes
