
AI

Building, automating, managing, and scaling ML workflows using Amazon SageMaker Pipelines

We recently announced Amazon SageMaker Pipelines, the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning (ML). SageMaker Pipelines is a native workflow orchestration tool for building ML pipelines that take advantage of direct Amazon SageMaker integration. Three components improve the operational resilience and reproducibility of your ML workflows: pipelines, model registry, and projects. These workflow automation components enable you to easily scale your ability to build, train, test, and deploy hundreds of models in production, iterate faster, reduce errors due to manual orchestration, and build repeatable mechanisms.

SageMaker projects introduce MLOps templates that automatically provision the underlying resources needed to enable CI/CD capabilities for your ML development lifecycle. You can use a number of built-in templates or create your own custom template. You can use SageMaker Pipelines independently to create automated workflows; however, when used in combination with SageMaker projects, the additional CI/CD capabilities are provided automatically. The following screenshot shows how the three components of SageMaker Pipelines can work together in an example SageMaker project.


This post focuses on using an MLOps template to bootstrap your ML project and establish a CI/CD pattern from sample code. We show how to use the built-in build, train, and deploy project template as a base for a customer churn classification example. This base template enables CI/CD for training ML models, registering model artifacts to the model registry, and automating model deployment with manual approval and automated testing.

MLOps template for building, training, and deploying models

We start by taking a detailed look at what AWS services are provisioned when this build, train, and deploy MLOps template is launched. Later, we discuss how to modify the skeleton for a custom use case.

To get started with SageMaker projects, you must first enable them on the Amazon SageMaker Studio console. This can be done for existing users or while creating new ones. For more information, see SageMaker Studio Permissions Required to Use Projects.


In SageMaker Studio, you can now choose Projects from the Components and registries menu.


On the projects page, you can launch a preconfigured SageMaker MLOps template. For this post, we choose MLOps template for model building, training, and deployment.


Launching this template starts a model building pipeline by default, and while there is no cost for using SageMaker Pipelines itself, you will be charged for the services launched. Cost varies by Region. A single run of the model build pipeline in us-east-1 is estimated to cost less than $0.50. Models approved for deployment incur the cost of the SageMaker endpoints (test and production) for the Region using an ml.m5.large instance.

After the project is created from the MLOps template, the following architecture is deployed.


Included in the architecture are the following AWS services and resources:

  • The MLOps templates that are made available through SageMaker projects are provided via an AWS Service Catalog portfolio that automatically gets imported when a user enables projects on the Studio domain.
  • Two repositories are added to AWS CodeCommit:
    • The first repository provides scaffolding code to create a multi-step model building pipeline including the following steps: data processing, model training, model evaluation, and conditional model registration based on accuracy. As you can see in the pipeline.py file, this pipeline trains a linear regression model using the XGBoost algorithm on the well-known UCI Abalone dataset. This repository also includes a build specification file, used by AWS CodePipeline and AWS CodeBuild to run the pipeline automatically.
    • The second repository contains code and configuration files for model deployment, as well as test scripts required to pass the quality gate. This repo also uses CodePipeline and CodeBuild, which run an AWS CloudFormation template to create model endpoints for staging and production.
  • Two CodePipeline pipelines:
    • The ModelBuild pipeline automatically triggers and runs the pipeline from end to end whenever a new commit is made to the ModelBuild CodeCommit repository.
    • The ModelDeploy pipeline automatically triggers whenever a new model version is added to the model registry and the status is marked as Approved. Models that are registered with Pending or Rejected statuses aren’t deployed.
  • An Amazon Simple Storage Service (Amazon S3) bucket is created for output model artifacts generated from the pipeline.
  • SageMaker Pipelines uses the following resources:
    • This workflow contains the directed acyclic graph (DAG) that trains and evaluates our model. Each step in the pipeline keeps track of its lineage, and intermediate steps can be cached for quickly rerunning the pipeline. Outside of templates, you can also create pipelines using the SDK (a minimal sketch follows this list).
    • Within SageMaker Pipelines, the SageMaker model registry tracks the model versions and respective artifacts, including the lineage and metadata for how they were created. Different model versions are grouped together under a model group, and new models registered to the registry are automatically versioned. The model registry also provides an approval workflow for model versions and supports deployment of models in different accounts. You can also use the model registry through the boto3 package.
  • Two SageMaker endpoints:
    • After a model is approved in the registry, the artifact is automatically deployed to a staging endpoint followed by a manual approval step.
    • If approved, it’s deployed to a production endpoint in the same AWS account.
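
To make the SDK route concrete, here is a minimal, hypothetical sketch of defining and starting a one-step pipeline with the SageMaker Python SDK. The processor, script name, parameter, and pipeline name are placeholders for illustration, not the template’s actual code:

import sagemaker
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.parameters import ParameterString
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep

role = sagemaker.get_execution_role()

# A pipeline parameter that can be overridden on each execution
input_data = ParameterString(name="InputDataUrl", default_value="s3://YOUR_BUCKET/data.csv")

# A single processing step standing in for the template's preprocess/train/evaluate steps
processor = SKLearnProcessor(framework_version="0.23-1", role=role,
                             instance_type="ml.m5.large", instance_count=1)
step_process = ProcessingStep(name="Preprocess", processor=processor,
                              code="preprocess.py",  # hypothetical script
                              job_arguments=["--input-data", input_data])

pipeline = Pipeline(name="MyExamplePipeline", parameters=[input_data], steps=[step_process])

pipeline.upsert(role_arn=role)  # create or update the pipeline definition
execution = pipeline.start()    # kick off a run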

All SageMaker resources, such as training jobs, pipelines, models, and endpoints, as well as AWS resources listed in this post, are automatically tagged with the project name and a unique project ID tag.

Modifying the sample code for a custom use case

After your project has been created, the architecture described earlier is deployed and the visualization of the pipeline is available on the Pipelines drop-down menu within SageMaker Studio.

To modify the sample code from this launched template, we first need to clone the CodeCommit repositories to our local SageMaker Studio instance. From the list of projects, choose the one that was just created. On the Repositories tab, you can select the hyperlinks to locally clone the CodeCommit repos.


ModelBuild repo

The ModelBuild repository contains the code for preprocessing, training, and evaluating the model. The sample code trains and evaluates a model on the UCI Abalone dataset. We can modify these files to solve our own customer churn use case. See the following code:

|-- codebuild-buildspec.yml
|-- CONTRIBUTING.md
|-- pipelines
| |-- abalone
| | |-- evaluate.py
| | |-- __init__.py
| | |-- pipeline.py
| | |-- preprocess.py
| |-- get_pipeline_definition.py
| |-- __init__.py
| |-- run_pipeline.py
| |-- _utils.py
| |-- __version__.py
|-- README.md
|-- sagemaker-pipelines-project.ipynb
|-- setup.cfg
|-- setup.py
|-- tests
| |-- test_pipelines.py
|-- tox.ini

We now need a dataset accessible to the project.

  1. Open a new SageMaker notebook inside Studio and run the following cells:

    !wget http://dataminingconsultant.com/DKD2e_data_sets.zip
    !unzip -o DKD2e_data_sets.zip
    !mv "Data sets" Datasets

    import os
    import boto3
    import sagemaker

    prefix = 'sagemaker/DEMO-xgboost-churn'
    region = boto3.Session().region_name
    default_bucket = sagemaker.session.Session().default_bucket()
    role = sagemaker.get_execution_role()

    # Upload the raw churn data to the default S3 bucket for the pipeline to consume
    RawData = boto3.Session().resource('s3')\
        .Bucket(default_bucket).Object(os.path.join(prefix, 'data/RawData.csv'))\
        .upload_file('./Datasets/churn.txt')

    print(os.path.join("s3://", default_bucket, prefix, 'data/RawData.csv'))

  2. Rename the abalone directory to customer_churn. This requires us to modify the path inside codebuild-buildspec.yml as shown in the sample repository (note the underscore: Python module names can’t contain hyphens). See the following code:

    run-pipeline --module-name pipelines.customer_churn.pipeline

  3. Replace the preprocess.py code with the customer churn preprocessing script found in the sample repository.
  4. Replace the pipeline.py code with the customer churn pipeline script found in the sample repository.
    1. Be sure to replace the “InputDataUrl” default parameter with the Amazon S3 URL obtained in step 1:

      input_data = ParameterString(
          name="InputDataUrl",
          default_value="s3://YOUR_BUCKET/RawData.csv",
      )

    2. Update the conditional step to evaluate the classification model:

      # Conditional step for evaluating model quality and branching execution
      cond_lte = ConditionGreaterThanOrEqualTo(
          left=JsonGet(
              step=step_eval,
              property_file=evaluation_report,
              json_path="binary_classification_metrics.accuracy.value",
          ),
          right=0.8,
      )

    One last thing to note is that the default ModelApprovalStatus is set to PendingManualApproval. If our model achieves greater than 80% accuracy, it’s added to the model registry, but not deployed until manual approval is complete.

  5. Replace the evaluate.py code with the customer churn evaluation script found in the sample repository. One piece of the code we’d like to point out is that, because we’re evaluating a classification model, we need to update the metrics we’re evaluating and associating with trained models:

    report_dict = {
        "binary_classification_metrics": {
            "accuracy": {"value": acc, "standard_deviation": "NaN"},
            "auc": {"value": roc_auc, "standard_deviation": "NaN"},
        },
    }

    evaluation_output_path = '/opt/ml/processing/evaluation/evaluation.json'
    with open(evaluation_output_path, 'w') as f:
        f.write(json.dumps(report_dict))

The JSON structure of these metrics is required to match the format of sagemaker.model_metrics for complete integration with the model registry.
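
The template’s pipeline.py already wires this report into the registration step; purely for orientation, here is a hedged sketch of attaching an evaluation report to a model version with the SageMaker Python SDK’s model-metrics classes (the S3 URI is a placeholder):

from sagemaker.model_metrics import MetricsSource, ModelMetrics

# Point the registry at the evaluation.json produced by the evaluation step
model_metrics = ModelMetrics(
    model_statistics=MetricsSource(
        s3_uri="s3://YOUR_BUCKET/evaluation.json",  # placeholder path
        content_type="application/json",
    )
)
# model_metrics is then passed to the pipeline's model registration step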

ModelDeploy repo

The ModelDeploy repository contains the AWS CloudFormation buildspec for the deployment pipeline. We don’t make any modifications to this code because it’s sufficient for our customer churn use case. It’s worth noting that model tests can be added to this repo to gate model deployment. See the following code:

├── build.py
├── buildspec.yml
├── endpoint-config-template.yml
├── prod-config.json
├── README.md
├── staging-config.json
└── test
    ├── buildspec.yml
    └── test.py

Triggering a pipeline run

Committing these changes to the CodeCommit repository (easily done on the Studio source control tab) triggers a new pipeline run, because an Amazon EventBridge event monitors for commits. After a few moments, we can monitor the run by choosing the pipeline inside the SageMaker project.
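
Commits are the usual trigger, but a run can also be started directly without pushing a commit. Here is a hedged boto3 sketch (the pipeline name below is a placeholder; projects derive the real name from the project name and ID):

import boto3

sm_client = boto3.client("sagemaker")

# Start a new execution of an existing pipeline by name
response = sm_client.start_pipeline_execution(
    PipelineName="YOUR-PROJECT-NAME-pipeline",  # placeholder
    PipelineExecutionDisplayName="manual-run",
)
print(response["PipelineExecutionArn"])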

The following screenshot shows our pipeline details. Choosing the pipeline run displays the steps of the pipeline, which you can monitor.

When the pipeline is complete, you can go to the Model groups tab inside the SageMaker project and inspect the metadata attached to the model artifacts.


If everything looks good, we can manually approve the model.
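
Approval can be done from the model version’s page in Studio; as a hedged alternative sketch, the same status change can be made with boto3 (the ARN below is a placeholder you would copy from the registry):

import boto3

sm_client = boto3.client("sagemaker")

# Setting the status to Approved is what triggers the ModelDeploy pipeline
sm_client.update_model_package(
    ModelPackageArn="arn:aws:sagemaker:REGION:ACCOUNT_ID:model-package/GROUP_NAME/1",  # placeholder
    ModelApprovalStatus="Approved",
)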

This approval triggers the ModelDeploy pipeline and exposes an endpoint for real-time inference.


Conclusion

SageMaker Pipelines enables teams to leverage best practice CI/CD methods within their ML workflows. In this post, we showed how a data scientist can modify a preconfigured MLOps template for their own modeling use case. Among the many benefits is that the changes to the source code can be tracked, associated metadata can be tied to trained models for deployment approval, and repeated pipeline steps can be cached for reuse. To learn more about SageMaker Pipelines, check out the website and the documentation. Try SageMaker Pipelines in your own workflows today.


About the Authors

Sean Morgan is an AI/ML Solutions Architect at AWS. He previously worked in the semiconductor industry, using computer vision to improve product yield. He later transitioned to a DoD research lab where he specialized in adversarial ML defense and network security. In his free time, Sean is an active open-source contributor and maintainer, and is the special interest group lead for TensorFlow Addons.

Hallie Weishahn is an AI/ML Specialist Solutions Architect at AWS, focused on leading global standards for MLOps. She previously worked as an ML Specialist at Google Cloud Platform. She works with product, engineering, and key customers to build repeatable architectures and drive product roadmaps. She provides guidance and hands-on work to advance and scale machine learning use cases and technologies. Troubleshooting top issues and evaluating existing architectures to enable integrations from PoC to a full deployment is her strong suit.

Shelbee Eigenbrode is an AI/ML Specialist Solutions Architect at AWS. Her current areas of depth include DevOps combined with ML/AI. She has been in technology for 23 years, spanning multiple roles and technologies. With over 35 patents granted across various technology domains, her passion for continuous innovation combined with a love of all things data turned her focus to the field of data science. Combining her backgrounds in data, DevOps, and machine learning, her current passion is helping customers to not only embrace data science but also ensure all models have a path to production by adopting MLOps practices. In her spare time, she enjoys reading and spending time with family, including her fur family (aka dogs), as well as friends.

Source: https://aws.amazon.com/blogs/machine-learning/building-automating-managing-and-scaling-ml-workflows-using-amazon-sagemaker-pipelines/

AI

Ethereum Rises on NFT Boom


Ethereum has been rising for the past two days, gaining $150 (10%) yesterday to reach $1,697; it is currently trading at $1,640.

The ratio reversal coincides with the approval of EIP-1559, a cryptoeconomics change that burns fees and will increase capacity, both by doubling it outright and by removing miners’ incentive to play with capacity by spamming the network.

Momentum was then added by the tokenized Jack Dorsey tweet, which can be taken as the debut of NFTs because even the BBC covered it.

This space has now seemingly grown considerably, from pretty much nonexistent to about $10 million in trading volumes.

The most expensive NFT, excluding Jack Dorsey’s tweet, is the Formula 1 Grand Prix de Monaco 2020 1A that went for 1,079 eth, worth about $2 million.

That’s the actual race course used in F1® Delta Time, a blockchain game running on Ethereum.

That’s one of quite a few games now running on eth. If you browse by recently sold, you can see footballers’ cards, Decentraland plots of land, Gods Unchained cards, and then there’s an actual planet.

Bitcoin will not be measured against the dollar anymore. “You’d measure it against whatever you’re buying with it, such as planets or solar systems,” said Jesse Powell, Kraken’s founder.

Well, for now these planets are being bought with eth on this 0xUniverse that describes itself as “a revolutionary blockchain-based game. Become an explorer in a galaxy of unique, collectable planets. Colonize them to extract resources and build spacecrafts. Keep expanding your fleet until you’ve uncovered the final mysteries of the universe!”

One mystery they might discover is why a bull gif is worth 112 eth. Apparently it was made prior to the election and changed based on who won, but $200,000 for that?

Then there’s actual art, like the featured image we’ve chosen above called American censorship.

We’re no art critics, but it’s an image that made us stop for a bit and look at it. Now you can look at it too, and since this is a news article and the image is part of the news, they can’t file copyright complaints as far as we are aware.

So why is someone willing to pay 0.2 eth for this, or $330? Well, because he or she likes it of course, and because it is an original piece of work, he is the owner of it, and if he wanted to, he could enforce copyright where it concerns business use, like selling a print or card version of it.

He could sell licensing rights, and if the artist of this work becomes famous, he has proof that it is by this artist and, more importantly, proof that this is the original by the artist him or herself.

We could have modified that featured image, and obviously our saying we haven’t has plenty of credibility, but having proof of the original itself perhaps has as much value as having the original Mona Lisa rather than a copy.

In any event, we don’t profess to know the mysteries of art speculation, but what seems to be happening is an experiment in art, and many artists must be watching to see just what is going on.

Some of those ‘normie’ artists may well TikTok about it, and they may even go viral in this day and age where any of us can sometimes be the show.

So that frothiness should translate to eth, not least because all of this is being priced in eth. The sons and daughters of traditional art collectors, therefore, may well turn some of their daddy’s fiat into eth to speculate on what may well become a new world for art.

Whether it will remains to be seen, but there’s something new here, and clearly people seem to be excited about it.

Source: https://www.trustnodes.com/2021/03/07/ethereum-rises-on-nft-boom

Source: https://coingenius.news/ethereum-rises-on-nft-boom/?utm_source=rss&utm_medium=rss&utm_campaign=ethereum-rises-on-nft-boom


Artificial Intelligence

Generative Adversarial Transformers: Using GANsformers to Generate Scenes



Louis Bouchard (@whatsai)

I explain Artificial Intelligence terms and news to non-experts.

The authors basically leverage transformers’ attention mechanism in the powerful StyleGAN2 architecture to make it even more powerful!

Watch the Video:

Chapters:
0:00​ Hey! Tap the Thumbs Up button and Subscribe. You’ll learn a lot of cool stuff, I promise.
0:24​ Text-To-Image translation
0:51​ Examples
5:50​ Conclusion

References

Paper: https://arxiv.org/pdf/2103.01209.pdf
Code: https://github.com/dorarad/gansformer
Complete reference:
Drew A. Hudson and C. Lawrence Zitnick, Generative Adversarial Transformers (2021), published on arXiv, abstract:

“We introduce the GANsformer, a novel and efficient type of transformer, and explore it for the task of visual generative modeling. The network employs a bipartite structure that enables long-range interactions across the image, while maintaining computation of linear efficiency, that can readily scale to high-resolution synthesis. It iteratively propagates information from a set of latent variables to the evolving visual features and vice versa, to support the refinement of each in light of the other and encourage the emergence of compositional representations of objects and scenes. In contrast to the classic transformer architecture, it utilizes multiplicative integration that allows flexible region-based modulation, and can thus be seen as a generalization of the successful StyleGAN network. We demonstrate the model’s strength and robustness through a careful evaluation over a range of datasets, from simulated multi-object environments to rich real-world indoor and outdoor scenes, showing it achieves state-of-the-art results in terms of image quality and diversity, while enjoying fast learning and better data efficiency. Further qualitative and quantitative experiments offer us an insight into the model’s inner workings, revealing improved interpretability and stronger disentanglement, and illustrating the benefits and efficacy of our approach. An implementation of the model is available at https://github.com/dorarad/gansformer.”
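
The abstract describes interleaving attention with StyleGAN’s convolutional layers so that latent variables and image features refine each other. As a rough, hypothetical illustration of that idea only (the paper’s actual bipartite attention uses multiplicative integration, which this does not implement), here is a minimal PyTorch sketch of cross-attention between a set of latents and a convolutional feature map:

import torch
import torch.nn as nn

# Toy sizes: 16 latent vectors and an 8x8 feature map, both 64-dimensional
latents = torch.randn(1, 16, 64)       # (batch, num_latents, dim)
features = torch.randn(1, 64, 8, 8)    # (batch, channels, height, width)

# Flatten the spatial grid so each pixel becomes a token
tokens = features.flatten(2).transpose(1, 2)  # (batch, H*W, dim)

# Cross-attention: pixels (queries) attend to latents (keys/values)
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
updated, _ = attn(query=tokens, key=latents, value=latents)

# Reshape back to a feature map and continue with the next convolution
features = updated.transpose(1, 2).reshape(1, 64, 8, 8)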

Video Transcript

Note: This transcript is auto-generated by Youtube and may not be entirely accurate.

They basically leveraged transformers’ attention mechanism in the powerful StyleGAN2 architecture to make it even more powerful.

[Music]

This is What’s AI, and I share artificial intelligence news every week. If you are new to the channel and would like to stay up to date, please consider subscribing to not miss any further news.

Last week we looked at DALL·E, OpenAI’s most recent paper. It uses a similar architecture as GPT-3 involving transformers to generate an image from text. This is a super interesting and complex task called text-to-image translation. As you can see again here, the results were surprisingly good compared to previous state-of-the-art techniques. This is mainly due to the use of transformers and a large amount of data. This week we will look at a very similar task called visual generative modeling, where the goal is to generate a complete scene in high resolution, such as a road or a room, rather than a single face or a specific object. This is different from DALL·E, since we are not generating the scene from a text but from a trained model on a specific style of scenes, which is a bedroom in this case. Rather, it is just like StyleGAN, which is able to generate unique and non-existing human faces after being trained on a dataset of real faces.

The difference is that StyleGAN uses the GAN architecture in a traditional generative and discriminative way with convolutional neural networks. A classic GAN architecture will have a generator, trained to generate the image, and a discriminator, used to measure the quality of the generated images by guessing if it’s a real image coming from the dataset or a fake image made by the generator. Both networks are typically composed of convolutional neural networks. The generator looks like this: mainly composed of downsampling the image using convolutions to encode it, then upsampling the image again using convolutions to generate a new version of the image with the same style based on the encoding, which is why it is called StyleGAN. Then the discriminator takes the generated image, or an image from your dataset, and tries to figure out whether it is real or generated, called fake.

Instead, they leverage transformers’ attention mechanism inside the powerful StyleGAN2 architecture to make it even more powerful. Attention is an essential feature of this network, allowing it to draw global dependencies between input and output; in this case, between the input at the current step of the architecture and the latent code previously encoded, as we will see in a minute. Before diving into it, if you are not familiar with transformers or attention, I suggest you watch the video I made about transformers for more details and a better understanding of attention. You should definitely have a look at the video covering the paper “Attention Is All You Need” from a fellow YouTuber and inspiration of mine, Yannic Kilcher.

Alright, so we know that they use transformers and GANs together to generate better and more realistic scenes, explaining the name of the paper: GANsformers. But why and how did they do that exactly? As for the why, they did that to generate complex and realistic scenes, like this one, automatically. This could be a powerful application for many industries like movies or video games, requiring a lot less time and effort than having an artist create them on a computer, or even make them in real life to take a picture of them. Also, imagine how useful it could be for designers when coupled with text-to-image translation, generating many different scenes from a single text input and the press of a random button.

They use a state-of-the-art StyleGAN architecture because GANs are powerful generators for the general image. Because GANs work using convolutional neural networks, they by nature use local information of the pixels, merging them to end up with the general information regarding the image, missing out on the long-range interactions of faraway pixels. This causes GANs to be powerful generators for the overall style of the image. Still, they are a lot less powerful regarding the quality of the small details in the generated image, for the same reason being unable to control the style of localized regions within the generated image itself.

This is why they had the idea to combine transformers and GANs in one architecture they called a bipartite transformer. As GPT-3 and many other papers already proved, transformers are powerful for long-range interactions, drawing dependencies between them and understanding the context of text or images. We can see that they simply added attention layers, the base of the transformer network, in between the convolutional layers of both the generator and the discriminator. Thus, rather than focusing on using global information and controlling all features globally as convolutions do by nature, they use this attention to propagate information from the local pixels to the global high-level representation, and vice versa. Like other transformers applied to images, this attention layer takes the pixel’s position and the StyleGAN2 latent spaces W and Z. The latent space W is an encoding of the input into an intermediate latent space done at the beginning of the network, denoted here as A, while the encoding Z is just the resulting features of the input at the current step of the network. This makes the generation much more expressive over the whole image, especially in generating images depicting multi-object scenes, which is the goal of this paper.

Of course, this was just an overview of this new paper by Facebook AI Research and Stanford University. I strongly recommend reading the paper to have a better understanding of this approach; it’s the first link in the description below. The code is also available and linked in the description as well. If you went this far in the video, please consider leaving a like and commenting your thoughts; I will definitely read them and answer you. And since there’s still over 80 percent of you guys who are not subscribed yet, please consider clicking this free subscribe button to not miss any further news, clearly explained. Thank you for watching.

[Music]



Source: https://hackernoon.com/generative-adversarial-transformers-using-gansformers-to-generate-scenes-013d33h4?source=rss


AI

Newsletter: Elon Musk’s genius + Surgical robots


  • FDA greenlights Memic’s breakthrough robotic surgery
  • Elon Musk’s genius is battery powered
  • Webinar, Mar. 10: Making families affordable
  • D-ID drives MyHeritage deep nostalgia animation
  • Rewire banks $20M, led by OurCrowd
  • Consumer Physics’ SCiO lets you wake up and taste the coffee
  • Medisafe raises $30M for AI that helps people take medication
  • Arcadia guides companies on homeworking energy costs
  • Volvo unit teams with Israel’s Foretellix on autonomous safety
  • Tmura: Startup options help those in need
  • Introductions
  • 1,000 high-tech jobs

FDA greenlights Memic’s breakthrough robotic surgery


The FDA has granted De Novo Marketing Authorization for the breakthrough Hominis robotic surgery system developed by Memic – a notable first. The patented technology allows the surgeon to control tiny, human-like robotic arms that enable procedures currently considered unfeasible, and reduces scarring. “This FDA authorization represents a significant advance in the world of robot-assisted surgery and fulfills an unmet need in the world of robotic gynecological surgery,” said Professor Jan Baekelandt, MD, PhD, a gynecologist at Imelda Hospital in Bonheiden, Belgium, who performed the first hysterectomy using the Hominis system. “Research shows vaginal hysterectomy provides optimal clinical benefits to patients including reduced pain, recovery time and rates of infection. Hominis is the only robot specifically developed for transvaginal surgery and is therefore small and flexible enough to perform surgery through a small incision.” Memic’s strong management team is led by Chairman Maurice R. Ferré MD, who founded Mako Surgical, a surgical robotics company that was sold for $1.65B.

Learn More.

Elon Musk’s genius is battery powered


In 2004, PayPal co-founder Elon Musk took what appeared to be a huge and perhaps reckless gamble. Departing from the seamless technology of the booming online payments business, Musk sank $30 million of his newly-minted internet fortune into a year-old startup that dreamed of transforming a century-old global industry mired in heavy industrial plants, expensive freight and complex global supply chains. Musk’s startup, of course, was Tesla, and the dream was an all-electric car. Today, the electric car is a mass-market reality and Musk’s gamble has sent him powering past Gates and Bezos straight to the Forbes top slot – at least for a while. Musk’s winning bet was not on automobiles, but on energy. The heart of Tesla is not mobility or cars, it is power and battery storage. In 2004, Musk was way ahead of the curve in foreseeing the transformation of energy from fossil fuels to renewables. Today, the signs are everywhere. Read more in my regular ‘Investors on the Frontlines’ column here.

Webinar, Mar. 10: Making families affordable

A key pandemic-driven trend is the acceleration of the fertility tech sector, including egg freezing and IVF, with a market predicted to be worth $37+ billion in less than 10 years. Hear from:

  • Claire Tomkins, PhD, founder and CEO of Future Family, startup veteran who was formerly an Investment Bank advisor and director of Richard Branson’s Carbon War Room accelerator, who will speak on the category potential and her startup’s disruptive solution
  • Angeline N. Beltsos, Chief Medical Officer and CEO, Vios Fertility Institute, on the challenges and progress represented by fertility issues
  • Ashley Gillen Binder, Future Family client, on her life-changing experience with the company

Future Family is the first company to bring together financing, technology, and concierge care in an easy-to-use online platform. With established revenues and growth, Future Family is a leading example of next-gen FinTech, which will provide customized solutions for specific verticals. Moderated by Richard Norman, Managing Director, OurCrowd.

Wednesday, March 10th at 9:00AM San Francisco | 12:00PM New York | 7:00PM Israel 
Register Here. 

D-ID drives MyHeritage deep nostalgia animation


Jimmy Kimmel didn’t get much work done on Wednesday. He said he was too busy using technology developed by OurCrowd portfolio company D-ID to bring a photograph of his great grandfather to life. Genealogy platform MyHeritage released a feature that animates faces in still photos using video reenactment technology designed by D-ID. Called Deep Nostalgia, it produces a realistic depiction of how a person could have moved and looked if they were captured on video, using pre-recorded driver videos that direct the movements in the animation and consist of sequences of real human gestures. Users can upload a picture regardless of the number of faces in it. The photos are enhanced prior to animation using the MyHeritage Photo Enhancer, USA Today reports. MyHeritage tells the BBC that some people might find the feature “creepy” while others would consider it “magical”. It’s been a good few days for MyHeritage, which was acquired by Francisco Partners last week for a reported $600M.

Rewire banks $20M, led by OurCrowd


OurCrowd led a successful Series B $20M round for Rewire, a provider of cross-border online banking services for expatriate workers. The finance will be used to add new products to its platform, such as bill payments, insurance, savings, and credit and loan services. New investors included Renegade Partners, Glilot Capital Partners and AME Cloud Ventures as well as previous investors Viola FinTech, BNP Paribas through their venture capital fund Opera Tech Ventures, Moneta Capital, and private angel investors. Rewire has also secured an EU Electronic Money Institution license from the Dutch Central Bank to support its European expansion plans, PitchBook reports. Rewire was also granted an expanded Israeli Financial Asset Service Provider license. Acquiring these licenses is another major step for the FinTech startup in its mission to provide secure and accessible financial services for migrant workers worldwide, Globes reports.

Learn More.

Top Tech News

Consumer Physics’ SCiO lets you wake up and taste the coffee

  


Israeli and Colombian agriculture tech startup Demetria emerged from stealth with a $3M seed round to support its artificial-intelligence-powered green coffee quality analysis system. Demetria has added AI to the SCiO technology developed by OurCrowd portfolio company Consumer Physics to enable fast sampling and quality control for coffee. “We use this sensory fingerprint in combination with complex AI and cupping data to discern ‘taste,’” Felipe Ayerbe, CEO and co-founder of Demetria, tells Daily Coffee News. “Demetria is an accurate predictor of cupping analysis. We have conducted the same process that is required to train and certify a ‘cupper,’ but instead of using human senses of taste and smell, we use state-of-the-art sensors that read the biochemical markers of taste, and couple that information with AI.” The technology “has pioneered the digitization of coffee aroma and taste, the most important quality variables of the coffee bean. For the first time, quality and taste can now be assessed at any stage of the coffee production and distribution process, from farm to table,” says Food and Drink International.

Medisafe raises $30M for AI that helps people take medication
Medisafe, an OurCrowd portfolio company developing a personalized medication management platform to help patients stay on top of their prescriptions, raised $30M in a round led by Sanofi Ventures and Alive Israel HealthTech Fund with participation from Merck Ventures, Octopus Ventures, OurCrowd and others. About 20% to 30% of medication prescriptions are never filled, and approximately 50% of medications for chronic disease aren’t taken as prescribed, causing approximately 125,000 deaths and costing the U.S. health care system more than $100B a year. “Medisafe can email and text patients to remind them to take their medications on time. Moreover, the platform can target ‘rising-risk’ patients with analytics and insights based on real-time behavioral assessments, boosting adherence up to 20%”, VentureBeat reports.

Arcadia guides companies on homeworking energy costs
With millions of staff working from home, who should foot the bill for heating and other energy costs? Biotechnology company Biogen and financial giant Goldman Sachs are both working with alternative energy firm Arcadia to help employees switch their homes to wind or solar power – just a couple of the companies assisting staff to explore renewable energy sources for residential use. Alexa Minerva, Arcadia’s senior director of partnerships, says the offering is just one of many new kinds of employee benefits that may arise as work becomes more geographically flexible in the pandemic era and beyond, and perks like an office cafeteria become less of a draw. “It says something not just about…how you value a person, but it also says what you value as a company,” Minerva tells Time magazine. “It’s a great hiring strategy and retention tactic.”

Volvo unit teams with Israel’s Foretellix on autonomous safety
Volvo Autonomous Solutions has formed a partnership with OurCrowd portfolio company Foretellix to jointly create a coverage-driven verification solution for autonomous vehicles and machines on public roads and in confined areas such as mines and quarries. The technology will facilitate testing of millions of different scenarios, which will validate autonomous vehicles and machines’ ability to deal with anything they might encounter. Foretellix has developed a novel verification platform that uses intelligent automation and big data analytics tools that coordinate and monitor millions of driving scenarios, to expose bugs and edge cases, including the most extreme cases. “The partnership with Foretellix gives us access to the state-of-the-art verification tools and accelerating our time to market,” Magnus Liljeqvist, Global Technology Manager, Volvo Autonomous Solutions tells Globes.  

Tmura: Startup options help those in need
The sale of MyHeritage to Francisco Partners for a reported $600M will directly help Israeli nonprofits working in education and youth projects through the sale of options the startup donated in 2013 to Tmura, an Israeli nonprofit, The Times of Israel reports. OurCrowd has supported Tmura since its inception, encouraging all our startups to donate a percentage of their options. If they have a big exit, it benefits those who need help. A total of 718 Israeli companies have made donations to Tmura, with a record number of 62 new donor companies contributing in 2020, as nonprofit organizations struggled to raise funds elsewhere during the pandemic. Exit proceeds in 2020 totaled $1.9 million, the second-highest year ever, after 2013, when Waze was sold to Google for some $1 billion. Waze’s options,

Introductions

Your portfolio gets stronger when the OurCrowd network gets involved. Visit our Introductions page to see which of our companies are looking for connections that you may be able to help with.

1,000 High-Tech Jobs

Read the OurCrowd Quarterly Jobs Index here.
Despite the coronavirus pandemic, there are hundreds of open positions at our global portfolio companies. See some opportunities below:

Search and filter through Portfolio Jobs to find your next challenge.

Source: https://blog.ourcrowd.com/elon-musks-genius-surgical-robots/


Artificial Intelligence

How I’d Learn Data Science If I Were To Start All Over Again



Santiago Víquez (@santiviquez)

Physicist turned data scientist. Creator of datasciencetrivia.com

A couple of days ago I started thinking: if I had to start learning machine learning and data science all over again, where would I start? The funny thing was that the path I imagined was completely different from the one I actually took when I was starting.

I’m aware that we all learn in different ways. Some prefer videos, others are fine with just books, and a lot of people need to pay for a course to feel more pressure. And that’s OK; the important thing is to learn and enjoy it.

So, speaking from my own perspective and knowing how I learn best, I designed this path for if I had to start learning data science again.

As you will see, my favorite way to learn is going from simple to complex gradually. This means starting with practical examples and then moving to more abstract concepts.

Kaggle micro-courses

I know it may be weird to start here; many would prefer to start with the heaviest foundations and math videos to fully understand what is happening behind each ML model. But from my perspective, starting with something practical and concrete helps give a better view of the whole picture.

In addition, these micro-courses take around 4 hours each to complete, so meeting those little goals upfront adds an extra motivational boost.

Kaggle micro-course: Python

If you are familiar with Python you can skip this part. Here you’ll learn basic Python concepts that will help you start learning data science. There will be a lot of things about Python that are still going to be a mystery. But as we advance, you will learn it with practice.

Link: https://www.kaggle.com/learn/python

Price: Free

Kaggle micro-course: Pandas

Pandas is going to give us the skills to start manipulating data in Python. I consider that a 4-hour micro-course with practical examples is enough to get a notion of the things that can be done.

Link: https://www.kaggle.com/learn/pandas

Price: Free

Kaggle micro-course: Data Visualization

Data visualization is perhaps one of the most underrated skills but it is one of the most important to have. It will allow you to fully understand the data with which you will be working.

Link: https://www.kaggle.com/learn/data-visualization

Price: Free

Kaggle micro-course: Intro to Machine Learning

This is where the exciting part starts. You are going to learn basic but very important concepts to start training machine learning models, concepts that will be essential to have very clear later on.

Link: https://www.kaggle.com/learn/intro-to-machine-learning

Price: Free

Kaggle micro-course: Intermediate Machine Learning

This is complementary to the previous one but here you are going to work with categorical variables for the first time and deal with null fields in your data.

Link: https://www.kaggle.com/learn/intermediate-machine-learning

Price: Free

Let’s stop here for a moment. It should be clear that these 5 micro-courses are not going to be a linear process; you will probably have to come and go between them to refresh concepts. When you are working through the Pandas one, you may have to go back to the Python course to remember some of the things you learned, or go to the pandas documentation to understand new functions you saw in the Intro to Machine Learning course. And all of this is fine; right here is where the real learning is going to happen.

Now, if you notice, these first 5 courses will give you the necessary skills to do exploratory data analysis (EDA) and create baseline models that you will later be able to improve. So now is the right time to start with simple Kaggle competitions and put into practice what you’ve learned.

Kaggle Playground Competition: Titanic

Here you’ll put into practice what you learned in the introductory courses. Maybe it will be a little intimidating at first, but that doesn’t matter: it’s not about being first on the leaderboard, it’s about learning. In this competition, you will learn about classification and relevant metrics for these types of problems, such as precision, recall, and accuracy.

Link: https://www.kaggle.com/c/titanic
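
Since this competition is where you first meet precision, recall, and accuracy, here is a tiny, self-contained illustration (with made-up labels) of computing them with scikit-learn:

from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # made-up ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # a model's made-up predictions

print("accuracy: ", accuracy_score(y_true, y_pred))   # 6 of 8 correct = 0.75
print("precision:", precision_score(y_true, y_pred))  # 3 of 4 predicted positives are right = 0.75
print("recall:   ", recall_score(y_true, y_pred))     # 3 of 4 actual positives found = 0.75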

Kaggle Playground Competition: Housing Prices

In this competition, you are going to apply regression models and learn about relevant metrics such as RMSE.

Link: https://www.kaggle.com/c/home-data-for-ml-course
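
And since RMSE is the key metric here, remember it is just the square root of the mean squared error; a quick illustration with made-up prices:

import numpy as np

y_true = np.array([200_000, 350_000, 125_000])  # made-up actual prices
y_pred = np.array([210_000, 330_000, 150_000])  # made-up predictions

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(rmse)  # about 19,365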

By this point, you already have a lot of practical experience and you’ll feel that you can solve a lot of problems, but chances are that you don’t fully understand what is happening behind each classification and regression algorithm you have used. So this is where we have to study the foundations of what we are learning.

Many courses start here, but at least for me, I absorb this information better once I have worked on something practical first.

Book: Data Science from Scratch

At this point, we will momentarily separate ourselves from pandas, scikit-learn, and other Python libraries to learn in a practical way what is happening “behind” these algorithms.

This book is quite friendly to read; it brings Python examples for each of the topics and doesn’t have much heavy math, which is fundamental for this stage. We want to understand the principles of the algorithms but from a practical perspective; we don’t want to be demotivated by reading a lot of dense mathematical notation.

Link: Amazon

Price: approx. $26

If you got this far, I would say you are quite capable of working in data science and understanding the fundamental principles behind the solutions. So here I invite you to continue participating in more complex Kaggle competitions, engaging in the forums, and exploring new methods that you find in other participants’ solutions.

Online Course: Machine Learning by Andrew Ng

Here we are going to see many of the things that we have already learned, but explained by one of the leaders in the field. His approach is more mathematical, so it will be an excellent way to understand our models even more.

Link: https://www.coursera.org/learn/machine-learning

Price: Free without the certificate — $79 with the certificate

Book: The Elements of Statistical Learning

Now the heavy math part starts. Imagine if we had started from here: it would have been an uphill road all along, and we probably would have given up more easily.

Link: Amazon

Price: $60; there is an official free version on the Stanford page.

Online Course: Deep Learning by Andrew Ng

By then you have probably already read about deep learning and played with some models. But here we are going to learn the foundations of what neural networks are and how they work, and learn to implement and apply the different architectures that exist.

Link: https://www.deeplearning.ai/deep-learning-specialization/

Price: $49/month

At this point, it depends a lot on your own interests; you can focus on regression and time series problems or go deeper into deep learning.

I wanted to tell you that I launched a Data Science Trivia game with questions and answers that usually come up in interviews. To learn more about it, follow me on Twitter.

Also published at https://towardsdatascience.com/if-i-had-to-start-learning-data-science-again-how-would-i-do-it-78a72b80fd93


Source: https://hackernoon.com/how-id-learn-data-science-if-i-were-to-start-all-over-again-5o2733tn?source=rss
