Tag: gradients

Introductory Guide on the Activation Functions

This article was published as a part of the Data Science Blogathon. What are Activation Functions? Activation functions are mathematical functions in an artificial neuron that decide whether it should be activated, i.e., turned on or not. An artificial neuron takes a linear combination of the inputs from the neurons of the previous layer and applies the […]

The post Introductory Guide on the Activation Functions appeared first on Analytics Vidhya.
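As a quick illustration of the idea in the excerpt above, here is a minimal sketch of a single artificial neuron: a weighted sum of its inputs plus a bias, passed through a sigmoid activation. The weights, bias, and inputs are made-up values chosen only for demonstration, not anything from the article itself.

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    # Linear combination of the previous layer's outputs...
    z = np.dot(weights, inputs) + bias
    # ...followed by the activation function, which decides how strongly
    # the neuron "fires"
    return sigmoid(z)

# Made-up example values
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.7, -0.2])
b = 0.1
print(neuron(x, w, b))  # roughly 0.24
```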

A Basic Introduction to Activation Function in Deep Learning

This article was published as a part of the Data Science Blogathon. Introduction The activation function can be defined as follows: it calculates a weighted total of the inputs, adds a bias to it, and then decides whether the neuron should be activated or not. The activation function’s goal is to introduce non-linearity into a neuron’s output. A […]

The post A Basic Introduction to Activation Function in Deep Learning appeared first on Analytics Vidhya.
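To make the non-linearity point above concrete, the following numpy sketch (assumed shapes and random values, not the article's code) shows that two stacked layers without an activation collapse into a single linear map, while inserting tanh between them does not.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                 # a small batch of inputs
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 2)), np.zeros(2)

# Two stacked layers WITHOUT an activation reduce to one linear transformation
linear_two_layers = (x @ W1 + b1) @ W2 + b2
single_equivalent = x @ (W1 @ W2) + (b1 @ W2 + b2)
print(np.allclose(linear_two_layers, single_equivalent))  # True

# Adding a non-linearity (tanh) between the layers breaks that equivalence
nonlinear = np.tanh(x @ W1 + b1) @ W2 + b2
print(np.allclose(nonlinear, single_equivalent))           # False
```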

Cricket Shot Classification using Pose of the Player Image

This article was published as a part of the Data Science Blogathon. Introduction Pose detection is a computer vision (CV) technique that tracks and predicts the location of a person or object. This is done by looking at the combination of the pose and the direction of the given person or object. This […]

The post Cricket Shot Classification using Pose of the Player Image appeared first on Analytics Vidhya.
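The excerpt does not show the post's actual pipeline; as a rough sketch of the general idea, one could extract pose keypoints with any pose-estimation model and feed the flattened coordinates to an ordinary classifier. The 18-keypoint layout, the four shot classes, and the random placeholder arrays below are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical setup: each sample is a player's pose expressed as (x, y)
# coordinates of 18 body keypoints, flattened to 36 features, labelled with
# one of four assumed shot classes (e.g. drive, pull, cut, sweep).
rng = np.random.default_rng(42)
X = rng.random((400, 36))            # stand-in for real keypoint data
y = rng.integers(0, 4, size=400)     # stand-in for real shot labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```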

Vanishing Gradient Problem, Explained

This blog post aims to describe the vanishing gradient problem and explain how the use of the sigmoid function gives rise to it.
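A tiny numerical illustration of the point: the derivative of the sigmoid never exceeds 0.25, so a chain of such factors (one per layer, ignoring the weight terms for simplicity) decays exponentially with depth.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1 - s)          # never larger than 0.25

# During backpropagation the gradient picks up (roughly) one such factor per
# layer, so with many sigmoid layers it shrinks towards zero.
z = 0.0                          # the best case for the sigmoid
for depth in (5, 10, 20, 40):
    print(depth, sigmoid_grad(z) ** depth)
# 5  ~9.8e-04
# 10 ~9.5e-07
# 20 ~9.1e-13
# 40 ~8.3e-25
```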

Decoding Logistic Regression Using MLE

This article was published as a part of the Data Science Blogathon. Introduction In this article, we shall explore the process of deriving the optimal coefficients for a simple logistic regression model. Most of us might be familiar with the immense utility of logistic regressions to solve supervised classification problems. Some of the complex algorithms that […]

The post Decoding Logistic Regression Using MLE appeared first on Analytics Vidhya.
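The excerpt stops before the derivation itself; as a minimal sketch of the MLE idea (not the article's own code), the snippet below fits a two-parameter logistic model by gradient ascent on the log-likelihood, using synthetic data whose true coefficients are known.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic one-feature data with an intercept column; true coefficients 2.0 and -0.5
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = (rng.random(200) < sigmoid(2.0 * x - 0.5)).astype(float)
X = np.column_stack([x, np.ones_like(x)])

# Maximise the log-likelihood sum[ y*log(p) + (1-y)*log(1-p) ] by gradient
# ascent; its gradient with respect to beta is X.T @ (y - p).
beta = np.zeros(2)
lr = 0.1
for _ in range(5000):
    p = sigmoid(X @ beta)
    beta += lr * X.T @ (y - p) / len(y)

print(beta)   # should land near [2.0, -0.5]
```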

Blood Cell Detection in Image Using Naive Approach

This article was published as a part of the Data Science Blogathon. The basics of any object detection problem start with what the data looks like. Now, this article will discuss the different deep learning architectures that we can use to solve object detection problems. Let us first discuss the problem statement that we’ll be working […]

The post Blood Cell Detection in Image Using Naive Approach appeared first on Analytics Vidhya.
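Since the excerpt stresses understanding what the data looks like before picking an architecture, here is a hypothetical example of the usual object detection annotation format (image, class label, bounding-box coordinates). The file names, classes, and coordinates are placeholders, not the dataset used in the post.

```python
import pandas as pd

# A typical (simplified) annotation table for an object detection problem:
# one row per labelled object, giving the image it belongs to, its class,
# and the corners of its bounding box in pixel coordinates.
annotations = pd.DataFrame({
    "image": ["img_001.jpg", "img_001.jpg", "img_002.jpg"],
    "label": ["RBC", "WBC", "RBC"],
    "xmin":  [34, 120, 60],
    "ymin":  [50,  80, 42],
    "xmax":  [90, 180, 115],
    "ymax":  [110, 150, 98],
})
print(annotations.groupby("image").size())   # objects per image
```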

Explaining Text Generation with LSTM

This article was published as a part of the Data Science Blogathon. An end-to-end guide on text generation using LSTM. (Image source: develop-paper) […]

The post Explaining Text Generation with LSTM appeared first on Analytics Vidhya.
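The excerpt leaves out the model details; as a rough Keras sketch of the usual setup (not the post's exact architecture), the block below wires up an embedding → LSTM → softmax stack for next-token prediction, using random placeholder token ids in place of real text and an assumed vocabulary size and context length.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

VOCAB_SIZE = 50     # assumed number of distinct tokens
SEQ_LEN = 40        # assumed context window

# Toy stand-in for tokenised text: each row is a sequence of token ids and
# the target is the id of the next token.
rng = np.random.default_rng(0)
X = rng.integers(0, VOCAB_SIZE, size=(256, SEQ_LEN))
y = rng.integers(0, VOCAB_SIZE, size=256)

model = Sequential([
    Embedding(VOCAB_SIZE, 32),
    LSTM(64),
    Dense(VOCAB_SIZE, activation="softmax"),  # distribution over the next token
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=1, verbose=0)

# Generation step: feed a seed sequence and take the most likely next token
probs = model.predict(X[:1], verbose=0)[0]
print(int(np.argmax(probs)))
```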

Working Towards Explainable AI

“The hardest thing to understand in the world is the income tax.” This quote comes from the man who came up with the theory of relativity – not exactly the easiest concept to understand. That said, had he lived a bit longer, Albert Einstein might have said “AI” instead of “income tax.” Einstein died in […]

The post Working Towards Explainable AI appeared first on DATAVERSITY.

A Whistle-Stop Tour of 4 New CSS Color Features

I was just writing in my “What’s new since CSS3?” article about recent and possible future changes to CSS colors. It’s weirdly a lot. There are just as many or more new and upcoming ways to define colors than …


A Whistle-Stop Tour of 4 New CSS Color Features originally published on CSS-Tricks.

Robustness: linking strain design to viable bioprocesses

Microbial cell factories are becoming increasingly popular for the sustainable production of various chemicals. Metabolic engineering has led to the design of advanced cell factories; however, their long-term yield, titer, and productivity falter when scaled up and subjected to industrial conditions. This limitation arises from a lack of robustness – the ability to maintain a constant phenotype despite the perturbations of such processes. This review describes predictable and stochastic industrial perturbations as well as state-of-the-art technologies to counter process variability.

How To Design Your Data Science Portfolio

Read this overview of how the author created a data science portfolio that stands out and gets noticed.

Activation Functions for Neural Networks and their Implementation in Python

This article was published as a part of the Data Science Blogathon. Table of Contents: Gradient Descent; Importance of Non-Linearity/Activation Functions; Activation Functions (Sigmoid, Tanh, ReLU, Leaky ReLU, ELU, Softmax) and their implementation; Problems Associated with Activation Functions (Vanishing Gradient and Exploding Gradient); Endnotes. Gradient Descent: The work of the gradient descent algorithm is to update […]

The post Activation Functions for Neural Networks and their Implementation in Python appeared first on Analytics Vidhya.
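For readers who want the gist before opening the full article, here is one possible numpy implementation of the activation functions listed in the table of contents above; the article's own code may differ, and the alpha parameters follow common defaults rather than anything stated in the excerpt.

```python
import numpy as np

# numpy versions of the activation functions named in the article's table of contents
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    return np.tanh(z)

def relu(z):
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def elu(z, alpha=1.0):
    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

def softmax(z):
    e = np.exp(z - np.max(z))      # subtract the max for numerical stability
    return e / e.sum()

z = np.array([-2.0, -0.5, 0.0, 1.5])
for fn in (sigmoid, tanh, relu, leaky_relu, elu):
    print(fn.__name__, np.round(fn(z), 3))
print("softmax", np.round(softmax(z), 3))
```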
