
Free Economics & Finance Courses for Data Scientists





Here is a selection of courses for those interested in diversifying their domain knowledge into the related realms of economics and finance, with the goal of applying data science skills to these domains.

Are you interested in expanding your economics and/or finance domain knowledge? The key to being able to apply your data science skills to any field is to have at least a minimal understanding of it. There also exists plenty of data in both economics and finance, providing myriad opportunities for the application of your skills.

Economics and finance are related, but distinct, fields; economics “is the social science that studies the production, distribution, and consumption of goods and services” (Wikipedia), while finance “is a term for matters regarding the management, creation, and study of money and investments” (Wikipedia). It is this related nature that leads us to group and discuss them together herein.

This is a selection of economics and finance courses which are freely available online and which can help further your understanding of these domains. A great many free finance and economics courses exist online, so instead of flooding the zone with all of them, here are a few select choices to get you started. They have been separated into the broad topics of economics foundations, finance foundations, and specialized topics, the last being a more advanced mix of the two. Definitions come directly from the respective course websites.


Image source: University of York

Economics Foundations

  • AP Macroeconomics, Advanced Placement (video playlist)
    Starting from zero? The Advanced Placement Macroeconomics video lectures playlist, created for AP students to use during the COVID crisis, may be a good starting point, covering: Fiscal and Monetary Policy Actions in the Short Run, Money Growth and Inflation, Opportunity Cost, Government Deficits and the National Debt, Financial Assets, Interest Rates, Inflation, GDP, Monetary Policy, Economic Growth, Fiscal Policy, Market Equilibrium, Public Policy and Economic Growth, The Foreign Exchange Market, Exchange Rates, Effect of Changes in Policies on Economic Conditions, and much more.
  • AP Microeconomics, Advanced Placement (video playlist)
    Another Advanced Placement video lectures playlist created during the COVID crisis, this one on microeconomics, covers: Socially Efficient and Inefficient Market Outcomes, Public and Private Goods, Effects of Government Intervention in Different Market Structures, Profit Maximizing Behavior and Monopsonistic Markets, Scarcity and Resource Allocation in Economic Systems, Comparative Advantage and Trade, Supply and Demand, Market Equilibrium and Disequilibrium, Government Intervention and International Trade, Profit Maximization and Market Decisions, Perfect Competition, Oligopoly and Game Theory, and much more.
  • Principles of Macroeconomics, MIT (course)
    This course provides an overview of macroeconomic issues: the determination of output, employment, unemployment, interest rates, and inflation. Monetary and fiscal policies are discussed. Important policy debates such as the sub-prime crisis, social security, the public debt, and international economic issues are critically explored. The course introduces basic models of macroeconomics and illustrates principles with the experience of the U.S. and foreign economies.
  • Principles of Microeconomics, MIT (course)
    Principles of Microeconomics is an introductory undergraduate course that teaches the fundamentals of microeconomics. This course introduces microeconomic concepts and analysis, supply and demand analysis, theories of the firm and individual behavior, competition and monopoly, and welfare economics. Students will also be introduced to the use of microeconomic applications to address problems in current economic policy throughout the semester.
  • Principles of Economics with Calculus, Caltech (course)
    This course provides a quantitative and model-based introduction to basic economic principles, and teaches how to apply them to make sense of a wide range of real world problems. Examples of applications include predicting the impact of technological changes in market prices, calculating the optimal gasoline tax, and measuring the value of new products.

Finance Foundations

  • Finance and capital markets, Khan Academy (course)
    This course progresses from beginner to advanced topics, and includes the modules: Interest and debt; Housing; Inflation; Taxes; Accounting and financial statements; Stocks and bonds; Investment vehicles, insurance, and retirement; Money, banking and central banks; Options, swaps, futures, MBSs, CDOs, and other derivatives; and Current economics.
  • Financial Markets, Yale (course)
    An overview of the ideas, methods, and institutions that permit human society to manage risks and foster enterprise. Emphasis on financially-savvy leadership skills. Description of practices today and analysis of prospects for the future. Introduction to risk management and behavioral finance principles to understand the real-world functioning of securities, insurance, and banking industries. The ultimate goal of this course is using such industries effectively and towards a better society.
  • Economics of Money and Banking, Columbia University (course)
    The last three or four decades have seen a remarkable evolution in the institutions that comprise the modern monetary system. The financial crisis of 2007-2009 is a wakeup call that we need a similar evolution in the analytical apparatus and theories that we use to understand that system. Produced and sponsored by the Institute for New Economic Thinking, this course is an attempt to begin the process of new economic thinking by reviving and updating some forgotten traditions in monetary thought that have become newly relevant.

Specialized Topics

  • Behavioural Economics in Action, University of Toronto (course)
    How can we get people to save more money, eat healthy foods, engage in healthy behaviors, and make better choices in general? There has been a lot written about the fact that human beings do not process information and make decisions in an optimal fashion. This course builds on much of the fascinating work in the area of behavioral economics and allows learners to develop a hands-on approach by understanding its methods and more importantly, how it can be harnessed by suitably designing contexts to “nudge” choice.
  • Game Theory I, Stanford University/University of British Columbia (course)
    Popularized by movies such as “A Beautiful Mind,” game theory is the mathematical modeling of strategic interaction among rational (and irrational) agents. How could you begin to model keyword auctions, and peer to peer file-sharing networks, without accounting for the incentives of the people using them? The course will provide the basics: representing games and strategies, the extensive form (which computer scientists call game trees), Bayesian games (modeling things like auctions), repeated and stochastic games, and more. We’ll include a variety of examples including classic games and a few applications.
  • Game Theory II: Advanced Applications, Stanford University/University of British Columbia (course)
    Over four weeks of lectures, this advanced course considers how to design interactions between agents in order to achieve good social outcomes. Three main topics are covered: social choice theory (i.e., collective decision making and voting systems), mechanism design, and auctions.
  • The Global Financial Crisis, Yale (course)
    Former U.S. Secretary of the Treasury Timothy F. Geithner and Professor Andrew Metrick survey the causes, events, policy responses, and aftermath of the recent global financial crisis.
  • Globalization, Economic Growth and Stability Specialization, IE Business School (specialization)
    This Specialization aims to make economic concepts accessible to every learner, and to teach them to analyze current events using the toolkit of economics. It begins by explaining the basic parameters of the macroeconomy, and how governments can/should use both fiscal and monetary policy to influence growth, inflation and employment. It then moves on to the international arena, where countries interact, and explains the basic principles of free trade, exchange rates, the balance of payments and immigration and how these interactions affect our everyday lives. Finally, learners will apply these tools and concepts to the world’s leading economies and discover how they can “read” from a country’s economic data important lessons about the risks and opportunities of doing business in these countries, equipping them with tools that they can use in the workplace and even in their personal investment decisions.




PyTorch Multi-GPU Metrics Library and More in New PyTorch Lightning Release





PyTorch Lightning, a very light-weight structure for PyTorch, recently released version 0.8.1, a major milestone. With incredible user adoption and growth, they are continuing to build tools to easily do AI research.

By William Falcon, Founder PyTorch Lightning

Recently we released 0.8.1, which is a major milestone for PyTorch Lightning. With incredible user adoption and growth, we’re continuing to build tools to easily do AI research.

This major release puts us on track for final API changes for our v1.0.0 coming soon!

PyTorch Lightning

PyTorch Lightning is a very light-weight structure for PyTorch — it’s more of a style guide than a framework. But once you structure your code, we give you free GPU, TPU, 16-bit precision support and much more!


Lightning is just structured PyTorch


This release has a major new package inside Lightning: a multi-GPU metrics package!

There are two key facts about the metrics package in Lightning.

  1. It works with plain PyTorch!
  2. It automatically handles multi-GPUs for you via DDP. That means whether you calculate accuracy on one GPU or 20, we handle the synchronization for you automatically.

The metrics package also includes mappings to sklearn metrics to bridge between numpy, sklearn and PyTorch, as well as a fancy class you can use to implement your own metrics.

class RMSE(TensorMetric):
    def forward(self, x, y):
        return torch.sqrt(torch.mean(torch.pow(x - y, 2.0)))

The metrics package has over 18 metrics currently implemented (including functional metrics). Check out our documentation for a full list!
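As a sanity check outside Lightning, the RMSE computation from the metric class above can be run as a plain-PyTorch function (the tensor values here are made up for illustration):

```python
import torch

def rmse(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Same computation as the RMSE metric class above, in plain PyTorch.
    return torch.sqrt(torch.mean(torch.pow(x - y, 2.0)))

preds = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.0, 2.0, 5.0])
print(rmse(preds, target))  # sqrt(mean([0, 0, 4])) = sqrt(4/3) ≈ 1.1547
```

Wrapping the same forward in a `TensorMetric` subclass is what buys you the automatic DDP synchronization described above.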


This release also cleaned up really cool debugging tools we’ve had in Lightning for a while. The overfit_batches flag lets you overfit on a small subset of data to sanity-check that your model doesn’t have major bugs.

The logic is that if you can’t even overfit 1 batch of data, then there’s no use in training the rest of the model. This can help you figure out if you’ve implemented something correctly, or to make sure your math is correct.
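A minimal sketch of how that flag is passed to the Trainer (the integer value here is an arbitrary choice; the flag also accepts a float fraction of the training set):

```python
from pytorch_lightning import Trainer

# Overfit on just 10 batches to sanity-check the model:
# if the loss can't be driven toward zero here, there is
# likely a bug in the model or in the math.
trainer = Trainer(overfit_batches=10)
# trainer.fit(model)  # model: your LightningModule
```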



Faster multi-GPU training

Another key part of this release is the speed-ups we made to distributed training via DDP. The change comes from allowing DDP to work with num_workers>0 in DataLoaders.

DataLoader(dataset, num_workers=8)

Today, when you launch DDP via .spawn() and try to use num_workers>0 in a DataLoader, your program will likely freeze and never start training (this is also true outside of Lightning).

The solution for most is to set num_workers=0, but that means that your training is going to be reaaaaally slow. To enable num_workers>0 AND DDP, we now launch DDP under the hood without spawn. This removes a lot of other weird restrictions, like the need to pickle everything and the problem of model weights not being available once training has finished (because they were learned in a subprocess with different memory).
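Sketched end to end, under the assumption of a 2-GPU machine (the toy TensorDataset simply stands in for a real dataset):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from pytorch_lightning import Trainer

# Toy dataset standing in for your real one.
dataset = TensorDataset(torch.rand(100, 3), torch.rand(100, 1))

# num_workers > 0 is now compatible with DDP, because Lightning
# launches DDP under the hood without .spawn().
train_loader = DataLoader(dataset, num_workers=8)

trainer = Trainer(distributed_backend='ddp', gpus=2)
# trainer.fit(model, train_loader)  # model: your LightningModule
```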

Thus, our implementation of DDP here is much, much faster than normal. But of course, we keep both for flexibility:

# very fast :)
Trainer(distributed_backend='ddp')

# very slow
Trainer(distributed_backend='ddp_spawn')

Other cool features of the release

  • .test() now automatically loads the best model weights for you!
model = Model()
trainer = Trainer()
trainer.fit(model)

# automatically loads the best weights!
trainer.test()

  • Install lightning via conda now
conda install pytorch-lightning -c conda-forge

  • ModelCheckpoint tracks the path to the best weights
ckpt_callback = ModelCheckpoint(...)
trainer = Trainer(checkpoint_callback=ckpt_callback)

# after training, the path to the best weights:
best_path = ckpt_callback.best_model_path

  • Automatically move data to correct device during inference
class LitModel(LightningModule):

    @auto_move_data
    def forward(self, x):
        return x

model = LitModel()
x = torch.rand(2, 3)
model = model.cuda(2)

# this works!
model(x)

  • Many more speed improvements, including single-TPU speed-ups (we already support multi-TPU out of the box as well)

Try Lightning today

If you haven’t yet, give Lightning a chance 🙂

This video explains how to refactor your PyTorch code into Lightning.

Bio: William Falcon is an AI Researcher, and Founder at PyTorch Lightning. He is trying to understand the brain, build AI and use it at scale.

Original. Reposted with permission.





MIT takes down 80 Million Tiny Images data set due to racist and offensive content




Creators of the 80 Million Tiny Images data set from MIT and NYU took the collection offline this week, apologized, and asked other researchers to refrain from using the data set and delete any existing copies. The news was shared Monday in a letter by MIT professors Bill Freeman and Antonio Torralba and NYU professor Rob Fergus published on the MIT CSAIL website.

Introduced in 2006 and containing photos scraped from internet search engines, 80 Million Tiny Images was recently found to contain a range of racist, sexist, and otherwise offensive labels, such as nearly 2,000 images labeled with the N-word, and labels like “rape suspect” and “child molester.” The data set also contained pornographic content like non-consensual photos taken up women’s skirts. Creators of the 79.3 million-image data set said it was too large and its 32 x 32 images too small, making visual inspection of the data set’s complete contents difficult. According to Google Scholar, 80 Million Tiny Images has been cited more than 1,700 times.

Above: Offensive labels found in the 80 Million Tiny Images data set

“Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community — precisely those that we are making efforts to include,” the professors wrote in a joint letter. “It also contributes to harmful biases in AI systems trained on such data. Additionally, the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community. This is extremely unfortunate and runs counter to the values that we strive to uphold.”

The trio of professors say the data set’s shortcomings were brought to their attention by an analysis and audit published late last month (PDF) by University of Dublin Ph.D. student Abeba Birhane and Carnegie Mellon University Ph.D. student Vinay Prabhu. The authors say their assessment is the first known critique of 80 Million Tiny Images.


Both the paper authors and the 80 Million Tiny Images creators say part of the problem comes from automated data collection and the use of nouns from the WordNet data set for semantic hierarchy. Before the data set was taken offline, the coauthors suggested that the creators of 80 Million Tiny Images do as ImageNet’s creators did and assess the labels used in the people category of the data set. The paper finds that large-scale image data sets erode privacy and can have a disproportionately negative impact on women, racial and ethnic minorities, and communities at the margins of society.

Birhane and Prabhu assert that the computer vision community must begin having more conversations about the ethical use of large-scale image data sets now, in part due to the growing availability of image-scraping tools and reverse image search technology. Citing previous work like the Excavating AI analysis of ImageNet, their analysis argues that it’s not just a matter of data, but of a culture in academia and industry that finds it acceptable to create large-scale data sets without the consent of participants “under the guise of anonymization.”

“We posit that the deeper problems are rooted in the wider structural traditions, incentives, and discourse of a field that treats ethical issues as an afterthought. A field where in the wild is often a euphemism for without consent. We are up against a system that has veritably mastered ethics shopping, ethics bluewashing, ethics lobbying, ethics dumping, and ethics shirking,” the paper states.

To create more ethical large-scale image data sets, Birhane and Prabhu suggest:

  • Blur the faces of people in data sets
  • Do not use Creative Commons licensed material
  • Collect imagery with clear consent from data set participants
  • Include a data set audit card with large-scale image data sets, akin to the model cards Google AI uses and the datasheets for data sets Microsoft Research proposed

The work incorporates Birhane’s previous work on relational ethics, which suggests that the creators of machine learning systems should begin their work by speaking with the people most affected by machine learning systems, and that concepts of bias, fairness, and justice are moving targets.

ImageNet was introduced at CVPR in 2009 and is widely considered important to the advancement of computer vision and machine learning. Whereas previously some of the largest data sets could be counted in the tens of thousands, ImageNet contains more than 14 million images. The ImageNet Large Scale Visual Recognition Challenge ran from 2010 to 2017 and led to the launch of a variety of startups like Clarifai and MetaMind, a company Salesforce acquired in 2017. According to Google Scholar, ImageNet has been cited nearly 17,000 times.

As part of a series of changes detailed in December 2019, ImageNet creators including lead author Jia Deng and Dr. Fei-Fei Li found that 1,593 of the 2,832 people categories in the data set potentially contain offensive labels, which they said they plan to remove.

“We indeed celebrate ImageNet’s achievement and recognize the creators’ efforts to grapple with some ethical questions. Nonetheless, ImageNet as well as other large image datasets remain troublesome,” the Birhane and Prabhu paper reads.




Mozilla Common Voice updates will help train the ‘Hey Firefox’ wakeword for voice-based web browsing




Mozilla today released the latest version of Common Voice, its open source collection of transcribed voice data for startups, researchers, and hobbyists to build voice-enabled apps, services, and devices. Common Voice now contains over 7,226 total hours of contributed voice data in 54 different languages, up from 1,400 hours across 18 languages in February 2019.

Common Voice consists not only of voice snippets, but also of voluntarily contributed metadata useful for training speech engines, like speakers’ ages, sex, and accents. It’s designed to be integrated with DeepSpeech, a suite of open source speech-to-text and text-to-speech engines and trained models maintained by Mozilla’s Machine Learning Group.

Collecting the over 5.5 million clips in Common Voice required a lot of legwork, largely because the prompts on the Common Voice website had to be translated into each language. Still, 5,591 of the 7,226 hours have been confirmed valid by the project’s contributors so far. And according to Mozilla, five languages in Common Voice — English, German, French, Italian, and Spanish — now have over 5,000 unique speakers, while seven languages — English, German, French, Kabyle, Catalan, Spanish, and Kinyarwanda — have over 500 recorded hours.

Today also saw the release of Mozilla’s first-ever data set target segment, which aims to collect voice data for specific purposes and use cases. This segment includes the digits “zero” through “nine” as well as the words “yes,” “no,” “hey,” and “Firefox,” spoken by 11,000 people for 120 hours collectively across 18 languages. Previously, Common Voice product lead Megan Branson said it would be used partly for “Hey Firefox” wakeword testing.


“This segment data will help Mozilla benchmark the accuracy of our open source voice recognition engine, DeepSpeech, in multiple languages for a similar task and will enable more detailed feedback on how to continue improving the dataset,” Branson wrote in a blog post. “With contributions from all over the globe, [our contributors] are helping us follow through on our goal to create a voice dataset that is publicly available to anyone and represents the world we live in.”

The Common Voice refresh follows a significant update to DeepSpeech that incorporated one of the fastest open source speech recognition models to date. The latest version added support for TensorFlow Lite, a distribution of Google’s TensorFlow machine learning framework that’s optimized for compute-constrained mobile and embedded devices, and cut down DeepSpeech’s memory consumption by 22 times while boosting its startup speed by over 500 times.

Both Common Voice and DeepSpeech inform work on Mozilla projects like Firefox Voice, a browser extension that adds voice recognition support to Firefox. Currently, Firefox Voice can understand commands like “What is the weather” and “Find the Gmail tab,” but the goal is to facilitate “meaningful interactions” with websites using voice alone.

