
Using MySQL with Node.js and the mysql JavaScript Client


NoSQL databases are rather popular among Node developers, with MongoDB (the “M” in the MEAN stack) leading the pack. When starting a new Node project, however, you shouldn’t just accept Mongo as the default choice. Rather, the type of database you choose should depend on your project’s requirements. If, for example, you need dynamic table creation, or real-time inserts, then a NoSQL solution is the way to go. If your project deals with complex queries and transactions, on the other hand, an SQL database makes much more sense.

In this tutorial, we’ll have a look at getting started with the mysql module — a Node.js client for MySQL, written in JavaScript. I’ll explain how to use the module to connect to a MySQL database and perform the usual CRUD operations, before looking at stored procedures and escaping user input.

This popular article was updated in 2020 to reflect current practices for using MySQL with Node.js. For more on MySQL, read Jump Start MySQL.

Quick Start: How to Use MySQL in Node

If you’ve arrived here looking for a quick way to get up and running with MySQL in Node, we’ve got you covered!

Here’s how to use MySQL in Node in five easy steps:

  1. Create a new project: mkdir mysql-test && cd mysql-test.
  2. Create a package.json file: npm init -y.
  3. Install the mysql module: npm install mysql.
  4. Create an app.js file and copy in the snippet below (editing the placeholders as appropriate).
  5. Run the file: node app.js. Observe a “Connected!” message.
const mysql = require('mysql');

const connection = mysql.createConnection({
  host: 'localhost',
  user: 'user',
  password: 'password',
  database: 'database name'
});

connection.connect((err) => {
  if (err) throw err;
  console.log('Connected!');
});

Installing the mysql Module

Now let’s take a closer look at each of those steps.

mkdir mysql-test
cd mysql-test
npm init -y
npm install mysql

First of all, we're using the command line to create a new directory and navigate to it. Then we're creating a package.json file using the command npm init -y. The -y flag means that npm will use defaults without going through an interactive process.

This step also assumes that you have Node and npm installed on your system. If this is not the case, then check out this SitePoint article to find out how to do that: Install Multiple Versions of Node.js using nvm.

After that, we’re installing the mysql module from npm and saving it as a project dependency. Project dependencies (as opposed to devDependencies) are those packages required for the application to run. You can read more about the differences between the two here.

If you need further help using npm, then be sure to check out this guide, or ask in our forums.

Getting Started

Before we get on to connecting to a database, it’s important that you have MySQL installed and configured on your machine. If this is not the case, please consult the installation instructions on their home page.

The next thing we need to do is to create a database and a database table to work with. You can do this using a graphical interface, such as Adminer, or using the command line. For this article I'll be using a database called sitepoint and a table called authors. Here's a dump of the database, so that you can get up and running quickly if you wish to follow along:

CREATE DATABASE sitepoint CHARACTER SET utf8 COLLATE utf8_general_ci;
USE sitepoint;

CREATE TABLE authors (
  id int(11) NOT NULL AUTO_INCREMENT,
  name varchar(50),
  city varchar(50),
  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=5;

INSERT INTO authors (id, name, city) VALUES
(1, 'Michaela Lehr', 'Berlin'),
(2, 'Michael Wanyoike', 'Nairobi'),
(3, 'James Hibbard', 'Munich'),
(4, 'Karolina Gawron', 'Wrocław');


Connecting to the Database

Now, let’s create a file called app.js in our mysql-test directory and see how to connect to MySQL from Node.js.

const mysql = require('mysql');

// First you need to create a connection to the database
// Be sure to replace 'user' and 'password' with the correct values
const con = mysql.createConnection({
  host: 'localhost',
  user: 'user',
  password: 'password',
});

con.connect((err) => {
  if (err) {
    console.log('Error connecting to Db');
    return;
  }
  console.log('Connection established');
});

con.end((err) => {
  // The connection is terminated gracefully
  // Ensures all remaining queries are executed
  // Then sends a quit packet to the MySQL server
});

Now open up a terminal and enter node app.js. Once the connection is successfully established you should be able to see the “Connection established” message in the console. If something goes wrong (for example, you enter the wrong password), a callback is fired, which is passed an instance of the JavaScript Error object (err). Try logging this to the console to see what additional useful information it contains.
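As a quick illustration (a small sketch added to the original example), two of the more useful properties on that error object are err.code and err.fatal:

con.connect((err) => {
  if (err) {
    // A bad password, for example, produces the code 'ER_ACCESS_DENIED_ERROR'
    console.log('Error connecting to Db');
    console.log(err.code);  // string identifying the type of error
    console.log(err.fatal); // whether the connection is unusable afterwards
    return;
  }
  console.log('Connection established');
});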

Using nodemon to Watch the Files for Changes

Running node app.js by hand every time we make a change to our code is going to get a bit tedious, so let’s automate that. This part isn’t necessary to follow along with the rest of the tutorial, but will certainly save you some keystrokes.

Let's start off by installing the nodemon package. This is a tool that automatically restarts a Node application when file changes in a directory are detected:

npm install --save-dev nodemon

Now run ./node_modules/.bin/nodemon app.js and make a change to app.js. nodemon should detect the change and restart the app.

Note: we’re running nodemon straight from the node_modules folder. You could also install it globally, or create an npm script to kick it off.
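For example (a hypothetical npm script, not part of the original article), you could add the following to the "scripts" section of your package.json, then start the watcher with npm run watch:

"scripts": {
  "watch": "nodemon app.js"
}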

Executing Queries

Reading

Now that you know how to establish a connection to a MySQL database from Node.js, let’s see how to execute SQL queries. We’ll start by specifying the database name (sitepoint) in the createConnection command:

const con = mysql.createConnection({
  host: 'localhost',
  user: 'user',
  password: 'password',
  database: 'sitepoint'
});

Once the connection is established, we’ll use the con variable to execute a query against the database table authors:

con.query('SELECT * FROM authors', (err, rows) => {
  if (err) throw err;
  console.log('Data received from Db:');
  console.log(rows);
});

When you run app.js (either using nodemon or by typing node app.js into your terminal), you should be able to see the data returned from the database logged to the terminal:

[ RowDataPacket { id: 1, name: 'Michaela Lehr', city: 'Berlin' },
  RowDataPacket { id: 2, name: 'Michael Wanyoike', city: 'Nairobi' },
  RowDataPacket { id: 3, name: 'James Hibbard', city: 'Munich' },
  RowDataPacket { id: 4, name: 'Karolina Gawron', city: 'Wrocław' } ]

Data returned from the MySQL database can be parsed by simply looping over the rows object:

rows.forEach((row) => {
  console.log(`${row.name} lives in ${row.city}`);
});

This gives you the following:

Michaela Lehr lives in Berlin
Michael Wanyoike lives in Nairobi
James Hibbard lives in Munich
Karolina Gawron lives in Wrocław

Creating

You can execute an insert query against a database, like so:

const author = { name: 'Craig Buckler', city: 'Exmouth' };

con.query('INSERT INTO authors SET ?', author, (err, res) => {
  if (err) throw err;
  console.log('Last insert ID:', res.insertId);
});

Note how we can get the ID of the inserted record using the callback parameter.

Updating

Similarly, when executing an update query, the result object reports how many rows the query touched. The code below logs result.changedRows, which counts only the rows whose values actually changed; result.affectedRows also includes rows that were matched but left unchanged:

con.query(
  'UPDATE authors SET city = ? WHERE id = ?',
  ['Leipzig', 3],
  (err, result) => {
    if (err) throw err;
    console.log(`Changed ${result.changedRows} row(s)`);
  }
);

Destroying

The same thing goes for a delete query:

con.query(
  'DELETE FROM authors WHERE id = ?',
  [5],
  (err, result) => {
    if (err) throw err;
    console.log(`Deleted ${result.affectedRows} row(s)`);
  }
);

Advanced Use

I’d like to finish off by looking at how the mysql module handles stored procedures and the escaping of user input.

Stored Procedures

Put simply, a stored procedure is prepared SQL code that you can save to a database, so that it can easily be reused. If you’re in need of a refresher on stored procedures, then check out this tutorial.

Let’s create a stored procedure for our sitepoint database which fetches all the author details. We’ll call it sp_get_authors. To do this, you’ll need some kind of interface to the database. I’m using Adminer. Run the following query against the sitepoint database, ensuring that your user has admin rights on the MySQL server:

DELIMITER $$

CREATE PROCEDURE `sp_get_authors`()
BEGIN
  SELECT id, name, city FROM authors;
END $$

This will create and store the procedure in the information_schema database in the ROUTINES table.

Creating stored procedure in Adminer

Note: if the delimiter syntax looks strange to you, it’s explained here.
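If you'd like to verify that the procedure was stored (an optional check, not part of the original article), you can query information_schema yourself:

SELECT ROUTINE_NAME, ROUTINE_TYPE
FROM information_schema.ROUTINES
WHERE ROUTINE_SCHEMA = 'sitepoint';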

Next, establish a connection and use the connection object to call the stored procedure as shown:

con.query('CALL sp_get_authors()', (err, rows) => {
  if (err) throw err;
  console.log('Data received from Db:');
  console.log(rows);
});

Save the changes and run the file. Once it’s executed, you should be able to view the data returned from the database:

[ [ RowDataPacket { id: 1, name: 'Michaela Lehr', city: 'Berlin' },
    RowDataPacket { id: 2, name: 'Michael Wanyoike', city: 'Nairobi' },
    RowDataPacket { id: 3, name: 'James Hibbard', city: 'Leipzig' },
    RowDataPacket { id: 4, name: 'Karolina Gawron', city: 'Wrocław' } ],
  OkPacket {
    fieldCount: 0,
    affectedRows: 0,
    insertId: 0,
    serverStatus: 34,
    warningCount: 0,
    message: '',
    protocol41: true,
    changedRows: 0 } ]

Along with the data, it returns some additional information, such as the number of affected rows, the insertId, and so on. You need to iterate over the 0th index of the returned data to get the author details separated from the rest of the information:

rows[0].forEach((row) => {
  console.log(`${row.name} lives in ${row.city}`);
});

This gives you the following:

Michaela Lehr lives in Berlin
Michael Wanyoike lives in Nairobi
James Hibbard lives in Leipzig
Karolina Gawron lives in Wrocław

Now let’s consider a stored procedure which requires an input parameter:

DELIMITER $$

CREATE PROCEDURE `sp_get_author_details`(
  in author_id int
)
BEGIN
  SELECT name, city FROM authors WHERE id = author_id;
END $$

We can pass the input parameter while making a call to the stored procedure:

con.query('CALL sp_get_author_details(1)', (err, rows) => {
  if (err) throw err;
  console.log('Data received from Db:\n');
  console.log(rows[0]);
});

This gives you the following:

[ RowDataPacket { name: 'Michaela Lehr', city: 'Berlin' } ]

Most of the time when we try to insert a record into the database, we need the last inserted ID to be returned as an out parameter. Consider the following insert stored procedure with an out parameter:

DELIMITER $$

CREATE PROCEDURE `sp_insert_author`(
  out author_id int,
  in author_name varchar(25),
  in author_city varchar(25)
)
BEGIN
  insert into authors(name, city) values(author_name, author_city);
  set author_id = LAST_INSERT_ID();
END $$

To make a procedure call with an out parameter, we first need to enable multiple statements while creating the connection. So, modify the connection by setting the multipleStatements option to true:

const con = mysql.createConnection({
  host: 'localhost',
  user: 'user',
  password: 'password',
  database: 'sitepoint',
  multipleStatements: true
});

Next, when making a call to the procedure, set an out parameter and pass it in:

con.query( "SET @author_id = 0; CALL sp_insert_author(@author_id, 'Craig Buckler', 'Exmouth'); SELECT @author_id", (err, rows) => { if (err) throw err; console.log('Data received from Db:n'); console.log(rows); }
);

As seen in the above code, we have set an @author_id out parameter and passed it while making a call to the stored procedure. Once the call has been made we need to select the out parameter to access the returned ID.

Run app.js. On successful execution you should be able to see the selected out parameter along with various other information. rows[2] should give you access to the selected out parameter:

[ RowDataPacket { '@author_id': 6 } ]

Note: To delete a stored procedure you need to run the command DROP PROCEDURE <procedure-name>; against the database you created it for.
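For instance (an illustrative command following the note above), to remove one of the procedures we created against the sitepoint database:

USE sitepoint;
DROP PROCEDURE sp_get_author_details;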

Escaping User Input

In order to avoid SQL Injection attacks, you should always escape any data you receive from users before using it inside an SQL query. Let’s demonstrate why:

const userSubmittedVariable = '1';

con.query(
  `SELECT * FROM authors WHERE id = ${userSubmittedVariable}`,
  (err, rows) => {
    if (err) throw err;
    console.log(rows);
  }
);

This seems harmless enough and even returns the correct result:

[ RowDataPacket { id: 1, name: 'Michaela Lehr', city: 'Berlin' } ]

However, try changing the userSubmittedVariable to this:

const userSubmittedVariable = '1 OR 1=1';

We suddenly have access to the entire data set. Now change it to this:

const userSubmittedVariable = '1; DROP TABLE authors';

We’re now in proper trouble!

The good news is that help is at hand. You just have to use the mysql.escape method:

con.query(
  `SELECT * FROM authors WHERE id = ${mysql.escape(userSubmittedVariable)}`,
  (err, rows) => {
    if (err) throw err;
    console.log(rows);
  }
);

You can also use a question mark placeholder, as we did in the examples at the beginning of the article:

con.query(
  'SELECT * FROM authors WHERE id = ?',
  [userSubmittedVariable],
  (err, rows) => {
    if (err) throw err;
    console.log(rows);
  }
);
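One related point worth adding (not covered in the original examples): both mysql.escape and the ? placeholder are for values only. If a table or column name ever comes from user input, the module provides mysql.escapeId and the ?? placeholder for identifiers:

// ?? escapes identifiers (table/column names), ? escapes values
const userSubmittedColumn = 'city'; // hypothetical user input

con.query(
  'SELECT ?? FROM authors WHERE id = ?',
  [userSubmittedColumn, 1],
  (err, rows) => {
    if (err) throw err;
    console.log(rows);
  }
);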

Why Not Just Use an ORM?

Before we get into the pros and cons of this approach, let’s take a second to look at what ORMs are. The following is taken from an answer on Stack Overflow:

Object-Relational Mapping (ORM) is a technique that lets you query and manipulate data from a database using an object-oriented paradigm. When talking about ORM, most people are referring to a library that implements the Object-Relational Mapping technique, hence the phrase “an ORM”.

So this means you write your database logic in the domain-specific language of the ORM, as opposed to the vanilla approach we’ve been taking so far. To give you an idea of what this might look like, here’s an example using Sequelize, which queries the database for all authors and logs them to the console:

// Requires the sequelize package (plus a MySQL driver) to be installed
const Sequelize = require('sequelize');

const sequelize = new Sequelize('sitepoint', 'user', 'password', {
  host: 'localhost',
  dialect: 'mysql'
});

const Author = sequelize.define('author', {
  name: {
    type: Sequelize.STRING,
  },
  city: {
    type: Sequelize.STRING,
  },
}, {
  timestamps: false,
});

Author.findAll().then((authors) => {
  console.log('All authors:', JSON.stringify(authors, null, 4));
});

Whether or not using an ORM makes sense for you will depend very much on what you're working on and with whom. On the one hand, ORMs tend to make developers more productive, in part by abstracting away a large part of the SQL so that not everyone on the team needs to know how to write super-efficient, database-specific queries. It's also easier to move to different database software, because you're developing against an abstraction.

On the other hand, it's possible to write some really messy and inefficient SQL as a result of not understanding how the ORM does what it does. Performance is also an issue, in that it's much easier to optimize queries that don't have to go through the ORM.

Whichever path you take is up to you, but if this is a decision you’re in the process of making, check out this Stack Overflow thread: Why should you use an ORM?. Also check out this post on SitePoint: 3 JavaScript ORMs You Might Not Know.

Conclusion

In this tutorial, we’ve installed the mysql client for Node.js and configured it to connect to a database. We’ve also seen how to perform CRUD operations, work with prepared statements and escape user input to mitigate SQL injection attacks. And yet, we’ve only scratched the surface of what the mysql client offers. For more detailed information, I recommend reading the official documentation.

And please bear in mind that the mysql module is not the only show in town. There are other options too, such as the popular node-mysql2.
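If you do try mysql2, note that it aims to be compatible with the mysql API, so the examples above should work largely unchanged after swapping the require. As a rough sketch (assuming mysql2 is installed, and using its promise wrapper, which the mysql module itself doesn't offer):

const mysql = require('mysql2/promise');

async function listAuthors() {
  // Same connection options as before
  const con = await mysql.createConnection({
    host: 'localhost',
    user: 'user',
    password: 'password',
    database: 'sitepoint',
  });
  const [rows] = await con.execute('SELECT * FROM authors');
  console.log(rows);
  await con.end();
}

listAuthors().catch(console.error);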

Source: https://www.sitepoint.com/using-node-mysql-javascript-client/?utm_source=rss


How To Manage A Technical Debt Properly


Alex Omeyer (@alex-omeyer), Co-founder & CEO at stepsize.com, SaaS to measure & manage technical debt

We're used to thinking that you can't deliver fast and maintain a healthy codebase. But does it really have to be a trade-off?

One of my greatest privileges building Stepsize has been hearing from hundreds of the best engineering teams in the world about how they ship software at pace while maintaining a healthy codebase.

That’s right, these teams go faster because they manage technical debt properly. We’re so used to the quality vs. cost trade-off that this statement sounds like a lie—you can’t both be fast and maintain a healthy codebase.

Martin Fowler does a great job of debunking this idea in his piece 'Is high quality software worth the cost?'. Spoiler:

High quality software is actually cheaper to produce.

The lessons I'll relay in this article are drawn from the combined centuries of experience of the 300+ software engineers I've interviewed.

Why bother?

As Adam Tornhill and I recently discussed in our webinar, software has well and truly eaten the world. And look, if you're here, this will probably sound like a cliché to you. If so, it's because it's true. Look around you: can you name one object that didn't need some form of software intervention to be manufactured, purchased, or delivered to you?

Software companies live and die by the quality of their software, and the speed at which they deliver it.

Stripe found that 'engineers spend 33% of their time dealing with technical debt'. Gartner found that companies that manage technical debt ship 50% faster than those that don't. These data points may seem a little dry, but we intuitively know they're true. How many times have we estimated that a feature would be delivered in a sprint, only for it to take two? Now take a moment to extrapolate and think about the impact this has on your company over a year, two, or its entire lifespan.

Is it not clear that companies who manage technical debt properly simply win?

A simple framework to achieve these results

Google around for 'types of technical debt', and you'll find hordes of articles by authors geeking out about code debt, design debt, architecture debt, process debt, infrastructure debt: this debt, that debt.

These articles are helpful in that they can train you to recognise technical debt when you come across it, but they won’t help you decide how to deal with each piece of debt, let alone how to manage tech debt as a company.

The only thing that matters is whether you’re dealing with a small, medium, or large piece of debt.

The process for small pieces of debt

This is the type of tech debt that can be handled as soon as the engineer spots it in the code: a quick refactoring or a variable rename. Engineers don't need anyone's approval to do this, or to create a ticket for it to be prioritised. It's simply part of their job to apply the boy scout rule coined by Uncle Bob:

Always leave the code better than you found it.

This is table stakes at every software company I've interviewed that has its tech debt under control. It's mostly driven by engineering culture, gets enforced in PRs or with linters, and it's understood that handling small pieces of debt is every individual contributor's responsibility.

The process for medium-sized debt

The top performers I’ve interviewed stress the importance of addressing technical debt continuously as opposed to tackling it in big projects.

Paying off technical debt is a process, not a project.

You do not want to end up in a situation where you need to stop all feature development to rewrite your entire application every three to five years.

This is why these teams dedicate 10-30% of every sprint to maintenance work that tackles technical debt. I call the tech debt that is surfaced and addressed as part of this process medium-sized debt.

To determine what proportion of your sprint to allocate to tech debt, find the overlap between the parts of your codebase you'll modify for your feature work and the parts of your codebase where your worst tech debt lives. You can then scope out the tech debt work and allocate resources accordingly. Some teams even increase the scope of their feature work to include the relevant tech debt clean-up. There's more in the article 'How to stop wasting time on tech debt'.

For this to work, individual contributors need to track medium-sized debt whenever they come across it. It's then the Team Lead's responsibility to prioritise this list of tech debt and to discuss it with the Product Manager prior to sprint planning, so that engineering resources can be allocated effectively.

The process for large pieces of debt

Every once in a while, your team will realise that some of the medium-sized debt they came across is actually due to a much larger piece of debt. For example, they may realise that the reason the frontend code is underperforming is that they should be using a different framework for the job.

Left unattended, these large pieces of debt can cause huge problems, and—like all tech debt—get much worse as time goes by.

The best companies I've interviewed have monthly or quarterly technical planning sessions in which all engineering and product leaders participate. Depending on the size of the company, Staff Engineers, Principal Engineers, and/or Engineering Managers are responsible for putting together technical proposals outlining the problem, solution, and business case for each of these large pieces of debt. These then get reviewed by engineering and product leadership, and the ones that get prioritised are added to the roadmap.

How to make this easier

In order to be able to run this process, you need to have visibility into your tech debt. A lot of companies I’ve spoken to try to achieve this by creating a tech debt backlog in their project management tool or in a spreadsheet.

It’s a great way to start, but here’s the problem: these issues will not contain the context necessary for you to prioritise them effectively. Not only do you need to rank each tech debt issue against all others, you also need to convincingly argue that fixing this tech debt is more important than using these same engineering resources towards shipping a new feature instead.

Here’s the vicious cycle that ensues: the team tracks debt, you can’t prioritise it, so you can’t fix it, the backlog grows, it’s even harder to prioritise and make sense of it, you’re still not effectively tackling your debt, so the team stops tracking it. You no longer have visibility into your debt, still can’t prioritise it, and it was all for nothing.

We built Stepsize to solve this exact problem. With our product, engineers can track debt directly from their workflow (code editor, pull request, Slack, and more) so that you can have visibility into your debt. Stepsize automatically picks up important context like the code the debt relates to, and engineers get to quantify the impact the debt is having on the business and the risks it presents (e.g. time lost, customer risk, and more) so that you can prioritise it easily.

You can join the best software companies by adopting this process. Start here.

Previously published at https://www.stepsize.com/blog/how-to-maintain-a-healthy-codebase-while-shipping-fast

Source: https://hackernoon.com/how-to-manage-a-technical-debt-properly-6p1533e6?source=rss


What is key to stirring a Litecoin comeback on the charts?


[PRESS RELEASE – Please Read Disclaimer]

Zug, Switzerland, 9th March, 2021, // ChainWire // Privacy-centric blockchain Concordium has finalized its MVP testnet and concluded a private sale of tokens to fund further development. The company secured $15M in additional funding for its public, permissionless, compliance-ready, privacy-centric blockchain.

In late February, Concordium announced a joint venture with Geely Group, a Fortune 500 company and automotive technology firm. The partnership will focus on building blockchain-based services on Concordium's enterprise-focused chain.

Concordium recently completed Testnet 4, which saw over 2,300 self-sovereign identities issued and over 7,000 accounts created, with more than 1,000 active nodes, 800 bakers, and over 3,600 wallet downloads. The successful testnet led to the release of Concordium smart contracts functionality based on RustLang, with a select group of community members participating in stress-testing the network. Test deployments for smart contracts included gaming, crowdfunding, time-stamping, and voting.

Concordium CEO Lone Fonss Schroder said: “The interest of the community, from RustLang developers, VCs, system integrators, family offices, crypto service providers, and private persons, has been amazing. Concordium has fielded strong demand from DeFi projects looking to build on a blockchain with ID at the protocol level.”

Concordium will bring its blockchain technology into broad use; it also appeals to enterprises, with protocol-level ID protected by zero-knowledge proofs and stable transaction costs to support predictable, fast, and secure transactions. Its core scientific team includes renowned researchers Dr. Torben Pedersen, creator of the Pedersen commitment, and Prof. Ivan Damgård, father of the Merkle-Damgård construction.

Concordium, which is on course for a mainnet launch in Q2, aims to solve the long-standing blockchain-for-enterprise problem in a novel way: a unique software stack based on peer-reviewed, demonstrated identity and privacy technologies, providing speed, security, and counterpart transparency.

The Concordium team intends to announce its post-mainnet roadmap in the coming days.

About Concordium

Concordium is a next-generation, broad-focused, decentralized blockchain and the first to introduce built-in ID at the protocol level. Concordium’s core features solve the shortcomings of classic blockchains by allowing identity management at the protocol level and zero-knowledge proofs, which are used to replace anonymity with perfect privacy. The technology supports encrypted payments with software that upholds future regulatory compliance demands for transactions made on the blockchain. Concordium employs a team of dedicated cryptographers and business experts to further its vision. Protocols are science-proofed by peer reviews and developed in cooperation with Concordium Blockchain Research Center Aarhus, Aarhus University, and other global leading universities, such as ETH Zürich, a world-leading computer science university, and the Indian Institute of Science.

Source: https://coingenius.news/what-is-key-to-stirring-a-litecoin-comeback-on-the-charts/?utm_source=rss&utm_medium=rss&utm_campaign=what-is-key-to-stirring-a-litecoin-comeback-on-the-charts


Bowling in VR!


The bowling ball: By pressing the trigger on the controller, the user can pick up, hold, and release the ball. The weight and speed of the ball mimic the movement a regular bowling ball would have. After it's thrown, the ball respawns in the starting position, either by hitting the backstop or through a manual reset when the user presses the "reset ball" button.

The pins: Once hit by the bowling ball, the pins fall down after the collision. Once all of the pins are hit, they respawn to reset the game. The pins can also be reset manually with the button seen in the image above, on the right-hand side. The designs of both the ball and the pins were pre-created assets from the asset store, available for free.

The asset store is your friend!

The lane: With wood flooring, the lane has two walls on either side and a backstop, simulating the bumpers you'd regularly see when bowling.


Here’s a look into how the game actually works!

The Physics:

These are the components added to the bowling ball for collision handling and interactivity. The rigid body and mesh collider are also applied to the bowling pins; the only differences are the mass (the pins are 1, while the ball is 3) and the OVR Grabbable script.

Rigid body: This makes sure the laws of physics and gravity are applied to the game objects, and lets you apply forces and control them in a realistic way.

Mesh Collider: The checkmark on "Convex" indicates that this mesh collider will collide with other mesh colliders, so that objects don't fall through the floor!

OVR Grabbable: This script comes pre-made in the free Oculus Integration from the asset store, allowing user interactivity.

Pin and Ball Reset:

To reset both the pins and the ball, the user can click on these floating buttons in order to do so. I followed this tutorial (step 4 & 5) to add the buttons, but the most important step is integrating the Tilia package along with all the prefabs (fully configured game objects that have already been created for your use).

Installing the Tilia package into Unity: navigate to the manifest.json file in Finder (go to the actual folder on your computer). After opening it up, there will be a section that says "dependencies". At the bottom of this section, add in "io.extendreality.tilia.input.unityinputmanager": "1.3.16". The code below is a shortened version of what the dependencies look like, including all the Tilia extensions I added to complete this project.

 "dependencies": {
"com.unity.xr.legacyinputhelpers": "2.1.7",
"com.unity.xr.management": "3.2.17",
"com.unity.xr.oculus": "1.6.1",
"com.unity.xr.openvr.standalone": "2.0.5",
"com.unity.modules.xr": "1.0.0",

//Above are a few of the dependencies already installed, while the tilia extensions below were manually added

"io.extendreality.tilia.input.unityinputmanager": "1.3.16",
"io.extendreality.tilia.indicators.objectpointers.unity": "1.6.7",
"io.extendreality.tilia.camerarigs.trackedalias.unity": "1.5.7",
"io.extendreality.tilia.camerarigs.unityxr": "1.4.9",
"io.extendreality.tilia.camerarigs.spatialsimulator.unity": "1.2.31",
"io.extendreality.tilia.interactions.interactables.unity": "1.15.7",
"io.extendreality.tilia.interactions.spatialbuttons.unity": "1.2.3"
}

For the code behind the reset, here is a look into the C# script for the pins:

The Awake() call is used to load certain set up necessary prior to the game scene.

The SavePositions() method is called, where all the starting positions of the pins are logged in an array.

The ResetPositions() method contains a for loop, and goes through each of the pins to set the position and rotation to the original value saved previously. The velocity is also flattened on the pins, in the case that it spun out after being knocked over.
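The script itself appeared as an image in the original post. Here's a minimal C# sketch of what a pin-reset script along those lines could look like, assuming the pins are assigned in the Unity Inspector (names and details are illustrative, not the author's exact code):

using UnityEngine;

public class PinReset : MonoBehaviour
{
    // Assign the pin rigidbodies in the Inspector (illustrative setup)
    public Rigidbody[] pins;

    private Vector3[] startPositions;
    private Quaternion[] startRotations;

    void Awake()
    {
        // Load the set-up needed before the game scene starts
        SavePositions();
    }

    void SavePositions()
    {
        // Log the starting position and rotation of every pin in arrays
        startPositions = new Vector3[pins.Length];
        startRotations = new Quaternion[pins.Length];
        for (int i = 0; i < pins.Length; i++)
        {
            startPositions[i] = pins[i].transform.position;
            startRotations[i] = pins[i].transform.rotation;
        }
    }

    // Hook this up to the floating reset button
    public void ResetPositions()
    {
        for (int i = 0; i < pins.Length; i++)
        {
            // Restore the original position and rotation...
            pins[i].transform.position = startPositions[i];
            pins[i].transform.rotation = startRotations[i];
            // ...and flatten the velocity in case a pin spun out
            pins[i].velocity = Vector3.zero;
            pins[i].angularVelocity = Vector3.zero;
        }
    }
}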

Oculus Integration:

The Hierarchy!

Once again, the asset store comes in really handy! The free Oculus Integration is a compilation of pre-made scripts and functions, such as the OVR Player Controller, which includes the camera rig for the Oculus and controller visibility. To properly set it up with controller integration, I followed this tutorial, which is also mentioned in the resources below. I had to turn on developer mode in the Oculus app and connect my computer to the headset with the USB-C charging cable.

Some awesome resources that helped me out:

Source: https://arvrjourney.com/bowling-in-vr-46e7047e2cc7?source=rss—-d01820283d6d—4


India’s Crypto Ban Uncertain as Finance Minister Touts a Window for Experiments


Aave is a decentralized, open-source, non-custodial liquidity protocol that enables users to earn interest on cryptocurrency deposits, as well as borrow assets through smart contracts.

Aave is interesting (pardon the pun) because interest compounds immediately, rather than monthly or yearly. Returns are reflected by an increase in the number of AAVE tokens held by the lending party. 

Apart from helping to generate earnings, the protocol also offers flash loans. These are trustless, uncollateralized loans where borrowing and repayment occur in the same transaction. 

Assets on Aave as of 3/7/21 (source: aave homepage)

The following article explores Aave’s history, services, tokenomics, security, how the protocol works, and what users should be wary of when using the Aave platform.

How Does Aave Work?

The Aave protocol mints ERC-20 compliant tokens in a 1:1 ratio to the assets supplied by lenders. These tokens are known as aTokens and are interest-bearing in nature. These tokens are minted upon deposit and burned when redeemed. 

These aTokens, such as aDai, are pegged at a ratio of 1:1 to the value of the underlying asset – that is Dai in the case of aDai. 

The lending-borrowing mechanism of the Aave lending pool dictates that lenders will send their tokens to an Ethereum blockchain smart contract in exchange for these aTokens — assets that can be redeemed for the deposited token plus interest.  

aTokens on Aave

Borrowers withdraw funds from the Aave liquidity pool by depositing the required collateral and, also, receive interest-bearing aTokens to represent the equivalent amount of the underlying asset.

Each liquidity pool (the liquidity market in the protocol where lenders deposit and borrowers withdraw) has a predetermined loan-to-value ratio that determines how much the borrower can withdraw relative to their collateral. For example, with a 75% loan-to-value ratio, $100 of collateral would allow a borrower to withdraw at most $75 worth of assets. If the borrower's position goes below the threshold LTV level, they face the risk of liquidation of their assets.

Humble Beginnings as ETHLend 

Aave was founded in May 2017 by Stani Kulechov as a decentralized peer-to-peer lending platform under the name ETHLend to create a transparent and open infrastructure for decentralized finance. ETHLend raised 16.5 million US dollars in its Initial Coin Offering (ICO) on November 25, 2017.

Kulechov, currently serving also as the CEO of Aave, has successfully led the company into the list of top 50 blockchain projects published by PWC. Aave is headquartered in London and backed by credible investors, such as Three Arrows Capital, Framework Ventures, ParaFi Capital, and DTC Capital.

ETHLend widened its bouquet of offerings and rebranded to Aave by September 2018. The Aave protocol was formally launched in January 2020, switching to the liquidity pool model from a Microstaking model.

To add context to this evolution from a Microstaking model to a liquidity pool model: under Microstaking, everyone using the ETHLend platform, whether applying for a loan, funding a loan, or creating a loan offer, had to purchase a ticket to obtain the rights to use the application, and that ticket had to be paid in the platform's native token, LEND. The ticket was a small amount pegged to USD, so the total number of LEND needed varied based on the token's value.

In the liquidity pool model, lenders deposit funds into liquidity pools, creating what's known as a liquidity market, and borrowers can withdraw funds from the liquidity pools by providing collateral. If borrowers become undercollateralized, they face liquidation.

Aave raised another 4.5 million US dollars from an ICO and  3 million US dollars from Framework Ventures on July 8th and July 15th, 2020. 

Aave Pronunciation

Aave is typically pronounced “ah-veh.” 

Aave’s Products and Services

The Aave protocol is designed to help people lend and borrow cryptocurrency assets. Operating under a liquidity pool model, Aave allows lenders to deposit their digital assets into liquidity pools to a smart contract on the Ethereum blockchain. In exchange, they receive aTokens — assets that can be redeemed for the deposited token plus interest.

Aave's functionality

Borrowers can take out a loan by putting their cryptocurrency as collateral. The liquidity protocol of Aave, as per the latest available numbers, is more than 4.73 billion US dollars strong. 

Flash Loans

Aave’s Flash loans are a type of uncollateralized loan option, which is a unique feature even for the DeFi space. The Flash Loan product is primarily utilized by speculators seeking to take advantage of quick arbitrage opportunities. 

Borrowers can instantly borrow cryptocurrency for a matter of seconds: they must return the borrowed amount to the pool within the same transaction. If they fail to do so, the entire transaction reverts, undoing all actions executed up to that point.

Flash loans encourage a wide range of investment strategies that typically aren’t possible in such a short window of time. If used properly, a user could profit through arbitrage, collateral swapping, or self-liquidation.

Rate Switching

Aave allows borrowers to switch between fixed and floating rates, which is a fairly unique feature in DeFi. Interest rates in any DeFi lending and borrowing protocol are usually volatile, and this feature offers an alternative by providing an avenue of fixed stability. 

For example, if you’re borrowing money on Aave and expect interest rates to rise, you can switch your loan to a fixed rate to lock in your borrowing costs for the future. In contrast, if you expect rates to decrease, you can go back to floating to reduce your borrowing costs.

Aave Bug Bounty Campaign

Aave offers a bug bounty for cryptocurrency-savvy users. By submitting a bug to the Aave protocol, you can earn a reward of up to $250,000.

Aave Tokenomics

The maximum supply of the AAVE token is 16 million, and the current circulating supply is a little above 12.4 million AAVE tokens.

Initially, AAVE had 1.3 billion tokens in circulation. But in a July 2020 token swap, the protocol swapped the existing tokens for newly minted AAVE coins at a 1:100 ratio, resulting in the current 16 million supply. Three million of these tokens were kept in reserve allocated to the development fund for the core team. 

Aave’s price has been fairly volatile, with an all-time high of $559.12 on February 10, 2021. The lowest price was $25.97 on November 5th, 2020. 

Aave Security

Aave stores funds on a non-custodial smart contract on the Ethereum blockchain. As a non-custodial project, users maintain full control of their wallets. 

Aave governance token holders can stake their tokens in the safety module, which acts as a sort of decentralized insurance fund designed to insure the protocol against shortfall events such as contract exploits. In the module, stakers can risk up to 30% of the funds they lock, and earn a fixed yield of 4.66%.

The safety module has garnered $375 million in deposits, which is arguably the largest decentralized insurance fund of its kind. 

Final Thoughts: Why is Aave Important?

Aave is a DeFi protocol built on strong fundamentals and has forced other competitors in the DeFi space to bolster their value propositions to stay competitive. Features such as Flash loans and Rate switching offer a distinct utility to many of its users.

Aave emerged as one of the fastest-growing projects in the Summer 2020 DeFi craze. At the beginning of July 2020, the total value locked in the protocol was just above $115 million. In less than a year, on February 13, 2021, the protocol crossed the $6 billion mark. The project currently allows borrowing and lending in 20 cryptocurrencies.

Aave is important because it shows how ripe the DeFi space is for disruption with new innovative features and how much room there is to grow.

Source: https://coincentral.com/what-is-aave/

Source: https://coingenius.news/indias-crypto-ban-uncertain-as-finance-minister-touts-a-window-for-experiments/?utm_source=rss&utm_medium=rss&utm_campaign=indias-crypto-ban-uncertain-as-finance-minister-touts-a-window-for-experiments
