
Big Data

EHR Data Launches Movement for Patients to Own Health Data


EHR Data has announced the official launch of EHR Data Wavemakers, a movement geared towards educating and empowering individuals to create waves that will push a much-needed change in the healthcare industry—for patients to be able to own and control their health data.

Many people, from all walks of life, have dealt with miscommunication caused by delays and failures in retrieving and sharing their own health records, problems that may have negatively affected their own or their loved ones' medical treatment and care. At the forefront of the EHR Data Wavemakers movement is the digital campaign My EHR Story, which encourages people to share their stories on social media using the hashtag #myEHRstory. This not only raises awareness of the current situation, in which patients struggle to access personal health data that should rightfully be theirs to own and control, but also spearheads a push for responsible data management built on a blockchain-based global healthcare database.

“EHR Data is a company that’s existed for 41 years in the U.S. It’s not a startup. It is a significant player in the world of healthcare data in the U.S. It’s bringing its 41 years of experience to follow Craig Wright’s lead in empowering people to have more control over their data, in this case, healthcare data. They want to create more patient safety. They’re building the concept of a global electronic health record so that all of your health data can live in one place on the blockchain. As opposed to the current systems we have in the U.S. and many countries where I go to my general practitioner doctor, and they have some of my health records; my dentist has some health records,” Bitcoin Association Founding President Jimmy Nguyen said in a presentation of enterprise solutions built on the Bitcoin SV blockchain during an event in Ljubljana, Slovenia last year.

This movement could not have come at a more appropriate time. As people realize the value of data during the pandemic, now is the time to make waves and enact the changes needed for people to own their data and benefit from it. Not only that, the global healthcare database is built on the Bitcoin SV blockchain, which provides transparency, security, scalability, and immutability to data. Furthermore, the Bitcoin SV blockchain can accommodate big data and low-cost microtransactions and operates on an economically incentivized model, which makes it well suited for a global healthcare database.

“Times are changing, and a greater focus is being placed on interoperability and the patients’ absolute right to increased access to their health data. We will spearhead and shepherd this process; it’s high time that there was a centralized location for healthcare data, controlled and permissioned by the patient, that they and their team of providers can access at any time,” Ron Austring, EHR Data Chief Scientist, explained.

Once patients own their health data, they will be asked for permission whenever an institution wants to use it, and they will be paid for that use. This is in contrast to the current system, where only big businesses profit from collecting people's data, and it is why change is needed: people must come together to revolutionize the system so they can claim back ownership of their data.

Visit https://ehrdata.com/wavemakers to become part of this movement and to learn more about how to share your story.

Author: Makkie Maclang

Source: https://bitcoinassociation.net/bitcoin-sv-means-business-why-bsv-is-the-enterprise-friendly-blockchain/

 

Big Data

8 Tricks in Microsoft Project You Should Know


When using Microsoft Project, you want to spend less of your time on the mechanics of planning and more of it on matters that need your judgment. Microsoft Project (MSP) has the tools you need to make your planning smooth and seamless.

But did you know that there are tips and tricks that can make your work even easier? MSP is a fantastic tool in itself, but there is much more you can do with it if you invest the time and effort to learn it and use it to its full potential.

It is even possible to build a project plan flexible enough to last throughout the project. By following these tips and tricks, you won't have to break a sweat to do just that.

8 Tricks You Need to Learn About Microsoft Project

To get a working project plan out of Microsoft Project, these are the tricks you should learn. This section includes tips on setting options, updating your schedule, and a lot more, and they are widely regarded as Microsoft Project best practices.

You can also pick up plenty of tips from Microsoft Project tutorials and other training programs, which teach many techniques for using the tool. The tricks below, however, you can learn by yourself at home. All you need is a laptop and this list to make your project management seamless.

1. Define the specifics for a project

When making a project plan, it is crucial to define the specifics of the project. This step is often overlooked, but it plays an important role in making your life easier as we go through the list.

It is important to define the date the project started. By default, Microsoft Project uses the date you created the file, so if the project started earlier (or is about to start), you should change it to the proper date.

Defining the start date in Microsoft Project.

You can also define the expected finish date of the project. This is important, especially if your project has a strict deadline, and it helps you spend less time in Microsoft Project while getting the most out of it.

2. Set your settings to Auto-scheduled

The default setting for Microsoft Project tasks is manual scheduling. If a task is manually scheduled, you have to update it by hand every time something changes that could affect the finish date of your project.

That's why it is highly recommended to switch your MSP settings to auto-scheduled. If you have already defined your project start date, all new tasks will automatically use that date as their default start.

3. Have a well-defined calendar

Everything in your project plan follows a calendar. You can edit the calendar in Microsoft Project to mark the holidays observed in a specific year, and you can set the working time for each day. This helps you manage your resources well in Microsoft Project.

There are three default calendars in MSP: the project calendar, the task calendar, and the resource calendar. Each of them can use one of three built-in shifts to define the working time per day:

  • Standard shift – Working time runs from 8 AM to 5 PM with a one-hour break at noon.
  • 24-hour shift – Work starts at midnight on the first day and ends at midnight of the next day, with no break hours.
  • Night shift – Working time runs from 11 PM to 8 AM, with a one-hour break (3 AM–4 AM).

You can use these default calendars as they are, but it is always good to adjust a calendar for your organization's holidays. You can change all of this (and more!) in Microsoft Project.

4. Make sure your tasks are complete before starting a project

When starting a new project, it is less stressful if you complete your list of tasks before the work begins. Adding a new task in the middle of an ongoing project is cumbersome, so try not to miss any tasks when adding them to your plan.

Keyboard shortcut: For Windows OS, press Insert to add another task manually. For macOS, you can use Fn+Enter to add a new task.

5. Define task relationships

When planning projects, you can't escape the fact that some tasks depend on each other, and sometimes these dependencies can make or break your project deadline. When one task is delayed, the tasks that follow it are delayed too.

That's why it is important to define the relationships between your tasks. Once you do, setting actual dates in MSP automatically adjusts the dates of the successor tasks.

6. Identify task hierarchy

You can identify which tasks belong together and which ones fall under each category, then group them into sub-tasks in Microsoft Project. Use the Indent and Outdent features of MSP to build a work breakdown structure (WBS).

Keyboard shortcut: Shift + Alt + Left/Right Arrow Key to indent/outdent a task for Windows. For Mac users, you can use Alt + Left/Right Arrow Keys to outdent/indent tasks, respectively.

7. Baseline start and finish

There are times when you will need to change the start and finish dates of a task. Luckily, you can also set a baseline start and finish for the task. These baseline dates are not affected even if you change the actual start and finish dates, so you always have the original plan to compare against.

8. Check for date constraints

Lastly, you need to check whether there are any conflicts between task dates. You may have to adjust certain tasks and make them independent of one another. This way, you will be alerted whenever a task has a date constraint.

MSP does not change these tasks automatically, so you can adjust the dates of the conflicting tasks yourself to fit them into the schedule. This gives you more control and freedom when scheduling tasks.

Conclusion

The ultimate goal of learning the tips and tricks of MSP is to spend the least time on the software while getting the most out of it. These tricks will help you save a lot of time while still producing a quality project management plan.


AI

Nvidia CEO Jensen Huang interview: From the Grace CPU to engineer’s metaverse


Nvidia CEO Jensen Huang delivered a keynote speech this week to 180,000 attendees registered for the GTC 21 online-only conference. And Huang dropped a bunch of news across multiple industries that show just how powerful Nvidia has become.

In his talk, Huang described Nvidia’s work on the Omniverse, a version of the metaverse for engineers. The company is starting out with a focus on the enterprise market, and hundreds of enterprises are already supporting and using it. Nvidia has spent hundreds of millions of dollars on the project, which is based on the 3D data-sharing standard Universal Scene Description, originally created by Pixar and later open-sourced. The Omniverse is a place where Nvidia can test self-driving cars that use its AI chips and where all sorts of industries will be able to test and design products before they’re built in the physical world.

Nvidia also unveiled its Grace central processing unit (CPU), an AI processor for datacenters based on the Arm architecture. Huang announced new DGX Station mini-supercomputers and said customers will be free to rent them as needed for smaller computing projects. And Nvidia unveiled its BlueField 3 data processing units (DPUs) for datacenter computing alongside new Atlan chips for self-driving cars.

Here’s an edited transcript of Huang’s group interview with the press this week. I asked the first question, and other members of the press asked the rest. Huang talked about everything from what the Omniverse means for the game industry to Nvidia’s plans to acquire Arm for $40 billion.

Above: Nvidia CEO Jensen Huang at GTC 21. (Image Credit: Nvidia)

Jensen Huang: We had a great GTC. I hope you enjoyed the keynote and some of the talks. We had more than 180,000 registered attendees, 3 times larger than our largest-ever GTC. We had 1,600 talks from some amazing speakers and researchers and scientists. The talks covered a broad range of important topics, from AI [to] 5G, quantum computing, natural language understanding, recommender systems, the most important AI algorithm of our time, self-driving cars, health care, cybersecurity, robotics, edge IOT — the spectrum of topics was stunning. It was very exciting.

Question: I know that the first version of Omniverse is for enterprise, but I’m curious about how you would get game developers to embrace this. Are you hoping or expecting that game developers will build their own versions of a metaverse in Omniverse and eventually try to host consumer metaverses inside Omniverse? Or do you see a different purpose when it’s specifically related to game developers?

Huang: Game development is one of the most complex design pipelines in the world today. I predict that more things will be designed in the virtual world, many of them for games, than there will be designed in the physical world. They will be every bit as high quality and high fidelity, every bit as exquisite, but there will be more buildings, more cars, more boats, more coins, and all of them — there will be so much stuff designed in there. And it’s not designed to be a game prop. It’s designed to be a real product. For a lot of people, they’ll feel that it’s as real to them in the digital world as it is in the physical world.

Above: Omniverse lets artists design hotels in a 3D space. (Image Credit: Leeza SOHO, Beijing by ZAHA HADID ARCHITECTS)

Omniverse enables game developers working across this complicated pipeline, first of all, to be able to connect. Someone doing rigging for the animation or someone doing textures or someone designing geometry or someone doing lighting, all of these different parts of the design pipeline are complicated. Now they have Omniverse to connect into. Everyone can see what everyone else is doing, rendering in a fidelity that is at the level of what everyone sees. Once the game is developed, they can run it in the Unreal engine that gets exported out. These worlds get run on all kinds of devices. Or Unity. But if someone wants to stream it right out of the cloud, they could do that with Omniverse, because it needs multiple GPUs, a fair amount of computation.

That’s how I see it evolving. But within Omniverse, just the concept of designing virtual worlds for the game developers, it’s going to be a huge benefit to their work flow.

Question: You announced that your current processors target high-performance computing with a special focus on AI. Do you see expanding this offering, developing this CPU line into other segments for computing on a larger scale in the market of datacenters?

Huang: Grace is designed for applications, software that is data-driven. AI is software that writes software. To write that software, you need a lot of experience. It’s just like human intelligence. We need experience. The best way to get that experience is through a lot of data. You can also get it through simulation. For example, the Omniverse simulation system will run on Grace incredibly well. You could simulate — simulation is a form of imagination. You could learn from data. That’s a form of experience. Studying data to infer, to generalize that understanding and turn it into knowledge. That’s what Grace is designed for, these large systems for very important new forms of software, data-driven software.

As a policy, or not a policy, but as a philosophy, we tend not to do anything unless the world needs us to do it and it doesn’t exist. When you look at the Grace architecture, it’s unique. It doesn’t look like anything out there. It solves a problem that didn’t used to exist. It’s an opportunity and a market, a way of doing computing that didn’t exist 20 years ago. It’s sensible to imagine that CPUs that were architected and system architectures that were designed 20 years ago wouldn’t address this new application space. We’ll tend to focus on areas where it didn’t exist before. It’s a new class of problem, and the world needs to do it. We’ll focus on that.

Otherwise, we have excellent partnerships with Intel and AMD. We work very closely with them in the PC industry, in the datacenter, in hyperscale, in supercomputing. We work closely with some exciting new partners. Ampere Computing is doing a great ARM CPU. Marvell is incredible at the edge, 5G systems and I/O systems and storage systems. They’re fantastic there, and we’ll partner with them. We partner with Mediatek, the largest SOC company in the world. These are all companies who have brought great products. Our strategy is to support them. Our philosophy is to support them. By connecting our platform, Nvidia AI or Nvidia RTX, our raytracing platform, with Omniverse and all of our platform technologies to their CPUs, we can expand the overall market. That’s our basic approach. We only focus on building things that the world doesn’t have.

Above: Nvidia’s Grace CPU for datacenters is named after Grace Hopper. (Image Credit: Nvidia)

Question: I wanted to follow up on the last question regarding Grace and its use. Does this signal Nvidia’s perhaps ambitions in the CPU space beyond the datacenter? I know you said you’re looking for things that the world doesn’t have yet. Obviously, working with ARM chips in the datacenter space leads to the question of whether we’ll see a commercial version of an Nvidia CPU in the future.

Huang: Our platforms are open. When we build our platforms, we create one version of it. For example, DGX. DGX is fully integrated. It’s bespoke. It has an architecture that’s very specifically Nvidia. It was designed — the first customer was Nvidia researchers. We have a couple billion dollars’ worth of infrastructure our AI researchers are using to develop products and pretrain models and do AI research and self-driving cars. We built DGX primarily to solve a problem we had. Therefore it’s completely bespoke.

We take all of the building blocks, and we open it. We open our computing platform in three layers: the hardware layer, chips and systems; the middleware layer, which is Nvidia AI, Nvidia Omniverse, and it’s open; and the top layer, which is pretrained models, AI skills, like driving skills, speaking skills, recommendation skills, pick and play skills, and so on. We create it vertically, but we architect it and think about it and build it in a way that’s intended for the entire industry to be able to use however they see fit. Grace will be commercial in the same way, just like Nvidia GPUs are commercial.

With respect to its future, our primary preference is that we don’t build something. Our primary preference is that if somebody else is building it, we’re delighted to use it. That allows us to spare our critical resources in the company and focus on advancing the industry in a way that’s rather unique. Advancing the industry in a way that nobody else does. We try to get a sense of where people are going, and if they’re doing a fantastic job at it, we’d rather work with them to bring Nvidia technology to new markets or expand our combined markets together.

The ARM license, as you mentioned — acquiring ARM is a very similar approach to the way we think about all of computing. It’s an open platform. We sell our chips. We license our software. We put everything out there for the ecosystem to be able to build bespoke, their own versions of it, differentiated versions of it. We love the open platform approach.

Question: Can you explain what made Nvidia decide that this datacenter chip was needed right now? Everybody else has datacenter chips out there. You’ve never done this before. How is it different from Intel, AMD, and other datacenter CPUs? Could this cause problems for Nvidia partnerships with those companies, because this puts you in direct competition?

Huang: The answer to the last part — I’ll work my way to the beginning of your question. But I don’t believe so. Companies have leadership that are a lot more mature than maybe given credit for. We compete with the ARM GPUs. On the other hand, we use their CPUs in DGX. Literally, our own product. We buy their CPUs to integrate into our own product — arguably our most important product. We work with the whole semiconductor industry to design their chips into our reference platforms. We work hand in hand with Intel on RTX gaming notebooks. There are almost 80 notebooks we worked on together this season. We advance industry standards together. A lot of collaboration.

Back to why we designed the datacenter CPU, we didn’t think about it that way. The way Nvidia tends to think is we say, “What is a problem that is worthwhile to solve, that nobody in the world is solving and we’re suited to go solve that problem and if we solve that problem it would be a benefit to the industry and the world?” We ask questions literally like that. The philosophy of the company, in leading through that set of questions, finds us solving problems only we will, or only we can, that have never been solved before. The outcome of trying to create a system that can train AI models, language models, that are gigantic, learn from multi-modal data, that would take less than three months — right now, even on a giant supercomputer, it takes months to train 1 trillion parameters. The world would like to train 100 trillion parameters on multi-modal data, looking at video and text at the same time.

The journey there is not going to happen by using today’s architecture and making it bigger. It’s just too inefficient. We created something that is designed from the ground up to solve this class of interesting problems. Now this class of interesting problems didn’t exist 20 years ago, as I mentioned, or even 10 or five years ago. And yet this class of problems is important to the future. AI that’s conversational, that understands language, that can be adapted and pretrained to different domains, what could be more important? It could be the ultimate AI. We came to the conclusion that hundreds of companies are going to need giant systems to pretrain these models and adapt them. It could be thousands of companies. But it wasn’t solvable before. When you have to do computing for three years to find a solution, you’ll never have that solution. If you can do that in weeks, that changes everything.

That’s how we think about these things. Grace is designed for giant-scale data-driven software development, whether it’s for science or AI or just data processing.

Above: Nvidia DGX SuperPod. (Image Credit: Nvidia)

Question: You’re proposing a software library for quantum computing. Are you working on hardware components as well?

Huang: We’re not building a quantum computer. We’re building an SDK for quantum circuit simulation. We’re doing that because in order to invent, to research the future of computing, you need the fastest computer in the world to do that. Quantum computers, as you know, are able to simulate exponential complexity problems, which means that you’re going to need a really large computer very quickly. The size of the simulations you’re able to do to verify the results of the research you’re doing to do development of algorithms so you can run them on a quantum computer someday, to discover algorithms — at the moment, there aren’t that many algorithms you can run on a quantum computer that prove to be useful. Grover’s is one of them. Shor’s is another. There are some examples in quantum chemistry.

We give the industry a platform by which to do quantum computing research in systems, in circuits, in algorithms, and in the meantime, in the next 15-20 years, while all of this research is happening, we have the benefit of taking the same SDKs, the same computers, to help quantum chemists do simulations much more quickly. We could put the algorithms to use even today.

And then last, quantum computers, as you know, have incredible exponential complexity computational capability. However, it has extreme I/O limitations. You communicate with it through microwaves, through lasers. The amount of data you can move in and out of that computer is very limited. There needs to be a classical computer that sits next to a quantum computer, the quantum accelerator if you can call it that, that pre-processes the data and does the post-processing of the data in chunks, in such a way that the classical computer sitting next to the quantum computer is going to be super fast. The answer is fairly sensible, that the classical computer will likely be a GPU-accelerated computer.

There are lots of reasons we’re doing this. There are 60 research institutes around the world. We can work with every one of them through our approach. We intend to. We can help every one of them advance their research.

Question: So many workers have moved to work from home, and we’ve seen a huge increase in cybercrime. Has that changed the way AI is used by companies like yours to provide defenses? Are you worried about these technologies in the hands of bad actors who can commit more sophisticated and damaging crimes? Also, I’d love to hear your thoughts broadly on what it will take to solve the chip shortage problem on a lasting global basis.

Huang: The best way is to democratize the technology, in order to enable all of society, which is vastly good, and to put great technology in their hands so that they can use the same technology, and ideally superior technology, to stay safe. You’re right that security is a real concern today. The reason for that is because of virtualization and cloud computing. Security has become a real challenge for companies because every computer inside your datacenter is now exposed to the outside. In the past, the doors to the datacenter were exposed, but once you came into the company, you were an employee, or you could only get in through VPN. Now, with cloud computing, everything is exposed.

The other reason why the datacenter is exposed is because the applications are now aggregated. It used to be that the applications would run monolithically in a container, in one computer. Now the applications for scaled out architectures, for good reasons, have been turned into micro-services that scale out across the whole datacenter. The micro-services are communicating with each other through network protocols. Wherever there’s network traffic, there’s an opportunity to intercept. Now the datacenter has billions of ports, billions of virtual active ports. They’re all attack surfaces.

The answer is you have to do security at the node. You have to start it at the node. That’s one of the reasons why our work with BlueField is so exciting to us. Because it’s a network chip, it’s already in the computer node, and because we invented a way to put high-speed AI processing in an enterprise datacenter — it’s called EGX — with BlueField on one end and EGX on the other, that’s a framework for security companies to build AI. Whether it’s a Check Point or a Fortinet or Palo Alto Networks, and the list goes on, they can now develop software that runs on the chips we build, the computers we build. As a result, every single packet in the datacenter can be monitored. You would inspect every packet, break it down, turn it into tokens or words, read it using natural language understanding, which we talked about a second ago — the natural language understanding would determine whether there’s a particular action that’s needed, a security action needed, and send the security action request back to BlueField.

This is all happening in real time, continuously, and there’s just no way to do this in the cloud because you would have to move way too much data to the cloud. There’s no way to do this on the CPU because it takes too much energy, too much compute load. People don’t do it. I don’t think people are confused about what needs to be done. They just don’t do it because it’s not practical. But now, with BlueField and EGX, it’s practical and doable. The technology exists.

Above: Nvidia’s Inception AI startups over the years. (Image Credit: Nvidia)

The second question has to do with chip supply. The industry is caught by a couple of dynamics. Of course one of the dynamics is COVID exposing, if you will, a weakness in the supply chain of the automotive industry, which has two main components it builds into cars. Those main components go through various supply chains, so their supply chain is super complicated. When it shut down abruptly because of COVID, the recovery process was far more complicated, the restart process, than anybody expected. You could imagine it, because the supply chain is so complicated. It’s very clear that cars could be rearchitected, and instead of thousands of components, it wants to be a few centralized components. You can keep your eyes on four things a lot better than a thousand things in different places. That’s one factor.

The other factor is a technology dynamic. It’s been expressed in a lot of different ways, but the technology dynamic is basically that we’re aggregating computing into the cloud, and into datacenters. What used to be a whole bunch of electronic devices — we can now virtualize it, put it in the cloud, and remotely do computing. All the dynamics we were just talking about that have created a security challenge for datacenters, that’s also the reason why these chips are so large. When you can put computing in the datacenter, the chips can be as large as you want. The datacenter is big, a lot bigger than your pocket. Because it can be aggregated and shared with so many people, it’s driving the adoption, driving the pendulum toward very large chips that are very advanced, versus a lot of small chips that are less advanced. All of a sudden, the world’s balance of semiconductor consumption tipped toward the most advanced of computing.

The industry now recognizes this, and surely the world’s largest semiconductor companies recognize this. They’ll build out the necessary capacity. I doubt it will be a real issue in two years because smart people now understand what the problems are and how to address them.

Question: I’d like to know more about what clients and industries Nvidia expects to reach with Grace, and what you think is the size of the market for high-performance datacenter CPUs for AI and advanced computing.

Huang: I’m going to start with I don’t know. But I can give you my intuition. 30 years ago, my investors asked me how big the 3D graphics was going to be. I told them I didn’t know. However, my intuition was that the killer app would be video games, and the PC would become — at the time the PC didn’t even have sound. You didn’t have LCDs. There was no CD-ROM. There was no internet. I said, “The PC is going to become a consumer product. It’s very likely that the new application that will be made possible, that wasn’t possible before, is going to be a consumer product like video games.” They said, “How big is that market going to be?” I said, “I think every human is going to be a gamer.” I said that about 30 years ago. I’m working toward being right. It’s surely happening.

Ten years ago someone asked me, “Why are you doing all this stuff in deep learning? Who cares about detecting cats?” But it’s not about detecting cats. At the time I was trying to detect red Ferraris, as well. It did it fairly well. But anyway, it wasn’t about detecting things. This was a fundamentally new way of developing software. By developing software this way, using networks that are deep, which allows you to capture very high dimensionality, it’s the universal function approximator. If you gave me that, I could use it to predict Newton’s law. I could use it to predict anything you wanted to predict, given enough data. We invested tens of billions behind that intuition, and I think that intuition has proven right.

I believe that there’s a new scale of computer that needs to be built, that needs to learn from basically Earth-scale amounts of data. You’ll have sensors that will be connected to everywhere on the planet, and we’ll use them to predict climate, to create a digital twin of Earth. It’ll be able to predict weather everywhere, anywhere, down to a square meter, because it’s learned the physics and all the geometry of the Earth. It’s learned all of these algorithms. We could do that for natural language understanding, which is extremely complex and changing all the time. The thing people don’t realize about language is it’s evolving continuously. Therefore, whatever AI model you use to understand language is obsolete tomorrow, because of decay, what people call model drift. You’re continuously learning and drifting, if you will, with society.

There’s some very large data-driven science that needs to be done. How many people need language models? Language is thought. Thought is humanity’s ultimate technology. There are so many different versions of it, different cultures and languages and technology domains. How people talk in retail, in fashion, in insurance, in financial services, in law, in the chip industry, in the software industry. They’re all different. We have to train and adapt models for every one of those. How many versions of those? Let’s see. Take 70 languages, multiply by 100 industries that need to use giant systems to train on data forever. That’s maybe an intuition, just to give a sense of my intuition about it. My sense is that it will be a very large new market, just as GPUs were once a zero billion dollar market. That’s Nvidia’s style. We tend to go after zero billion dollar markets, because that’s how we make a contribution to the industry. That’s how we invent the future.

Above: Arm’s campus in Cambridge, United Kingdom. (Image Credit: Arm)

Question: Are you still confident that the ARM deal will gain approval by close? With the announcement of Grace and all the other ARM-relevant partnerships you have in development, how important is the ARM acquisition to the company’s goals, and what do you get from owning ARM that you don’t get from licensing?

Huang: ARM and Nvidia are independently and separately excellent businesses, as you know well. We will continue to have excellent separate businesses as we go through this process. However, together we can do many things, and I’ll come back to that. To the beginning of your question, I’m very confident that the regulators will see the wisdom of the transaction. It will provide a surge of innovation. It will create new options for the marketplace. It will allow ARM to be expanded into markets that otherwise are difficult for them to reach themselves. Like many of the partnerships I announced, those are all things bringing AI to the ARM ecosystem, bringing Nvidia’s accelerated computing platform to the ARM ecosystem — it’s something only we and a bunch of computing companies working together can do. The regulators will see the wisdom of it, and our discussions with them are as expected and constructive. I’m confident that we’ll still get the deal done in 2022, which is when we expected it in the first place, about 18 months.

With respect to what we can do together, I demonstrated one example, an early example, at GTC. We announced partnerships with Amazon to combine the Graviton architecture with Nvidia’s GPU architecture to bring modern AI and modern cloud computing to the cloud for ARM. We did that for Ampere computing, for scientific computing, AI in scientific computing. We announced it for Marvell, for edge and cloud platforms and 5G platforms. And then we announced it for Mediatek. These are things that will take a long time to do, and as one company we’ll be able to do it a lot better. The combination will enhance both of our businesses. On the one hand, it expands ARM into new computing platforms that otherwise would be difficult. On the other hand, it expands Nvidia’s AI platform into the ARM ecosystem, which is underexposed to Nvidia’s AI and accelerated computing platform.

Question: I covered Atlan a little more than the other pieces you announced. We don’t really know the node side, but the node side below 10nm is being made in Asia. Will it be something that other countries adopt around the world, in the West? It raises a question for me about the long-term chip supply and the trade issues between China and the United States. Because Atlan seems to be so important to Nvidia, how do you project that down the road, in 2025 and beyond? Are things going to be handled, or not?

Huang: I have every confidence that it will not be an issue. The reason for that is because Nvidia qualifies and works with all of the major foundries. Whatever is necessary to do, we’ll do it when the time comes. A company of our scale and our resources, we can surely adapt our supply chain to make our technology available to customers that use it.

Above: BlueField-3 DPU.

Question: In reference to BlueField 3, and BlueField 2 for that matter, you presented a strong proposition in terms of offloading workloads, but could you provide some context into what markets you expect this to take off in, both right now and going into the future? On top of that, what barriers to adoption remain in the market?

Huang: I’m going to go out on a limb and make a prediction and work backward. Number one, every single datacenter in the world will have an infrastructure computing platform that is isolated from the application platform in five years. Whether it’s five or 10, hard to say, but anyway, it’s going to be complete, and for very logical reasons. The application that’s where the intruder is, you don’t want the intruder to be in a control mode. You want the two to be isolated. By doing this, by creating something like BlueField, we have the ability to isolate.

Second, the processing necessary for the infrastructure stack that is software-defined — the networking, as I mentioned, the east-west traffic in the datacenter, is off the charts. You’re going to have to inspect every single packet now. The east-west traffic in the data center, the packet inspection, is going to be off the charts. You can’t put that on the CPU because it’s been isolated onto a BlueField. You want to do that on BlueField. The amount of computation you’ll have to accelerate onto an infrastructure computing platform is quite significant, and it’s going to get done. It’s going to get done because it’s the best way to achieve zero trust. It’s the best way that we know of, that the industry knows of, to move to the future where the attack surface is basically zero, and yet every datacenter is virtualized in the cloud. That journey requires a reinvention of the datacenter, and that’s what BlueField does. Every datacenter will be outfitted with something like BlueField.

I believe that every single edge device will be a datacenter. For example, the 5G edge will be a datacenter. Every cell tower will be a datacenter. It’ll run applications, AI applications. These AI applications could be hosting a service for a client or they could be doing AI processing to optimize radio beams and strength as the geometry in the environment changes. When traffic changes and the beam changes, the beam focus changes, all of that optimization, incredibly complex algorithms, wants to be done with AI. Every base station is going to be a cloud native, orchestrated, self-optimizing sensor. Software developers will be programming it all the time.

Every single car will be a datacenter. Every car, truck, shuttle will be a datacenter. Every one of those datacenters, the application plane, which is the self-driving car plane, and the control plane, that will be isolated. It’ll be secure. It’ll be functionally safe. You need something like BlueField. I believe that every single edge instance of computing, whether it’s in a warehouse, a factory — how could you have a several-billion-dollar factory with robots moving around and that factory is literally sitting there and not have it be completely tamper-proof? Out of the question, absolutely. That factory will be built like a secure datacenter. Again, BlueField will be there.

Everywhere on the edge, including autonomous machines and robotics, every datacenter, enterprise or cloud, the control plane and the application plane will be isolated. I promise you that. Now the question is, “How do you go about doing it? What’s the obstacle?” Software. We have to port the software. There’s two pieces of software, really, that need to get done. It’s a heavy lift, but we’ve been lifting it for years. One piece is for 80% of the world’s enterprise. They all run VMware vSphere software-defined datacenter. You saw our partnership with VMware, where we’re going to take vSphere stack — we have this, and it’s in the process of going into production now, going to market now … taking vSphere and offloading it, accelerating it, isolating it from the application plane.

Above: Nvidia has eight new RTX GPU cards. (Image Credit: Nvidia)

Number two, for everybody else out at the edge, the telco edge, with Red Hat, we announced a partnership with them, and they’re doing the same thing. Third, for all the cloud service providers who have bespoke software, we created an SDK called DOCA 1.0. It’s released to production, announced at GTC. With this SDK, everyone can program the BlueField, and by using DOCA 1.0, everything they do on BlueField runs on BlueField 3 and BlueField 4. I announced the architecture for all three of those will be compatible with DOCA. Now the software developers know the work they do will be leveraged across a very large footprint, and it will be protected for decades to come.

We had a great GTC. At the highest level, the way to think about that is the work we’re doing is all focused on driving some of the fundamental dynamics happening in the industry. Your questions centered around that, and that’s fantastic. There are five dynamics highlighted during GTC. One of them is accelerated computing as a path forward. It’s the approach we pioneered three decades ago, the approach we strongly believe in. It’s able to solve some challenges for computing that are now front of mind for everyone. The limits of CPUs and their ability to scale to reach some of the problems we’d like to address are facing us. Accelerated computing is the path forward.

Second, to be mindful about the power of AI that we all are excited about. We have to realize that it’s a software that is writing software. The computing method is different. On the other hand, it creates incredible new opportunities. Thinking about the datacenter not just as a big room with computers and network and security appliances, but thinking of the entire datacenter as one computing unit. The datacenter is the new computing unit.

Above: Bentley’s tools used to create a digital twin of a location in the Omniverse. (Image Credit: Nvidia)

5G is super exciting to me. Commercial 5G, consumer 5G is exciting. However, it’s incredibly exciting to look at private 5G, for all the applications we just looked at. AI on 5G is going to bring the smartphone moment to agriculture, to logistics, to manufacturing. You can see how excited BMW is about the technologies we’ve put together that allow them to revolutionize the way they do manufacturing, to become much more of a technology company going forward.

Last, the era of robotics is here. We’re going to see some very rapid advances in robotics. One of the critical needs of developing robotics and training robotics, because they can’t be trained in the physical world while they’re still clumsy — we need to give it a virtual world where it can learn how to be a robot. These virtual worlds will be so realistic that they’ll become the digital twins of where the robot goes into production. We spoke about the digital twin vision. PTC is a great example of a company that also sees the vision of this. This is going to be a realization of a vision that’s been talked about for some time. The digital twin idea will be made possible because of technologies that have emerged out of gaming. Gaming and scientific computing have fused together into what we call Omniverse.

Source: https://venturebeat.com/2021/04/17/jensen-huang-interview-from-the-grace-cpu-to-engineers-metaverse-of-the-omniverse/


Artificial Intelligence

Deep Learning vs Machine Learning: How an Emerging Field Influences Traditional Computer Programming


When two different concepts are greatly intertwined, it can be difficult to separate them as distinct academic topics. That might explain why it’s so difficult to separate deep learning from machine learning as a whole. Considering the current push for both automation as well as instant gratification, a great deal of renewed focus has been heaped on the topic.

Everything from automated manufacturing workflows to personalized digital medicine could potentially grow to rely on deep learning technology. Defining the exact aspects of this technical discipline that will revolutionize these industries is, however, admittedly much more difficult. Perhaps it’s best to consider deep learning in the context of a greater movement in computer science.

Defining Deep Learning as a Subset of Machine Learning

Machine learning and deep learning are essentially two sides of the same coin. Deep learning techniques are a specific discipline belonging to a much larger field, one that includes a wide variety of trained artificially intelligent agents that can predict the correct response in an equally wide array of situations. What makes deep learning distinct from these other techniques, however, is that it focuses almost exclusively on teaching agents to accomplish a specific goal by learning the best possible action in a number of virtual environments.

Traditional machine learning algorithms usually teach artificial nodes how to respond to stimuli by rote memorization. This is somewhat similar to human teaching techniques that consist of simple repetition, and it might therefore be thought of as the computerized equivalent of a student running through times tables until they can recite them. While this is effective in a way, artificially intelligent agents educated in such a manner may not be able to respond to any stimulus outside of the realm of their original design specifications.

That’s why deep learning specialists have developed alternative algorithms that are considered somewhat superior to this method, though they are admittedly far more hardware-intensive in many ways. Subroutines used by deep learning agents may be based around generative adversarial networks, convolutional neural network structures, or a practical form of restricted Boltzmann machine. These stand in sharp contrast to the binary trees and linked lists used by conventional machine learning firmware as well as a majority of modern file systems.

Self-organizing maps have also been used widely in deep learning, though their applications in other AI research fields have typically been much less promising. When it comes to the deep learning vs machine learning debate, however, it’s highly likely that technicians will be looking more for practical applications than for theoretical academic discussion in the coming months. Suffice it to say that machine learning encompasses everything from the simplest AI to the most sophisticated predictive algorithms, while deep learning constitutes a more selective subset of these techniques.
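To make that distinction concrete, here is a minimal sketch, not from the original article, that trains a classic machine learning model (a shallow decision tree) and a small neural network on the same toy data with scikit-learn; the dataset, layer sizes, and other settings are illustrative assumptions.

```python
# Illustrative contrast between classic ML (a decision tree) and a small
# neural network; the data and hyperparameters are toy values.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow, readable tree: the kind of conventional model described above.
tree = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)

# A multi-layer network: a very small stand-in for deep learning models.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("decision tree accuracy :", tree.score(X_test, y_test))
print("neural network accuracy:", net.score(X_test, y_test))
```

Either model may win on a toy problem like this; the practical difference tends to show up only as the data grows in size and dimensionality.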

Practical Applications of Deep Learning Technology

Depending on how a particular program is authored, deep learning techniques could be deployed with supervised or semi-supervised neural networks. Theoretically, it would also be possible to do so via a completely unsupervised node layout, and it’s this technique that has quickly become the most promising. Unsupervised networks may be useful for medical image analysis, since this application often presents a computer program with unique pieces of graphical information that have to be tested against known inputs.

Traditional binary tree or blockchain-based learning systems have struggled to identify the same patterns in dramatically different scenarios, because the information remains hidden in a structure that would have otherwise been designed to present data effectively. It’s essentially a natural form of steganography, and it has confounded computer algorithms in the healthcare industry. However, this new type of unsupervised learning node could virtually educate itself on how to match these patterns even in a data structure that isn’t organized along the normal lines that a computer would expect it to be.
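As a rough illustration of that unsupervised idea, the sketch below trains an autoencoder-style network only on examples assumed to be normal and flags inputs it reconstructs poorly. It is a simplified sketch under assumed inputs (flattened feature vectors), not a description of any specific medical imaging system.

```python
# Hedged sketch: unsupervised anomaly detection via reconstruction error.
# X_normal is assumed to be a 2D array of flattened "typical" examples.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_autoencoder(X_normal):
    # The network is trained to reproduce its own input, so it must learn the
    # structure of typical examples through the narrow middle layer.
    ae = MLPRegressor(hidden_layer_sizes=(64, 16, 64), max_iter=1000, random_state=0)
    ae.fit(X_normal, X_normal)
    return ae

def reconstruction_error(ae, X):
    # A high error means the pattern does not match anything the model learned.
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

# Illustrative use: flag the worst-reconstructed 1% of new records for review.
# errors = reconstruction_error(ae, X_new)
# suspicious = errors > np.quantile(errors, 0.99)
```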

Others have proposed implementing semi-supervised artificially intelligent marketing agents that could eliminate much of the ethical concern surrounding existing deal-closing software. Instead of trying to reach as large a customer base as possible, these tools would calculate the odds of any given individual needing a product at a given time. To do so, they would need certain types of information provided by the organization they work on behalf of, but they would eventually be able to predict all further actions on their own.

While some companies currently rely on tools that use traditional machine learning technology to achieve the same goals, these are often fraught with privacy and ethical concerns. The advent of deep structured learning algorithms has enabled software engineers to come up with new systems that don’t suffer from these drawbacks.

Developing a Private Automated Learning Environment

Conventional machine learning programs often run into serious privacy concerns because they need a huge amount of input in order to draw any usable conclusions. Deep learning image recognition software works by processing a smaller subset of inputs, thus ensuring that it doesn’t need as much information to do its job. This is of particular importance for those who are concerned about the possibility of consumer data leaks.

Considering new regulatory stances on many of these issues, this has also quickly become important from a compliance standpoint. As toxicology labs begin using bioactivity-focused deep structured learning packages, it’s likely that regulators will express additional concerns about the amount of information needed to perform any given task with this kind of sensitive data. Computer scientists have had to scale back what some have called a veritable fire hose of bytes that tell more of a story than most would be comfortable with.

In a way, these developments hearken back to an earlier time when it was believed that each process in a system should only have the amount of privileges necessary to complete its job. As machine learning engineers embrace this paradigm, it’s highly likely that future developments will be considerably more secure simply because they don’t require the massive amount of data mining necessary to power today’s existing operations.

Image Credit: toptal.io

Source: https://datafloq.com/read/deep-learning-vs-machine-learning-how-emerging-field-influences-traditional-computer-programming/13652


AI

What did COVID do to all our models?



An interview with Dean Abbott and John Elder about change management, complexity, interpretability, and the risk of AI taking over humanity.


By Heather Fyson, KNIME


After the KNIME Fall Summit, the dinosaurs went back home… well, switched off their laptops. Dean Abbott and John Elder, longstanding data science experts, were invited to the Fall Summit by Michael to join him in a discussion of The Future of Data Science: A Fireside Chat with Industry Dinosaurs. The result was a sparkling conversation about data science challenges and new trends. Since switching off the studio lights, Rosaria has distilled and expanded some of the highlights about change management, complexity, interpretability, and more in the data science world. Let’s see where it brought us.

What is your experience with change management in AI, when reality changes and models have to be updated? What did COVID do to all our models?

 
[Dean] Machine Learning (ML) algorithms assume consistency between past and future. When things change, the models fail. COVID has changed our habits, and therefore our data. Pre-COVID models struggle to deal with the new situation.

[John] A simple example would be the Traffic layer on Google Maps. After lockdowns hit country after country in 2020, Google Maps traffic estimates were very inaccurate for a while. It had been built on fairly stable training data but now that system was thrown completely out of whack.

How do you figure out when the world has changed and the models don’t work anymore?

 
[Dean] Here’s a little trick I use: I partition my data by time and label records as “before” and “after”. I then build a classification model to discriminate the “after” vs. the “before” from the same inputs the model uses. If the discrimination is possible, then the “after” is different from the “before”, the world has changed, the data has changed, and the models must be retrained.
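A minimal sketch of that before/after check, assuming a pandas DataFrame that holds the model's input columns plus a date column; the column names, cutoff date, and AUC threshold below are illustrative choices, not part of the interview.

```python
# Drift check: can a classifier tell "before" records from "after" records
# using only the model's own inputs? Names and thresholds are illustrative.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def world_has_changed(df: pd.DataFrame, feature_cols, date_col="date",
                      cutoff="2020-03-01", threshold=0.65) -> bool:
    # Label each record by era: 0 = "before" the cutoff, 1 = "after".
    era = (pd.to_datetime(df[date_col]) >= pd.Timestamp(cutoff)).astype(int)
    X = df[feature_cols]

    # If the eras are easy to tell apart (AUC well above 0.5), the inputs
    # have shifted and the production model should be retrained.
    auc = cross_val_score(GradientBoostingClassifier(), X, era,
                          cv=5, scoring="roc_auc").mean()
    return auc > threshold

# Example: world_has_changed(df, ["spend", "visits", "basket_size"])
```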

How complicated is it to retrain models in projects, especially after years of customization?

 
[John] Training models is usually the easiest step of all! The vast majority of otherwise successful projects die in the implementation phase. The greatest time is spent in the data cleansing and preparation phase. And the most problems are missed or made in the business understanding / project definition phase. So if you understand what the flaw is and can obtain new data and have the implementation framework in place, creating a new model is, by comparison, very straightforward.

Based on your decades-long experience, how complex is it to put together a really functioning Data Science application?

 
[John] It can vary, of course, by complexity. Most of our projects get functioning prototypes at least in a few months. But for all of them, I cannot stress enough the importance of feedback: You have to talk to people much more often than you want to. And listen! We learn new things about the business problem, the data, or constraints each time. Not all of us quantitative people are skilled at speaking with humans, so it often takes a team. But the whole team of stakeholders has to learn to speak the same language.

[Dean] It is important to talk to our business counterparts. People fear change and don’t want to alter the current status quo. One key problem really is psychological: the analysts are often seen as an annoyance. So, we have to build trust between the business counterparts and the analytics geeks. The start of a project should always include the following step: sync up the domain experts / project managers, the analysts, and the IT and infrastructure (DevOps) team so everyone is clear on the objectives of the project and how it will be executed. Analysts are number 11 on the top 10 list of people they have to see every day! Let’s avoid embodying data scientist arrogance: “The business can’t understand us/our techniques, but we know what works best.” What we often forget, however, is that the domain experts are actually experts in the domain we are working in! Translating data science assumptions and approaches into language that the domain experts understand is key!

The latest trend now is deep learning; apparently it can solve everything. I recently got a question from a student asking, “why do we need to learn other ML algorithms if deep learning is the state of the art to solve data science problems?”

 
[Dean] Deep learning has sucked a lot of the oxygen out of the room. It feels very much like the early 1990s, when neural networks ascended amid similar optimism! Deep learning is a set of powerful techniques, for sure, but they are hard to implement and optimize. XGBoost and other tree ensembles are also powerful and currently more mainstream. The vast majority of problems we need to solve with advanced analytics really don’t require complex solutions, so start simple; deep learning is overkill in these situations. It is best to apply the Occam’s razor principle: if two models perform the same, adopt the simpler one.
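As a minimal illustration of that Occam’s razor rule (not from the interview; the dataset, models, and the 0.01 accuracy tolerance are arbitrary assumptions), one might compare a simple baseline with a more complex model and keep the simple one unless the complex one is clearly better:

```python
# A minimal sketch of the "prefer the simpler model" rule of thumb.
# The dataset, models, and 0.01 accuracy tolerance are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

simple = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
complex_model = GradientBoostingClassifier(random_state=0)

acc_simple = cross_val_score(simple, X, y, cv=5).mean()
acc_complex = cross_val_score(complex_model, X, y, cv=5).mean()

# Keep the simple model unless the complex one is clearly (> 1 point) better.
chosen = complex_model if acc_complex - acc_simple > 0.01 else simple
print(f"simple={acc_simple:.3f}  complex={acc_complex:.3f}  keeping: {type(chosen).__name__}")
```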

Speaking of complexity: the other trend, opposite to deep learning, is ML interpretability. Here, you greatly (perhaps excessively?) simplify the model in order to be able to explain it. Is interpretability that important?

 
[John] I often find myself fighting interpretability. It is nice, sure, but it often comes at too high a cost to the most important model property: reliable accuracy. Yet many stakeholders believe interpretability is essential, so it becomes a barrier to acceptance. Thus, it is essential to discover what kind of interpretability is needed. Perhaps it is just knowing which variables are most important? That is doable with many nonlinear models. Maybe, as with explaining to credit applicants why they were turned down, one just needs to interpret the output for one case at a time? We can build a linear approximation around a given point. Or we can generate data from our black-box model and fit an “interpretable” model of any complexity to that data.
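A minimal sketch of that last, surrogate-model idea (the dataset, the choice of random forest as the black box, and the depth-3 tree are assumptions for illustration): label the data with the black-box model’s predictions, then fit a small, readable model to mimic it.

```python
# A minimal sketch of the surrogate-model idea: score the data with a black-box
# model, then fit a small, readable tree to mimic those predictions.
# The dataset and model choices are assumptions made for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
bb_predictions = black_box.predict(X)            # the black box's view of the data

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, bb_predictions)                 # mimic the black box, not the raw labels

fidelity = surrogate.score(X, bb_predictions)    # how faithfully the tree imitates it
print(f"Surrogate fidelity: {fidelity:.1%}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```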

Lastly, research has shown that if users have the chance to play with a model – that is, to poke it with trial values of inputs and see its outputs, and perhaps visualize it – they get the same warm feelings of interpretability. Overall, trust – in the people and technology behind the model – is necessary for acceptance, and this is enhanced by regular communication and by including the eventual users of the model in the build phases and decisions of the modeling process.

[Dean] By the way, KNIME Analytics Platform has a great feature for quantifying the importance of the input variables in a Random Forest: the Random Forest Learner node outputs statistics on the candidate and splitting variables. Keep that in mind the next time you use that node.
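This is not the KNIME node itself, but a rough scikit-learn analogue of the same idea, ranking input variables by random-forest importance on a placeholder dataset:

```python
# Not the KNIME node itself: a rough scikit-learn analogue of ranking input
# variables by random-forest importance, on a placeholder dataset.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(data.data, data.target)

importances = pd.Series(rf.feature_importances_, index=data.feature_names)
print(importances.sort_values(ascending=False).head(10))   # top 10 inputs
```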

Requests for explanations of what a model does are increasing. For example, for certain security-related applications, the European Union demands verification that the model does not do what it is not supposed to do. If we have to explain it all, then maybe Machine Learning is not the way to go. No more Machine Learning?

 
[Dean] Maybe full explainability is too hard to obtain, but we can make progress by performing a grid search over the model’s inputs to create something like a scorecard describing what the model does. This is akin to regression testing in hardware and software QA. If a formal proof of what a model is doing is not possible, then let’s test and test and test! Input shuffling and target shuffling can help build a rough picture of the model’s behavior.
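A minimal sketch of the target-shuffling idea as a sanity check (assuming a scikit-learn-style model, feature matrix X, target y, and mean cross-validated accuracy as the score; the shuffle count is arbitrary):

```python
# A minimal sketch of target shuffling as a significance check.
# Assumptions: a scikit-learn-style `model`, feature matrix X, target y,
# and mean cross-validated accuracy as the score; the shuffle count is arbitrary.
import numpy as np
from sklearn.model_selection import cross_val_score

def target_shuffling_pvalue(model, X, y, n_shuffles=100, random_state=0):
    """How often does the model score as well on shuffled (meaningless) targets
    as on the real ones? A small fraction suggests the real result is not luck."""
    rng = np.random.default_rng(random_state)
    real_score = cross_val_score(model, X, y, cv=5).mean()
    shuffled_scores = [
        cross_val_score(model, X, rng.permutation(y), cv=5).mean()
        for _ in range(n_shuffles)
    ]
    p_value = np.mean([s >= real_score for s in shuffled_scores])
    return real_score, p_value

# real, p = target_shuffling_pvalue(my_model, X, y)
# print(f"real score {real:.3f}, p-value {p:.3f}")
```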

[John] Talking about understanding what a model does, I would like to raise the problem of reproducibility in science. A huge proportion of journal articles in all fields — 65 to 90% — is believed to be unreplicable. This is a true crisis in science. Medical papers try to tell you how to reproduce their results. ML papers don’t yet seem to care about reproducibility. A recent study showed that only 15% of AI papers share their code.

Let’s talk about Machine Learning Bias. Is it possible to build models that don’t discriminate?

 
[John] (To be a nerd for a second: that word is unfortunately overloaded. To “discriminate” in the ML world is your very goal: to make a distinction between two classes.) But to your real question, it depends on the data, and on whether the analyst is clever enough to adjust for weaknesses in the data. The models will pull out of the data whatever information is reflected therein. The computer knows nothing about the world except what is in the data in front of it. So the analyst has to curate the data and take responsibility for those cases reflecting reality. If certain types of people, for example, are under-represented, then the model will pay less attention to them and won’t be as accurate on them going forward. I ask, “What did the data have to go through to get here?” (that is, into this dataset) to think about how other cases might have dropped out along the way (that is survivor bias). A skilled data scientist can look for such problems and think of ways to adjust or correct for them.
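One concrete check this suggests, sketched below under assumed names (a `groups` array identifying, say, demographic segments, plus held-out labels and predictions), is simply to compare model accuracy across groups and treat a large gap as a red flag:

```python
# A minimal sketch of one check this suggests: compare accuracy across groups.
# The `groups` array (e.g., a demographic segment per record) and the held-out
# labels/predictions are assumed to exist; a large gap between groups is a red flag.
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_group(y_true, y_pred, groups):
    df = pd.DataFrame({"y": y_true, "pred": y_pred, "group": groups})
    return df.groupby("group").apply(lambda g: accuracy_score(g["y"], g["pred"]))

# per_group = accuracy_by_group(y_test, model.predict(X_test), demographics)
# print(per_group.sort_values())
```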

[Dean] The bias is not in the algorithms. The bias is in the data. If the data is biased, we’re working with a biased view of the world. Math is just math; it is not biased.

Will AI take over humanity?!

 
[John] I believe AI is just good engineering. Will AI exceed human intelligence? In my experience, almost everyone under 40 believes yes, this is inevitable, while most people over 40 (like me, obviously) say no! AI models are fast, loyal, and obedient. Like a good German Shepherd, an AI model will go and get that ball, but it knows nothing about the world other than the data it has been shown. It has no common sense. It is a great assistant for specific tasks, but actually quite dimwitted.

[Dean] On that note, I would like to share two quotes from Marvin Minsky, made in 1961 and 1970 at the dawn of AI, that I think put the future of AI into perspective.

“Within our lifetime some machines may surpass us in general intelligence” (1961)

“In three to eight years we’ll have a machine with the intelligence of a human being” (1970)

These ideas have been around for a long time. Here is one reason why AI will not solve all our problems: we judge its behavior by one number, and one number only: the model error. For example, a model forecasting stock prices over the next five years, built with root mean square error (RMSE) as its error metric, cannot possibly capture the full picture of what the data are actually doing; such a coarse measure severely hampers the model’s ability to flexibly uncover the patterns. We all know that RMSE is too coarse a measure. Deep learning algorithms will continue to get better, but we also need to get better at judging how good a model really is. So, no! I do not think that AI will take over humanity.

We have reached the end of this interview. We would like to thank Dean and John for their time and the insights they shared. Let’s hope we meet again soon!

About Dean Abbott and John Elder

Dean Abbott is Co-Founder and Chief Data Scientist at SmarterHQ. He is an internationally recognized expert and innovator in data science and predictive analytics, with three decades of experience solving problems in omnichannel customer analytics, fraud detection, risk modeling, text mining, and survey analysis. Frequently included in lists of pioneering data scientists, he is a popular keynote speaker and workshop instructor at conferences worldwide, and serves on Advisory Boards for the UC/Irvine Predictive Analytics and UCSD Data Science Certificate programs. He is the author of Applied Predictive Analytics (Wiley, 2014) and co-author of The IBM SPSS Modeler Cookbook (Packt Publishing, 2013).


John Elder founded Elder Research, America’s largest and most experienced data science consultancy, in 1995. With offices in Charlottesville, VA; Baltimore, MD; Raleigh, NC; Washington, DC; and London, the firm has solved hundreds of challenges for commercial and government clients by extracting actionable knowledge from all types of data. Dr. Elder co-authored three books, on practical data mining, ensembles, and text mining, two of which won “book of the year” awards. John has created data mining tools, was a discoverer of ensemble methods, chairs international conferences, and is a popular workshop and keynote speaker.


 
Bio: Heather Fyson is the blog editor at KNIME. Initially on the Event Team, she has a background in translation & proofreading, so by moving to the blog in 2019 she returned to her real passion of working with texts. P.S. She is always interested to hear your ideas for new articles.

Original. Reposted with permission.

Source: https://www.kdnuggets.com/2021/04/covid-do-all-our-models.html
