

How to Rename PDF Files Based on Content



Tired of workflows that require you to rename PDF files or documents? Automate such tedious manual tasks with Nanonets. Click below to check out Nanonets’ Zap to automatically rename PDF files based on their content!

Rename PDF files based on content

Why Rename PDF Files based on their content?

* PDF files shared between organizations are named haphazardly.
* The file names often have nothing to do with the data they contain.
* This makes it hard to keep track of documents and identify them.
* Precious man-hours are spent renaming and organizing such documents for convenient reference.
* Renaming files based on content lets users identify files more quickly and glean key information without opening each one individually.

PDF files are convenient for sharing and storing vast amounts of data/information. But PDF file names are not standardized.

Businesses struggle to organize & identify large numbers of PDF files in their database. The file names often have nothing to do with the underlying content of the document. It is not uncommon for organizations to receive PDF documents with a string of unintelligible characters for a file name.

For example, organizations often receive invoices as PDF files. Vendors follow different file naming conventions and invoicing formats. So vendor A might share a PDF invoice named “Vendor A” and vendor B might title their invoice “July2021 Vendor B”.

Original file vs renamed file

A standardized file naming protocol would make life so much easier – e.g. “Date_VendorName_Amount”. Organizing or identifying invoices renamed in this format would be so much more convenient and practical.

But it’s quite unrealistic to expect vendors or external parties to adhere to specific conventions, such as naming PDF files based on content, for each document they share. For all practical purposes, they might have their own rules or, worse, none at all. Businesses often end up having to manually rename PDF files; an extremely time-consuming, error-prone & inefficient process.

So is there an efficient/automated way to reorganize PDF names based on their content or metadata?

How to Rename PDFs Based on Content

* Set up a Zap with Nanonets & Google Drive in 2 mins.
* Or reuse/customize Nanonets' Zap.
* Just add files to a dedicated Google Drive folder.
* Nanonets extracts data from the documents to rename them in a meaningful way.
* Renamed copies of the files are saved to another Google Drive folder.

The team at Nanonets came up with an elegant solution to rename PDF files based on content – a Zap.

All you need is a Zapier account, a free Google Drive account, and a free Nanonets account. The Zap can be set up (or customized) in 2 minutes.

Here is what the workflow looks like:

  • A new file is added in a folder on Google Drive
  • Nanonets OCR scans the file to extract information from the document
  • A renamed copy of the file (based on the extracted data) is saved to another Google Drive folder
Auto-rename files with Nanonets’ Zap

By connecting Nanonets & Google Drive on Zapier, you can create an automated workflow that renames PDF files according to content within each file. Here’s the shared version of the Zap that can be customized on Zapier for your specific use case.
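The rename step at the heart of this workflow can be sketched in plain Python. This is only an illustration: the field names (`invoice_date`, `vendor_name`, `total_amount`) stand in for whatever the OCR step actually extracts, and are not Nanonets' real output schema.

```python
import shutil
from pathlib import Path

def build_filename(fields: dict) -> str:
    """Build a 'Date_VendorName_Amount' style name from extracted fields.

    `fields` stands in for the data an OCR step might return; the keys
    used here are illustrative, not an actual API schema.
    """
    date = fields["invoice_date"].replace("/", "-")
    vendor = fields["vendor_name"].replace(" ", "")
    amount = fields["total_amount"]
    return f"{date}_{vendor}_{amount}.pdf"

def rename_copy(src: Path, fields: dict, dest_dir: Path) -> Path:
    """Save a renamed copy of `src` into `dest_dir`, mirroring the
    'renamed copies land in another folder' step of the workflow."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / build_filename(fields)
    shutil.copy2(src, dest)
    return dest
```

For an invoice extracted as date 2021/07/14, vendor "Vendor A", amount 1200.00, `build_filename` yields `2021-07-14_VendorA_1200.00.pdf` — the "Date_VendorName_Amount" convention discussed above.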

Although this Zap specifically deals with renaming invoices, Nanonets’ OCR engine has pre-trained algorithms that can extract information from receipts, passports, & driver’s licenses. You can additionally train a custom OCR model with Nanonets to handle different/unknown document types (id cards, reports, bank statements etc.) & file formats (.doc, images etc.).

How to train a custom OCR model with Nanonets

Looking to extract data from financial documents? Check out Nanonets invoice automation or invoice scanner & receipt OCR solutions to optimize your workflows.

Alternate Solutions

* Adobe plugins
* Do the job but are not automated
* Require considerable manual intervention
* Might throw up errors

Most solutions that attempt to rename documents in bulk come in the form of plugins for Adobe's PDF reader, since renaming PDFs is the most popular use case.

While these solutions do a decent job, they are not automated in the true sense. They require considerable manual intervention to operate, and some level of review/validation to check for errors.

Using a template-based approach to extract data, these solutions require users to mark areas of interest in the documents. This allows the plugin/software to identify content correctly in each document with the same layout. But this approach is impractical when dealing with unknown or non-standard document layouts. Users would be forced to make different templates for each document type; an inefficient and tedious approach!

Why Nanonets’ Zap is Better

* Fully automated, scalable & accurate
* AI/ML capabilities that keep learning continuously
* Renames multiple files automatically in seconds
* Handles unknown layouts and various file formats 

Nanonets’ Zap is a truly automated solution powered by Zapier & Nanonets. Just upload the documents to one folder on Google Drive and get the renamed files in another dedicated folder.

Nanonets leverages AI & ML capabilities to accurately extract only the relevant data from documents. This makes renaming PDFs or any other documents based on content straightforward & scalable.

Nanonets can handle documents with unknown or new layouts/formatting with ease. Its algorithms learn continuously and keep getting better with time. Do you want to rename multiple documents that come in various file formats, different layouts and/or multiple languages? Nanonets can handle it all.

Nanonets online OCR & OCR API have many interesting use cases that could optimize your business performance, save costs and boost growth. Find out how Nanonets’ use cases can apply to your product.

Update June 2021: this post was originally published in June 2021 and has since been updated.




The Third Pillar of Trusted AI: Ethics



By Scott Reed.

Building an accurate, fast, and performant model founded upon strong Data Quality standards is no easy task. Taking the model into production with governance workflows and monitoring for sustainability is even more challenging. Finally, ensuring the model is explainable, transparent, and fair based on your organization’s ethics and values is the most difficult aspect of trusted AI.

We have identified three pillars of trust: performance, operations, and ethics. In our previous articles, we covered performance and operations. In this article, we will look at our third and final pillar of trust, ethics.

Ethics relates to the question: “How well does my model align with my organization’s ethics and values?” This pillar primarily focuses on understanding and explaining the mystique of model predictions, as well as identifying and neutralizing any hidden sources of bias. There are four primary components to ethics: 

  • Privacy
  • Bias and fairness
  • Explainability and transparency
  • Impact on the organization

In this article, we will focus on two in particular: bias and fairness, and explainability and transparency.

Bias and Fairness

Examples of algorithmic bias are everywhere today, oftentimes relating to the protected attributes of gender or race, and existing across almost every vertical, including health care, housing, and human resources. As AI becomes more prevalent and accepted in society, the number of incidents of AI bias will only increase without standardized responsible AI practices.

Let’s define bias and fairness before moving on. Bias refers to situations in which, mathematically, the model performs differently (better or worse) for distinct groups in the data. Fairness, on the other hand, is a social construct, subjective and shaped by stakeholders, legal regulations, or values. The intersection between the two lies in context and the interpretation of test results.

At the highest level, measuring bias can be split into two categories: fairness by representation and fairness by error. The former means measuring fairness based on the model’s predictions among all groups, while the latter means measuring fairness based on the model’s error rate among all groups. The idea is to know if the model is predicting favorable outcomes at a significantly higher rate for a particular group in fairness by representation, or if the model is wrong more often for a particular group in fairness by error. Within these two families, there are individual metrics that can be applied. Let’s look at a couple of examples to demonstrate this point.

In a hiring use case where we are predicting if an applicant will be hired or not, we would measure bias within a protected attribute such as gender. In this case, we may use a metric like proportional parity, which satisfies fairness by representation by requiring each group to receive the same percentage of favorable predictions (i.e., the model predicts “hired” 50% of the time for both males and females). 
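The proportional-parity check described above can be sketched in pure Python (the "hired" label and group encodings are illustrative; a real project would use its own):

```python
def favorable_rate(predictions, groups, group, favorable="hired"):
    """Share of `group` members who received the favorable prediction."""
    preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(p == favorable for p in preds) / len(preds)

def proportional_parity_gap(predictions, groups, favorable="hired"):
    """Largest gap in favorable-prediction rates across groups.

    A gap of 0.0 means every group receives the favorable outcome at the
    same rate, i.e. proportional parity holds exactly.
    """
    rates = [favorable_rate(predictions, groups, g, favorable)
             for g in set(groups)]
    return max(rates) - min(rates)
```

In the hiring example, if the model predicts "hired" for 2 of 3 male applicants but only 1 of 3 female applicants, the gap is 1/3 and proportional parity is violated.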

Next, consider a medical diagnosis use case for a life-threatening disease. This time, we may use a metric like favorable predictive value parity, which satisfies fairness by error by requiring each group to have the same precision, or probability of the model being correct when it makes a favorable prediction.
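The error-side counterpart can be sketched the same way: precision computed per group (again illustrative, assuming binary labels where 1 is the favorable/positive class):

```python
def precision_for_group(y_true, y_pred, groups, group, positive=1):
    """Precision (positive predictive value) restricted to one group:
    of the group's positive predictions, how many were actually positive.

    Returns None if the model made no positive predictions for the group.
    """
    pairs = [(t, p) for t, p, g in zip(y_true, y_pred, groups) if g == group]
    predicted_pos = [t for t, p in pairs if p == positive]
    if not predicted_pos:
        return None
    return sum(t == positive for t in predicted_pos) / len(predicted_pos)
```

Predictive value parity holds when this number is (approximately) equal for every group.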

Once bias is identified, there are several ways to mitigate it and push the model toward fairness. Initially, you can analyze your underlying data and determine whether any steps in data curation or feature engineering may help. If a more algorithmic approach is required, a variety of techniques have emerged to assist. At a high level, those techniques can be classified by the stage of the machine learning pipeline in which they are applied:

  • Pre-processing
  • In-processing
  • Post-processing

Pre-processing mitigation happens before any modeling takes place, directly on the training data. In-processing techniques relate to actions taken during the modeling process (i.e., training). Finally, post-processing techniques occur after the modeling process and operate on the model predictions to mitigate bias.
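As one concrete pre-processing example, reweighing (in the style of Kamiran and Calders) assigns each training instance a weight that makes group membership statistically independent of the label; a minimal sketch:

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Pre-processing bias mitigation: per-instance weights that make
    the protected group statistically independent of the label.

    weight(g, y) = P(g) * P(y) / P(g, y); weights above 1 up-weight
    under-represented (group, label) combinations.
    """
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]
```

Training the model with these sample weights is then an ordinary supervised run; the weighting alone counteracts the skew in the data.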

Explainability and Transparency

All Data Science practitioners have been in a meeting where they were caught off-guard trying to explain the inner workings of a model or the model’s predictions. From experience, I know that isn’t a pleasant feeling, but those stakeholders had a point. Trust in ethics also means being able to interpret, or explain, the model and its results as well as possible. 

Explainability should be a part of the conversation when selecting which model to put into production. Choosing a more explainable model is a great way to build rapport between the model and all stakeholders. Certain models are more easily explainable and transparent than others – for example, models that use coefficients (i.e., linear regression) or ones that are tree-based (i.e., random forest). These are very different from deep learning models, which are far less intuitive. The question becomes, should we sacrifice a bit of model performance for a model that we can explain?

At the model prediction level, we can leverage explanation techniques like XEMP or SHAP to understand why a particular prediction was assigned to the favorable or unfavorable outcome. Both methods are able to show which features contribute most, in a negative or positive way, to an individual prediction. 
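For a linear model, this kind of per-feature attribution can be computed directly: each feature contributes its coefficient times its deviation from a baseline input, which is what SHAP reduces to for a linear model with independent features. A minimal sketch, with made-up feature names:

```python
def linear_contributions(coefs, x, baseline):
    """Per-feature contribution of a linear model's prediction for `x`,
    relative to a baseline input: contribution_i = coef_i * (x_i - base_i).
    The contributions sum to the difference between the two predictions.
    """
    return {name: coef * (xi - bi)
            for (name, coef), xi, bi in zip(coefs.items(), x, baseline)}
```

With coefficients {"income": 2.0, "age": -1.0}, an applicant at x = [3, 5] versus a baseline of [1, 4] gets contributions of +4.0 from income and -1.0 from age, so income drove this prediction toward the favorable outcome and age pulled slightly against it.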


In this series, we have covered the three pillars of trust in AI: performance, operations, and ethics. Each plays a significant role in the lifecycle of an AI project. While we’ve covered them in separate articles, in order to fully trust an AI system, there are no trade-offs between the pillars. Enacting trusted AI requires buy-in at all levels and a commitment to each of these pillars. It won’t be an easy journey, but it is a necessity if we want to ensure the maximum benefit and minimize the potential for harm through AI. 




Evolution, rewards, and artificial intelligence




Last week, I wrote an analysis of Reward Is Enough, a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language.

This is in contrast with AI systems that try to replicate specific functions of natural intelligence such as classifying images, navigating physical environments, or completing sentences.

The researchers go as far as suggesting that with well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence, the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals.

The article and the paper triggered a heated debate on social media, with reactions going from full support of the idea to outright rejection. Of course, both sides make valid claims. But the truth lies somewhere in the middle. Natural evolution is proof that the reward hypothesis is scientifically valid. But implementing the pure reward approach to reach human-level intelligence has some very hefty requirements.

In this post, I’ll try to disambiguate in simple terms where the line between theory and practice stands.

Natural selection

In their paper, the DeepMind scientists present the following hypothesis: “Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment.”

Scientific evidence supports this claim.

Humans and animals owe their intelligence to a very simple law: natural selection. I’m not an expert on the topic, but I suggest reading The Blind Watchmaker by biologist Richard Dawkins, which provides a very accessible account of how evolution has led to all forms of life and intelligence on our planet.

In a nutshell, nature gives preference to lifeforms that are better fit to survive in their environments. Those that can withstand challenges posed by the environment (weather, scarcity of food, etc.) and other lifeforms (predators, viruses, etc.) will survive, reproduce, and pass on their genes to the next generation. Those that don't are eliminated.

According to Dawkins, “In nature, the usual selecting agent is direct, stark and simple. It is the grim reaper. Of course, the reasons for survival are anything but simple — that is why natural selection can build up animals and plants of such formidable complexity. But there is something very crude and simple about death itself. And nonrandom death is all it takes to select phenotypes, and hence the genes that they contain, in nature.”

But how do different lifeforms emerge? Every newly born organism inherits the genes of its parent(s). But unlike the digital world, copying in organic life is not an exact thing. Therefore, offspring often undergo mutations, small changes to their genes that can have a huge impact across generations. These mutations can have a simple effect, such as a small change in muscle texture or skin color. But they can also become the core for developing new organs (e.g., lungs, kidneys, eyes), or shedding old ones (e.g., tail, gills).

If these mutations help improve the chances of the organism’s survival (e.g., better camouflage or faster speed), they will be preserved and passed on to future generations, where further mutations might reinforce them. For example, the first organism that developed the ability to parse light information had an enormous advantage over all the others that didn’t, even though its ability to see was not comparable to that of animals and humans today. This advantage enabled it to better survive and reproduce. As its descendants reproduced, those whose mutations improved their sight outmatched and outlived their peers. Through thousands (or millions) of generations, these changes resulted in a complex organ such as the eye.

The simple mechanisms of mutation and natural selection have been enough to give rise to all the different lifeforms that we see on Earth, from bacteria to plants, fish, birds, amphibians, and mammals.
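The selection-plus-mutation loop is simple enough to demonstrate in a few lines of code. This toy sketch evolves random bitstrings toward a target; everything here — the population size, mutation rate, and the bitstring "genome" — is an illustrative simplification, not a model of real biology:

```python
import random

def evolve(target, population_size=50, mutation_rate=0.1,
           generations=200, seed=0):
    """Toy natural selection: bitstrings closest to `target` survive
    each generation and reproduce with small random mutations."""
    rng = random.Random(seed)
    n = len(target)

    def fitness(ind):
        return sum(a == b for a, b in zip(ind, target))

    pop = [[rng.randint(0, 1) for _ in range(n)]
           for _ in range(population_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: population_size // 2]  # selection: fittest half survives
        children = []
        for parent in survivors:
            # reproduction with mutation: each bit may flip
            child = [1 - bit if rng.random() < mutation_rate else bit
                     for bit in parent]
            children.append(child)
        pop = survivors + children
        if fitness(pop[0]) == n:
            break
    return max(pop, key=fitness)
```

Even with purely random mutations, keeping the fittest half each generation reliably climbs toward the target — the same ratchet that, over vastly longer timescales, nonrandom death applies to genes.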

The same self-reinforcing mechanism has also created the brain and its associated wonders. In her book Conscience: The Origin of Moral Intuition, scientist Patricia Churchland explores how natural selection led to the development of the cortex, the main part of the brain that gives mammals the ability to learn from their environment. The evolution of the cortex has enabled mammals to develop social behavior and learn to live in herds, prides, troops, and tribes. In humans, the evolution of the cortex has given rise to complex cognitive faculties, the capacity to develop rich languages, and the ability to establish social norms.

Therefore, if you consider survival as the ultimate reward, the main hypothesis that DeepMind’s scientists make is scientifically sound. However, when it comes to implementing this rule, things get very complicated.

Reinforcement learning and artificial general intelligence


In their paper, DeepMind’s scientists make the claim that the reward hypothesis can be implemented with reinforcement learning algorithms, a branch of AI in which an agent gradually develops its behavior by interacting with its environment. A reinforcement learning agent starts by making random actions. Based on how those actions align with the goals it is trying to achieve, the agent receives rewards. Across many episodes, the agent learns to develop sequences of actions that maximize its reward in its environment.
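That trial-and-error loop can be made concrete with a minimal tabular Q-learning agent on a toy corridor environment (the environment and hyperparameters are illustrative, not anything from the paper):

```python
import random

def train_corridor_agent(length=5, episodes=500, alpha=0.5,
                         gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a 1-D corridor: the agent starts at cell 0
    and receives a reward of 1.0 only on reaching the rightmost cell.
    Actions: 0 = move left, 1 = move right."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(length)]  # q[state][action]
    for _ in range(episodes):
        state = 0
        while state != length - 1:
            # epsilon-greedy: mostly exploit, occasionally explore
            if rng.random() < epsilon:
                action = rng.randint(0, 1)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            next_state = (max(0, state - 1) if action == 0
                          else min(length - 1, state + 1))
            reward = 1.0 if next_state == length - 1 else 0.0
            # move the estimate toward reward + discounted future value
            q[state][action] += alpha * (
                reward + gamma * max(q[next_state]) - q[state][action])
            state = next_state
    return q
```

After training, the value of moving right dominates moving left, so a greedy agent walks straight to the rewarded cell; the single reward at the far end alone shapes the whole behavior.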

According to the DeepMind scientists, “A sufficiently powerful and general reinforcement learning agent may ultimately give rise to intelligence and its associated abilities. In other words, if an agent can continually adjust its behaviour so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent’s behaviour.”

In an online debate in December, computer scientist Richard Sutton, one of the paper’s co-authors, said, “Reinforcement learning is the first computational theory of intelligence… In reinforcement learning, the goal is to maximize an arbitrary reward signal.”

DeepMind has plenty of experience to back this claim. They have already developed reinforcement learning agents that can outmatch humans in Go, chess, Atari, StarCraft, and other games. They have also developed reinforcement learning models to make progress on some of the most complex problems of science.

The scientists further wrote in their paper, “According to our hypothesis, general intelligence can instead be understood as, and implemented by, maximising a singular reward in a single, complex environment [emphasis mine].”

This is where the hypothesis separates from practice. The keyword here is “complex.” The environments that DeepMind (and its quasi-rival OpenAI) have so far explored with reinforcement learning are not nearly as complex as the physical world. And they still required the financial backing and vast computational resources of very wealthy tech companies. In some cases, they still had to dumb down the environments to speed up the training of their reinforcement learning models and cut down the costs. In others, they had to redesign the reward to make sure the RL agents did not get stuck in the wrong local optimum.

(It is worth noting that the scientists do acknowledge in their paper that they can’t offer “theoretical guarantee on the sample efficiency of reinforcement learning agents.”)

Now, imagine what it would take to use reinforcement learning to replicate evolution and reach human-level intelligence. First you would need a simulation of the world. But at what level would you simulate the world? My guess is that anything short of quantum scale would be inaccurate. And we don’t have a fraction of the compute power needed to create quantum-scale simulations of the world.

Let’s say we did have the compute power to create such a simulation. We could start at around 4 billion years ago, when the first lifeforms emerged. We would need an exact representation of the initial state of Earth's environment at the time, and we still don’t have a definite theory on that.

An alternative would be to take a shortcut and start from, say, 8 million years ago, when our monkey ancestors still lived on Earth. This would cut down the time of training, but we would have a much more complex initial state to start from. At that time, there were millions of different lifeforms on Earth, and they were closely interrelated. They evolved together. Taking any of them out of the equation could have a huge impact on the course of the simulation.

Therefore, you basically have two key problems: compute power and initial state. The further you go back in time, the more compute power you’ll need to run the simulation. The further forward you move, the more complex your initial state will be. And since evolution has created all sorts of intelligent and non-intelligent lifeforms, making sure we could reproduce the exact steps that led to human intelligence without any guidance, and only through reward, is a hard bet.

Robot working in kitchen


Many will say that you don’t need an exact simulation of the world and only need to approximate the problem space in which your reinforcement learning agent wants to operate.

For example, in their paper, the scientists mention the example of a house-cleaning robot: “In order for a kitchen robot to maximise cleanliness, it must presumably have abilities of perception (to differentiate clean and dirty utensils), knowledge (to understand utensils), motor control (to manipulate utensils), memory (to recall locations of utensils), language (to predict future mess from dialogue), and social intelligence (to encourage young children to make less mess). A behaviour that maximises cleanliness must therefore yield all these abilities in service of that singular goal.”

This statement is true, but downplays the complexities of the environment. Kitchens were created by humans. For instance, the shape of drawer handles, doorknobs, floors, cupboards, walls, tables, and everything you see in a kitchen has been optimized for the sensorimotor functions of humans. Therefore, a robot that would want to work in such an environment would need to develop sensorimotor skills that are similar to those of humans. You can create shortcuts, such as avoiding the complexities of bipedal walking or hands with fingers and joints. But then, there would be incongruencies between the robot and the humans who will be using the kitchens. Many scenarios that would be easy to handle for a human (walking over an overturned chair) would become prohibitive for the robot.

Also, other skills, such as language, would require even greater common ground between the robot and the humans who share its environment. Intelligent agents must be able to develop abstract mental models of each other to cooperate or compete in a shared environment. Language omits many important details, such as sensory experience, goals, and needs. We fill in the gaps with our intuitive and conscious knowledge of our interlocutor’s mental state. We might make wrong assumptions, but those are the exceptions, not the norm.

And finally, developing a notion of “cleanliness” as a reward is very complicated because it is very tightly linked to human knowledge, life, and goals. For example, removing every piece of food from the kitchen would certainly make it cleaner, but would the humans using the kitchen be happy about it?

A robot that has been optimized for “cleanliness” would have a hard time co-existing and cooperating with living beings that have been optimized for survival.

Here, you can take shortcuts again by creating hierarchical goals, equipping the robot and its reinforcement learning models with prior knowledge, and using human feedback to steer it in the right direction. This would help a lot in making it easier for the robot to understand and interact with humans and human-designed environments. But then you would be cheating on the reward-only approach. And the mere fact that your robot agent starts with predesigned limbs and image-capturing and sound-emitting devices is itself the integration of prior knowledge.

In theory, reward only is enough for any kind of intelligence. But in practice, there’s a tradeoff between environment complexity, reward design, and agent design.

In the future, we might be able to achieve a level of computing power that will make it possible to reach general intelligence through pure reward and reinforcement learning. But for the time being, what works is hybrid approaches that involve learning and complex engineering of rewards and AI agent architectures.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Copyright 2021






OceanDAO Launches 7th Round of Grants, valued at $224K, for Data Science, Developer, AI Research Projects



OceanDAO, a distributed autonomous organization supporting the Ocean Protocol, reveals that the 7th round is now open for submissions. More than $200,000 is being offered for Data Science, Developer, and AI Research projects according to a release shared with Crowdfund Insider.

During its first six months, OceanDAO has “made 49 grants to community projects,” the announcement noted, adding that more than 15M OCEAN tokens were used to vote in the funding initiative, “painting a promising picture of an autonomous future for the Ocean Protocol community.”

The announcement also mentioned that OceanDAO presents opportunities for public financing that’s open to data science and AI practitioners “interested in building and creating streams to sell and curate data.”

The release also noted:

“OceanDAO’s seventh round is now open for submissions with 400,000 OCEAN (valued at $224K USD) available and up to 32,000 OCEAN per project. Proposals are due by July 6th. The community voting period begins on July 8th. Interested parties can pitch project ideas and form teams on the OceanDAO Discord. More information on the submission process can be found on OceanDAO’s website. OceanDAO is the community funding initiative of Ocean Protocol, the data exchange protocol.”

The update pointed out that OceanDAO’s funding has reached almost half a million OCEAN tokens across the first six rounds since its launch. OceanDAO, the grants DAO that helps fund Ocean Protocol community-curated initiatives, has reportedly made 49 allocations since December of last year, with its 7th round now taking submissions.

OceanDAO intends to expand the fast-evolving Ocean ecosystem, as “a key component in the Ocean’s near-term growth and long-term sustainability,” the release noted while adding that OceanDAO remains focused on making strategic investments in certain areas that can assist with expanding the Ocean Protocol ecosystem including: “building and improving applications or integrations to Ocean, community outreach, making data available on an Ocean-powered marketplace, building and improving Ocean core software, and improvements to the OceanDAO.”

Alex Napheys, OceanDAO Community & Growth Lead, stated:

“Our main goal is to support the long-term growth of the Ocean Protocol. The OceanDAO community is evolving monthly including some of the brightest and enthusiastic builders in the new data economy sector. The DAO aims to continually grow the [number] of projects it supports by onboarding the next wave to the OceanDAO community.”

As mentioned in the release, the community behind OceanDAO includes talented data scientists, engineers, builders, educators, and more. OceanDAO holds monthly rounds, during which teams are invited to apply for grants.

OceanDAO community regularly casts its votes for initiatives that aim to provide the best chance for growth and sustainability “based on the following criteria: return on investment towards growth and alignment with Ocean’s mission.”

Town Hall meetings are “held every week and are open to the public to discuss the status of projects and the future of the DAO,” the announcement confirmed.

OceanDAO backs initiatives across “all aforementioned categories with financial resources to meet their objectives.”

OceanDAO investments reportedly include:

  •, the project “creates a two-sided market and economy for crowdsourced data to enable long and short-term benefits of AI for everyone.”
  •, helping data scientists “to make better decisions when buying data online.”
  • Opsci Bay, an open science bay “for self-sovereign data flows from Lab to Market that is GDPR-compliant.”
  • Data Whale, a user-friendly “one-stop” solution that “helps data economy participants to understand the ecosystem and make smart staking decisions.”
  • ResilientML, will bring a vast collection of data sets “curated by experts in NLP for utilization directly in machine learning methods and sentiment models running in the Ocean environment and available through the Ocean marketplace.”

As noted in the release:

“As the projects drive traction in the Ocean ecosystem, it grows network fees and improves fundamentals for OCEAN, which in turn increases funds to OceanDAO available for future investments. This “snowball effect” is a core mechanism of the Web3 Sustainability Loop developed by Ocean Protocol Founder Trent McConaghy, in which both Network Revenue and Network Rewards are directed to work that is used for growth.”

Network Rewards help “to kickstart the project and to ensure funding. Network Revenue can help to push growth further once the Web3 project achieves traction at scale,” the announcement noted.

You may access the list of initiatives supported since OceanDAO’s launch here. OceanDAO has reportedly seen more than 60 proposals since December of last year, and all project proposals are publicly available to view online.

As previously reported, Ocean Protocol’s mission is to support a new Data Economy that “reaches the world, giving power back to data owners and enabling people to capture value from data to better our world.”

According to Ocean Protocol developers, data is like “a new asset class; Ocean Protocol unlocks its value.” Data owners and consumers use the Ocean Market app “to publish, discover, and consume data assets in a secure, privacy-preserving fashion.”

Ocean datatokens “turn data into data assets” and this enables data wallets, data exchanges, and data co-ops by “leveraging crypto wallets, exchanges, and other DeFi tools.” Projects use Ocean libraries and OCEAN in their own apps “to help drive the new Data Economy.”

The OCEAN token is used “to stake on data, govern Ocean Protocol’s community funding, and buy & sell data,” the announcement explained while confirming that its supply is “disbursed over time to drive near-term growth and long-term sustainability.” OCEAN has been designed “to increase with a rise in usage volume.”


Artificial Intelligence

AI Fraud Protection Firm Servicing Digital Goods Raises $6.8 Million Seed Round



The Israel-based company has raised a $6.8 million Seed round led by DisruptiveAI, Phoenix Insurance, Kamet (an AXA-backed VC), Moneta Seeds, and other individual investors. It is a “predictive AI fraud protection company” that services digital goods such as gift cards, prepaid debit cards, software and game keys, digital wallet transfers, international money transfers, tickets, and more. The company explains that sellers of physical goods have processing times that allow them to double-check charges and withhold a shipment if needed. Digital sellers lack this buffer, so even if fraud is detected minutes later, the assailant may be untraceable. The company is bringing anti-fraud technology and chargeback guarantees to the digital goods sector.

“We are thrilled that our investors have placed their trust in our leadership and confidence in [the company],” says Alex Zeltcer, co-founder and CEO. “This investment enables us to register thousands of new merchants, who can feel confident selling higher-risk digital goods, without accepting fraud as a part of business.”

Founders Zeltcer and Ziv Isaiah say they experienced first-hand the unique challenges faced by retailers of digital assets. During the first week of operating their online gift card business, 40% of sales were fraudulent, resulting in chargebacks. The company’s 98% approval rate offers a more accurate fraud-detection strategy, allowing retailers to recapture nearly $100 billion a year in revenue lost by declining legitimate customers, according to Zeltcer.

Gadi Tirosh, Venture Partner at Disruptive AI, says they believe fraud, especially in the field of digital goods, can only be fought with top-of-the-line AI technologies.

“[The company] has both the technology and industry understanding to win this market.”

The funding is expected to be used to further develop the company’s predictive AI and machine learning algorithms. The solution currently monitors and manages millions of transactions every month, and has approved close to $1 billion in volume since going live.
