StructureFlow’s accelerated 2020 growth sees expansion of international operations, customer base and new hires
StructureFlow, a legal tech start-up helping lawyers and finance professionals quickly and easily visualise complex legal structures and transactions, announces two hires to its senior leadership team and continued expansion of its international operations and customer roster. The start-up is also currently participating in Allen & Overy’s ‘Fuse’ Incubator and Founders Factory’s 6-month FinTech accelerator programme as part of its growth strategy.

StructureFlow was founded by former corporate lawyer Tim Follett, who developed the cloud-based software to address the difficulties and inefficiencies he faced when trying to visualise complex legal structures and transactions using tools that were not up to the task. The start-up was formally launched earlier this year, at a time when many firms were, and continue to be, heavily focused on finding new technologies that enable efficient collaborative working.

Global growth and expanding beyond the legal industry

StructureFlow opened its first international office outside of the UK in Singapore earlier this year and has been running successful pilots of its visualisation software with prestigious international law firms. In addition to its growing UK customer base, the company is expanding internationally and is excited to announce that it will be onboarding customers in India, Australia, the Netherlands, and Canada in the next month.

Believing that accounting teams, investment banks, private equity firms and venture capital firms will also benefit from access to its visual structuring tool, the start-up has begun venturing beyond its legal customer base, working with a small number of asset management and private fund businesses. These include M7 Real Estate, a leading specialist in pan-European, multi-tenanted commercial real estate investment and asset management.

“We decided to expand internationally despite the pandemic as there is a heightened need for new technologies to support global organisations who are restructuring business models to adapt to the ‘new normal’,” said Alex Baker, Head of Growth. “Our product helps law firms and other financial institutions to work securely whilst working from anywhere and our growing Singapore operations will allow us to better serve our customers in Asia Pacific.”

 

New hires join the senior leadership team

Jean-Paul de Jong joins StructureFlow as its Chief Technology Officer and Chief Security Officer, along with Owen Oliver as its Head of Product.

With a background in enterprise software development and information security, and a track record of many successful large-scale integrations, De Jong has held several prominent positions within regulated industries in both the private and public sectors.

Oliver is a co-founder of Workshare Transact, the legal transaction management application that was acquired by Litera in 2019, having previously been a corporate lawyer with Fieldfisher.

Together, they bring decades of legal and technology leadership and expertise to expedite StructureFlow’s product development and will be instrumental in developing the software to meet the demands of the company’s broadening customer base.

Tim Follett, CEO of StructureFlow, commented on these developments, “The decision to further expand our presence across the legal and financial technology markets in Europe, Asia and North America is a logical step in our business growth strategy. The addition of Jean-Paul de Jong and Owen Oliver will bring first-class engineering, security and product expertise to our team, bolstering our ability to build and scale innovative enterprise products.”

Accelerator programmes to complement growth strategy

In a move to broaden its global presence and increase the impact of its product, StructureFlow has joined two reputable accelerator programmes this year. Following the company’s success as the inaugural winner of Slaughter and May’s Collaborate programme, StructureFlow has subsequently joined the fourth cohort of Fuse, Allen & Overy’s flagship legal tech incubator.
More recently, the team has partnered with Founders Factory, joining its FinTech accelerator programme, which gives StructureFlow access to the programme’s corporate partners. The programme also includes mentorship to support the company’s growth and to broaden the impact of its product across the legal, financial services and other key sectors.

“Being accepted into two industry-acclaimed incubator and accelerator programmes is a crucial part of our expansion plans and will provide us with expert guidance to further develop our visualisation software. By utilising the expertise of industry experts, we expedite our plan of becoming a global platform for a range of organisations and stakeholders to visually engage with essential corporate information,” Follett added.

Source: https://www.fintechnews.org/structureflows-accelerated-2020-growth-sees-expansion-of-international-operations-customer-base-and-new-hires/

Simple & Intuitive Ensemble Learning in R

Read about metaEnsembleR, an R package for heterogeneous ensemble meta-learning (classification and regression) that is fully-automated.


By Ajay Arunachalam, Orebro University

I have always believed in democratizing AI and machine learning, and in spreading knowledge in a way that caters to a broad audience, so that more people can exploit the power of AI.

One such attempt is the development of an R package for fully automated meta-level ensemble learning (classification and regression). It significantly lowers the barrier for practitioners, even amateurs, to apply heterogeneous ensemble learning techniques to their everyday predictive problems.

Before we delve into the package details, let’s understand a few basic concepts.

Why Ensemble Learning?

 
Generally, predictions become unreliable when the input sample is outside the training distribution, when there is bias in the data distribution, when there is noise, and so on. Most approaches to these problems require changes to the network architecture, fine-tuning, balanced data, increased model size, etc. Further, the choice of algorithm plays a vital role, while scalability and learning ability decrease with more complex datasets. Combining multiple learners is an effective approach that has been applied to many real-world problems. Ensemble learners combine a diverse collection of predictions from the individual base models to produce a composite predictive model that is more accurate and robust than its components. With meta ensemble learning one can minimize generalization error to some extent, irrespective of the data distribution, the number of classes, the choice of algorithm, the number of models, the complexity of the dataset, etc. So, in summary, the predictive models generalize better.

How can we build models in a more stable fashion while minimizing under-fitting and overfitting, both of which are critical to the overall outcome? The solution is ensemble meta-learning of a heterogeneous collection of base learners.

Common Ensemble Learning Techniques

 
The most popular ensemble techniques are shown in the figure below. Stacked generalization is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy. In the bagging method, independent base models are derived from bootstrap samples of the original dataset. The boosting method grows an ensemble iteratively, in a dependent fashion, adjusting the weight of an observation based on past predictions. There are several extensions of bagging and boosting.

[Figure: overview of common ensemble learning techniques]
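To make stacked generalization concrete, here is a minimal hand-rolled sketch in base R. It is purely illustrative, and is not how metaEnsembleR is implemented internally: two heterogeneous base learners are fit, and their predictions become the input features of a level-1 meta-learner.

set.seed(42)
idx   <- sample(nrow(iris), 100)
train <- iris[idx, ]
test  <- iris[-idx, ]

# Level 0: two base learners predicting Petal.Width from different features
m1 <- lm(Petal.Width ~ Sepal.Length + Sepal.Width, data = train)
m2 <- lm(Petal.Width ~ Petal.Length, data = train)

# Their predictions become the features of the level-1 meta-learner
meta_train <- data.frame(p1 = predict(m1, newdata = train),
                         p2 = predict(m2, newdata = train),
                         y  = train$Petal.Width)
meta <- lm(y ~ p1 + p2, data = meta_train)

# Final prediction: pass test-set base predictions through the meta-model
meta_test <- data.frame(p1 = predict(m1, newdata = test),
                        p2 = predict(m2, newdata = test))
head(predict(meta, newdata = meta_test))

Note that a proper stacking implementation derives the meta-features from out-of-fold predictions so that training labels do not leak into the meta-learner, which is one of the details an automated package spares you from.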

Overview

 
metaEnsembleR is an R package for automated meta-learning (classification and regression). The functionalities provided include simple, user-input-driven predictive modeling with a choice of algorithms, train-validation-test splitting, model evaluation, and guided prediction on unseen data, helping users build stacked ensembles on the go. The core aim of this package is to cater to a broad audience. metaEnsembleR significantly lowers the barrier for practitioners, even amateurs, to apply heterogeneous ensemble learning techniques to their everyday predictive problems.

Using metaEnsembleR

The package consists of the following components:

  • Ensemble Classifiers Training and Prediction
  • Ensemble Regressor Training and Prediction
  • Model evaluation and model results (observed vs. predicted on test data), prediction on new unseen data, and disk I/O for saving performance charts and prediction results

All these functions are very intuitive, and their use is illustrated with the examples below, covering classification and regression problems.

Getting Started

 
The package can be installed directly from CRAN.

Install from the R console:

install.packages("metaEnsembleR")


However, the latest stable version (if any) can be found on GitHub and installed using the devtools package.

Install from GitHub:

if (!require(devtools)) install.packages("devtools")
devtools::install_github(repo = "ajayarunachalam/metaEnsembleR", ref = "main")


Usage

library("metaEnsembleR")
set.seed(111)


Training the ensemble classification model is as simple as a one-line call to the ensembler.classifier function, passing either a csv file directly or an imported dataframe. The function takes its arguments in the following order: the dataset, the outcome/response variable index, the base learners, the final learner, the train-validation-test split ratios, and the unseen data.

ensembler_return <- ensembler.classifier(iris[1:130,], 5, c('treebag','rpart'), 'gbm', 0.60, 0.20, 0.20, read.csv('./unseen_data.csv'))


OR

unseen_new_data_testing <- iris[130:150,]
ensembler_return <- ensembler.classifier(iris[1:130,], 5, c('treebag','rpart'), 'gbm', 0.60, 0.20, 0.20, unseen_new_data_testing)


The above function returns the test data with predictions, the prediction labels, the model results, and finally the predictions on the unseen data.

testpreddata <- data.frame(ensembler_return[1])
table(testpreddata$actual_label)
table(ensembler_return[2])


Performance comparison

modelresult <- ensembler_return[3]
modelresult


Unseen data

unseenpreddata <- data.frame(ensembler_return[4])
table(unseenpreddata$unseenpreddata)


 
Training the ensemble regression model is likewise a one-line call, to the ensembler.regression function, again passing either a csv file directly or an imported dataframe, with the arguments in the same order: the dataset, the outcome/response variable index, the base learners, the final learner, the train-validation-test split ratios, and the unseen data.

house_price <- read.csv(file = './data/regression/house_price_data.csv')
unseen_new_data_testing_house_price <- house_price[250:414,]
write.csv(unseen_new_data_testing_house_price, 'unseen_house_price_regression.csv', fileEncoding = 'UTF-8', row.names = F)
ensembler_return <- ensembler.regression(house_price[1:250,], 1, c('treebag','rpart'), 'gbm', 0.60, 0.20, 0.20, read.csv('./unseen_house_price_regression.csv'))


OR

ensembler_return <- ensembler.regression(house_price[1:250,], 1, c('treebag','rpart'), 'gbm', 0.60, 0.20, 0.20, unseen_new_data_testing_house_price)


The above function returns the test data with predictions, the prediction values, the model results, and finally the unseen data with its predictions.

testpreddata <- data.frame(ensembler_return[1])


Performance comparison

modelresult <- ensembler_return[3]
modelresult
write.csv(modelresult[[1]], “performance_chart.csv”)


Unseen data

unseenpreddata <- data.frame(ensembler_return[4])


Examples

Classification

library("metaEnsembleR")
data("iris")
unseen_new_data_testing <- iris[130:150,]
write.csv(unseen_new_data_testing, 'unseen_check.csv', fileEncoding = 'UTF-8', row.names = F)
ensembler_return <- ensembler.classifier(iris[1:130,], 5, c('treebag','rpart'), 'gbm', 0.60, 0.20, 0.20, unseen_new_data_testing)
testpreddata <- data.frame(ensembler_return[1])
table(testpreddata$actual_label)
table(ensembler_return[2])


Performance comparison

# qplot() is from ggplot2; tableGrob() and grid.arrange() are from gridExtra
library(ggplot2)
library(gridExtra)
modelresult <- ensembler_return[3]
modelresult
act_mybar <- qplot(testpreddata$actual_label, geom = "bar")
act_mybar
pred_mybar <- qplot(testpreddata$predictions, geom = "bar")
pred_mybar
act_tbl <- tableGrob(t(summary(testpreddata$actual_label)))
pred_tbl <- tableGrob(t(summary(testpreddata$predictions)))
ggsave("testdata_actual_vs_predicted_chart.pdf", grid.arrange(act_tbl, pred_tbl))
ggsave("testdata_actual_vs_predicted_plot.pdf", grid.arrange(act_mybar, pred_mybar))


Unseen data

unseenpreddata <- data.frame(ensembler_return[4])
table(unseenpreddata$unseenpreddata)
table(unseen_new_data_testing$Species)


Regression

library("metaEnsembleR")
data("rock")
unseen_rock_data <- rock[30:48,]
ensembler_return <- ensembler.regression(rock[1:30,], 4, c('lm'), 'rf', 0.40, 0.30, 0.30, unseen_rock_data)
testpreddata <- data.frame(ensembler_return[1])


Performance comparison

modelresult <- ensembler_return[3]
modelresult
write.csv(modelresult[[1]], “performance_chart.csv”)


Unseen data

unseenpreddata <- data.frame(ensembler_return[4])


More Examples

 
Comprehensive demonstrations can be found in the Demo.R file; to see the results, run Rscript Demo.R in the terminal.

If there is some implementation you would like to see here, or you would like to add more examples, feel free to do so. You can always reach me at ajay.aruanchalam08@gmail.com

Always Keep Learning & Sharing Knowledge!!!

 
Bio: Ajay Arunachalam (personal website) is a Postdoctoral Researcher (Artificial Intelligence) at the Centre for Applied Autonomous Sensor Systems, Orebro University, Sweden. Prior to this, he was working as a Data Scientist at True Corporation, a communications conglomerate, working with petabytes of data and building and deploying deep models in production. He truly believes that addressing the opacity of AI systems is the need of the hour before we fully accept the power of AI. With this in mind, he has always strived to democratize AI, and is inclined towards building interpretable models. His interest is in applied artificial intelligence, machine learning, deep learning, deep RL, and natural language processing, specifically learning good representations. From his experience working on real-world problems, he fully acknowledges that finding good representations is key to designing systems that can solve interesting, challenging real-world problems, go beyond human-level intelligence, and ultimately explain complicated data that we don’t understand. He envisions learning algorithms that can learn feature representations from both unlabelled and labelled data, be guided with and/or without human interaction, and operate at different levels of abstraction in order to bridge the gap between low-level data and high-level abstract concepts.

Original. Reposted with permission.


Source: https://www.kdnuggets.com/2020/12/simple-intuitive-meta-learning-r.html

NoSQL for Beginners

NoSQL can offer an advantage to those entering data science and analytics, as well as to applications with high-performance needs that aren’t met by traditional SQL databases.


By Alex Williams, Hosting Data UK

What is NoSQL?

 
NoSQL is essentially the response to SQL’s rigid structure. Although non-relational databases first appeared back in the early 1970s, NoSQL didn’t really take off until the late 2000s, when Amazon and Google both put a lot of research and development into it. Since then, it has become an integral part of the modern world, with many big websites using some form of NoSQL.

So what is NoSQL exactly? Essentially, it is a philosophy for creating databases that requires neither a schema nor a relational data model. In fact, there is a variety of NoSQL databases to pick from, each with its own specialization and use cases. As such, NoSQL is incredibly diverse when it comes to filling niches, and you can almost certainly find a NoSQL data model to fit your needs.


Differences Between SQL and NoSQL

 
While SQL is a specific language with a specific relational model, NoSQL isn’t, but that doesn’t mean we can’t look at the general philosophies and differences between the two.

Scalability

 
When it comes to SQL, the only real way to scale is vertically: you need to buy higher-end, more expensive hardware than you already have if you want better performance. Scaling with NoSQL is done through horizontal expansion, so all you really need to do is add another shard and you’re basically done.

This means that NoSQL is great for applications that are likely to grow in the future, where hardware may very well be a substantial roadblock.
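To illustrate the idea of horizontal partitioning, here is a conceptual sketch in base R (not tied to any real database): records are routed to shards by hashing their key, so capacity grows by adding shards rather than by buying a bigger machine.

n_shards <- 3

# Route a record to a shard by hashing its key
shard_for <- function(key, n = n_shards) {
  sum(utf8ToInt(key)) %% n + 1
}

shards <- vector("list", n_shards)
put <- function(key, value) {
  i <- shard_for(key)
  shards[[i]][[key]] <<- value  # the write touches only the owning shard
  invisible(i)
}

put("user:alice", list(plan = "pro"))
put("user:bob", list(plan = "free"))
sapply(c("user:alice", "user:bob"), shard_for)  # which shard holds each key

Adding a fourth shard only requires changing the routing function; real systems use consistent hashing so that fewer keys move when shards are added.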

Schema

 
SQL is built from the ground up to avoid data duplication. That ultimately means that any SQL project requires an expert designer to spend a long time on the schema before implementing it. This step is not only important in the long run for maintaining data quality, it can also get quite expensive.

On the other hand, since NoSQL doesn’t require a schema, you avoid the expense and time of that initial design stage. Furthermore, the lack of a schema means that most NoSQL databases are incredibly flexible, allowing you to change or even mix data types and models. This makes administering and dealing with the database much easier.

Performance

 
With SQL, querying data tends to require joins across multiple tables. With NoSQL, all the data for a record is contained in one place, and therefore querying is much easier. This has the side effect of making NoSQL much better at dealing with high-performance tasks. For example, Amazon DynamoDB can handle millions of queries a second, which is pretty useful for a global store like Amazon.

Support

 
The only really big downside of NoSQL is that it isn’t as established as SQL. Keep in mind that SQL has a head start of several decades, and this maturity shows in the ease with which information can be found. Similarly, finding an expert in SQL is much easier than finding one in NoSQL.

Finally, since NoSQL isn’t a single database or language but rather several dozen data models, this granularity further divides any potential expertise and makes NoSQL a matter of specialization. As a result, information and support will mostly be found in smaller communities.

Types of NoSQL Databases

 
While there are more than half a dozen NoSQL data models to choose from, we’ll cover the four main ones here that are essential to understanding NoSQL databases.


Document Stores

This type of data model allows you to store information as any type of data. This is in contrast to SQL, which relies heavily on XML and JSON and essentially ties the two together, which can make queries less efficient. Since NoSQL doesn’t use a schema, there’s no need for relational data storage, and no need to tie those two together.

In fact, there is a NoSQL data model that is XML specific, if you want to go that route.
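As a rough illustration of the document model, here is a sketch in R using the jsonlite package (the "users" collection and its fields are made up for the example): two documents in the same collection carry different fields, which a fixed relational schema would not allow without a migration.

library(jsonlite)

# Two documents in the same hypothetical "users" collection with different
# fields; a document store accepts both without any schema change.
doc1 <- list(id = 1, name = "Ada", email = "ada@example.com")
doc2 <- list(id = 2, name = "Bob", signup_source = "mobile", tags = c("beta", "eu"))

collection <- list(doc1, doc2)
cat(toJSON(collection, pretty = TRUE, auto_unbox = TRUE))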

 
Graph

Graph or network data models are built around the concept that the relationships between data are just as important as the data itself. In this data model, information is stored as nodes and relationships: the nodes hold the data, and the relationships describe how any set of nodes are connected.

As the name might suggest, and if you’ve been following along, this is an excellent data model to use for showing information on a graph. The ability to quickly visualize information from disparate sets of data, especially in relation to each other, can offer a massive amount of insight without poring through several hundred pages of data.
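A minimal way to see the node/relationship idea is a plain edge list, sketched here in base R (purely illustrative, not a real graph database):

# Nodes hold the data; edges (relationships) connect them.
nodes <- data.frame(id = c("alice", "bob", "acme"),
                    kind = c("person", "person", "company"))
edges <- data.frame(from = c("alice", "bob", "alice"),
                    to = c("bob", "acme", "acme"),
                    rel = c("KNOWS", "WORKS_AT", "INVESTS_IN"))

# A simple relationship query: who or what is directly connected to "alice"?
edges[edges$from == "alice" | edges$to == "alice", ]

A real graph database stores data this way natively and optimizes the multi-hop traversals that would need repeated self-joins in SQL.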

 
Key-Value Store

As the name suggests, this data model stores information using keys and pointers to specific values. Since both keys and values can be any piece of data you desire, key-value store data models are quite versatile. They are purpose-made for retrieving, storing, and managing arrays, and are perfect for high-volume applications.

In fact, Amazon DynamoDB is a key-value store, and this data model type was pioneered by Amazon itself. A key-value store is also a general category under which other data models exist, with some types of graph data models essentially functioning like key-value stores.
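The core contract of a key-value store is just setting and getting a value by key. Here is a toy sketch in base R, using an environment as the in-memory store (conceptual only):

# An R environment behaves like an in-memory key-value store.
store <- new.env()

kv_set <- function(key, value) assign(key, value, envir = store)
kv_get <- function(key) get(key, envir = store)

kv_set("session:42", list(user = "alice", expires = Sys.time() + 3600))
kv_get("session:42")$user  # "alice"

Because lookups go straight from key to value with no joins or query planning, this access pattern stays fast at very high volumes.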

 
Column-oriented

Whereas SQL traditionally stores data in rows, column-oriented databases store it in columns. Columns are then grouped into families, which can themselves hold a nearly unlimited number of columns. Writing and reading are also done by column, so the whole system is very efficient, and it is made for fast search and access, as well as data aggregation.

That being said, it isn’t that great for complex querying.
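Incidentally, an R data.frame is itself column-oriented, which makes the benefit easy to demonstrate (a loose analogy, not a real columnar database):

# Each column is stored as one contiguous vector, so an aggregate touches
# only the columns it needs rather than every row's full record.
sales <- data.frame(region = c("EU", "US", "EU", "APAC"),
                    amount = c(120, 340, 90, 210))

# Column-wise aggregation: scans only 'amount' plus the grouping column
tapply(sales$amount, sales$region, sum)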

Conclusion

 
One thing to remember is that NoSQL is not meant as a replacement for SQL so much as a supplement to it. NoSQL mostly uses specialized databases to fill in the gaps that SQL leaves, and while you absolutely can go without SQL if you so choose, NoSQL does not preclude its use. Sometimes you may very well find yourself using both SQL and NoSQL.

 
Bio: Alex Williams is a seasoned full-stack developer and the owner of Hosting Data UK. After graduating from the University of London, majoring in IT, Alex worked as a developer leading various projects for clients from all over the world for almost 10 years. Recently, Alex switched to being an independent IT consultant and started his own blog. There, he explores web development, data management, digital marketing, and solutions for online business owners just starting out.


Source: https://www.kdnuggets.com/2020/12/nosql-beginners.html

RPA in Banking and Finance Industry: The Use Cases and Benefits
Robotic Process Automation (RPA) has become an essential instrument for most businesses to drive cost optimization, process efficiency, improved data accuracy, and turnaround time. Banks and financial service providers are no exception.

Banking and financial service organizations have started evaluating the value of RPA tools with pilots in the areas of operations and finance, due to their repetitive, time-consuming, and logic-driven tasks. The application of RPA in the banking and finance industry can rapidly reduce operational costs and compliance risks. Recently, many organizations have shifted their focus decisively from exploration to enterprise-wide execution of RPA technology. Risk and regulatory compliance are popular target areas because manual compliance management in an industry that is heavily regulated and volatile results in higher operational costs, human errors, and delayed timelines.

Some processes in this domain that are good candidates for RPA include:

  • Anti-Money Laundering alert notification
  • Know-your-customer (KYC) profiling and validation
  • Regulatory monitoring, data collection, and report generation
  • Compiling customer information, screening data, and customer servicing
  • Account closure processing

Let us take a closer look at the use of RPA in banking and finance operations.

Anti-Money Laundering

According to a recent report, anti-money laundering analysts usually spend only 10% of their time on analysis. These highly skilled resources invest close to 75% of their efforts into data collection and another 15% into data entry and management. Such time-intensive processes are a great fit for RPA automation.

The anti-money laundering investigation process can be seamlessly automated using RPA. This process is highly manual and takes anywhere between 30 and 40 minutes for a single case investigation. In cases of high complexity or information scattered across various systems, it can consume even more time and effort. RPA-enabled automation can eliminate most of the rules-based tasks, driving more than a 60% reduction in process turnaround time.

Know-your-customer (KYC)

Know-Your-Customer (KYC) is another important use case of RPA for the banking and finance industry. KYC, an essential part of customer onboarding, is a daunting process involving manual verification of several identity documents. As per a Thomson Reuters survey, KYC compliance and customer due diligence operations cost a bank between $52 million and $384 million a year.

RPA and advanced applications of computer vision (CV) can be used in conjunction to automate a range of manual operations. For example, RPA and intelligent optical character recognition (OCR) together can completely automate the manual data extraction process. This automation frees up back-office staff from dealing with several documents and hundreds of application forms daily, saving a significant amount of time and effort.
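As a flavor of the OCR step, here is a minimal sketch using R’s tesseract package; the file name and the regular expression are made up for illustration, and a production KYC pipeline would add far more validation and human review.

library(tesseract)

# Extract raw text from a scanned identity document (hypothetical file name)
eng <- tesseract("eng")
text <- ocr("passport_scan.png", engine = eng)

# Naive field extraction: pull a passport-number-like token from the raw text
# (illustrative regex only; real extraction is much more robust)
regmatches(text, regexpr("[A-Z][0-9]{7,8}", text))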

Regulatory Report Generation

Regulatory report generation is again a highly time-intensive process involving a range of manual tasks such as data extraction from discrete systems, data aggregation, creation of different templates for customized reports, reconciliation of reports, etc. As the process can be managed through certain pre-defined steps/rules, RPA can be leveraged successfully to achieve an immediate return on investment (RoI).

Customer Servicing

Today, every organization uses some form of automation in servicing its customers on different digital platforms. The chatbot is one of the most widely used automation tools for delivering automated customer service. Banks and financial institutions are using customer service bots to offer quick, 24×7 responses to basic queries related to account opening, balance checks, transaction history, etc. On the other hand, RPA is used as an attended automation tool by customer service representatives to deliver accurate data more quickly. An RPA bot can be called on demand to access or download customer data, check external or non-integrated systems, or prevent duplicate data entry. The bot seamlessly interacts with all of these systems and third-party websites to perform these tasks in seconds.

Account Closure Processing

Checking to ensure up-to-date and accurate account information and transaction history, sending emails to managers and end-customers, and updating data in internal/external systems are some of the manual tasks involved in the account closure activity. RPA can be used to automate these operations and free up your knowledge workers to focus on more productive tasks.

Benefits of RPA in Banking and Finance

The benefits of RPA in the banking and finance industry are not limited to cost reduction and time savings. As the scope of automation expands with the introduction of cognitive technologies such as artificial intelligence, machine learning, and computer vision, the impact of automation will be manifold.

  • Scalability: RPA bots can be easily scaled up as the volume of data and processes increases.

  • Operational efficiency: Processes are better streamlined and optimized with effective digitization, driving more efficiency.

  • Improved compliance: Quick adherence to changing rules and regulations saves a lot of time and effort. Moreover, RPA tools offer unmatched auditability.

  • Speed and accuracy: RPA bots are many times faster than humans and deliver error-free results.

  • Productivity: Automation allows banks and financial institutions to free up their knowledge workers for more value-added tasks.

Continue reading to discover some more use cases of RPA in the banking and finance industry:

Loan Application Processing

The loan application process is again a good fit for RPA-enabled automation. There are several manual tasks involved in this process, such as data extraction from application forms, verification against several identity documents, assessment of creditworthiness, etc. But many times, the documents and application forms are received via email in varying formats and data structures. This is where applications of artificial intelligence such as machine learning and natural language processing can be used along with RPA.

  • RPA Bots with cognitive capabilities can read emails and intelligently classify and assign them to respective agents.

  • Computer Vision-enabled intelligent optical character recognition (OCR) can be used to extract data from loan/appraisal documents.

  • Machine learning models can be used to detect fraud propensity.


Credit Card Processing

Applying RPA to automate credit card processing is another focus area where banks have seen phenomenal results. Banks can issue credit cards to customers within hours, eliminating unnecessary delays caused by manual operations. As RPA bots navigate through multiple systems with ease, they can seamlessly extract and validate data, conduct rule-based background checks, and present accurate results that help in the final approval or rejection of an application.

Conclusion

Early adopters and risk-takers have already invested heavily in RPA and AI technologies for their ability to drive unprecedented benefits. Adoption will continue to increase rapidly as automation technologies improve. If you are interested in learning more about the application of RPA in the banking and finance industry, reach out to us at contactus@nividous.com

Image Credit: Nividous

Source: https://datafloq.com/read/rpa-in-banking-and-finance-industry-the-use-cases-and-benefits/10972

Droning the drove: Israeli cow-herders turn to flying tech

NAHLAYIM RANCH, Golan Heights (Reuters) – The buzz of tiny rotors has replaced dog barks and bullwhips on this Israeli ranch, where drones are being used to herd and observe cattle.

The remote-controlled quadcopters hover near the cows, which move along in response while live video is relayed back to the farmers.

“Using a drone, instead of cowboys and dogs, creates a much less stressful environment for the animals, and an animal that is less stressful is a lot healthier and more productive,” said Noam Azran, CEO of BeeFree Agro, the firm developing the method.

The drones also offer more efficient control of large droves and pastures, he said, adding that there has been interest from the United Arab Emirates, which in September established formal relations with Israel.

BeeFree Agro representatives will go to the Gulf state this month “to see if our solution can work for camels,” he said.

(Writing by Dan Williams; Editing by Mike Collett-White)

Image Credit: Reuters

Source: https://datafloq.com/read/droning-drove-israeli-cow-herders-turn-flying-tech/10971
