

Cannabis marketing company Fyllo acquires CannaRegs for $10M



Fyllo, a digital marketing company focused on the cannabis industry, has acquired CannaRegs, a website offering subscription access to state and municipal cannabis regulations. Fyllo founder and CEO Chad Bronstein said his company paid $10 million in cash and stock.

Bronstein previously served as chief revenue officer at digital marketing company Amobee, and he told me that the two companies are “very complementary,” particularly since regulations and compliance present “a unique technical challenge” when it comes to advertising cannabis products.

Ultimately, his goal is for Fyllo to offer “compliance as a service,” with artificial intelligence helping brands and publishers ensure that all their cannabis advertising follows local laws. At the same time, Bronstein said Fyllo will continue to support CannaRegs’ 150-plus customers (mostly law firms, real estate professionals and cannabis operators) and work to bring more automation to the platform.

In addition, CannaRegs founder and CEO Amanda Ostrowitz will become Fyllo’s chief strategy officer, with CannaRegs’ 30 employees continuing to work out of their Denver office. This brings Fyllo’s total headcount to around 70.

“In a short period of time, Fyllo has emerged as an essential platform for publishers and cannabis companies to build creative campaigns in a safe and compliant way,” Ostrowitz said in a statement. “By teaming up with Fyllo, we have the chance to build a truly remarkable brand that can disrupt the entire industry. We look forward to delivering our same quality of data to existing customers and incorporating that data into Fyllo’s platform to become a one-stop-shop for cannabis brands looking to grow their businesses.”

Chicago-based Fyllo raised $18 million in funding last year.



Defeated Chess Champ Garry Kasparov Has Made Peace With AI



Garry Kasparov is perhaps the greatest chess player in history. For almost two decades after becoming world champion in 1985, he dominated the game with a ferocious style of play and an equally ferocious swagger.

Outside the chess world, however, Kasparov is best known for losing to a machine. In 1997, at the height of his powers, Kasparov was crushed and cowed by an IBM supercomputer called Deep Blue. The loss sent shock waves across the world, and seemed to herald a new era of machine mastery over man.

The years since have put things into perspective. Personal computers have grown vastly more powerful, with smartphones now capable of running chess engines as powerful as Deep Blue alongside other apps. More significantly, thanks to recent progress in artificial intelligence, machines are learning and exploring the game for themselves.

Deep Blue followed hand-coded rules for playing chess. By contrast, AlphaZero, a program revealed by the Alphabet subsidiary DeepMind in 2017, taught itself to play the game at a grandmaster level simply by practicing over and over. Most remarkably, AlphaZero uncovered new approaches to the game that dazzled chess experts.

Last week, Kasparov returned to the scene of his famous Deep Blue defeat—the ballroom of a New York hotel—for a debate with AI experts organized by the Association for the Advancement of Artificial Intelligence. He met with WIRED senior writer Will Knight there to discuss chess, AI, and a strategy for staying a step ahead of machines. An edited transcript follows:

WIRED: What was it like to return to the venue where you lost to Deep Blue?

Garry Kasparov: I’ve made my peace with it. At the end of the day, the match was not a curse but a blessing, because I was a part of something very important. Twenty-two years ago, I would have thought differently. But things happen. We all make mistakes. We lose. What’s important is how we deal with our mistakes, with negative experience.

The 1997 match was an unpleasant experience, but it helped me understand the future of human-machine collaboration. We thought we were unbeatable, at chess, Go, shogi. All these games have gradually been pushed to the side [by increasingly powerful AI programs]. But it doesn't mean that life is over. We have to find out how we can turn it to our advantage.

I always say I was the first knowledge worker whose job was threatened by a machine. But that helps me to communicate a message back to the public. Because, you know, nobody can suspect me of being pro-computers.

What message do you want to give people about the impact of AI?

I think it's important that people recognize the element of inevitability. When I hear outcry that AI is rushing in and destroying our lives, that it's so fast, I say no, no, it's too slow.

Every technology destroys jobs before creating jobs. When you look at the statistics, only 4 percent of jobs in the US require human creativity. That means 96 percent of jobs, I call them zombie jobs. They're dead, they just don’t know it.

For several decades we have been training people to act like computers, and now we are complaining that these jobs are in danger. Of course they are. We have to look for opportunities to create jobs that will emphasize our strengths. Technology is the main reason why so many of us are still alive to complain about technology. It's a coin with two sides. I think it's important that, instead of complaining, we look at how we can move forward faster.

When these jobs start disappearing, we need new industries, we need to build foundations that will help. Maybe it's universal basic income, but we need to create a financial cushion for those who are left behind. Right now it's a very defensive reaction, whether it comes from the general public or from big CEOs who look at AI and say it can improve the bottom line but it's a black box. I think we're still struggling to understand how AI will fit in.


A lot of people will have to contend with AI taking over some part of their jobs. What advice do you have for them?

There are different machines, and it is the role of the human to understand exactly what each machine needs to do its best. At the end of the day it's about combination. For instance, look at radiology. If you have a powerful AI system, I'd rather have an experienced nurse than a top-notch professor [use it]. A person with decent knowledge will understand that he or she must add only a little bit. But a big star in medicine will want to challenge the machines, and that destroys the communication.

People ask me, “What can you do to assist another chess engine against AlphaZero?” I can look at AlphaZero’s games and understand the potential weaknesses. And I believe it has made some inaccurate evaluations, which is natural. For example, it values the bishop over the knight. It has seen over 60 million games and, statistically, the bishop was dominant in many more of them, so I think it assigned too much advantage to the bishop in its numbers. So what you should do is try to steer your engine into positions where AlphaZero will make inevitable mistakes [based on this inaccuracy].

I often use this example. Imagine you have a very powerful gun, a rifle that can shoot a target 1 mile from where you are. Now a 1-millimeter change in the direction could end up with a 10-meter difference a mile away. Because the gun is so powerful, a tiny shift can actually make a big difference. And that's the future of human-machine collaboration.
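The arithmetic behind the metaphor can be sketched quickly: a lateral miss at the target grows linearly with range for small angular errors. A minimal illustration (the specific error values below are hypothetical, chosen only to echo the rifle metaphor):

```python
import math

RANGE_M = 1609.34  # one mile in meters

def miss_distance(angle_deg: float, range_m: float = RANGE_M) -> float:
    """Lateral miss at the target for a given angular error, in meters."""
    return range_m * math.tan(math.radians(angle_deg))

# An error of roughly a third of a degree already misses by ~10 meters at a mile.
for err in (0.01, 0.1, 0.36):
    print(f"{err:5.2f} deg -> {miss_distance(err):6.2f} m off target")
```

The point survives the numbers: because the "gun" is so powerful, the human's job is making tiny, well-chosen corrections at the source.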

With AlphaZero and future machines, I describe the human role as being shepherds. You just have to nudge the flock of intelligent algorithms. Just basically push them in one direction or another, and they will do the rest of the job. You put the right machine in the right space to do the right task.

How much progress do you think we’ve made toward human-level AI?

We don't know exactly what intelligence is. Even the best computer experts, the people on the cutting edge of computer science, they still have doubts about exactly what we're doing.

What we understand today is AI is still a tool. We are comfortable with machines making us faster and stronger, but smarter? It’s some sort of human fear. At the same time, what's the difference? We have always invented machines that help us to augment different qualities. And I think AI is just a great tool to achieve something that was impossible 10, 20 years ago.

How it will develop I don't know. But I don't believe in AGI [artificial general intelligence]. I don't believe that machines are capable of transferring knowledge from one open-ended system to another. So machines will be dominant in the closed systems, whether it's games, or any other world designed by humans.


David Silver [the creator of AlphaZero] hasn’t answered my question about whether machines can set up their own goals. He talks about subgoals, but that’s not the same. That’s a certain gap in his definition of intelligence. We set up goals and look for ways to achieve them. A machine can only do the second part.

So far, we see very little evidence that machines can actually operate outside of these terms, which is clearly a sign of human intelligence. Let's say you accumulated knowledge in one game. Can it transfer this knowledge to another game, which might be similar but not the same? Humans can. With computers, in most cases you have to start from scratch.

Let’s talk about the ethics of AI. What do you think of the way the technology is being used for surveillance or weapons?

We know from history that progress cannot be stopped. So we have certain things we cannot prevent. If you [completely] restrict it in Europe, or America, it will just give an advantage to the Chinese. [But] I think we do need to exercise more public control over Facebook, Google, and other companies that generate so much data.


People say, oh, we need to make ethical AI. What nonsense. Humans still have the monopoly on evil. The problem is not AI. The problem is humans using new technologies to harm other humans.

AI is like a mirror, it amplifies both good and bad. We have to actually look and just understand how we can fix it, not say “Oh, we can create AI that will be better than us.” We are somehow stuck between two extremes. It's not a magic wand or Terminator. It's not a harbinger of utopia or dystopia. It's a tool. Yes, it's a unique tool because it can augment our minds, but it's a tool. And unfortunately we have enough political problems, both inside and outside the free world, that could be made much worse by the wrong use of AI.

Returning to chess, what do you make of AlphaZero’s style of play?

I looked at its games, and I wrote about them in an article that mentioned chess as the “drosophila of reasoning.” Every computer player is now too strong for humans. But we actually could learn more about our games. I can see how the millions of games played by AlphaGo during practice can generate certain knowledge that’s useful.

It was a mistake to think that if we developed very powerful chess machines, the game would be dull, that there would be many draws and maneuvering, or that a game would be 1,800, 1,900 moves and nobody could break through. AlphaZero is totally the opposite. For me it was complementary, because it played more like Kasparov than Karpov! It found that it could actually sacrifice material for aggressive action. It’s not creative, it just sees the pattern, the odds. But this actually makes chess more aggressive, more attractive.

Magnus Carlsen [the current World Chess Champion] has said that he studied AlphaZero games, and he discovered certain elements of the game, certain connections. He could have thought about a move, but never dared to actually consider it; now we all know it works.

When you lost to Deep Blue, some people thought chess would no longer be interesting. Why do you think people are still interested in Carlsen?

You answered the question. We are still interested in people. Cars move faster than humans, but so what? The element of human competition is still there, because we want to know that our team, our guy, he or she is the best in the world.

The fact is that you have computers that dominate the game. It creates a sense of uneasiness, but on the other hand, it has expanded interest in chess. It’s not like 30 years ago, when Kasparov played Karpov and nobody dared criticize us even if we made a blunder. Now you can look at the screen and the machine tells you what's happening. So somehow machines brought many people into the game. They can follow along; it's no longer a language they don't understand. AI is like an interface, an interpreter.



Network with CrunchMatch at TC Sessions: Mobility 2020



Got your sights set on attending TC Sessions: Mobility 2020 on May 14 in San Jose? Spend the day with 1,000 or more like-minded founders, makers and leaders across the startup ecosystem. It’s a day-long deep dive dedicated to current and evolving mobility and transportation tech. Think autonomous vehicles, micromobility, AI-based mobility applications, battery tech and so much more.

Hold up. Don’t have a ticket yet? Buy your early-bird pass and save $100.

In addition to taking in all the great speakers (more added every week), presentations, workshops and demos, you’ll want to meet people and build the relationships that foster startup success. Get ready for a radical network experience with CrunchMatch. TechCrunch’s free business-matching platform makes finding and connecting with the right people easier than ever. It’s both curated and automated, a potent combination that makes networking simple and productive. Hey needle, kiss that haystack goodbye.

Here’s how it works.

When CrunchMatch launches, we’ll email all registered attendees. Create a profile, identify your role and list your specific criteria, goals and interests: whomever you want to meet, whether investors, founders or engineers specializing in autonomous cars or ride-hailing apps. The CrunchMatch algorithm then kicks into gear, suggests matches and, subject to your approval, proposes meeting times and sends meeting requests.

CrunchMatch benefits everyone — founders looking for developers, investors in search of hot prospects, founders looking for marketing help — the list is endless, and the tool is free.

You have one programming-packed day to soak up everything this conference offers. Start strategizing now to make the most of your valuable time. CrunchMatch will help you cut through the crowd and network efficiently so that you have time to learn about the latest tech innovations and still connect with people who can help you reach the next level.

TC Sessions: Mobility 2020 takes place on May 14 in San Jose, Calif. Join, meet and learn from the industry’s mightiest minds, makers, innovators and investors. And let CrunchMatch make your time there much easier and more productive. Buy your early-bird ticket, and we’ll see you in San Jose!

Is your company interested in sponsoring or exhibiting at TC Sessions: Mobility 2020? Contact our sponsorship sales team by filling out this form.



Do AI startups have worse economics than SaaS shops?



A few days ago, Andreessen Horowitz’s Martin Casado and Matt Bornstein published an interesting piece digging into the world of artificial intelligence (AI) startups, and, more specifically, how those companies perform as businesses. Core to the argument presented is that while founders and investors are wagering “that AI businesses will resemble traditional software companies,” the well-known venture firm is “not so sure.”

Given that TechCrunch cares a lot about startup business fundamentals, the notion that one oft-discussed and well-funded category of venture-backed startup might sport materially less attractive economics than we expected captured our attention.

The Andreessen Horowitz (a16z) perspective is straightforward, arguing that AI-focused companies have lower gross margins than software companies due to cloud compute and human-input costs, endure issues stemming from “edge cases” and enjoy less product differentiation than competing software concerns. Today, we’re drilling into the gross margin point, as it’s something inherently numerical that we can get other, informed market participants to weigh in on.

If a16z is correct about AI startups having slimmer gross margins than SaaS companies, they should — all other things held equal — be worth less per dollar of revenue generated; or in simpler terms, they should trade at a revenue multiple discount to SaaS companies, leaving the latter category of technology company still atop the valuation hierarchy.
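To make the multiple logic concrete, here's a back-of-envelope sketch. All figures are hypothetical for illustration, not from a16z or TechCrunch: the idea is simply that if the market prices companies on gross profit at a common multiple, a lower gross margin mechanically implies a lower revenue multiple.

```python
# Hypothetical illustration: valuing gross profit at a shared multiple
# translates a gross-margin gap into a revenue-multiple discount.
def implied_revenue_multiple(gross_margin: float, gross_profit_multiple: float) -> float:
    """Revenue multiple implied when gross profit is valued at a fixed multiple."""
    return gross_margin * gross_profit_multiple

GP_MULTIPLE = 15  # assumed common gross-profit multiple (illustrative)

saas_multiple = implied_revenue_multiple(0.80, GP_MULTIPLE)  # 80% margin SaaS
ai_multiple = implied_revenue_multiple(0.60, GP_MULTIPLE)    # 60% margin AI

# On these assumptions, the AI company trades at a discount to the SaaS
# company per dollar of revenue, despite identical gross-profit pricing.
discount = 1 - ai_multiple / saas_multiple
```

Under these made-up inputs the AI company's revenue multiple comes out a quarter below the SaaS company's, which is the shape of the discount the a16z argument predicts.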

This matters, given the amount of capital that AI-focused startups have raised.

Is a16z correct about AI gross margins? I wanted to find out, so this week I spoke to a number of investors from firms that have made AI-focused bets to get a handle on their views. (Do read the full a16z piece as well; it’s interesting and worth your time.)

Today we’re hearing from Rohit Sharma of True Ventures, Jeremy Kaufmann of Scale Venture Partners, Nick Washburn of Intel Capital and Ben Blume of Atomico. We’ll start with a digest of their responses to our questions, with their unedited notes at the end.

AI economics and optimism

We asked our group of venture investors (selected with the help of research from TechCrunch’s Arman Tabatabai) three questions. The first dealt with margins themselves, the second dealt with resulting valuations and, finally, we asked about their current optimism interval regarding AI-focused companies.
