
Tag: look like

Parcel delivery scams are on the rise: Do you know what to watch out for?

As package delivery scams that spoof DHL, USPS and other delivery companies soar, here’s how to stay safe not just this shopping season...

AI-driven creativity gives overpowered PCs something worthwhile to do, at last

Column: Until recently, personal computer hardware seemed to have leapt past any demands software could possibly place upon it. Even high-end games – traditionally...

Digital Transformation: Embracing Complexity & the Courage to Fail

The last couple of years have taught us that change is constant. And, amid all the chaos – whether it’s spurred by disruption, inflation, business growth, new markets, new product lines, mergers and acquisitions, or evolving customer requirements – the supply chain is not a background actor but often at the heart of what drives failure or success.

Mitigating Supply-Side Service Risks with AI

In September 2021, we asked members of our Indago supply chain research community — who are all supply chain and logistics executives from manufacturing,...

Ask a Techspert: How does Lens turn images to text?

When I was on holiday recently, I wanted to take notes from an ebook I was reading. But instead of taking audio notes or scribbling things down in a notebook, I used Lens to select a section of the book, copy it and paste it into a document. That got me curious: How did all that just happen on my phone? How does a camera recognize words in all their fonts and languages?

I decided to get to the root of the question and speak to Ana Manasovska, a Zurich-based software engineer who is one of the Googlers on the front line of converting an image into text.

Ana, tell us about your work in Lens.

I’m involved with the text aspect, so making sure that the app can discern text and copy it for a search or translate it — with no typing needed. For example, if you point your phone’s camera at a poster in a foreign language, the app can translate the text on it. And for people who are blind or have low vision, it can read the text out loud. It’s pretty impressive.

So part of what my team does is get Lens to recognize not just the text, but also the structure of the text. We humans automatically understand writing that is separated into sentences and paragraphs, or blocks and columns, and know what goes together. It’s very difficult for a machine to distinguish that, though.
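The grouping Ana describes can be sketched in miniature. The snippet below is a hypothetical illustration, not Lens's actual pipeline: it assumes each recognized word arrives with a bounding-box position, and clusters words into lines by vertical proximity. The `Word` struct, the `top` field, and the `tolerance` threshold are all assumptions made for this sketch.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical output of word recognition: each word carries the top
// y-coordinate (in pixels) of its bounding box in the source image.
struct Word {
  std::string text;
  int top;
};

// Group words into lines: after sorting by vertical position, a word
// whose top differs from the current line's top by less than `tolerance`
// pixels is assumed to sit on the same line.
std::vector<std::vector<std::string>> GroupIntoLines(std::vector<Word> words,
                                                     int tolerance) {
  std::sort(words.begin(), words.end(),
            [](const Word& a, const Word& b) { return a.top < b.top; });
  std::vector<std::vector<std::string>> lines;
  int current_top = -1;
  for (const auto& w : words) {
    if (lines.empty() || w.top - current_top >= tolerance) {
      lines.push_back({});       // start a new line
      current_top = w.top;
    }
    lines.back().push_back(w.text);
  }
  return lines;
}
```

A real system would also have to handle columns, reading order, and skewed or curved text, which is part of why the structure problem Ana mentions is hard.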

Is this machine learning?

Yes. In other words, it uses systems (we call them models) that we’ve trained to discern characters and structure in images. A traditional computing system would have only a limited ability to do this. But our machine learning model has been built to “teach itself” on enormous datasets and is learning to distinguish text structures the same way a human would.

Can the system work with different languages?

Yes, it can recognize 30 scripts, including Cyrillic, Devanagari, Chinese and Arabic. It’s most accurate in Latin-alphabet languages at the moment, but even there, the many different types of fonts present challenges. Japanese and Chinese are tricky because they have lots of nuances in the characters. What seems like a small variation to the untrained eye can completely change the meaning.

What’s the most challenging part of your job?

There’s lots of complexity and ambiguity, which are challenging, so I’ve had to learn to navigate that. And it’s very fast paced; things are moving constantly and you have to ask a lot of questions and talk to a lot of people to get the answers you need.

When it comes to actual coding, what does that involve?

Mostly I use a programming language called C++, which lets you run the processing steps needed to go from an image to a representation of the words and their structure.

Hmmm, I sort of understand. What does it look like?

[Image: a screenshot of C++ code against a white background. Caption: This is what C++ looks like.]

The code above shows the processing for extracting only the German from a section of text. So say the image showed German, French and Italian — only the German would be extracted for translation. Does that make sense?
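Since the screenshot itself isn't reproduced here, here is a rough sketch of what such a filtering step might look like in C++. Everything in it is an assumption for illustration, not Lens's real code or API: it supposes each recognized text span comes tagged with a detected language code, and keeps only the spans matching a target language such as German ("de").

```cpp
#include <string>
#include <vector>

// Hypothetical representation of one recognized text span: the extracted
// string plus the language the recognizer detected for it.
struct TextSpan {
  std::string text;
  std::string language;  // e.g. "de", "fr", "it"
};

// Keep only the spans whose detected language matches `target`, so a
// mixed German/French/Italian image yields just the German text.
std::vector<std::string> ExtractLanguage(const std::vector<TextSpan>& spans,
                                         const std::string& target) {
  std::vector<std::string> out;
  for (const auto& span : spans) {
    if (span.language == target) {
      out.push_back(span.text);
    }
  }
  return out;
}
```

Downstream, the surviving spans could then be handed to translation, matching the example in the interview of pulling only the German out of a trilingual image.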

Kind of! Tell me what you love about your job.

It boils down to my lifelong love of solving problems. But I also really like that I’m building something I can use in my everyday life. I’m based in Zurich but don’t speak German well, so I use Lens for translation into English daily.

Decoding what the coders do: Ana works on Lens, focusing on text recognition. But what does that actually involve?

Scroll to Top in Vue with Reusable Components

In this tutorial, learn how to implement a reusable VueJS component that scrolls to the top of the page with JavaScript, through practical code and an example project.

Upscale Your eCommerce Business with Growth Best Practices – CommerceNow ’22 Recap

This summer, founders, CEOs, marketers, sales reps, and revenue leaders in the eCommerce space gathered virtually to share their experiences, best strategies, and ultra-powerful...

Top 10 Best Crypto Gifts in 2023

Christmas is coming… Is it just me, or does it feel like the holiday season is always “just around...

A day in the life of an Investment Banking Expert – IPO Books – Philippe Espinasse

Set out below is an article that was published in the Autumn 2022 edition of “Expert Matters”, the magazine of the Expert Witness Institute...


Hugo Feiler, CoFounder/CEO Minima – FinTech Silicon Valley

Pemo: Welcome, Hugo. So pleased to speak to you today. I was wondering if you could tell me a little bit about Minima and where...

Transitioning to Net-Zero

What will it bring and what will it cost? Around the world, companies and governments are pledging to achieve net-zero emissions of greenhouse gases –...
