Ethics In AI Awareness And AI Autonomous Cars

Ethically, developers of AI self-driving cars need to be aware of incidents happening in the real world, from small ones to accidents resulting in fatalities. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

One of the first jobs that I took after having earned my degree in computer science involved doing work for a large manufacturer of airplanes.

I was excited about the new job and eager to showcase my programming prowess.

My friends who had graduated when I did were at various other companies, and we were all vying to see which of us would get the juiciest project. There were some serious bragging rights to be had. The more important and world-shaking the project might be, the more you could lord it over the rest of us.

My manager handed me some specs that he had put together for a program that would do data analysis.

Nowadays, you’d likely use any of a myriad of handy data analysis tools to do the work required, but in those days such tools were crude and expensive, and you might as well build your own. He didn’t quite tell me what the data was about and instead just indicated the types of analyses and statistics that my program would need to generate based on the data.

I slaved away at the code.

I got in early and left late.

I was going to show my manager that I would do whatever it took to get the thing going in as few days as I could. I had it working pretty well and presented the program to him. He seemed pleased and told me he’d be using the program and would get back to me. After about a week, he came back and said that some changes were needed based on feedback about the program.

He also then revealed to me the nature of the data and the purpose of the effort.

It had to do with the design of airplane windshields.

You’ve probably heard stories of planes that take off in some locales and encounter flocks of birds. The birds can potentially gum up the engines of the plane. Even more likely is that the birds might strike the windshield and fracture it or punch a hole in it. The danger to the integrity of the plane, and the issues this could cause for the pilots, is significant, and thus it is worthwhile to try to design windshields that withstand such impacts.

The data that my program was analyzing consisted of two separate datasets.

First, there was data collected from real windshields that, in the course of flying on planes around the world, had been struck by birds. Second, the company had set up a wind tunnel containing various windshield designs and was firing clay blobs at them. My program analyzed the various impacts to the windshields and also compared the wind-tunnel test windshields against the real-world impacted ones.

I right away contacted my former college buddies and pointed out that my work was going to save lives. Via my program, there would be an opportunity to redesign windshields to best ensure that newer windshields would have the right kind of designs. Who knew that my first program out of college would have a worldwide impact? It was amazing.

I also noted that whenever any of my friends were to go flying in a plane in the future, they should look at the windshield and be thinking “Lance made that happen.”

Bragging rights for sure!

What happened next though dashed my delight to some degree.

After the windshield design team reviewed the reports produced by my program, they came back to me with some new data and some changes needed to the code. I made the changes. They looked at the new results. About two weeks later, they came back with newer data and some changes to be made to the code. No one had explained what made this data any different, nor why the code changes were needed. I assumed it was just a series of tests using the clay blobs in the wind tunnel.

Turns out that the clay blobs were not impacting the windshields in the same manner as real birds did. Believe it or not, they switched to using frozen chickens instead of the clay blobs. After I had loaded that data and they reviewed the results, they determined that a frozen chicken does not have the same impact as a live bird. They then got permission to use real live chickens. That was the next set of data I received: data from living chickens that had been shot out of a cannon inside a wind tunnel and smacked against the test windshields.

When I mentioned this to my friends, some of them said that I should quit the project. It was their view that it was ethically wrong to use live chickens. I was contributing to the deaths of living animals. If I had any backbone, some of them said, I would march into my manager’s office and declare that I would not stand for such a thing. I subtly pointed out that the loss of the lives of some chickens was a seemingly small price to pay for better airplane windshields that could save human lives. Plus, I noted that most of them routinely ate chicken for lunch and dinner, and so obviously those chickens had given their lives for an even less “honorable” act.

What would you have done?

Ethical Choices At Work

While you ponder what you would have done, one salient aspect to point out is that at first I was not aware of what the project consisted of. In other words, at the beginning, I had little awareness of what my efforts were contributing toward. I was somewhat in the dark. I had assumed that it was some kind of “need to know” type of project.

You might find it of idle interest that I had worked on some tight-security projects prior to this effort, projects that were classified, and so I had been purposely kept in the dark about the true nature of the effort. For example, I wrote a program that calculated the path of “porpoises” trying to intersect with “whales” — my best guess was that maybe the porpoises were actually submarines and the whales were surface ships like navy destroyers or carriers (maybe that’s what it was about, or maybe something completely different!).

In the case of the live chickens and the airplane windshields, once I became more informed and realized what I was contributing toward, the added awareness presumably gave me a chance to reflect upon the matter.

Would my awareness cause me to stop working on the effort?

Would my awareness taint my efforts such that I might do less work on it or be less motivated to do the work?

Might I even try to somehow subvert the project, doing so under the “justified” belief that what was taking place was wrong to begin with?

If you are interested in how workers begin to deviate from the norms as they get immersed in tech projects, take a look at my article: https://aitrends.com/selfdrivingcars/normalization-of-deviance-endangers-ai-self-driving-cars/

For the dangers of company groupthink, see my article: https://aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

For background about some key ethics of workplace and society issues, see: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

Workplace Awareness Of Ethical Matters

When referring to how workplace-related awareness can make a potential difference in worker behavior, a recent study of that phenomenon gained national interest.

The study examined the opioid drug crisis occurring in the United States.

There are many thousands of deaths each year due to opioid overdoses, and an estimated nearly 2 million Americans are addicted to opioids. According to the study, part of the reason that opioid use has vastly increased over the last two decades is the prescribing of opioids for pain relief and similar purposes.

Apparently, medical doctors had gotten used to prescribing opioids and did so without necessarily overtly considering the downside that the patient might become addicted. If a patient can be helped now by giving them opioids, it’s an easy immediate solution. The patient is presumably then happy. The doctor is also happy because they’ve made the patient happy. Everyone would seem to be happy. This is far less true once you consider the longer-term impacts of prescribing opioids.

The researchers wondered whether they could potentially change the behavior of the prescribing medical doctors.

Via analyzing various data, the researchers were able to identify medical doctors whose patients had suffered fatal opioid overdoses. Dividing that set of physicians into a control group and an experimental group, the researchers arranged for those in the experimental group to receive a letter from the county medical examiner telling the medical doctor about the death and tying the matter to the overall dangers of prescribing opioids.

The result seemed to be that the medical doctors in the experimental group subsequently dispensed fewer opioids. It was asserted that these targeted awareness letters were actually more effective in altering physician behavior than the mere adoption of regulatory limits on prescribing opioids.

By increasing the awareness of these physicians, this added awareness apparently led to a change in their medical behavior.

You can quibble about various aspects of the study, but let’s go with the prevailing conclusions for now, thanks.

Ethics Awareness By AI Developers And AI Autonomous Cars

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. We also remain very much aware of any incidents involving AI self-driving cars and discuss those incidents with our teams, regardless of whether those incidents relate directly to any of our work per se.

In essence, we believe it is important for every member of the team, whether an AI developer, QA specialist, hardware engineer, project manager, and so on, to be aware of what’s happening throughout the industry regarding AI self-driving cars. From small incidents to big ones, from those involving no injuries to those involving deaths, whatever the incident might be, we consider it vital to examine it.

Should the auto makers and tech firms that are also developing AI self-driving cars do likewise?

There’s no written rule obligating an automaker or tech firm to keep its AI developers apprised of AI self-driving car incidents. Indeed, it’s often easy to ignore incidents that happen to competing AI self-driving car efforts. “Those dunces, they don’t know what they are doing” can sometimes be the attitude involved. Why look at what they did and figure out what went wrong, since they were not up-to-snuff anyway in their AI self-driving car efforts? That’s a cocky kind of attitude often prevalent among AI developers (actually, prevalent among many in high-tech who think they have the right stuff!).

So, the question arises as to whether promoting awareness of AI self-driving car incidents among AI self-driving car developers would be of value to the automakers and tech firms developing AI self-driving cars and their teams. You might say that even if you did make them aware, what difference would it make in what they are doing? Won’t they just continue doing what they are already doing?

The counter-argument is that like the prescribing medical doctors, perhaps an increased awareness would change their behavior. And, you might claim that without the increased awareness there is little or no chance of changing their behavior. As the example of the chickens and the airplane windshield suggests, if you don’t know what you are working on and its ramifications, it makes it harder to know that you should be concerned and possibly change course.

In the case of the opioid prescribing medical doctors, it was already ascertained that something was “wrong” about what the physicians were doing. In the case of the automakers and tech firms that are making AI self-driving cars, you could say that there’s nothing wrong with what they are doing. Thus, there’s no point to increasing their awareness.

That might be true, except for the aspect that most of the AI self-driving car community would admit, if pressed, that they know their AI self-driving car is going to suffer an incident someday, somehow. Even if you’ve so far been blessed to have nothing go awry, something eventually will. There’s really no avoiding it. Inevitably, inexorably, it’s going to happen.

There are bound to be software bugs in your AI self-driving car system. There are bound to be hardware exigencies that will confuse or confound your AI system. There are bound to be circumstances in a driving situation that will exceed what the AI is able to cope with, and at some point the result will be an adverse incident. The complexity of AI self-driving car systems is immense, and the ability to test all possibilities prior to fielding is questionable.

For issues of irreproducibility and AI self-driving cars, see my article: https://aitrends.com/ai-insider/irreproducibility-and-ai-self-driving-cars/

For pre-mortems about AI self-driving cars, see my article: https://aitrends.com/ai-insider/pre-mortem-analysis-for-ai-self-driving-cars/

For my article on software neglect issues, see: https://aitrends.com/ai-insider/software-neglect-will-impede-ai-self-driving-cars/

For the likely freezing robot problem and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/freezing-robot-problem-and-ai-self-driving-cars/

Furthermore, there is a perceived rush to get AI self-driving cars on our public roadways, at least by some.

The auto makers and tech firms tend to argue that the only viable means to test out AI self-driving cars is by running them on our public roadways.

Simulations, they claim, can only do so much.

Proving grounds, they say, are limited and there’s only so much you can discover.

The public roadways are the means to get us to true AI self-driving cars. The risks to the public are presumed to be worth the assumed faster pace to perfecting AI self-driving cars.

You’ve got to accept some public pain to gain a greater public good, some say.

For public trust issues about AI self-driving cars and the makers of them, see: https://aitrends.com/ai-insider/roller-coaster-public-perception-ai-self-driving-cars/

Are AI developers and other tech specialists involved in the making of AI self-driving cars keeping apprised of what is going on in the public roadway trials, and especially the incidents that occur from time to time?

On an anecdotal basis, from asking those I meet at industry conferences, many are so focused on their day-to-day job and the pressure to produce that they find little time or energy to keep up with the outside world. Indeed, at the conferences, they often tell me that they have scooted over to the event for just a few hours and need to rush back to the office to continue their work efforts.

The intense pressure by their workplace and their own internal pressure to do the development work would seem to be preoccupying them.

I’ve mentioned before in my writings and speeches that there is a tendency for these developers to get burned out.

For my article about the burnout factor of AI developers, see: https://aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

For my article about the recent spate of accidents with AI self-driving cars, see: https://aitrends.com/ai-insider/accidents-contagion-and-ai-self-driving-cars/

Proposed Research Project Focused On AI Developers

Here’s then a proposed research project that would be interesting and informative to undertake.

Suppose that, akin to the research on physicians and opioid-prescribing awareness, we were to do a study of AI self-driving car developers and their awareness of AI self-driving car incidents. The notion would be to identify what degree of awareness they already have, and whether increased awareness would aid their efforts.

A null hypothesis could be: Developers of AI self-driving cars have little or no awareness of AI self-driving car incidents.

The definition of awareness could be operationalized as having read or seen information about one or more AI self-driving car incidents in the last N months.

This hypothesis is structured in a rather stark manner by stipulating “little or no awareness,” which would presumably be easiest to reject. One would assume, or hope, that these developers have some amount of awareness, even if minimal, of relatively recent incidents.

The next such hypothesis could examine the degree of awareness. For example, we might define levels Q, R, S, and T of impressions about incidents in the last N months, where, say, Q = 1 impression, R = 2-4, S = 5-7, and T = 8+, in order to indicate ranges of awareness. One potential flaw in simply counting impressions is that they might be repeats of the same incident; another loophole is that the developer read or saw something only in a cursory way (this could be further tested by gauging how much they remembered about the incident, as an indicator of whether they actually gained awareness or not).
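
As a concrete illustration, here is a minimal Python sketch of that operationalization; the Q/R/S/T thresholds follow the ranges suggested above, while the function names and sample data are purely hypothetical, and the de-duplication helper addresses the repeated-impression loophole.

```python
# Minimal sketch of bucketing incident "impressions" into the awareness
# levels suggested above (Q=1, R=2-4, S=5-7, T=8+). Names and data are
# hypothetical, for illustration only.

def awareness_level(impressions: int) -> str:
    """Map a count of incident impressions in the last N months to a level."""
    if impressions <= 0:
        return "none"   # consistent with the stark null hypothesis
    if impressions == 1:
        return "Q"
    if impressions <= 4:
        return "R"
    if impressions <= 7:
        return "S"
    return "T"

def distinct_impressions(incident_ids: list[str]) -> int:
    """Count distinct incidents, closing the repeated-impression loophole."""
    return len(set(incident_ids))

# A developer who saw three reports, two of them about the same incident:
seen = ["incident-A", "incident-A", "incident-B"]
print(awareness_level(distinct_impressions(seen)))  # -> "R"
```

A fuller survey instrument would also gauge recall of each incident, per the cursory-reading caveat above.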

The next aspect to consider is whether awareness makes a difference in behavior.

In the case of the physicians and opioid prescribing, it was indicated that their presumed increased awareness led to fewer opioid prescriptions being written. We don’t know for sure that the increased awareness “caused” that change in behavior, and it could be that some other factor produced the change, but in any case the study suggests or asserts that the two aspects went hand-in-hand.

What might an AI developer do differently as a result of increased awareness about AI self-driving car incidents?

We can postulate that they might become more careful and reflective about the AI systems they are developing. They might take longer to develop their code in the belief that they need to pay more cautious attention to systems-safety aspects. They might increase the amount of testing time. They might use code-inspection tools they hadn’t used before, or redouble their use of such tools. They might devise new safety mechanisms for their systems that they had not devised previously.

They might become advocates within their firm for greater attention and time devoted to AI systems safety. They might collaborate more with the QA teams or others tasked with finding bugs and errors and doing other kinds of systems testing. They might seek to bolster AI-safety-related practices within the company. They might seek to learn more about how to improve their AI system safety skills and how to apply them on the job. They might push back within the firm against deadlines that don’t take into account prudent AI systems safety considerations. And so on.

For my framework on AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For purposes of a research study, it would be necessary to somehow quantify those potential outcomes in order to readily measure whether the awareness has an impact. The quantification could be subjectively based: the developers could be asked to rate their changes against a list of the possible kinds of changes. This is perhaps the simplest and easiest way to determine it. A more arduous but more satisfying means would be to arrive at true counts of other signifiers of those changes.

Similar to the physicians-and-opioids study, there would be a control group and an experimental or treatment group. The treatment group might be provided with information about recent AI self-driving car incidents, and then, post-awareness, a follow-up some X days or weeks later would try to discern whether their behavior had changed as a result of the treatment. It would not necessarily be axiomatic that any such changes in behavior could be entirely construed as due to the awareness increase, but it would seem a reasonable inference. There is also the chance of a classic Hawthorne effect coming into play, which the research study would want to consider how best to handle.
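
As a rough sketch of how the comparison might be run, assuming behavior-change scores have been collected from both groups (the data below is entirely hypothetical), a simple two-sample test could check whether the treatment group differs:

```python
# Hypothetical self-rated behavior-change scores for each group; a
# Welch's t-test checks whether the treatment (awareness) group differs.
from scipy import stats

control   = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2]   # no awareness materials
treatment = [2.9, 3.1, 2.6, 3.4, 2.8, 3.0]   # received incident info

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests the groups differ, but as noted above it does
# not by itself establish that the awareness treatment caused the change
# (a Hawthorne effect, for instance, could be in play).
```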

Conclusion

AI developers for self-driving cars are dealing with systems that involve life-and-death consequences.

In the pell-mell rush to try and get AI self-driving cars onto our roadways, we all collectively need to be mindful of the dangers that a multi-ton car can have if the AI encounters difficulties and runs into other cars, or runs into pedestrians, or otherwise might lead to human injuries or deaths.

Though AI developers certainly grasp this overall perspective, in the day-to-day throes of slinging code and building Machine Learning systems for self-driving cars it can become a somewhat lost or lessened consideration, as the push to get things going overtakes that awareness. We believe fervently that AI developers need to keep this awareness at the forefront of their efforts, and by purposely allowing time for it and structuring it as part of the job, it is our hope that it makes a difference in the reliability and ultimate safety of these AI systems.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Source: https://www.aitrends.com/ai-insider/ethics-in-ai-awareness-and-ai-autonomous-cars/

Man paralyzed from neck down uses AI brain implants to write out text messages

A combination of brain implants and a neural network helped a 65-year-old man paralyzed from the neck down type out text messages on a computer at 90 characters per minute, faster than any other known brain-machine interface.

The patient, referred to as T5 in a research paper published [preprint] in Nature on Wednesday, is the first person to test the technology, which was developed by a team of researchers led by America’s Stanford University.

Two widgets were attached to the surface of T5’s brain; the devices featured hundreds of fine electrodes that penetrated about a millimetre into the patient’s gray matter. The test subject was then asked to imagine writing out 572 sentences over the course of three days. These text passages contained all the letters of the alphabet as well as punctuation marks. T5 was asked to represent spaces in between words using the greater than symbol, >.

Signals from the electrodes were then given to a recurrent neural network as input. The model was trained to map each specific reading from T5’s brain to the corresponding character as output. The brain wave patterns recorded from thinking about handwriting the letter ‘a’, for example, were distinct from the ones produced when imagining writing the letter ‘b’. Thus, the software could be trained to associate the signals for ‘a’ with the letter ‘a’, and so on, so that as the patient thought about writing each character in a sentence, the neural net would decode the train of brain signals into the desired characters.

With a data set of 31,472 characters, the machine-learning algorithm learned to decode T5’s brain signals into the characters he was trying to write, getting them correct about 94 per cent of the time. The characters were then displayed so he was able to communicate.
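
The paper’s actual architecture is more involved, but the core mapping can be sketched roughly as follows: a recurrent network consumes a window of multi-electrode readings and emits per-timestep character probabilities. All shapes and counts here are illustrative assumptions, not the study’s real parameters.

```python
# Illustrative-only sketch of the decoding idea: an RNN maps multi-
# electrode neural signals to per-timestep character logits. The channel
# and class counts are assumptions, not the study's actual figures.
import torch
import torch.nn as nn

N_ELECTRODES = 192   # assumed number of recording channels
N_CHARS = 31         # e.g., 26 letters plus punctuation and '>' for space

class HandwritingDecoder(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(N_ELECTRODES, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, N_CHARS)

    def forward(self, signals: torch.Tensor) -> torch.Tensor:
        out, _ = self.rnn(signals)        # (batch, time, hidden)
        return self.head(out)             # (batch, time, N_CHARS) logits

model = HandwritingDecoder()
dummy = torch.randn(1, 100, N_ELECTRODES)  # 100 time steps of fake signals
print(model(dummy).shape)                  # torch.Size([1, 100, 31])
```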


Unfortunately, there’s no delete button in this system; T5 had to push on even if he had made a mistake, such as imagining transcribing the wrong letter or punctuation mark. The character error rate was reduced from six per cent to 3.4 per cent by implementing an auto-correct feature. It’s about as accurate as today’s state-of-the-art speech-to-text systems, the researchers claimed.

It should be noted that the character error rate for free typing, when T5 was not transcribing text given by the researchers, was higher at 8.54 per cent and reduced to 2.25 per cent when an auto-correcting language model was used.
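
The researchers’ language model is surely more sophisticated, but the auto-correct idea can be sketched simply: blend the decoder’s per-character confidence with a language-model prior over which character plausibly comes next. The probabilities below are invented for illustration.

```python
# Toy sketch of auto-correction: combine decoder confidence with a
# language-model prior and keep the best-scoring character. All numbers
# here are invented for illustration.

def rescore(decoder_probs: dict[str, float],
            lm_probs: dict[str, float],
            lm_weight: float = 0.5) -> str:
    """Pick the character maximizing decoder score + weighted LM score."""
    return max(decoder_probs,
               key=lambda c: decoder_probs[c] + lm_weight * lm_probs.get(c, 0.0))

# The decoder slightly prefers 'q', but given the preceding context a
# language model strongly prefers 't', flipping the final decision.
decoder_probs = {"q": 0.40, "t": 0.38, "l": 0.22}
lm_probs      = {"t": 0.70, "l": 0.25, "q": 0.05}
print(rescore(decoder_probs, lm_probs))   # -> 't'
```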

“Together, these results suggest that, even years after paralysis, the neural representation of handwriting in the motor cortex is probably strong enough to be useful for a BCI,” the team wrote, referring to a brain-computer interface. T5 was paralyzed due to a spinal cord injury, but the part of his brain that controls movement is still intact.

John Ngai, director of the US National Institutes of Health’s BRAIN Initiative, who was not directly involved in the research, called the study “an important milestone” for BCIs and machine learning algorithms. “This knowledge is providing a critical foundation for improving the lives of others with neurological injuries and disorders,” he said in a statement. The NIH, a government organization, helped fund the research.

Not a fit for all

Although the study seems promising, the team admitted there are a lot of challenges to overcome before this kind of technology can be commercialized or otherwise used by many more people. First of all, it has only been demonstrated on one person so far. The team will have to, as the tech stands today, retrain their model for each individual’s brain signals, and the performance may not be consistent from patient to patient.

“Why performance varies from person to person is still an unknown question,” Frank Willett, lead author of the study and a research scientist at Stanford’s Neural Prosthetics Translational Laboratory, told The Register.

“One cause is likely that the sensors sometimes record from different numbers of neurons – so sometimes when the sensor is placed into a person’s brain, it is particularly ‘hot’ and records a lot of neurons, while other times it does not. This is an open question in the field, and designing sensors that can always record many neurons is an important goal that others are working on.”

The academics also continuously retrained the system on T5’s brain signals to calibrate the software before they conducted experiments. Willett said that a system used in the real-world would have to work on minimal training data and that users shouldn’t have to retrain the machines every day.

“To translate the technology into a real product, it needs to be streamlined – the user should be able to use the BCI without needing to take too much time to train it,” he said.

“So we need to improve the algorithms so that they can work well with only a little bit of training data. In addition, it should be smart enough to automatically track how neural activity changes over time, so that the user does not have to pause to retrain the system each day.”


The invasive nature of the electrodes is also an issue; they have to stay implanted in a patient’s brain and will have to be made out of a material that is durable and safe. “Finally, the microelectrode device should be wireless and fully implanted,” Willett added. The software must also be able to run on a desktop computer or smartphone: it’s no good having to lug around heavy custom equipment.

“It is important to recognize that the current system is a proof of concept that a high-performance handwriting BCI is possible (in a single participant); it is not yet a complete, clinically viable system,” the paper concluded.

“More work is needed to demonstrate high performance in additional people, expand the character set (for example, capital letters), enable text editing and deletion, and maintain robustness to changes in neural activity without interrupting the user for decoder retraining. More broadly, intracortical microelectrode array technology is still maturing, and requires further demonstrations of longevity, safety and efficacy before widespread clinical adoption.” ®

Source: https://go.theregister.com/feed/www.theregister.com/2021/05/13/brain_implant_typing/

A Swiss Blockchain-Based Analytical Platform for the Cryptocurrency Market: Meet Dohrnii

A new project has been exploring the use cases of AI for trading for some time and is on a mission to become a pioneer in bringing them to the cryptocurrency market. Today, we will take a closer look at how trading has evolved over the years and where the cryptocurrency industry stands less than a decade since its inception.

The Development of Trading Tech Over the Years

If you think back to how Wall Street, the cradle of trading, operated in the ’80s – with DOS-based computers showing green numbers on black screens, and phones being the pinnacle of technology at the time – it is mind-blowing what tools are available today even to those who do not trade professionally. Globalization has shaken the trading industry to its core – since the 1970s, computational algorithms and simulations such as the Monte Carlo method have been evolving, with the new century marking a drastic revolution in their capabilities and application scopes.

New and Emerging Investing Trends

In 1971, NASDAQ launched the first electronic stock market, a revolutionary concept regarded as a major step toward the future of the investing sector. Shortly after, in 1980, online trading followed, allowing brokers to communicate with their clients digitally and facilitate buy and sell orders directly. Then the internet emerged, putting thorough research on companies and new investing opportunities at everyone’s fingertips.

Parallel to these advancements, trading technology focused on market analysis was rapidly evolving. Algorithmic trading, which uses programmatic rules to analyze the markets and ultimately gives traders the power to execute orders exponentially quicker and with less bias than human operators can, bridged the gap between information technology and investing, forming a never-ending duo (Source: Stacker). More recently, companies like Wealthfront and Betterment introduced the first robo-advisors, which allowed for humanless financial planning and investing and laid the foundations for a computer-driven future of the trading sector. AI, blockchain and cryptocurrencies followed, bringing us to where we stand today.
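
For readers unfamiliar with what “programmatic rules” look like in practice, here is a classic, minimal example; it is generic textbook material, not a strategy attributed to any platform mentioned in this article.

```python
# A classic minimal trading rule: a moving-average crossover. Purely a
# textbook illustration, not any particular platform's strategy.

def moving_average(prices: list[float], window: int) -> float:
    return sum(prices[-window:]) / window

def crossover_signal(prices: list[float], fast: int = 5, slow: int = 20) -> str:
    """Buy when the fast average is above the slow one, sell when below."""
    if len(prices) < slow:
        return "hold"                      # not enough history yet
    fast_ma = moving_average(prices, fast)
    slow_ma = moving_average(prices, slow)
    if fast_ma > slow_ma:
        return "buy"
    if fast_ma < slow_ma:
        return "sell"
    return "hold"

prices = [100 + 0.5 * i for i in range(30)]  # hypothetical steady uptrend
print(crossover_signal(prices))              # -> "buy"
```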

However, as a novel sector, the cryptocurrency market still trails behind in terms of the analytical technology available to traders. The analysis tools traditionally used in trading are rarely applicable to crypto, due to fundamental differences from the stock market and the inherent volatility of the industry. Many old-school traders believe it is impossible to come up with reliable models for cryptocurrency trading. Surprisingly, recent research states otherwise – data is the foundation that can enable the creation of statistically reliable models, even in cryptocurrency trading. That is, if you had close-to-unlimited capabilities for gathering and analyzing a variety of market data. While you might think this is unlikely, technology has come a long way, particularly in Artificial Intelligence and its application to trading. Big companies such as BlackRock, with its portfolio management software Aladdin, long ago started to stretch the boundaries of what technology can bring to the trading ecosystem. Such software is developed over a prolonged period by a large team of experts and is perfected continuously to become reliable. As such, access to it is largely out of reach for the average investor, leaving the trading scene with asymmetries and one-sided power in favor of the wealthiest.

Dohrnii Takes the Initiative

The Dohrnii ecosystem combines a digital crypto academy, an analytical trading platform and a trading module, forming a comprehensive environment for those who wish to get into cryptocurrency trading or to bring their skillset to a new level. Each trader is delivered a personalized experience along their journey – starting with the onboarding process, where each trader’s skill is evaluated and a custom educational program is compiled for their profile. As they progress and start trading, their preferences and performance are also analyzed, allowing Dohrnii to generate personalized investment advice, such as portfolio adjustments, and deliver it to traders through the robo-advisor. What is more, traders have access to a wide variety of tools that are unique to the cryptocurrency trading scene – from advanced market analysis to trading signals and price predictions, Dohrnii introduces features once reserved for the biggest investors on the stock markets to the average crypto trader.

The technology turning the wheels of the Dohrnii ecosystem is where the magic happens. By using the latest advancements in Artificial Intelligence and blockchain, Dohrnii is making tools that used to be available only to the biggest investment companies and hedge funds accessible to the average trader, thereby democratizing fintech and bringing the market toward a natural equilibrium. This equilibrium is of utmost importance, as it would dissolve the current partial monopoly caused by discrepancies in access to advanced trading technology, which translates into a significant advantage for several key players.

The Dohrnii Foundation is a non-profit organization based in Zug, Switzerland. It was founded in 2020 by a team of professionals with longstanding experience in multiple areas, all of whom share one common goal: to transform the world of cryptocurrency trading from a black box into an understandable discipline that everyone can comprehend. The experts behind the Dohrnii Foundation have a diversified skill set spanning finance, trading, fintech, technology and blockchain, forming the backbone required for the creation of the Dohrnii ecosystem.

If you are interested in learning more about the Dohrnii project, the tools the ecosystem is offering to the traders and the innovative technology behind it, visit https://dohrnii.io/en

Source: https://btcmanager.com/swiss-blockchain-analytical-platform-cryptocurrency-market-dohrnii/

AI-powered identity access management platform Authomize raises $16M


Cloud-based authorization startup Authomize today announced that it raised $16 million in series A funding led by Innovation Endeavors, bringing the startup’s total raised to $22 million to date. CEO and cofounder Dotan Bar Noy says that the capital will be used to support Authomize’s R&D and hiring efforts this year, as expansion ramps up.

One study found that companies consider implementing adequate identity governance and administration (IGA) practices to be among the least urgent tasks when it comes to securing the cloud. That’s despite the fact that, according to LastPass, 82% of IT professionals at small and mid-size businesses say identity challenges and poor practices pose risks to their employers.

Authomize, which emerged from stealth in June 2020, aims to address IGA challenges by delivering a complete view of apps across cloud environments. The company’s platform is designed to reduce the burden on IT teams by providing prescriptive, corrective suggestions and securing identities, revealing the right level of permissions and managing risk to ensure compliance.

“As security has evolved from endpoints and networks, attention has increasingly moved to identity and access management, and specifically the authorization space. Many of the CISOs and CIOs we spoke with expressed the need for a system that would secure and manage permissions from a single platform. They took access decisions based on hunches, not data, and when they tried to take data-driven decisions, they found out that the data was outdated. Additionally, most, if not all, of the process has been manually managed, making the IT and security teams the bottleneck for growth,” Noy told VentureBeat in an interview via email.

Authomize’s secret sauce is a technology called Smart Groups that aggregates data from enterprise systems in real time and infers the right-sized permissions. Using this data in tandem with graph neural networks, unsupervised learning methods, evolutionary systems, and quantum-inspired algorithms, the platform offers action and process automation recommendations.

AI-powered recommendations

Using AI, Authomize detects relationships between identities and company assets throughout an organization’s clouds. The platform offers an inventory of access policies, blocking unintended access with guardrails and alerting on anomalies and risks. In practice, Authomize constructs a set of policies for each identity-asset relationship. It performs continuous access modeling, self-correcting as it incorporates new inputs like actual usage, activities, and decisions.
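
Authomize has not published how Smart Groups works internally, so the following is only a toy sketch of the general right-sizing idea the article describes: compare granted permissions against observed usage and flag the excess. All identities, permissions, and events below are invented.

```python
# Toy sketch of permission right-sizing (not Authomize's actual
# algorithm): flag granted permissions that never appear in usage logs.
from collections import defaultdict

granted = {                                 # invented example data
    "alice": {"crm:read", "crm:write", "billing:read"},
    "bob":   {"crm:read", "billing:read", "billing:write"},
}

usage_log = [                               # (identity, permission used)
    ("alice", "crm:read"), ("alice", "crm:write"),
    ("bob", "billing:read"),
]

used = defaultdict(set)
for identity, perm in usage_log:
    used[identity].add(perm)

for identity, perms in sorted(granted.items()):
    excess = perms - used[identity]
    if excess:
        print(f"{identity}: consider revoking {sorted(excess)}")
# alice: consider revoking ['billing:read']
# bob: consider revoking ['billing:write', 'crm:read']
```

A production system would of course weigh recency and frequency of use, not just presence in a log, before recommending a revocation.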

Of course, Authomize isn’t the only company in the market claiming to automate away IGA. ForgeRock, for instance, recently raised $93.5 million to further develop its products that tap AI and machine learning to streamline activities like approving access requests, performing certifications, and predicting what access should be provisioned to users.

But Authomize has the backing of notable investors M12 (Microsoft’s venture fund), Entrée Capital, and Blumberg Capital, along with current and former CIOs, CISOs, and advisers from Okta, Splunk, ServiceNow, Fidelity, and Rubrik. Several undisclosed partners use the company’s product in production, Authomize claims — including an organization with 5,000 employees that tapped Smart Groups to cut its roughly 50,000 Microsoft Office 365 entitlements by 95%. And annual recurring revenue growth is expected to hit 600% during 2021.

Authomize recently launched an integration with the Microsoft Graph API to provide explainable, prescriptive recommendations for Microsoft services permissions. Via the API, Authomize can evaluate customers’ organization structure and authorization details, including role assignments, group security settings, SharePoint sites, OneDrive files access details, calendar sharing information, applications, and service principal access scopes and settings.

“Our technology is allowing teams to make authorization decisions based on accurate and updated data, and we also automate day-to-day processes to reduce IT burden … Authomize currently secures more than 7 million identities and hundreds of millions of assets, and our solution is deployed across dozens of customers,” Noy said. “Using our proprietary [platform], organizations can now strike a balance between security and IT, ensuring human and machine identity have only the permission they need. Our technology is built to connect easily to the entire organization stack and help solve the increasing complexity security and IT teams face while reducing the overall operational burden.”

Authomize, which is based in Tel Aviv, Israel, has 22 full-time employees. It expects to have more than 55 by the end of the year as it expands its R&D teams to develop a new entitlement eligibility engine and automation capabilities and increases its sales and marketing operations in North America.

Source: https://venturebeat.com/2021/05/13/ai-powered-identity-access-management-platform-authomize-raises-22m/

10 cool tech events you shouldn’t miss out on this June
Source: https://www.eu-startups.com/2021/05/10-cool-tech-events-you-shouldnt-miss-out-on-this-june/
