AI

This is how we’ll merge with AI

The relationship between humans and AI is something of a dance. We and AI draw close to work collaboratively, are pushed apart by the seeming impossibility of the partnership, then stumble back, drawn by the potential. It is perhaps fitting that the dance community is beginning to embrace robots, with AI helping to create new movements and choreography, and with robots sharing the stage with human dancers.

The relationship between society and technology is yin and yang, with every massive enhancement accompanied by the potential for danger. AI, for example, offers the promise to end boring, repetitive jobs, enabling us to engage in higher-level and more fulfilling tasks. It helps with any number of efficiency efforts, such as fraud detection, and it can even paint masterful artworks and compose symphonies. Sam Altman, CEO of OpenAI, hopes AI will unlock human potential and let us focus on the most interesting, most creative, most generative things.

Wired co-founder Kevin Kelly has argued that technology, and by extension AI, is a projection of the human mind. The argument is that technology emerges organically and authentically, following patterns found in man and nature. It is a means by which humans gain control over their environment, both for safety and for advantage. The technology we produce is a natural biological engine of human evolution and a leading cause of change limited only by our imaginations. The positive versus negative polarity of how the technology is applied, the yin and yang, is an expression of the dualistic human mind.

However, the dichotomy between humans and robots, between the natural and the artificial, does create conflict. The tension between the innate drive to develop and use AI-enabled technology and the potential for it to surpass us creates an understandable emotional turmoil. This stew powers the dance and informs the ongoing industry dialogue about how best to utilize and control AI. In effect, the discussion is about who leads. Today, while AI is mostly still in its infancy, people are in control, but the concerns are about who leads the dance in the future.

(Caption: Robots can dance. Source: Boston Dynamics.)

As AI rapidly develops, the pressure to use it to drive greater advantage grows, as do the existential worries. In “The Master Algorithm,” computer scientist and University of Washington Professor Pedro Domingos assures us that “humans are not a dying twig on the tree of life. On the contrary, we are about to start branching. In the same way that culture coevolved with larger brains, we will coevolve with our creations. We always have: Humans would be physically different if we had not invented fire or spears. We are Homo technicus as much as Homo sapiens.” In this he suggests that humans will always lead, no matter how advanced AI becomes. It is this synergy that underlies a belief in collaboration between humans and machines, a dance pairing with each excelling in ways unique to their strengths. This has given rise to the idea of machines as teammates. The idea is that such collaboration could sustainably augment humans and generate positive benefits for individuals, organizations, and societies.

That might work – unless man and machine merge. Philosopher Jason Silva says that AI will change our scope of possibilities in ways we are only starting to glimpse and will lead to a merging between man and machine. Certainly, Elon Musk believes this is both possible and a necessary direction. Though the near-term goal of his Neuralink company and others is to build a brain-computer interface that can help people with specific health conditions, longer-term he has a grander vision. Specifically, he believes this interface will be necessary for humans to keep pace with increasingly powerful AI.

Such a development could redefine the relationship between humans and machines, with the merged combination giving rise to a higher form of AI-powered intelligence. In effect, a fusion of the dancers. Among other things, this would also have huge implications for religion. If God created human beings in God’s own image and humans create robots in our image, what does that make them in the eyes of religion? And what does that make a merged creation? Perhaps that is one of the reasons why the Pope recently urged people to pray that robots and artificial intelligence respect the dignity of the person and always serve mankind.

(Caption: The Pope on robots and AI.)

Even if there is not this direct physical connection between humans and AI, there is still a growing symbiosis. Researchers are starting to build hybrid collaborative systems that combine the best of an AI model’s superpowers with human intuition. In this, humans contribute leadership, teamwork, creativity, and social skills and machines lead with speed and scalability.

A new line of research has a vision of a society in which people live seamlessly with machines. Though admittedly still some years off, in this vision the AI is merged with an intelligent body to create new types of robots with properties comparable to those of intelligent living organisms, possibly a step toward creating Replicants, with all the implications imagined by Philip K. Dick in Do Androids Dream of Electric Sheep?, the novel that also inspired the Blade Runner movies. This requires what the researchers call Physical AI, combining knowledge from materials science, mechanical engineering, computer science, biology, and chemistry. According to a new paper, these robots would be designed to look and behave like humans or other animals and would possess intellectual capabilities normally associated with biological organisms. The goal, according to the paper, is to build robots that could exist like benevolent animals together with nature and people.

How might we move towards this higher self – this symbiotic future of natural and artificial? The drive of human imagination and the onward march of technology toward what was once science fiction are revealing the possibility of a new dance.

Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.




Source: https://venturebeat.com/2020/11/23/this-is-how-well-merge-with-ai/

AI

AI clocks first-known ‘binary sextuply-eclipsing sextuple star system’. Another AI will be along shortly to tell us how to pronounce that properly


Astronomers have discovered the first-known “sextuply-eclipsing sextuple star system,” after a neural network flagged it up in data collected by NASA’s Transiting Exoplanet Survey Satellite (TESS).

The star system, codenamed TIC 168789840, is an oddball compared to its peers. Not only does it contain six suns, they’re split into three pairs of eclipsing binary stars. That means the suns in each pair, to an observer, pass directly in front of one another in their orbits. The stars in each pair are gravitationally bound to each other and to every other sun in the system, meaning the stars in each pair circle each other, and the pairs orbit a common center of mass.

To get an idea of how this is structured, consider the star pairs to be labeled A, B, and C. The A pair circle one another every 1.6 days, and the C pair every 1.3 days. Together, the A and C pairs form an inner quadruple that completes a full orbit around its common center of mass in a little under four years.

The remaining pair, B, is much further away, and is described as an outer binary. The B suns revolve around one another every 8.22 days, and it takes them about two thousand years to run a lap around the system’s common center of mass, according to a paper due to appear in The Astrophysical Journal detailing these findings. If that’s all a little mind-boggling, here’s a rough sketch of the system’s structure taken from the paper:

(Caption: The system’s orbital mechanics. Source: the paper.)
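If it helps to keep the hierarchy straight, the layout can be summarized in a few constants. This is a toy plain-Java encoding of the figures above; the class and constant names are our own illustration:

// Toy encoding of TIC 168789840's hierarchy; names are illustrative,
// periods are the figures reported in the article.
final class Tic168789840 {
    // Inner quadruple: pairs A and C each eclipse internally...
    static final double PAIR_A_PERIOD_DAYS = 1.6;
    static final double PAIR_C_PERIOD_DAYS = 1.3;
    // ...and orbit each other in a little under four years.
    static final double INNER_QUAD_ORBIT_YEARS = 4.0;
    // Outer binary: pair B, much farther out.
    static final double PAIR_B_PERIOD_DAYS = 8.22;
    static final double OUTER_ORBIT_YEARS = 2000.0; // "about two thousand years"
}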

An alien living on a hypothetical planet orbiting one of the inner quadruple stars would see four very bright suns in the sky and another two dimmer ones further away. These stars would periodically disappear, as they eclipsed one another. The chances of anyone observing this, however, are pretty slim to none as it doesn’t look like there are any exoplanets in TIC 168789840.

Finding the first sextuply-eclipsing sextuple star system with machine learning

NASA’s TESS telescope gathers a massive amount of data. Instead of manually poring over tens of millions of objects, scientists feed the data into machine-learning algorithms designed to highlight the most interesting examples for further examination.

Brian Powell, first author of the study and a data scientist at NASA’s High Energy Astrophysics Science Archive Research Center, trained a classifier to spot eclipsing binary systems.

The neural network looks for the characteristic dip in an object’s light curve, caused when one star passes in front of the other. It assigns a score for the likelihood that it has identified an eclipsing binary system: objects rated above 0.9 on a scale of up to 1.0 are considered strong candidates.
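As a rough illustration of that triage step (a hypothetical sketch, not NASA’s actual pipeline; the class and field names are our own):

import java.util.ArrayList;
import java.util.List;

// Hypothetical triage step: keep objects whose classifier score clears
// the 0.9 "strong candidate" cutoff described above.
final class CandidateFilter {
    static final double STRONG_CANDIDATE_THRESHOLD = 0.9;

    static final class ScoredObject {
        final String ticId;   // e.g. "TIC 168789840"
        final double score;   // neural-network output in [0.0, 1.0]
        ScoredObject(String ticId, double score) {
            this.ticId = ticId;
            this.score = score;
        }
    }

    static List<ScoredObject> strongCandidates(List<ScoredObject> scored) {
        List<ScoredObject> keep = new ArrayList<>();
        for (ScoredObject o : scored) {
            if (o.score >= STRONG_CANDIDATE_THRESHOLD) {
                keep.add(o);
            }
        }
        return keep;
    }
}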


The computer-vision model that performs all this is made up of approximately 5.5 million parameters, and was trained using more than 40,000 training examples on a cluster of eight Nvidia V100 GPUs for approximately two days.

At first, TIC 168789840 didn’t seem so odd. “The neural network was trained to look for the feature of the eclipse in the light curve with no concern as to periodicity,” Powell told The Register.

“Therefore, to the neural net, an eclipsing binary is no different than an eclipsing sextuple, both of them would likely have an output near 1.0.”

Upon closer inspection, however, the scientists were shocked to realize they had discovered the first-known system of three eclipsing binaries. The stars locked in each pair are very similar to one another in terms of mass, radius, and temperature.

“The fact that all three binaries show eclipses allows us to determine the radii and relative temperatures of each star. This, together with measurement of the radial velocities, allows us to determine the masses of the stars. Having this much information on a multiple star system of this order is quite rare,” Powell added.

There are 17 or so known sextuple star systems, though TIC 168789840 is the first found in which all six suns form eclipsing binary pairs. Scientists hope that studying all its structural and physical properties will unlock mysteries of how multiple star systems are born. ®

Source: https://go.theregister.com/feed/www.theregister.com/2021/01/26/sextuple_star_system/

AI

Governance: Companies mature in their use of AI know that it needs guardrails


Quality governance ensures responsible data models and AI execution, as well as helps the data models stay true to the business objectives.

(Image: Retro wooden robot with light bulb on bright background. Source: Getty Images/iStockphoto.)

The fundamentals of traditional IT governance have focused on service-level agreements like uptime and response time, and also on oversight of areas such as security and data privacy. The beauty of these goals is that they are concrete and easy to understand. This makes them attainable with minimal confusion if an organization is committed to getting the job done.

SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)

Unfortunately, governance becomes a much less-definable task in the world of artificial intelligence (AI), and a premature one for many organizations.

“This can come down to the level of AI maturity that a company is at,” said Scott Zoldi, chief analytics officer at FICO. “Companies are in a variety of stages of the AI lifecycle, from exploring use cases and hiring staff, to building the models, and having a couple of instances deployed but not widely across the organization. Model governance comes into play when companies are mature in their use of AI technology, are invested in it, and realize that AI’s predictive and business value should be accompanied by guardrails.”

“Because AI is more opaque than enterprise IT environments, AI requires a governance strategy that asks questions of architectures and that requires architectures to be more transparent,” Zoldi said.

SEE: 3 steps for better data modeling with IT and data science (TechRepublic)

Achieving transparency in AI governance begins with being able to explain, in plain language, the technology behind AI and how it operates to board members, senior management, end users, and non-AI IT staff. Questions that AI practitioners should be able to answer include, but are not limited to: how data is prepared and ingested into AI systems, which data is being taken in and why, and how the AI operates on that data to answer the questions the business is asking. AI practitioners should also be able to explain how both the data and what is asked of it continuously change over time as business and other conditions change.

This is a pathway to ensuring responsible data models and AI execution, and also a way to ensure that the data models that a company develops for its AI stay true to its business objectives.

One central AI governance challenge is ensuring that the data and the AI operating on it are as bias-free as possible.

“AI governance is a board-level responsibility to mitigate pressures from regulators and advocacy groups,” Zoldi said. “Boards of directors should care about AI governance because AI technology makes decisions that profoundly affect everyone. Will a borrower be invisibly discriminated against and denied a loan? Will a patient’s disease be incorrectly diagnosed, or a citizen unjustly arrested for a crime he did not commit? The increasing magnitude of AI’s life-altering decisions underscores the urgency with which AI fairness and bias should be ushered onto boards’ agendas.”

How to achieve AI fairness

SEE: Equitable tech: AI-enabled platform to reduce bias in datasets released  (TechRepublic)

Zoldi said that to eliminate bias, boards must understand and enforce auditable, immutable AI model governance based on four classic tenets of corporate governance: accountability, fairness, transparency, and responsibility. He believes this can be achieved if organizations focus their AI governance on ethical, efficient, and explainable AI.

Ethical AI ensures that models operate without bias toward a protected group, and are used only in areas where we have confidence in the decisions the models generate. These issues have strong business implications; models that make biased decisions against protected groups aren’t just wrong, they are illegal.

Efficient AI helps AI make the leap from the development lab to making decisions in production that can be accepted with confidence. Otherwise, an inordinate amount of time and resources are invested in models that don’t deliver real-world business value. 

Explainable AI makes sure that companies using AI models can meet a growing list of regulations, starting with GDPR, by being able to explain how the model made its decision, and why.

SEE: Encourage AI adoption by moving shadow AI into the daylight (TechRepublic)

Some organizations are already tackling these AI governance challenges, while others are just beginning to think about them.

This is why, when putting together an internal team to address governance, a best practice approach is a three-tiered structure that begins with an executive sponsor at the top to champion AI at a corporate level.

“One tier down, executives such as the CAO, CTO, CFO, and head of legal should lead the oversight of AI governance from a policy and process perspective,” Zoldi said. “Finally, at the blocking-and-tackling level, senior practitioners from the various model development and model delivery areas, who work together with AI technology on a daily basis, should hash out how to meet those corporate governance standards.”


Source: https://www.techrepublic.com/article/governance-companies-mature-in-their-use-of-ai-know-that-it-needs-guardrails/#ftag=RSS56d97e7


AI

Gartner: The future of AI is not as rosy as some might think


A Gartner report predicts that the second-order consequences of widespread AI will have massive societal impacts, to the point of making us unsure if and when we can trust our own eyes.

(Image: Vector of a face made of digital particles, symbolizing artificial intelligence and machine learning. Source: iStockphoto/Feodora Chiosea.)

Gartner has released a series of Predicts 2021 research reports, including one that outlines the serious, wide-reaching ethical and social problems it predicts artificial intelligence (AI) will cause in the next several years. In Predicts 2021: Artificial Intelligence and Its Impact on People and Society, five Gartner analysts report on different predictions they believe will come to fruition by 2025. The report calls particular attention to what it calls second-order consequences of artificial intelligence, which arise as unintended results of new technologies.

SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)

Generative AI, for example, is now able to create amazingly realistic photographs of people and objects that don’t actually exist; Gartner predicts that by 2023, 20% of account takeovers will use deepfakes generated by this type of AI. “AI capabilities that can create and generate hyper-realistic content will have a transformational effect on the extent to which people can trust their own eyes,” the report said.

The report tackles five different predictions for the AI market, and gives recommendations for how businesses can address those challenges and adapt to the future: 

  • By 2025, pretrained AI models will be largely concentrated among 1% of vendors, making responsible use of AI a societal concern
  • In 2023, 20% of successful account takeover attacks will use deepfakes as part of social engineering attacks
  • By 2024, 60% of AI providers will include harm/misuse mitigation as a part of their software
  • By 2025, 10% of governments will avoid privacy and security concerns by using synthetic populations to train AI 
  • By 2025, 75% of workplace conversations will be recorded and analyzed for use in adding organizational value and assessing risk

Each of those analyses is enough to make AI-watchers sit up and take notice, but combined they paint a picture of a grim future rife with ethical concerns, potential misuse of AI, and loss of privacy in the workplace.

How businesses can respond 

Concerns over AI’s effect on privacy and truth are sure to be major topics in the coming years if Gartner’s analysts are accurate in their predictions, and successful businesses will need to be ready to adapt quickly to those concerns.

A recurring theme in the report is the establishment of ethics boards at companies that rely on AI, whether as a service or a product. This is mentioned particularly for businesses that plan to record and analyze workplace conversations: Boards with employee representation should be established to ensure fair use of conversation data, Gartner said.

SEE: Natural language processing: A cheat sheet (TechRepublic)

Gartner also recommends that businesses establish criteria for responsible AI consumption and prioritize vendors that “can demonstrate responsible development of AI and clarity in addressing related societal concerns.”

As for security concerns surrounding deepfakes and generative AI, Gartner recommends that organizations should schedule training about deepfakes. “We are now entering a zero-trust world. Nothing can be trusted unless it is certified as authenticated using cryptographic digital signatures,” the report said. 
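Content authentication of the kind the report alludes to boils down to standard public-key signatures. Here is a minimal, self-contained Java sketch (our own illustration, not from the Gartner report) of signing media bytes at capture time and verifying them before trusting the content:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Minimal sketch of the signing/verification idea: content is signed when
// captured, and consumers verify the signature before trusting it.
public class ContentSignature {
    public static void main(String[] args) throws Exception {
        KeyPair keys = KeyPairGenerator.getInstance("EC").generateKeyPair();
        byte[] media = "frame-bytes-of-a-video".getBytes(); // placeholder payload

        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(keys.getPrivate());
        signer.update(media);
        byte[] sig = signer.sign(); // attached to the media at capture time

        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(media);
        System.out.println("authentic: " + verifier.verify(sig));
    }
}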

There’s a lot to digest in this report, from figures saying that the best deepfake detection software will top out at a 50% identification rate in the long term, to the prediction that in 2023 a major US corporation will adopt conversation analysis to determine employee compensation. There’s much to be worried about in these analyses, but potential antidotes are included as well. The full report is available at Gartner, but interested parties will need to pay for access.


Source: https://www.techrepublic.com/article/gartner-the-future-of-ai-is-not-as-rosy-as-some-might-think/#ftag=RSS56d97e7


AI

Model serving in Java with AWS Elastic Beanstalk made easy with Deep Java Library


Deploying your machine learning (ML) models to run on a REST endpoint has never been easier. Using AWS Elastic Beanstalk and Amazon Elastic Compute Cloud (Amazon EC2) to host your endpoint and Deep Java Library (DJL) to load your deep learning models for inference makes the model deployment process extremely easy to set up. Setting up a model on Elastic Beanstalk is great if you require fast response times on all your inference calls. In this post, we cover deploying a model on Elastic Beanstalk using DJL and sending an image through a POST call to get inference results on what the image contains.

About DJL

DJL is a deep learning framework written in Java that supports training and inference. DJL is built on top of modern deep learning engines (such as TensorFlow, PyTorch, and MXNet). You can easily use DJL to train your model or deploy your favorite models from a variety of engines without any additional conversion. It contains a powerful model zoo design that allows you to manage trained models and load them in a single line. The built-in model zoo currently supports more than 70 pre-trained and ready-to-use models from GluonCV, HuggingFace, TorchHub, and Keras.
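For example, loading a ready-made image classifier from the built-in model zoo takes only a few lines. The following is a minimal sketch using the same Criteria API that appears in the code later in this post; the image URL is a placeholder, and with the PyTorch dependencies shown below the zoo resolves a default pre-trained classifier:

import ai.djl.Application;
import ai.djl.inference.Predictor;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.ImageFactory;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ModelZoo;
import ai.djl.repository.zoo.ZooModel;

public class ZooQuickstart {
    public static void main(String[] args) throws Exception {
        // Describe what we want; the zoo finds a matching pre-trained model.
        Criteria<Image, Classifications> criteria = Criteria.builder()
                .setTypes(Image.class, Classifications.class)
                .optApplication(Application.CV.IMAGE_CLASSIFICATION)
                .build();
        try (ZooModel<Image, Classifications> model = ModelZoo.loadModel(criteria);
             Predictor<Image, Classifications> predictor = model.newPredictor()) {
            Image img = ImageFactory.getInstance()
                    .fromUrl("https://example.com/cat.jpg"); // placeholder image
            System.out.println(predictor.predict(img).topK(3));
        }
    }
}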

Benefits

The primary benefit of hosting your model using Elastic Beanstalk and DJL is that it’s very easy to set up and provides consistent sub-second responses to a post request. With DJL, you don’t need to download any other libraries or worry about importing dependencies for your chosen deep learning framework. Using Elastic Beanstalk has two advantages:

  • No cold startup – Compared to an AWS Lambda solution, the EC2 instance is running all the time, so any call to your endpoint runs instantly and there isn’t any overhead when starting up new containers.
  • Scalable – Compared to a server-based solution, you can allow Elastic Beanstalk to scale horizontally.

Configurations

You need to have the following gradle dependencies set up to run our PyTorch model:

plugins {
    id 'org.springframework.boot' version '2.3.0.RELEASE'
    id 'io.spring.dependency-management' version '1.0.9.RELEASE'
    id 'java'
}

dependencies {
    implementation platform("ai.djl:bom:0.8.0")
    implementation "ai.djl.pytorch:pytorch-model-zoo"
    implementation "ai.djl.pytorch:pytorch-native-auto"
    implementation "org.springframework.boot:spring-boot-starter"
    implementation "org.springframework.boot:spring-boot-starter-web"
}

The code

We first create a RESTful endpoint using Java Spring Boot and have it accept an image request. We decode the image and turn it into an Image object to pass into our model. The model is autowired by the Spring framework by calling the model() method. For simplicity, we create the predictor object on each request and pass our image in for inference (you can optimize this by using an object pool). When inference is complete, we return the results to the requester. See the following code:

// Note: MODEL_URL, GSON, and logger are fields defined elsewhere in the
// controller class; see the GitHub repo for the full source.
@Autowired
ZooModel<Image, Classifications> model;

/**
 * This method is the REST endpoint where the user can post their images
 * to run inference against a model of their choice using DJL.
 *
 * @param input the request body containing the image
 * @return returns the top 3 probable items from the model output
 * @throws IOException if it failed to read the HTTP request
 */
@PostMapping(value = "/doodle")
public String handleRequest(InputStream input) throws IOException {
    Image img = ImageFactory.getInstance().fromInputStream(input);
    try (Predictor<Image, Classifications> predictor = model.newPredictor()) {
        Classifications classifications = predictor.predict(img);
        return GSON.toJson(classifications.topK(3)) + System.lineSeparator();
    } catch (RuntimeException | TranslateException e) {
        logger.error("", e);
        Map<String, String> error = new ConcurrentHashMap<>();
        error.put("status", "Invoke failed: " + e.toString());
        return GSON.toJson(error) + System.lineSeparator();
    }
}

@Bean
public ZooModel<Image, Classifications> model() throws ModelException, IOException {
    Translator<Image, Classifications> translator = ImageClassificationTranslator.builder()
            .optFlag(Image.Flag.GRAYSCALE)
            .setPipeline(new Pipeline(new ToTensor()))
            .optApplySoftmax(true)
            .build();
    Criteria<Image, Classifications> criteria = Criteria.builder()
            .setTypes(Image.class, Classifications.class)
            .optModelUrls(MODEL_URL)
            .optTranslator(translator)
            .build();
    return ModelZoo.loadModel(criteria);
}

A full copy of the code is available on the GitHub repo.

Building your JAR file

Go into the beanstalk-model-serving directory and enter the following code:

cd beanstalk-model-serving
./gradlew build

This creates a JAR file found in build/libs/beanstalk-model-serving-0.0.1-SNAPSHOT.jar
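As a quick sanity check before deploying (an aside not in the original post), you can run the JAR locally; Spring Boot defaults to port 8080, and the controller shown above maps the /doodle route:

java -jar build/libs/beanstalk-model-serving-0.0.1-SNAPSHOT.jar
curl -X POST -T smiley.png http://localhost:8080/doodle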

Deploying to Elastic Beanstalk

To deploy this model, complete the following steps:

  1. On the Elastic Beanstalk console, create a new environment.
  2. For our use case, we name the environment DJL-Demo.
  3. For Platform, select Managed platform.
  4. For Platform settings, choose Java 8 and the appropriate branch and version.
  5. When selecting your application code, choose Choose file and upload the beanstalk-model-serving-0.0.1-SNAPSHOT.jar that was created in your build.
  6. Choose Create environment.

After Elastic Beanstalk creates the environment, we need to update the Software and Capacity boxes in our configuration, located on the Configuration overview page.

  1. For the Software configuration, we add an additional setting in the Environment Properties section with the name SERVER_PORT and value 5000.
  2. For the Capacity configuration, we change the instance type to t2.small to give our endpoint a little more compute and memory.
  3. Choose Apply configuration and wait for your endpoint to update.
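If you would rather script these steps than click through the console, roughly the same setup can be expressed with the EB CLI. This is a hedged outline rather than the post’s own method; the application name, region, and flag values are assumptions to adapt:

# Hedged sketch: the same setup via the EB CLI (assumes the CLI is installed
# and the artifact path matches your build output).
eb init djl-demo --platform java-8 --region us-east-1
# Point deployments at the prebuilt JAR in .elasticbeanstalk/config.yml:
#   deploy:
#     artifact: build/libs/beanstalk-model-serving-0.0.1-SNAPSHOT.jar
eb create DJL-Demo --instance_type t2.small
eb setenv SERVER_PORT=5000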

Calling your endpoint

Now we can call our Elastic Beanstalk endpoint with our image of a smiley face.

See the following code:

curl -X POST -T smiley.png <endpoint>.elasticbeanstalk.com/inference

We get the following response:

[ { "className": "smiley_face", "probability": 0.9874626994132996 }, { "className": "face", "probability": 0.004804758355021477 }, { "className": "mouth", "probability": 0.0015588520327582955 }
]

The output predicts that a smiley face is the most probable item in our image. Success!

Limitations

If your model isn’t called often and there isn’t a requirement for fast inference, we recommend deploying your models on a serverless service such as Lambda. However, this adds overhead due to the cold startup nature of the service. Hosting your models through Elastic Beanstalk may be slightly more expensive because the EC2 instance runs 24 hours a day, so you pay for the service even when you’re not using it. However, if you expect a lot of inference requests a month, we have found the cost of model serving on Lambda is equal to the cost of Elastic Beanstalk using a t3.small when there are about 2.57 million inference requests to the endpoint.
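As a rough back-of-the-envelope illustration of that break-even claim (our own arithmetic, not from the original post, using assumed late-2020 us-east-1 list prices and an assumed 300 ms, 1 GB Lambda invocation; actual costs will differ):

// Illustrative break-even arithmetic only; all prices and durations below
// are assumptions, not quoted figures.
public class CostSketch {
    public static void main(String[] args) {
        double ec2HourlyUsd = 0.0208;             // t3.small on-demand (assumed)
        double ebMonthlyUsd = ec2HourlyUsd * 730; // ~$15.2/month, always on

        double requests = 2_570_000;              // figure quoted in the post
        double perMillionReq = 0.20;              // Lambda request price (assumed)
        double perGbSecond = 0.0000166667;        // Lambda compute price (assumed)
        double gbSecondsPerCall = 1.0 * 0.3;      // 1 GB memory x 0.3 s per inference

        double lambdaMonthlyUsd = requests / 1_000_000 * perMillionReq
                + requests * gbSecondsPerCall * perGbSecond;
        // Under these assumptions the two come out roughly comparable.
        System.out.printf("EB ~$%.2f vs Lambda ~$%.2f per month%n",
                ebMonthlyUsd, lambdaMonthlyUsd);
    }
}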

Conclusion

In this post, we demonstrated how to start deploying and serving your deep learning models using Elastic Beanstalk and DJL. You just need to set up your endpoint with Java Spring, build your JAR file, upload that file to Elastic Beanstalk, update some configurations, and it’s deployed!

We also discussed some of the pros and cons of this deployment process, namely that it’s ideal if you need fast inference calls, but the cost is higher when compared to hosting it on a serverless endpoint with lower utilization.

This demo is available in full in the DJL demo GitHub repo. You can also find other examples of serving models with DJL across different JVM tools like Spark and AWS products like Lambda. Whatever your requirements, there is an option for you.

Follow our GitHub, demo repository, Slack channel, and Twitter for more documentation and examples of DJL!


About the Author

Frank Liu is a Software Engineer for AWS Deep Learning. He focuses on building innovative deep learning tools for software engineers and scientists. In his spare time, he enjoys hiking with friends and family.

Source: https://aws.amazon.com/blogs/machine-learning/model-serving-in-java-with-aws-elastic-beanstalk-made-easy-with-deep-java-library/
