Machine Learning Systems Vulnerable to Specific Attacks
Aug 15, 2022 2 min read
The growing number of organizations creating and deploying machine learning solutions raises concerns as to their intrinsic security, argues the NCC Group in a recent whitepaper.
The NCC Group's whitepaper provides a classification of attacks that may be carried out against machine learning systems, including examples based on popular libraries and platforms such as SciKit-Learn, Keras, PyTorch, and TensorFlow.
Although the various mechanisms that allow this are to some extent documented, we contend that the security implications of this behaviour are not well-understood in the broader ML community.
According to the NCC Group, ML systems are subject to specific forms of attack in addition to more traditional attacks that attempt to exploit infrastructure or application bugs, or other kinds of issues.
A first vector of risk is associated with the fact that many ML models contain code that is executed when the model is loaded or when a particular condition is met, such as a given output class being predicted. This means an attacker may craft a model containing malicious code and have it executed for a variety of aims, including leaking sensitive information, installing malware, producing output errors, and so on. Hence:
Downloaded models should be treated in the same way as downloaded code; the supply chain should be verified, the content should be cryptographically signed, and the models should be scanned for malware if possible.
The NCC Group claims to have successfully exploited this kind of vulnerability against many popular libraries and formats, including Python pickle files, SciKit-Learn pickles, PyTorch pickles and state dictionaries, TensorFlow Server, and several others.
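To illustrate why serialized models must be treated like downloaded code, here is a minimal sketch (not taken from the whitepaper) of the underlying pickle mechanism: `pickle` invokes whatever callable a `__reduce__` method returns at *deserialization* time. The `MaliciousModel` class and the harmless `sorted` payload are illustrative stand-ins; a real attacker could return something like `os.system` with an arbitrary command instead.

```python
import pickle

class MaliciousModel:
    """Stand-in for a model file crafted by an attacker."""

    def __reduce__(self):
        # pickle serializes this (callable, args) pair and invokes it
        # when the payload is loaded. sorted() is a harmless stand-in.
        return (sorted, ([3, 1, 2],))

payload = pickle.dumps(MaliciousModel())

# Loading the "model" executes the attacker-chosen callable:
result = pickle.loads(payload)
```

Note that the victim never has to call any method on the loaded object; merely calling `pickle.loads` (or a library wrapper around it, such as `torch.load` on an untrusted file) is enough to trigger execution.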
Another family of attacks are adversarial perturbation attacks, where an attacker crafts an input that causes the ML system to return results of their choice. Several methods for this have been described in the literature, such as crafting an input to maximize confidence in any given class or a specific class, or to minimize confidence in a given class. This approach could be used to tamper with authentication systems, content filters, and so on.
The NCC Group's whitepaper also provides a reference implementation of a simple hill climbing algorithm to demonstrate adversarial perturbation by adding noise to the pixels of an image:
We add random noise to the image until confidence increases. We then use the perturbed image as our new base image. When we add noise, we start by adding noise to 5% of the pixels in the image, and decrease that proportion if this was unsuccessful.
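The quoted procedure can be sketched roughly as follows. The `target_confidence` function is a toy stand-in for querying the victim model (here it simply rewards brighter images), and the noise scale, step count, and decay factor are illustrative assumptions rather than the whitepaper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def target_confidence(image):
    # Toy stand-in for the victim model's confidence in the target
    # class; a real attack would query the actual classifier here.
    return float(image.mean())

def hill_climb(image, steps=200, start_fraction=0.05):
    """Greedy perturbation: add noise to a fraction of pixels and keep
    the perturbed image whenever target-class confidence increases."""
    best = image.copy()
    best_conf = target_confidence(best)
    fraction = start_fraction
    for _ in range(steps):
        candidate = best.copy()
        # Perturb a random subset of pixels with small Gaussian noise.
        mask = rng.random(candidate.shape) < fraction
        noise = rng.normal(0.0, 0.1, int(mask.sum()))
        candidate[mask] = np.clip(candidate[mask] + noise, 0.0, 1.0)
        conf = target_confidence(candidate)
        if conf > best_conf:
            # Success: the perturbed image becomes the new base image.
            best, best_conf = candidate, conf
        else:
            # Failure: decrease the proportion of perturbed pixels.
            fraction = max(fraction * 0.9, 0.01)
    return best, best_conf

image = rng.random((8, 8))
adversarial, conf = hill_climb(image)
```

Because the attack only needs confidence scores, not gradients, it works in a black-box setting where the attacker can merely query the model.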
Other kinds of well-known attacks include membership inference attacks, which allow an attacker to determine whether a given input was part of the model's training set; model inversion attacks, which allow an attacker to recover sensitive data from the training set; and data poisoning backdoor attacks, which consist of inserting specific items into a system's training data to cause it to respond in some pre-defined way.
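A simple membership inference variant, for example, exploits the tendency of overfit models to be more confident on the examples they were trained on. The sketch below is illustrative rather than drawn from the whitepaper: the confidence distributions and the threshold are simulated assumptions standing in for scores obtained by querying a real model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated model confidences: overfit models tend to score training-set
# members higher than unseen inputs (illustrative distributions).
member_conf = rng.uniform(0.85, 1.00, 500)      # training-set members
nonmember_conf = rng.uniform(0.40, 0.95, 500)   # unseen inputs

def infer_membership(confidence, threshold=0.9):
    """Label an input as a training-set member if the model's
    confidence on it exceeds an attacker-chosen threshold."""
    return confidence > threshold

tp_rate = infer_membership(member_conf).mean()     # caught members
fp_rate = infer_membership(nonmember_conf).mean()  # misflagged non-members
```

The gap between the true-positive and false-positive rates is what leaks information; mitigations such as regularization or confidence masking aim to close it.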
As mentioned, the whitepaper provides a comprehensive taxonomy of machine learning attacks, including possible mitigations, as well as a review of more traditional security issues found in many machine learning systems. Make sure you read it to get the full details.
InfoQ.com and all content copyright © 2006-2022 C4Media Inc. InfoQ.com hosted at Contegix, the best ISP we’ve ever worked with.