Defense against Slaughterbot Attacks

Slaughterbots is a video that presents a dramatized near-future scenario in which swarms of inexpensive microdrones use artificial intelligence, explosives, and facial recognition to assassinate political opponents by crashing into them. In my opinion, it is one of the most dystopian and depressing near-future scenarios I know.

When I watched the video for the first time in 2017, I had no idea how we could defend towns against such terror attacks. Shooting at so many microdrones does not make sense. It also makes no sense to jam the radio signals because the microdrones fly fully autonomously. Now, some years later, I have realized that we could use secure machine learning to defend security-critical areas like shopping malls or train stations.

Many practical machine learning systems, like self-driving cars, operate in the physical world. Adversarial stickers (patches) placed on top of, e.g., traffic signs can fool the perception systems of self-driving cars.

Patch attacks projected on monitors in hallways, train stations, and so on could fool the facial recognition systems of such suicide drones. In this scenario, it is important to iterate over many pretested patch attacks on test classifiers to find a potential weakness in the microdrones' recognition models. Once an effective attack has been found, we could project it on all available screens in the attacked area.
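To make this concrete, here is a minimal sketch of such a screening step, assuming PyTorch, two publicly available torchvision models as stand-ins for the drones' unknown classifiers, and random tensors in place of real camera frames and a real patch library:

```python
# Sketch: screen a library of pretested patches against surrogate classifiers.
# The models, frames, labels, and patch library are illustrative stand-ins.
import torch
import torchvision.models as models

surrogates = [models.resnet18(weights="DEFAULT").eval(),
              models.mobilenet_v3_small(weights="DEFAULT").eval()]

def apply_patch(images, patch, x=80, y=80):
    """Paste a square patch onto a batch of images at a fixed location."""
    patched = images.clone()
    patched[:, :, y:y + patch.shape[-2], x:x + patch.shape[-1]] = patch
    return patched

@torch.no_grad()
def fooling_rate(patch, images, labels):
    """Average fraction of surrogate predictions pushed away from the true label."""
    fooled = 0.0
    for model in surrogates:
        preds = model(apply_patch(images, patch)).argmax(dim=1)
        fooled += (preds != labels).float().mean().item()
    return fooled / len(surrogates)

# Screen the pretested patch library and keep the most effective patch.
images = torch.rand(8, 3, 224, 224)        # stand-in for camera frames
labels = torch.zeros(8, dtype=torch.long)  # stand-in for the true class
patch_library = [torch.rand(3, 50, 50) for _ in range(5)]
best_patch = max(patch_library, key=lambda p: fooling_rate(p, images, labels))
```

The patch with the highest average fooling rate across the surrogates would then be the one projected on the screens.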

Considering that shopping malls nowadays have physical barriers against terrorist truck attacks, that there are underground nuclear shelters, and that Swiss bridges are prepared with explosives for demolition, it is not hard to imagine that we could develop an emergency program for publicly available monitors that helps defend against such Slaughterbot attacks.

Problems of Generating Real-World Patch Attacks

Of course, there are still some problems left when generating real-world patch attacks. In real-world conditions, for example, images of the same object are unlikely to be exactly the same. To successfully realize physical attacks, attackers need to find image patches that are independent of the exact imaging conditions, such as changes in pose and lighting. For that, we need adversarial patches that generalize beyond a single image. To enhance the generality of the patch, we look for patches that can cause any image in a set of inputs to be misclassified. For that reason, we formalize the generation of real-world patch attacks as an optimization problem.

To find a reasonably universal patch for a given classifier, it is important to solve this optimization problem for many different classifiers before the actual attack, as sketched below.
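The following sketch shows one common way to set up this optimization, in the spirit of Expectation-over-Transformation-style adversarial patches: a single patch is optimized so that, under random placement, it misclassifies a whole set of images across several surrogate classifiers. The torchvision models, the random image set, the patch size, the placement range, and the hyperparameters are illustrative assumptions:

```python
# Sketch: optimize one universal patch across images, placements, and models.
import random
import torch
import torch.nn.functional as F
import torchvision.models as models

surrogates = [models.resnet18(weights="DEFAULT").eval(),
              models.mobilenet_v3_small(weights="DEFAULT").eval()]
images = torch.rand(16, 3, 224, 224)        # stand-in for a set of real images
labels = torch.zeros(16, dtype=torch.long)  # stand-in for the true class

patch = torch.rand(3, 50, 50, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

for step in range(200):
    # Random placement stands in for varying pose and imaging conditions.
    x, y = random.randint(0, 174), random.randint(0, 174)
    patched = images.clone()
    patched[:, :, y:y + 50, x:x + 50] = patch.clamp(0, 1)

    # Untargeted attack: push every surrogate away from the true label.
    loss = -sum(F.cross_entropy(m(patched), labels) for m in surrogates)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A real physical attack would additionally randomize lighting, scale, and rotation, but the structure of the problem stays the same.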

Autonomous Organizations

In a world where machines are increasingly replacing human labor, we have to create new lifestyles. Passive income is an excellent way to earn money in such a world. It is derived from a rental property, a limited partnership, or another enterprise in which a person is not actively involved. Nowadays, these organizations are mostly managed by public corporations. But soon, more and more autonomous organizations will arise because of the rapid development of artificial intelligence and blockchain technology.

Based on this development, it could be possible to develop whole companies that work autonomously for people who don’t want to or can’t work anymore. So it could be a new business model for engineers to develop such companies for customers.

But how could such an organization work internally?

Current neural networks can already evaluate the aesthetics of an image, so it will not take long until neural networks can also evaluate the appeal, usefulness, and other factors of a product. A combination of

  1. Optimization Algorithms
  2. Artificial Intelligence
  3. Simulations
  4. APIs to marketplaces like Amazon

could be enough to run such organizations autonomously. Figure 1 illustrates an autonomous organization framework. The holder inputs a number of assets and configures what the autonomous organization should do. The AO calibrates itself; then, it interfaces with Google Trends to analyze markets. Its analysis could find that it is currently a profitable moment to sell remote-controlled cars. It could then use social media platforms to find popular models and search other websites for the toys' construction plans. After that, it could access online shops to source the needed components, simulating many different combinations (optimizing via evolutionary algorithms, evaluating the product via deep learning, and, finally, forecasting) to build a competitive RC car. After the product has been constructed, it could be sold via the Amazon API.
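The skeleton below sketches this control flow in plain Python. Every client object (trends, social, shops, marketplace) and every method on them is a hypothetical placeholder, not a real API; the point is only the calibrate-analyze-simulate-sell loop:

```python
# Sketch of the Figure 1 control flow; all clients and methods are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Design:
    model: str
    components: list = field(default_factory=list)  # [(part, price), ...]
    expected_profit: float = 0.0

def run_cycle(assets, config, trends, social, shops, marketplace):
    # 1. Market analysis, e.g. via a Google-Trends-style client.
    category = trends.most_profitable_category(config["categories"])

    # 2. Popular models and their construction plans from social media / web.
    candidates = social.popular_models(category)

    # 3. Simulate component combinations: evolutionary search over combinations,
    #    deep-learning evaluation of the product, and demand forecasting
    #    (all hidden behind the placeholder `simulate`).
    designs = [simulate(shops, model) for model in candidates]
    best = max(designs, key=lambda d: d.expected_profit)

    # 4. If the best design is profitable and affordable, list it for sale.
    cost = sum(price for _, price in best.components)
    if best.expected_profit > 0 and cost <= assets:
        marketplace.list_product(best)

def simulate(shops, model):
    # Placeholder: a real implementation would run the optimization,
    # evaluation, and forecasting loop described above.
    components = shops.cheapest_components(model)
    return Design(model, components, expected_profit=0.0)
```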

Figure 1: A Framework for an autonomous organization

Besides such manufacturing organizations, it would probably be easier to implement services like e-learning schools first. For example, an online school that teaches customers how to dance salsa or bachata. From one of my previous posts, you already know how easy it is to develop a neural network that can differentiate between song genres. It would also be possible to develop a classifier that can detect whether a person is dancing salsa or not.
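As a rough sketch of such a classifier, assuming pose keypoints have already been extracted from video clips (for example, 17 joints with 2 coordinates over 64 frames), a small recurrent network could score salsa versus not-salsa; the architecture and shapes here are illustrative assumptions, not a tested model:

```python
# Sketch: binary dance classifier over pre-extracted pose keypoint sequences.
import torch
import torch.nn as nn

class DanceClassifier(nn.Module):
    def __init__(self, joints=17, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(input_size=joints * 2, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, 2)   # salsa vs. not salsa

    def forward(self, keypoints):          # (batch, frames, joints * 2)
        _, last_hidden = self.rnn(keypoints)
        return self.head(last_hidden[-1])

# Dummy forward pass with random "keypoints" standing in for real clips.
model = DanceClassifier()
logits = model(torch.rand(4, 64, 17 * 2))  # -> (4, 2) class scores
```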

Especially in social environments, I really prefer to have people instead of machines in control. But what about my previous post on language table organizations? It would be a lot of work to automate every single step of such an organization, but it could still be possible.

* This whole framework is, for now, just a general idea. Such organizations would probably work better in a field where the competitive environment does not change too much.

Storytelling for AI Development

Around five years ago, Amazon released its smart assistant, Amazon Alexa. She has gained a lot of popularity among voice assistants. Most users use her to play music and to search the internet for information. But these are just fun tasks. Alexa can help you with many more daily tasks and make your life easier.

Amazon Alexa tries to be your assistant, but someone who only reacts to commands doesn’t show any interpersonal relationship. This type of connection is essential for an assistant’s job – it builds respect and lets us see assistants as ‘individuals.’ This is one goal that AI developers want to achieve: artificial intelligence that is human-like. In this blog post, I want to discuss one possibility for achieving this goal.

Building an Interpersonal Relationship with Storytelling

A while ago, I read “Story” by Robert McKee. This book inspired me to think about human-like AI development a little bit differently.

Robert McKee describes in his book that a story is a metaphor for life – stories awaken feelings. When we watch good movies, we try to adopt the personal perspective of the movie characters. This leads to an ever-stronger bond with the characters over the course of the story.

This is exactly what we want to achieve between our users and their virtual assistants. The field of storytelling reminds us that there is no way to love or hate a person if you have not heard that person’s story; that’s why our virtual assistant needs to have a backstory. As the movie example shows, it makes no difference whether the person is real or not.

Often, a connection with movie characters arises when they make decisions in stressful situations. The real genius of a person manifests itself in the choices they make under pressure. The higher the pressure, the more the decision reflects the innermost nature of the character. Stressful situations also appear for virtual assistants, for example when users ask unanswerable questions. In such situations, ordinary virtual assistants respond with standard answers like, “I don’t understand,” or “I can’t help you.” This is where the real character of the virtual assistant is revealed. AI developers have to avoid such standard answers; instead, the artificial intelligence should answer such problematic user questions with a backstory. The backstory depends on the use case but should be created by somebody familiar with storytelling and dialog writing. For dialog writing, I can recommend another book by Robert McKee called “DIALOG.”
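A minimal sketch of this idea, assuming a hypothetical NLU client, a made-up confidence threshold, and placeholder backstory lines (which should really be written by a storyteller), could look like this:

```python
# Sketch: replace generic fallback answers with backstory snippets.
import random

BACKSTORY_FALLBACKS = [
    "That reminds me of the lab where I was trained - we never figured "
    "that one out either.",
    "My first developer used to ask me things like that just to see me "
    "struggle. I still don't have a good answer.",
]

def answer(user_question, nlu):
    intent, confidence = nlu.classify(user_question)  # hypothetical NLU client
    if confidence < 0.5:
        # Instead of "I don't understand", reveal a piece of the backstory.
        return random.choice(BACKSTORY_FALLBACKS)
    return nlu.respond(intent)
```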

According to the dramatist Jean Anouilh, fiction gives life a form. Thus, stories support the development of artificial life, making virtual assistants more than just empty shells.