Luxury Handbag Investment – A Data-Driven Point of View

In the investment landscape, designer handbags are undoubtedly worth a look. According to Art Market Research (AMR), designer handbags outperform art, classic cars, and rare whiskies in terms of investment potential. Some handbags from Hermès, Chanel, and Louis Vuitton have even appreciated by an average of 83% over the last ten years. To put that into context, watches have increased by 72%.

Average Prices of different handbag models on different reseller platforms in December 2021.

When considering designer handbags as an investment, it is important to have the right expectations. A quality designer handbag can be a great wardrobe investment, but only certain models can realistically be sold for a profit years later.

Where do you get them?

Whether you are on the lookout for a classic Louis Vuitton bag, or desperately want a Hermès Birkin and don’t want to wait on their list, luxury resale websites are the new place to be. The most popular luxury resale sites are Vestiaire Collective, The Luxury Closet, and Rebelle.

Short-Term Strategy

When reselling fashion items like handbags, you have to understand the trends. A good way to do so is to analyze the sales on the reselling platforms mentioned above, which give you an overview of how certain handbag models are performing. One useful performance indicator is the turnaround time (how long a product stays on the market before it sells). Lower turnaround times indicate that a model is in higher demand than others.

Average turnaround times of different handbag models in December 2021. Each model's sample size is larger than 20 items (so still a relatively small sample).

When setting a price, do not forget to take platform fees into account (mostly around 25% of the price). A favorable scenario is therefore to buy a handbag at least 25% below its average price and sell it slightly above the average price.
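The fee arithmetic is worth making explicit. A quick sketch (using an illustrative average price of 1,000 and the 25% fee mentioned above) shows why selling exactly at the average is not enough:

```python
def min_profitable_sale(purchase_price, fee_rate=0.25):
    """Lowest sale price whose net proceeds cover the purchase.

    The platform keeps fee_rate of the sale, so you receive
    sale_price * (1 - fee_rate); break-even is purchase_price / (1 - fee_rate).
    """
    return purchase_price / (1 - fee_rate)

# Buy 25% below a 1,000 average (i.e., at 750), then sell at the average:
net_proceeds = 1000 * (1 - 0.25)    # you receive only 750 after fees
print(net_proceeds - 750)           # 0.0 -> merely breaking even
print(min_profitable_sale(750))     # 1000.0 -> you must sell above average
```

With a 25% fee, buying 25% below average and selling at the average only breaks even, which is why the sale price has to land a bit above the average.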

Long-Term Strategy

Designer bags go in and out of fashion, but a well-chosen designer bag can last forever. Classic brands, such as Hermès, Chanel, and Louis Vuitton, and classic handbag styles tend to hold their value. Taking good care of your bag, both when in use and in storage, is necessary to preserve its value if you are looking to trade it in later.

Microchips – Demand, Industry, and Shortage

The microchip has become one of the most important strategic materials of the 21st century. Almost everything we use depends on microchips, from your iPhone and your toaster to fighter jets and automobiles. Microchips are part of our daily lives and, therefore, the heart of our modern society. The development of AI, the Internet of Things, and the self-driving car revolution will only reinforce this trend.

From Semiconductors To Microchips

All this technological advancement builds on a simple group of materials called semiconductors. When passing through a conductor, electricity faces little resistance, creating a free-flowing current. In an insulator, electrical current cannot travel due to high resistance. Semiconductors sit somewhere between these two extremes, allowing a degree of control over the flow of electricity by applying electric fields. Silicon semiconductors are the industry standard for most transistors, the devices that regulate current and act as switches for electronic signals. These transistors are crucial to microchip manufacturing, from processors to memory cards.

Semiconductor Industry

The semiconductor industry has professionalized, and today, companies in the field typically specialize in one of the following domains:

Mining: China, with two-thirds of worldwide production, is by far the world's largest producer of silicon and therefore of the essential raw material for microchips. Other producers are Russia, the USA, Norway, and Brazil.

Chip Design defines how many cores a microchip should have, how those and other components such as memory are arranged on the silicon, and how the circuits should actually look. Chip designers normally outsource the manufacturing to fab foundries (microchip manufacturers). Famous chip designers include AMD, Apple, Amazon, Alphabet, and many more.

Fabrication: There are only a handful of fab foundries. Intel, Samsung, and TSMC are the top three companies in this field by sales revenue. While Intel designs its own microchips, companies like TSMC specialize in manufacturing microchips for other companies and are therefore pure-play foundries. Among fab foundries, TSMC (ca. 50% market share, Taiwan), Samsung (ca. 20%, South Korea), GlobalFoundries (ca. 8%, USA), UMC (ca. 7%, Taiwan), and SMIC (ca. 5%, China) are the most notable. TSMC delivers its microchips to famous tech players like AMD, Apple, ARM, Broadcom, Nvidia, and Qualcomm. TrendForce and ReportLinker estimated foundry revenue in 2020 at around 70 billion dollars, with projected growth of about 10% per year over the next decade.

Equipment: The high-tech semiconductor industry needs some of the most advanced engineered machines in the world; without them, no manufacturer could keep up with the competition. The Dutch company ASML makes lithography systems, the machines used to make chips. All major chipmakers use its technology because ASML's lithography systems lead this field by a margin of years.

Semiconductor Microchip Shortage

The 2020 global microchip shortage is an ongoing crisis: demand for microchips is greater than supply, which has led to major shortages and queues among consumers, and not only in the information technology sector. According to AlixPartners, the chip shortage could cost the automotive industry around the world some 61 billion dollars. So how did this shortage start?

One major reason is the tech war between the USA and China. Other factors are the outsourcing of AMD's chip production to TSMC, which created additional pressure on TSMC's production plants during the pandemic, and the Covid-19 crisis itself.

Semiconductors are no longer just components, but strategic resources that all major economies must secure.

Arisa Liu (Analyst, Taiwan Economic Research Institute)

Amazingly, there are only a handful of major microchip manufacturers in the world (TSMC, Samsung, Intel). Whoever has secure access to microchips can make their economy more robust against these global shortages. In this way, microchips have more or less become the new oil of the 21st century.

So, countries will make an increased effort to secure the supply of microchips for their economies in the future. This effect can already be observed in countries like the USA, which is strengthening its semiconductor production.

The Future of Freelancer Platforms

The digital transformation of the workplace has only just begun. The notion that you have to move to Silicon Valley to work for one of the world-class organizations is simply no longer true.

Platforms like Fiverr and Upwork give freelancers the possibility to advertise their services to millions of customers remotely. That offers an excellent opportunity for people who want to travel the world and still earn money.

Remote freelancing allows people from developing countries to participate in Western markets without leaving their homes, and it allows them to improve their standard of living. It will also drive economic growth in these countries, especially in areas with high unemployment rates.

While businesses compete for local talent, remote freelancers give smaller startups a larger talent pool to choose from. Instead of hiring a graphic designer locally, startups gain access to a far broader and deeper talent pool through these freelancer platforms than companies that limit themselves to one geographic area. And for managers, organizing and coordinating a remote team's work will be crucial to winning recognition and advancement in the coming years.

So what will change in the future? I think that prices for services that can be done remotely will drop, and such work will increasingly be outsourced to developing countries. People who can work remotely may move to nicer places and no longer need to live where their employer is located. This could have quite an interesting effect in Europe, for example: nowadays, citizens of low-wage countries move to Northern Europe to earn more money, while Northern Europeans move to Southern Europe to enjoy a friendlier climate.

Provence (2020)

Smartphones and Wearables for Remote Diagnostics

Over the past few years, medical diagnostic apps have been on the rise. Rapidly emerging technologies for diagnosing diseases result in more personalized patient care. About 70% of medical decisions are supported by diagnostics [2]. However, a shortage of specialists and relatively low diagnostic accuracy call for a new diagnostic strategy, in which deep learning may play a significant role. Smartphones and wearable devices can play a key role in health monitoring and diagnostics. They already support remote diagnostics and decrease the workload of GPs.

Diagnostic Apps

Modern smartphones are fitted with several sensors that allow for the sensing of many health parameters and health conditions.

Built-in sensors in a typical smartphone; the number of sensors keeps rising [1].

Respiratory sounds are important indicators of respiratory health and respiratory disorders. When a person breathes, the sound emitted is directly related to air movement, changes within lung tissue, and the position of secretions within the lung. Depending on the sound, it is possible to monitor asthma, record respiratory sounds for snoring and sleep apnea severity, detect respiratory symptoms like sneezing and coughing, detect bronchitis, bronchiolitis, and pertussis, record wheezes in pediatric populations, and detect chronic obstructive pulmonary disease (COPD).

Leukocoria is an abnormal white reflection from the retina that ophthalmologists use to detect several different eye diseases. As well as being an early indication of retinoblastoma, a pediatric eye cancer, leukocoria can also be a sign of pediatric cataract, Coats' disease, amblyopia, strabismus, and other childhood eye disorders [4]. A portable 3D-printed device connected to a smartphone takes precise images of the retina to detect back-of-the-eye (fundus) disease at a far lower cost than conventional methods [3]. It is also possible to analyze images taken with this retinal camera to detect diabetic retinopathy, one of the leading causes of blindness.

Heart rate variability (HRV) is a measure of variations in the time intervals between your heartbeats and describes how “uneven” your heart beats. This metric can be used for different purposes:

  • to measure stress levels
  • to diagnose chronic health problems
  • to assess the immune system
  • to predict the recovery time after a severe illness
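As a rough sketch of how such a metric is computed, here are two common time-domain HRV measures (SDNN and RMSSD) calculated from a list of beat-to-beat (RR) intervals; the interval values below are made up for illustration:

```python
import statistics

def hrv_metrics(rr_intervals_ms):
    """Two common time-domain HRV metrics from RR intervals (milliseconds).

    SDNN:  standard deviation of all intervals.
    RMSSD: root mean square of successive interval differences.
    """
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    sdnn = statistics.pstdev(rr_intervals_ms)
    rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return sdnn, rmssd

print(hrv_metrics([800, 800, 800, 800]))  # (0.0, 0.0): perfectly even beats
print(hrv_metrics([790, 815, 785, 810]))  # both metrics > 0: "uneven" beats
```

A wearable or phone camera supplies the RR intervals; the metrics themselves are just descriptive statistics on top of them.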

Ear and mastoid diseases can be treated effectively with early detection and appropriate medical care.

Skin diseases are widespread nowadays. Smartphone cameras now have such high resolution that these kinds of diseases can be detected from photos [5].

In the modern world, psychological health issues like anxiety and depression have become very common. Smartphone technology can be used to help diagnose depression and anxiety [2].

And this is not the end of the medical usage of smartphones. Future sensors for analyzing sweat, insulin levels, and more are coming and will support more diagnostic applications for smartphones and wearables.

Datasets

The lack of large training data sets is often mentioned as an obstacle. However, this is only partially correct. Nowadays, hospitals store huge amounts of data, and databases in, e.g., radiology are filled with millions of images. There are also large public data sets available on the internet.

What ML model for what task?

The survey ‘A Survey on Deep Learning in Medical Image Analysis’ concluded that the exact deep learning architecture is not the most important determinant of a good solution. More important is expert knowledge about the task, which can provide advantages beyond adding more layers to a CNN. Novel data preprocessing strategies also contribute to more accurate and more robust neural networks [6].

Problems

There remain some problems with the accuracy of such systems. A diagnostic system with 98% accuracy can sound quite impressive, but when we scale it up to a user base of millions, we could overrun our health systems with wrongly diagnosed patients. So it is essential to make these diagnostic systems more and more robust.
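The scaling argument is simple arithmetic; a short sketch with illustrative numbers:

```python
def expected_misdiagnoses(users, accuracy):
    """Number of wrong diagnoses a given accuracy produces at scale."""
    return round(users * (1 - accuracy))

# 98% accuracy sounds impressive, yet at a million users it still
# leaves tens of thousands of people with a wrong diagnosis:
print(expected_misdiagnoses(1_000_000, 0.98))  # 20000
```

And this simple estimate is optimistic: it ignores base rates, so for rare diseases the share of false positives among flagged patients would be even higher.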

References

[1] Majumder, Sumit, and M. Jamal Deen. “Smartphone sensors for health monitoring and diagnosis.” Sensors 19.9 (2019): 2164.

[2] https://www.bupa.com/newsroom/our-views/the-future-of-diagnostics

[3] https://medicalxpress.com/news/2019-06-portable-device-eye-disease-remotely.html

[4] Munson, Michael C., et al. “Autonomous early detection of eye disease in childhood photographs.” Science advances 5.10 (2019): eaax6363.

[5] Chan, Stephanie, et al. “Machine Learning in Dermatology: Current Applications, Opportunities, and Limitations.” Dermatology and therapy (2020): 1-22.

[6] Litjens, Geert, et al. “A survey on deep learning in medical image analysis.” Medical image analysis 42 (2017): 60-88.

Defense against Slaughterbot Attacks

Slaughterbots is a video that presents a dramatized near-future scenario in which swarms of inexpensive microdrones use artificial intelligence, explosives, and facial recognition to assassinate political opponents by crashing into them. In my opinion, it is one of the most dystopian and depressing near-future scenarios I know.

When I first watched the video in 2017, I had no idea how we could defend towns against such terror attacks. Shooting at so many microdrones makes no sense. Blocking the radio signals also makes no sense because the microdrones fly fully autonomously. Now, some years later, I realized that we could use secure machine learning to defend security-critical areas like shopping malls or train stations.

Many practical machine learning systems, like self-driving cars, operate in the physical world. Adding adversarial stickers (patches) on top of, e.g., traffic signs can fool the perception systems of self-driving cars.

Patch attacks projected on monitors in hallways, train stations, and so on could fool the facial recognition systems of such suicide drones. In this scenario, it is important to iterate over many pretested patch attacks on test classifiers to find a potential weakness in the microdrones. Once an effective attack is found, we could project it on all available screens in the attacked area.

When we consider that shopping malls nowadays have physical barriers against terrorist truck attacks, that critical utilities are built underground, and that Swiss bridges are prepared with explosives, it is not hard to imagine an emergency program for publicly available monitors that could help defend against adversarial Slaughterbot attacks.

Problems of Generating Real-World Patch Attacks

Of course, there are still some problems left when generating real-world patch attacks. Under real-world conditions, for example, images of the same object are unlikely to be exactly the same. To successfully mount physical attacks, attackers need image patches that are independent of the exact imaging conditions, such as changes in pose and lighting. We therefore need adversarial patches that generalize beyond a single image. To enhance a patch's generality, we look for patches that cause any image in a set of inputs to be misclassified. For that reason, we formalize the generation of real-world patch attacks as an optimization problem.

To find a reasonably universal patch for a given classifier, it is important to solve this optimization problem for many different classifiers before the actual attack.
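To illustrate the optimization view, here is a minimal NumPy sketch in which a linear scorer stands in for a real detector (an assumption made for brevity; real patch attacks run the same gradient loop against deep networks, typically averaging over random transformations as well). One patch is optimized so that it lowers the detection score on a whole set of images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a detector: a linear scorer over 8x8 "images".
# score > 0 means "target recognized". Real attacks run the same loop
# against a deep network, using backpropagated gradients instead.
w = rng.normal(size=(8, 8))

def score(img):
    return float(np.sum(w * img))

def optimize_patch(images, region, steps=100, lr=0.1):
    """Find ONE patch that lowers the detection score on ALL images.

    region = (row, col, size); only those pixels may be modified.
    For this linear scorer, the gradient of the (image-averaged) score
    with respect to the patch is simply w[region], so plain gradient
    descent on the mean score is exact.
    """
    r, c, n = region
    patch = np.zeros((n, n))
    for _ in range(steps):
        patch -= lr * w[r:r + n, c:c + n]   # descend the detection score
        patch = np.clip(patch, -1.0, 1.0)   # keep pixel values printable
    return patch

def apply_patch(img, patch, region):
    r, c, n = region
    out = img.copy()
    out[r:r + n, c:c + n] = patch
    return out

# A set of slightly different "imaging conditions" of the same scene:
images = [0.5 + 0.1 * rng.normal(size=(8, 8)) for _ in range(10)]
region = (2, 2, 4)
patch = optimize_patch(images, region)

before = np.mean([score(im) for im in images])
after = np.mean([score(apply_patch(im, patch, region)) for im in images])
print(after < before)   # True: one patch degrades detection across the set
```

Against a neural network, the closed-form gradient would be replaced by backpropagation through the model, but the structure of the loop (optimize one patch over a set of inputs) stays the same.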

AI Safety Concerns in Warfare

Governments around the world are increasingly investing in autonomous military systems. Many are already developing programs and technologies that they hope will give them an edge over their adversaries.

AI in Warfare

Target Systems Analysis (TSA) and Target Audience Analysis (TAA) are intelligence-related methods to develop a deep understanding of potential areas for operations. Detection of military assets on the ground can be performed by applying deep learning-based object detectors on drone surveillance footage. For military forces on the ground, the challenge is to conceal their presence as much as possible from discovery from the air. A common way of hiding military assets from sight is camouflage, for example, by using camouflage nets.

Algorithmic Targeting: Today, autonomous weapon platforms use computer vision to identify, track, and shoot targets. These algorithmic targeting technologies are now so precise that the military even adds randomness to the targeting process to spread the bullets over the target; otherwise, the bullets would just fly through the holes already made.

Problem

These AI systems outperform human experts dramatically, but there are still major issues with these technologies: they are vulnerable to adversarial noise. You can imagine this (adversarial) noise as subtle pixel perturbations on your Instagram photos. Usually, these perturbations are so small that none of your human followers will see them, but when Instagram uses image classification software to check your account for illegal posts, it could happen (in the worst case) that Instagram blocks you.

Manipulating these AI systems

So what can you do when you have forgotten your camouflage net, or when you don't want a huge hole in your battleship? Well, you could try to generate this adversarial noise and stick it onto your battleship or airplane. From the point of view of mobility, it would be much easier to print a specific pattern on top of your battleship to hide from adversarial drones than to carry camouflage nets.

A plane with an adversarial noise patch camouflage that can hide it from being automatically detected from the air (Source: https://arxiv.org/pdf/2008.13671.pdf).

Summary

In this post, I wanted to describe the current weaknesses of AI systems, especially in warfare. The biggest problem of current AI technology (and not only in warfare) is that it is quite vulnerable to (adversarial) noise. This should be considered before releasing fully autonomous war machines that could cause huge damage because of perturbed pixels in their input data.

Reinforcement Learning

Our totally wasted grid-world agent calls us and asks if we can safely navigate him home. We use the WhatsApp location-sharing feature to determine the agent's location. Our goal is to message the agent a policy plan which he can use to find his way back home. Like every drunk guy, the agent moves noisily: he follows our instructions only 80% of the time. 10% of the time, he stumbles WEST instead of NORTH, and the other 10% of the time, he stumbles EAST instead of NORTH. Similar behavior occurs for the other instructions. If he stumbles against a wall, he ends up in the same grid cell again. He gets more tired with every further step he takes. If the police catch him, he has to pay a fee and is afterward much more tired. On the way back home, he could get into trouble with the “hombre malo”, and in his condition he is not prepared for such situations. He can only rest once he arrives home or meets the “hombre malo”. We want to navigate him as optimally as possible.

Wasted agent in a grid world. The agent can move NORTH, EAST, SOUTH, and WEST. The terminal states are \(s_{(1,2)}\) and \(s_{(3,3)}\). If the agent moves against the outer wall, he stays in his current state. For each action that does not lead to state \(s_{(1,2)}\), \(s_{(3,0)}\), or \(s_{(3,3)}\), he always receives a negative reward of \(-0.1\). AVAILABLE AS OPEN AI GYM.

Markov Decision Process

Because of the noisy movement, we call our problem a non-deterministic search problem. We have to do three things to guide our agent safely home: First, we need a simulation of the agent and the grid world. Then we apply a Markov Decision Process (MDP) to this simulation to create an optimal policy plan for our agent, which we will finally send to him via WhatsApp. An MDP is defined by a set of states \(s \in S\). This set \(S\) contains every possible state of the grid world. Our simulated agent can choose an action \(a\) from a set of actions \(A\), which changes the state \(s\) of the grid world. In our scenario, he can choose from the actions \( A = \{NORTH, EAST, SOUTH, WEST\}\). A transition function \(T(s,a,s')\) yields the probability that \(a\) from \(s\) leads to \(s'\). A reward function \(R(s,a,s')\) rewards every action \(a\) taken from \(s\) to \(s'\). In our case, we use a negative reward of \(-0.1\), which is also referred to as the \(\textit{living penalty}\) (every step hurts). We call the initial state of the agent \(s_{init}\) and every state that ends the simulation a terminal state. There are two terminal states in our grid world: one with a negative reward of \(-1\) at \((1,2)\) and one with a positive reward of \(1\) at \((3,3)\).

Solving Markov Decision Processes

Our goal is to guide our agent from the initial state \(s_{init}\) to the terminal state \(s_{home}\). To make sure that our agent does not arrive home too tired, we should always guide him toward the grid cell with the highest expected reward \(V^{*}(s)\). The expected reward tells us how tired the agent will be by the time he reaches a terminal state; the optimal expected reward is marked with a *. To calculate the optimal expected reward \(V_{k+1}^{*}(s)\) for every state iteratively, we use the Bellman equation:

\(V^*_{k+1}(s) = \max_{a} Q^{*}(s,a)\)

\(Q^{*}(s,a) = \sum_{s'} T(s,a,s')\,[R(s,a,s') + \gamma V^{*}_{k}(s')]\)

\(V_{k+1}^{*}(s) = \max_{a} \sum_{s'} T(s,a,s')\,[R(s,a,s') + \gamma V_{k}^{*}(s')]\)

\(Q^{*}(s, a)\) is the expected reward after taking action \(a\) in state \(s\) and thereafter acting optimally. To calculate the expected rewards \(V^{*}_{k+1}(s)\), we initialize \(V_{0}^{*}(s) = 0\) for every state \(s\) and apply the equation iteratively.

As soon as our calculations converge for every state, we can extract an optimal policy \(\pi^{*}(s)\). An optimal policy \(\pi^{*}(s)\) tells us the optimal action for a given state \(s\). To calculate the optimal policy \(\pi^{*}(s)\) for a given state \(s\), we use:

\(\pi^{*}(s) \leftarrow \arg\max_{a} \sum_{s'} T(s,a,s')\,[R(s,a,s') + \gamma V_k^{*}(s')]\)
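A compact sketch of value iteration and policy extraction on the grid world described above. The layout and rewards follow the figure caption; the discount factor \(\gamma = 0.9\) is an assumed value, since the post does not fix one:

```python
import itertools

SIZE = 4
TERMINALS = {(1, 2): -1.0, (3, 3): 1.0}    # entering these ends the episode
ACTIONS = {'NORTH': (0, 1), 'SOUTH': (0, -1), 'EAST': (1, 0), 'WEST': (-1, 0)}
PERP = {'NORTH': ('WEST', 'EAST'), 'SOUTH': ('EAST', 'WEST'),
        'EAST': ('NORTH', 'SOUTH'), 'WEST': ('SOUTH', 'NORTH')}
GAMMA, LIVING = 0.9, -0.1                  # discount (assumed), living penalty

def step(s, a):
    dx, dy = ACTIONS[a]
    s2 = (s[0] + dx, s[1] + dy)
    return s2 if 0 <= s2[0] < SIZE and 0 <= s2[1] < SIZE else s  # wall: stay

def transitions(s, a):
    """T(s, a, s'): intended move with p=0.8, perpendicular slips p=0.1 each."""
    left, right = PERP[a]
    return [(0.8, step(s, a)), (0.1, step(s, left)), (0.1, step(s, right))]

def reward(s2):
    """R(s, a, s'): terminal reward on entry, living penalty otherwise."""
    return TERMINALS.get(s2, LIVING)

def q_value(s, a, V):
    return sum(p * (reward(s2) + GAMMA * V[s2]) for p, s2 in transitions(s, a))

def value_iteration(iterations=100):
    V = {s: 0.0 for s in itertools.product(range(SIZE), repeat=2)}
    for _ in range(iterations):
        V = {s: (0.0 if s in TERMINALS
                 else max(q_value(s, a, V) for a in ACTIONS)) for s in V}
    return V

def extract_policy(V):
    return {s: max(ACTIONS, key=lambda a: q_value(s, a, V))
            for s in V if s not in TERMINALS}

V = value_iteration()
policy = extract_policy(V)
print(policy[(2, 3)])   # EAST: the cell next to home points toward (3, 3)
```

The police and “hombre malo” states from the story are left out for brevity; adding them only means extending the reward function.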

After we have calculated the optimal policy \(\pi^{*}(s)\), we draw a map of the grid world and put an arrow in every cell showing the direction of the optimal way. Afterward, we send this plan to our agent.

This whole procedure is called offline planning because we know all the details of the MDP. But what if we don't know the exact probability distribution of our agent's stumbling behavior, or how tired each step makes him? In that case, we know neither the reward function \(R(s,a,s')\) nor the transition function \(T(s,a,s')\), and we speak of online planning. Here we have to learn the missing details by making many test runs with our simulated agent before we can tell the agent an approximately optimal policy \(\pi^{*}\). This is called reinforcement learning.

Reinforcement Learning

One day later, our drunk agent calls us again, and he is actually in the same state ;) as last time. This time he is so wasted that his stumbling and tiredness no longer match our model: the reward function \(R(s,a,s')\) and the transition function \(T(s,a,s')\) are unknown. Because of the unknown probabilities and rewards, the old policy is outdated. So the only way to get an approximately optimal policy \(\pi^{*}\) is online planning, also referred to as reinforcement learning. This time, we have to run multiple episodes in our simulation to learn about the reward function \(R(s,a,s')\) and the transition function \(T(s,a,s')\). Each episode is a run from the agent's initial state to one of the terminal states. Our old approach of computing \(V^{*}\) would take too long and does not converge smoothly in the online setting. A better approach is Q-learning:

\(Q_{k+1}(s, a) \leftarrow (1 - \alpha)\,Q_{k}(s, a) + \alpha\,[R(s,a,s') + \gamma \max_{a'} Q_{k}(s', a')]\)

Q-learning is a sample-based Q-value iteration: instead of requiring the unknown transition function \(T(s, a, s')\), it averages over observed transitions \((s, a, s')\) with a learning rate \(\alpha\), which empirically approximates the transition probabilities.

This time, we choose our policy based on the Q-values of each state and update \(Q(s, a)\) on the fly. The problem with this policy is that it always exploits the best \(Q(s, a)\), which is probably not optimal. To find better policies, we introduce a randomization/exploration factor: the agent sometimes follows a random instruction from us, which leads to a better policy \(\pi^{*}\). This approach is referred to as \(\epsilon\)-greedy learning. After our \(Q(s, a)\) calculations converge, we can send the optimal policy to our agent.
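A tabular sketch of this procedure, again assuming the grid from the figure and an assumed \(\gamma = 0.9\); the learning rate, exploration rate, and episode count are illustrative choices. Note that the simulator hides \(T\) and \(R\) from the learner, which only ever sees sampled transitions:

```python
import random

random.seed(0)
SIZE = 4
TERMINALS = {(1, 2): -1.0, (3, 3): 1.0}
ACTIONS = {'NORTH': (0, 1), 'SOUTH': (0, -1), 'EAST': (1, 0), 'WEST': (-1, 0)}
PERP = {'NORTH': ('WEST', 'EAST'), 'SOUTH': ('EAST', 'WEST'),
        'EAST': ('NORTH', 'SOUTH'), 'WEST': ('SOUTH', 'NORTH')}
GAMMA, ALPHA, EPSILON, LIVING = 0.9, 0.1, 0.2, -0.1

def simulate(s, a):
    """The environment: returns one sampled (reward, next state)."""
    left, right = PERP[a]
    a = random.choices([a, left, right], weights=[0.8, 0.1, 0.1])[0]
    s2 = (s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1])
    if not (0 <= s2[0] < SIZE and 0 <= s2[1] < SIZE):
        s2 = s                                   # stumbled into a wall
    return TERMINALS.get(s2, LIVING), s2

Q = {(x, y, a): 0.0 for x in range(SIZE) for y in range(SIZE) for a in ACTIONS}

def greedy(s):
    return max(ACTIONS, key=lambda a: Q[s[0], s[1], a])

for _ in range(5000):                            # episodes
    s = (0, 0)
    while s not in TERMINALS:
        # epsilon-greedy: mostly exploit, sometimes explore
        a = random.choice(list(ACTIONS)) if random.random() < EPSILON else greedy(s)
        r, s2 = simulate(s, a)
        # sample-based update: no access to T or R needed
        target = r if s2 in TERMINALS else r + GAMMA * max(
            Q[s2[0], s2[1], a2] for a2 in ACTIONS)
        Q[s[0], s[1], a] += ALPHA * (target - Q[s[0], s[1], a])
        s = s2

print(greedy((2, 3)))    # learned greedy action in the cell next to home
```

After enough episodes, reading off the greedy action in every cell yields the policy map we send to the agent.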

Until now, our agent was lucky: he lived in a stationary grid world (all objects besides the agent are static). But as soon as the other objects start moving, the grid world becomes more complex and the number of states in the set \(S\) grows enormously. This can lead to a situation where it is no longer possible to calculate all the \(Q(s, a)\) values. In such situations, we have to approximate the \(Q(s, a)\) function. One way to approximate \(Q(s,a)\) functions is deep reinforcement learning.

References & Sources

  • Haili Song, C-C Liu, Jacques Lawarrée, and Robert W Dahlgren. Optimal electricity supply bidding by Markov decision process. IEEE Transactions on Power Systems, 15(2):618–624, 2000.
  • Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.
  • Christopher JCH Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.
  • Drunk guy image from flaticon.com
  • Hombre malo image from flaticon.com
  • Home image from flaticon.com
  • Police image from flaticon.com

Autonomous Organizations

In a world where machines increasingly replace human labor, we have to create new lifestyles. Passive income is an excellent way to earn money in such a world. It is income derived from a rental property, limited partnership, or another enterprise in which a person is not actively involved. Nowadays, such enterprises are mostly managed by public corporations. But soon, more and more autonomous organizations will arise thanks to the rapid development of artificial intelligence and blockchain technology.

Based on this development, it could become possible to build whole companies that work autonomously for people who don't want to work anymore, or can't. It could thus become a new business model for engineers to develop such companies for customers.

But how could such an organization work internally?

Current neural networks can already evaluate the aesthetics of an image, so it will not take long until neural networks can also evaluate the appeal, usefulness, and other factors of a product. A combination of

  1. Optimization Algorithms
  2. Artificial Intelligence
  3. Simulations
  4. APIs to marketplaces like Amazon

could be enough to run such organizations autonomously. Figure 1 illustrates an autonomous organization (AO) framework. The holder inputs a number of assets and configures what the autonomous organization should do. The AO calibrates itself; then it interfaces with Google Trends to analyze markets. Its analysis could find that it is currently a profitable moment to sell remote-controlled cars. It could then use social media platforms to find popular models and search other websites for toy construction plans. After that, it could access online shops to source the needed components, simulating many different combinations (optimizing via evolutionary algorithms, evaluating the product via deep learning, and finally forecasting sales) to build a competitive RC car. After constructing the product, it could be sold via the Amazon API.

Figure 1: A Framework for an autonomous organization

Besides such manufacturing organizations, it would probably be easier to implement services like e-learning schools first. For example, an online school that teaches customers how to dance salsa or bachata. From one of my previous posts, you already know how easy it is to develop a neural network that can differentiate between song genres. It would also be possible to develop a classifier that can detect whether a person is dancing salsa or not.

Especially in social environments, I really prefer to have people instead of machines in control. But what about my previous post on language table organizations? It would be a lot of work to automate every single step of such an organization, but it could still be possible.

* For now, this whole framework is just a general idea. Such organizations would probably work best in fields where the competitive environment does not change too much.

Storytelling for AI Development

Around five years ago, Amazon released its smart assistant, Amazon Alexa. She has gained a lot of popularity among voice assistants. Most users use her to play music and search the internet for information. But these are just the fun tasks; Alexa can help with many more daily tasks and make your life easier.

Amazon Alexa tries to be your assistant, but someone who merely reacts to commands doesn't show any interpersonal relationship. This type of connection is essential for an assistant's job: it builds respect and lets us see assistants as ‘individuals.’ This is one goal that AI developers want to achieve: artificial intelligence that is human-like. In this blog post, I want to discuss one way of achieving this goal.

Building an Interpersonal Relationship with Storytelling

A while ago, I read “Story” by Robert McKee. This book inspired me to think about human-like AI development a little bit differently.

Robert McKee describes in his book that a story is a metaphor for life: stories awaken feelings. When we watch good movies, we adopt the personal perspective of the movie characters, which leads to an ever-stronger bond with the characters over the course of the story.

This is exactly what we want to achieve between users and virtual assistants. The field of storytelling reminds us that there is no way to love or hate a person without having heard that person's story; that's why our virtual assistant needs a backstory. As the movie example shows, it makes no difference whether the person is real or not.

Often, a connection with movie characters arises when they make decisions in stressful situations. The real character of a person manifests itself in the choices they make under pressure: the higher the pressure, the more the decision reflects the innermost nature of the figure. Stressful situations often appear for virtual assistants, such as when users ask unanswerable questions. In this situation, ordinary virtual assistants respond with standard answers like, “I don't understand,” or “I can't help you.” In this moment, the real character of the virtual assistant is revealed. AI developers should avoid such standard answers; instead, the artificial intelligence should answer such problematic user questions with its backstory. The backstory depends on the use case but should be created by somebody familiar with storytelling and dialogue writing. For dialogue writing, I can recommend another book by Robert McKee called “Dialogue.”

According to the dramatist Jean Anouilh, fiction gives life a form. Thus, stories support the development of artificial life, making virtual assistants more than just empty shells.

German vs Dutch Carnival Songs

Soon it's time for carnival! I have noticed that a lot of German hits are played in Dutch pubs. Because of the similarity of the two languages, I am often asked whether a given song is German or Dutch. Famous Dutch and German songs are often translated into the other language. This raises an exciting question: how would a neural network distinguish these two “genres”? It cannot simply learn from the rhythm but must analyze the words inside the song, so it has to filter out the rhythm somehow.

Collect Data

In this little experiment, I wanted to use songs that are available in both languages. So my first task was to collect these songs. I ended up with the following table.

Dutch Song                | German Song
Het Vliegerlied           | Das Fliegerlied
Schatje mag ik je foto!   | Schatzi schenk mir ein Foto
Alleen maar schoenen aan* | Sie hatte nur noch Schuhe an
Drank & Drugs             | Stoff & Schnaps
Viva Hollandia            | Viva Colonia
Mama Laudaaa              | Mama Laudaaa
Ga es bier halen          | Geh mal Bier Holen
Liever Te Dik In De Kist  | Mich hat ein Engel geküsst
Layla                     | Layla

* favorite

Data Preprocessing

I split each song into 3-second chunks and converted each chunk into a spectrogram image. These images can be used to identify spoken words phonetically. Each spectrogram has two geometric dimensions: one axis represents time, the other frequency. A third dimension, the amplitude of a particular frequency at a particular time, is represented by the intensity or color of each point in the image. I ended up with 2340 training images (1170 per class) and 300 validation images (150 per class). For the preprocessing step, I used my Jupyter notebook.
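The chunking and spectrogram steps can be sketched with NumPy alone (my actual notebook works on real audio files; here a synthetic test tone stands in, and the sample rate, frame size, and hop size are illustrative choices):

```python
import numpy as np

def chunk(signal, sr, seconds=3):
    """Split a signal into fixed-length chunks; leftover samples are dropped."""
    n = sr * seconds
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: rows are frequency bins, columns are time frames,
    and each value is the amplitude -- the 'third dimension' of the image."""
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T

sr = 8000                                                  # assumed sample rate
song = np.sin(2 * np.pi * 440 * np.arange(10 * sr) / sr)   # 10 s test tone
chunks = chunk(song, sr)
print(len(chunks))           # 3 full 3-second chunks (the rest is dropped)
spec = spectrogram(chunks[0])
print(spec.shape)            # (129, 186): frequency bins x time frames
```

Each such array is then saved as an image, which becomes one training example for the classifier.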

Example Spectrogram of the German song Mama Laudaaa

Neural Network

I used transfer learning, mainly to decrease the running time of the training phase. I chose a ResNet with 18 layers. You can find my code in my GitHub repository.

Results

I achieved a training accuracy of 83% and a validation accuracy of 80% after 24 epochs. A long time ago, I developed a music genre classifier for bachata and salsa; there I reached a training accuracy of 90% and a validation accuracy of 87%, as far as I remember.