Submitted by Duion
This is kind of a follow-up to my last blog post about how promoting popularity is insane. I thought more about this, and the whole problem goes much deeper, since popularity nowadays is for the most part not even decided by humans anymore, but by machines that supposedly know what humans want better than the humans themselves. The algorithms themselves may be intelligent, or let us say they do what they do; the real problem is the human error that made them and the human error that is fed into them, which the computer then turns into even more human error.
It began when I was doing research on how I could promote my game, or more generally on what decides what becomes popular on social media and what does not. I read a bunch of articles and watched several talks, and the gist of them all was that social media platforms are run by algorithms that decide almost everything. For example, a game store is run mostly by algorithms that track what sells well, and since the store wants to make the most money, it promotes what sells best. The problem with that is that you don't actually need a good game that genuinely sells well; you only need to make the algorithm think it is one. Once you have made the algorithm think your game is good and selling well, it gets promoted, and because of the promotion it really does sell well; and because it sells well, people assume it must be good, and it becomes popular. This does not work with everything (you need a minimum of potential), but I would say it works with almost anything if done "right". Alternatively, you can just become a copycat and copy what already sells well; because of the popularity the original generated, you automatically gain something from copying it.
The same seems to apply to most social media platforms that have some kind of recommendation feature or front page that promotes certain things. How often have I caught myself clicking on some clickbait just because it was recommended, and then getting angry because it was total crap and I did not really want to watch it. But because I watched it, the algorithm probably thought: "Oh, he watched it, so he liked it, let's give him more of the same stuff." So clicking on something I did not want to watch was interpreted by the algorithm as a sign that I liked it, which gets it recommended more, which increases the likelihood that I accidentally click on it again, which perpetuates the cycle of insanity. This is of course just one example, and some algorithms may even account for this, but I wanted to give an example where a supposedly intelligent algorithm returns stupid results.
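The loop described above can be sketched in a few lines. This is a hypothetical toy, not any real platform's code: the recommender counts every click as a positive signal, so accidental clicks on clickbait still push it up the recommendation list.

```python
from collections import Counter

class NaiveRecommender:
    """Toy recommender that treats every click as an implicit 'like'."""

    def __init__(self, catalog):
        self.catalog = list(catalog)
        self.clicks = Counter()            # implicit positive signals per item

    def record_click(self, item):
        # The algorithm cannot tell a regretted click from a genuine one.
        self.clicks[item] += 1

    def recommend(self, n=3):
        # Most-clicked items are shown first, which invites more clicks,
        # which ranks them higher still: the feedback loop from the text.
        return sorted(self.catalog, key=lambda i: -self.clicks[i])[:n]

rec = NaiveRecommender(["clickbait", "tutorial", "review"])
for _ in range(5):
    rec.record_click("clickbait")          # accidental clicks on the bait
rec.record_click("tutorial")               # one genuine click
print(rec.recommend())                     # the clickbait now ranks first
```

A real system would weight watch time, skips, and explicit ratings, but as long as attention is read as approval, the same loop appears.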
The most obvious way of beating the algorithm is cheating; somehow this is the least obvious explanation to normal people, since they cannot believe that their favorite companies or gurus simply cheat the system and are not that special. One of the most common ways of cheating the algorithm is faking view counts, which makes the algorithm think something is popular and recommend it more, which then brings in real viewers. This is probably also one of the clearest cases where the algorithm is stupid while humans can spot the fake quite easily; but by then it does not matter, as the damage has already been done.
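To make the view-count trick concrete, here is a minimal sketch with made-up numbers: a ranker that sorts by raw view totals sees only the sum, so padding one item with bot views is enough to outrank an honestly popular one.

```python
# Hypothetical catalog: the ranker only ever sees the visible total.
items = {
    "honest_game": {"real": 1000, "bot": 0},
    "gamed_game":  {"real": 100,  "bot": 5000},
}

def visible_views(stats):
    # Bot views are indistinguishable from real ones at this level.
    return stats["real"] + stats["bot"]

ranking = sorted(items, key=lambda name: -visible_views(items[name]))
print(ranking)   # the gamed game outranks the honest one
```

Real platforms do run fraud detection on top of this, but the point stands: whatever signal the ranker trusts becomes the thing worth faking.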
Another place where the algorithms fail is irony, which is very common in reviews on the internet. The algorithm simply takes a positive review at face value, while in most cases a human can tell quite quickly whether it is meant seriously or not. Detecting irony is probably one of the hardest things for machines to learn, and this problem will likely persist long into the future, while other issues may get fixed relatively soon.
The algorithms themselves may not be that bad, they may even be very intelligent; the problem is more that the developers probably assumed the data being fed in is legitimate. In an ideal environment without disruptive factors they might work quite well, but that is of course not the case in the real world. Some people even make a sport of finding loopholes in the system and exploiting them. And even in an ideal environment you still have problems of a philosophical nature, since it is not clear whether humans really make their decisions themselves, make them voluntarily, or can even want what they want and then do what they want.
Imagine a person with psychological issues who cannot do what he wants: the algorithm will then reinforce him in his self-deceptive behavior.
Or maybe the whole algorithm thing is just a scam, and at the top there is simply a human deciding things by hand, or it is just a random generator.
There are so many possibilities and problems. Of course I don't deny that modern recommendation algorithms can be really good and help me find what I'm searching for, or broaden my horizon by recommending more of what may interest me; but as I said, it can and often does go wrong. Often I can imagine how the algorithm works and can therefore decide whether I want to engage with it or not, but most people are not conscious enough for that and probably never will be.
In the end I probably make my decisions mostly based on the recommendations of real people whom I deem qualified, but I fear the trend is for machines to decide more and more things in our lives, since most people are not conscious enough to even realize what is going on. Maybe in the future all humans will turn into mindless robot-like creatures stuck in an infinite feedback loop, because the algorithm has no new data left to process: the only data it has is what it fed to itself, because why would you need other data if you have already determined the most popular data?
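That closed loop can be simulated in a toy way (hypothetical numbers, not a real system): if the only data the recommender gets back is engagement with its own current favorite, its popularity counts collapse onto a single item and nothing else is ever surfaced again.

```python
from collections import Counter

# Starting popularity counts for three items.
clicks = Counter({"a": 3, "b": 2, "c": 1})

for step in range(10):
    top = clicks.most_common(1)[0][0]  # recommend the current favorite...
    clicks[top] += 1                   # ...and users engage with what is shown

# After ten rounds, only "a" has gained anything: the loop feeds itself.
print(clicks.most_common())
```

A small head start is all it takes; once an item leads, the loop only ever confirms its lead.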