But it's important to understand it if this is becoming the basis of the whole economy and the whole civilization. If people are deciding what books to read based on a momentum within the recommendation engine that isn't going back to a virgin population, one that hasn't been manipulated, then the whole thing has spun out of control and doesn't mean anything anymore.
I want to get to an even deeper problem, which is that there's no way to tell where the border is between measurement and manipulation in these systems.
For instance, suppose the theory is that you're getting big data by observing a lot of people who make choices, and then doing correlations to make suggestions to yet more people. If the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there's not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.
In other words, the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. If you ask: is a recommendation engine like Amazon more manipulative, or more of a legitimate measurement device? At this point there's no way to know, because it's too universal.
Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don't, which is very different from a virginal and empirically careful system that's trying to tell what recommendations would have worked had it not intervened. The same thing can be said for any other big data system that recommends courses of action to people, whether it's the Google ad business, or social networks like Facebook deciding what you see, or any of the myriad dating apps.
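The feedback loop described above can be sketched as a toy simulation. Everything in it is an invented illustration, not a model of any real recommender: five items, a uniform "true" preference, and a recommender that half the time nudges users toward whichever item is currently most popular. The nudged system's top item ends up with a share far above its true 1/5 preference, and the system has no way to tell, from its own data, that it caused this.

```python
import random

random.seed(0)

N_ITEMS = 5
N_ROUNDS = 2000

def simulate(feedback: bool) -> list[int]:
    """Count choices over many rounds.

    If `feedback` is True, half the time the recommender steers the
    user to the current front-runner, so its own past recommendations
    shape the very data it is supposedly 'measuring'."""
    counts = [0] * N_ITEMS
    for _ in range(N_ROUNDS):
        if feedback and random.random() < 0.5:
            # The recommendation is accepted: pick the current leader.
            choice = counts.index(max(counts))
        else:
            # An unmanipulated ("virgin") choice: uniform true preference.
            choice = random.randrange(N_ITEMS)
        counts[choice] += 1
    return counts

baseline = simulate(feedback=False)  # roughly uniform, ~400 each
looped = simulate(feedback=True)     # one item snowballs well past 1/5

print("baseline top share:", max(baseline) / N_ROUNDS)
print("looped top share:  ", max(looped) / N_ROUNDS)
```

The point of the sketch is that the looped system's counts are internally consistent and look like a clean measurement; only by comparing against the unmanipulated baseline, which a universal recommender no longer has access to, can you see the distortion.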
Then there's the claim that these systems are an existential threat, with whatever scary language happens to be attached.
My feeling about that is that it's a non-optimal, silly way of expressing anxiety about where technology is going. For instance, we can talk about pattern classification. Can you get programs that recognize faces, that sort of thing? I was the chief scientist of the company Google bought that got them into that particular game some time ago. It's a wonderful field, and it's been wonderfully useful.

The problem I see isn't so much with the particular techniques, which I find fascinating and useful, am very positive about, and think should be explored and developed further, but with the mythology around them, which is destructive. I'm going to go through a couple of layers of how the mythology does harm. In all of these things, there's no baseline, so we don't know to what degree they're measurement versus manipulation.