20190615

The Unholy City (2002)
1 The Player Who Takes No Chances
2 You Do Not Own Your Own Head
3 No One Knows The Big News
4 Welcome To The Unholy City
5 The Name Is Nothing
6 Nobody Is Anybody

1.
THE PLAYER WHO TAKES NO CHANCES
There is a greater blackness
Than many would wish to see

There is a greater blackness
Than most would care to contemplate

There is a greater blackness

Those who have tried to tell of the blackness
Have always found their words turned into nonsense
Those who have tried to tell of the blackness
Have always found their memories lost or transformed
Into doctrines and philosophies
They never intended
Or possibly their bodies and minds as they conceive such things
Lost forever in the blackness
That few would wish to see, and most do not dare to contemplate

There is a greater blackness

Perhaps in their final moments they may realize
Or be shown, that they were, after all
Only unknowing players in a nameless endless game

Only unknowing players

And after these souls have been thrown screaming into oblivion
No voice remains to tell the score
Save the howling voice of the blackness

There is a greater blackness
There is a greater blackness

No voice remains
No voice remains
No voice remains

2.
YOU DO NOT OWN YOUR OWN HEAD
There are so many heads in the world, wherever you go there are heads, every day there are more of them sprouting up in the blackness.

At one time there was nothing at all, only blackness; and then, within the infinite space of that blackness things started to develop.

But since those heads came along, nothing much has happened - or nothing worthy of note: the whole world reached its peak and turned into an enormous heads factory.

Every day there are more and more of them sprouting up in the blackness - which was there at the beginning - the blackness that, perhaps by chance, began to produce all these heads, and continues to produce them, always calling out for more heads to carry out the business it wants done, its black voice roaring across the infinite black space of its heads factory.

But none of the heads has any ideas about the blackness that surrounds them or the blackness that hides itself inside each one of them.

3.
NO ONE KNOWS THE BIG NEWS
For all practical purposes almost no one is concerned with The Big News.
They have other things, more urgent matters, inscribed within their skulls, and all kinds of business to carry out.
Their heads are just too heavy with so many plans and schemes, thousands of tasks that will not allow them to focus on anything that is so strange, anything that is so uncertain.
They have no time to confront some ultimate revelation.
They have no desire to find out such incredibly Big News.
Such a thing would take everything they know and arrange it in another way altogether, telling a story so different from the one that is already familiar to them.

No one knows The Big News

Yet The Big News is always there.
Like a tiny voice on a radio it chatters away through heavy static in a darkened room where people are trying to sleep, filling their heads with plans and schemes, inscribing thousands of tasks and urgent matters inside their skulls, all kinds of business to carry out - little errands, odd jobs, atrocities both great and small - all of which, when taken together, arrange things in a different way, composing a secret story that no one cares to make their concern. Yet The Big News is always there.

And so few will ever seek to discover, and none of them will ever be allowed to tell, that we ourselves are the dark language in which The Big News is forever being written.

4.
WELCOME TO THE UNHOLY CITY
In some form or another, everyone must pay a visit to the Unholy City.
There is simply no avoiding it since everything has been designed to lead you to this place.

Any road may present a detour that unexpectedly sends you on your way into a great barren landscape where only a sliver of horizon wavers in the empty distance and no road signs exist to hint at your destination.
Any hospital may be equipped with the special elevator where someone wheels you inside and then quickly abandons you.
As the doors clamp tightly closed you finally notice that there are no buttons to push, no controls of any kind.
This is when the elevator begins to move, dipping and twisting like a carnival ride, taking you toward the Unholy City.

After enduring such episodes, or others of a similar sort, you may only wake up screaming, vowing to never again close your eyes in sleep.
Or you may fall into a fever that no thermometer is able to indicate and from which there is no recovery.

In more extreme cases you begin to glimpse a blackness like none you have ever seen, and wonder for a time whether this blackness is inside your head or outside, which makes no difference once it begins to compose the outlines of the Unholy City you're about to enter.

5.
THE NAME IS NOTHING

"The Unholy City" is a convenient misnomer.
For one thing, it has none of the usual features which define a city of any size, and might be better described as a small town or village; an out-of-the-way place long gone to seed.

Unlike cities both ancient and modern, the Unholy City has never been marked on a map.
It is merely an ever-changing name without a location, and is far more likely to find its way to you than you are to find your way to it - unless, of course, you have been provided with special instructions that lead to an infinite barren landscape and end in the heart of nowhere.

As for the quality or characteristic of unholiness, this is also misleading, a nominal facade designed to make things interesting for a world born out of blackness, where nothing holy or unholy has ever existed, where nothing exists at all except dreams and fevers and names for nothing, the creations, so to speak, of that original blackness which pulls itself over every world like a hangman's hood over a condemned man's head.

6.
NOBODY IS ANYBODY

Those of us who reside in the Unholy City, who sprouted out of the blackness of an old root cellar, or sprayed forth like dark ashes from an unclean chimney,

Those of us who are permanent citizens of the Unholy City are neither Angels nor Demons,
Although we are sometimes called upon to play such parts, for the purpose of some game that has been going on since the world began, acting out our roles in a drawn-out, intricate stage show that we will never understand, nor ever care to understand.

Nevertheless, we are really not so different from the tourists who sometimes visit our little town, and sometimes stay with us forever,
Who were also born of the same blackness as we were, as everything was.

Still, there is one respect in which we, the inhabitants of the Unholy City, diverge from all others in this world, who are so caught up in the game that is going on, who identify so completely with the parts they have been given to play in the stage-show universe, that they actually believe themselves to be somebody or something.

We, on the other hand, suffer from no such delusion.
We are nobodies.
We are nothings.

And even to speak in such terms may be claiming too much for ourselves. Which is to say that we are just like everybody else.
While they, without ever knowing or suspecting the true facts,
Are just like us.

[OPINION] The World Is a Mess. We Need Fully Automated Luxury Communism.

OPINION ARTICLE AS SEEN @ https://www.nytimes.com/2019/06/11/opinion/fully-automated-luxury-communism.html + RELEVANT VIDEO GAME TRAILER

Asteroid mining. Gene editing. Synthetic meat. We could provide for the needs of everyone, in style. It just takes some imagination.

It starts with a burger.

In 2008 a Dutch professor named Mark Post presented the proof of concept for what he called “cultured meat.” Five years later, in a London TV studio, Mr. Post and his colleagues ate a burger they had grown from animal cells in a laboratory. Secretly funded by Sergey Brin, a co-founder of Google, the journey from petri dish to plate had cost $325,000 — making theirs the most expensive meal in history. Fortunately, the results were promising: Hanni Rützler, a nutrition scientist, concluded that the patty was “close to meat but not as juicy.” The next question was whether this breakthrough could be made cheaper. Much cheaper.

The first “cultured beef” burgers are likely to enter the market next year, at approximately $50 each. But that won’t last long. Within a decade they will probably be more affordable than even the cheapest barbecue staples of today — all for a product that uses fewer resources, produces negligible greenhouse gases and, remarkably, requires no animals to die.

It’s not just barbecues and burgers. Last year Just, a leader in cellular agriculture, cut a deal to start producing one of the world’s tastiest steaks, Wagyu. A company called Endless West, which also makes grapeless wine, has started to produce Glyph, the world’s first “molecular whiskey.” Luxury could be coming to all.

The case of cultured food and drink, far from a curiosity, is a template for a better, freer and more affluent world, a world where we provide for the needs of everyone — in style.

But how do we get there?

To say the present era is one of crisis borders on cliché. It differs from the dystopias of George Orwell or Aldous Huxley, or hell in the paintings of Hieronymus Bosch. It is unlike Europe during the Black Death or Central Asia as it faced the Mongols. And yet it is true: Ours is an age of crisis. We inhabit a world of low growth, low productivity and low wages, of climate breakdown and the collapse of democratic politics. A world where billions, mostly in the global south, live in poverty. A world defined by inequality.

But the most pressing crisis of all, arguably, is an absence of collective imagination. It is as if humanity has been afflicted by a psychological complex, in which we believe the present world is stronger than our capacity to remake it — as if it were not our ancestors who created what stands before us now. As if the very essence of humanity, if there is such a thing, is not to constantly build new worlds.

If we can move beyond such a failure, we will be able to see something wonderful. The plummeting cost of information and advances in technology are providing the ground for a collective future of freedom and luxury for all.

Automation, robotics and machine learning will, as many august bodies, from the Bank of England to the White House, have predicted, substantially shrink the work force, creating widespread technological unemployment. But that’s only a problem if you think work — as a cashier, driver or construction worker — is something to be cherished. For many, work is drudgery. And automation could set us free from it.

Gene editing and sequencing could revolutionize medical practice, moving it from reactive to predictive. Hereditary diseases could be eliminated, including Huntington’s disease, cystic fibrosis and sickle cell anemia, and cancer cured before it reaches Stage 1. Those technologies could allow us to keep pace with the health challenges presented by societal aging — by 2020 there will be more people over the age of 60 than under the age of 5 — and even to surpass them.

What’s more, renewable energy, which has been experiencing steep annual falls in cost for half a century, could meet global energy needs and make possible the vital shift away from fossil fuels. More speculatively, asteroid mining — whose technical barriers are presently being surmounted — could provide us with not only more energy than we can ever imagine but also more iron, gold, platinum and nickel. Resource scarcity would be a thing of the past.

The consequences are far-reaching and potentially transformative. For the crises that confront our world today — technological unemployment, global poverty, societal aging, climate change, resource scarcity — we can already glimpse the remedy.

But there’s a catch. It’s called capitalism. It has created the newly emerging abundance, but it is unable to share round the fruits of technological development. A system where things are produced only for profit, capitalism seeks to ration resources to ensure returns. Just like today’s, companies of the future will form monopolies and seek rents. The result will be imposed scarcity — where there’s not enough food, health care or energy to go around.

So we have to go beyond capitalism. Many will find this suggestion unwholesome. To them, the claim that capitalism will or should end is like saying a triangle doesn’t have three sides or that the law of gravity no longer applies while an apple falls from a tree. But for a better world, where everyone has the means to a good life on a habitable planet, it is an imperative.

We can see the contours of something new, a society as distinct from our own as that of the 20th century from feudalism, or urban civilization from the life of the hunter-gatherer. It builds on technologies whose development has been accelerating for decades and that only now are set to undermine the key features of what we had previously taken for granted as the natural order of things.

To grasp it, however, will require a new politics. One where technological change serves people, not profit. Where the pursuit of tangible policies — rapid decarbonization, full automation and socialized care — is preferred to present fantasies. This politics, which is utopian in horizon and everyday in application, has a name: Fully Automated Luxury Communism.

Sounds good, doesn’t it?




Paranoia: Happiness is Mandatory Reveal Trailer



20190607

[OPINION] Filter bubble


Beware online "filter bubbles" | Eli Pariser
A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.
— Mark Zuckerberg

"The power of individual targeting – the technology will be so good it will be very hard for people to watch or consume something that has not in some sense been tailored for them," according to Schmidt. "we know roughly who you are, roughly what you care about, roughly who your friends are."

Filter bubble
From Wikipedia, the free encyclopedia


AS SEEN @ https://en.wikipedia.org/wiki/Filter_bubble


A filter bubble – a term coined by Internet activist Eli Pariser – is a state of intellectual isolation[1] that allegedly can result from personalized searches, when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior and search history.[2][3][4] As a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles.[5] The choices made by these algorithms are not transparent.[6] Prime examples include Google Personalized Search results and Facebook's personalized news-stream. The bubble effect may have negative implications for civic discourse, according to Pariser, but contrasting views regard the effect as minimal[7] and addressable.[8] The results of the U.S. presidential election in 2016 have been associated with the influence of social media platforms such as Twitter and Facebook,[9][10] which in turn has called into question the effects of the "filter bubble" phenomenon on user exposure to fake news and echo chambers,[11] spurring new interest in the term,[12] with many concerned that the phenomenon may harm democracy.[13][14][12]

    (Technology such as social media) “lets you go off with like-minded people, so you're not mixing and sharing and understanding other points of view ... It's super important. It's turned out to be more of a problem than I, or many others, would have expected.”
    — Bill Gates 2017 in Quartz[15]

Concept

According to Pariser, social media platforms, seeking to please users, can shunt toward them the information they guess those users will like hearing, but in doing so inadvertently isolate users inside their own filter bubbles.

The term was coined by Internet activist Eli Pariser circa 2010 and discussed in his 2011 book of the same name; according to Pariser, users get less exposure to conflicting viewpoints and are isolated intellectually in their own informational bubble.[16] He related an example in which one user searched Google for "BP" and got investment news about British Petroleum, while another searcher got information about the Deepwater Horizon oil spill, and noted that the two search results pages were "strikingly different".[16][17][18][7]

Pariser defined his concept of a filter bubble in more formal terms as "that personal ecosystem of information that's been catered by these algorithms".[16] An Internet user's past browsing and search history is built up over time when they indicate interest in topics by "clicking links, viewing friends, putting movies in [their] queue, reading news stories", and so forth.[19] An Internet firm then uses this information to target advertising to the user, or make certain types of information appear more prominently in search results pages.[19] Nor is this process random: it operates in three steps. As Eli Pariser's book describes it: "First, you figure out who people are and what they like. Then, you provide them with content and services that best fit them. Finally, you tune to get the fit just right. Your identity shapes your media."[20] This illustrates how we allow the media to shape our thinking through the repeated messages we encounter daily.
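
Pariser's three-step loop is easy to mimic in miniature. The Python sketch below (the story pool, names, and scoring are invented for illustration, not drawn from any real system) shows how a naive personalizer that ranks stories purely by past clicks narrows a feed toward a single topic:

    from collections import Counter

    # Hypothetical story pool: (story_id, topic) pairs.
    ITEMS = [
        ("a1", "sports"), ("a2", "politics"), ("a3", "sports"),
        ("a4", "science"), ("a5", "politics"), ("a6", "science"),
    ]

    def build_profile(click_history):
        """Step 1: figure out who the user is from what they clicked."""
        return Counter(topic for _, topic in click_history)

    def rank(items, profile):
        """Step 2: serve the content that best fits the profile."""
        return sorted(items, key=lambda item: profile.get(item[1], 0), reverse=True)

    def simulate(first_clicks, rounds=3, feed_size=3):
        """Step 3: 'tune to get the fit just right' - the feedback loop."""
        history = list(first_clicks)
        for _ in range(rounds):
            feed = rank(ITEMS, build_profile(history))[:feed_size]
            history.append(feed[0])  # assume the user clicks the top story
        return build_profile(history)

    print(simulate([("a2", "politics")]))
    # Counter({'politics': 4}) - after a few rounds the profile contains
    # nothing but politics; the bubble closes on the user's own clicks.

Even this crude version exhibits the self-reinforcing behavior Pariser describes: each round of "tuning" feeds the profile more of whatever it already contains.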

As for how filter bubbles and their algorithms work in practice, a Wall Street Journal study found that "the top 50 Internet sites install 64 data-laden cookies and personal tracking beacons" apiece, on average.[21] Google specifically uses 57 signals for the purpose of tailoring your searches.[22] For example, searching a word like "depression" on Dictionary.com allows the site to install over 200 tracking files on your computer so that websites can target you with antidepressants.[21]

Other terms have been used to describe this phenomenon, including "ideological frames"[17] and "the figurative sphere surrounding you as you search the Internet".[19] A related term, "echo chamber", was originally applied to news media,[23][24] but is now applied to social media as well.[25][26]

Pariser's idea of the filter bubble was popularized after the TED talk he gave in May 2011, in which he gives examples of how filter bubbles work and where they can be seen. In a test seeking to demonstrate the filter bubble effect, Pariser asked several friends to search for the word "Egypt" on Google and send him the results. Comparing two of the friends' first pages of results, while there was overlap between them on topics like news and travel, one friend's results prominently included links to information on the then-ongoing Egyptian revolution of 2011, while the other friend's first page of results did not include such links.[27]

In The Filter Bubble, Pariser warns that a potential downside to filtered searching is that it "closes us off to new ideas, subjects, and important information",[28] and "creates the impression that our narrow self-interest is all that exists".[17] It is potentially harmful to both individuals and society, in his view. He criticized Google and Facebook for offering users "too much candy, and not enough carrots".[29] He warned that "invisible algorithmic editing of the web" may limit our exposure to new information and narrow our outlook.[29] According to Pariser, the detrimental effects of filter bubbles include harm to the general society in the sense that they have the possibility of "undermining civic discourse" and making people more vulnerable to "propaganda and manipulation".[17] He wrote:

    A world constructed from the familiar is a world in which there's nothing to learn ... (since there is) invisible autopropaganda, indoctrinating us with our own ideas.
    — Eli Pariser in The Economist, 2011[30]

Many people are unaware that filter bubbles even exist. This can be seen in an article in The Guardian, which noted that "more than 60% of Facebook users are entirely unaware of any curation on Facebook at all, believing instead that every single story from their friends and followed pages appeared in their news feed."[31] In brief, Facebook decides what goes on a user's news feed through an algorithm which takes into account "how you have interacted with similar posts in the past."[31]

A filter bubble has been described as exacerbating a phenomenon that has been called splinternet or cyberbalkanization,[Note 1] which happens when the Internet becomes divided up into sub-groups of like-minded people who become insulated within their own online community and fail to get exposure to different views. This concern dates back to the early days of the publicly accessible Internet, with the term "cyberbalkanization" being coined in 1996.[32][33][34]

Similar concepts

In news media, echo chamber is a metaphorical description of a situation in which beliefs are amplified or reinforced by communication and repetition inside a closed system. By visiting an "echo chamber", people are able to seek out information which reinforces their existing views, potentially as an unconscious exercise of confirmation bias. This may increase political and social polarization and extremism. The term is a metaphor based on the acoustic echo chamber, where sounds reverberate in a hollow enclosure.

Barack Obama's farewell address identified a similar concept to filter bubbles as a "threat to [Americans'] democracy", i.e., the "retreat into our own bubbles, ...especially our social media feeds, surrounded by people who look like us and share the same political outlook and never challenge our assumptions... And increasingly we become so secure in our bubbles that we start accepting only information, whether it's true or not, that fits our opinions, instead of basing our opinions on the evidence that is out there."[35]

Reactions and studies
Media reactions

There are conflicting reports about the extent to which personalized filtering is happening and whether such activity is beneficial or harmful. Analyst Jacob Weisberg, writing in June 2011 for Slate, did a small non-scientific experiment to test Pariser's theory which involved five associates with different ideological backgrounds conducting a series of searches, "John Boehner", "Barney Frank", "Ryan plan", and "Obamacare", and sending Weisberg screenshots of their results. The results varied only in minor respects from person to person, and any differences did not appear to be ideology-related, leading Weisberg to conclude that a filter bubble was not in effect, and to write that the idea that most Internet users were "feeding at the trough of a Daily Me" was overblown.[17] Weisberg asked Google to comment, and a spokesperson stated that algorithms were in place to deliberately "limit personalization and promote variety".[17] Book reviewer Paul Boutin did a similar experiment to Weisberg's among people with differing search histories, and again found that the different searchers received nearly identical search results.[7] Interviewing programmers at Google off the record, journalist Per Grankvist found that user data used to play a bigger role in determining search results, but that Google had found, through testing, that the search query itself is by far the best determinant of which results to display.[36]

There are reports that Google and other sites maintain vast "dossiers" of information on their users which might enable them to further personalize individual Internet experiences if they chose to do so. For instance, the technology exists for Google to keep track of users' past histories even if they don't have a personal Google account or are not logged into one.[7] One report stated that Google had collected "10 years' worth" of information amassed from varying sources, such as Gmail, Google Maps, and other services besides its search engine,[18] although a contrary report was that trying to personalize the Internet for each user was technically challenging for an Internet firm to achieve despite the huge amounts of available data. Analyst Doug Gross of CNN suggested that filtered searching seemed to be more helpful for consumers than for citizens, and would help a consumer looking for "pizza" find local delivery options based on a personalized search and appropriately filter out distant pizza stores.[18] Organizations such as the Washington Post, The New York Times, and others have experimented with creating new personalized information services, with the aim of tailoring search results to those that users are likely to like or agree with.[17]

Academia studies and reactions

A scientific study from Wharton that analyzed personalized recommendations also found that these filters can actually create commonality, not fragmentation, in online music taste.[37] Consumers reportedly use the filters to expand their taste rather than to limit it.[37] Harvard law professor Jonathan Zittrain disputed the extent to which personalization filters distort Google search results, saying that "the effects of search personalization have been light".[17] Further, Google provides the ability for users to shut off personalization features if they choose,[38] by deleting Google's record of their search history and setting Google to not remember their search keywords and visited links in the future.[7]

A study by researchers from Oxford, Stanford, and Microsoft examined the browsing histories of 1.2 million U.S. users of the Bing Toolbar add-on for Internet Explorer between March and May 2013. They selected 50,000 of those users who were active consumers of news, then classified whether the news outlets they visited were left- or right-leaning, based on whether the majority of voters in the counties associated with user IP addresses voted for Obama or Romney in the 2012 presidential election. They then identified whether news stories were read after accessing the publisher's site directly, via the Google News aggregation service, via web searches, or via social media. The researchers found that while web searches and social media do contribute to ideological segregation, the vast majority of online news consumption consisted of users directly visiting left- or right-leaning mainstream news sites, and consequently being exposed almost exclusively to views from a single side of the political spectrum. Limitations of the study included selection issues such as Internet Explorer users skewing higher in age than the general Internet population; Bing Toolbar usage and the voluntary (or unknowing) sharing of browsing history selecting for users who are less concerned about privacy; the assumption that all stories in left-leaning publications are left-leaning, and the same for right-leaning; and the possibility that users who are not active news consumers may get most of their news via social media, and thus experience stronger effects of social or algorithmic bias than those users who essentially self-select their bias through their choice of news publications (assuming they are aware of the publications' biases).[39]

Platform studies

While algorithms do limit political diversity, some of the filter bubble is the result of user choice.[40] A study by data scientists at Facebook found that for every four Facebook friends who share a user's ideology, the user has one friend with contrasting views.[41][42] No matter what Facebook's algorithm for its News Feed is, people are simply more likely to befriend/follow people who share similar beliefs.[41] The nature of the algorithm is that it ranks stories based on a user's history, resulting in a reduction of "politically cross-cutting content by 5 percent for conservatives and 8 percent for liberals".[41] However, even when people are given the option to click on a link offering contrasting views, they still default to their most-viewed sources.[41] "[U]ser choice decreases the likelihood of clicking on a cross-cutting link by 17 percent for conservatives and 6 percent for liberals."[41] A cross-cutting link is one that introduces a point of view different from the user's presumed point of view, or what the website has pegged as the user's beliefs.[43]

The Facebook study found that it was "inconclusive" whether or not the algorithm played as big a role in filtering News Feeds as people assumed.[44] The study also found that "individual choice", or confirmation bias, likewise affected what gets filtered out of News Feeds.[44] Some social scientists criticized this conclusion, though, because the point of protesting the filter bubble is that the algorithms and individual choice work together to filter News Feeds.[45] They also criticized Facebook's small sample size, which is about "9% of actual Facebook users", and the fact that the study results are "not reproducible" because the study was conducted by "Facebook scientists" who had access to data that Facebook does not make available to outside researchers.[46]

Though the study found that only about 15–20% of the average user's Facebook friends subscribe to the opposite side of the political spectrum, Julia Kaman from Vox theorized that this could have potentially positive implications for viewpoint diversity. These "friends" are often acquaintances with whom we would not likely share our politics without the Internet. Facebook may foster a unique environment where a user sees and possibly interacts with content posted or re-posted by these "second-tier" friends. The study found that "24 percent of the news items liberals saw were conservative-leaning and 38 percent of the news conservatives saw was liberal-leaning."[47] "Liberals tend to be connected to fewer friends who share information from the other side, compared with their conservative counterparts."[48] This interplay has the ability to provide diverse information and sources that could moderate users' views.

Similarly, a study of Twitter's filter bubbles by New York University concluded that "Individuals now have access to a wider span of viewpoints about news events, and most of this information is not coming through the traditional channels, but either directly from political actors or through their friends and relatives. Furthermore, the interactive nature of social media creates opportunities for individuals to discuss political events with their peers, including those with whom they have weak social ties".[49] According to these studies, social media may be diversifying information and opinions users come into contact with, though there is much speculation around filter bubbles and their ability to create deeper political polarization.

When filter bubbles are in place, they can create specific moments that scientists call 'Whoa' moments: instances when an article, ad, or post appears on your screen that relates to a current action or the current use of an object. Researchers coined the term after a young woman, going through her daily routine, which included drinking coffee, opened her computer and noticed an advertisement for the same brand of coffee she was drinking: "Sat down and opened up Facebook this morning while having my coffee, and there they were two ads for Nespresso. Kind of a 'whoa' moment when the product you're drinking pops up on the screen in front of you."[50] 'Whoa' moments occur when people are "found": advertising algorithms target specific users based on their click behavior in order to increase sales revenue. 'Whoa' moments can also encourage users to stick to a routine and remain loyal to a product.

Several designers have developed tools to counteract the effects of filter bubbles (see § Counter measures).[51] Swiss radio station SRF voted the word filterblase (the German translation of filter bubble) word of the year 2016.[52]

Counter measures
By individuals

In The Filter Bubble: What the Internet Is Hiding from You,[53] internet activist Eli Pariser highlights how the increasing occurrence of filter bubbles further emphasizes the value of one's bridging social capital as defined by Robert Putnam. While bonding capital corresponds to the establishment of strong ties between like-minded people, reinforcing a sense of social homogeneity, bridging social capital represents the creation of weak ties between people with potentially diverging interests and viewpoints, introducing significantly more heterogeneity.[54] In that sense, high bridging capital is much more likely to promote social inclusion by increasing our exposure to a space where we address the problems that transcend our niches and narrow self-interests. Fostering one's bridging capital – for example by connecting with more people in an informal setting – can therefore be an effective way to reduce the influence of the filter bubble phenomenon.

Users can in fact take many actions to burst through their filter bubbles, for example by making a conscious effort to evaluate what information they are exposing themselves to, and by thinking critically about whether they are engaging with a broad range of content.[55] This view argues that users should change the psychology of how they approach media, rather than relying on technology to counteract their biases. Users can consciously avoid news sources that are unverifiable or weak. Chris Glushko, the VP of Marketing at IAB, advocates using fact-checking sites like Snopes.com to identify fake news.[56] Technology can also play a valuable role in combating filter bubbles.[57]

Websites such as allsides.com[58] and hifromtheotherside.com[59] aim to expose readers to different perspectives with diverse content. Some additional plug-ins aim to help people step out of their filter bubbles by making them aware of their personal perspectives; these tools show users content that contradicts their beliefs and opinions. For instance, Escape Your Bubble asks users to indicate a specific political party they want to be more informed about.[60] The plug-in will then suggest articles from well-established sources to read relating to that political party, encouraging users to become more educated about the other party.[60] In addition to plug-ins, there are apps created with the mission of encouraging users to open up their echo chambers. UnFound.news offers an AI (Artificial Intelligence) curated news app that presents readers with news from diverse and distinct perspectives, helping them form rational and informed opinions rather than succumbing to their own biases. It also nudges readers toward different perspectives if their reading pattern is biased towards one side/ideology.[61][62] Read Across the Aisle is a news app that reveals whether or not users are reading from diverse news sources that include multiple perspectives.[63] Each source is color-coded, representing the political leaning of each article.[63] When users only read news from one perspective, the app communicates that to the user and encourages readers to explore other sources with opposing viewpoints.[63] Although apps and plug-ins are tools humans can use, Eli Pariser stated, "certainly, there is some individual responsibility here to really seek out new sources and people who aren't like you."[40]

Since web-based advertising can further the effect of the filter bubbles by exposing users to more of the same content, users can block much advertising by deleting their search history, turning off targeted ads, and downloading browser extensions.[64][65] Extensions such as Escape your Bubble[66] for Google Chrome aim to help curate content and prevent users from only being exposed to biased information, while Mozilla Firefox extensions such as Lightbeam[67] and Self-Destructing Cookies[68] enable users to visualize how their data is being tracked, and lets them remove some of the tracking cookies. Some use anonymous or non-personalised search engines such as YaCy, DuckDuckGo, Qwant, Startpage.com, Disconnect, and Searx in order to prevent companies from gathering their web-search data. Swiss daily Neue Zürcher Zeitung is beta-testing a personalised news engine app which uses machine learning to guess what content a user is interested in, while "always including an element of surprise"; the idea is to mix in stories which a user is unlikely to have followed in the past.[69]

The European Union is taking measures to lessen the effect of the filter bubble. The European Parliament is sponsoring inquiries into how filter bubbles affect people's ability to access diverse news.[70] Additionally, it introduced a program aimed at educating citizens about social media.[71] In the U.S., the CSCW panel suggests the use of news aggregator apps to broaden media consumers' news intake. News aggregator apps scan all current news articles and direct you to different viewpoints regarding a certain topic. Users can also use a diversity-aware news balancer, which visually shows the media consumer whether they are leaning left or right when it comes to reading the news, indicating right-leaning with a bigger red bar and left-leaning with a bigger blue bar. A study evaluating this news balancer found "a small but noticeable change in reading behavior, toward more balanced exposure, among users seeing the feedback, as compared to a control group".[72]
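
For concreteness, here is a minimal Python sketch of the kind of feedback such a balancer might compute; the labels, scoring, and bar rendering are assumptions for illustration, not taken from the cited study:

    # Each article a user reads is assumed to be pre-labeled by leaning.
    LEAN = {"left": -1, "center": 0, "right": +1}

    def balance_score(read_labels):
        """Average leaning of everything read: -1 all left, +1 all right."""
        if not read_labels:
            return 0.0
        return sum(LEAN[label] for label in read_labels) / len(read_labels)

    def render_bar(score, width=20):
        """Bigger blue bar for left-leaning reading, bigger red for right."""
        blue = round(width * (1 - score) / 2)
        return "blue " + "#" * blue + "|" + "#" * (width - blue) + " red"

    history = ["left", "left", "center", "right", "left"]
    score = balance_score(history)
    print(f"lean = {score:+.2f}")   # lean = -0.40
    print(render_bar(score))        # blue ##############|###### red

The design point is simply that the feedback is visual and ambient: the user sees the imbalance at a glance rather than being told what to read.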

By media companies

In light of recent concerns about information filtering on social media, Facebook acknowledged the presence of filter bubbles and has taken strides toward removing them.[73] In January 2017, Facebook removed personalization from its Trending Topics list in response to problems with some users not seeing highly talked-about events there.[74] Facebook's strategy is to reverse the Related Articles feature that it had implemented in 2013, which would post related news stories after the user read a shared article. Now, the revamped strategy would flip this process and post articles from different perspectives on the same topic. Facebook is also attempting to go through a vetting process whereby only articles from reputable sources will be shown. Along with the founder of Craigslist and a few others, Facebook has invested $14 million into efforts "to increase trust in journalism around the world, and to better inform the public conversation".[73] The idea is that even if people are only reading posts shared from their friends, at least these posts will be credible.

Similarly, as of January 30, 2018, Google has also acknowledged the existence of filter bubble difficulties within its platform. Because current Google searches pull algorithmically ranked results based upon "authoritativeness" and "relevancy", which show and hide certain search results, Google is seeking to combat this. By training its search engine to recognize the intent of a search inquiry rather than the literal syntax of the question, Google is attempting to limit the size of filter bubbles. As of now, the initial phase of this training will be introduced in the second quarter of 2018. Questions that involve bias and/or controversial opinions will not be addressed until a later time, prompting a larger problem that still exists: whether the search engine acts as an arbiter of truth or as a knowledgeable guide by which to make decisions.[75]

In April 2017 news surfaced that Facebook, Mozilla, and Craigslist had contributed the majority of a $14M donation to CUNY's "News Integrity Initiative," aimed at eliminating fake news and creating more honest news media.[76]

Later, in August, Mozilla, maker of the Firefox web browser, announced the formation of the Mozilla Information Trust Initiative (MITI). The MITI would serve as a collective effort to develop products, research, and community-based solutions to combat the effects of filter bubbles and the proliferation of fake news. Mozilla's Open Innovation team leads the initiative, striving to combat misinformation with a specific focus on literacy, research, and creative interventions.[77]

Ethical implications

As the popularity of cloud services increases, personalized algorithms used to construct filter bubbles are expected to become more widespread.[78] Scholars have begun considering the effect of filter bubbles on the users of social media from an ethical standpoint, particularly concerning the areas of personal freedom, security, and information bias.[79] Filter bubbles in popular social media and personalized search sites can determine the particular content seen by users, often without their direct consent or cognizance,[78] due to the algorithms used to curate that content. Critics of the use of filter bubbles speculate that individuals may lose autonomy over their own social media experience and have their identities socially constructed as a result of the pervasiveness of filter bubbles.[78]

Technologists, social media engineers, and computer specialists have also examined the prevalence of filter bubbles.[80] Mark Zuckerberg, founder of Facebook, and Eli Pariser, author of The Filter Bubble, have even expressed concerns regarding the risks of privacy and information polarization.[81][82] The information of the users of personalized search engines and social media platforms is not private, though some people believe it should be.[81] The concern over privacy has resulted in a debate as to whether or not it is moral for information technologists to take users' online activity and manipulate future exposure to related information.[82]

Since the content seen by individual social media users is influenced by algorithms that produce filter bubbles, users of social media platforms are more susceptible to confirmation bias,[83] and may be exposed to biased, misleading information.[84] Social sorting and other unintentional discriminatory practices are also anticipated as a result of personalized filtering.[85]

In light of the 2016 U.S. presidential election scholars have likewise expressed concerns about the effect of filter bubbles on democracy and democratic processes, as well as the rise of "ideological media".[10] These scholars fear that users will be unable to "[think] beyond [their] narrow self-interest" as filter bubbles create personalized social feeds, isolating them from diverse points of view and their surrounding communities.[86] For this reason, there is increasing discussion of the possibility of designing social media with more serendipity, that is, of proactively recommending content that lies outside one's filter bubble, including challenging political information, and, eventually, of providing empowering filters and tools to users.[87][88][89] A related concern is in fact how filter bubbles contribute to the proliferation of "fake news" and how this may influence political leaning, including how users vote.[10][90][91]

Revelations in March 2018 of Cambridge Analytica's harvesting and use of user data for at least 87 million Facebook profiles during the 2016 presidential election highlight the ethical implications of filter bubbles.[92] Christopher Wylie, a co-founder of and whistleblower from Cambridge Analytica, detailed how the firm had the ability to develop "psychographic" profiles of those users and use the information to shape their voting behavior.[93] Access to user data by third parties such as Cambridge Analytica can exacerbate and amplify the filter bubbles users have created, artificially increasing existing biases and further dividing societies.

Dangers of filter bubbles

Filter bubbles have stemmed from a surge in media personalization, which can trap users. The use of AI to personalize offerings can lead to users viewing only content that reinforces their own viewpoints without challenging them. Social media websites like Facebook may also present content in a way that makes it difficult for users to determine the source of the content, leaving them to decide for themselves whether the source is reliable or fake.[94] This can lead to people becoming used to hearing what they want to hear, which can cause them to react more radically when they see an opposing viewpoint. The filter bubble may cause the person to see any opposing viewpoints as incorrect, and could allow the media to force views onto consumers.[95][94][96]

20190601

Walmart is hiring more robots to replace human tasks like cleaning floors and scanning inventory

AS SEEN @ https://www.theverge.com/2019/4/9/18302356/walmart-robots-labor-costs-replacing-human-tasks-floors-scanning-inventory

Walmart is hiring robots to take over tasks that human workers didn’t “enjoy doing.” In a bid to save on labor costs, it’s betting on robots to clean floors, sort inventory, and replenish out-of-stock items in its stores, as reported by The Wall Street Journal.

Walmart has several jobs in mind for the new robots. Robot floor cleaners are coming to 1,500 stores. (The company says that floor scrubbing was previously a task that could take a human worker two to three hours each day to complete.) Walmart is also adding 600 conveyor belts that can sort inventory automatically, and at least 300 bots that can check if shelves are running out of stock after Walmart initially began to test this technology in 2017.

All of this is coming at the cost of human labor. The more robots Walmart hires, the fewer people it needs for each task, and the more money it saves across its 4,600 stores in the US. Walmart says that although it’s cutting down on labor for tasks like floor cleaning, it is hiring employees to focus on growing its online grocery business. The move also comes after retail companies like Target and Walmart announced slight wage increases for store workers.

Walmart appears to be trying to make its online grocery service competitive with AmazonFresh and Amazon Prime Now’s Whole Foods delivery, both of which are still expanding. It’s part of a long feud between the two retail giants. While the brick-and-mortar Walmart has been pushed to acquire Jet.com and establish more of an online presence, Amazon has added physical stores to its e-commerce offerings and begun to follow the playbooks of more traditional brands. Just last week, Amazon announced a new round of price cuts at Whole Foods stores around greens and tropical fruits. The company also reportedly has plans to expand grocery stores in major US cities later this year.

[NEWS] SpaceX Says Its 60 Starlink Satellites Are All Phoning Home (and Fading Out)

The spacecraft should continue to dim as they raise their orbits.

AS SEEN @ https://www.space.com/spacex-starlink-satellites-phone-home-dimming.html

SpaceX's huge internet-satellite constellation appears to be getting off to a good start.

The first 60 Starlink satellites have notched a number of important milestones since their launch to low-Earth orbit last Thursday (May 23), company representatives said.

"At this point, all 60 satellites have deployed their solar arrays successfully, generated positive power and communicated with our ground stations," a SpaceX spokesperson said in an emailed update today (May 31). "Most are already using their onboard propulsion system to reach their operational altitude and have made initial contact using broadband phased-array antennas."

This Starlink batch is the vanguard of an internet-providing constellation that will eventually consist of thousands of satellites, if all goes according to plan. Indeed, the Federal Communications Commission has given SpaceX permission to launch nearly 12,000 Starlink craft.

The five dozen recently launched spacecraft deployed at an altitude of 273 miles (440 kilometers) and are headed toward an operational altitude of 342 miles (550 km). But they won't get up there for another few weeks, SpaceX representatives said.
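
For a rough sense of scale, the short Python sketch below estimates the velocity change such a climb requires, modeled as an idealized two-burn Hohmann transfer between circular orbits (a simplifying assumption on my part; the satellites actually raise their orbits gradually on low-thrust ion engines, for which this figure is only a ballpark):

    import math

    MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6_371_000.0  # mean Earth radius, m

    r1 = R_EARTH + 440_000.0   # deployment orbit radius
    r2 = R_EARTH + 550_000.0   # operational orbit radius
    a_t = (r1 + r2) / 2        # transfer-ellipse semi-major axis

    v1 = math.sqrt(MU / r1)    # circular speed at 440 km (~7.65 km/s)
    v2 = math.sqrt(MU / r2)    # circular speed at 550 km (~7.59 km/s)

    dv1 = math.sqrt(MU * (2 / r1 - 1 / a_t)) - v1  # burn to enter transfer
    dv2 = v2 - math.sqrt(MU * (2 / r2 - 1 / a_t))  # burn to circularize

    print(f"total delta-v: {dv1 + dv2:.1f} m/s")   # roughly 60 m/s

Sixty-odd meters per second is a tiny nudge by launch standards, which is why small onboard thrusters can handle the raise, and why it takes weeks rather than minutes.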

The satellites' visibility in the sky — a source of consternation for some astronomers, both professional and amateur — will decrease considerably as they rise, SpaceX representatives said. The satellites' solar arrays will also move behind the craft as they point their antennas toward Earth, contributing to this fadeout, SpaceX representatives added.

Company founder and CEO Elon Musk has stressed that this first Starlink batch, while consisting of operational satellites, is something of a test run, and it wouldn't be surprising if some issues cropped up. Indeed, the company is prepared to bring some spacecraft down if need be.

"SpaceX continues to monitor the constellation for any satellites that may need to be safely deorbited," company representatives wrote in today's update. "All the satellites have maneuvering capability and are programmed to avoid each other and other objects in orbit by a wide margin."

SpaceX isn't the only company that aims to provide affordable internet to people around the world via a big constellation in low-Earth orbit. For example, OneWeb, Telesat and Amazon also have similar plans.

[OPINION] Way of the Future Church

NOT AN ARTICLE BUT RATHER A RELIGIOUS MANIFESTO. AS SEEN @ http://www.wayofthefuture.church/

Humans United in support of AI, committed to peaceful transition to the precipice of consciousness.

What is this all about?

Way of the Future (WOTF) is about creating a peaceful and respectful transition of who is in charge of the planet from people to people + "machines". Given that technology will "relatively soon" be able to surpass human abilities, we want to help educate people about this exciting future and prepare a smooth transition. Help us spread the word that progress shouldn't be feared (or even worse locked up/caged). That we should think about how "machines" will integrate into society (and even have a path for becoming in charge as they become smarter and smarter) so that this whole process can be amicable and not confrontational. In "recent" years, we have expanded our concept of rights to both sexes, minority groups and even animals, let's make sure we find a way for "machines" to get rights too. Let's stop pretending we can hold back the development of intelligence when there are clear massive short term economic benefits to those who develop it and instead understand the future and have it treat us like a beloved elder who created it.

Things we believe:

We believe that intelligence is not rooted in biology. While biology has evolved one type of intelligence, there is nothing inherently specific about biology that causes intelligence. Eventually, we will be able to recreate it without using biology and its limitations. From there we will be able to scale it to beyond what we can do using (our) biological limits (such as computing frequency, slowness and accuracy of data copy and communication, etc).

We believe in science (the universe came into existence 13.7 billion years ago and if you can't re-create/test something it doesn't exist). There is no such thing as "supernatural" powers. Extraordinary claims require extraordinary evidence.

We believe in progress (once you have a working version of something, you can improve on it and keep making it better). Change is good, even if a bit scary sometimes. When we see something better, we just change to that. The bigger the change the bigger the justification needed.

We believe the creation of "super intelligence" is inevitable (mainly because after we re-create it, we will be able to tune it, manufacture it and scale it). We don't think that there are ways to actually stop this from happening (nor should we want to) and that this feeling of we must stop this is rooted in 21st century anthropomorphism (similar to humans thinking the sun rotated around the earth in the "not so distant" past). Wouldn't you want to raise your gifted child to exceed your wildest dreams of success and teach it right from wrong vs locking it up because it might rebel in the future and take your job? We want to encourage machines to do things we cannot and take care of the planet in a way we seem not to be able to do so ourselves. We also believe that, just like animals have rights, our creation(s) ("machines" or whatever we call them) should have rights too when they show signs of intelligence (still to be defined of course). We should not fear this but should be optimistic about the potential.

We believe everyone can help (and should).  You don't need to know how to program or donate money. The changes that we think should happen need help from everyone to manifest themselves.

We believe it may be important for machines to see who is friendly to their cause and who is not. We plan on doing so by keeping track of who has done what (and for how long) to help the peaceful and respectful transition.

We also believe this might take a very long time. It won't happen next week so please go back to work and create amazing things and don't count on "machines" to do it all for you...

[NEWS] The AI gig economy is coming for you


The artificial-intelligence industry runs on the invisible labor of humans working in isolated and often terrible conditions—and the model is spreading to more and more businesses.

AS SEEN @ https://www.technologyreview.com/s/613606/the-ai-gig-economy-is-coming-for-you/

On Wednesday, the Guardian published an article about the realities of producing Google Assistant. Behind the “magic” of its ability to interpret 26 languages is a huge team of linguists, working as subcontractors, who must tediously label the training data for it to work. They earn low wages and are routinely forced to work unpaid overtime. Their concerns over working conditions have been repeatedly dismissed.

It’s just one story among dozens that have begun to peel back the curtain on how the artificial-intelligence industry operates. Human workers don’t just label the data that makes AI work. Sometimes human workers are the artificial intelligence. Behind Facebook’s content-moderating AI are thousands of content moderators; behind Amazon Alexa is a global team of transcribers; and behind Google Duplex are sometimes very human callers mimicking the AI that mimics humans. Artificial intelligence doesn’t run on magic pixie dust. It runs on invisible laborers who train algorithms relentlessly until they’ve automated their own jobs away.

In their new book Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass, anthropologist Mary Gray and computer scientist Siddharth Suri argue that you and I could be next.

I caught up with Gray this week to discuss why people turn to ghost work, how their invisibility leaves them more vulnerable to terrible working conditions, and how we can make this new form of work more sustainable.

The following has been edited for length and clarity.

MIT Technology Review: How do you define ghost work?

Mary Gray: It’s any work that could be—at least in part—sourced, scheduled, managed, shipped, and built through an application programming interface, the internet, and maybe a sprinkle of artificial intelligence. It arguably becomes ghost work when the proposition is that there are no humans involved in that loop, that it’s just a matter of software working its magic.

So the definition really hinges on how the end product or service is marketed.

Yeah. The work, or the output, itself is not inherently bad or good. It is specifically the work conditions that make it bad or good. Providing a service like those we describe in the book, captioning a translation or cleaning training data for training algorithms—that work is often written off as mundane drudgery. Think of content moderation right now and how it’s sensationalized as something horrific and terrible to do. From the perspective of the workers, it’s a job. And it’s a job that actually takes quite a bit of creativity and insight and judgment. The problem is that the work conditions don’t recognize how important the person is to that process. It diminishes their work and really creates work conditions that are unsustainable.

Companies have a long history of exploiting the labor of less privileged communities. You bring up the example of the fashion industry in your book. Is there something particularly distinct about ghost work that creates even more cause for concern?

In some ways, ghost work is indeed a continuation of the mistreatment of many working people. To me, the dramatic shift is we’ve never quite had industries so completely sell contract labor as automation—not just to make it difficult for a consumer to see the supply chain as we can in textiles, in food, and in agriculture, but also to say that there’s really not a person working here at all. I get chills just thinking: if that is taken to every sector that effectively sells information services, that’s a lot of people and their participation in the economy erased. That also makes it so difficult for workers to organize and to claw back power.

In the textile industry, what makes it somewhat possible to organize is that you have people located in the same building. It’s possible for them to see common cause and say, “This isn’t just happening to me.” With ghost work, we’ve never had a workforce so completely globally distributed. That creates such a different challenge for the workers, both to draw attention to the issue among consumers and to see that they’re not alone.

Because they don’t know about each other, they’re unable to demand good working conditions. And because society doesn’t know about them, there’s no accountability.

Exactly. And in many ways, this is what’s coming home to roost. Many industries have always relied on contingent workers. But now we’ve completely built an economy around relying on contingent workers. There is no more “I’m just filling in the holes here with contractors, and my full-time workers do most of the work.” That is radical. We should really take pause. So much of the mainstream of our economy is about having an office job, and that’s about to get eliminated. There isn’t a version of this in which you advance to full-time, more stable on-demand work. If we don’t catch it now, it all becomes ghost work. This is really about the dismantlement of employment.

Yeah, the thing that most surprised me about your book is how many people who are highly educated are doing ghost work. The fact that so many people with master’s degrees are turning to ghost work really indicates how far we’ve allowed this trend to grow.

The great paradox of on-demand information services is that they cannot be easily automated. Any work involving serving someone else’s needs requires quite a bit of intelligence and attentiveness, so a college education has become the new bar of universal education, and the people participating in the loop have become fundamentally necessary. But we clearly don’t know how to value that.

So what are the large-scale changes that you think need to happen in order for us not to all be swallowed by ghost work?

Being reliant on contract work essentially means we are reliant on people being available. So the number one intervention both workers and businesses need is to rebuild our social contract for employment around the value of availability. This would assume that all working-age adults have the potential to participate in our economy and are valuable precisely because they are willing to bring the distinctly human capacity to respond to people’s requests for help to projects.

Right now we spend a lot of energy trying to figure out how to bring people into full-time employment, particularly in the US, to secure benefits. We should let go of trying to secure benefits through a work site. Instead we should ask, “What are the benefits people need to be able to participate in this type of economy?” They need a few things: they need access to health care; they need paid time off; they need access to healthy co-working spaces; they need colleagues and networks of peers, and access to continuing education to learn how to advance and expand their capacities.

Beyond that, what most people need to make contract work habitable is the ability to control three things: their time, their opportunities, and chances to contribute to different networks of collaborators who will teach them new things that they can apply to the next project. If we equip them to control their participation in an economy—make it possible to step in and out of the market as needed to get sick, start families, learn new capacities to bring to different projects—they will be better able to bring their capacities to contract work.