
YouTube’s recommendations pushed election denial content to election deniers


YouTube’s recommendation algorithm pushed more videos about election fraud to people who were already skeptical about the 2020 election’s legitimacy, according to a new study. Relatively few of the recommended videos were about election fraud overall, but the most skeptical YouTube users saw three times as many of them as the least skeptical users.

“The more susceptible you are to these types of narratives about the election…the more you would be recommended content about that narrative,” says study author James Bisbee, who’s now a political scientist at Vanderbilt University.

In the wake of his 2020 election loss, former President Donald Trump has promoted the false claim that the election was stolen, calling for a repeat election as recently as this week. While claims of voter fraud have been broadly debunked, promoting the debunked claims continues to be a lucrative tactic for conservative media figures, whether in podcasts, films or online videos.

Bisbee and his research team were studying how often harmful content in general was recommended to users and happened to be running a study during that window. “We were overlapping with the US presidential election and then the subsequent spread of misinformation about the outcome,” he says. So they took advantage of the timing to specifically look at the way the algorithm recommended content around election fraud.

The research team surveyed over 300 people with questions about the 2020 election — asking them how concerned they were about fraudulent ballots, for example, and interference by foreign governments. People were surveyed between October 29th and December 8th, and people surveyed after election day were also asked if the outcome of the election was legitimate. The research team also tracked participants’ experiences on YouTube. Each person was assigned a video to start on, and then they were given a path to follow through the site — for instance, clicking on the second recommended video each time.

The team went through all the videos shown to participants and identified the ones that were about election fraud. They also classified the stance those videos took on election fraud — if they were neutral about claims of election fraud or if they endorsed election misinformation. The top videos associated with promoting claims around election fraud were videos of press briefings from the White House channel and videos from NewsNow, a Fox News affiliate.

The analysis found that people who were the most skeptical of the election had an average of eight more recommended videos about election fraud than the people who were least skeptical. Skeptics saw an average of 12 videos, and non-skeptics saw an average of four. The types of videos were different, as well — the videos seen by skeptics were more likely to endorse election fraud claims.

The people who participated in the study were more liberal, more well-educated, and more likely to identify as a Democrat than the United States population overall. So their media diet and digital information environment might already skew more to the left — which could mean the number of election fraud videos shown to the skeptics in this group is lower than it might have been for skeptics in a more conservative group, Bisbee says.

But the number of fraud-related videos in the study was low, overall: people saw around 400 videos total, so even 12 videos was a small percentage of their overall YouTube diet. People weren’t inundated with the misinformation, Bisbee says. And the number of videos about election fraud on YouTube dropped off even more in early December after the platform announced it would remove videos claiming that there was voter fraud in the 2020 election.

YouTube has instituted a number of features to fight misinformation, both moderating against videos that violate its rules and promoting authoritative sources on the homepage. In particular, YouTube spokesperson Elena Hernandez reiterated in an email to The Verge that platform policy doesn’t allow videos that falsely claim there was fraud in the 2020 election. However, YouTube has more permissive policies around misinformation than other platforms, according to a report on misinformation and the 2020 election, and took longer to implement policies around misinformation.

Broadly, YouTube disputed the idea that its algorithm was systematically promoting misinformation. “While we welcome more research, this report doesn’t accurately represent how our systems work,” Hernandez said in a statement. “We’ve found that the most viewed and recommended videos and channels related to elections are from authoritative sources, like news channels.”

Crucially, Bisbee sees YouTube’s algorithm as neither good nor bad, but as a system that recommends content to the people most likely to respond to it. “If I’m a country music fan, and I want to find new country music, an algorithm that suggests content to me that it thinks I’ll be interested in is a good thing,” he says. But when the content is extremist misinformation instead of country music, the same system can create obvious problems.

In the email to The Verge, Hernandez pointed to other research that found YouTube does not steer people toward extremist content — like a study from 2020 that concluded recommendations don’t drive engagement with far-right content. But the findings from the new study do contradict some earlier findings, Bisbee says, particularly the consensus among researchers that people self-select into misinformation bubbles rather than being driven there by algorithms.

In particular, Bisbee’s team did see a small but significant push from the algorithm toward misinformation for the people who might be most inclined to believe that misinformation. It might be a nudge specific to information on election fraud, although the study can’t say if the same is true for other types of misinformation. It means, though, that there’s still more to learn about the role algorithms play.


Google Pixel 7 and 7 Pro are getting a built-in VPN at no extra cost

(Image: Google Pixel 7 Pro hands-on. Credit: Future / Lance Ulanoff)

Users of the Google Pixel 7 and 7 Pro will be able to secure their data without paying for a separate Android VPN, after Google said it would include its Google One VPN service at no extra cost.

The move will make the Pixel 7 and 7 Pro the first smartphones to include a free VPN connection. 

The offer is restricted to just some countries, though – and what’s more, some data won’t be secured inside the VPN tunnel.  

“Peace of mind when you connect online ✨ Later this year, #Pixel7 and 7 Pro will be the only phones with a VPN by Google One—at no extra cost.¹ #MadeByGoogle ¹See image for more info” (pic.twitter.com/P7lzyoMdek, October 6, 2022)


Google Pixel 7 VPN

Despite the aforementioned limits, Google says the VPN software won’t associate users’ app and browsing data with their accounts.

Google One VPN typically costs around $10 per month as part of the Google One Premium plan, which also includes 2TB of cloud storage.

The decision is the latest move to strengthen Google’s mobile data security. Not long ago, the company made Google One VPN available on iOS devices as well, and it also introduced an always-on VPN option on its latest smartphones.

Google promises that its secure VPN software will shield your phone against hackers on unsecured networks, like public Wi-Fi. It will also hide your IP address so that third parties won’t be able to track your location.

Short for virtual private network, a VPN is exactly the tool you want to shield your sensitive data, as it masks your real location and encrypts all your data in transit. Besides privacy, it can also help you bypass geo-restrictions and other online blocks.

Chiara is a multimedia journalist with a special eye for the latest trends and issues in cybersecurity. She is a Staff Writer at Future with a focus on VPNs. She mainly writes news and features about data privacy, online censorship and digital rights for TechRadar, Tom’s Guide and T3. With a passion for digital storytelling in all its forms, she also loves photography, video making and podcasting. Originally from Milan, Italy, she has been based in Bristol, UK, since 2018.


The Steam Deck dock is finally here and will ship faster than you think

(Image: a Steam Deck placed in the official docking station. Credit: Valve)

After months of waiting and delays, Valve has finally announced that the Steam Deck dock is available for purchase on its official site.

Not only that: according to Valve, the dock will ship out in an impressively fast one to two weeks, which pairs nicely with the fact that the Steam Deck itself is now shipping with no wait time (not to mention that it’s incredibly easy to set up). The port selection is pretty solid as well, with three USB-A 3.1 Gen 1 ports, an Ethernet port, a DisplayPort 1.4, and an HDMI 2.0 port. Power is supplied via USB-C passthrough delivery.

The Steam Deck dock will run you $90 (around £81 / AU$140), which is a bit steeper than most third-party options on the market right now. But for those who have been holding out for an official product, the price most likely won’t be an issue.

Is it worth buying? 

Considering that even Steam Decks themselves are shipping without a queue and that the dock has such a quick turnaround to delivery, it seems that the supply chain issues that had been gripping Valve are loosening considerably.

However, the dock itself is far from perfect. Because the Steam Deck uses USB-C for its display output, a third-party USB-C dock with its own power delivery and video out can drive a display just like the official dock does.

And as mentioned before, the price of the official Steam Deck dock is steeper than many third-party options on the market, meaning that those who are on a budget might pass this product up in favor of a lower-priced one.

There are also some bugs that Valve is working on fixing at this time, including one involving compatibility with LG displays. According to the FAQ, if the “Docking Station is connected via HDMI, sleep/wake can result in visual noise.”

It might be worth waiting for Valve to work out the kinks of its dock before investing in one. And while you’re waiting, research other options that might better suit your needs.

Allisa freelanced for TechRadar for nine months before joining as a Computing Staff Writer. She mainly covers breaking news and rumors in the computing industry, and writes reviews and features for the site. In her spare time you can find her chatting it up on her two podcasts, Megaten Marathon and Combo Chain, as well as playing any JRPGs she can get her hands on.



Why doesn’t set -e (or set -o errexit, or trap ERR) do what I expected?

set -e was an attempt to add “automatic error detection” to the shell. Its goal was to cause the shell to abort any time an error occurred, so you don’t have to put || exit 1 after each important command. This does not work well in practice.
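
For instance, the manual style that set -e was meant to replace looks something like this (a minimal sketch; the directory and commands are placeholders, not from the original text):

    # Manual error detection: abort explicitly whenever a critical command fails.
    cd /important/dir || exit 1
    cp data.txt backup/ || exit 1

    # What set -e promises instead: abort automatically on the first failure.
    set -e
    cd /important/dir
    cp data.txt backup/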

The goal of automatic error detection is a noble one, but it requires the ability to tell when an error actually occurred. In modern high-level languages, most tasks are performed by using the language’s builtin commands or features. The language knows whether (for example) you tried to divide by zero, or open a file that you can’t open, and so on. It can take action based on this knowledge.

But in the shell, most of the tasks you actually care about are done by external programs. The shell can’t tell whether an external program encountered something that it considers an error — and even if it could, it wouldn’t know whether the error is an important one, worthy of aborting the entire program, or whether it should carry on.

The only information conveyed to the shell by the external program is an exit status — by convention, 0 for success, and non-zero for “some kind of error”. The developers of the original Bourne shell decided that they would create a feature that would allow the shell to check the exit status of every command that it runs, and abort if one of them returns non-zero. Thus, set -e was born.
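
A quick illustration of that convention (the commands here are ordinary examples added for clarity, not part of the original page):

    true;  echo $?                   # 0: success
    false; echo $?                   # 1: "some kind of error"
    mkdir /no/such/path; echo $?     # non-zero: mkdir could not do its job

Under set -e, any of those non-zero statuses would abort a script.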

But many commands return non-zero even when there wasn’t an error. For example,

if [ -d /foo ]; then ...; else ...; fi

If the directory doesn’t exist, the [ command returns non-zero. Clearly we don’t want to abort when that happens — our script wants to handle that in the else part. So the shell implementors made a bunch of special rules, like “commands that are part of an if test are immune”, and “commands in a pipeline, other than the last one, are immune”.
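
A short sketch of those exemptions (added for illustration, not from the original page):

    set -e
    if [ -d /nosuchdir ]; then    # [ returns non-zero, but it is part of an if test,
        echo "found it"           # so the script is not aborted
    fi
    false | true                  # false fails, but only the last command in a pipeline counts
    echo "still running"          # both lines above are "immune", so we get here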

These rules are extremely convoluted, and they still fail to catch even some remarkably simple cases. Even worse, the rules change from one Bash version to another, as Bash attempts to track the extremely slippery POSIX definition of this “feature”. When a SubShell is involved, it gets worse still — the behavior changes depending on whether Bash is invoked in POSIX mode. Another wiki has a page that covers this in more detail. Be sure to check the caveats.

A reference comparing behavior across various historical shells also exists.

Story time

Consider this allegory, originally posted to bug-bash:

Once upon a time, a man with a dirty lab coat and long, uncombed hair
showed up at the town police station, demanding to see the chief of
police.  "I've done it!" he exclaimed.  "I've built the perfect
criminal-catching robot!"

The police chief was skeptical, but decided that it might be worth
the time to see what the man had invented.  Also, he secretly thought,
it might be a somewhat unwise move to completely alienate the mad
scientist and his army of hunter robots.

So, the man explained to the police chief how his invention could tell
the difference between a criminal and law-abiding citizen using a
series of heuristics.  "It's especially good at spotting recently
escaped prisoners!" he said.  "Guaranteed non-lethal restraints!"

Frowning and increasingly skeptical, the police chief nevertheless
allowed the man to demonstrate one robot for a week.  They decided that
the robot should patrol around the jail.  Sure enough, there was a
jailbreak a few days later, and an inmate digging up through the
ground outside of the prison facility was grabbed by the robot and
carried back inside the prison.

The surprised police chief allowed the robot to patrol a wider area.
The next day, the chief received an angry call from the zookeeper.
It seems the robot had cut through the bars of one of the animal cages,
grabbed the animal, and delivered it to the prison.

The chief confronted the robot's inventor, who asked what animal it
was.  "A zebra," replied the police chief.  The man slapped his head and
exclaimed, "Curses!  It was fooled by the black and white stripes!
I shall have to recalibrate!"  And so the man set about rewriting the
robot's code.  Black and white stripes would indicate an escaped
inmate UNLESS the inmate had more than two legs.  Then it should be
left alone.

The robot was redeployed with the updated code, and seemed to be
operating well enough for a few days.  Then on Saturday, a mob of
children in soccer clothing, followed by their parents, descended
on the police station.  After the chaos subsided, the chief was told
that the robot had absconded with the referee right in the middle of
a soccer game.

Scowling, the chief reported this to the scientist, who performed a
second calibration.  Black and white stripes would indicate an escaped
inmate UNLESS the inmate had more than two legs OR had a whistle on
a necklace.

Despite the second calibration, the police chief declared that the robot
would no longer be allowed to operate in his town.  However, the news
of the robot had spread, and requests from many larger cities were
pouring in.  The inventor made dozens more robots, and shipped them off
to eager police stations around the nation.  Every time a robot grabbed
something that wasn't an escaped inmate, the scientist was consulted,
and the robot was recalibrated.

Unfortunately, the inventor was just one man, and he didn't have the
time or the resources to recalibrate EVERY robot whenever one of them
went awry.  The robot in Shangri-La was recalibrated not to grab a
grave-digger working on a cold winter night while wearing a ski mask,
and the robot in Xanadu was recalibrated not to capture a black and
white television set that showed a movie about a prison break, and so
on.  But the robot in Xanadu would still grab grave-diggers with ski
masks (which it turns out was not common due to Xanadu's warmer climate),
and the robot in Shangri-La was still a menace to old televisions (of
which there were very few, the people of Shangri-La being on the average
more wealthy than those of Xanadu).

So, after a few years, there were different revisions of the
criminal-catching robot in most of the major cities.  In some places,
a clever criminal could avoid capture by wearing a whistle on a string
around the neck.  In others, one would be well-advised not to wear orange
clothing in certain rural areas, no matter how close to the Harvest
Festival it was, unless one also wore the traditional black triangular
eye-paint of the Pumpkin King.

Many people thought, "This is lunacy!"  But others thought the robots
did more good than harm, all things considered, and so in some places
the robots are used, while in other places they are shunned.

The end.

Exercises

Or, “so you think set -e is OK, huh?”

Exercise 1: why doesn’t this example print anything?

    set -e
    i=0
    let i++
    echo "i is $i"

Exercise 2: why does this one sometimes appear to work? In which versions of bash does it work, and in which versions does it fail?

    set -e
    i=0
    ((i++))
    echo "i is $i"

Exercise 3: why aren’t these two scripts identical?

    set -e
    test -d nosuchdir && echo no dir
    echo survived

    set -e
    f() { test -d nosuchdir && echo no dir; }
    f
    echo survived

Exercise 4: why aren’t these two scripts identical?

    set -e
    f() { test -d nosuchdir && echo no dir; }
    f
    echo survived

    set -e
    f() { if test -d nosuchdir; then echo no dir; fi; }
    f
    echo survived

Exercise 5: under what conditions will this fail?

    set -e
    read -r foo < configfile

(Answers)

But wait, there’s more!

Even if you use expr(1) (which we do not recommend — use arithmetic expressions instead), you still run into the same problem:

    set -e
    foo=$(expr 1 - 1)

    echo survived

However, subshells created by command substitution unset set -e (unless inherit_errexit is enabled, which is available since Bash 4.4):

    set -e
    foo=$(expr 1 - 1; true)

    echo survived
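
In Bash 4.4 and newer, enabling inherit_errexit makes command substitution subshells keep the errexit setting, so the example above behaves differently (a sketch of that behavior; recall that expr 1 - 1 exits non-zero because its result is 0):

    set -e
    shopt -s inherit_errexit    # Bash 4.4+
    foo=$(expr 1 - 1; true)     # the subshell now aborts at expr, so the assignment returns non-zero
    echo survived               # never reached: set -e aborts the main script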

Note that set -e is not unset for commands that are run asynchronously, for example with process substitution:

    set -e
    mapfile foo < <(true; echo foo)
    echo ${foo[-1]}
    mapfile foo < <(false; echo foo)
    echo ${foo[-1]}

Another pitfall associated with set -e occurs when you use commands that look like assignments but aren’t, such as export, declare, typeset or local.

    set -e
    f() { local var=$(somecommand that fails); }
    f

    g() { local var; var=$(somecommand that fails); }
    g

In function f, the exit status of somecommand is discarded. It won’t trigger the set -e because the exit status of local masks it (the assignment to the variable succeeds, so local returns status 0). In function g, the set -e is triggered because it uses a real assignment which returns the exit status of somecommand.

A particularly dangerous pitfall with set -e is combining functions with conditionals. The following snippets will not behave the same way:

    set -e
    f() { false; echo "This won't run, right?"; }
    f
    echo survived

    set -e
    f() { false; echo "This won't run, right?"; }
    if f; then
        echo survived
    fi

As soon as a function is used as a conditional (in a list or with a conditional test or loop) set -e stops being applied within the function. This may not only cause code to unexpectedly start executing in the function but also change its return status!
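
To see the return status change, compare how f reports its result when it is called from a conditional (a sketch added for illustration):

    set -e
    f() { false; echo "This won't run, right?"; }

    if f; then
        echo "f reported success"    # this branch runs: the echo inside f executed and f returned 0
    else
        echo "f reported failure"    # the branch the author probably expected
    fi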

With process substitution, the exit code is also discarded, since it is not visible from the main script:

    set -e
    cat <(somecommand that fails)
    echo survived

Using a pipe makes no difference, as only the rightmost process is considered:

    set -e
    somecommand that fails | cat -
    echo survived

set -o pipefail is a workaround: with it, a pipeline returns the exit status of the rightmost command that failed (or zero if every command succeeded):

    set -e -o pipefail
    failcmd1 | failcmd2 | cat -

    echo survived

though with pipefail in effect, code like this will sometimes cause an error, depending on whether the output of somecmd exceeds the size of the pipe buffer or not:

    set -e -o pipefail
    somecmd | head -n1

    echo survived
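
One way to make that failure reproducible is a writer that always outruns the pipe buffer; here yes is killed by SIGPIPE as soon as head exits (a sketch; the exact status can vary with how the writer handles the signal):

    set -e -o pipefail
    yes | head -n1    # head exits after one line; yes dies of SIGPIPE (status 141)
    echo survived     # not reached: pipefail reports 141 and set -e aborts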

So-called strict mode

In the mid 2010s, some people decided that the combination of set -e, set -u and set -o pipefail should be used by default in all new shell scripts. They call this unofficial bash strict mode, and they claim that it “makes many classes of subtle bugs impossible” and that if you follow this policy, you will “spend much less time debugging, and also avoid having unexpected complications in production”.

As we’ve already seen in the exercises above, these claims are dubious at best. The behavior of set -e is quite unpredictable. If you choose to use it, you will have to be hyper-aware of all the false positives that can cause it to trigger, and work around them by “marking” every line that’s allowed to fail with something like ||true.
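
Marking a line that is allowed to fail typically looks like this (grep and the variable names are just illustrative placeholders):

    set -e
    grep -q "$pattern" "$logfile" || true    # "no match" (status 1) must not kill the script
    echo "still running"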

Conclusions

GreyCat‘s personal recommendation is simple: don’t use set -e. Add your own error checking instead.

rking’s personal recommendation is to go ahead and use set -e, but beware of possible gotchas. It has useful semantics, so to exclude it from the toolbox is to give in to FUD.

geirha’s personal recommendation is to handle errors properly and not rely on the unreliable set -e.
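
Explicit error handling, as GreyCat and geirha suggest, might look something like this sketch (the commands and paths are placeholders):

    # Decide, per command, what a failure means and how to report it.
    if ! cd /path/to/build; then
        echo "cannot enter build directory" >&2
        exit 1
    fi

    if ! make; then
        echo "build failed" >&2
        exit 1
    fi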
