Good discussion about "Disinformation vs. Misinformation" at Berkeley

Leifer

Senior Member.
I could not stop watching this, though it is 1.5 hours long. (Sept/2021)


Source: https://www.youtube.com/watch?v=0dsr-VSp65U&t=6s


The panelists will be: Geeta Anand, dean of the School of Journalism; Erwin Chemerinsky, dean of Berkeley Law; Hany Farid, associate dean and head of the School of Information; Susan D. Hyde, chair of the Department of Political Science; john powell, director of the Othering & Belonging Institute; and moderator Henry Brady, former dean of the Goldman School of Public Policy.

 
I have my favorites.... but all of these speakers seem to have well-informed views.
This group is speaking about "disinformation vs. misinformation" and the difference between the two in relation to social media and the truth.
 
Intro 0:00 to 02:40. First the moderator, then the speakers in this order
(intros about 2 mins.):
Hany Farid (associate dean and head of the School of Information) @ 04:35 to 07:48
john powell (director of the Othering & Belonging Institute) @ 07:45 to 12:54
Geeta Anand (dean of the School of Journalism) @ 13:00 to 17:53
Erwin Chemerinsky (dean of Berkeley Law) @ 17:53 to 22:52
Susan D. Hyde (chair of the Department of Political Science) @ 22:53 to 27:36

....the rest is rebuttals, but equally interesting.
 
(I thought the @ symbol would link to the time following it ? how do I turn those times into links ?)
 
(I thought the @ symbol would link to the time following it ? how do I turn those times into links ?)
Go to that time in the video, right-click on the video, and choose "Copy video URL at current time". Then use the hyperlink tool, or just paste the link next to your times.

Hany Farid (associate dean and head of the School of Information) @ 04:35 to 07:48
Source: https://youtu.be/0dsr-VSp65U?t=278


or
Hany Farid (associate dean and head of the School of Information) @ 04:35 to 07:48
 
- Hi, I'm Henry Brady, former Dean of the Goldman School of Public Policy, and a political scientist who studies American institutions, especially such things as trust in American institutions. I'll be moderating today's session.

We have an all-star cast here today, and I'll introduce them in a moment. First, let me set the stage.

We live in an era where a majority of Republicans believe that Donald Trump won the presidential election, whereas Democrats believe overwhelmingly that Biden won.
Where a substantial fraction of people believe that COVID is fake or that the vaccines for COVID have not been thoroughly tested and that they have bad side effects.
Where watchers of Fox News believe that Christians in America face more discrimination than black Americans and other people of color.
These beliefs exist against the background of partisan polarization between the two political parties and lack of trust for major American institutions. Republicans trust the police, the military and religion, whereas Democrats trust education, science and the press. Partisan polarization and disinformation, the decline of journalism, especially local journalism, and the rise of the internet with its ability to spread rumors and lies as truths seem to be at the root of these problems.

What can we do about them?

We're gonna spend some time first asking, what's the problem? And then trying to see if we can come up with some solutions.

The panel is a distinguished one.

Geeta Anand is a Pulitzer Prize-winning journalist and author, and Dean of the Graduate School of Journalism.

Erwin Chemerinsky is Dean of Berkeley Law, and one of the nation's leading authorities on the First Amendment and the Constitution.

Hany Farid, Associate Dean and Head of the School of Information, is an expert on digital forensics, deepfakes, cybersecurity, and human perception.

Susan Hyde is Chair of the Department of Political Science, Co-Director of the Institute of International Studies and a scholar who studies democratic backsliding, countries that are becoming more authoritarian by the day.

And John Powell is Director of the Othering & Belonging Institute and an expert in civil rights, civil liberties, structural racism, and democracy.

I'm gonna moderate the panel, as I said, let's get going.

So what are the sources and nature of the problem?

Let me start with Hany Farid, who knows a lot about the internet.

What is disinformation? What has changed socially and technologically to ignite the current storm of disinformation? What are the dangers from social media, especially?

- Thank you, Henry. And good to be here with such an amazing group of my colleagues here from the Berkeley Campus.

Let's start with some definitions. Let's start by distinguishing between disinformation and misinformation, which are often used interchangeably.

Disinformation is the intentional spreading of lies and conspiracies.
Think, for example, state sponsored actors trying to sow civil unrest or interfere with an election, think partisan hacks and trolls on Twitter and Facebook.

Misinformation, on the other hand, is the unintentional spreading of lies.
Think your quirky Uncle Frank's Facebook posts about how Bill Gates is using COVID to implement a mandatory vaccine program with tracking microchips. (By the way, a pretty bizarre claim that some 28% of Americans believe.)

So disinformation, of course, is not new, and we should acknowledge that. For as long as there's been information, there's been disinformation. However, I don't think it will surprise you to learn that in the digital age, and particularly in the age of social media, the nature and threat of disinformation is quite distinct.

So first, we've democratized access to publishing. Many great things have come from that, but now anybody with nothing more than a handheld device can instantaneously reach millions of people around the world.

Second, the gatekeepers of social media are not traditional publishers, and so posts that drive engagement are favored over just about everything else, with little consideration for journalistic standards or harm. Now, here it's important to understand that critical to social media's success is driving engagement and time spent on the platform, and in turn ad revenue. This is accomplished, not by chance, but by algorithmically determining what shows up on your social media feed. These algorithms aren't optimized for an informed citizenship, civility, or truth. Instead, repeated studies from outside the social media companies and inside the social media companies have shown that social media's algorithms favor outrage, anger, lies, and conspiracies, because that drives engagement.
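Farid's point, that feeds are ranked by predicted engagement rather than truth, can be illustrated with a toy sketch. The field names, scoring formula, and numbers here are invented for illustration; this is not any platform's actual algorithm:

```python
# Toy illustration of engagement-optimized ranking (invented numbers,
# not any platform's real algorithm). The objective rewards predicted
# clicks and shares only; nothing in it rewards accuracy or civility.

def predicted_engagement(post):
    """Hypothetical score: outrage-bait gets a multiplier because it
    reliably drives clicks; truthfulness never enters the formula."""
    score = post["base_interest"]
    if post["outrage"]:
        score *= 2.0
    return score

def rank_feed(posts):
    # Highest predicted engagement first: the only criterion.
    return sorted(posts, key=predicted_engagement, reverse=True)

posts = [
    {"id": "measured-report", "base_interest": 1.0, "outrage": False},
    {"id": "outrage-bait", "base_interest": 0.8, "outrage": True},
]

# The outrage post outranks the sober one despite lower baseline interest.
print([p["id"] for p in rank_feed(posts)])
```

The point of the sketch is that the ordering is entirely a product of the objective: add a truthfulness or civility term to the score and the ranking changes, which is the lever Farid argues the platforms decline to pull.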

And it's this algorithmic amplification that is the most significant difference today in the disinformation landscape.

So let me just say a few more things, Henry, 'cause you asked a series of questions, and I want to try to hit each of them.

An additional threat to this algorithmic amplification or manipulation is the risk of filter bubbles.

As you said at the very beginning, Henry, we seem to have two alternate realities, because we are all consuming content inside an echo chamber and a filter bubble driven by social media.

And so although disinformation is not new, what we are seeing is a scale of belief in even the most bizarre conspiracies that is unprecedented in history.

So here's another example: the far-reaching, far-right QAnon conspiracy claims, among many things, that a cabal of Satan-worshiping, cannibalistic pedophiles and child sex traffickers plotted against Donald Trump during his term as president. It's pretty outrageous, even by the standards of American conspiracies. However, a recent poll finds that 37% of Americans are unsure whether this conspiracy is true or false, and a full 17% believe it to be true.

In addition, we're seeing widespread vaccine hesitancy promoted all over social media with huge, huge implications for our public health. We're seeing, as you said at the beginning, widespread U.S. election lies with huge implications for our democracy. And we're seeing widespread climate change disinformation with huge implications for our entire planet.

So this disinformation is leading, and I don't think this is hyperbolic, to existential threats to our society and democracy. And I don't know how we have a stable society and a democracy if we can't agree on basic facts, because everybody is being manipulated by attention-grabbing, dopamine-fueled algorithms that promote the dregs of the internet, creating these bizarre, fact-free alternate realities.

I'd very much like to believe in Brandeis' concept that the best remedy for these falsehoods is more truth, not silence. But this only works in a fair marketplace of ideas, where ideas compete on their merits, and social media doesn't come even close to being a fair marketplace of ideas; it is manipulating users in order to maximize profits.

And that, Henry, is the big difference today from 20 years ago: how we are being actively manipulated in terms of the information we are being presented.

- Thank you.

So, John Powell, we've just heard the technological reasons why things have changed, outlined really adroitly. What about human beings and our psyches, and maybe especially Americans? How much of this is based upon our tendencies towards tribalism and othering?

What can we do to minimize that and to limit the degree to which those kinds of factors affect the way people process information? Is that part of the problem?

- Thank you, Henry.

First of all, I'll say, delighted to be here with such distinguished guests and I look forward to hearing and learning from all of you.

The problem... we have a better sense of the problems than we do of solutions. The problems are multifaceted.

I'd suggest that the internet, social media, has complicated the problem by far. I'm reading Martha Nussbaum's book now on religion and fear, and Aristotle was talking about this problem 2,000 years ago, and how it could be hijacked. And so part of it does mesh with human nature and society.

Tribalism is interesting. I'm part of More in Common, and I looked at some of their materials in preparation for today's talk. I'm not in favor of the term tribalism, and I'll tell you why.

First of all, think about U.S. history and our relationship with tribes here. In a sense, you could say that by many accounts, the tribes were much more welcoming to the Europeans than the Europeans were to the tribes. But even more pointedly, tribalism as we understand it evolutionarily: tribes were small. They ranged anywhere from 50 to about 150 people. They were people you had contact with every day, people that you knew. So within that you had all kinds of what we would call biases. These were the people you trusted. But tribes couldn't be a thousand people; tribes couldn't be a million people.

And so what we're seeing, I think "tribes" is actually a misnomer. What allows people who don't know each other, who will never see each other, to actually feel like they're part of the same group and hostile to another group, whether it's blacks or Jews or Muslims?

So I think tribalism, like I said, is a misnomer, but I do think changing demographics actually plays a big part.

As discussed, there's polarization and identity along ideological lines, but there's also polarization along social lines, among people. And there's a very strong correlation between anxiety about changing demographics and polarization.

It doesn't have to happen, I think; it sort of seeds it, creates the environment, and then people use it. The elites use it to actually constitute or exaggerate the fear and the threat.

And one thing that's very important to point out is that the other is not natural; the other is socially constructed. The meaning and content of the other is socially constructed. And it's not saying we're all the same, but the meaning, especially saying that someone's not fully human, that they are a threat, that they are like an animal, that they're smelly. There are certain words that show up over and over and over again, whether you're talking about, again, blacks or Jews or immigrants.

And so dominant-group leaders, if you will, oftentimes use that to create a sense of us and them. So this is calculated. I suggest this is not misinformation; this is disinformation.

The tools available are more profound than they were 20 years ago, but there are also the changing demographics.

Let me end by just saying this: think about the reporting of the census data. I was very unhappy with the reporting. The reporting, from my perspective, was laced with fear. It may have been implicit, but it was almost like saying: white people, be afraid, you're about to lose; the minorities, black people, Latinos, they're coming, and you're gonna lose. There were just scores of stories about this white anxiety.
And it didn't paint a picture of how we might be a society without a racial majority, living in harmony and peace and coming together. It said nothing about the explosive expansion of mixed American families.
Now, one of the fastest-growing groups is mixed-race, mixed-ethnicity families. That's potentially a positive, but it was simply absent from the story.

- Thanks, john.

So that's the human side. And then there's journalism. Historically, the way we've learned about others is through journalism.

Geeta Anand, have things changed for journalism? Is the decline of journalism part of the problem, or did journalism never have a chance with respect to the internet?

And also, are there other historical periods that looked like the one we're in now, and is there a hope that we can get out of the mess we're in?
 
Disinformation is the intentional spreading of lies and conspiracies.
Think, for example, state sponsored actors trying to sow civil unrest or interfere with an election, think partisan hacks and trolls on Twitter and Facebook.

Misinformation, on the other hand, is the unintentional spreading of lies.
Think your quirky Uncle Frank's Facebook posts about how Bill Gates is using COVID to implement a mandatory vaccine program with tracking microchips. (By the way, a pretty bizarre claim that some 28% of Americans believe.)
 
So, disinformation is nearly always the birthplace of misinformation.
(??)
But I could see where a "generally suspicious person" who distrusts anything that has "normal rules"....could and might invent something like "disinformation" as an unproven truth. Then those that agree with it repeat it as misinformation.
 
no. only according to Farid. (which should give you a hint about the quality of college panel discussions in modern times.)
according to Leifer! Farid never says that!

You have the transcript, could you show us some quotes and explain how they indicate poor quality of the discussion? (assuming "the quality" in your post means poor quality)
 
So, disinformation is nearly always the birthplace of misinformation.
(??)
But I could see where a "generally suspicious person" who distrusts anything that has "normal rules"....could and might invent something like "disinformation" as an unproven truth. Then those that agree with it repeat it as misinformation.
Not necessarily, no. Misinformation could just as easily spawn from a misunderstanding or a lack of understanding. Misinformation solely means projecting false information WITHOUT the intent of it being false; disinformation is projecting false information WITH the intent of it being false.

Also, to note on the discussion of quality: none of those names are prominent in the field in any form, nor are they even involved in the field, whether from an implementation, counter, or even academic standpoint. They're speaking from their external opinion on an adjacent concept with impact on their respective focus areas. Take that as you will. Farid DOES do stuff regarding deepfakes and similar activities, but that in itself would be more akin to forgery than MDM (mis-, dis-, and mal-information).
 
red and red bold added as highlight by me.

Let me start with Hany Farid, who knows a lot about the internet.

What is disinformation? What has changed socially and technologically to ignite the current storm of disinformation? What are the dangers from social media, especially?

- Thank you, Henry. And good to be here with such an amazing group of my colleagues here from the Berkeley Campus.

Let's start with some definitions. Let's start by distinguishing between disinformation and misinformation, which are often used interchangeably.

Disinformation is the intentional spreading of lies and conspiracies.
Think, for example, state sponsored actors trying to sow civil unrest or interfere with an election, think partisan hacks and trolls on Twitter and Facebook.

Misinformation, on the other hand, is the unintentional spreading of lies.
Think your quirky Uncle Frank's Facebook posts about how Bill Gates is using COVID to implement a mandatory vaccine program with tracking microchips. (By the way, a pretty bizarre claim that some 28% of Americans believe.)
 
red and red bold added as highlight by me.
yep, thank you, that confirms what I wrote: Farid never says that '"lies" means disinformation'.
Both disinformation and misinformation are the spreading of lies, according to that quote.

Leifer wrote, "disinformation is nearly always the birthplace of misinformation", and Farid simply doesn't say it. And that means your dig at the quality of that panel discussion is misdirected:
no. only according to Farid. (which should give you a hint about the quality of college panel discussions in modern times.)
because it is not "according to Farid".

I really don't understand why you are dragging this out, it looks super obvious to me (which should give you a hint about the quality of forum discussions in modern times).
 
According to the provided transcript, Hany Farid defines “Disinformation” as “the intentional spreading of lies and conspiracies.”

He defines “Misinformation” as “the unintentional spreading of lies.”

According to Farid, both types of false information (misinformation + disinformation) are derived from lies (intentionally false statements). He provides no category of false information that isn’t derived from lies. This is misleading. False information is commonly the result of misunderstanding and/or simple error/inaccuracy, and the standard usage of the term “misinformation” (defined simply as false or inaccurate information) captures this reality. Farid’s oddly narrow definition does not.

Farid leaves no room for false information to originate outside of intent, which is precisely where I think most false information does originate in the form of unintended error/inaccuracy, misinterpretation, carelessness, etc. I guess Farid never played the game of telephone as a kid.

On a side note, one of the less considered reasons everybody hates each other right now is that everybody’s determined to attribute intent even when none may be present. I think Farid’s premise in this regard is problematic.
 
...so, spreading lies blindly is still disinformation. (??) .............maybe so.
Without earnest fact-checking, this is so.
Unintentional lies (and the copying of such lies) is a lazy and subjective process...... and should hold little value.
 
Yeah, using a paraphrase or misquote, or whatever you want
to call it, to--so ironically--try to dump on
"...the quality of college panel discussions in modern times"
looks pretty damned lazy/sloppy...we should expect better on this site...
 
Yeah, using a paraphrase or misquote, or whatever you want
to call it, to--so ironically--try to dump on
"...the quality of college panel discussions in modern times"
actually i used his specific quote. i even highlighted it in red.

so are you now spreading disinformation (intentional lies) or misinformation (unintentional lies)?
 
Hany Farid:
Disinformation is the intentional spreading of lies and conspiracies.
"lies" means disinformation.
Farid never says that '"lies" means disinformation'.
...i used his specific quote.
No, you evidently decided that he meant B when he actually said A.
And then you kept insisting that you were quoting him saying B.

(never mind using your error to wrongly decry
"...the quality of college panel discussions in modern times.")
 
No, you evidently decided that he meant B when he actually said A.
And then you kept insisting that you were quoting him saying B.
Mendel already tried that tactic. It didn't work for him, why do you think it would work for you? If you want to go through life thinking that misinformation means "spreading lies", knock yourself out. I already said:
you must be understanding the word "lies" differently than we are.
which means: we'll have to agree to disagree.


You should have just said "Farid probably didn't know the moderator was going to convolute the topic he was asked to talk about in the opening speech, and he [Farid] just misspoke a bit. He was likely just trying to differentiate between the guy who made up the lie (disinformation) and the people who mistakenly spread that specific disinformation lie (misinformation). I don't think he was talking about misinformation in general. It's easy to accidentally leave out qualifiers in verbal communication."




(never mind using your error to wrongly decry
"...the quality of college panel discussions in modern times.")
like your error of accusing ME of :
looks pretty damned lazy/sloppy...we should expect better on this site...

have you SEEN the quality of the OP allowed on this site? since when are we allowed to post an hour and a half video under General Discussion and say "so what do you guys think?"

(PS apparently nobody has watched it, as no one has questioned Leifer's description of the panel topic or commented on any of the other speakers)
 
have you SEEN the quality of the OP allowed on this site? since when are we allowed to post an hour and a half video under General Discussion and say "so what do you guys think?"

(PS apparently nobody has watched it, as no one has questioned Leifer's description of the panel topic or commented on any of the other speakers)
have you SEEN the quality of the OP allowed on this site?
I assume that means me, the troublemaker.

BTW Deidre, I attempted to separate the long video into sections. Perhaps you forgot.... and forgot your suggestion on how to accomplish just that ? (because I asked).
Of course I am defending myself.... and it is presumptuous of you to assume that "nobody has watched it" (the OP video).
Prove it.
 
I did not mean to open up a can of worms,
I just tried to open a discussion of Disinformation vs. Misinformation,
.....which I believe is important, and integral to the basis of the forum.
The "method" by which the idea is brought about should be less important. (posting a 1.5-hour video)
It was simply the best source I had at the time.
 
If you want to go through life thinking that misinformation means "spreading lies", knock yourself out.
Woolery already argued the point that Farid's definition of "misinformation" is not well chosen. I don't completely agree with that take, but at least he's addressing something Farid actually said, because he does define it that way (see the red bold in post #17).

the people who mistakenly spread that specific disinformation lie (misinformation).
You're saying that misinformation can mean "spreading lies", aren't you?

since when are we allowed to post an hour and a half video under General Discussion
it's not the first time I've provided a transcript for a video, or quoted a linked article, to bring a discussion up to Metabunk standards. This is not a new development.

Not everyone is up to being held to a high standard, @deirdre.
 
I did not mean to open up a can of worms,
I just tried to open a discussion of Disinformation vs. Misinformation,
.....which I believe is important, and integral to the basis of the forum.
The "method" by which the idea is brought about should be less important. (posting a 1.5-hour video)
It was simply the best source I had at the time.

oh. well then, as others have mentioned... no. Farid's definitions, which you quoted, are not accurate, because a "lie" in common American English usage implies intent.

ex: if i try to do math on Metabunk and i get my math wrong, then i am posting misinformation. It does not derive from disinformation, it derives from me making a mistake.


Article:
Misinformation is “false information that is spread, regardless of intent to mislead.”


Article:
The most widely accepted definition of lying is the following: “A lie is a statement made by one who does not believe it with the intention that someone else shall be led to believe it” (Isenberg 1973, 248) (cf. “[lying is] making a statement believed to be false, with the intention of getting another to accept it as true” (Primoratz 1984, 54n2)). This definition does not specify the addressee, however. It may be restated as follows:

(L1) To lie =df to make a believed-false statement to another person with the intention that the other person believe that statement to be true.
 
it can mean unintentionally spreading someone else's lie, yes. but that's not what Leifer originally stated.

This is Leifer's original source material repeated yet again:
Misinformation, on the other hand, is the unintentional spreading of lies.
It's hard to be more explicitly "unintentionally spreading someone else's lie", your denial makes no sense to me.
 
your denial makes no sense to me.

you guys are focusing on the wrong stuff. i was responding to Leifer saying:
So, disinformation is nearly always the birthplace of misinformation.

it is only [nearly] always the birthplace of misinformation, if Farid gives an uncommon definition of 'misinformation'. which he does. that is why i said "only according to Farid".

But you guys don't understand my comments and i do not understand y'all's misunderstanding. so it's probably time to move on.
 
you guys are focusing on the wrong stuff. i was responding to Leifer saying:

"So, disinformation is nearly always the birthplace of misinformation."

it is only [nearly] always the birthplace of misinformation

Thank you for pinpointing that. OK, we can agree on this. The level to which propagators of untruths can be bad actors in that propagation can both decrease (e.g. naively passing on someone else's deliberately crafted BS) and increase (e.g. deciding to use some other group's idle speculation and present it as fact to push a narrative, knowing that it's nothing but speculation), so I don't think it's particularly useful to simplify things thus.
 
it can mean unintentionally spreading someone else's lie, yes. but that's not what Leifer originally stated.
yeah, that's exactly what Leifer said: disinformation begets misinformation as people who believe it repeat it.

But Hany Farid doesn't care about misinformation, nobody does, the full transcript has it 6 times and disinfo 41 times.
Farid's main point is that social media intentionally amplify disinformation, and should be regulated:
Article:
Now, we’ve also seen that tweaking the underlying recommendation algorithms that I was talking about earlier can have a big impact on mitigating disinformation. In 2020, Facebook conducted an interesting experiment called “Good for the World, Bad for the World,” in which their users were asked to categorize posts as one or the other. And what Facebook researchers found is that there was a positive correlation between the popularity of a post and its categorization as bad for the world. This is what Geeta was talking about earlier. Then, Facebook trained the recommendation algorithms to make “Bad for the World” posts less visible. They didn’t ban them. They didn’t delete them. They just made them less visible on our newsfeeds.

And the research was successful. It reduced content that was “bad for the world,” but you know what else it did? It reduced the amount of time that people spent on Facebook. And so, what Facebook said was, “Nice try, but we’re literally going to turn this off,” and now knowingly recommend posts that we know are bad for the world. There are mitigation strategies. Despite the challenges, the scale, the definitional problems, there are mitigation strategies that are fairly well understood and could be implemented. The problem, of course, is that these changes are not necessarily good for corporate profits. And here we run into the tension here. I would argue that while the problem of disinformation is complex, the problem with disinformation on social media today is not primarily one of technology, but one of corporate responsibility.

I would also argue that we can mitigate harm without, and this is to Erwin’s point earlier, without necessarily banning specific types of speech or users, but instead we can tweak, as we have already seen, the underlying recommendation algorithms to simply favor civility and trust over hatred, lies, and conspiracies. And, of course, there are some definitional things that we have to get right there. The last thing I’ll say here is we have been waiting for now several decades for the technology sector to find their moral compass. And they have not seemed to be able to do that. They continue to unleash technology that is harmful to individuals, to groups, to societies, and to democracies. And left to their own devices that will continue. We cannot sit back and say, “Well, the technology sector will self-regulate.”
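The mitigation described in that quote, demoting rather than deleting, can be sketched as a small tweak to an engagement-style ranker. The penalty factor and field names below are illustrative assumptions, not Facebook's actual parameters:

```python
# Toy sketch of "demote, don't delete": posts flagged as harmful stay
# up, but their ranking score is multiplied by a visibility penalty.
# The 0.5 factor and the field names are invented for illustration.

BAD_FOR_WORLD_PENALTY = 0.5  # assumed demotion factor

def visibility_score(post):
    score = post["engagement"]
    if post["bad_for_world"]:
        score *= BAD_FOR_WORLD_PENALTY  # demoted, not banned or deleted
    return score

def rank_feed(posts):
    # Highest visibility score surfaces first.
    return sorted(posts, key=visibility_score, reverse=True)

posts = [
    {"id": "civil-post", "engagement": 3.0, "bad_for_world": False},
    {"id": "bad-for-world-post", "engagement": 5.0, "bad_for_world": True},
]

# The flagged post remains in the feed, just ranked below the civil one.
print([p["id"] for p in rank_feed(posts)])
```

This matches the shape of the experiment Farid describes: no speech is removed, the recommendation weighting simply stops rewarding the harmful post's higher raw engagement.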
 