The Digital Playground


REIHAN SALAM: Hi everyone. I’d love to invite Amanda
Lenhart of Pew, Victoria Grand of YouTube, and Del Harvey of
Twitter, up to the stage for our first panel which is on
playing nice, the challenges of the digital playground. So I should warn you guys, I’m
going to very rigorously keep you to your five minute
introductions and discussions so that we can launch into
what will hopefully be a ferocious, feisty, contentious,
discussion about the digital playground. Adora has talked a lot about
some of the dangers associated with these new civic spaces. And it does seem as though as
the internet has become more central to our lives, that
much more of the kind of conversation about civic life
is happening in that space. And that obviously introduces
some complications and also raises anxieties, particularly
among parents of young children. So Victoria, would you like to
kick off the discussion? VICTORIA GRAND: Sure,
yeah, I’d love to. My name is Victoria Grand, and
I look after communications and policy for YouTube and it’s
great to see a lot of familiar faces here. Now just as we’ve been here
this morning, in the short span of time, about 100 new
hours of video have been uploaded to YouTube. That’s about 7 years
of new video a day. And in fact, more video is
broadcast on YouTube in just a month’s time than is broadcast
on all three major US networks over the
past 80 years. So the scale of YouTube is
phenomenal, and as Adora showed in countless really
amazing examples, many which I haven’t seen before, the
diversity of content that we see every single day is
phenomenal as well. So in those hours are sort of
Lady Gaga’s newest video, baby’s first steps, a Syrian
demonstration, and a teenager’s rant, sort of
all combined together. And lots and lots of sort of
cute kittens and dogs on skateboards as well. What’s
incredible is that many of these platforms are basically 7
years old, maybe 8 years old, and it’s hard to imagine,
especially if you’re in the younger generation, a world
where the Facebooks, and the Twitters, and the YouTubes, and
the Tumblrs don’t exist. Because these platforms are such
critical communications platforms, we always start from
the premise, every single day my day is filled with sort
of phone calls about, what do we do about this video? Or this has been uploaded. And the starting point for us
is always a bias toward free expression, toward leaving as
much content up as we can. And there are times when it’s
a lot easier, obviously, for things like child abuse, and
pornography, and content that’s illegal, these calls
are very easy to make. And then there’s content that’s
very challenging. It’s hard to know how to
properly balance the right of people to express themselves,
versus our ability to maintain this vibrant and engaging
community to exchange ideas. Some of the issues that we’ve
seen coming up recently are some of the ones that actually,
Adora talked about. And if we can click here
on the first video. This “am I pretty” trend. And this has now become the
meme, starting to become a meme of the conference. So we saw this starting about a
month ago and it actually is about a four-year-old trend
that’s been across YouTube and Twitter for some time. These are young girls who get
on the camera to ask, am I pretty, am I ugly? And the question is, is this
a disturbing teenage trend or is it not? The girl in this video talks
about how all of her friends tell her she’s pretty but she
doesn’t really know if she believes them. So she wants to sort of crowd
source the answer by asking a lot of people. And you can imagine that the
responses are sort of all over the place, right. There are some that are positive
and affirming, actually probably more than
I thought there would be. And then there are some that
probably do nothing for sort of a teenage girl’s fragile
self-esteem. And the question that we face
as a private company is, should a private company take
away the right of a girl to ask the question to the
world, am I pretty? Is that our role to take
away that right? And is it our role to take away
the right of any of you, or any of us, to say to anybody
else you’re pretty or you’re ugly. Obviously there are going to be
extreme responses to this question that go into a kind
of hate speech territory. But the baseline, what do
we do with this video? Do we sort of decide
to make it go away? What’s interesting about this
trend also is that we saw that one of the most famous, “Am I
Pretty” videos, recently we found out that it was actually
a 21-year-old art student who had posed as sort of a
14-year-old girl who was putting on these videos. Other trends that we’ve seen
recently, and again this is just sort of part of
our day to day, the cinnamon challenge. These are girls, and boys,
from all over the world who will [VIDEO PLAYS] the cinnamon challenge. So what she’s going to do is
she’s going to fill up a tablespoon of cinnamon and eat
it, and then she’s going to have some kind of a reaction
that ranges from sort of gulping, gasping, maybe all
of the way to potentially throwing up. Though they usually don’t,
it’s just sort of repulsive, right. And so doctors have come out
and said this is bad for people’s respiratory systems,
it causes asthma, YouTube please take these videos down. And again we’re sort of left
to assess, OK, how dangerous is this? Do we need to take
action on it? A very similar trend with
Axe body spray. So, in this situation what’s
been happening is people have been taking this Axe
body spray and putting it in their mouths. Again, just these gross
out videos. Axe Body Spray contacts us and
says, this stuff is actually really dangerous to ingest,
YouTube, take action against this. And what we have to think about
is, again, do you take all of these
Axe Body Spray challenges off the site. Another one that we’ve looked
at, and I know I’ve talked with several of you
about these in the child safety community. [MUSIC] And I’m just going to caveat
this by, the next scene that you’re going to see
is a bit graphic. [MUSIC] OK, I’ll switch to
our next video. But the cutting videos are
really interesting. People are uploading videos and
in large part these videos are actually public service
announcements. They’re PSAs by fellow
teenagers. In many cases, they’re people
who are either saying, don’t do it, or who are documenting in
a neutral way and trying to explain what self injury
is all about. I think we found, there was a
study that was done that said that only in about 7% of the
cases of all the self injury videos that are uploaded to
YouTube is it actually promoting the act
of self injury. And so the question is, we have
some advocates who will say, actually these are really
authentic voices. It’s very important to
keep them on YouTube. And then other researchers who
say the very act of looking at the cutting actually triggers
additional cutting. These videos should all be
removed from YouTube. So interesting balance there. And then finally the
smoking videos. We recently talked to a group
of state attorneys general who asked us to remove videos
of people smoking on YouTube as a whole because when you see
images of people smoking it means that they’re more
likely to smoke. And so, as you can imagine, we
have videos of people smoking in the back of a Syria
demonstration video on the curb. It’s a global site. So you can imagine what a
massive undertaking it would be to remove all images
of people smoking. And so again, these are just
some of the questions that we deal with every day. And this is just in the
teen safety context. You can also imagine that we
deal with questions of, what do you do when terrorists use
the site as a distribution platform or for recruitment? What do you do with all manner
of sexual content and even graphic content? Is there content that’s just
too graphic for YouTube? The stuff that’s coming out
of Libya and Syria. Should that just not be
allowed to be seen? What do you do in cases like
sexually charged cases? What do you do with
body paint? What do you do with man boobs? What do you do– there’s
just all kinds of cleavage of all sorts. So these are the types of
questions that we sort of grapple with every day. REIHAN SALAM: Thanks so much,
Victoria, your remarks were delightful, informative,
provocative, and a bit over time. So Del, I’m going to hope that
you’re going to be a little bit better on that front. Del, you’re with us from Twitter
which is a kind of fascinating new civic space, and
please tell us a bit about yourself and also about
the work that you do. DEL HARVEY: So I’m Del Harvey. I have been at Twitter since the
dawn of time for Twitter, -ish, which is like October
2008, so I mean kind of a long time. In internet years, it’s an
incredibly long time. And I head up the Trust and
Safety Team there which is actually an umbrella
organization for a number of sub teams covering everything
from account activity and abuse, to brand policy, to user
safety, to legal policy, to ads policy. There’s a lot of policy
as it turns out. Policy and abuse. So essentially it’s anything to
do with misuse of the site ends up coming to my team
for us to deal with. And of the things that we see I
am fortunate in that we are not hosting video, which I
greatly appreciate in terms of it making my job easier. But we definitely see such an
array of content, you could never actually watch all the
video uploaded to YouTube every day even if you hired
a battalion of people. Similarly, if you were to
actually try to read all of the tweets it would not
go so well for you. And that kind of is conflated
with some other parts of Twitter that make it
particularly complicated in that we have very little
context for tweets. You get 140 characters. If it’s a single tweet, you
really just have that 140 characters, no other information
about what that tweet actually signifies or
means, which becomes relevant when you see a conversation that
starts out between two people and the first @
reply is hey bitch. And you’re like, all right,
well, there are a number of possibilities here. This could be somebody being
mean to someone else. This could be two friends, or
this could be two accounts that are pretending to be dogs
and all of these have a chance and do, in fact, exist. We
have a very healthy, for example, My Little Pony role
playing community. It’s complicated to make the
decision of, oh this shouldn’t be allowed, or this shouldn’t
be allowed. In general, Twitter is
very much a platform. We view ourselves as a platform
that is there for folks to sort of use
as they see fit. And we found, as Victoria, I
think, has as well, that in a lot of ways the community does
do self-moderation also. And if somebody’s out of line
in a group of people, then usually that group of people
tells the person they’re out of line. In terms of actual content that
gets posted, there’s such a sprawling variety of use types
for Twitter within that 140 characters which is kind
of impressive, that we also run into the same problem of,
even if we were to moderate content, which content would fall
within the area of what should be moderated? And that’s something that
obviously we have a lot of conversations with folks about
because we do allow, for example, parody accounts. And we’ve had people say, well
yes, this is marked as a parody account, it’s in
compliance with your parody policies, but it’s not funny. Which sometimes it’s not. And that actually doesn’t
necessarily though mean that we want to remove it or that
it is wrong to have. One of the biggest things that
we try to reiterate is that we really strongly believe that
the correct answer to what someone perceives
as bad speech is actually more good speech. In that, inevitably, if you
remove content what happens is it gets re-posted. And it often gets re-posted
by 30 people or 50 people or even more. And it’s kind of the
demonstration of the Streisand Effect, where a photographer
was doing a documentary on erosion along the California
coast, and one of the images included an image of
her house in it. And she sued to prevent that
image from being included in this coffee table book, meaning
that exponentially more people now saw the image
and knew it was her house, than had the coffee table
book just gone out. So attempting to remove
information almost always results in it being distributed
more broadly, and that’s probably one of our
biggest challenges. REIHAN SALAM: Thanks
very much, Del. And you were under time. Very slightly, but a little
bit under time. I thank you for it. And I have a lot of
questions for you. But Amanda, a lot of your
research is specifically about young people and how they’re
using the internet and particularly how they use
mobile technology. But I was wondering if you
could illuminate our perspective with a bit of your
scholarly work at Pew. AMANDA LENHART: Sure, sure. So I’m a senior researcher at
the Pew Research Center which is a nonprofit, nonpartisan
research center based in Washington DC. Because I’m kind of the data
lady on the panel, I want to start out by talking a little
bit about what do we know. And so one of my last large
pieces of research was on digital citizenship, writ large,
which I think is a real theme that we have here today. And so I just want to make sure
that we have a sense of what teens think and experience
when they go particularly to social spaces online. And so when we interviewed teens
we found that, first off, American teens are socializing
in digital space, and I think that’s an incredibly important
thing to remember. I think we all, in this room,
know it, but it’s always good to remember that 80% of
kids are actually using social media. It’s also important to remember
that we asked teens whether or not they felt that,
in general, social spaces were kind or unkind spaces. We wanted to know the emotional
climate of online space, and the majority of kids,
about 69% said people are mostly kind online. So in general, teens have this
sense that social space online is a positive place. But it’s important to remember
that 20% of teens say it’s actually not a positive place. I’ll talk a little bit more in
a minute about some subgroups who are more likely
to say that. But even with this positive
sense that teens have, it’s important to remember
the teens also witness negative behavior. 88% of teens say I have seen
somebody else act mean and cruel to another person on
a social networking site. That’s a lot of kids. But I think if you ask kids that
same question about, have you witnessed somebody being
mean and cruel to somebody in the hallways of your high school
you would probably get a similar number. It’s also important to know that
for most people it’s a witnessing experience, it’s
not a personally felt experience. Only about 15% of kids say that
they have had somebody be mean and cruel directly to them
in a social media space. It’s also important to remember
that teens are actually saying and telling us
that they stand up for other kids when they see them
being harassed online. Certainly the most common
response to witnessing people being mean in a social space
is to ignore it. And that, in many ways, makes
sense because often if you’re an adolescent, you may not know
the full context about what you’re seeing in digital
space, it might be a joke, there might be a lot of
back story that you have no idea about. And in fact in our focus groups
teens told us that they were more likely to stand up
and defend somebody who was closer to them where they’re
more likely to know the back story. And so as I said about 80% of
teens have actually stood up for somebody they’ve seen being
badly treated online, but because this is a yin and
yang kind of experience, we also heard teens tell us that
they joined in in the harassment. About 20% of kids said, yep,
I’ve joined in, I’ve piled on, I’ve helped to harass another
person in a social space, and about 2/3 of teens have actually
witnessed other people do this. So it is both a generally kind
space, but it is a space where teens witness, and experience,
and in many cases, enact cruelty to each other. So I think we then have to ask
ourselves what kind of space do we really want? What kind of expectations for
perfection do we have in social space? Should we be more worried also
about the kids who are more likely to say that the internet,
that social media is an unkind space, and that’s,
in general, middle school girls and African American
youth, substantially more likely to tell us that people
are mean online, people are cruel and unkind. I also think we want to talk
today and ask ourselves about what is the role of parents and
adults in helping to set norms around online behavior? Because we see this. We know that teens
act badly online. We know that generally they
don’t, but they see it. But what is our role
as adults? And when we ask teens about
where they heard about general information about online safety,
parents were the dominant place. In fact, parents have a
remarkably powerful role in teens’ lives. Teens, in fact, told us this
themselves which is actually sometimes quite difficult to get
adolescents to admit, that their parents were incredibly
instrumental in helping them to think through many of these
kinds of norms of behavior. Also important to note though
in the data that an enormous number of teens told us, the
average teen cited five different sources
for their general information about online safety. So that includes teachers who
come in sort of second, right behind parents. That also includes websites
and web properties. It includes people like youth
pastors and coaches, and your best friend’s mom. There’s an enormous number of
places where teens are getting this information. And so, in many cases, it’s a “takes
a village” finding, that really many, many people are
contributing to teens’ understandings of how
to behave online. It’s also, I think, important
to think about specific incidents. So we asked teens, OK, so when
you have a specific moment where you’ve witnessed
something online, what do you do? About a third of teens are
willing to reach out to other people and say, hey,
I need some advice. Another 2/3 aren’t. Again, we don’t know exactly
what these incidents are about, we don’t know their
severity, but when teens do reach out they reach
out to their peers. So again, even as we think about
adults in their role in helping to set norms for this
online behavior, peers are actually where teens are going
for specific advice in a particular situation. So we can’t ignore the
importance of peer education. And finally I just want to
problematize this reliance again on adults and
peers and parents. I mean certainly, we often, I
think in a space where we’re trying to figure out how to
help people we want to use education as our main way of
fixing some of these problems that we see in internet space. But I think we also need to
remember that not every parent is capable of being the person
to give advice to every child. Not every adult is a
good role model for how to behave online. That oftentimes we have
expectations for our children that we ourselves
as adults don’t meet. And so I think we need to ask
ourselves about what is a reasonable set of expectations
to have and what kinds of trade-offs are we willing to
make to get to that point? And I’ll stop there and look
forward to the discussion. REIHAN SALAM: Thanks
so much, Amanda. So Victoria, to that last point
that Amanda just made. The impression that I get from
some of the issues that you raised in your discussion is
that many people are looking to YouTube to adjudicate
these much larger social and civic issues. For example, cigarette smoking
is a bad thing, ergo, I want you to remove images of it. And I kind of wonder how you
feel about that because given that YouTube is actually serving
the civic function, many people believe that well,
ergo, you have this larger civic responsibility. Yet you obviously have a lot
of other different agendas that you’re seeking
to fulfill. And I wonder how you navigate
that terrain. VICTORIA GRAND: Sure, I think
it’s a challenge when people expect a private company to
solve these disputes between two children, for example, or
a situation like exposure of children to tobacco. One of the things that happened
when the state AGs came to us, is they said, well
look, we’ve approached the MPAA and the MPAA is willing to
take a look at this and to do more to not show
images of people smoking in their movies. And the difference between the
MPAA and what they can do, and what YouTube can do
is that we don’t actually create the content. And we can’t edit the content
either, right. And so it’s a different
kind of situation. I think it’s important to notice
that people come to you to be entertained primarily,
but I do think that it’s a good place to be educated
as well. And Del and I, when we see each
other, talk about whether we can be doing more to raise the
profile of the education resources that we have. I think
it’s an ecosystem where we all can probably
be doing more. I think schools can be teaching
internet ethics in a much more direct way. But obviously there’s a lot the
platforms can continue to do to educate people. REIHAN SALAM: Del, Twitter isn’t
the first thing you’ve done in the space of safety. Earlier on you worked to protect
children from online predators and I wonder, you talk
about a very wide range of issues, but I wonder
specifically with regard to children and feelings being hurt
and this kind of domain, this seems to be an area in
which there are a lot of people– and Amanda was talking
about parents and their attitudes and
expectations. I wonder is this particularly
fraught? Is this a lot of the kind of
commentary and feedback that you get where people are saying,
my child was hurt in this way, by these comments, and
I want you to do something about it, I want you shut down
this particular account? DEL HARVEY: We actually don’t
have that significant of a teen presence as compared to an
adult presence on Twitter. There’s not as high of a
number of teenagers. And that, however, certainly
doesn’t keep adults from having their feelings hurt. So I can still certainly speak
to people who have been hurt by comments that they’ve
received. It’s actually been very
interesting to watch the evolution of how folks handle
getting mean comments. Because the first kind of report
that you get is this person said something terrible
to me, you need to stop them. And sometimes you go look and
they actually said something terrible first. But the other
person replied and that was the part that wasn’t OK. And sometimes it’s more like,
well this person said something terrible about me and
you go and look and the person said something terrible
and then they tweeted, this person said something terrible
about me, and look at what they said, and now all their
friends have jumped on that person that said the
terrible thing. It’s this weird sort of, you need to
stop this person from saying mean things, but I’ve already
taken care of it also, over here, this other way. But why do you let people
post mean comments? And it’s a really kind of weird
disconnect because I think one thing that we end up
having to tell people a lot is if somebody really wants to
create an account on a site, they can create an account
on a site. By which I mean, say that
you have an account that absolutely goes beyond the pale
and it gets suspended. So they can get a proxy, and a
disposable email address and create 30 accounts, or 50
accounts, or however many accounts and use all of those
accounts to do the same thing. The idea of suspension or
removing the content or anything like that actually
resolving this conflict has been pretty thoroughly
disproven, at least in my experience. What we found is actually
helpful instead is folks just saying, hey, that was kind of a
jerky thing you said, like what’s up, dude? REIHAN SALAM: Amanda, I wonder,
one way of framing this discussion is that
essentially what we’re doing is reproducing certain kinds of
patterns and certain kinds of inequalities that exist in
the offline world, in the online world as well. And I wonder if you think that
there is any legitimate role for the public sector for
regulation because it’s natural to say that, I’m a
private sector organization, it’s not reasonable to expect me
to kind of narrowly tightly police all of this content
that’s being created in the name of some kind of
larger civic voice. Because, again, these
are global organizations in many cases. There are many different local
standards around these issues. But I’m curious, because you’re
interacting with a lot of people in the public
policy side I imagine. And going back to this idea of
trying to cultivate digital citizenship. Should there be some kind of
role on the part of these organizations that are playing
the civic role to do that? AMANDA LENHART: That’s an
excellent question. I have to preface my response
by saying that the Pew Research Center is strictly
nonpartisan and therefore I can’t make any kind of policy
recommendations. REIHAN SALAM: Tell us about the
state of the debate and sort of what some might
say on that front. AMANDA LENHART: Yeah, I
certainly think there’s, as you point out, there’s a lot of
incredible complexity here. These are global corporations. What holds in one jurisdiction doesn’t hold in another. The privacy regulations in
Europe are vastly stronger than the privacy regulations in
the United States, and how do companies manage all of
those different issues? I do think
there are certainly a lot of calls. I think, on one hand,
there’s the side of advocacy. There’s the side of those who
work with children who see the damage that some of these
experiences can have to children’s psyches and who feel
very strongly and have a lot of concern that something
must be done. But on the other hand, obviously
we have this need to protect the right of others
to engage in free speech, as Victoria has pointed out. So I think those are the real
tensions behind the debate. I do think that one of the
middle grounds that often gets proposed is education,
because that can be incredibly localized. It starts at the user. It doesn’t require a particular
technological intervention. It doesn’t require regulation
which makes companies flip out. But what it does require is it
requires a lot of work on the part of parents, and requires
a lot of work on the part of the end user. And I think one of the questions
that really is before us is, how capable is the
end user of taking some of these steps? And so certainly when you’re
dealing with adolescents you have a whole overlay of
emotional difficulties on top of not necessarily being as
familiar with the technology. And I think we have
expectations, I think in this room, we’re all incredibly
tech savvy, right. We are comfortable with
the technology. Everybody’s got their laptops
cracked open. But you go back and think about
your relatives, think about your mom, think about your
second cousin, Susie, who maybe doesn’t know
what’s underneath the hood of her computer. And who doesn’t want to know. And who has a very
basic knowledge. And then expecting those folks
to really be able to rise to the level of saying, hey, my
son or my daughter really needs to take these six steps
to protect him or herself in the online world, when the
parent themselves doesn’t even know that those steps
need to be taken. So I think there’s a lot at play
here that makes this a complicated space. REIHAN SALAM: Before I open it
up for questions, I have one last question for Del
and for Victoria. I wonder, do you think that
there’s been an evolving understanding on the
part of users? That is, do you think that
people have a greater appreciation that you guys are
in a way a platform for other people’s voices, rather than
an active agent in deciding what kinds of content
you seek to promote? VICTORIA GRAND: Yeah, I think
one thing is we’re not advocating a sort of no-action
approach here. I think if you look at what our
teams do every day, they spend a lot of time looking at
what is our turnaround time for taking down porn that gets
uploaded to the site? Look at a site like a
ChatRoulette, for example, something that could just
implode if you allowed a lot of porn and a lot of
nudity to go on it. And so obviously we deal a lot
with the porn situation. We have a lot of user control. So, for example, for the “Am I
Ugly” videos, that user who uploaded that video had the
ability to turn comments off altogether, to moderate
her comments, to block people who commented. Users also have the ability to
say, hey, I appear in this video and I did not consent, I
want this image of me removed from this video. And we’re working right now on
blurring technology that will allow people to blur images
of other people in videos. So I don’t think we’re saying
stand back and do nothing. And I do think that the social
norms are growing around it. I feel like when we launch
YouTube in any new country we usually see a massive spike
in flagging, and then very quickly it adjusts itself and
people start to learn what is and is not acceptable
to upload. DEL HARVEY: And I’d say
similarly we have a lot of the same, we’re also not hands off,
there are things that we do not allow and we very
actively try to protect against as many of those
as possible. I would say the other thing
that we’re working on, and that we actually talk to a lot
of the other companies in the space about pretty regularly
is trying to actually share more information and more of the
work that we’re doing on the educational side between
each other, also. Because, for example, Victoria
and I have often chatted about how this is a space where it’s
not competitive. Right, this is helping people
out, keeping people safe, helping kids, educating
people. I’m not going to be like,
Victoria, just saying, my research is a little
better than yours. Right, it doesn’t
benefit people. This is one of the realms where
it really is we can all work together without having to
be worried about, well, the page views for that help
page over there are bigger than mine. I got to get some people
viewing this now. There’s just so much more
happening in that space than we really ever have
had before. And it’s been over the past
probably two years for Twitter at least, that we really started
talking with other companies and trying to make sure that
there is this ongoing dialogue of what should we be addressing
as companies? What are users not aware of? What do they need to
know more about? Et cetera. And you could probably
speak to the same. VICTORIA GRAND: Yeah. I think it’s hard. It’s like feeding people
spinach though. People come to the platform
to be entertained. They might not go to the safety
section and so a lot of the conversations have been
about how you involve youth in teaching youth. And how can you create virality
around the phenomenon of flagging, of privacy
controls, of those kinds of things, because it’s hard to get
people to focus on them. REIHAN SALAM: Does anyone
have any questions? I don’t want to eat too much
into your break time but there’s a gentleman
back there. AUDIENCE: Hello. How do you deal with exposing
what are appropriate memes or discussion models? Because I think that’s an
interesting point that people can say what’s appropriate for
themselves, either in their profile, or in the
discussion you’re hosting around some content. That would let people, kind of
like we did with the internet, [UNINTELLIGIBLE] and running
real time discussion. So that would be something that
I think there maybe is a model there, and maybe you
could discuss that. That’s scalable. REIHAN SALAM: Amanda, would
you like to field that? AMANDA LENHART: I’m not
sure that question is directed at me. REIHAN SALAM: If there’s
anyone who’d like to field it, please do. VICTORIA GRAND: I think the vast
majority of the content that’s on these platforms
is acceptable. We’ve tried to do things like
blog about, for example, when content is coming in from Syria
or from Libya or from citizen journalists on the
ground, oftentimes it’s very graphic and it will be a video
that’s coming from a cell phone that will be labeled with
something like, one, two, three, four, and it’s
extremely graphic. But it doesn’t have any context
attached to it. So it’s very hard for us to
reckon with, can we leave this up without any context for
people to know why there’s a brain on the ground
here, right? That it’s not just sort of a
shock and disgust type video. So we’ve tried to do blogs
around this topic, around things like artistic nudity
and how to make sure that those videos can stay up and
they’re not taken down. Again, I think the issue that
we’re always tackling is this limited attention span. And I think having some
breakthroughs with the teen safety community around, and I’m
sure you guys think about it every day, how can
we battle that? Because those blog posts don’t
get viewed nearly as much as the videos. How can we partner with the
Konys of the world, which got, gosh, I think that video
got more views than any other television show in
the US this year, apart from the Super Bowl. How can we bring the awareness
to that level? It’s very hard to cut
through the noise. AUDIENCE: Hi. I have a seven-year-old, an
eight-year-old, and an eleven-year-old, and
they’re constantly asking me to go on YouTube. And I know that I can’t put them
on YouTube and leave the room and go do the dishes. I have to be right there because
there’s an opportunity to click away to anything. What are the age ranges that
you would recommend for use for YouTube when you can, I
hate to say this, but walk away and leave them watching
that video while you go and do laundry. What are the terms
of use for age? VICTORIA GRAND: Yeah,
they’re 13 plus. And I know that there are a lot
of parents that go onto YouTube with their kids
when they’re under 13. We also have a safety mode
setting that you can set. And that means that videos that
have been marked for 18 plus, do not get presented. The other thing that happens
with safety mode is we actually have an algorithm
that scans all videos for flesh tones, believe it or
not, and videos that are deemed to be high on flesh
tones are not included in safety mode. That does capture a lot of
baby videos as well, so there’s some false positives.
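A minimal sketch of the kind of safety-mode filter being described, assuming each video carries an age-restriction flag and a flesh-tone score from some classifier; the field names and the cutoff below are hypothetical rather than YouTube’s actual implementation, and, as noted, a crude flesh-tone cutoff also sweeps in harmless baby videos:

```python
# Illustrative only: hypothetical "safety mode" filter.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    age_restricted: bool      # marked 18-plus
    flesh_tone_score: float   # 0..1 from a hypothetical flesh-tone classifier

FLESH_TONE_CUTOFF = 0.7  # made-up threshold

def visible_in_safety_mode(video: Video) -> bool:
    """Hide 18-plus videos and anything the flesh-tone scan rates too high."""
    if video.age_restricted:
        return False
    # This crude cutoff is exactly what produces false positives such as baby videos.
    return video.flesh_tone_score < FLESH_TONE_CUTOFF

videos = [
    Video("dog on a skateboard", age_restricted=False, flesh_tone_score=0.1),
    Video("baby's first bath", age_restricted=False, flesh_tone_score=0.8),
]
print([v.title for v in videos if visible_in_safety_mode(v)])
# ['dog on a skateboard']
```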
But, yeah, we say that the age for sort of surfing solo is 13. And I know that many people say
that’s not realistic and go on with their children before
to teach them some of these norms. AUDIENCE: Hi. You got to my question a little
bit in terms of what type of models are being built
to have this predictive detection of conflict on these
platforms. So I just wanted to have an open question,
if there were any other predictive algorithms
or models that you all are building to find conflict
on Twitter or YouTube before it happens? VICTORIA GRAND: Sure. I think what’s interesting,
whenever we talk about controversial content on YouTube
and we say well, the scale is so massive,
people have no sympathy for that argument. And I’ll tell my friends this,
and they’ll say, but you’re Google, figure it out. And the challenge is that an
algorithm actually can’t do most of this work. So even when you’re talking
about things like nudity, an algorithm isn’t going to be
able to know whether the nudity is being presented in a
breast cancer documentary, in a surgery, or in an artistic
nudity context. And so actually what we do is we
use algorithms to organize videos for review. So you scan for flesh tones, if
it’s high on flesh tones it means people are going to
be reviewing it faster. You scan for things like the
flagger’s reputation. How accurate is this
flagger usually? Do they always flag the Justin
Bieber videos because they just want to take him down? If so, down in the
queue, right. Has this video already been
flagged before and been reviewed by a human? Down in the queue. How hot is it? What is the flag
to view ratio? Very high up in the
queue if it’s hot. So we use algorithms to help us
to prioritize the review, but ultimately a human does need
to look at those because so many of these decisions
really do turn on context.
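The prioritization described here maps naturally onto a scoring function over a few signals: the flag-to-view ratio, the flagger’s track record, the flesh-tone scan, and whether a human has already reviewed the video. A minimal sketch in Python, with invented field names and weights rather than YouTube’s actual system:

```python
# Illustrative only: hypothetical scoring to order flagged videos for human review.
from dataclasses import dataclass

@dataclass
class FlaggedVideo:
    flag_count: int            # how many times it has been flagged
    view_count: int            # total views
    flagger_accuracy: float    # 0..1, how often this flagger's past flags were upheld
    flesh_tone_score: float    # 0..1 from a hypothetical flesh-tone scan
    previously_reviewed: bool  # already looked at by a human

def review_priority(v: FlaggedVideo) -> float:
    """Higher score means a human reviewer sees it sooner."""
    flag_to_view = v.flag_count / max(v.view_count, 1)   # "how hot is it?"
    score = 10.0 * flag_to_view
    score += 2.0 * (v.flagger_accuracy - 0.5)            # serial mis-flaggers push it down
    score += 1.5 * v.flesh_tone_score                    # likely nudity gets seen faster
    if v.previously_reviewed:
        score -= 5.0                                     # already reviewed: down the queue
    return score

flagged = [
    FlaggedVideo(flag_count=40, view_count=1_000, flagger_accuracy=0.9,
                 flesh_tone_score=0.8, previously_reviewed=False),
    FlaggedVideo(flag_count=3, view_count=2_000_000, flagger_accuracy=0.2,
                 flesh_tone_score=0.1, previously_reviewed=True),
]
review_queue = sorted(flagged, key=review_priority, reverse=True)
```

The algorithm only orders the queue; the judgment call itself stays with a human reviewer.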
You think about the N word. We can’t do a scrub of all of
YouTube and just remove the N word from every single comment
because it might be self referential, it might be in an
Eddie Murphy video, comedic. There’s so many different ways
that context comes into play that any use of just a broad
algorithm would be over broad. And it would be, from our point
of view, censorship. REIHAN SALAM: Del do you have
any thoughts on the use of predictive analytics? DEL HARVEY: I would say that
what we actually use algorithms for more, and I
actually know you do this as well, is dealing
with the spam component of it, right. So sure this is a lovely video
of a child gurgling at a cute puppy, and the comment is
something along the lines of, great video, you should check
out my site, cheapcyalis.com. And you’re like, you know,
I don’t think you watched this video. I’m pretty sure. And so we actually use a lot of
stuff to identify spam and remove spam, and I would
actually wager that one thing that I’ve seen that I would
imagine that YouTube sees as well is if somebody’s
really a bad actor on your site. Like they’re just doing
terrible things. They’re not just doing terrible
things in terms of like being mean to somebody.
They’re also creating multiple accounts. They’re sock puppet accounts. They’re straight up
impersonation on something. They’re violating other rules. And we see a lot of bad actors
again, that get flagged because of spam violations,
essentially. Even if what you might say,
hey, this account’s bad because of X and
it’s not spam.
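Del’s examples, off-topic promotional comments and accounts whose behavior (sock puppets, mass posting) flags them even when the original complaint was about something else, suggest simple behavioral checks. A minimal hypothetical sketch; the patterns, thresholds, and parameter names are invented for illustration and are not Twitter’s or YouTube’s actual rules:

```python
# Illustrative only: hypothetical spam and bad-actor signals.
import re

PROMO_LINK = re.compile(r"https?://\S+|\b\w[\w-]*\.(com|net|biz)\b", re.IGNORECASE)
PROMO_PHRASES = ("check out my site", "buy now", "cheap")

def looks_like_comment_spam(comment: str) -> bool:
    """Flag comments that both pitch something and link out, whatever the video is."""
    text = comment.lower()
    return bool(PROMO_LINK.search(text)) and any(p in text for p in PROMO_PHRASES)

def looks_like_bad_actor(accounts_created_24h: int,
                         duplicate_posts: int,
                         distinct_targets_messaged: int) -> bool:
    """Cross-account behavioral signals: sock puppets and mass posting."""
    return (accounts_created_24h > 5
            or duplicate_posts > 20
            or distinct_targets_messaged > 50)

print(looks_like_comment_spam(
    "great video, you should check out my site, example-pills.com"))  # True
print(looks_like_bad_actor(accounts_created_24h=1,
                           duplicate_posts=2,
                           distinct_targets_messaged=3))  # False
```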
REIHAN SALAM: Guys we have a break now.
