Sunday, December 19, 2010
Understanding Anonymous, Understanding Youth
Well, with a couple of teenager outings documented, it seems likely that this assemblage is made up of at least some young people. But what do we take from thinking that they are "our students"? Also, I wonder how well the group can be understood by knowing a few of the unveiled individuals involved. Given their 4chan affiliation, non-hierarchical "divided by zero" structuring, political and jackass-styled frivolous efforts, and "doing it for the lulz" history, the question that interests me much more follows up on last spring's InfoStructure talk by Julian Dibbell: not "who" but "why" are young people Anonymous?
Wednesday, December 8, 2010
Guiltier on grounds of heteronormativity: On youth sexuality and disproportionate sentencing
A Washington Post article quotes Prof. Stacey Horn from UIC interpreting Himmelstein and Bruckner's study this way: "To me, it is saying there is some kind of internal bias that adults are not aware of that is impacting the punishment of this group," she said.
But even with more school expulsions, bigger criminal records, and more frequent determinations of deviance from those around them, "it gets better" for these kids, right?
Monday, December 6, 2010
It gets better?: On technologies of homophobic bullying and adolescence
"Eighty per cent of Irish teachers have witnessed homophobic bullying in schools" reports this article on bullying and the It Gets Better campaign. This piece raises some important considerations about the campaign, and hits on a few things that I have been tossing about for a while now. Namely, what are we saying when we tell kids to wait and to believe that "it will get better?" Is it enough for young people to know that they occupy a transitory and liminal space of cruelty? Is this what they need to make it through?
What are we saying when we tell kids to just hang in there?
First off, perhaps I am reading into it, but there seems to be a suggestion of eventual escape in the campaign's "it gets better" message. Something like "Hang in there, kid! You'll be able to get your license and maybe even your emancipation and then you can move the heck away from this crap to more acceptance eventually." Ok. That's a little harsh. But I guess what bugs me is who is being asked to retool in this campaign. To me, it seems, once again, that the onus is being put on the kid here -- the onus to hang in there. To know that it doesn't make any sense, but to know that it will get better. The onus to take it until they can get out. But should young people have to take it? Should they have to wait for the changes that come with more adult living in order to be treated ok? Should they have to escape?
Of course, many do. This is, indeed, what some young people need to do to be ok. However, according to Mary Gray's very bitchin' book, the popular discourse gets it wrong when it suggests that LGBT-identifying youth view cities as light-at-the-end-of-the-tunnel lifelines where respect and real living, like pots of gold, await them: not all rural kids jump at the chance to get out of the country when they grow up out in the country. Not all can leave. Not all have to leave. Not all want to leave. And not all do leave. How is this considered?
Outside of this, I see this framing of the homophobic bullying some LGBTQ youth go through as quite similar to adolescence -- an extended period of marginalization and oppression within Western culture in which young people are deemed less-than-full citizens who are unworthy of full civil rights. This, too, is a cruel liminal period. Similar to the hostile conditions faced by many lesbian, gay, bi, queer, questioning, and trans youth (as well as to the non-hetero status of many of these youth), young people in adolescence are told they will grow out of it, that "it will get better" when they "move on." Yes, it is comforting to know that you are not alone in your plight. But what I feel is missing in both analyses is the fact that this cruelty does not go away once we grow out of it. Kids move on, perhaps, to happier, more queer-friendly adult social locations. The experience of marginalization and cruelty remains a part of life for others.
I think the campaign says some very important things. Still, I wonder how this campaign would be different if it spoke not to LGBT youth, but to those involved with youth, to the active and complicit social relationships that make up this technology of bullying. 80% of Irish teachers say that they see bullying happening toward their students. Dorothy Espelage reports similar numbers in her school bullying research stateside. Still, the promise is that youth will move beyond the negative treatment eventually. Even before we consider that most LGBTQ-identifying folks, in fact, never do find spaces in our country where "it gets better" enough:
-to allow them civil rights that respect and protect their families' rights to earned social security and pensions, to insurance, and to their general bonding,
-to provide the couple immigration rights,
-to allow them to make decisions for family members and to visit their loved ones unexpectedly in the hospital without a lot of hoo-haw and pre-filed paperwork (the logistics of Obama's April federal decision are still being worked out),
or,
-to keep them from losing their job because of who they are,
this current approach seems to me to be based in at least a bit of unintentional neoliberalist finger-crossing and good old American denial of structure.
Jeffrey Arnett coined the term "emerging adulthood" to address the lengthening of Western non-adulthood deep into the 20s. On a visit to campus a couple years back, he stated numerous times that young people and their parents across the Western world have been so relieved to hear about his "new developmental stage" because it helped describe their experiences of not finding meaningful careers, not feeling like they were "on a path," not "knowing who they are," not being able to support themselves financially or to figure out how to afford health insurance. Yes, this sort of not finding and not knowing involves a lot of exploration, which folks think is quite swell in these parts, as in Europe. But it also involves a lot of stress and uncertainty, and, for many, a forced exploration that comes from having few meaningful or even tangible places to settle. With no universal healthcare, it also involves real fear and risk. People felt relieved, Arnett said, because knowing that emerging adulthood "existed" made them feel more normal.
Exploration is often quite wonderful, but the rhetorical line drawn between youth exploration and adult stability in US culture gives me serious pause. Why such a stark division? Does exploration end when adulthood begins? Of course not. Why might we tell young people that it does?
Perhaps relatedly, I have noticed that concern about this new developmental stage commonly draws justifications about how youth prefer exploration over settling down. It goes something like this: Young people want to explore. They do not want to settle down into drudgery. As such, emerging adulthood is meeting young people's interest and needs. It is good for them. They want it.
This line of thought is rooted in an artificial binary of youth/emerging adulthood as a time of exploration, and adulthood as a time entirely devoid of exploration. I am not at all convinced that this binary in any way represents reality, or that a choice involving these two options needs to be made. However, with such choices laid out on the table, who wouldn't volunteer to avoid adulthood at all cost?
This type of thinking also strikes me as similar to Hargittai's work on digital na(t)ives. Are all young people naturally tech savvy? Clearly no, argues Hargittai in her article. Are all young people interested in drifting through their 20s and 30s? Hardly. But, in either case, what might be the result (or purpose) of common conceptualizations of youth that state that they are these things, that they want these things? How might these framings shape how young people are treated, funded, educated, counseled, comforted, policed? How might this shape how they understand themselves?
Arnett's theory takes into consideration the societal changes underway that make delayed adulthood more plausible -- the "new information society," birth control and greater acceptance of sex outside of marriage, later marriages... But emerging adulthood doesn't question these changes or critique the culture that ends up denying full citizenship to more and more "young people." These things are accepted as immutable givens. The spotlight falls instead upon young people's search for personal "identity" and meaning. In emerging adulthood, extended adolescence is young people's exploration; it's their choice, it's their problem to solve. Again, the young people are the ones who are to do the retooling to fit in. Should we believe that the reason 20-somethings work McJobs and pay exorbitant prices pursuing endless degrees boils down to "because they want to"? Baseball-loving girls drop out of baseball "because they want to" as they move further into grade school and receive more intense gender-based teasing from boys on their own teams. There is a whole lot of social and structural experience involved in both of these types of individual wanting. Without a critical perspective, it is impossible for "emerging adulthood" to distinguish young people's intentional, meandering explorations from the very purposeful and productive shaping of their subjectivity through governmentality and self-regulation.
Ok, enough. Most of this will be for another post. For now, though, I am left thinking about the impact of being told that something that is causing hardship is normal rather than troubling. What do we lose when we are coached to feel normal within clearly abnormal and unjust circumstances? What happens when those who are oppressed think that the conditions they exist in are normal?
By accepting that denied adulthood through the 30s is normal, I think that concern is removed from the systemic and structural factors involved in young people's struggle. By accepting that this is normal, I think people feel less worried about how young people are faring and being treated in our culture. By accepting that it is normal, I think we are more able to look away from the unjust economic and social policies and practices involved in young people's current social realities. By accepting that it is normal, social critique and discontent lessen as struggles continue.
I believe both extended adolescence and homophobic bullying present extremely strong cases for not just looking to more just futures for kids, but for taking seriously what is happening in young people's present. Hope is problematic if it convinces us to ignore our (or others') realities. Yes, those who are treated poorly in liminal spaces due to the way they are or identify should be able to feel hope. But they also deserve to claim and use their social discontent. They deserve to have their struggles read as structural rather than personal problems. It may, indeed, "get better" for LGBTQ youth and for young people in adolescence, but this means little when the social marginalization and cruelty left behind remain considered a normal part of life for others among us.
Here's to more hope for the present, eh?
(image credit: http://www.pawesome.net/2010/01/a-roundup-of-hang-in-there-motivational-posters/ Pretty pawesome.)
Wednesday, September 1, 2010
The Technology of Adolescence
Crispin Thurlow's Fabricating Youth.
Wednesday, August 25, 2010
New York Times
June 6, 2010
More Americans Sense a Downside to an Always Plugged-In Existence
By MARJORIE CONNELLY
While most Americans say devices like smartphones, cellphones and personal computers have made their lives better and their jobs easier, some say they have been intrusive, increased their levels of stress and made it difficult to concentrate, according to a New York Times/CBS News poll.
Younger people are particularly affected: almost 30 percent of those under 45 said the use of these devices made it harder to focus, while less than 10 percent of older users agreed.
Neil Erickson of Akron, Ohio, blames his lack of focus on his cellphone. “It’s distracting, but you never know if something is going to be important,” he said in a follow-up interview. Mr. Erickson, who is 28 and studying computer engineering, added, “I suppose I could cut down on checking e-mail and phone use, but I probably won’t.”
Technology has simplified life in many ways for Liz Clark, 49, a Realtor from Rye, N.Y., by allowing her to shop online, stay in touch with friends and keep tabs on her three children. “I can text them, and they get back to me immediately,” Ms. Clark said.
But while mobile devices and PCs have eased stress for some, just about as many said the devices had heightened the amount of stress they felt.
“Every single electronic device absolutely causes some stress,” said Warren Gerhard, 55, of Cape May, N.J. Because Mr. Gerhard, a retired member of the Coast Guard, is a volunteer E.M.T. worker, he cannot turn his cellphone off.
People seem to find it hard to shut down after work. Almost 40 percent check work e-mail after hours or on vacation.
Some people can’t imagine living without their computers. About a third of those polled said they couldn’t, while 65 percent said they either probably or definitely could get along without their PCs. The people who are most computer-dependent tend to be better educated and more affluent.
While most said the use of devices had no effect on the amount of time they spent with their family, a few were concerned. One in seven married respondents said the use of these devices was causing them to see less of their spouses. And 1 in 10 said they spent less time with their children under 18.
The nationwide poll was conducted May 6-9, using both land-line phones and cellphones. Interviews were conducted with 855 adults, of whom 726 said they used a personal computer or had a smartphone. The poll has a margin of sampling error of plus or minus 3 percentage points for all adults and 4 percentage points for computer and smartphone users. Complete results and methodology are available at nytimes.com/polls.
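A quick back-of-the-envelope check on those margins: the reported figures line up with the sample sizes under the standard worst-case formula for a 95 percent confidence interval on a proportion. A minimal sketch in Python (the formula and the rounding are my assumptions; the Times only reports the final numbers):

import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% sampling margin of error for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes reported in the poll write-up above.
for label, n in [("all adults", 855), ("computer/smartphone users", 726)]:
    print(f"{label}: +/- {100 * margin_of_error(n):.1f} percentage points")
# all adults: +/- 3.4 percentage points (reported as 3)
# computer/smartphone users: +/- 3.6 percentage points (reported as 4)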
An Ugly Toll of Technology: Impatience and Forgetfulness
June 6, 2010
By TARA PARKER-POPE
Are your Facebook friends more interesting than those you have in real life?
Has high-speed Internet made you impatient with slow-speed children?
Do you sometimes think about reaching for the fast-forward button, only to realize that life does not come with a remote control?
If you answered yes to any of those questions, exposure to technology may be slowly reshaping your personality. Some experts believe excessive use of the Internet, cellphones and other technologies can cause us to become more impatient, impulsive, forgetful and even more narcissistic.
“More and more, life is resembling the chat room,” says Dr. Elias Aboujaoude, director of the Impulse Control Disorders Clinic at Stanford. “We’re paying a price in terms of our cognitive life because of this virtual lifestyle.”
We do spend a lot of time with our devices, and some studies have suggested that excessive dependence on cellphones and the Internet is akin to an addiction. Web sites like NetAddiction.com offer self-assessment tests to determine if technology has become a drug. Among the questions used to identify those at risk: Do you neglect housework to spend more time online? Are you frequently checking your e-mail? Do you often lose sleep because you log in late at night? If you answered “often” or “always,” technology may be taking a toll on you.
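Just to make that screening logic concrete, here is a toy version of the decision rule in Python -- purely illustrative on my part, not NetAddiction.com's actual instrument, which is more involved:

# Toy scorer for the self-assessment rule described above.
QUESTIONS = [
    "Do you neglect housework to spend more time online?",
    "Are you frequently checking your e-mail?",
    "Do you often lose sleep because you log in late at night?",
]

FLAG_ANSWERS = {"often", "always"}

def may_be_taking_a_toll(answers):
    """Flag risk if any answer is 'often' or 'always', per the article."""
    return any(a.strip().lower() in FLAG_ANSWERS for a in answers)

responses = dict(zip(QUESTIONS, ["sometimes", "often", "rarely"]))
print(may_be_taking_a_toll(responses.values()))  # True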
In a study to be published in the journal Cyberpsychology, Behavior and Social Networking, researchers from the University of Melbourne in Australia subjected 173 college students to tests measuring risk for problematic Internet and gambling behaviors. About 5 percent of the students showed signs of gambling problems, but 10 percent of the students posted scores high enough to put them in the at-risk category for Internet “addiction.”
Technology use was clearly interfering with the students’ daily lives, but it may be going too far to call it an addiction, says Nicki Dowling, a clinical psychologist who led the study. Ms. Dowling prefers to call it “Internet dependence.”
Typically, the concern about our dependence on technology is that it detracts from our time with family and friends in the real world. But psychologists have become intrigued by a more subtle and insidious effect of our online interactions. It may be that the immediacy of the Internet, the efficiency of the iPhone and the anonymity of the chat room change the core of who we are, issues that Dr. Aboujaoude explores in a book, “Virtually You: The Internet and the Fracturing of the Self,” to be released next year.
Dr. Aboujaoude also asks whether the vast storage available in e-mail and on the Internet is preventing many of us from letting go, causing us to retain many old and unnecessary memories at the expense of making new ones. Everything is saved these days, he notes, from the meaningless e-mail sent after a work lunch to the angry online exchange with a spouse.
“If you can’t forget because all this stuff is staring at you, what does that do to your ability to lay down new memories and remember things that you should be remembering?” Dr. Aboujaoude said. “When you have 500 pictures from your vacation in your Flickr account, as opposed to five pictures that are really meaningful, does that change your ability to recall the moments that you really want to recall?”
There is also no easy way to conquer a dependence on technology. Nicholas Carr, author of the new book “The Shallows: What the Internet Is Doing to Our Brains,” says that social and family responsibilities, work and other pressures influence our use of technology. “The deeper a technology is woven into the patterns of everyday life, the less choice we have about whether and how we use that technology,” Mr. Carr wrote in a recent blog post on the topic.
Some experts suggest simply trying to curtail the amount of time you spend online. Set limits for how often you check e-mail or force yourself to leave your cellphone at home occasionally.
The problem is similar to an eating disorder, says Dr. Kimberly Young, a professor at St. Bonaventure University in New York who has led research on the addictive nature of online technology. Technology, like food, is an essential part of daily life, and those suffering from disordered online behavior cannot give it up entirely and instead have to learn moderation and controlled use. She suggests therapy to determine the underlying issues that set off a person’s need to use the Internet “as a way of escape.”
The International Center for Media and the Public Agenda at the University of Maryland asked 200 students to refrain from using electronic media for a day. The reports from students after the study suggest that giving up technology cold turkey not only makes life logistically difficult, but also changes our ability to connect with others.
“Texting and I.M.’ing my friends gives me a constant feeling of comfort,” wrote one student. “When I did not have those two luxuries, I felt quite alone and secluded from my life. Although I go to a school with thousands of students, the fact that I was not able to communicate with anyone via technology was almost unbearable.”
"our ability to focus is being undermined by bursts of information"
June 6, 2010
Attached to Technology and Paying a Price
By MATT RICHTEL
SAN FRANCISCO — When one of the most important e-mail messages of his life landed in his in-box a few years ago, Kord Campbell overlooked it.
Not just for a day or two, but 12 days. He finally saw it while sifting through old messages: a big company wanted to buy his Internet start-up.
“I stood up from my desk and said, ‘Oh my God, oh my God, oh my God,’ ” Mr. Campbell said. “It’s kind of hard to miss an e-mail like that, but I did.”
The message had slipped by him amid an electronic flood: two computer screens alive with e-mail, instant messages, online chats, a Web browser and the computer code he was writing.
While he managed to salvage the $1.3 million deal after apologizing to his suitor, Mr. Campbell continues to struggle with the effects of the deluge of data. Even after he unplugs, he craves the stimulation he gets from his electronic gadgets. He forgets things like dinner plans, and he has trouble focusing on his family.
His wife, Brenda, complains, “It seems like he can no longer be fully in the moment.”
This is your brain on computers.
Scientists say juggling e-mail, phone calls and other incoming information can change how people think and behave. They say our ability to focus is being undermined by bursts of information.
These play to a primitive impulse to respond to immediate opportunities and threats. The stimulation provokes excitement — a dopamine squirt — that researchers say can be addictive. In its absence, people feel bored.
The resulting distractions can have deadly consequences, as when cellphone-wielding drivers and train engineers cause wrecks. And for millions of people like Mr. Campbell, these urges can inflict nicks and cuts on creativity and deep thought, interrupting work and family life.
While many people say multitasking makes them more productive, research shows otherwise. Heavy multitaskers actually have more trouble focusing and shutting out irrelevant information, scientists say, and they experience more stress.
And scientists are discovering that even after the multitasking ends, fractured thinking and lack of focus persist. In other words, this is also your brain off computers.
“The technology is rewiring our brains,” said Nora Volkow, director of the National Institute on Drug Abuse and one of the world’s leading brain scientists. She and other researchers compare the lure of digital stimulation less to that of drugs and alcohol than to food and sex, which are essential but counterproductive in excess.
Technology use can benefit the brain in some ways, researchers say. Imaging studies show the brains of Internet users become more efficient at finding information. And players of some video games develop better visual acuity.
More broadly, cellphones and computers have transformed life. They let people escape their cubicles and work anywhere. They shrink distances and handle countless mundane tasks, freeing up time for more exciting pursuits.
For better or worse, the consumption of media, as varied as e-mail and TV, has exploded. In 2008, people consumed three times as much information each day as they did in 1960. And they are constantly shifting their attention. Computer users at work change windows or check e-mail or other programs nearly 37 times an hour, new research shows.
The nonstop interactivity is one of the most significant shifts ever in the human environment, said Adam Gazzaley, a neuroscientist at the University of California, San Francisco.
“We are exposing our brains to an environment and asking them to do things we weren’t necessarily evolved to do,” he said. “We know already there are consequences.”
Mr. Campbell, 43, came of age with the personal computer, and he is a heavier user of technology than most. But researchers say the habits and struggles of Mr. Campbell and his family typify what many experience — and what many more will, if trends continue.
For him, the tensions feel increasingly acute, and the effects harder to shake.
The Campbells recently moved to California from Oklahoma to start a software venture. Mr. Campbell’s life revolves around computers.
He goes to sleep with a laptop or iPhone on his chest, and when he wakes, he goes online. He and Mrs. Campbell, 39, head to the tidy kitchen in their four-bedroom hillside rental in Orinda, an affluent suburb of San Francisco, where she makes breakfast and watches a TV news feed in the corner of the computer screen while he uses the rest of the monitor to check his e-mail.
Major spats have arisen because Mr. Campbell escapes into video games during tough emotional stretches. On family vacations, he has trouble putting down his devices. When he rides the subway to San Francisco, he knows he will be offline 221 seconds as the train goes through a tunnel.
Their 16-year-old son, Connor, tall and polite like his father, recently received his first C’s, which his family blames on distraction from his gadgets. Their 8-year-old daughter, Lily, like her mother, playfully tells her father that he favors technology over family.
“I would love for him to totally unplug, to be totally engaged,” says Mrs. Campbell, who adds that he becomes “crotchety until he gets his fix.” But she would not try to force a change.
“He loves it. Technology is part of the fabric of who he is,” she says. “If I hated technology, I’d be hating him, and a part of who my son is too.”
Always On
Mr. Campbell, whose given name is Thomas, had an early start with technology in Oklahoma City. When he was in third grade, his parents bought him Pong, a video game. Then came a string of game consoles and PCs, which he learned to program.
In high school, he balanced computers, basketball and a romance with Brenda, a cheerleader with a gorgeous singing voice. He studied too, with focus, uninterrupted by e-mail. “I did my homework because I needed to get it done,” he said. “I didn’t have anything else to do.”
He left college to help with a family business, then set up a lawn mowing service. At night he would read, play video games, hang out with Brenda and, as she remembers it, “talk a lot more.”
In 1996, he started a successful Internet provider. Then he built the start-up that he sold for $1.3 million in 2003 to LookSmart, a search engine.
Mr. Campbell loves the rush of modern life and keeping up with the latest information. “I want to be the first to hear when the aliens land,” he said, laughing. But other times, he fantasizes about living in pioneer days when things moved more slowly: “I can’t keep everything in my head.”
No wonder. As he came of age, so did a new era of data and communication.
At home, people consume 12 hours of media a day on average, when an hour spent with, say, the Internet and TV simultaneously counts as two hours. That compares with five hours in 1960, say researchers at the University of California, San Diego. Computer users visit an average of 40 Web sites a day, according to research by RescueTime, which offers time-management tools.
As computers have changed, so has the understanding of the human brain. Until 15 years ago, scientists thought the brain stopped developing after childhood. Now they understand that its neural networks continue to develop, influenced by things like learning skills.
So not long after Eyal Ophir arrived at Stanford in 2004, he wondered whether heavy multitasking might be leading to changes in a characteristic of the brain long thought immutable: that humans can process only a single stream of information at a time.
Going back a half-century, tests had shown that the brain could barely process two streams, and could not simultaneously make decisions about them. But Mr. Ophir, a student-turned-researcher, thought multitaskers might be rewiring themselves to handle the load.
His passion was personal. He had spent seven years in Israeli intelligence after being weeded out of the air force — partly, he felt, because he was not a good multitasker. Could his brain be retrained?
Mr. Ophir, like others around the country studying how technology bends the brain, was startled by what he discovered.
The Myth of Multitasking
The test subjects were divided into two groups: those classified as heavy multitaskers based on their answers to questions about how they used technology, and those who were not.
In a test created by Mr. Ophir and his colleagues, subjects at a computer were briefly shown an image of red rectangles. Then they saw a similar image and were asked whether any of the rectangles had moved. It was a simple task until the addition of a twist: blue rectangles were added, and the subjects were told to ignore them.
The multitaskers then did a significantly worse job than the non-multitaskers at recognizing whether red rectangles had changed position. In other words, they had trouble filtering out the blue ones — the irrelevant information.
So, too, the multitaskers took longer than non-multitaskers to switch among tasks, like differentiating vowels from consonants and then odd from even numbers. The multitaskers were shown to be less efficient at juggling problems.
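For readers who want the paradigm pinned down, here is a minimal sketch of a single trial of the change-detection task in Python. To be clear, this is my own reconstruction for illustration; the grid size, the rectangle counts, and the scoring rule are assumptions, not the Stanford team's actual materials:

# One trial: red (target) and blue (distractor) rectangles on a grid,
# then a second display in which one target may have moved.
import random

def make_trial(n_targets=2, n_distractors=4, grid=8, change=True):
    cells = random.sample(range(grid * grid), n_targets + n_distractors)
    targets, distractors = cells[:n_targets], cells[n_targets:]
    second = list(targets)
    if change:
        free = [c for c in range(grid * grid) if c not in cells]
        second[random.randrange(n_targets)] = random.choice(free)
    return targets, distractors, second

def scored(targets, second, answered_moved):
    """Score a subject's yes/no answer to 'did a red rectangle move?'"""
    return answered_moved == (set(targets) != set(second))

targets, distractors, second = make_trial(change=True)
print(scored(targets, second, answered_moved=True))  # True

What the sketch makes concrete is that the blue rectangles carry no information at all; any effect they have on a subject's accuracy is a pure failure to filter them out.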
Other tests at Stanford, an important center for research in this fast-growing field, showed multitaskers tended to search for new information rather than accept a reward for putting older, more valuable information to work.
Researchers say these findings point to an interesting dynamic: multitaskers seem more sensitive than non-multitaskers to incoming information.
The results also illustrate an age-old conflict in the brain, one that technology may be intensifying. A portion of the brain acts as a control tower, helping a person focus and set priorities. More primitive parts of the brain, like those that process sight and sound, demand that it pay attention to new information, bombarding the control tower when they are stimulated.
Researchers say there is an evolutionary rationale for the pressure this barrage puts on the brain. The lower-brain functions alert humans to danger, like a nearby lion, overriding goals like building a hut. In the modern world, the chime of incoming e-mail can override the goal of writing a business plan or playing catch with the children.
“Throughout evolutionary history, a big surprise would get everyone’s brain thinking,” said Clifford Nass, a communications professor at Stanford. “But we’ve got a large and growing group of people who think the slightest hint that something interesting might be going on is like catnip. They can’t ignore it.”
Mr. Nass says the Stanford studies are important because they show multitasking’s lingering effects: “The scary part for guys like Kord is, they can’t shut off their multitasking tendencies when they’re not multitasking.”
Melina Uncapher, a neurobiologist on the Stanford team, said she and other researchers were unsure whether the muddied multitaskers were simply prone to distraction and would have had trouble focusing in any era. But she added that the idea that information overload causes distraction was supported by more and more research.
A study at the University of California, Irvine, found that people interrupted by e-mail reported significantly increased stress compared with those left to focus. Stress hormones have been shown to reduce short-term memory, said Gary Small, a psychiatrist at the University of California, Los Angeles.
Preliminary research shows some people can more easily juggle multiple information streams. These “supertaskers” represent less than 3 percent of the population, according to scientists at the University of Utah.
Other research shows computer use has neurological advantages. In imaging studies, Dr. Small observed that Internet users showed greater brain activity than nonusers, suggesting they were growing their neural circuitry.
At the University of Rochester, researchers found that players of some fast-paced video games can track the movement of a third more objects on a screen than nonplayers. They say the games can improve reaction and the ability to pick out details amid clutter.
“In a sense, those games have a very strong both rehabilitative and educational power,” said the lead researcher, Daphne Bavelier, who is working with others in the field to channel these changes into real-world benefits like safer driving.
There is a vibrant debate among scientists over whether technology’s influence on behavior and the brain is good or bad, and how significant it is.
“The bottom line is, the brain is wired to adapt,” said Steven Yantis, a professor of brain sciences at Johns Hopkins University. “There’s no question that rewiring goes on all the time,” he added. But he said it was too early to say whether the changes caused by technology were materially different from others in the past.
Mr. Ophir is loath to call the cognitive changes bad or good, though the impact on analysis and creativity worries him.
He is not just worried about other people. Shortly after he came to Stanford, a professor thanked him for being the one student in class paying full attention and not using a computer or phone. But he recently began using an iPhone and noticed a change; he felt its pull, even when playing with his daughter.
“The media is changing me,” he said. “I hear this internal ping that says: check e-mail and voice mail.”
“I have to work to suppress it.”
Kord Campbell does not bother to suppress it, or no longer can.
Tuesday, August 24, 2010
On technology and being less creative
NYT article Aug 24, 2010
SAN FRANCISCO — It’s 1 p.m. on a Thursday and Dianne Bates, 40, juggles three screens. She listens to a few songs on her iPod, then taps out a quick e-mail on her iPhone and turns her attention to the high-definition television.
As Ms. Bates multitasks, she is also churning her legs in fast loops on an elliptical machine in a downtown fitness center. She is in good company. In gyms and elsewhere, people use phones and other electronic devices to get work done — and as a reliable antidote to boredom.
Cellphones, which in the last few years have become full-fledged computers with high-speed Internet connections, let people relieve the tedium of exercising, the grocery store line, stoplights or lulls in the dinner conversation.
The technology makes the tiniest windows of time entertaining, and potentially productive. But scientists point to an unanticipated side effect: when people keep their brains busy with digital input, they are forfeiting downtime that could allow them to better learn and remember information, or come up with new ideas.
Ms. Bates, for example, might be clearer-headed if she went for a run outside, away from her devices, research suggests.
At the University of California, San Francisco, scientists have found that when rats have a new experience, like exploring an unfamiliar area, their brains show new patterns of activity. But only when the rats take a break from their exploration do they process those patterns in a way that seems to create a persistent memory of the experience.
The researchers suspect that the findings also apply to how humans learn.
“Almost certainly, downtime lets the brain go over experiences it’s had, solidify them and turn them into permanent long-term memories,” said Loren Frank, assistant professor in the department of physiology at the university, where he specializes in learning and memory. He said he believed that when the brain was constantly stimulated, “you prevent this learning process.”
At the University of Michigan, a study found that people learned significantly better after a walk in nature than after a walk in a dense urban environment, suggesting that processing a barrage of information leaves people fatigued.
Even though people feel entertained, even relaxed, when they multitask while exercising, or pass a moment at the bus stop by catching a quick video clip, they might be taxing their brains, scientists say.
“People think they’re refreshing themselves, but they’re fatiguing themselves,” said Marc Berman, a University of Michigan neuroscientist.
Regardless, there is now a whole industry of mobile software developers competing to help people scratch the entertainment itch. Flurry, a company that tracks the use of apps, has found that mobile games are typically played for 6.3 minutes, but that many are played for much shorter intervals. One popular game that involves stacking blocks gets played for 2.2 minutes on average.
Today’s game makers are trying to fill small bits of free time, said Sebastien de Halleux, a co-founder of PlayFish, a game company owned by the industry giant Electronic Arts.
“Instead of having long relaxing breaks, like taking two hours for lunch, we have a lot of these micro-moments,” he said. Game makers like Electronic Arts, he added, “have reinvented the game experience to fit into micro-moments.”
Many business people, of course, have good reason to be constantly checking their phones. But this can take a mental toll. Henry Chen, 26, a self-employed auto mechanic in San Francisco, has mixed feelings about his BlackBerry habits.
“I check it a lot, whenever there is downtime,” Mr. Chen said. Moments earlier, he was texting with a friend while he stood in line at a bagel shop; he stopped only when the woman behind the counter interrupted him to ask for his order.
Mr. Chen, who recently started his business, doesn’t want to miss a potential customer. Yet he says that since he upgraded his phone a year ago to a feature-rich BlackBerry, he can feel stressed out by what he described as internal pressure to constantly stay in contact.
“It’s become a demand. Not necessarily a demand of the customer, but a demand of my head,” he said. “I told my girlfriend that I’m more tired since I got this thing.”
In the parking lot outside the bagel shop, others were filling up moments with their phones. While Eddie Umadhay, 59, a construction inspector, sat in his car waiting for his wife to grocery shop, he deleted old e-mail while listening to news on the radio. On a bench outside a coffee house, Ossie Gabriel, 44, a nurse practitioner, waited for a friend and checked e-mail “to kill time.”
Crossing the street from the grocery store to his car, David Alvarado pushed his 2-year-old daughter in a cart filled with shopping bags, his phone pressed to his ear.
He was talking to a colleague about work scheduling, noting that he wanted to steal a moment to make the call between paying for the groceries and driving.
“I wanted to take advantage of the little gap,” said Mr. Alvarado, 30, a facilities manager at a community center.
For many such people, the little digital asides come on top of heavy use of computers during the day. Take Ms. Bates, the exercising multitasker at the expansive Baker Gym. She wakes up and peeks at her iPhone before she gets out of bed. At her job in advertising, she spends all day in front of her laptop.
But, far from wanting a break from screens when she exercises, she says she couldn’t possibly spend 55 minutes on the elliptical machine without “lots of things to do.” This includes relentless channel surfing.
“I switch constantly,” she said. “I can’t stand commercials. I have to flip around unless I’m watching ‘Project Runway’ or something I’m really into.”
Some researchers say that whatever downside there is to not resting the brain, it pales in comparison to the benefits technology can bring in motivating people to sweat.
“Exercise needs to be part of our lives in the sedentary world we’re immersed in. Anything that helps us move is beneficial,” said John J. Ratey, associate clinical professor of psychiatry at the Harvard Medical School and author of “Spark: The Revolutionary New Science of Exercise and the Brain.”
But all things being equal, Mr. Ratey said, he would prefer to see people do their workouts away from their devices: “There is more bang for your buck doing it outside, for your mood and working memory.”
Of the 70 cardio machines on the main floor at Baker Gym, 67 have televisions attached. Most of them also have iPod docks and displays showing workout performance, and a few have games, like a rope-climbing machine that shows an animated character climbing the rope while the live human does so too.
A few months ago, the cable TV went out and some patrons were apoplectic. “It was an uproar. People said: ‘That’s what we’re paying for,’ ” said Leeane Jensen, 28, the fitness manager.
At least one exerciser has a different take. Two stories up from the main floor, Peter Colley, 23, churns away on one of the several dozen elliptical machines without a TV. Instead, they are bathed in sunlight, looking out onto the pool and palm trees.
“I look at the wind on the trees. I watch the swimmers go back and forth,” Mr. Colley said. “I usually come here to clear my head.”
Thursday, August 19, 2010
Research notes
here
Wednesday, July 21, 2010
First Day Sociology: Using Student Introductions to Illustrate the Concept of Norms
Author(s): Fletcher Winston
Source: Teaching Sociology, Vol. 35, No. 2 (Apr., 2007), pp. 161-165
Published by: American Sociological Association
Stable URL: http://www.jstor.org/stable/20058550
The Social Construction of Social Facts: Using the U.S. Census to Examine Race as a Scientific and Moral Category
Author(s): Eleanor Townsley
Source: Teaching Sociology, Vol. 35, No. 3 (Jul., 2007), pp. 223-238
Published by: American Sociological Association
Stable URL: http://www.jstor.org/stable/20058572
Saturday, January 23, 2010
Cevin Soling's The War on Kids
Here's a clip
Thursday, January 21, 2010
Pay to play, kids
After being charged with personally overseeing an extremely disappointing internship myself, I have been critical of cui bono when undergrads land these gigs. From what I saw, the one I was involved in helped the place far more than the students, by supplying bodies to keep the place open and do menial, low-level administrative tasks. Take that to the interview, kid!
Sure, I understand that experience counts, and that having someone to write you a recommendation is important. Still, having to do it for free PLUS pay for school is shitty -- especially when the take-away is so very little.
Who is quality controlling these things? Young people put their hopes and efforts in these. They deserve better.
Also, who decided that low-paying fields like Social Work should require this type of financial gouging? Within a society that views financial independence as the gatekeeper of adulthood, being weighed down by debt has huge implications for who is able to be a full member of our society. It also shapes our future political climate, as financial hardship can lessen empathy and fuel conservatism, and it can cut into families' ability to take care of their children and to keep them from having to enlist in the military.
This needs to be looked at much further!
--
My response to comment on the link:
Hm. I would argue that paying to work for free at a job that might or might not provide you the credentials to land a space in the job market (more like a job bodega these days) would not be better than learning theory. To start, I guess, you are losing a lot less money in the class. Not getting paid for an internship AND having to pay for school is problematic on many levels.
Also, who is to say that internships will open the doors to a career? While it is sweet that you have a good situation, Ashley, a lot of internships give interns little guidance, menial tasks, and little assistance in getting their foot in the door. Why are young people putting up with it? Maybe because, in true neoliberalist form, they are once again being told that they are the problem, and that they need to retool themselves. What needs to be retooled is our profit-driven system that has allowed jobs to go overseas, workers' rights to be eroded, and public education to be corporatized and inaccessible to the public. It's not enough to create job-ready people. No matter how much money people are willing to invest in themselves, if you want to get a job, there first need to be jobs.
Why are young people having to pay so much money to have a shot at being part of the adult working world? Why are they doing it? How will the immense debt that they carry with them from school shape their later lives? And the lengthening of adolescence continues...