Hospital selfies

I feel like every day there’s a new piece on selfies. One article claimed you could get lice from selfies, while another compiled a collection of selfies taken at funerals. Now, there is even a song titled “#SELFIE.” But these photos are more than quick self-portraits. These seemingly innocent pictures can also have major effects on a person’s well-being, as is evident in the expanding realm of trauma selfies. “Like the Funeral Selfie before it, the Hospital Selfie exposes a massive generational divide about the etiquette of self-expression and oversharing, especially in the face of disaster.”

If you search #amputee on Instagram, you’ll see a slew of photos of amputee patients, many of them selfies. There are similar hashtags too, like #amputeelife, #amputeeproblems and even #amputeeswag. All of these categories are filled with selfies of patients, most of whom have suffered major trauma.

“The more I repeat it the less real it becomes. The idea of sharing trauma, at least for me, is not so much to elicit anything back but just to get it out of me.”

Like the woman who live-tweeted the birth of her child, these posts have been met with criticism, but the cancer patients, amputees and other trauma victims don’t mind. They’re not using social media just to put their story out there. Instead, they’re using it to connect with people in the same situation. I may follow someone on Instagram based on a shared clothing style, just like they may connect with someone if they are both working through cancer treatments.

“When I was on Instagram, I would click on different hashtags of #brainsurgery or #craniotomy and see so many other people’s pictures of their scars. It was so cool. Just typing in a Google search on the Internet, that’s kind of what I was looking for. I was looking for affirmation from somebody else.”

The selfies are graphic, but they’re helping patients cope with their situations. In a way, they might even be helping them heal.

“With medical stuff, people don’t know how to talk about it and don’t know how to start the conversation. Putting it out there [on social media] really helps. It’s really hard to, but it really helps.”

Twitter: the new writing sample?

I was already fascinated by the power of social media, but then I read the headline, “How a middle-aged IT guy from Peoria tweeted his way into a writing job on Late Night with Seth Meyers,” and I was stunned. A man named Bryan Donaldson operated a Twitter account with the handle @TheNardvark. It was filled with jokes he couldn’t say around the workplace, and it quickly garnered thousands of followers. One important follower was Alex Baze, the head writer and producer for Late Night with Seth Meyers, but Donaldson wasn’t aware of the opportunity on the horizon. Baze kept a list of his favorite tweeters, so when it came time to hire a new writer for the show, he turned to Twitter.

“If I go to somebody’s Twitter, I can see what he’s been doing the last two years — you get a much more complete sense of how he writes,” he says. “It’s like you get to flip through somebody’s comedy notebook.”

"Twitter has democratized the process," Seth Meyers says. "We used to look at smaller samples, now you can look back and see what a person thought was funny for the past calendar year."

In this way, Twitter has become the modern writing sample. Gone are the days when employers would look at a resume and a well-written college paper (Meyers didn’t even know where Donaldson was from or what his job was). Now, employers are looking to social media. I discussed this phenomenon in a previous blog post where I invited future employers to check out my Facebook page and other social media accounts. In my post, I highlighted three criteria that employers look for on social media.

  1. Whether the candidate will be a good fit
  2. The candidate’s qualifications
  3. The candidate’s creativity

Based on the Twitter account, Baze thought Donaldson met these criteria, so he offered him the job.

“He still seems a bit dazed by the rapid, unexpected turn his life has taken. ‘I still don’t understand how this all works yet, this whole business,’ he admits, ‘I’m just starting out. But I gotta believe that the people who are not located in New York or L.A. have an equal voice now on the internet, so they’ll be easier to find.’”

For a long time, the Internet has served as a place for you to find jobs, and now, for the first time ever, it may become a place for jobs to find you.

It’s a bird, it’s a plane, it’s…Facebook?

As a class, we’ve looked into the process of “unbundling.” Technologies, including websites, will begin dividing into separate programs, giving users the opportunity to select exactly which ones they want to use. This trend has pros and cons; more specialized content could be delivered directly to users without all the “junk” they don’t want attached to it, but they’ll also pay dearly for these individual services. It’s hard to imagine exactly what this will look like, because the unbundling process is still in its early stages. We can, however, apply it to a site that most Americans use daily: Facebook. According to recent data, mobile apps will continue to play a major role in the mobile revolution, and one of the most popular apps is Mark Zuckerberg’s very own brainchild.

“Despite every hipster prediction otherwise, the company’s user base keeps growing, and nearly a fifth of the time that Americans spend on their smartphones is spent on Facebook. That surpasses the amount of time we spend on any other single service by a wide margin — and beats just about anything else we do on our phones, or perhaps in our lives, period.”

Let’s think about this. If we spend 20% of our mobile time on the Facebook app, where is the other 80% spent? Probably on messaging apps, entertainment apps or news apps. Now, what if Facebook designed apps to meet those needs? Paper, a new app designed by Facebook developers, delivers the news you want; each user can select the content they receive. So, if you usually spend 20% of your time looking at mobile news apps, you could now be devoting 40% of your time to Facebook without even knowing it. Paper is unique for several reasons, but perhaps the biggest one is that it looks very little like the Facebook app.

“In the past, [Zuckerberg] said, Facebook was one big thing, a website or mobile app that let you indulge all of your online social needs. Now, on mobile phones especially, Facebook will begin to splinter into many smaller, more narrowly focused services, some of which won’t even carry Facebook’s branding, and may not require a Facebook account to use.”

What we’ll likely begin to see is a host of apps by Facebook that look nothing like the original design. That way, if we’re spending time using apps other than Facebook, we could still be putting money in Zuckerberg’s pocket.

"What is a hashtag?"

“But, what is a hashtag?” my mom asked me over breakfast this morning. Sometimes, my parents ask me questions about social media and mass communication because they think I’ll know the answer. C’mon, I’m in the J-School and I’m supposed to know these things, right? The answer is complicated. Yes, I know what a hashtag is, but you try explaining it to someone who has no concept of social media. It’s more difficult than you’d think. Now, this discussion goes far beyond adults on Facebook or texts between parents and their kids in which autocorrect got the best of the adults. This is an outright war between adults and technology, and it’s not what you think. I’m going to use my parents as an example.

My dad is an electrical engineer, and he knows technology. I’m sure he even knows about technology that has yet to be developed, but he didn’t know he could take a picture on his phone in black and white. My mom, a paralegal, knows her way around the Internet. She texts, probably as much as me, but she lays her phone down on a flat surface and types with only her index fingers. It’s the wrong way, but it feels right to her. She has a side business where she lists things on eBay, so she knows how to take photos, upload them, shrink their dimensions, import documents and create a listing. But, when she signed up for Facebook three years ago, she typed “how to delete Dana Dean on facebook” into the status bar, thinking it was the search bar. Dana Dean, although I changed the name, was my mom’s high school friend who had added her on Facebook, but my mom no longer wanted to see her posts. (Luckily, I caught the status and frantically called my mom before any damage was done…as far as we know.)

A gap exists between adults and the way they use technology. They adopt it, as is evident in the growing number of adults on Facebook, but they use it in their own way. Now, think about the way we react to their attempts to integrate new technologies or networks into their lives. We laugh. In a post on adults writing on a restaurant’s Facebook page, Mary Madison explains her reaction to the posts.

“Technologically challenged parents writing on company's Facebook walls? Man this stuff is golden. Couldn't stop laughing and relating it to my mom's experience on Facebook.”

I am not at all saying it’s wrong to laugh at them (it really is hilarious), but we have to consider that what we were born into, they have to adopt. That’s the reason adults on Facebook are so funny. We know exactly what’s lame and what’s not because we were with it from the get-go, and these technologies and advancements are a central part of our lives. For adults, social media and technologies like it are just an addition.

Neither of my parents likes social media, so this morning I asked them why. My mom doesn’t like it because she doesn’t understand how to use it. My dad, who doesn’t use any form of social media and likely never will, doesn’t like it because he doesn’t see the practicality of it.

“I will only care what a hashtag is or does when you explain its practical use in my life,” he says.

And for the most part, I couldn’t explain that to him. I don’t know how using a hashtag would affect his daily life. As far as I can tell, there’s no effective way to fix the disconnect between adults and the way they adopt the next big thing. If I signed my mom up for Twitter today, no matter how much time I spent educating her on the dos and don’ts, she would not use the site like I do, or even the way most people do.

So, I will continue to laugh at the man who can build a cell phone battery but can’t figure out how to make a photo vertical, and the woman who runs a small business based entirely online, but will never understand the difference between her Facebook newsfeed and her own page.

Maybe one day I’ll be able to explain the purpose and practicality of a hashtag. For now, I think I’ll continue making up an answer and hoping my mom buys into it, like in the clip below. Most of the time, she will.

Twitter terrorism

Early this morning, a 14-year-old girl tweeted a terrorist threat at the American Airlines account. It was likely a joke, but the airline responded and turned the girl over to airport security and the FBI. Screenshots of the exchange are below.

I’m not sure why, but many people still don’t understand that they can be held accountable for what they post on social media. Maybe it’s because they feel as if their online persona is separate from their real identity. Regardless of the reason, people need to understand that the content they put on social media has real consequences. Unfortunately, this isn’t the first time this has happened. Just two months ago, a woman in Spain was arrested and charged with inciting terror through her tweets.

The use of social media is increasing, so why isn’t education about it? We need a system in place to teach new users what is and isn’t appropriate to post on these sites. The girl in the tweets above is only 14. Maybe she knew what could happen and just wanted the attention, but it’s also possible no one had ever taught her the dangers of making threats such as these online.

As much as we’d like to think that our online personas are separate from our real lives, our words are still binding.

Go home Glass

I remember sitting in JOMC 101 when Professor Robinson showed the class the video below.

I thought, I wonder what it will be like in 20 years when everyone has a pair of these glasses. Google Glass was a high-profile venture into wearable technology, but I didn’t expect it to reach the public for many years. Now, just over a year later, Google is selling Glass to the general public. Although the product will only be available to U.S. consumers for one day, it’s a big step toward an eventual widespread release. I have to admit, I’m surprised Glass became popular this quickly. A year ago, the video above looked like a glimpse of the future, but Google has made it very clear that the future is now.

This may be an unpopular opinion, but I don’t think Google Glass will catch on. If Google expects it to be the next iPhone (in terms of popularity and profits), I think the company is mistaken. Google can do a lot of things, but I don’t think it can force the population into purchasing a wearable that’s ahead of its time. When I saw the video, I imagined this product 20 years down the road, so maybe Google should have considered the same. Even supporters of wearables find problems with Glass.

“Wearables are a big bet -- one that will likely result in a lot of early failures. Google Glass, for instance, started as an exciting futuristic product and has become an overhyped niche gadget with a public relations problem (and it's still in beta).”

Google is even working with companies like Ray-Ban and Oakley to give Glass a more wearable design, but I doubt the company will be successful. Smartwatches and fitness trackers have already had a difficult time on the consumer market, with one-third of consumers abandoning them. Remember, those wearables are far less noticeable and less expensive than their sibling, Glass. April 15 is the big release day, but I wouldn’t mark my calendar.

The future is here, but I don’t think Glass should be quite yet. Take note, Google.

This is wholesome

Last month, Honey Maid started a campaign called “This is Wholesome,” featuring families of all types in a short advertisement. The 31-second clip was uploaded to the company’s YouTube and Facebook pages and showed a multitude of non-traditional families, including single fathers, a gay couple and a multiracial couple, with the taglines, “no matter how things change, what makes us wholesome never will” and “everyday wholesome snacks for every wholesome family."

“Because change happens in improbable ways, we now have Teddy Grahams embodying the struggle for basic human rights,” one blogger joked.

The video seemed to come onto the scene rather quietly compared to Coca-Cola’s 2014 Super Bowl ad. Most of the comments were positive, and the campaign was deemed a success. Unfortunately, the more popular the commercial became, the more backlash it sparked. Negative comments poured in on YouTube, Twitter and Facebook, likely from some of the same users who had a problem with Coke’s diversity ad. However, the ad gained the most notoriety after the official response video was posted (see below).

For all the negative comments, there were thousands more positive ones. Most sites that covered the story supported Honey Maid and the views expressed in the video. In fact, I found the video under the headline, “Honey Maid's Brilliant 'F*ck You' To Mean Commenters.” Not only was the video an excellent response to the critics, but it paved the way for other major companies to announce their support for the “This is Wholesome” campaign and the views behind it.


By posting the video on social media, the company received immediate feedback. They then used the negative comments to produce an advertisement far exceeding the goals of the first. Although it may not have been their goal, the campaign garnered more support from the response video than the original commercial. That’s what I call excellent marketing (and a brilliant use of social media).

The future of online education

Last summer, I took two online courses, HIST 128 and POLI 101, through UNC’s Friday Center. The journalism major requires both classes and some of my friends suggested taking them online in order to avoid the additional recitations in the fall and spring courses. I registered for the courses, paid the fee and began my 12-week classes at the end of May. By the end of June, I was almost positive I wasn’t going to pass.

I quickly found out online courses weren’t for me. It was summer, I was busy searching for internships and I struggled to find the motivation to complete the required assignments. It felt like busy work, and the reading was boring. Of course, the trouble I had with these courses could be partially attributed to the content (I already know everything there is to know about the Civil War). When I ran across this article discussing the effectiveness of MOOCs, or massive open online courses, I immediately related to the author’s story.

Her experience was a little different than mine; my online classes were offered through the university and affected my GPA at UNC, whereas she was enrolled in an Internet course not affiliated with a specific school. The idea, however, was the same. We both took courses online and have nothing to show for it.

This wasn’t my first experience with online education. In the last year, I registered on Codecademy, a site teaching basic coding, and Duolingo, a site offering various language courses. As of now, I don’t actively use either site, but my positive experiences with them further supported the idea of taking summer courses online. In fact, most of my real classes have an online component. I keep a blog for JOMC 240, I submit online paragraphs for PWAD 490 and I’m a member of several Facebook groups for my courses. Now, I even find it odd when classes don’t use Sakai (sorry, Prof. Robinson). Keep in mind, an online component is different from a course taught solely online. In my opinion, online components are effective in educating students and online courses are not.

Regardless, MOOCs are appealing for many reasons. For one, you never have to leave your home. Professors can teach thousands of students without any face-to-face communication. This saves time and money, but it has a major downside.

“MOOC professors are teaching thousands of students—hundreds of thousands in some cases—thus eliminating the intimacy of one-on-one interactions that are so beneficial in most offline classroom settings.”

Quizzes are submitted electronically and graded automatically. In many cases, the student won’t receive any feedback on their work. Both of my summer courses required two papers and two exams. I was also expected to write posts about the readings on an online discussion forum, as well as reply to my classmates’ posts. MOOCs are similar in that students are often given multiple attempts at a perfect quiz score and the essays are peer-reviewed, but just like in most areas of the Internet, anonymity became a problem. I didn’t know the other students in my class, but we were expected to peer-review one another’s papers. I found this task difficult; I couldn’t ask the writer questions about their content, and they couldn’t explain why they included certain points in their writing. It was ultimately ineffective. So, I didn’t put very much effort into the peer-review draft. The student on the other side was going to give me whatever grade they wanted anyway, so I didn’t take it as seriously as papers I wrote for other classes.

I’ll admit, motivation was a major problem. Even though I, unlike the author of the piece, had to pay college-level prices for my two courses, she said it best.

“When I’m taking a college-level course without paying college-level prices, or getting anything in return besides knowledge or a completion certificate, I simply won’t try as hard.”

Fred Wilson discusses three megatrends that are happening now, one of which is the unbundling of technology. He uses education and the learning process as an example of this trend.

“The classic university model has been around for 600-700 years, but we no longer need to be confined by the walls of a classroom – with a professor up front. We don’t need to build a library and fill it with books (we have eReaders now). The current university model is very expensive.”

We now have the ability to take courses online through sites like the ones I mentioned above, Codecademy and Duolingo. The delivery of content is unbundling; instead of attending UNC, I imagine some students could stay at home in front of their computer screen and learn a specific skill set.

I’ve failed to mention the big issue, though. I didn’t learn anything. I got my credit and went on my merry way, but I retained very little information. Education, whether online or in a classroom, is about learning the material, not just passing the course. In fact, the average retention rate for MOOCs is only four percent. The programs may be successful in handing out degrees and credits, but what do those really mean if we don’t learn anything?

“MOOCs provide invaluable resources for continuing education and opportunities for students to take courses they might not have otherwise taken. But when I compare my experience, albeit just one course, to the education I received at a traditional university, I wouldn’t trade my in-person college career for a suite of online class credentials, no matter how many university heavyweights stand behind them.”

Moving forward, we must be careful when transforming the online component of a course into the course itself, because frankly, I can’t imagine a generation of students with multiple degrees who really don’t know anything.

The breaking news algorithm

Last year, a researcher designed an algorithm that collects data from Wikipedia and Wikidata in order to discover breaking news topics. The algorithm has proven successful, identifying stories such as the Boston Marathon bombings and the disappearance of Malaysia Airlines Flight MH370, but researchers quickly learned users wanted more than just news. Human beings are highly visual, and we enjoy looking at pictures. The new algorithm, called the Social Media Illustrator, works with the news algorithm to combine images and breaking news. Arguably, when news is associated with photos, we’re able to understand the content in a more complete way. According to this post, there are many benefits to combining visual content with news. I encourage you to view the full text, but a few statistics are listed below.

- 90% of information transmitted to the brain is visual, and visuals are processed 60,000X faster in the brain than text.
- Visual content drives engagement.
- 85% of the US internet audience watches videos online.
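At its core, the original news algorithm is looking for sudden bursts of attention in Wikipedia activity. Here’s a minimal sketch of that kind of spike detection; this is my own hypothetical illustration, not the researchers’ actual method, and the window size and threshold are invented values:

```python
# Hypothetical sketch: flag an hour as "breaking" when its activity
# count (e.g., edits or page views for a topic) jumps well above the
# average of the preceding window. Window and threshold are
# illustrative choices, not the researchers' parameters.

def detect_spikes(counts, window=24, threshold=5.0):
    """Return indices where counts[i] is at least `threshold` times
    the average of the previous `window` observations."""
    spikes = []
    for i in range(window, len(counts)):
        baseline = sum(counts[i - window:i]) / window
        if baseline > 0 and counts[i] >= threshold * baseline:
            spikes.append(i)
    return spikes

# A flat signal with one sudden burst at hour 30:
activity = [10] * 30 + [120] + [12] * 10
print(detect_spikes(activity))  # → [30], the hour the burst begins
```

A real system would obviously need much more (per-topic baselines, smoothing, filtering out vandalism), but the basic "compare now to the recent past" idea is the same.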

The use of images with breaking news could be beneficial to viewers, but there may also be negative effects of this connection. I find several issues with the Social Media Illustrator, as well as the original news algorithm.

“One problem is that in many cases, it is not at all clear what breaking news stories the images refer to.”

When something happens, news organizations may not know every detail in the beginning. Often, they’ll report on what they know and add details as they go along. They certainly don’t have access to a database of photos connected to the event. Even if they did, it probably wouldn’t be possible to build a complete story with those images. Wouldn’t that mean an algorithm would have trouble as well, attempting to compile a set of images that could encompass the details of a breaking news story in a way that would make sense to viewers? Don’t forget time is a major factor in this news algorithm; breaking news only remains “breaking” for so long. Building a comprehensive story with images is possible, but I’m not sure I can fully trust an algorithm to provide me with an unbiased, full news account.

Images have the potential to limit viewer imagination. After viewing a set of photos connected to a news event, the user may view that event in terms of the visual representation. It’s difficult to entertain different thoughts after you’ve been given images claiming to represent the breaking news story. Therefore, this algorithm could potentially act as a filter bubble for viewers.

Since the article mentioned the Boston Marathon bombing, I’ll use that breaking news story as an example. If the image algorithm had been used as details were unfolding during that event, users would have received incorrect information based on the photos popular at the time. Sure, I can remember seeing the bloody photos of the victims, but I also distinctly remember photos of two men (later labeled the “Bag Men” by the New York Post) who were originally thought to be the bombers. The image was popularized on social media sites and news sites alike, but the photo didn’t depict the two bombers, just two ordinary men viewing the marathon from the sideline. Since it was popular, the algorithm might have picked it up and used it in connection with its breaking news coverage of the event. Not only does this limit the scope of the story, but it facilitates the spread of incorrect information.

It’s also difficult to measure what events fall into the category of breaking news. Different subjects are important to different people. Although we can all agree events like the Boston Marathon bombing are important, what about events with less direct effect on viewers? A celebrity’s car accident may not be “breaking news” to everyone, but it’s certainly popular news. These algorithms could limit what we as viewers are supposed to care about. Users may begin to believe a lack of breaking news coverage or images suggests a lack of importance. In other words, if there isn't a diverse array of photos connected with the story or updates every five minutes, it must not be worthy of the title of breaking news.

Perhaps the most troubling effect of these algorithms is the lack of human connection with the viewers.

"It’s quite possible that some of the news we consume in the future will be spotted, evaluated and written and illustrated by an algorithm.”

If a formula can tell us everything we need to know about a story (including photos), why do we need journalists? Right now, the media dictates what should be considered breaking news. The media is made up of thousands of people and networks who write and share their thoughts and ideas with the rest of the world; if most of them decide an event should be considered “breaking news,” it usually becomes such. They work together to quickly assemble a story for the citizens. In this way, viewers receive diverse opinions and viewpoints. Now, imagine a single algorithm dictating what breaking news should be. The lack of input from journalists and news organizations could have dire consequences.

“So far, these algorithms are relatively crude and human journalists generally do a significantly better job.”

And for that, I am thankful. I don’t know about you, but I don’t like an algorithm telling me what to think.

Viral potential

Following our discussion on viral content, I looked into a more scientific study of why things go viral and how that potential can be predicted. A group in California, with the help of other researchers across the nation, designed a study to observe photos on Facebook and measure whether their “sharing cascades” had the potential to go viral. They measure the number of shares when the content is first introduced on the site, claiming the number of shares must double in order for the photo to spread quickly. If the number of shares doubles, the content has the potential to go viral. At a later stage, they begin paying attention to the number of shares the Facebook photo garners over time, because “the greater the number of observed reshares, the better the prediction.” The researchers have even developed an algorithm and trained a machine to look for certain features in the images.

“These features include the type of image, whether a close-up or outdoors or having a caption and so on; the number of followers the original poster has; the shape of the cascade that forms, whether a simple star graph or more complex structures; and finally how quickly the cascade takes place, its speed.”

As it turns out, the algorithm is accurate almost 80% of the time. In other words, researchers can predict what content will go viral about 8 times out of 10, but I’m skeptical. Although this is one of the first studies of its kind, it leaves a lot of questions unanswered and fails to address two major factors related to viral content, the first of which is the ambiguity of elements in viral content. The researchers claim their algorithm takes into account the features of the image, but it is likely impossible to encapsulate all the random features of every image. Cat videos have the potential to go viral, but no two are just alike. To truly measure the viral potential of the content, researchers may have to label the video, sorting it into a category. So, do all cat videos get the same label?
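As I read it, the simplest version of the study’s rule is that a cascade keeps its viral potential only while its share count keeps doubling between observations. A minimal sketch of that heuristic (the function name and the idea of fixed checkpoints are my own assumptions, not the researchers’ code):

```python
# Hypothetical sketch of the doubling rule: observe the cumulative
# reshare count at successive, equally spaced checkpoints and call the
# cascade a viral candidate only if the count at least doubles at
# every step.

def has_viral_potential(reshares_at_checkpoints):
    """reshares_at_checkpoints: cumulative share counts observed at
    successive times. Returns True if each count is at least double
    the previous one."""
    counts = reshares_at_checkpoints
    return all(later >= 2 * earlier
               for earlier, later in zip(counts, counts[1:]))

print(has_viral_potential([5, 12, 30, 64]))  # True: doubles each step
print(has_viral_potential([5, 12, 18, 20]))  # False: growth stalls
```

The study’s actual model layers image features, follower counts and cascade shape on top of this, which is presumably where most of that 80% accuracy comes from.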

The second factor the study fails to address in detail is time. The video below was posted on January 8, 2010, and has almost 40 million views on YouTube.

Theoretically, the algorithm could be applied to the video when it was posted in order to measure its potential to reach viral status (which it did). However, take a look at the graphs below taken from the “stats” section of the video information.

By looking at the cumulative and daily viewing totals, we can see the video didn’t reach viral status until six months after it was posted. How would the algorithm be applied in this situation? This study measures viral content based on potential at the beginning, so it may not be applicable to videos such as this one. Regardless, if this formula is perfected, it could unleash an entirely new wave of advertising and message dissemination online, specifically on social media. If organizations are able to predict what content will go viral, they could include features that would increase viral potential.
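In hindsight, a delayed take-off like this one is easy to find: just scan the daily view counts for the first day that dwarfs everything before it. A hypothetical sketch of that idea (the factor and the median-based baseline are my own choices, not part of the study, which is exactly the point: the study’s early-window prediction would have run long before this day arrived):

```python
# Hypothetical sketch: find the "take-off" day of a video from its
# daily view counts -- the first day whose views are at least `factor`
# times the median of all earlier days. Factor is an invented value.

def takeoff_day(daily_views, factor=10):
    """Return the index of the first day daily views jump to at least
    `factor` times the median of all preceding days, or None."""
    for day in range(1, len(daily_views)):
        earlier = sorted(daily_views[:day])
        median = earlier[len(earlier) // 2]
        if median > 0 and daily_views[day] >= factor * median:
            return day
    return None

# Roughly six quiet months (~180 days) followed by a sudden surge:
views = [200] * 180 + [5000, 40000, 90000]
print(takeoff_day(views))  # → 180, the day the surge begins
```

An early-window predictor looking at the first few weeks of this series would see nothing but flat noise, which is why I’m not sure the study’s approach applies to slow-burn videos at all.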