Persuasion Technologies in the Digital Age: The Good, The Bad, and the Ugly
Written on 05 September 2017
by Ruth Fisher, PhD
Persuasion technologies include methods and techniques derived from behavioral psychology and behavioral economics used to shape the choices people make. The favorable environment for using such methods, enabled by people’s increasing use of computers and smartphones, has led to the proliferation of their use by software developers.
Like any technology, persuasion technologies can be used for good or evil. However, people’s increasing dependence on digital technologies, together with the growing prevalence of software developers’ use of persuasion technologies, has created emergent behavior in society that’s downright ugly: extremism, outrage, and divisiveness among members of society.
This analysis is closely tied to a previous analysis I performed, “Information Distortions on the Internet.”
This analysis will examine
- The nature of persuasive technologies
- The game between software developers and users that has created an environment of good, bad, and ugly
- How the environment might be changed to create more favorable social outcomes
Description of Persuasive Technologies
What They Include
Wikipedia defines persuasive technology as follows.
Persuasive technology is broadly defined as technology that is designed to change attitudes or behaviors of the users through persuasion and social influence, but not through coercion.
Persuasive technologies have their basis in behavioral psychology and behavioral economics. B.F. Skinner pioneered “behaviourism” with his experiments during the early 1930s on conditioning animals to push a lever to receive food. More recent behavioral psychologists and economists, notably Amos Tversky, Daniel Kahneman, Richard Thaler, Cass Sunstein, and Dan Ariely, have “established a cognitive basis for common human errors that arise from heuristics and biases.” Wikipedia provides an extensive list of cognitive biases that have been recognized.
In 1996, BJ Fogg coined the term captology “to describe the overlap between persuasion and computers.” Ian Leslie describes captology in more detail in The Scientists Who Make Apps Addictive.
Fogg called for a new field, sitting at the intersection of computer science and psychology, and proposed a name for it: “captology” (Computers as Persuasive Technologies). Captology later became behaviour design, which is now embedded into the invisible operating system of our everyday lives. The emails that induce you to buy right away, the apps and games that rivet your attention, the online forms that nudge you towards one decision over another: all are designed to hack the human brain and capitalise on its instincts, quirks and flaws. The techniques they use are often crude and blatantly manipulative, but they are getting steadily more refined, and, as they do so, less noticeable.
This analysis focuses primarily on the use of persuasive technologies in the digital arena.
Hardware and software providers use the insights derived by behaviorists to incorporate specific design methods into their technologies so as to encourage users to take actions that are favorable to the technology providers. Nanette Byrnes describes this in more detail in Technology and Persuasion.
Insights from psychology and behavioral economics about how and why people make certain choices, combined with digital technologies, social media, and smartphones, have enabled designers of websites, apps, and a wide variety of other products to create sophisticated persuasive technologies.
With new digital tools, companies that might once have been simply hardware makers (such as Jawbone) or service providers (Expedia) are now taking on the role of influencer, attempting to shape the habits of their users by exploiting the psychological underpinnings of how people make choices.
Why the Concern?
Governments have used persuasive technologies throughout history in the form of propaganda. And since the dawn of commercialism, vendors have incorporated persuasive techniques into their campaigns to enhance the impact of their advertising on consumer purchases. So why is the use of persuasion technologies such a big deal now? James Williams, a PhD student at the Oxford Internet Institute, studies the ethics of attention and persuasion in technology design. In Staying free in a world of persuasive technologies, he explains why persuasion technologies have become such a big deal: the current environment has created a perfect storm that enables persuasion technologies to have a large impact on people.
Advertising has been around for centuries, so we might assume that we have become clever about recognizing and negotiating it — what is it about these online persuasive technologies that poses new ethical questions or concerns?
The ethical questions themselves aren’t new, but the environment in which we’re asking them makes them much more urgent. There are several important trends here. For one, the Internet is becoming part of the background of human experience: devices are shrinking, proliferating, and becoming more persistent companions through life. In tandem with this, rapid advances in measurement and analytics are enabling us to more quickly optimise technologies to reach greater levels of persuasiveness. That persuasiveness is further augmented by applying knowledge of our non-rational psychological biases to technology design, which we are doing much more quickly than in the design of slower-moving systems such as law or ethics. Finally, the explosion of media and information has made it harder for people to be intentional or reflective about their goals and priorities in life. We’re living through a crisis of distraction. The convergence of all these trends suggests that we could increasingly live our lives in environments of high persuasive power.
How often, exactly, are we exposed to persuasive technologies? James Williams notes that every time we use a “free” service on the internet, we become victims of persuasive technologies.
Broadly speaking, most of the online services we think we’re using for “free” — that is, the ones we’re paying for with the currency of our attention — have some sort of persuasive design goal.
It gets worse. With the advent of AI in general, and personal assistants in particular, AIs can already track all our actions, and they will soon be able to read our moods at any point in time. They will also come to know our weaknesses and vulnerabilities. Combine persuasive technologies with mood detection and that knowledge, and our personal assistants will soon be able to prey on our weaknesses precisely when we’re most susceptible. That’s quite the power, indeed.
Legality
The term subliminal advertising was coined by James Vicary. According to Wikipedia
He is most famous for having perpetrated a fraudulent subliminal advertising study in 1957. In it, he claimed that an experiment in which moviegoers were repeatedly shown 1/3000-second advertisements for Coca-Cola and popcorn significantly increased product sales.
Many sources indicate that subliminal advertising was banned in the US (and other countries) in 1974. However, according to the FCC, this is not the case. There is a policy statement, not an enforceable rule, indicating that subliminal advertising is "contrary to the public interest." The FCC does regulate broadcasters, but it is not clear whether that regulation prohibits the use of subliminal techniques.
The FCC has no formal rules on "subliminal" advertising.
The FCC has no formal rules on the use of "subliminal perception" techniques. In fact, the Commission appears to have addressed the issue only twice. In 1974, the agency issued a policy statement that the use of "subliminal perception" is "contrary to the public interest." But policy statements are not enforceable rules. Nor would it be appropriate for the Commission to fine a person for failure to comply with a policy statement.
Since 1974, there has been only one instance in which the Commission has received and acted on a "subliminal" message complaint. In that matter, KMEZ(FM) a radio station in Dallas, Texas, was merely "admonished for its repeated transmission of subliminal messages on November 19, 1987" during an anti-smoking program on behalf of the American Cancer Society.
…
The Commission has no authority to regulate advertisers, such as the Republican National Committee.
Federal law authorizes the FCC to regulate its licensees, such as broadcasters. The Commission does not regulate advertisers. Thus, to the extent that the Commission's 1974 policy statement on "subliminal techniques" has any force, it is only with respect to broadcasters. Consistent with this principle, the 1974 statement plainly places the responsibility of compliance on broadcast licensees, not advertisers.
The Gurus
As indicated above, BJ Fogg coined the term captology and pioneered the use of persuasion in digital technologies. He’s a professor at Stanford University, where he founded the Stanford Persuasive Technology Lab.
Ian Leslie notes that
B.J. Fogg comes from a Mormon family, which has endowed him with his bulletproof geniality and also with a strong need to believe that his work is making the world a better place.
BJ Fogg had two students who have also become well known in the arena of persuasive digital technologies: Nir Eyal and Tristan Harris.
Nir Eyal is known for promoting the “hook,” a technique that gets users hooked on, even addicted to, an application or its content. However, as Ted Greenwald indicates in Compulsive Behavior Sells
He [Nir Eyal] opens with a disclaimer. “I’m not an advocate for creating addiction,” he says. “Addiction has a specific definition: it always hurts the user. I talk about the pathways for addiction because the same things that occur in the brain help us do something that can be good.”
Tristan Harris is a strong advocate for helping people understand the nature of persuasive technologies. He exposes the potential harm, particularly wasted time, that they inflict on users. He promotes the use of persuasive technologies to help people manage their use of technology in conscious and purposeful ways, so that they feel the time they spend with technology is “time well spent.”
Persuasive Techniques
“Put Hot Triggers in the Path of Motivated People”
BJ Fogg’s motto is “put hot triggers in the path of motivated people.” The key to Fogg’s methodology is to make sure three components are all present at the same time:
(i) A hot trigger, that is, a trigger that a user can easily act upon;
(ii) The ability of the user to act on that trigger; and
(iii) The motivation of the user to act on that trigger.
In The Scientists Who Make Apps Addictive, Ian Leslie explains Fogg’s methodology.
Fogg explained the building blocks of his theory of behaviour change. For somebody to do something … three things must happen at once. The person must want to do it, they must be able to, and they must be prompted to do it. A trigger – the prompt for the action – is effective only when the person is highly motivated, or the task is very easy. If the task is hard, people end up frustrated; if they’re not motivated, they get annoyed.
…
When motivation is high enough, or a task easy enough, people become responsive to triggers ... The trigger, if it is well designed (or “hot”), finds you at exactly the moment you are most eager to take the action. The most important nine words in behaviour design, says Fogg, are, “Put hot triggers in the path of motivated people.”
Anthony Wing Kosner, in “Stanford's School Of Persuasion: BJ Fogg On How To Win Users And Influence Behavior,” describes the essence of Fogg’s methodology.
the path of least resistance is to tap existing motivations and make a behavior easier to achieve. Most successful mobile apps create new habits for users by making it easier to do something that they already do or want to do.
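Fogg’s three components can be made concrete in a few lines of code. The following is a minimal sketch with hypothetical numeric scales and threshold (Fogg’s model is conceptual, not numeric): a behavior fires only when a trigger arrives while motivation and ability are jointly high enough.

```python
# A minimal sketch of Fogg's trigger/ability/motivation convergence.
# The numeric scales and threshold are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Moment:
    motivation: float  # 0.0 (indifferent) .. 1.0 (highly motivated)
    ability: float     # 0.0 (very hard task) .. 1.0 (trivially easy task)
    trigger: bool      # is a prompt present right now?

ACTIVATION_THRESHOLD = 1.0  # hypothetical: motivation and ability trade off against it

def behavior_occurs(m: Moment) -> bool:
    """A behavior fires only when a trigger arrives while the user is above
    the activation threshold (motivated enough, or the task easy enough)."""
    return m.trigger and (m.motivation + m.ability) >= ACTIVATION_THRESHOLD

# "Put hot triggers in the path of motivated people":
print(behavior_occurs(Moment(motivation=0.9, ability=0.8, trigger=True)))   # True
print(behavior_occurs(Moment(motivation=0.9, ability=0.8, trigger=False)))  # False: no trigger
print(behavior_occurs(Moment(motivation=0.2, ability=0.3, trigger=True)))   # False: frustration/annoyance zone
```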
The Hook
Nir Eyal calls his method of persuasion the hook. The hook consists of a four-step loop, consisting of a trigger, an action, a reward, and an investment, that “leads users into a repetitive cycle that transforms tentative actions into irresistible urges.” Ted Greenwald describes the hook in detail.
It starts with a trigger, a prod that propels users into a four-step loop. Think of the e-mail notification you get when a friend tags you in a photo on Facebook. The trigger prompts you to take an action—say, to log in to Facebook. That leads to a reward: viewing the photo and reading the comments left by others. In the fourth step, you inject a personal stake by making an investment: say, leaving your own comment in the thread.
…
Still, the reward must promise enough pleasure to drive people to take the intended action. In training animals to execute complex behaviors, Skinner discovered that varying the payoff— from highly desirable to nothing at all —both increases a behavior’s frequency and helps keep it from fading once the rewards stop.
A classic example is slot-machine gambling. The player never knows whether the next pull might bring a $5 win or a $50,000 jackpot. The unpredictability of the reward—and the randomness of its arrival—is a powerful motivator to pull the lever again and again.
…
The hook’s final stage, investment, closes the loop by “loading the next trigger,” Eyal says, an idea inspired in part by work on game psychology by Jesse Schell, a Disney Imagineer turned Carnegie Mellon professor. Take Twitter. When you make an investment by posting a tweet, a follower’s reply to your contribution triggers an e-mail notification to your in-box, inciting you to take yet another spin through the cycle.
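To see how the four steps chain together, here is a minimal, hypothetical sketch of the trigger, action, variable reward, investment loop. The class, reward list, and probabilities are my own illustration, not Eyal’s code; the essential features are that the reward is intermittent and variable, per Skinner, and that the user’s investment seeds the next trigger.

```python
# A hypothetical sketch of the hook cycle: trigger -> action -> variable reward -> investment.
# Names, rewards, and probabilities are illustrative only.

import random

class HookedUser:
    def __init__(self):
        self.pending_triggers = ["a friend tagged you in a photo"]  # external trigger
        self.sessions = 0

    def run_cycle(self):
        if not self.pending_triggers:
            return
        trigger = self.pending_triggers.pop()                    # 1. trigger
        print(f"Notification: {trigger} -> user opens the app")  # 2. action
        reward = random.choice([None, "2 likes", "a funny comment", "10 new followers"])
        if reward is not None:                                   # 3. variable (intermittent) reward
            print(f"Reward: {reward}")
        self.sessions += 1
        # 4. investment: the user's own contribution "loads the next trigger"
        print("User leaves a comment...")
        self.pending_triggers.append("someone replied to your comment")

user = HookedUser()
for _ in range(3):
    user.run_cycle()   # each pass through the loop seeds the next one
```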
Hijacking Users’ Minds
Tristan Harris introduces other techniques and how they are used to “hijack” users’ minds. He lists ten techniques in particular that developers exploit to lead users where the developers want them to go. From Tristan Harris (May 18, 2016) How Technology is Hijacking Your Mind — from a Magician and Google Design Ethicist (using my words to describe Harris’ points):
Hijack #1: If You Control the Menu, You Control the Choices: The menu includes only the choices the developer wants you to make, not all the choices actually available.
Hijack #2: Put a Slot Machine In a Billion Pockets: The #1 psychological ingredient in slot machines: intermittent variable rewards.
Hijack #3: Fear of Missing Something Important (FOMSI): Induce a “1% chance you could be missing something important.”
Hijack #4: Social Approval: The need to belong, to be approved or appreciated by our peers is among the highest human motivations.
Hijack #5: Social Reciprocity (Tit-for-tat)
Hijack #6: Bottomless Bowls, Infinite Feeds, and Autoplay: Remove the natural stopping cues, so consumption continues by default (see the sketch after this list)
Hijack #7: Instant Interruption vs. “Respectful” Delivery
Hijack #8: Bundling Your Reasons with Their Reasons: For example, when you want to look up a Facebook event happening tonight (your reason) the Facebook app doesn’t allow you to access it without first landing on the news feed (their reasons)
Hijack #9: Inconvenient Choices: Make what developers want you to do easy to do, while making what users want to do difficult to do.
Hijack #10: Forecasting Errors, “Foot in the Door” strategies: Ask for a small innocuous request to begin with (“just one click to see which tweet got retweeted”) and escalate from there (“why don’t you stay awhile?”).
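The “bottomless bowl” of Hijack #6 is easy to see in code: when the next batch of content loads automatically, the design removes the stopping cue that a “next page” link would otherwise provide. A minimal, hypothetical sketch (the fetch function is a stand-in for a real API call):

```python
# A hypothetical sketch of an "infinite feed": the next batch of items loads
# automatically, so the design never offers a natural stopping point.

def fetch_page(cursor: int, page_size: int = 10) -> list:
    # Stand-in for a server call; a real app would hit an API endpoint here.
    return [f"post #{cursor + i}" for i in range(page_size)]

def infinite_feed():
    cursor = 0
    while True:                      # no terminal "you're all caught up" state
        for post in fetch_page(cursor):
            yield post
        cursor += 10                 # silently queue up the next batch

feed = infinite_feed()
for _, post in zip(range(25), feed):  # the user, not the design, has to decide when to stop
    print(post)
```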
Recommendation Engines
Another increasingly prevalent form of persuasion technology is the customization of experiences for individual users, most notably through recommendation engines. From Tim Mullaney, “Everything Is a Recommendation”
But today, new technologies and much bigger arrays of available data are taking recommendation engines like the one Barneys uses to a new place, making them less obvious to the user but more important to website operations.
…
When more sophisticated recommendation engines entice casual browsers with such tailored page selections, the chance they will buy something triples, says Matt Woolsey, executive vice president for digital at Barneys.
While some might consider the use of personal data to tailor recommendations an invasion of privacy, about half of users approve of such practices if they make their lives easier. As Accenture reports in “Dynamic Digital Consumers,”
Consumers value personalized services that make life easier and seamless and 50 percent are comfortable with services becoming increasingly personalized through the use of large amounts of personal data.
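Recommendation engines like the one described above typically score candidate items by how strongly they are associated with what a shopper has already browsed or bought. The following is a minimal, hypothetical item-to-item sketch; the session data and the simple co-occurrence scoring are illustrative only, and production systems are far more sophisticated.

```python
# A minimal, hypothetical item-to-item recommender: score candidate items by
# how often they co-occur with items the user has already browsed.

from collections import Counter

# Illustrative purchase histories (a real system would use millions of sessions).
sessions = [
    {"loafers", "belt", "cashmere sweater"},
    {"loafers", "belt"},
    {"cashmere sweater", "scarf"},
    {"loafers", "scarf", "belt"},
]

def recommend(browsed: set, top_n: int = 2) -> list:
    scores = Counter()
    for session in sessions:
        if browsed & session:                  # session overlaps the shopper's interests
            for item in session - browsed:     # credit the items they haven't seen yet
                scores[item] += 1
    return [item for item, _ in scores.most_common(top_n)]

print(recommend({"loafers"}))  # e.g. ['belt', 'cashmere sweater']: a tailored page selection
```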
Dark Patterns
Many persuasion techniques, collectively called Dark Patterns, are used to trick or manipulate users into taking actions they would not consciously take. From DarkPatterns.org, “Dark Patterns: fighting user deception worldwide”
A Dark Pattern is a user interface that has been carefully crafted to trick users into doing things, such as buying insurance with their purchase or signing up for recurring bills.
Normally when you think of “bad design”, you think of the creator as being sloppy or lazy but with no ill intent. This type of bad design is known as a “UI anti-pattern.” Dark Patterns are different – they are not mistakes, they are carefully crafted with a solid understanding of human psychology, and they do not have the user’s interests in mind. We as designers, founders, UX & UI professionals and creators need to take a stance against Dark Patterns.
DarkPatterns.org describes 14 different kinds of dark patterns:
- Bait and Switch
- Disguised Ads
- Faraway Bill
- Forced Continuity
- Forced Disclosure
- Friend Spam
- Hidden Costs
- Misdirection
- Price Comparison Prevention
- Privacy Zuckering
- Roach Motel
- Road Block
- Sneak into Basket
- Trick Questions
There are other types of manipulative techniques as well, for example, manipulative defaults and manipulative copywriting (see Alex Birkett, “Online Manipulation: All The Ways You’re Currently Being Deceived”).
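Several of these patterns, Sneak into Basket and manipulative defaults in particular, boil down to an opt-out default wired into the checkout logic. A minimal, hypothetical sketch (the item and prices are invented):

```python
# A hypothetical sketch of the "Sneak into Basket" / manipulative-default pattern:
# an extra paid item is included unless the user explicitly opts out.

def checkout_total(cart: dict, declined_protection: bool = False) -> float:
    total = sum(cart.values())
    if not declined_protection:          # dark pattern: the default is "yes, add it"
        total += 5.99                    # "shipment protection" the user never asked for
    return round(total, 2)

cart = {"headphones": 49.00}
print(checkout_total(cart))                              # 54.99: the hidden cost sneaks in
print(checkout_total(cart, declined_protection=True))    # 49.00: only if the user notices the box
```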
The Good, The Bad, and The Ugly
Like every technology, persuasion technologies can be used for good or evil. The ugly emerges when actions by individual users combine to form broad effects at the social level.
The Good
Persuasion technologies are increasingly being used to help people do things they want to do (such as stick to a diet or health regimen) or should do (such as save money for retirement or register as an organ donor) but wouldn’t do without the appropriate help.
For example, John D. Sutter describes “5 'persuasive' technologies to help you be good.”
- OPOWER: Collects energy data from the home and displays it in a chart that compares your energy use to that of your neighbors in aggregate. Such exposure causes 60 to 80 percent of people to change their energy behaviors, the company says.
- GlowCaps: A special cap fits on top of a standard pill bottle and lights up when the patient needs to take his or her medicine. The caps are also wireless-enabled and send reports over cellular networks about how well a person is sticking to his or her medication schedule, according to a company spokeswoman.
- Withings scale: Weigh yourself on the Wi-Fi-enabled scale and, if you choose, all of your Twitter followers or Facebook friends will be instantly blasted with your current weight.
- Hybrid-car displays: In hybrids such as the Toyota Prius and Ford Fusion, dashboard panels tell the driver how efficiently he or she is driving at any given moment.
More generally, Anna Ho describes the empowering nature of such tools in The Rise of Motivational Technologies.
This increasing readiness for smart products and services that empower consumers to achieve specific goals marks the advent of a new era in technology, something I characterize as the rise of motivational technologies.
Motivational technologies, more explicitly than persuasive technologies, are a form of interactive technology designed to appeal to a user’s intrinsic motivations, empowering them to achieve a desired habit or outcome. Moreover, motivational technologies are holistically designed to activate lasting behavior change, not just shape behavior at a specific point of interaction.
…
Motivational technologies leverage the intrinsic motivations of users and help them achieve their desired outcomes.
As previously indicated, despite the privacy concerns, recommendations that incorporate user data to match user preferences more closely provide higher utility to about half of users.
The Bad
A clear problem with persuasion technologies is the ability of developers to use them to manipulate users into taking actions they otherwise wouldn’t take, namely purchasing certain products or services.
Another clear problem is the use of persuasion technologies by developers for other nefarious purposes, such as manipulating users’ beliefs and opinions. For example, in “Surveillance-based manipulation: How Facebook or Google could tilt elections,” Bruce Schneier describes how companies can use such methods as (i) strategic product placement, (ii) frequency of product views, and (iii) amplification of favorable views while dampening adversarial views to manipulate public opinion. He gives an example of how Facebook might sway an election by strategically placing an “I Voted” button on certain users’ pages.
During the 2012 election, Facebook users had the opportunity to post an “I Voted” icon, much like the real stickers many of us get at polling places after voting. There is a documented bandwagon effect with respect to voting; you are more likely to vote if you believe your friends are voting, too. This manipulation had the effect of increasing voter turnout 0.4% nationwide. So far, so good. But now imagine if Facebook manipulated the visibility of the “I Voted” icon based on either party affiliation or some decent proxy of it: ZIP code of residence, blogs linked to, URLs liked, and so on. It didn’t, but if it did, it would have had the effect of increasing voter turnout in one direction. It would be hard to detect, and it wouldn’t even be illegal. Facebook could easily tilt a close election by selectively manipulating what posts its users see. Google might do something similar with its search results.
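The mechanism Schneier describes is easy to simulate. In the hypothetical sketch below, showing the prompt lifts an individual’s probability of voting by the 0.4 percentage points cited above, and the prompt is shown either to everyone or only to users whose inferred affiliation matches a favored party. The baseline turnout, party labels, and inference step are all invented for illustration.

```python
# A hypothetical simulation of selectively showing an "I Voted" prompt.
# The baseline turnout and the 0.4-point lift are illustrative, per the quote above.

import random
from typing import Optional

random.seed(0)
BASELINE_TURNOUT = 0.55
PROMPT_LIFT = 0.004          # +0.4 percentage points when the prompt is shown

def simulate(favored_party: Optional[str], n: int = 500_000) -> dict:
    votes = {"A": 0, "B": 0}
    for _ in range(n):
        party = random.choice(["A", "B"])             # inferred from ZIP code, likes, etc.
        show_prompt = favored_party is None or party == favored_party
        p_vote = BASELINE_TURNOUT + (PROMPT_LIFT if show_prompt else 0.0)
        if random.random() < p_vote:
            votes[party] += 1
    return votes

print(simulate(favored_party=None))   # prompt shown to everyone: the lift is symmetric
print(simulate(favored_party="A"))    # prompt shown to one side only: a small, hard-to-detect edge
```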
Perhaps less nefarious, but still socially undesirable for several reasons, is developers’ use of persuasion technologies on unwitting users to encourage them to use applications more frequently and/or for greater durations than they otherwise would. This increase in user stickiness on apps benefits the developers, who generate more revenues from advertisements when users spend more time on the apps.
First there is the ethical question about manipulating people into doing things they might not otherwise do. BJ Fogg himself has been active in writing about and discussing the issue of ethics and the use of persuasive technologies (see, for example, ”7 Points on Ethics and Persuasive Technology” ).
In “Toward an ethics of persuasive technology” Daniel Berdichevsky and Erik Neuenschwander present what they call a “Golden Rule” for the design of persuasion technologies:
The creators of a persuasive technology should never seek to persuade a person of something they themselves would not consent to be persuaded to do.
Another problem with the use of persuasion technologies to encourage user stickiness is that it creates feelings of regret as people spend more time on apps than they wanted or intended to (see http://www.timewellspent.io).
There is also evidence that excessive use of social media causes anxiety and depression in users. People have a natural tendency to want to show the best of themselves to others. As a result, we tend to post only the most favorable information and photoshopped pictures on social media. By presenting this distorted image of ourselves, we lead other people to think that we are happier, more beautiful, or more successful than we really are. And since we also have a natural tendency to compare ourselves to others, constantly seeing people who seem happy, beautiful, and successful makes us feel depressed about our own lives. In “Does Facebook Make You Depressed?” Dr. Perpetua Neo notes
Someone once wrote me that scrolling through Facebook on a Friday afternoon made him feel low throughout the weekend. Everyone else seemed to be having so much fun, it made him “feel like a loser.”
Danielle Chabassol describes this same phenomenon in “Using Social Media Distorts Our Perception of Reality.”
When we decide to share something on social media, we naturally want people to see it, so we end up choosing the most flattering / exciting / out-of-this-world photo or video because those will get the most attention.
…
A similar thing happens when we watch or look at other people’s posts and only see the best version of their lives. We perceive that their lives are better than ours, which further reinforces our discontent with reality.
Finally, there’s the problem that the use of persuasive technologies is turning many users into digital addicts. Graham C.L. Davey, Ph.D. provides more details in Social Media, Loneliness, and Anxiety in Young People.
Social anxiety and the need for social assurance are also associated with problematic use of Facebook to the point where Facebook use can become an addiction, and has even been shown to activate the same brain areas as addictive drugs such as cocaine! This addiction poses a threat to physical and psychological well-being and interferes with performance at school or work, and staying away from Facebook is viewed by users as an act of ‘self-sacrifice’ or a ‘detoxification’. So the vicious cycle is that loneliness and social anxiety generate use of social networking sites, but then problematic addiction to these sites itself causes further forms of anxiety and stress.
The Ugly
The ugly part about the use of persuasive technologies by developers to create user stickiness is that it ends up creating emergent phenomena in society, namely, a culture of outrage, divisiveness, and extremism.
In “How Social Media Created an Echo Chamber for Ideas,” Orion Jones relays the idea that communication on the Internet has led people to become more extreme and divisive in their views.
… [S]ociologists have concluded that social media often entrench people's ideological positions and even make those positions more extreme. Witness the age of a bitterly divided America.
Harvard law professor Cass Sunstein has studied this phenomenon at length, finding that deliberation among a group of likeminded people moves the group toward a more extreme point of view.
"The mere discussion of, or deliberation over, a certain matter or opinion in a group may shift the position of the entire group in a more radical direction. The point of view of each group member may even shift to a more extreme version of the viewpoint they entertained before deliberating."
Furthermore, it turns out that one of the more effective ways of holding users’ attention is to provoke a sense of outrage, a tactic referred to as outrage porn.
Media outlets are often incentivized to feign outrage because it specifically triggers many of the most lucrative online behaviors, including leaving comments, repeat pageviews and social sharing, which the outlets capitalize on.
Michael Shammas provides more detail on the divisiveness created through the use of outrage porn in “Outrage Culture Kills Important Conversation.”
Productive discourse is dying, trampled over by closed minds who value comfortable opinion-holding over uncomfortable soul-searching. As dialogue lies flailing and gasping, outrage culture’s pulse is stronger than ever. We see the degraded consequence everywhere.
We see it in Donald Trump’s xenophobia. We see it in the smug rise of a regressive, illiberal “liberalism” on college campuses that interprets (and misinterprets) the other side’s words in the most negative possible light—even trifling dissent is labeled a product of white male privilege or (when the opponent is neither white nor male) simple ignorance. We see it in any online comments section—cesspools of racism, sexism, xenophobia, naked hatred. At its most extreme, we see it in tribalistic mass murderers, from Dylann Storm Roof to the San Bernardino shooters.
Hatred is everywhere; empathy and its cousin, civility, are nowhere. For in this culture of reflexive outrage, empathy is weakness. Listening? Surrender. When discourse is a competition instead of a dialectic, there are winners and losers; and one wins by persuading the other side (or at least scoring more re-tweets), even though we learn most from engaging our opponents. Outrage culture turns productive discourse into dumb competition.
How Can We Promote More Favorable Social Outcomes?
When thinking about how to change the environment to promote more favorable social outcomes, there are three possible approaches:
(i) Encourage developers to incorporate persuasive techniques in a socially responsible manner.
(ii) Empower users to be more proactive in creating an environment that’s more favorable for them.
(iii) Regulate the use of persuasion technologies by developers.
As James Williams states so succinctly,
To me, the ultimate question in all this is how we can shape technology to support human goals, and not the other way around.
Curtailment by Developers
Currently, many developers, particularly developers of social media websites, are engaged in a race to the bottom (i.e., a prisoner’s dilemma), using every technique available to keep users on the page. I think gaining user attention is too big a prize, that is, there's too much at stake, for the developers to voluntarily dial back on their use of persuasive technologies.
Perhaps there could be more social pressure on developers to dial back their use of persuasive technologies. However, I fear that too many people are unaware that they’re being manipulated. And the younger generations who have grown up with social media don’t know anything different, so they would have no reason to object.
Empowerment of Users
Mindful Development of Technology
Tristan Harris provides some fantastic ideas for creating “technology designed for human values” (see www.timewellspent.io). He suggests integrating AI into our devices that could prompt users with alternatives when they make “unhealthy” decisions. The alternative actions proposed by the AI would be based on users’ pre-indicated preferences, for example, for desired levels of physical activity, diets, or other ways to spend time.
Another of his ideas is for technology to be conscious of your time. More specifically, when you visit a website, your computer could ask you how much time you want to spend on the site, then re-prompt you after that amount of time has elapsed. Alternatively, website developers could indicate the “average time to read” for posted content, so the user could make a conscious decision about whether or not to proceed.
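A “conscious of your time” prompt could look something like the hypothetical sketch below: ask for a time budget when a distracting site is opened, then nudge the user once it has elapsed. A real implementation would live in a browser extension wrapped around page loads; here a plain timer stands in.

```python
# A hypothetical sketch of a "time budget" prompt: ask how long the user intends
# to stay on a site, then nudge them when that budget runs out.

import threading

def start_session(site: str) -> None:
    minutes = float(input(f"How many minutes do you want to spend on {site}? "))

    def nudge() -> None:
        print(f"\nYou've now spent {minutes:g} minutes on {site}. Keep going, or close the tab?")

    threading.Timer(minutes * 60, nudge).start()   # browsing continues; the nudge fires later

start_session("your news feed")
```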
On his website, Tristan Harris also provides information on “apps and extensions that can help you live without distraction”:
- Flux (Mac, Windows): Reclaim 15 mins of quality sleep by cutting the blue light from our screens.
- Turn on Night Shift (iOS): Blue light from screens late at night tricks our body into believing it's still daytime, which disrupts our natural ability to sleep.
- AdBlock Plus (Chrome, Safari): Reclaim 30-40% of your attention with every article you read.
- InboxWhenReady (Gmail): Focus your inbox by only showing messages when you click "Show Inbox" instead of getting distracted as new emails arrive.
- Freedom (Mac & Windows): Temporarily block specific websites or apps on your desktop, tablet and phone for set periods of time.
- Moment (iOS): See how much time you spend on your phone.
- RescueTime (Mac, Windows): See how much time you spend on different apps and websites on your desktop.
- Gboard: 50% faster "swipe" typing than regular keyboards so you can respond to a message and get off your device more quickly.
- Send audio messages: Recording a quick audio message is often faster than typing, and lets many people send a more authentic message.
Mindful Use of Technology
There is a movement called Mindfulness and Technology to address the problems we’ve been having in disconnecting from our devices. According to Wikipedia
Mindfulness and technology is a movement in research and design, that encourages the user to become aware of the present moment, rather than losing oneself in a technological device. This field encompasses multidisciplinary participation between design, psychology, computer science, and religion. Mindfulness stems from Buddhist meditation practices and refers to the awareness that arises through paying attention on purpose in the present moment, and non-judgmentally.
Note: The movement doesn’t identify its adherents, nor did the advocates of mindfulness and technology described below specifically identify themselves as being a part of the movement.
Proponents of mindful use of technology provide tips for how to accomplish this, including
- Don’t take your device into the bedroom.
- Don’t wake up with your device: Create an alternative routine to be your first priority in the morning.
- Set a curfew to have some “screen-free” time before you go to bed.
- Take a tech-free vacation: Take a day off, leave your phone at home, or simply turn off your device.
- Create a social-media plan: Designate specific limits on your digital activities.
- Stop counting your likes and hits.
- Before answering the phone or checking email or social media, take a breath, become mindful of the experience, and reflect on how the experience made you feel when you finished.
- Be more optimistic: Emphasize positive interactions and choose to view networks as supportive rather than competitive.
- Turn off notifications.
- Make an effort to see people face-to-face.
- When you turn to your device to undertake a specific activity, do only that activity.
See, for example,
Dan Erickson, “7 Ways To Be More Mindful Of Your Use Of Technology ”
HuffPost, “7 Ways to Practice Mindfulness in the Technology Age”
The Privacy Guru, “Mindful Living Through the Intelligent Use of Technology”
Jennie Crooks, “Mindful Tech: Practicing Mindfulness with Smart Devices”
Regulation
First Amendment Issues
The First Amendment guarantees the right to free speech. There are, however, exceptions. Kathleen Ann Ruane provides an overview of the exceptions to free speech in “Freedom of Speech and Press: Exceptions to the First Amendment”
This report provides an overview of the major exceptions to the First Amendment—of the ways that the Supreme Court has interpreted the guarantee of freedom of speech and press to provide no protection or only limited protection for some types of speech. For example, the Court has decided that the First Amendment provides no protection for obscenity, child pornography, or speech that constitutes what has become widely known as “fighting words.” The Court has also decided that the First Amendment provides less than full protection to commercial speech, defamation (libel and slander), speech that may be harmful to children, speech broadcast on radio and television (as opposed to speech transmitted via cable or the Internet), and public employees’ speech.
There are restrictions on speech that is “manipulative” to the extent that it incites violence. However, there are no restrictions on speech that simply manipulates (or brainwashes) people into doing something nonviolent that they might not otherwise do, such as make a purchase or support a candidate.
It seems, then, that there are no First Amendment grounds for challenging developers’ use of persuasion technologies to induce users to make purchases or waste their time on sites they otherwise wouldn’t.
Professional Standards
Perhaps the ideal solution would be for web development professional organizations to develop standards that delineate acceptable and unacceptable practices for their members. Websites that adhere to the standards set by a professional organization could indicate as much on their sites. In other words, websites could brand themselves as adhering to socially acceptable norms.
The problem comes when developers who want to adhere to best practices find themselves in conflict with the interests of their employers (Apple, Google, Facebook, Twitter, LinkedIn, etc.). This conflict will probably derail the efforts of many individual developers to adhere to socially acceptable practices.
Government Regulation
So it comes down to the question of whether government should regulate, if not prohibit, the use of persuasive technologies in order to induce more favorable social outcomes. Of course, this leads to a slippery slope: where do you draw the line on which activities to restrict for the common good?
Many of the techniques used by web developers are the same ones casinos use to induce gamblers to spend more money. The obvious parallel would be for the government to limit the use of persuasive technologies on content exposed to minors. Clearly, though, this isn’t possible for most of the websites at issue, because they attract viewers of all ages.
I guess we’ll just have to wait and see what happens.