
Facebook parent Meta will pay $725M to settle a privacy suit over Cambridge Analytica


Facebook CEO Mark Zuckerberg walks at the company's headquarters in Menlo Park, Calif., on April 4, 2013. Facebook parent company Meta has agreed to pay $725 million to settle a class-action privacy lawsuit. (Marcio Jose Sanchez/AP)

Facebook parent company Meta has agreed to pay $725 million to settle a class-action lawsuit claiming it improperly shared users' information with Cambridge Analytica, a data analytics firm used by the Trump campaign.

The proposed settlement stems from revelations in 2018 that the information of up to 87 million people may have been improperly accessed by the third-party firm, which filed for bankruptcy that same year. This is the largest recovery ever in a data privacy class action and the most Facebook has paid to settle a private class action, the plaintiffs' lawyers said in a court filing Thursday.

Meta did not admit wrongdoing and maintains that its users consented to the practices and suffered no actual damages. Meta spokesperson Dina El-Kassaby Luce said in a statement that the settlement was "in the best interest of its community and shareholders" and that the company has revamped its approach to privacy.

Plaintiffs' lawyers said about 250 million to 280 million people may be eligible for payments as part of the class action settlement. The amount of the individual payments will depend on the number of people who come forward with valid claims.

"The amount of the recovery is particularly striking given that Facebook argued that its users consented to the practices at issue, and that the class suffered no actual damages," the plaintiffs' lawyers said in the court filing.


Facebook's data leak to Cambridge Analytica sparked global backlash and government investigations into the company's privacy practices over the past several years.

Facebook CEO Mark Zuckerberg gave high-profile testimony before Congress and as part of the Federal Trade Commission's privacy case, for which Facebook also agreed to a $5 billion fine. The tech giant also agreed to pay $100 million to resolve U.S. Securities and Exchange Commission claims that Facebook misled investors about the risks of user data misuse.

Facebook first learned of the leak in 2015, tracing the violation to a Cambridge University psychology professor who harvested the data of Facebook users through a personality-quiz app and passed it on to Cambridge Analytica.


Cambridge Analytica was in the business of creating psychological profiles of American voters so that campaigns could tailor their pitches to different people. The firm was used by Texas Sen. Ted Cruz's 2016 presidential campaign and later by former President Donald Trump's campaign after he secured the Republican nomination.

According to a source close to the Trump campaign's data operations, Cambridge Analytica staffers did not use psychological profiling for his campaign but rather focused on more basic goals, like increasing online fundraising and reaching out to undecided voters.

Whistleblower Christopher Wylie then exposed the firm in 2018 for its role in Brexit. He said Cambridge Analytica used Facebook user data to target people susceptible to conspiracy theories and convince British voters to support exiting the European Union. Former Trump adviser Steve Bannon was the firm's vice president, and U.S. hedge-fund billionaire Robert Mercer owned much of it at the time.

The court has set a hearing for March 2, 2023, when a federal judge is expected to give the settlement final approval.

NPR's Bobby Allyn contributed reporting.


A timeline of Facebook's privacy issues — and its responses


SAN FRANCISCO — Facebook’s recent crisis is just one of many privacy issues the company has had to deal with in its relatively short existence.

Barely two years old in 2006, the company faced user outrage when it introduced its News Feed. A year later it had to apologize for telling people what their friends had bought. Years after that, the Federal Trade Commission stepped in — and is now looking at the company again. Facebook has a history of running afoul of regulators and weathering user anger, all the while collecting record profits and racking up more than 2 billion users.

Those privacy issues are now front and center. Facebook's loose handling of how its data was acquired by app developers has plunged the company into the biggest crisis of its 14-year existence. The revelation that a data analytics company used by Donald Trump’s presidential campaign was able to surreptitiously collect data on 50 million people through a seemingly innocuous quiz app has forced CEO Mark Zuckerberg to issue a public apology — and promise changes.

Taking a step back to look at Facebook’s pattern of privacy issues provides an important perspective on just how many times the company has faced serious criticism. What follows is a rundown of the biggest privacy issues Facebook has faced to date:

When: September 2006

What: Facebook debuts News Feed

Facebook’s response: Tells users to relax

Facebook was only two years old when it introduced News Feed on Sept. 5, 2006. The curated feed was intended as a central destination so users didn't have to browse through friends' profiles to see what they had changed.

Facebook had about 8 million users at the time, and not all of them were happy about every move of their personal life being blasted into a daily feed for their friends.

An estimated 1 million users joined "Facebook News Feed protest groups," arguing the feature was too intrusive. But Facebook stayed the course.

“One of the things I'm most proud of about Facebook is that we believe things can always be better, and we're willing to make big bets if we think it will help our community over the long term,” Zuckerberg said in a post reflecting on the 10th anniversary of News Feed.

The outrage died down, and News Feed became a major part of Facebook’s success.

When: December 2007

What: Beacon, Facebook’s first big brush with advertising privacy issues

Facebook’s response: Zuckerberg apologizes, gives users choice to opt out

There was once a time when companies could track purchases by Facebook users and then notify those users' Facebook friends of what had been bought, often without any consent.


In an apology on Dec. 6, 2007, Zuckerberg explained his thought process behind the program, called Beacon, and announced that users would be given the option to opt out of it.

“We were excited about Beacon because we believe a lot of information people want to share isn’t on Facebook, and if we found the right balance, Beacon would give people an easy and controlled way to share more of that information with their friends,” he said.

At the time, Facebook was also talking to the Federal Trade Commission (FTC) about online privacy and advertising.

When: November 2011

What: Facebook settles FTC privacy charges

Facebook’s response: Facebook agrees to undergo an independent privacy evaluation every other year for the next 20 years.

Facebook settled with the Federal Trade Commission in 2011 over charges that it didn't keep its privacy promise to users by allowing private information to be made public without warning.

Regulators said Facebook falsely claimed that third-party apps were able to access only the data they needed to operate. In fact, the apps could access nearly all of a user’s personal data. Facebook users who had never authorized a third-party app could even have private posts collected if their friends used apps. Facebook was also charged with sharing user information with advertisers despite promising it wouldn’t.

"Facebook is obligated to keep the promises about privacy that it makes to its hundreds of millions of users," Jon Leibowitz, then chairman of the FTC, said at the time. "Facebook's innovation does not have to come at the expense of consumer privacy. The FTC action will ensure it will not."

As part of the agreement in 2011, Facebook remains liable for a $16,000-per-day penalty for violating each count of the settlement.

When: June 2013

What: Facebook bug exposes private contact info

Facebook’s response: Facebook fixes bug, notifies people whose info may have been exposed.

A bug exposed the email addresses and phone numbers of 6 million Facebook users to anyone who had some connection to the person or knew at least one piece of their contact information.

The bug was discovered by a white-hat hacker — someone who hacks with the intention of helping companies find bugs and build better security practices.

Facebook explained that when people joined and uploaded their contact lists, it would match that data to other people on Facebook in order to create friend recommendations.

“For example, we don’t want to recommend that people invite contacts to join Facebook if those contacts are already on Facebook; instead, we want to recommend that they invite those contacts to be their friends on Facebook,” Facebook’s team explained in a June 2013 message.

That information was “inadvertently stored in association with people’s contact information,” Facebook said. That meant that when a Facebook user chose to download their information through Facebook’s Download Your Information (DYI) tool, they were provided with a list of additional contact information for people they knew or with whom they may have had some association.

Facebook said it pulled the tool offline and fixed it. The company also said it had notified regulators and pledged to tell affected users.

When: July 2014

What: Mood-manipulation experiment on hundreds of thousands of Facebook users

Facebook’s response: Facebook data scientist apologizes

Facebook's mood-manipulation experiment in 2014 included more than half a million randomly selected users. Facebook altered their news feeds to show more positive or negative posts. The purpose of the study was to show how emotions could spread on social media. The results were published in the Proceedings of the National Academy of Sciences, kicking off a firestorm of backlash over whether the study was ethical.

Adam D.I. Kramer, the Facebook data scientist who led the experiment, ultimately posted an apology on Facebook. Four years later, the apology no longer appears to be online.

“I can understand why some people have concerns about it, and my co-authors and I are very sorry for the way the paper described the research and any anxiety it caused,” he wrote, according to The New York Times.

When: April 2015

What: Facebook cuts off apps from taking basically all the data they want

Facebook’s response: Please keep building apps

If Person A downloads an app, that app shouldn’t be able to suck data from Person B just because they’re friends, right? In 2014, Facebook cited privacy concerns and promised it would limit access to developers. But by the time the policy took effect the next year, Facebook had one big issue: It still couldn’t keep track of how many developers were using previously downloaded data, according to current and former employees who spoke with The Wall Street Journal.


When Paul Grewal, Facebook vice president and deputy general counsel, announced Cambridge Analytica’s ban from Facebook last week, he said Facebook has a policy of running ongoing manual and automated checks to ensure apps comply with Facebook policies.

“These include steps such as random audits of existing apps along with the regular and proactive monitoring of the fastest growing apps,” he said.

When: January 2018

What: Europe’s data protection law

Facebook’s response: Facebook complies

Facebook has also begun preparing for the start of a strict European data protection law that takes effect in May. Called the General Data Protection Regulation, the law governs how companies store user information and requires them to disclose a breach within 72 hours.

In January, Facebook released a set of privacy principles explaining how users can take more control of their data.

One particularly notable principle, which many will be watching to see whether Facebook upholds, is accountability.

"In addition to comprehensive privacy reviews, we put products through rigorous data security testing. We also meet with regulators, legislators and privacy experts around the world to get input on our data practices and policies," Facebook's team said in January.

When: February 2018

What: Belgian court tells Facebook to stop tracking people across the entire internet

Facebook’s response: Appeals the court’s ruling

In February, Facebook was ordered to stop collecting private information about Belgian users on third-party sites through the use of cookies. Facebook was also ordered to delete all data it collected illegally from Belgians, including those who aren't Facebook users but may have still landed on a Facebook page, or risk being fined up to 100 million euros.

Facebook said it has complied with European data protection laws and gives people the choice to opt out of data collection on third-party websites and applications. The company said it would appeal the ruling.

When: March 2018

What: Revelation that Facebook knew about massive data theft and did nothing

Facebook’s response: An apology tour and policy changes

The world finally got the answer to the question “Where’s Zuck?” on Wednesday when the Facebook CEO and co-founder broke his silence on the data harvesting allegations. In a statement posted on his Facebook wall, Zuckerberg avoided the word “sorry” but did accept partial blame for Facebook’s failure to do enough to protect user privacy.


He laid out three steps Facebook will take now, including investigating all apps that were able to access user data before 2014, when the company began changing its permissions for developers. Facebook will put restrictions on the data apps can access, limiting them to a person’s name, photo and email. Finally, Zuckerberg said Facebook will make an easy tool that lets everyone see which apps have access to their data and allow them to revoke access.

"I've been working to understand exactly what happened and how to make sure this doesn't happen again,” he wrote. “The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there's more to do, and we need to step up and do it."


  • October 2019 (Revised January 2020)
  • HBS Case Collection

Fixing Facebook: Fake News, Privacy, and Platform Governance

  • Format: Print | Language: English | Pages: 30

About The Author


David B. Yoffie



Harvard case study exposes Facebook’s slow response to privacy vulnerability in messaging app.

A new case study released today in the inaugural edition of Technology Science published by Harvard University examines Facebook’s response to the discovery of a glaring privacy vulnerability in its popular messenger app.

The case study comes from Harvard University senior Aran Khanna, who lost an internship with Facebook after discovering a vulnerability in the platform’s Android-based messenger app: a glaring gap that tracked, with unprecedented specificity, the geolocation of users as they sent messages. Khanna drew attention to the privacy gap with his Marauder’s Map, a tool that allows users to plot the actual locations of friends with whom they’re chatting. Over the long term, this type of data would make it easy for anyone to predict an individual’s specific location on any given day and time.

News of this tool, which mapped out the locational data of others within a meter, spread rapidly. About 85,000 people downloaded it, much to Facebook’s annoyance. The company demanded that the tool be taken out of distribution, which Aran did, and within days Facebook made geolocational data an opt-in feature.

Sharing the geolocational tool prompted Facebook to remove its employment offer to Aran, saying the author fell short of the “high ethical standards” expected of interns. Aran’s experience raises the question of whether one can reasonably expect Facebook or others with an interest in collecting and sharing personal data to be responsible guardians of privacy.
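The privacy risk Khanna demonstrated comes down to simple aggregation: once per-message coordinates leak, a few lines of code can turn them into a routine. Below is a minimal sketch of that idea in Python; the data layout of `(datetime, latitude, longitude)` tuples and the function name are invented for illustration and are not taken from the Marauder's Map tool itself:

```python
from collections import Counter
from datetime import datetime

def likely_location(messages, weekday, hour, precision=3):
    """Guess where someone tends to be at a given weekday/hour from
    past per-message coordinates (hypothetical data format).

    messages: iterable of (datetime, latitude, longitude) tuples.
    precision=3 rounds coordinates to roughly 100 m, so nearby
    readings collapse into a single "place".
    """
    counts = Counter()
    for when, lat, lon in messages:
        if when.weekday() == weekday and when.hour == hour:
            counts[(round(lat, precision), round(lon, precision))] += 1
    # Most frequent rounded coordinate wins; None if no history matches.
    return counts.most_common(1)[0][0] if counts else None
```

With even a few weeks of message history, the most common rounded coordinate for, say, Monday at 9 a.m. is very likely a classroom or office, which is exactly the kind of inference the case study warns about.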


Case Western Reserve Law Review


Privacy, Sharing, and Trust: The Facebook Study

Ari Ezra Waldman

Using sharing on Facebook as a case study, this Article presents empirical evidence suggesting that trust is a significant factor in individuals’ willingness to share personal information on online social networks. I then make two arguments, one that explains why Facebook is designed the way it is and one that calls for legal protection against unfair manipulation of users. I argue that Facebook is built on trust: the trust that exists between friends and the trust that exists between users and the platform. In particular, I describe how Facebook designs its platform and interface to leverage the trust we have in our friends to nudge us to share. Sometimes, that helps create a dynamic social environment: knowing what our friends are doing helps us determine when it is safe to interact. Other times, Facebook leverages trust to manipulate us into sharing information with advertisers. This should give us pause. Because Facebook uses trust-based design, users may be confused about the privacy effects of their behavior. Federal and state consumer and privacy protection regulators should step in.

Recommended Citation

Ari Ezra Waldman, Privacy, Sharing, and Trust: The Facebook Study, 67 Case W. Rsrv. L. Rev. 193 (2016). Available at: https://scholarlycommons.law.case.edu/caselrev/vol67/iss1/10


ISSN: 0008-7262



Harvard Business School

Fixing Facebook: Fake News, Privacy, and Platform Governance

By: David B. Yoffie, Daniel Fisher


  • Length: 30 page(s)
  • Publication Date: Oct 15, 2019
  • Discipline: Strategy
  • Product #: 720400-PDF-ENG

What's included:

  • Teaching Note
  • Educator Copy

$4.95 per student

degree granting course

$8.95 per student

non-degree granting course

Get access to this material, plus much more with a free Educator Account:

  • Access to world-famous HBS cases
  • Up to 60% off materials for your students
  • Resources for teaching online
  • Tips and reviews from other Educators

Already registered? Sign in

  • Student Registration
  • Non-Academic Registration
  • Included Materials

Mark Zuckerberg founded Facebook based on the idea that connecting people was a fundamentally good thing, and a way to turn a handsome profit. But from the beginning, Facebook received criticism both for how it handled user privacy and how it curated user-generated content. These two issues coalesced in the aftermath of the 2016 United States presidential election, after Facebook's role in the spread of political misinformation and the leak of Facebook user data to political consulting firms began to receive significant media coverage. In 2019, Facebook announced it was shifting to a "digital living room" model that would focus more on private, encrypted conversations and less on sharing viral content. Several important questions remained. Would the digital living room ease users' privacy concerns? Would Facebook still be able to effectively curate content? Would its advertising model still work? Or were financial success and good governance mutually exclusive? This case explores the parameters of governing an internet-based platform.

Learning Objectives

To explore issues of platform governance, particularly balancing governance responsibilities against financial success.

Revised: Jan 21, 2020

Geographies: United States

Industries: IT consulting services



TechRepublic


Facebook data privacy scandal: A cheat sheet


A decade of apparent indifference to data privacy at Facebook has culminated in revelations that organizations harvested user data for targeted advertising, particularly political advertising, to apparent success. While the most well-known offender is Cambridge Analytica, the political consulting and strategic communication firm behind the pro-Brexit Leave EU campaign as well as Donald Trump’s 2016 presidential campaign, other companies have likely used similar tactics to collect personal data of Facebook users.

TechRepublic’s cheat sheet about the Facebook data privacy scandal covers the ongoing controversy surrounding the illicit use of profile information. This article will be updated as more information about this developing story comes to the forefront. It is also available as a download, Cheat sheet: Facebook Data Privacy Scandal (free PDF).


What is the Facebook data privacy scandal?

The Facebook data privacy scandal centers around the collection of personally identifiable information of “up to 87 million people” by the political consulting and strategic communication firm Cambridge Analytica. That company, and others, were able to gain access to the personal data of Facebook users due to the confluence of a variety of factors, broadly including inadequate safeguards against companies engaging in data harvesting, little to no oversight of developers by Facebook, developer abuse of the Facebook API, and users agreeing to overly broad terms and conditions.


In the case of Cambridge Analytica, the company was able to harvest personally identifiable information through a personality-quiz app called thisisyourdigitallife, based on the OCEAN personality model (the acronym stands for openness, conscientiousness, extraversion, agreeableness, and neuroticism). Information gathered via this app is useful in building a “psychographic” profile of users. Adding the app to your Facebook account to take the quiz gives the creator of the app access to profile information and user history for the user taking the quiz, as well as for all of that user’s Facebook friends. This data includes all of the items that users and their friends have liked on Facebook.
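The friends-of-installers design is why a single quiz app could reach tens of millions of profiles: each installer exposed not just their own data but their entire friend list. A toy sketch of that amplification follows; the graph, names, and function are invented for illustration and are not Facebook's actual API:

```python
def exposed_profiles(installers, friend_lists):
    """Accounts readable by a pre-2015-style app: every installer
    plus each installer's friends, who never installed anything.

    installers: iterable of user ids
    friend_lists: dict mapping user id -> set of friend ids
    """
    exposed = set(installers)
    for user in installers:
        # Installing the app also exposes the installer's friends.
        exposed |= friend_lists.get(user, set())
    return exposed

# Two installers with overlapping friend circles expose five accounts.
reach = exposed_profiles(
    {"alice", "dave"},
    {"alice": {"bob", "carol"}, "dave": {"carol", "erin"}},
)
```

At real-world scale the multiplier is dramatic: a few hundred thousand quiz-takers, each with a few hundred friends, could plausibly yield a figure in the tens of millions, consistent with the “up to 87 million” cited above.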

Researchers associated with Cambridge University claimed in a paper that Facebook likes “can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender,” using a model the researchers developed that combines dimensionality reduction with logistic/linear regression to infer this information about users.

The model, according to the researchers, is effective because of the relationship of likes to a given attribute, even though most likes are not explicitly indicative of the attributes they predict. The researchers note that “less than 5% of users labeled as gay were connected with explicitly gay groups,” but that liking “Juicy Couture” and “Adam Lambert” is indicative of gay men, while “WWE” and “Being Confused After Waking Up From Naps” are indicative of straight men. Other such connections are peculiarly lateral, with “curly fries” being an indicator of high IQ, “sour candy” being an indicator of not smoking, and “Gene Wilder” being an indicator that the user’s parents had not separated by age 21.
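As the paper describes it, the pipeline is dimensionality reduction over the user-by-likes matrix followed by logistic (or linear) regression on the reduced features. The sketch below reproduces that shape with NumPy on tiny synthetic data; it is an illustration of the technique under those assumptions, not the researchers' actual model, features, or data:

```python
import numpy as np

def svd_features(likes, k):
    """Reduce a users-by-pages binary like-matrix to k SVD components."""
    U, s, Vt = np.linalg.svd(likes, full_matrices=False)
    return U[:, :k] * s[:k]

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Plain gradient-descent logistic regression, no regularization."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

# Synthetic data: users 0-2 like pages 0-1, users 3-5 like pages 2-3,
# and the binary attribute happens to follow that split.
likes = np.array([[1, 1, 0, 0]] * 3 + [[0, 0, 1, 1]] * 3, dtype=float)
y = np.array([1, 1, 1, 0, 0, 0], dtype=float)

X = svd_features(likes, k=2)          # dimensionality reduction
w, b = fit_logistic(X, y)             # regression on reduced features
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

The point of the reduction step is that real like-matrices are enormous and sparse; a few hundred SVD components capture most of the signal, and the regression then only has to weigh those components against the attribute.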


Additional resources

  • How a Facebook app scraped millions of people’s personal data (CBS News)
  • Facebook reportedly thinks there’s no ‘expectation of privacy’ on social media (CNET)
  • Cambridge Analytica: ‘We know what you want before you want it’ (TechRepublic)
  • Average US citizen had personal information stolen at least 4 times in 2019 (TechRepublic)
  • Facebook: We’ll pay you to track down apps that misuse your data (ZDNet)
  • Most consumers do not trust big tech with their privacy (TechRepublic)
  • Facebook asks permission to use personal data in Brazil (ZDNet)

What is the timeline of the Facebook data privacy scandal?

Facebook has a more than decade-long track record of incidents highlighting inadequate measures to protect data privacy. While the severity of these individual cases varies, the sequence of repeated failures paints a larger picture of systemic problems.


In 2005, researchers at MIT created a script that downloaded publicly posted information of more than 70,000 users from four schools. (Facebook only began to allow search engines to crawl profiles in September 2007.)

In 2007, activities that users engaged in on other websites were automatically added to Facebook user profiles as part of Beacon, one of Facebook’s first attempts to monetize user profiles. For example, Beacon indicated on the Facebook News Feed the titles of videos that users rented from Blockbuster Video, a violation of the Video Privacy Protection Act. A class-action suit was filed, and Facebook paid $9.5 million into a fund for privacy and security as part of a settlement agreement.


In 2011, following an FTC investigation, the company entered into a consent decree, promising to address concerns about how user data was tracked and shared. That investigation was prompted by an incident in December 2009 in which information thought private by users was being shared publicly, according to contemporaneous reporting by The New York Times.

In 2013, Facebook disclosed details of a bug that exposed the personal details of six million accounts over approximately a year. When users downloaded their own Facebook history, they would obtain in the same action not just their own address book, but also the email addresses and phone numbers that other people had stored for their friends in their address books. The data Facebook exposed had not been given to Facebook by the affected users to begin with; it had been vacuumed from the contact lists of other Facebook users who happened to know them. This phenomenon has since been described as “shadow profiles.”
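The mechanics behind a "shadow profile" are mundane: merge everyone's uploaded address books, keyed by the person each entry describes. A minimal sketch of that aggregation, with a data layout invented purely for illustration:

```python
from collections import defaultdict

def build_shadow_profiles(uploaded_books):
    """Aggregate the contact details that *other* people hold for each person.

    uploaded_books: dict mapping uploader -> list of (contact_name, detail)
    Returns: dict mapping contact_name -> set of details, including for
    people who never uploaded anything themselves.
    """
    profiles = defaultdict(set)
    for contacts in uploaded_books.values():
        for name, detail in contacts:
            profiles[name].add(detail)
    return dict(profiles)

# Carol never uploaded a thing, yet two friends' address books
# together assemble her email address and phone number.
shadow = build_shadow_profiles({
    "alice": [("carol", "carol@example.com")],
    "bob": [("carol", "+1-555-0100"), ("erin", "erin@example.com")],
})
```

This mirrors the bug described above: the export bundled contact details that had been sourced from other users' uploaded address books rather than from the person they described.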

The Cambridge Analytica portion of the data privacy scandal starts in February 2014. A spate of reviews on the Turkopticon website, a third-party review site for users of Amazon’s Mechanical Turk, detail a task requested by Aleksandr Kogan asking users to complete a survey in exchange for money. The survey required users to add the thisisyourdigitallife app to their Facebook account, in violation of Mechanical Turk’s terms of service. One review quotes the request as requiring users to “provide our app access to your Facebook so we can download some of your data–some demographic data, your likes, your friends list, whether your friends know one another, and some of your private messages.”

In December 2015, Facebook learned for the first time that the data set Kogan generated with the app was shared with Cambridge Analytica. Facebook founder and CEO Mark Zuckerberg claims “we immediately banned Kogan’s app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications.”

According to Cambridge Analytica, the company took legal action in August 2016 against GSR (Kogan) for licensing “illegally acquired data” to the company, with a settlement reached that November.

On March 17, 2018, an exposé was published by The Guardian and The New York Times, initially reporting that 50 million Facebook profiles had been harvested by Cambridge Analytica; the figure was later revised to “up to 87 million” profiles. The exposé relied on information provided by Christopher Wylie, a former employee of SCL Elections and Global Science Research, the creator of the thisisyourdigitallife app. Wylie claimed that the data from that app was sold to Cambridge Analytica, which used it to develop “psychographic” profiles of users and target them with pro-Trump advertising, a claim Cambridge Analytica denied.

On March 16, 2018, Facebook threatened to sue The Guardian over publication of the story, according to a tweet by Guardian reporter Carole Cadwalladr. Campbell Brown, a former CNN journalist who now works as head of news partnerships at Facebook, said it was “not our wisest move,” adding “If it were me I would have probably not threatened to sue The Guardian.” Similarly, Cambridge Analytica threatened to sue The Guardian for defamation.

On March 20, 2018, the FTC opened an investigation to determine if Facebook had violated the terms of the settlement from the 2011 investigation.

In April 2018, reports indicated that Facebook granted Zuckerberg and other high-ranking executives control over their personal information on the platform that is not available to normal users. Messages Zuckerberg had sent to other users were remotely deleted from those users’ inboxes, which the company claimed was part of a corporate security measure following the 2014 Sony Pictures hack. Facebook subsequently announced plans to make the “unsend” capability available “to all users in several months,” and said that Zuckerberg would be unable to unsend messages until the feature rolled out. Facebook added the feature 10 months later, on February 6, 2019. The public feature permits users to delete messages up to 10 minutes after they were sent. In the controversy that prompted the feature, Zuckerberg had deleted messages months after they were sent.

On April 4, 2018, The Washington Post reported that Facebook announced “malicious actors” abused the search function to gather public profile information of “most of its 2 billion users worldwide.”

In a CBS News/YouGov poll published on April 10, 2018, 61% of Americans said Congress should do more to regulate social media and tech companies. This sentiment was echoed in a CBS News interview with Box CEO Aaron Levie and YML CEO Ashish Toshniwal who called on Congress to regulate Facebook. According to Levie, “There are so many examples where we don’t have modern ways of either regulating, controlling, or putting the right protections in place in the internet age. And this is a fundamental issue that, that we’re gonna have to grapple with as an industry for the next decade.”

On April 18, 2018, Facebook updated its privacy policy.

On May 2, 2018, SCL Group, which owns Cambridge Analytica, was dissolved. In a press release, the company indicated that “the siege of media coverage has driven away virtually all of the Company’s customers and suppliers.”

On May 15, 2018, The New York Times reported that Cambridge Analytica was being investigated by the FBI and the Justice Department. A source indicated to CBS News that prosecutors were focusing on potential financial crimes.

On May 16, 2018, Christopher Wylie testified before the Senate Judiciary Committee. Among other things, Wylie noted that Cambridge Analytica, under the direction of Steve Bannon, sought to “exploit certain vulnerabilities in certain segments to send them information that will remove them from the public forum, and feed them conspiracies and they’ll never see mainstream media.” Wylie also noted that the company targeted people with “characteristics that would lead them to vote for the Democratic party, particularly African American voters.”

On June 3, 2018, a report in The New York Times indicated that Facebook had maintained data-sharing partnerships with mobile device manufacturers, specifically naming Apple, Amazon, BlackBerry, Microsoft, and Samsung. Under the terms of this personal information sharing, device manufacturers were able to gather information about users in order to deliver “the Facebook experience,” the Times quotes a Facebook official as saying. Additionally, the report indicates that this access allowed device manufacturers to obtain data about a user’s Facebook friends, even if those friends had configured their privacy settings to deny information sharing with third parties.

The same day, Facebook issued a rebuttal to the Times report indicating that the partnerships were conceived because “the demand for Facebook outpaced our ability to build versions of the product that worked on every phone or operating system,” at a time when the smartphone market included BlackBerry’s BB10 and Windows Phone operating systems, among others. Facebook claimed that “contrary to claims by the New York Times, friends’ information, like photos, was only accessible on devices when people made a decision to share their information with those friends. We are not aware of any abuse by these companies.” The distinction is partly semantic, as Facebook does not consider these device partners third parties in this case. Facebook noted that changes to the platform made in April began “winding down” access to these APIs, and that 22 of the partnerships had already been ended.

On June 5, 2018, The Washington Post and The New York Times reported that the Chinese device manufacturers Huawei, Lenovo, Oppo, and TCL were granted access to user data under this program. Huawei and ZTE are facing scrutiny from the US government over unsubstantiated accusations that products from these companies pose a national security risk.

On July 2, 2018, The Washington Post reported that the US Securities and Exchange Commission, Federal Trade Commission, and Federal Bureau of Investigation had joined the Department of Justice inquiry into the Facebook/Cambridge Analytica data scandal. In a statement to CNET, Facebook indicated that “We’ve provided public testimony, answered questions, and pledged to continue our assistance as their work continues.” On July 11th, The Wall Street Journal reported that the SEC is separately investigating whether Facebook adequately warned investors in a timely manner about the possible misuse and improper collection of user data. The same day, the UK fined Facebook £500,000, the maximum permitted by law, over its role in the data scandal. The UK’s Information Commissioner’s Office is also preparing to launch a criminal probe into SCL Elections over its involvement in the scandal.

On July 3, 2018, Facebook acknowledged a “bug” that unblocked people whom users had blocked between May 29 and June 5.

On July 12, 2018, a CNBC report indicated that a privacy loophole was discovered and closed. A Chrome plug-in intended for marketing research called Grouply.io allowed users to access the list of members for private Facebook groups. Congress sent a letter to Zuckerberg on February 19, 2019 demanding answers about the data leak, stating in part that “labeling these groups as closed or anonymous potentially misled Facebook users into joining these groups and revealing more personal information than they otherwise would have,” and “Facebook may have failed to properly notify group members that their personal health information may have been accessed by health insurance companies and online bullies, among others.”

Fallout from a confluence of factors in the Facebook data privacy scandal came to a head in the last week of July 2018. On July 25th, Facebook announced that daily active user counts had fallen in Europe, and that growth had stagnated in the US and Canada. The following day, Facebook suffered the worst single-day market value decrease for a public company in the US, dropping $120 billion, or 19%. On July 28th, Reuters reported that shareholders were suing Facebook, Zuckerberg, and CFO David Wehner for “making misleading statements about or failing to disclose slowing revenue growth, falling operating margins, and declines in active users.”

On August 22, 2018, Facebook removed the Facebook-owned security app Onavo from the App Store for violating privacy rules. Data collected through the Onavo app is shared with Facebook.

In testimony before the Senate on September 5, 2018, COO Sheryl Sandberg conceded that the company “[was] too slow to spot this and too slow to act” on privacy protections. Sandberg and Twitter CEO Jack Dorsey faced questions focusing on user privacy, election interference, and political censorship. Senator Mark Warner of Virginia even said that “the era of the wild west in social media is coming to an end,” which seems to indicate coming legislation.

On September 6, 2018, a spokesperson indicated that Joseph Chancellor was no longer employed by Facebook. Chancellor was a co-director of Global Science Research, the firm which improperly provided user data to Cambridge Analytica. An internal investigation was launched in March in part to determine his involvement. No statement was released indicating the result of that investigation.

On September 7, 2018, Zuckerberg stated in a post that fixing issues such as “defending against election interference by nation states, protecting our community from abuse and harm, or making sure people have control of their information and are comfortable with how it’s used,” is a process which “will extend through 2019.”

On September 26, 2018, WhatsApp co-founder Brian Acton stated in an interview with Forbes that “I sold my users’ privacy” as a result of the messaging app being sold to Facebook in 2014 for $22 billion.

On September 28, 2018, Facebook disclosed details of a security breach which affected 50 million users. The vulnerability originated in the “view as” feature, which lets users see what their profiles look like to other people. Attackers devised a way to export “access tokens,” which could be used to gain control of other users’ accounts.

A CNET report published on October 5, 2018, details the existence of an “Internet Bill of Rights” drafted by Rep. Ro Khanna (D-CA). The bill is likely to be introduced in the event the Democrats regain control of the House of Representatives in the 2018 elections. In a statement, Khanna noted that “As our lives and the economy are more tied to the internet, it is essential to provide Americans with basic protections online.”

On October 11, 2018, Facebook deleted over 800 pages and accounts in advance of the 2018 elections for violating rules against spam and “inauthentic behavior.” The same day, it disabled accounts for a Russian firm called “Social Data Hub,” which claimed to sell scraped user data. A Reuters report indicates that Facebook will ban false information about voting in the midterm elections.

On October 16, 2018, rules requiring public disclosure of who pays for political advertising on Facebook, as well as identity verification of users paying for political advertising, were extended to the UK. The rules were first rolled out in the US in May.

On October 25, 2018, Facebook was fined £500,000 by the UK’s Information Commissioner’s Office for its role in the Cambridge Analytica scandal. The fine is the maximum permitted by the Data Protection Act 1998. The ICO indicated that the fine was final. A Facebook spokesperson told ZDNet that the company “respectfully disagreed,” and has filed an appeal.

The same day, Vice published a report indicating that Facebook’s advertiser disclosure policy was trivial to abuse. Reporters from Vice submitted advertisements for approval attributed to Mike Pence, DNC Chairman Tom Perez, and Islamic State, which were approved by Facebook. Further, the contents of the advertisements were copied from Russian advertisements. A spokesperson for Facebook confirmed to Vice that the copied content does not violate rules, though the false attribution does. According to Vice, the only denied submission was attributed to Hillary Clinton.

On October 30, 2018, Vice published a second report in which it claimed that it successfully applied to purchase advertisements attributed to all 100 sitting US Senators, indicating that Facebook had yet to fix the problem reported in the previous week. According to Vice, the only denied submission in this test was attributed to Mark Zuckerberg.

On November 14, 2018, the New York Times published an exposé on the Facebook data privacy scandal, citing interviews of more than 50 people, including current and former Facebook executives and employees. In the exposé, the Times reports:

  • In the Spring of 2016, a security expert employed by Facebook informed Chief Security Officer Alex Stamos of Russian hackers “probing Facebook accounts for people connected to the presidential campaigns”; Stamos, in turn, informed general counsel Colin Stretch.
  • A group called “Project P” was assembled by Zuckerberg and Sandberg to study false news on Facebook. By January 2017, this group “pressed to issue a public paper” about its findings, but was stopped by board members and Facebook vice president of global public policy Joel Kaplan, who had worked in US President George W. Bush’s administration.
  • In Spring and Summer of 2017, Facebook was “publicly claiming there had been no Russian effort of any significance on Facebook,” despite an ongoing investigation into the extent of Russian involvement in the election.
  • Sandberg “and deputies” insisted that the post drafted by Stamos to publicly acknowledge Russian involvement for the first time be made “less specific” before publication.
  • In October 2017, Facebook expanded their engagement with Republican-linked firm Definers Public Affairs to discredit “activist protesters.” That firm worked to link people critical of Facebook to liberal philanthropist George Soros, and “[lobbied] a Jewish civil rights group to cast some criticism of the company as anti-Semitic.”
  • Following comments critical of Facebook by Apple CEO Tim Cook, a spate of articles critical of Apple and Google began appearing on NTK Network, an organization which shares an office and staff with Definers. Other articles on the site downplayed Russian use of Facebook.

On November 15, 2018, Facebook announced it had terminated its relationship with Definers Public Affairs, though it disputed that either Zuckerberg or Sandberg was aware of the “specific work being done.” Further, a Facebook spokesperson indicated “It is wrong to suggest that we have ever asked Definers to pay for or write articles on Facebook’s behalf, or communicate anything untrue.”

On November 22, 2018, Sandberg acknowledged that work produced by Definers “was incorporated into materials presented to me and I received a small number of emails where Definers was referenced.”

On November 25, 2018, the founder of Six4Three, on a business trip to London, was compelled by Parliament to hand over documents relating to Facebook. Six4Three had obtained the documents during the discovery process in its lawsuit over an app the startup developed, which used image recognition to identify photos of women in bikinis shared on Facebook users’ friends’ pages. Reports indicate that Parliament sent an official to the founder’s hotel with a warning that noncompliance could result in fines or imprisonment. When the founder still refused to comply, he was escorted to Parliament, where he turned over the documents.

A report in the New York Times published on November 29, 2018, indicates that Sheryl Sandberg personally asked Facebook communications staff in January to “research George Soros’s financial interests in the wake of his high-profile attacks on tech companies.”

On December 5, 2018, documents seized from Six4Three were released by Parliament. Damian Collins, the MP who issued the order compelling the handover of the documents in November, highlighted six key points from them:

  • Facebook entered into whitelisting agreements with Lyft, Airbnb, Bumble, and Netflix, among others, allowing those groups full access to friends data after Graph API v1 was discontinued. Collins indicates “It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.”
  • According to Collins, “increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers’ relationship with Facebook is a recurring feature of the documents.”
  • Data reciprocity between Facebook and app developers was a central focus for the release of Platform v3, with Zuckerberg discussing charging developers for API access to friend lists.
  • Internal discussions of changes to the Facebook Android app acknowledge that requesting permissions to collect calls and texts sent by the user would be controversial, with one project manager stating it was “a pretty high-risk thing to do from a PR perspective.”
  • Facebook used data collected through Onavo, a VPN service the company acquired in 2013, to survey the use of mobile apps on smartphones. According to Collins, this occurred “apparently without [users’] knowledge,” and was used by Facebook to determine “which companies to acquire, and which to treat as a threat.”
  • Collins contends that “the files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.” The disclosed documents specifically indicate that Facebook revoked API access for the video sharing service Vine.

In a statement, Facebook claimed, “Six4Three… cherrypicked these documents from years ago.” Zuckerberg responded separately to the public disclosure on Facebook, acknowledging, “Like any organization, we had a lot of internal discussion and people raised different ideas.” He called the Facebook scrutiny “healthy given the vast number of people who use our services,” but said it shouldn’t “misrepresent our actions or motives.”

On December 14, 2018, a vulnerability was disclosed in the Facebook Photo API that existed between September 13 and 25, 2018, exposing the private photos of 6.8 million users. The Photo API bug affected people who used Facebook to log in to third-party services.

On December 18, 2018, The New York Times reported on special data sharing agreements that “[exempted] business partners from its usual privacy rules,” naming Microsoft’s Bing search engine, Netflix, Spotify, Amazon, and Yahoo as partners. Partners were able to access data including friend lists and private messages, “despite public statements it had stopped that type of sharing years earlier.” Facebook claimed the data sharing was about “helping people,” and that it was not done without user consent.

On January 17, 2019, Facebook disclosed that it removed hundreds of pages and accounts controlled by Russian propaganda organization Sputnik, including accounts posing as politicians from primarily Eastern European countries.

On January 29, 2019, a TechCrunch report uncovered the “Facebook Research” program, which paid users aged 13 to 35 up to $20 per month to install a VPN application, similar to Onavo, that allowed Facebook to gather practically all information about how their phones were used. On iOS, the app was distributed using Apple’s Developer Enterprise Program, for which Apple briefly revoked Facebook’s certificate as a result of the controversy.

Facebook initially indicated that “less than 5% of the people who chose to participate in this market research program were teens,” and on March 1, 2019, amended the statement to “about 18 percent.”

On February 7, 2019, the German antitrust office ruled that Facebook must obtain consent before collecting data on non-Facebook members, following a three-year investigation.

On February 20, 2019, Facebook added new location controls to its Android app that allow users to limit background data collection when the app is not in use.

The same day, ZDNet reported that Microsoft’s Edge browser contained a secret whitelist allowing Facebook to run Adobe Flash, bypassing the click-to-play policy that other websites are subject to for Flash objects over 398×298 pixels. The whitelist was removed in the February 2019 Patch Tuesday update.

On March 6, 2019, Zuckerberg announced a plan to rebuild Facebook’s services around encryption and privacy “over the next few years.” As part of these changes, Facebook will make messages between Facebook, Instagram, and WhatsApp interoperable. Former Microsoft executive Steven Sinofsky–who left Microsoft after the poor reception of Windows 8–called the move “fantastic,” comparing it to Microsoft’s Trustworthy Computing initiative in 2002.

CNET and CBS News Senior Producer Dan Patterson noted on CBSN that Facebook can benefit from this consolidation by making the messaging platforms cheaper to operate, as well as profiting from users sending money through the messaging platform, in a business model similar to Venmo.

On March 21, 2019, Facebook disclosed a lapse in security that resulted in hundreds of millions of passwords being stored in plain text, affecting users of Facebook, Facebook Lite, and Instagram. Facebook claimed that “these passwords were never visible to anyone outside of Facebook and we have found no evidence to date that anyone internally abused or improperly accessed them.”

Though Facebook’s post does not provide specifics, a report by veteran security reporter Brian Krebs claimed “between 200 million and 600 million” users were affected, and that “more than 20,000 Facebook employees” would have had access.
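For context on why storing passwords in plain text is considered a serious lapse: the standard practice is to store only a salted, slow-to-compute hash, so that even a full database leak does not reveal the passwords themselves. The sketch below illustrates that general approach using only Python’s standard library; it is a minimal illustration of the technique, not a representation of Facebook’s actual systems, and the function names are our own.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-user salt ensures identical passwords yield different hashes,
    # defeating precomputed "rainbow table" lookups.
    salt = os.urandom(16)
    # scrypt is deliberately memory- and CPU-hard to slow brute-force guessing.
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

# Only (salt, digest) would be stored; the password itself is never written down.
salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("wrong", salt, digest)
```

Under this scheme, the scenario Krebs describes–employees able to read millions of passwords from internal logs–cannot arise from the credential store itself, since only salts and digests are ever persisted.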

On March 22, 2019, a court filing by the attorney general of Washington DC alleged that Facebook knew about the Cambridge Analytica scandal months prior to the first public reports in December 2015. Facebook claimed that employees knew of rumors relating to Cambridge Analytica, but the claims relate to a “different incident” than the main scandal, and insisted that the company did not mislead anyone about the timeline of the scandal.

Facebook is seeking to have the case filed in Washington DC dismissed, as well as to seal a document filed in that case.

On March 31, 2019, The Washington Post published an op-ed by Zuckerberg calling for governments and regulators to take a “more active role” in regulating the internet. Shortly after, Facebook introduced a feature that explains why content is shown to users in their news feeds.

On April 3, 2019, over 540 million Facebook-related records were found on two improperly protected AWS servers. The data was collected by Cultura Colectiva, a Mexico-based online media platform, using Facebook APIs. Amazon deactivated the associated account at Facebook’s request.

On April 15, 2019, it was discovered that Oculus, a company owned by Facebook, shipped VR headsets with internal etchings including text such as “Big Brother is Watching.”

On April 18, 2019, Facebook disclosed the “unintentional” harvesting of email contacts belonging to approximately 1.5 million users over the course of three years. The contacts were collected from users who had been asked to provide their email account credentials to verify their identity.

On April 30, 2019, at Facebook’s F8 developer conference, the company unveiled plans to overhaul Messenger and reorient Facebook to prioritize Groups instead of the timeline view, with Zuckerberg declaring, “The future is private.”

On May 9, 2019, Facebook co-founder Chris Hughes called for Facebook to be broken up by government regulators, in an editorial in The New York Times. Hughes, who left the company in 2007, cited concerns that Zuckerberg has surrounded himself with people who do not challenge him. “We are a nation with a tradition of reining in monopolies, no matter how well-intentioned the leaders of these companies may be. Mark’s power is unprecedented and un-American,” Hughes said.

Proponents of a Facebook breakup typically point to unwinding the social network’s purchase of Instagram and WhatsApp.

Zuckerberg dismissed Hughes’ appeal for a breakup in comments to France 2, stating in part that “If what you care about is democracy and elections, then you want a company like us to invest billions of dollars a year, like we are, in building up really advanced tools to fight election interference.”

On May 24, 2019, a report from Motherboard claimed “multiple” staff members of Snapchat used internal tools to spy on users.

On July 8, 2019, Apple co-founder Steve Wozniak warned users to get off of Facebook.

On July 18, 2019, lawmakers in a House Committee on Financial Services hearing expressed mistrust of Facebook’s Libra cryptocurrency plan due to its “pattern of failing to keep consumer data private.” Lawmakers had previously issued a letter to Facebook requesting the company pause development of the project.

On July 24, 2019, the FTC announced a $5 billion settlement with Facebook over user privacy violations. Facebook agreed to conduct an overhaul of its consumer privacy practices as part of the settlement. Access to friend data by partners including Sony was “immediately” restricted as part of this settlement, according to CNET. Separately, the FTC settled with Aleksandr Kogan and former Cambridge Analytica CEO Alexander Nix, “restricting how they conduct any business in the future, and requiring them to delete or destroy any personal information they collected.” The FTC announced a lawsuit against Cambridge Analytica the same day.

Also on July 24, 2019, Netflix released “The Great Hack,” a documentary about the Cambridge Analytica scandal.

In early July 2020, Facebook admitted to sharing user data with an estimated 5,000 third-party developers after its access to that data was supposed to have expired.

Zuckerberg testified before Congress again on July 29, 2020, as part of an antitrust hearing that included Amazon’s Jeff Bezos, Apple’s Tim Cook, and Google’s Sundar Pichai. The hearing didn’t touch on Facebook’s data privacy scandal, focusing instead on Facebook’s purchase of Instagram and WhatsApp, as well as its treatment of competing services.

  • Facebook knew of illicit user profile harvesting for 2 years, never acted (CBS News)
  • Facebook’s FTC consent decree deal: What you need to know (CNET)
  • Australia’s Facebook investigation expected to take at least 8 months (ZDNet)
  • Election tech: The truth about Cambridge Analytica’s political big data (TechRepublic)
  • Google sued by ACCC for allegedly linking data for ads without consent (ZDNet)
  • Midterm elections, social media and hacking: What you need to know (CNET)
  • Critical flaw revealed in Facebook Fizz TLS project (ZDNet)
  • CCPA: What California’s new privacy law means for Facebook, Twitter users (CNET)

What are the key companies involved in the Facebook data privacy scandal?

In addition to Facebook, these are the companies connected to this data privacy story.

SCL Group (formerly Strategic Communication Laboratories) is at the center of the privacy scandal, though it has operated primarily through subsidiaries. Nominally, SCL was a behavioral research/strategic communication company based in the UK. The company was dissolved on May 1, 2018.

Cambridge Analytica and SCL USA are offshoots of SCL Group, primarily operating in the US. Registration documentation indicates the pair formally came into existence in 2013. As with SCL Group, the pair were dissolved on May 1, 2018.

Global Science Research was a market research firm based in the UK from 2014 to 2017. It was the originator of the thisisyourdigitallife app. The personal data derived from the app (if not the app itself) was sold to Cambridge Analytica for use in campaign messaging.

Emerdata is the functional successor to SCL and Cambridge Analytica. It was founded in August 2017, with registration documents listing several people associated with SCL and Cambridge Analytica, as well as the same address as that of SCL Group’s London headquarters.

AggregateIQ is a Canadian consulting and technology company founded in 2013. The company produced Ripon, the software platform for Cambridge Analytica’s political campaign work, which leaked publicly after being discovered in an unprotected GitLab repository.

Cubeyou is a US-based data analytics firm that also operated surveys on Facebook, and worked with Cambridge University from 2013 to 2015. It was suspended from Facebook in April 2018 following a CNBC report.

Six4Three was a US-based startup that created an app that used image recognition to identify photos of women in bikinis shared on Facebook users’ friends’ pages. The company sued Facebook in April 2015, after the app became inoperable when access to this data was revoked with the discontinuation of the original version of Facebook’s Graph API.

Onavo is an analytics company that develops mobile apps. It created Onavo Extend and Onavo Protect, VPN services for data protection and security, respectively. Facebook purchased the company in October 2013. Data from Onavo is used by Facebook to track usage of non-Facebook apps on smartphones.

The Internet Research Agency is a St. Petersburg-based organization with ties to Russian intelligence services. The organization engages in politically-charged manipulation across English-language social media, including Facebook.

  • If your organization advertises on Facebook, beware of these new limitations (TechRepublic)
  • Data breach exposes Cambridge Analytica’s data mining tools (ZDNet)
  • Was your business’s Twitter feed sold to Cambridge Analytica? (TechRepublic)
  • US special counsel indicts 13 members of Russia’s election meddling troll farm (ZDNet)

Who are the key people involved in the Facebook data privacy scandal?

Nigel Oakes is the founder of SCL Group, the parent company of Cambridge Analytica. A report from Buzzfeed News unearthed a quote from 1992 in which Oakes stated, “We use the same techniques as Aristotle and Hitler. … We appeal to people on an emotional level to get them to agree on a functional level.”

Alexander Nix was the CEO of Cambridge Analytica and a director of SCL Group. He was suspended following reports detailing a video in which Nix claimed the company “offered bribes to smear opponents as corrupt,” and that it “campaigned secretly in elections… through front companies or using subcontractors.”

Robert Mercer is a conservative activist, computer scientist, and a co-founder of Cambridge Analytica. A New York Times report indicates that Mercer invested $15 million in the company. His daughters Jennifer Mercer and Rebekah Anne Mercer serve as directors of Emerdata.

Christopher Wylie is the former director of research at Cambridge Analytica. He provided information to The Guardian for its exposé of the Facebook data privacy scandal. He has since testified before committees in the US and UK about Cambridge Analytica’s involvement in this scandal.

Steve Bannon is a co-founder of Cambridge Analytica, as well as a founding member and former executive chairman of Breitbart News, an alt-right news outlet. Breitbart News has reportedly received funding from the Mercer family as far back as 2010. Bannon left Breitbart in January 2018. According to Christopher Wylie, Bannon is responsible for testing phrases such as “drain the swamp” at Cambridge Analytica, which were used extensively on Breitbart.

Aleksandr Kogan is a Senior Research Associate at Cambridge University and co-founder of Global Science Research, which created the data-harvesting thisisyourdigitallife app. He worked as a researcher and consultant for Facebook in 2013 and 2015. Kogan also received Russian government grants and is an associate professor at St. Petersburg State University, though he claims this is an honorary role.

Joseph Chancellor was a co-director of Global Science Research, which created the data-harvesting thisisyourdigitallife app. Around November 2015, he was hired by Facebook as a “quantitative social psychologist.” A spokesperson indicated on September 6, 2018, that he was no longer employed by Facebook.

Michal Kosinski, David Stillwell, and Thore Graepel are the researchers who proposed and developed the model to “psychometrically” analyze users based on their Facebook likes. At the time this model was published, Kosinski and Stillwell were affiliated with Cambridge University, while Graepel was affiliated with the Cambridge-based Microsoft Research. (None have an association with Cambridge Analytica, according to Cambridge University.)

Mark Zuckerberg is the founder and CEO of Facebook. He founded the website in 2004 from his dorm room at Harvard.

Sheryl Sandberg is the COO of Facebook. She left Google to join the company in March 2008. She became the eighth member of the company’s board of directors in 2012 and is the first woman in that role.

Damian Collins is a Conservative Party politician in the United Kingdom, where he serves as Chair of the House of Commons Culture, Media and Sport Select Committee. Collins ordered the seizure of documents from the American founder of Six4Three while he was traveling in London, then released those documents publicly.

Chris Hughes is one of Facebook’s four co-founders; he originally handled beta testing and feedback for the website before leaving in 2007. Hughes was the first co-founder to call for regulators to break up Facebook.

  • Facebook investigates employee’s ties to Cambridge Analytica (CBS News)
  • Aleksandr Kogan: The link between Cambridge Analytica and Facebook (CBS News)
  • Video: Cambridge Analytica shuts down following data scandal (CBS News)

How have Facebook and Mark Zuckerberg responded to the data privacy scandal?

Each time Facebook finds itself embroiled in a privacy scandal, the general playbook seems to be the same: Mark Zuckerberg delivers an apology, with oft-recycled lines such as “this was a big mistake” or “I know we can do better.” Despite repeated controversies over Facebook’s handling of personal data, the platform has continued to gain new users. This is by design: founding president Sean Parker said at an Axios conference in November 2017 that the first question in building Facebook features was “How do we consume as much of your time and conscious attention as possible?” Parker also likened the design of Facebook to “exploiting a vulnerability in human psychology.”

On March 16, 2018, Facebook announced that SCL and Cambridge Analytica had been banned from the platform. The announcement stated that “Kogan gained access to this information in a legitimate way and through the proper channels that governed all developers on Facebook at that time,” but that passing the information to a third party violated platform policies.

The following day, the announcement was amended to state:

The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.

On March 21, 2018, Mark Zuckerberg posted his first public statement about the issue, stating in part that:

“We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you. I’ve been working to understand exactly what happened and how to make sure this doesn’t happen again.”

On March 26, 2018, Facebook placed full-page ads stating: “This was a breach of trust, and I’m sorry we didn’t do more at the time. We’re now taking steps to ensure this doesn’t happen again,” in The New York Times, The Washington Post, and The Wall Street Journal, as well as The Observer, The Sunday Times, Mail on Sunday, Sunday Mirror, Sunday Express, and Sunday Telegraph in the UK.

In a blog post on April 4, 2018, Facebook announced a series of changes to its data handling practices and API access capabilities. Foremost among these is a limit on the Events API, which is no longer able to access the guest list or wall posts. Additionally, Facebook removed the ability to search for users by phone number or email address and changed the account recovery process to fight scraping.

On April 10, 2018, and April 11, 2018, Mark Zuckerberg testified before Congress. Details about his testimony are in the next section of this article.

On April 10, 2018, Facebook announced the launch of its data abuse bug bounty program. While Facebook has an existing security bug bounty program, this is targeted specifically to prevent malicious users from engaging in data harvesting. There is no limit to how much Facebook could potentially pay in a bounty, though to date the highest amount the company has paid is $40,000 for a security bug.

On May 14, 2018, “around 200” apps were banned from Facebook as part of an investigation into whether companies had abused its APIs to harvest personal information. The company declined to provide a list of the offending apps.

On May 22, 2018, Mark Zuckerberg testified, briefly, before the European Parliament about the data privacy scandal and Cambridge Analytica. The format of the testimony was the subject of derision, as all of the questions were posed before Zuckerberg answered any of them. Guy Verhofstadt, an EU Parliament member representing Belgium, said, “I asked you six ‘yes’ and ‘no’ questions, and I got not a single answer.”

What did Mark Zuckerberg say in his testimony to Congress?

In his Senate testimony on April 10, 2018, Zuckerberg reiterated his apology, stating that “We didn’t take a broad enough view of our responsibility, and that was a big mistake. And it was my mistake. And I’m sorry. I started Facebook, I run it, and I’m responsible for what happens here,” adding in a response to Sen. John Thune that “we try not to make the same mistake multiple times … in general, a lot of the mistakes are around how people connect to each other, just because of the nature of the service.”

Sen. Amy Klobuchar asked if Facebook had determined whether Cambridge Analytica and the Internet Research Agency were targeting the same users. Zuckerberg replied, “We’re investigating that now. We believe that it is entirely possible that there will be a connection there.” According to NBC News, this was the first suggestion of a link between the activities of Cambridge Analytica and the Russian disinformation campaign.

On June 11, 2018, nearly 500 pages of new testimony from Zuckerberg were released, fulfilling his promise to follow up on questions he did not have sufficient information to address during his Congressional testimony. The Washington Post notes that the release “in some instances sidestepped lawmakers’ questions and concerns,” but that the questions being asked were not always relevant, particularly in the case of Sen. Ted Cruz, who attempted to bring attention to Facebook’s donations to political organizations, as well as how Facebook treats criticism of “Taylor Swift’s recent cover of an Earth, Wind and Fire song.”

  • Facebook gave Apple, Samsung access to data about users — and their friends (CNET)
  • Zuckerberg doubles down on Facebook’s fight against fake news, data misuse (CNET)
  • Tech execs react to Mark Zuckerberg’s apology: “I think he’s sorry he has to testify” (CBS News)
  • On Facebook, Zuckerberg gets privacy and you get nothing (ZDNet)
  • 6 Facebook security mistakes to fix on Data Privacy Day (CNET)
  • Zuckerberg takes Facebook data apology tour to Washington (CNET)
  • Zuckerberg’s Senate hearing highlights in 10 minutes (CNET via YouTube)
  • Russian politicians call on Facebook’s Mark Zuckerberg to testify on privacy (CNET)

What is the 2016 US presidential election connection to the Facebook data privacy scandal?

In December 2015, The Guardian broke the story that Cambridge Analytica had been contracted by Ted Cruz’s campaign for the Republican presidential primary. Despite Cambridge Analytica CEO Alexander Nix’s claim in an interview with TechRepublic that the company is “fundamentally politically agnostic and an apolitical organization,” the primary financier of the Cruz campaign was Cambridge Analytica co-founder Robert Mercer, who donated $11 million to a pro-Cruz super PAC. Following Cruz’s withdrawal from the race in May 2016, the Mercer family shifted its support to Donald Trump.

In January 2016, Facebook COO Sheryl Sandberg told investors that the election was “a big deal in terms of ad spend,” and that through “using Facebook and Instagram ads you can target by congressional district, you can target by interest, you can target by demographics or any combination of those.”

In October 2017, Facebook announced changes to its advertising platform, requiring identity and location verification and prior authorization in order to run electoral advertising. In the wake of the fallout from the data privacy scandal, further restrictions were added in April 2018, making “issue ads” regarding topics of current interest similarly restricted.

In secretly recorded conversations by an undercover team from Channel 4 News, Cambridge Analytica’s Nix claimed the firm was behind the “defeat crooked Hillary” advertising campaign, adding, “We just put information into the bloodstream of the internet and then watch it grow, give it a little push every now and again over time to watch it take shape,” and that “this stuff infiltrates the online community, but with no branding, so it’s unattributable, untrackable.” The same exposé quotes Chief Data Officer Alex Tayler as saying, “When you think about the fact that Donald Trump lost the popular vote by 3 million votes but won the electoral college vote, that’s down to the data and the research.”

  • How Cambridge Analytica used your Facebook data to help elect Trump (ZDNet)
  • Facebook takes down fake accounts operated by ‘Roger Stone and his associates’ (ZDNet)
  • Facebook, Cambridge Analytica and data mining: What you need to know (CNET)
  • Civil rights auditors slam Facebook stance on Trump, voter suppression (ZDNet)
  • The Trump campaign app is tapping a “gold mine” of data about Americans (CBS News)

What is the Brexit tie-in to the Facebook data privacy scandal?

AggregateIQ was retained by the Vote Leave campaign during the Brexit referendum, and both The Guardian and the BBC claim that the Canadian company is connected to Cambridge Analytica and its parent organization, SCL Group. UpGuard, the organization that found a public GitLab instance containing AggregateIQ code, has extensively detailed the company’s connection to Cambridge Analytica and its involvement in Brexit campaigning.

Additionally, The Guardian quotes Wylie as saying the company “was set up as a Canadian entity for people who wanted to work on SCL projects who didn’t want to move to London.”

  • Brexit: A cheat sheet (TechRepublic)
  • Facebook suspends another data analytics firm, AggregateIQ (CBS News)
  • Lawmakers grill academic at heart of Facebook scandal (CBS News)

How is Facebook affected by the GDPR?

Like any organization providing services to users in European Union countries, Facebook is bound by the EU General Data Protection Regulation (GDPR). Given the scrutiny Facebook already faces over the Cambridge Analytica scandal, and given that the social media giant’s product is built on personal information, its strategy for GDPR compliance is receiving a great deal of attention from users and from other companies looking for a model of compliance.

While the GDPR in theory applies only to people residing in the EU, Facebook will require all users to review their data privacy settings. According to a ZDNet article, Facebook users will be asked whether they want to see advertising based on partner information, in practice data from websites that feature Facebook’s “Like” buttons. Users globally will be asked whether they wish to continue sharing political, religious, and relationship information, while users in Europe and Canada will be given the option of switching automatic facial recognition back on.

Facebook members outside the US and Canada had heretofore been governed by the company’s terms of service in Ireland. This was reportedly changed before GDPR enforcement began, since Ireland’s status as an EU member would otherwise seemingly make Facebook liable for damages to users internationally.

  • Google, Facebook hit with serious GDPR complaints: Others will be soon (ZDNet)
  • Facebook rolls out changes to comply with new EU privacy law (CBS News)
  • European court strikes down EU-US Privacy Shield user data exchange agreement as invalid (ZDNet)
  • GDPR security pack: Policies to protect data and achieve compliance (TechRepublic Premium)
  • IT pro’s guide to GDPR compliance (free PDF) (TechRepublic)

What are Facebook “shadow profiles?”

“Shadow profiles” are stores of information that Facebook has obtained about people who are not necessarily Facebook users. Their existence was discovered as a result of a bug in 2013: when a user downloaded their Facebook history, they would obtain not just their own address book, but also email addresses and phone numbers for their friends that other people had stored in their address books.

Facebook described the issue in an email to the affected users. This is an excerpt of the email, according to security site Packet Storm:

When people upload their contact lists or address books to Facebook, we try to match that data with the contact information of other people on Facebook in order to generate friend recommendations. Because of the bug, the email addresses and phone numbers used to make friend recommendations and reduce the number of invitations we send were inadvertently stored in their account on Facebook, along with their uploaded contacts. As a result, if a person went to download an archive of their Facebook account through our Download Your Information (DYI) tool, which included their uploaded contacts, they may have been provided with additional email addresses or telephone numbers.

Because of the way Facebook synthesizes data in order to attribute collected data to existing profiles, information about people who do not have Facebook accounts congeals into dossiers, popularly called “shadow profiles.” It is unclear what other sources of input feed these dossiers; “shadow profile” is not a term Facebook uses, according to Zuckerberg in his Senate testimony.
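The contact-matching mechanism behind both friend recommendations and the 2013 bug can be illustrated with a toy sketch. All names, fields, and data structures here are hypothetical, not Facebook’s actual code; the point is that merging uploaded address books keyed on an email address produces a dossier about a person who never signed up.

```python
from collections import defaultdict

# Hypothetical uploaded address books (uploader -> their contacts).
uploads = {
    "alice": [{"name": "Dana", "email": "dana@example.com", "phone": "555-0101"}],
    "bob":   [{"name": "Dana R.", "email": "dana@example.com", "mobile": "555-0199"}],
}

# Merge every uploaded contact into one record per email address.
# Dana has no Facebook account, yet a merged record about her accumulates.
shadow = defaultdict(dict)
for uploader, contacts in uploads.items():
    for contact in contacts:
        record = shadow[contact["email"]]
        for field, value in contact.items():
            if field != "email":
                record.setdefault(field, set()).add(value)
        record.setdefault("seen_in_address_books_of", set()).add(uploader)

dossier = shadow["dana@example.com"]
print(sorted(dossier["seen_in_address_books_of"]))   # ['alice', 'bob']
print(sorted(dossier["phone"] | dossier["mobile"]))  # ['555-0101', '555-0199']
```

The 2013 bug exported exactly this kind of merged field alongside a user’s own uploaded contacts via the Download Your Information tool.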

  • Shadow profiles: Facebook has information you didn’t hand over (CNET)
  • Finally, the world is getting concerned about data privacy (TechRepublic)
  • Firm: Facebook’s shadow profiles are ‘frightening’ dossiers on everyone (ZDNet)

What are the possible implications for enterprises and business users?

Business users and business accounts should be aware that they are as vulnerable as consumers to data exposure: because Facebook harvests and shares metadata, including SMS and voice call records, across the company’s mobile applications, their risk profile is the same as a consumer’s. The stakes could be higher for businesses and employees, however, since incidental or accidental data exposure could expose the company to liability, IP theft, extortion attempts, and cybercriminals.

Though deleting or deactivating Facebook applications won’t prevent the company from creating so-called advertising “shadow profiles,” it will prevent the company from capturing geolocation and other sensitive data. For actionable best practices, contact your company’s legal counsel.

  • Social media policy (TechRepublic Premium)
  • Want to attain and retain customers? Adopt data privacy policies (TechRepublic)
  • Hiring kit: Digital campaign manager (TechRepublic Premium)
  • Photos: All the tech celebrities and brands that have deleted Facebook (TechRepublic)

How can I change my Facebook privacy settings?

According to Facebook, in 2014 the company removed the ability for apps used by a person’s friends to collect information about that person. If you wish to disable third-party use of Facebook altogether, including Login With Facebook and apps such as Tinder that rely on Facebook profiles, this can be done in the Settings menu under Apps And Websites. The Apps, Websites And Games field has an Edit button; click that, and then click Turn Off.

Facebook has been proactively notifying users who had their data collected by Cambridge Analytica, though users can manually check whether their data was shared by going to this Facebook Help page.

Facebook is also developing a Clear History button, which the company indicates will clear “their database record of you.” CNET and CBS News Senior Producer Dan Patterson noted on CBSN that “there aren’t a lot of specifics on what that clearing of the database will do, and of course, as soon as you log back in and start creating data again, you set a new cookie and you start the process again.”

To gain a better understanding of how Facebook handles user data, including what options can and cannot be modified by end users, it may be helpful to review Facebook’s Terms of Service, as well as its Data Policy and Cookies Policy.

  • Ultimate guide to Facebook privacy and security (Download.com)
  • Facebook’s new privacy tool lets you manage how you’re tracked across the web (CNET)
  • Securing Facebook: Keep your data safe with these privacy settings (ZDNet)
  • How to check if Facebook shared your data with Cambridge Analytica (CNET)

Note: This article was written and reported by James Sanders and Dan Patterson. It was updated by Brandon Vigliarolo.



Privacy, Technology, and School Shootings: An Ethics Case Study

The ethics of social media monitoring by school districts.


In the wake of recent school shootings that terrified both campus communities and the broader public, some schools and universities are implementing technical measures in the hope of reducing such incidents. Companies are pitching various services for use in educational settings; those services include facial recognition technology and social media monitoring tools that use sentiment analysis to try to identify (and forward to school administrators) student posts on social media that might portend violent actions.

A New York Times article notes that “[m]ore than 100 public school districts and universities … have hired social media monitoring companies over the past five years.” According to the article, the costs for such services range from a few thousand dollars to tens of thousands per year, and the programs are sometimes implemented by school districts without prior notification to students, parents, or school boards.

The social media posts that are monitored are public, and the monitoring tools rely on algorithms to analyze them.

A Wired magazine article titled “Schools Are Mining Students’ Social Media Posts for Signs of Trouble” cites Amanda Lenhart, a scholar who notes that research has shown “that it’s difficult for adults peering into those online communities from the outside to easily interpret the meaning of content there.” She adds that in the case of the new tools being offered to schools and universities, the problem “could be exacerbated by an algorithm that can’t possibly understand the context of what it was seeing.”
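Lenhart’s point about context is easy to demonstrate. The sketch below is a deliberately naive keyword flagger, not any vendor’s actual product: it fires on harmless posts that contain a watchword and misses a worrying post that contains none.

```python
import re

# Hypothetical watchlist a naive monitoring tool might use.
WATCHWORDS = {"shoot", "bomb", "kill"}

def flag(post: str) -> bool:
    """Flag a post if it contains any watchword, ignoring all context."""
    tokens = set(re.findall(r"[a-z']+", post.lower()))
    return bool(tokens & WATCHWORDS)

# The same keywords fire on a gaming joke and a photography plan...
print(flag("that boss fight was brutal, took me 20 tries to kill it"))  # True
print(flag("going to shoot some photos of the sunset tonight"))         # True
# ...while a post a counselor might actually worry about slips through.
print(flag("nobody would even notice if i wasn't at school tomorrow"))  # False
```

More sophisticated sentiment analysis narrows this gap but does not close it; the algorithm still has no access to the social context Lenhart describes.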

Others have also expressed concerns about the effectiveness of the monitoring programs and about the impact they might have on the relationship between students and administrators. Educational organizations, however, are under pressure to show their communities that they are doing all they can to keep their students safe.

Discussion Questions

Are there some rights that come into conflict in this context? If so, what are they? What is the appropriate balance to strike between them? Why?

Do efforts like social media monitoring serve the common good? Why, or why not? For a brief explanation of this concept, read “The Common Good.”

Does the fact that the social media posts being analyzed are public impact your analysis of the use of the monitoring technology? If so, in what way(s)?

Should universities not just notify students but also ask them for their input before implementing monitoring of student social media accounts? Why or why not?

Should high schools ask students for their input? Should they ask the students’ parents for consent? Why or why not?

According to The New York Times , a California law requires schools in the state “to notify students and parents if they are even considering a monitoring program. The law also lets students see any information collected about them and tells schools to destroy all data on students once they turn 18 or leave the district.” If all states were to pass similar laws, would that allay concerns you might have had about the monitoring practices otherwise? Why or why not?

Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics.

Photo by AP Images/Seth Wenig

  • Defining Privacy
  • Privacy: A Quiz
  • Loss of Online Privacy: What's the Harm?
  • Nothing to Hide
  • Do You Own Your Data?
  • How to Protect Your Online Privacy
  • The Ethics of Online Privacy Protection

Additional Resources

  • Suggested Reading and Viewing Lists
  • A Framework for Thinking Ethically

PrivacyEnd

Case Study: Facebook’s User Control Issues – A Deep Dive

It’s remarkable that, at one point, Facebook served more than a billion monthly active users globally with a staff of fewer than 5,000 employees. In “Case Study: Facebook’s User Control Issues – A Deep Dive,” we delve into the intricate web of challenges surrounding Facebook’s user control issues. As one of the world’s largest social media platforms, Facebook has continuously faced scrutiny and criticism regarding its policies and practices concerning user data privacy, content moderation, and algorithmic transparency. By conducting a deep dive into these issues, we aim to shed light on the complexities at play, explore the impacts on users and society, and identify potential avenues for improvement. Join us as we embark on an insightful journey into the heart of Facebook’s user control dilemmas.


The Evolution of Facebook’s Policies

The evolution of Facebook’s policies has been a dynamic process shaped by regulatory changes, user feedback, and technological advancements. Initially, the platform faced mounting privacy concerns as user control issues came to the forefront. Users expressed apprehensions about how their data was being shared with third-party apps, leading to a reevaluation of Facebook’s data-sharing practices. This shift was not only driven by user feedback but also by increasing privacy regulations globally.

As Facebook adapted its policies to address these issues, it also had to navigate the delicate balance of maintaining advertising revenue, a core part of its business model. The platform introduced new features that gave users more control over their privacy settings, such as the ability to limit the data shared with third-party apps. Simultaneously, Facebook implemented stricter guidelines for developers to ensure better protection of user data. The evolving landscape of privacy regulations continues to shape Facebook’s policies, highlighting the ongoing challenges in balancing user control, privacy concerns, and advertising revenue.

Facebook’s Data Breaches and Fallout

Navigating through the aftermath of data breaches, Facebook grappled with the repercussions on user trust and platform integrity.

The following points shed light on the impact of these breaches:

User Privacy Compromised

Facebook’s data breaches led to a significant compromise of user privacy, as personal information was accessed without consent, raising concerns about the platform’s ability to protect sensitive data.

Impact on Business Model

The data breaches not only tarnished Facebook’s reputation but also highlighted flaws in its business model. The reliance on user data for targeted advertising came under scrutiny, prompting a reassessment of the company’s practices.

Response From Mark Zuckerberg

As the face of Facebook, Mark Zuckerberg faced intense scrutiny and pressure to address the fallout from the data breaches. His handling of the situation, including testifying before Congress, played a crucial role in shaping the company’s future actions regarding data protection and user privacy.

Facebook’s Privacy Settings Complexity

Grasping the complexity of Facebook’s privacy settings requires a detailed examination of the platform’s intricacies and user interface design. The intricate web of privacy settings often leaves users bewildered, especially when they first join Facebook, struggling to navigate a labyrinth of options to protect their data adequately. This complexity contributes significantly to the user control issues that have plagued Facebook, leading to concerns about how effectively the platform can protect user privacy.

The convoluted nature of Facebook’s privacy settings also raises questions about the ease with which users can inadvertently share their information beyond their intended audience. The platform’s history of controversial data-sharing practices has further exacerbated these concerns. Simplifying the privacy settings and enhancing user education on how to effectively manage their data could go a long way in addressing these issues and rebuilding trust among users. As Facebook continues to grapple with balancing user control, privacy protection, and data sharing, a transparent and user-centric approach will be crucial in mitigating privacy risks and fostering a more secure online environment.


Facebook User Consent Challenges

Amidst the evolving landscape of digital privacy regulations, Facebook faces challenges in obtaining user consent for data processing. The platform’s user control issues have been highlighted, especially concerning how Facebook users’ data is accessed and utilized by third-party apps. This has raised significant privacy concerns and led to scrutiny from users and regulatory bodies alike.

Below are key challenges Facebook encounters in securing user consent:

  • Complexity of privacy settings
  • Ambiguous consent requests
  • Limited user awareness

These challenges emphasize the importance of enhancing user consent mechanisms and improving transparency to address Facebook’s user control issues effectively.

Impact of Facebook’s User Control Issues on User Engagement

As users become more aware of their lack of control over their data on Facebook, their trust in the platform diminishes, impacting their willingness to engage actively. This lack of user control over how their data is utilized has led to concerns about privacy and the misuse of personal information. Consequently, users may be less inclined to interact with content on the platform, affecting their overall engagement levels.

One significant area where this impact is evident is the news feed algorithm. With users feeling uneasy about how their data is being leveraged, they may be less likely to spend time scrolling through their news feed, interacting with posts, or sharing content. This reduced engagement can have far-reaching implications for Facebook as a social network, as user interaction and time spent on the platform are crucial metrics for its success. Ultimately, the erosion of user trust due to control issues can severely hinder Facebook’s ability to maintain high levels of user engagement.

Effects of Trust Erosion on Facebook

User trust in Facebook gradually diminishes as users increasingly question the platform’s data practices and transparency. The effects of trust erosion on Facebook are profound and have wide-reaching implications for the global community that relies on the platform for social interaction.

Here are key consequences of trust erosion on Facebook:

  • As trust in Facebook wanes due to user control issues and privacy concerns, users are becoming more hesitant to engage with the platform actively. This reluctance to interact can lead to a decline in user-generated content and overall participation within the social media ecosystem.
  • Trust erosion on Facebook can also result in a loss of brand loyalty among users and advertisers. Brands may be reluctant to associate themselves with a platform that faces scrutiny for its data practices, potentially impacting Facebook’s revenue streams.
  • The trust erosion on Facebook can weaken the sense of community and connection among its users, affecting the platform’s ability to foster a vibrant and engaged global social network. Trust is the foundation of any social media platform, and its erosion can have far-reaching consequences on the platform’s viability and relevance in the digital age.

Regulatory Responses on Facebook User Control Issues

Regulatory responses to user control issues on Facebook have been the subject of intense scrutiny and action by various governmental bodies and regulatory agencies worldwide.

Here are some major regulatory responses:

GDPR Compliance


The General Data Protection Regulation (GDPR) implemented by the European Union (EU) in 2018 significantly impacts Facebook’s operations, especially concerning user control over personal data. GDPR mandates strict requirements for user consent, transparency, and control over personal data processing. Failure to comply can result in substantial fines. Facebook has had to adjust its practices and policies to adhere to GDPR standards, offering users more control over their data.
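Those GDPR consent requirements, that consent be specific to a purpose, revocable, and demonstrable after the fact, translate directly into engineering work. Below is a minimal sketch of a per-purpose consent ledger, assuming an append-only event log with the latest event winning; this is a hypothetical design, not Facebook’s implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One consent decision: who, for what purpose, granted or withdrawn."""
    user_id: str
    purpose: str          # e.g. "facial_recognition", "partner_ads"
    granted: bool
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class ConsentLedger:
    def __init__(self):
        self._events: list[ConsentRecord] = []

    def record(self, user_id: str, purpose: str, granted: bool) -> None:
        # Append-only: keeps the audit trail GDPR demands.
        self._events.append(ConsentRecord(user_id, purpose, granted))

    def has_consent(self, user_id: str, purpose: str) -> bool:
        """Latest event wins; no event means no consent (opt-in by default)."""
        for event in reversed(self._events):
            if event.user_id == user_id and event.purpose == purpose:
                return event.granted
        return False

ledger = ConsentLedger()
ledger.record("u1", "facial_recognition", True)
ledger.record("u1", "facial_recognition", False)   # user later withdraws
print(ledger.has_consent("u1", "facial_recognition"))  # False
print(ledger.has_consent("u1", "partner_ads"))         # False: never asked
```

Opting in by default to nothing (no event means no consent) mirrors GDPR’s rule that silence or inactivity cannot constitute consent.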

Antitrust Investigations

Governments, particularly in the United States and the European Union, have launched antitrust investigations into Facebook’s market dominance. These investigations focus on Facebook’s control over user data and the impact on competition and user choice. The scrutiny could lead to regulatory measures aimed at enhancing user control by potentially forcing Facebook to loosen its grip on user data or even break up the company.

Privacy Regulations

In response to growing concerns about user privacy and control over personal data, various countries have introduced or strengthened privacy regulations. For instance, California’s Consumer Privacy Act (CCPA) and its updated version, the California Privacy Rights Act (CPRA), grant users more control over their personal information, including the right to access, delete, and opt-out of data sharing. Similar privacy laws in other jurisdictions aim to empower users with more control over their data on platforms like Facebook.

User Backlash on Facebook’s User Control Controversy

Amidst the controversy surrounding Facebook’s handling of personal data, scrutiny has intensified on the platform’s approach to user control, prompting an in-depth analysis of the user backlash.

The user backlash analysis delves into the following key aspects:

Impact on User Base

The user backlash has significantly impacted Facebook’s user base, with some users choosing to deactivate their accounts or reduce their usage of the platform. This shift in user behavior highlights the significance of addressing user control issues promptly.

Social Context

The analysis also considers the broader social context in which Facebook operates. Public perception of data privacy and user control has evolved, shaping users’ expectations and demands of social media platforms like Facebook. Understanding this social context is crucial for Facebook to regain user trust and credibility.

Transparency Efforts Evaluation

The evaluation of Facebook’s transparency efforts reveals a quantifiable improvement in disclosing data practices to users. In response to mounting user control issues and privacy concerns, Facebook has made strides in enhancing its transparency mechanisms.

The Electronic Frontier Foundation (EFF) commended Facebook for its increased transparency efforts, acknowledging the company’s efforts to provide users with more information about how their data is being utilized. By implementing clearer privacy settings and making it easier for users to access and understand their privacy options, Facebook has taken a step in the right direction.

However, challenges remain, particularly in ensuring transparency regarding data sharing with third-party websites. While Facebook has made progress in this area, there is still room for improvement. Continued monitoring and assessment of Facebook’s transparency efforts will be crucial in addressing ongoing user control issues and enhancing trust among users concerned about their privacy on the platform.


Facebook’s User Empowerment Initiatives

After many complaints, Facebook launched extensive advertising campaigns to inform users about privacy settings and options for controlling their data, aiming to give users the understanding necessary to safeguard their information. Within the platform itself, users are being empowered through personalized privacy controls. This strategic move aims to address the user control and privacy concerns that have plagued the platform in the past. By granting users more control over their data and interactions, Facebook is taking steps toward rebuilding trust and fostering a more user-centric approach.

Here are some initiatives that showcase Facebook’s commitment to empowering its users:

  • Customized privacy settings
  • News feed preferences
  • Enhanced data management tools

Facebook’s Future User Control Strategies

In response to user control issues and privacy concerns, Facebook is focusing on providing users with more control over their data and the content they interact with on the platform. One of the key strategies is to improve privacy settings, making it easier to manage who can access user data, and how it is used. Additionally, Facebook is working on implementing more personalized controls that allow users to tailor their experience based on their preferences.

To address relevant content concerns, Facebook is exploring ways to enhance its algorithms to ensure that users receive more meaningful and accurate information. By giving users the ability to customize their news feeds and filter out unwanted content, Facebook aims to create a more personalized and enjoyable user experience while also promoting user data protection.

Facebook User Control: Industry Comparisons

When comparing user control across social media platforms, Facebook stands out due to its immense user base and corresponding influence. Despite offering a variety of privacy settings and controls, Facebook has faced criticism for its handling of user data and privacy breaches, which have led to regulatory actions and public backlash. In contrast, platforms like Twitter and LinkedIn have relatively fewer user control issues, with clearer privacy policies and less extensive data collection practices. Instagram, owned by Facebook, shares some user control challenges, but its focus on visual content and younger demographics presents distinct considerations. Overall, while various social media platforms grapple with user control issues, Facebook’s scale and history of controversies place it under greater scrutiny within the industry.

Recommendations and Conclusions

To effectively address user control issues in social media platforms, it is imperative for companies to consistently prioritize and invest in enhancing data protection measures. Ensuring user privacy and security is paramount in fostering trust and maintaining a loyal user base.

Based on the analysis of Facebook’s user control issues, the following recommendations and conclusions can be drawn:

  • Implement clear and transparent data practices to provide users with full visibility and control over their personal information. This includes easy-to-understand privacy settings and regular updates on how data is collected and used.
  • Strengthen security measures to safeguard user data from unauthorized access and potential breaches. Regular security audits, encryption protocols, and proactive monitoring can help mitigate security risks.
  • Empower users by offering granular control over their data, such as the ability to opt-in or opt-out of certain data collection practices. Providing users with choices and control can enhance their trust in the platform’s commitment to data protection.

Frequently Asked Questions

What Are the Consequences of Facebook’s User Control Issues on Its Reputation and User Trust?

The consequences include a decline in user trust, regulatory scrutiny, fines, and potential long-term impacts on user engagement and advertising revenue. Negative perceptions of Facebook’s handling of user data could also hinder its ability to attract new users and business partners.

What Are the Potential Long-Term Consequences of Facebook’s User Control Issues on the Company’s Overall Reputation and User Trust?

Facebook’s user control issues could lead to long-term damage to the company’s reputation and erode user trust. Continual breaches of user privacy and control could drive users away from the platform, leading to a decline in user engagement and advertising revenue. Additionally, negative perceptions of Facebook’s handling of user data could hinder its ability to attract new users and partners, impacting its long-term growth prospects.

How Does Facebook Plan to Address User Control Issues in the Future to Prevent Similar Incidents From Occurring?

Facebook aims to address user control issues by implementing stricter privacy controls, enhancing transparency in data practices, and providing users with more granular control over their data. The company plans to invest in advanced technologies such as artificial intelligence to detect and prevent misuse of user data proactively. Additionally, Facebook continues to collaborate with regulators and privacy advocates to develop and implement best practices that safeguard user control and privacy on its platform.

In recent years, Facebook has faced intense scrutiny and criticism regarding user control issues, ranging from privacy breaches to allegations of manipulating user data for political purposes. These controversies have eroded trust in the platform and led to regulatory investigations and fines worldwide. Facebook’s responses have included implementing stricter privacy policies, enhancing user control features, and engaging in public relations efforts to rebuild trust, but the issues persist, highlighting ongoing challenges in balancing user control with business interests.


Columbia Journalism Review

A professor is suing Facebook over its recommendation algorithms


Facebook users are probably aware that what they see in their news feeds is determined by the company’s recommendation algorithms. (Well, most users.) Many are accustomed to this fact, but some believe that there are alternatives to this kind of centralized control. Ethan Zuckerman is among them—and that’s why he and the Knight First Amendment Institute at Columbia recently filed a lawsuit against Meta, Facebook’s parent company, asking a court to empower users to employ third-party tools to filter their news feeds. The suit relies on a novel interpretation of Section 230 of the Communications Decency Act, which was initially designed to protect digital platforms from legal liability for content posted by users.

Zuckerman is not just any Facebook user. He is an associate professor of public policy at the University of Massachusetts Amherst and the director of the school’s Initiative for Digital Public Infrastructure; previously, he led the Center for Civic Media at the Massachusetts Institute of Technology and was a fellow at the Berkman Klein Center for Internet and Society at Harvard. In a New York Times op-ed published last week, Zuckerman wrote that the Facebook algorithm “forgets friends I want to hear from, becomes obsessed with people to whom I’m only loosely connected, and generally feels like an obstacle to how I’d like to connect with my friends.” But his lawsuit is about more than that, he says. If it succeeds, he argues , “we can decide how social media works for us and for our children through tools we can control,” instead of being at the mercy of The Algorithm.

Zuckerman told me this week that he got the idea for the lawsuit after Louis Barclay, a British software developer, came up with a browser extension called Unfollow Everything , which allowed users to undo some of the workings of Facebook’s algorithm. Meta blocked the extension, describing it as a breach of its terms of service, and banned Barclay from the platform permanently. The more he looked into the decision, the more Zuckerman felt that it was wrong—not only ethically, but legally. Barclay “produced something genuinely helpful,” Zuckerman said, “and I felt there should be a legal argument about whether he could do that or not.”

Zuckerman’s “aha moment” came while he was teaching a class about Section 230, in which he asked his students to go through the text of the law line by line. One section in particular jumped out at him, in which the law states that electronic platforms will not be held liable for “any action taken to enable or make available to information content providers or others the technical means to restrict access to” certain kinds of material. For Zuckerman—and some of the legal experts he later consulted at the Knight First Amendment Institute—this language opened the door to protect third-party software (sometimes called “middleware”) that would allow users to control their Facebook feeds based on their own criteria.

Section 230 goes on to state that one of its goals is to “encourage the development of technologies which maximize user control over what information is received” by individuals using the internet. Zuckerman argues in his Times op-ed that by including these clauses, Congress clearly intended to promote the development of tools that would enable users to curate their online experiences. If software such as Unfollow Everything is allowed, Zuckerman wrote, “we could have better control over what we see on social media,” which in turn might help create “a more civic-minded internet.” Giving users more control, he added , is a way to establish “more of an equilibrium in an online world that is increasingly out of kilter.” (Zuckerman wrote for CJR about building a more honest internet back in 2019 .)

Zuckerman told me that he has designed a program similar to Unfollow Everything that allows users to control their Facebook feeds and to opt in to a research program designed by Zuckerman that will use their browsing data to study how their behavior changes when Facebook’s algorithms are removed. In addition to banning third-party software like the extension designed by Barclay, Meta has, in the past, taken similar action against researchers who have tried to use software to study what happens on Facebook: in 2021, the company not only blocked a program that Laura Edelson, a professor at New York University, and colleagues were using to study behavior on the platform, but also suspended the personal Facebook accounts of some of the researchers, as I wrote for CJR at the time .

Zuckerman’s lawsuit is essentially a gambit to preempt this type of antagonistic response. He and the Knight First Amendment Institute are asking the court to rule that software like Zuckerman’s version of Unfollow Everything is legal so that Meta can’t block it, ban Zuckerman from the platform personally, or use a law such as the federal Computer Fraud and Abuse Act to sue him for offering software that breaches their terms of service.

He concedes that it is impossible to know whether the suit will succeed. “I just don’t know,” he told me. “I’m pretty sure we’re right on matters of law, but obviously it could go differently once we get to court.” When he first proposed the idea, Zuckerman said, a lot of people thought he was unlikely to succeed—but some observers have since come around to his way of thinking. “This is not a stunt,” he said. “We’re planning on winning this.” If a judge denies Zuckerman a preemptive ruling protecting him from Meta’s wrath, he says, he’s willing to consider releasing the software anyway and taking his chances. (Meta has yet to comment on Zuckerman’s suit.)

Zuckerman isn’t alone in his impression of legal viability. Mike Masnick, of Techdirt , wrote that the case seemed to take much of the legal community by surprise but that several of his legal sources have come to believe that it makes some sense, after initially dismissing it as a “crazy legal theory.” Others echoed Zuckerman’s inability to predict what might happen in court. Jeff Kosseff, a professor at the Naval Academy and author of a book on Section 230, told the Washington Post that Zuckerman’s case relies on an interpretation of the law that has yet to be tested, and said that he isn’t aware of anyone having used clauses in the law to win a preemptive declaration from a court before a case is even launched. Kosseff said he couldn’t even hazard a guess as to how this case might end.

Sophia Cope, a staff attorney at the Electronic Frontier Foundation, a digital rights group, told Wired that most of Section 230 has been clarified in the courts, but that not many cases have dealt with the part of the law on which Zuckerman’s lawsuit is based. Meta has argued that third-party software such as Unfollow Everything raises security and privacy concerns. But Daphne Keller, director of the Program on Platform Regulation at Stanford’s Cyber Policy Center, told Wired that Zuckerman’s tool is unlikely to run into this problem because it simply unfollows Facebook users, an act that shouldn’t pose any privacy or security issues.

According to Masnick, Zuckerman’s case embodies a principle that some have called “adversarial interoperability,” or the idea that new online services should be allowed to interoperate with existing ones—even if the companies that run the latter haven’t explicitly allowed them to do so—provided that they serve the needs of users. If the court buys Zuckerman’s interpretation of Section 230, Masnick argues, then it could “make the open Web a lot more open, while chipping away at the centralized control of the biggest tech companies.” And a win might even make up for the fact that Zuckerman was responsible for a scourge of the early internet: the pop-up ad.

Other notable stories:

  • Yesterday, Robert Fico, the prime minister of Slovakia, was shot and rushed to the hospital; officials said overnight that his condition is stable, but according to the New York Times , a Slovakian TV station has reported that his life remains in the balance and that hospital staffers’ phones have been confiscated to curb leaks about his condition. As we’ve reported before , Fico, a press-bashing populist, was ousted from power following the 2018 assassination of the investigative journalist Ján Kuciak, but he returned as prime minister last year and has since sought to neuter the public broadcaster; a video that circulated of the alleged gunman yesterday suggested that he opposed Fico’s media overhaul and other policies. Political allies of Fico’s have in part blamed the liberal media for the shooting, with one warning that “there will be some changes to the media.”
  • Also yesterday, President Biden announced that he would not participate in the typical fall debate schedule organized by the Commission on Presidential Debates, but would be open to debates on individual networks. Surprisingly, within a few hours, Biden and Donald Trump had agreed to two debates—one on CNN in June; the other on ABC in September— after those networks scrambled to set terms and box out competitors . (ABC will allow some competitors to simulcast its debate.) Trump and the Republican Party long ago soured on the commission that typically organizes the debates , and Politico reports that Biden and his allies have soured on it, too, pointing, among other areas of contention, to Trump being allowed to debate without testing for COVID in 2020.
  • Jim Rutenberg and Michael M. Grynbaum, of the Times , report on tensions between MSNBC, the unapologetically liberal cable channel, and NBC News, which sits under the same corporate parent but strives to present itself as a straight news outfit . “NBC’s traditional political journalists have cycled between rancor and resignation that the cable network’s partisanship…will color perceptions of their straight news reporting,” Rutenberg and Grynbaum report. “Local NBC stations between the coasts have demanded, again and again, that executives in New York do more to preserve NBC’s nonpartisan brand, lest MSNBC’s blue-state bent alienate their red-state viewers.”
  • Slate is out with “Sly as Fox,” a series of articles about “the perils of underestimating Fox News in 2024.” In one essay, Justin Peters pushes back on the “myth” that the network’s power has waned at the hands of media-industry forces and Trump’s takeover of the Republican Party and traditional conservatism. “Sure, Fox couldn’t persuade right-wing voters to abandon Trump and vote for Ron DeSantis,” Peters argues. But “Fox News has rarely been an instrument of direct electoral action. Its greatest power is flexed differently, and it is still capable of deploying that power toward malignant ends.”
  • And a court in Guatemala ordered that José Rubén Zamora, a veteran muckraking journalist and founder of the newspaper elPeriódico , be released from prison while he awaits a retrial on money-laundering charges against him that were overturned last year. Supporters of Zamora, who has been behind bars for nearly two years since his initial arrest, have suggested that the charges were brought in retribution for his critical coverage of former president Alejandro Giammattei. (We covered the case in January .)


Business school teaching case study: Unilever chief signals rethink on ESG



Gabriela Salinas and Jeeva Somasundaram


In April this year, Hein Schumacher, chief executive of Unilever, announced that the company was entering a “new era for sustainability leadership”, and signalled a shift from the central priority promoted under his predecessor , Alan Jope.

While Jope treated a lack of social purpose or environmental sustainability as grounds for pruning brands from the portfolio, Schumacher has adopted a more balanced approach between purpose and profit. He stresses that Unilever should deliver on both sustainability commitments and financial goals. This approach, which we dub “realistic sustainability”, aims to balance long- and short-term environmental goals, ambition, and delivery.

As a result, Unilever’s refreshed sustainability agenda focuses harder on fewer commitments that the company says remain “very stretching”. In practice, this entails extending deadlines for taking action as well as reducing the scale of its targets for environmental, social and governance measures.

Such backpedalling is becoming widespread — with many companies retracting their commitments to climate targets , for example. According to FactSet, a US financial data and software provider, the number of US companies in the S&P 500 index mentioning “ESG” on their earnings calls has declined sharply : from a peak of 155 in the fourth quarter 2021 to just 29 two years later. This trend towards playing down a company’s ESG efforts, from fear of greater scrutiny or of accusations of empty claims, even has a name: “greenhushing”.

Test yourself

This is the fourth in a series of monthly business school-style teaching case studies devoted to the responsible business dilemmas faced by organisations. Read the piece and FT articles suggested at the end before considering the questions raised.

About the authors: Gabriela Salinas is an adjunct professor of marketing at IE University; Jeeva Somasundaram is an assistant professor of decision sciences in operations and technology at IE University.

The series forms part of a wider collection of FT ‘instant teaching case studies ’, featured across our Business Education publications, that explore management challenges.

The change in approach is not limited to regulatory compliance and corporate reporting; it also affects consumer communications. While Jope believed that brands sold more when “guided by a purpose”, Schumacher argues that “we don’t want to force fit [purpose] on brands unnecessarily”.

His more nuanced view aligns with evidence that consumers’ responses to the sustainability and purpose communication attached to brand names depend on two key variables: the type of industry in which the brand operates; and the specific aspect of sustainability being communicated.

In terms of the sustainability message, research in the Journal of Business Ethics found consumers can be less interested when product functionality is key. Furthermore, a UK survey in 2022 found that about 15 per cent of consumers believed brands should support social causes, but nearly 60 per cent said they would rather see brand owners pay taxes and treat people fairly.

Among investors, too, “anti-purpose” and “anti-ESG” sentiment is growing. One (unnamed) leading bond fund manager even suggested to the FT that “ESG will be dead in five years”.

Media reports on the adverse impact of ESG controversies on investment are certainly now more frequent. For example, while Jope was still at the helm, the FT reported criticism of Unilever by influential fund manager Terry Smith for displaying sustainability credentials at the expense of managing the business.

Yet some executives feel under pressure to take a stand on environmental and social issues — in many cases believing they are morally obliged to do so or through a desire to improve their own reputations. This pressure may lead to a conflict with shareholders if sustainability becomes a promotional tool for managers, or for their personal social responsibility agenda, rather than creating business value .

Such opportunistic behaviours may lead to a perception that corporate sustainability policies are pursued only because of public image concerns.

Alison Taylor, at NYU Stern School of Business, recently described Unilever’s old materiality map — a visual representation of how companies assess which social and environmental factors matter most to them — to Sustainability magazine. She depicted it as an example of “baggy, vague, overambitious goals and self-aggrandising commitments that make little sense and falsely suggest a mayonnaise and soap company can solve intractable societal problems”.

In contrast, the “realism” approach of Schumacher is being promulgated as both more honest and more feasible. Former investment banker Alex Edmans, at London Business School, has coined the term “rational sustainability” to describe an approach that integrates financial principles into decision-making, and avoids using sustainability primarily for enhancing social image and reputation.

Such “rational sustainability” encompasses any business activity that creates long-term value — including product innovation, productivity enhancements, or corporate culture initiatives, regardless of whether they fall under the traditional ESG framework.

Similarly, Schumacher’s approach aims for fewer targets with greater impact, all while keeping financial objectives in sight.

Complex objectives, such as having a positive impact on the world, may be best achieved indirectly, as expounded by economist John Kay in his book, Obliquity . Schumacher’s “realistic sustainability” approach means focusing on long-term value creation, placing customers and investors to the fore. Saving the planet begins with meaningfully helping a company’s consumers and investors. Without their support, broader sustainability efforts risk failure.

Questions for discussion

Read: Unilever has ‘lost the plot’ by fixating on sustainability, says Terry Smith

Companies take step back from making climate target promises

The real impact of the ESG backlash

Unilever’s new chief says corporate purpose can be ‘unwelcome distraction ’

Unilever says new laxer environmental targets aim for ‘realism’

How should business executives incorporate ESG criteria in their commercial, investor, internal, and external communications? How can they strike a balance between purpose and profits?

How does purpose affect business and brand value? Under what circumstances or conditions can the impact of purpose be positive, neutral, or negative?

Are brands vehicles by which to drive social or environmental change? Is this the primary role of brands in the 21st century or do profits and clients’ needs come first?

Which categories or sectors might benefit most from strongly articulating and communicating a corporate purpose? Are there instances in which it might backfire?

In your opinion, is it necessary for brands to take a stance on social issues? Why or why not, and when?


Zabbix Blog

Case Study: Monitoring with Zabbix and AI


Artificial intelligence (AI) and data monitoring are working together to digitally transform relationships, businesses, and people. In telecommunications, predictive analysis based on data collection plays a crucial role in development. Starting with version 6.0 of Zabbix, users have benefited from updates in predictive functions and machine learning, which make it possible for them to study the data monitored by Zabbix and integrate it with AI modules.
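Those predictive trigger functions are the natural entry point for this kind of analysis. As a rough sketch (the host name and item keys below are hypothetical), Zabbix 6.0 trigger expressions for trend forecasting might look like:

```
# Alert if inbound traffic on an OLT uplink is forecast to exceed
# 1 Gbps within the next hour, fitted over the last day of history:
forecast(/olt-sp-01/net.if.in[uplink0],1d,1h) > 1e9

# Warn when free disk space on a collector is predicted to run out
# in under four hours, based on the last hour of data:
timeleft(/olt-sp-01/vfs.fs.size[/,free],1h,0) < 4h
```

Expressions like these let Zabbix itself flag trends before they become incidents, while the raw history remains available to external AI modules.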

Danilo Barros, co-founder of Lunio (a Zabbix Certified Partner in Brazil), presented the results of using Zabbix combined with telecom data monitoring through AI and machine learning at Zabbix Conference Brazil in 2022. Keep reading to get the whole story!


The scenario

With over 600 OLTs (Optical Line Terminals – the fiber-optic infrastructure used by internet providers) as well as 400,000 customers across more than 800 cities and 20 states in Brazil, Lunio’s client manages a staggering amount of data. This monitoring is essential for smooth operations and to guarantee that there are no negative impacts on users and no overload for customer service agents in the event of incidents.

A primary challenge for telecom clients is the overload of calls to customer service in the event of massive network incidents. With so many customers, every precaution must be taken to avoid clogging phone lines during outages or service failures.

“You can’t achieve customer satisfaction under such circumstances, and the Net Promoter Score (NPS) drops drastically.”   Danilo Barros, co-founder of Lunio

Mapping needs

Considering the client’s operational structure, a series of customer needs were identified, focusing on six main points:

1. Automation: With notifications via digital channels for each event
2. Speed: Aiming for improved customer service
3. Operational costs: Budget optimization
4. Root cause analysis: Quick identification of the cause of events
5. Predictability: The ability to analyze problems and identify trends
6. Reporting: Identifying incidents and following regulations from ANATEL (National Telecommunications Agency)

With these interests in mind, it was possible to reassess the use of tools previously employed by the telecom client, which at the time served unique functions in the process. Each tool had its usage and information verification time, which could impact hundreds of users in a massive-scale incident. The key challenges identified by the Lunio team included:

  • Integrations: Systems needed to be interconnected
  • Integrity: Constant data updates
  • Topology: With system mapping through specific programs
  • Business rules: Respecting the development of local processes
  • Performance: The monitoring and automation of 600,000 assets
  • High availability: Dozens of data centers catering to local demand

Once the needs and challenges were identified, it was time to promote change within the client. By integrating systems and using Zabbix to monitor over 600,000 items, understand incidents, and predict potential future errors, the technical teams at Lunio created LunioAI, a “super attendant” with analytical and predictive capabilities as well as the ability to continuously learn.

“This guy (LunioAI) learns from each event, understanding each topology that occurs in the client’s network.”   Danilo Barros, co-founder of Lunio

In the initial response tests, LunioAI was able to analyze and evaluate massive events in a minute and a half. Over time, this was reduced to 30 seconds, making the return to the technical team increasingly swift and positively impacting incident resolution.

The results

Throughout the development and improvement of LunioAI, the operations chain was involved in predictive analyses of potential events on the network, providing technical professionals with the information needed to perform preventive maintenance on monitored items.

LunioAI considers data from integrated systems, FTTH (fiber-to-the-home) environments, data centers, and monitored items, all as part of the Zabbix monitoring environment. It can then diagnose events, understand their severity, and find resolution points, without the need for human involvement in the process.
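The article doesn't describe LunioAI's internals, but the core idea of diagnosing an event from topology can be sketched roughly: given alerts on several leaf devices, walk each device's path toward the network core and report the deepest node they all share as the likely root cause. The topology, device names, and functions below are hypothetical illustrations, not Lunio's implementation.

```python
# Hypothetical FTTH topology: each device maps to its upstream parent.
TOPOLOGY = {
    "onu-1": "splitter-a", "onu-2": "splitter-a",
    "onu-3": "splitter-b", "onu-4": "splitter-b",
    "splitter-a": "olt-1", "splitter-b": "olt-1",
    "olt-1": "core-router", "core-router": None,
}

def path_to_core(device):
    """Return the chain of devices from `device` up to the network core."""
    path = []
    while device is not None:
        path.append(device)
        device = TOPOLOGY[device]
    return path

def likely_root_cause(alerting_devices):
    """Return the deepest upstream node shared by every alerting device."""
    paths = [path_to_core(d) for d in alerting_devices]
    shared = set(paths[0]).intersection(*(set(p) for p in paths[1:]))
    # The shared node closest to the leaves is the best root-cause candidate.
    return min(shared, key=paths[0].index)

# Alerts behind two different splitters point at their common OLT.
print(likely_root_cause(["onu-1", "onu-3"]))  # olt-1
# Alerts behind the same splitter point at that splitter.
print(likely_root_cause(["onu-1", "onu-2"]))  # splitter-a
```

At scale, the same intersection logic would run over a topology discovered by monitoring rather than a hand-written dictionary, which is where integrating the systems listed above matters.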

As a result, when attendants were contacted by customers experiencing difficulties with the service, instead of going through the entire diagnostic process to understand what happened, the attendant could search by the customer’s CPF (Individual Taxpayer Registry) number and access a summary of the events, causes, and solutions identified by artificial intelligence combined with Zabbix monitoring data.

In conclusion

This example comes from the telecommunications industry, but it’s not difficult to see how Zabbix’s ability to integrate its monitored data with AI modules can benefit companies in almost any industry.

You can find out more about what we can do across a variety of industries by visiting our website or requesting a demo.


About Aurea Araujo

Digital Communications & Marketing Analyst LatAm


How Microsoft Advertising helped Air France to increase bookings by 52% into its network


In the second half of 2023, Air France decided to shift its paid search strategy from a profitability focus to maximizing revenue. The goal was to optimize all post-crisis opportunities in over 100 markets managed globally. The timing of this strategy revision was perfect: Microsoft Advertising had just released a new feature, Performance Max. To generate incremental bookings and revenue, Air France’s paid search agency partner, Performics, decided to test Performance Max alongside Search and Audience ads.

The solution

Thanks to advanced import tools, Performics was able to quickly roll out Performance Max in Microsoft Advertising, driving an immediate positive impact on campaign performance in key markets.

Performics applied a targeting strategy that leveraged Microsoft’s proprietary in-market audiences, built from users’ searches and site visits. Using high-quality creative assets for top travel destinations and a ROAS target, Performics effectively helped Air France reach its objectives.
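As a rough illustration of the ROAS (return on ad spend) target mentioned above, the sketch below computes ROAS for hypothetical campaign figures and checks it against a target; the numbers and function names are invented for illustration and are not Air France's actual data.

```python
def roas(revenue: float, ad_spend: float) -> float:
    """Return on ad spend: revenue generated per unit of ad spend."""
    if ad_spend <= 0:
        raise ValueError("ad spend must be positive")
    return revenue / ad_spend

def meets_target(revenue: float, ad_spend: float, target: float) -> bool:
    """True if the campaign's ROAS is at or above the target."""
    return roas(revenue, ad_spend) >= target

# Hypothetical campaign: 50,000 in spend driving 400,000 in bookings.
print(roas(400_000, 50_000))               # 8.0
print(meets_target(400_000, 50_000, 6.0))  # True
```

In an automated bidding setup, a check like `meets_target` is conceptually what decides whether a campaign can scale spend or needs to pull back.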

We are pleased with Microsoft Advertising's AI capacity to enhance bookings and reach. If this level of performance is maintained, we plan to launch Performance Max across additional markets.

— Sacha Maniquant,  Head of Search and Metasearch, Air France

Based on the results achieved with Performance Max for Air France, we’re recommending this campaign to other advertisers. We are looking forward to further results.

— Nicolas Pestourie,  Head Of Paid Search Operations, Performics

The results

With Performance Max, Air France achieved its targets. After launching the Performance Max campaign, Air France bookings increased by 52%, and revenue from nonbrand campaigns increased by 54% over the same period.


9 of the Biggest Financial Fraud Cases in History

Financial fraud is on the rise, so it's worth taking a trip down a dark memory lane.


The criminal trial of FTX founder Sam Bankman-Fried was one of the biggest financial fraud cases in history.

Financial fraud is as prevalent today as it was over 100 years ago, when the Italian con artist Charles Ponzi was swindling investors out of their fortunes in one of the earliest high-profile financial scams ever recorded.

With the next recession or economic downturn in the back of investors' minds, law enforcement officials are on the lookout for financial fraud, as scammers tend to rise in influence during difficult market conditions.

So, with scandals recently in the news – the FTX fraud case, for starters – let's look at some of the most infamous financial frauds in recent history and use them as expensive examples of what can go wrong when bad actors get their hands on investors' money.


Sam Bankman-Fried, founder of collapsed cryptocurrency trading platform FTX, was sentenced to 25 years in prison in March after a jury in Manhattan, New York, found him guilty of seven counts of fraud and conspiracy including wire fraud, securities fraud and money laundering. U.S. government prosecutors have called SBF's downfall one of the biggest financial fraud cases in history. A few of his former collaborators, including business partner Gary Wang, pleaded guilty and cooperated with investigators.

Bankman-Fried launched FTX in May 2019 and was also the driving force behind hedge fund Alameda Research, which he co-founded with Wang. Flush with billions in private financing, Bankman-Fried, along with other FTX senior executives, was accused of using the money to buy plush beach homes in the Caribbean, invest in new ventures, and send money to local and national political causes.

In late 2022, the U.S. Securities and Exchange Commission said Bankman-Fried defrauded his companies' investors by steering money from FTX into Alameda Research between 2019 and 2022. Both FTX and Alameda went bankrupt, and Bankman-Fried was arrested on fraud charges in the Bahamas.

On May 8, FTX reported in a bankruptcy court filing that most of its customers will get their money back, and some may get about 118% of their claim, though that may be cold comfort for investors who missed out on the surge in crypto prices over the past two years. FTX estimated its debt to creditors at about $11.2 billion.

In March 2004, Stanford University sophomore Elizabeth Holmes dropped out of school to focus on her new startup Theranos, which set out to make blood tests more efficient, more accurate and much faster. Five years later, Holmes linked up with a new business partner, Ramesh "Sunny" Balwani, who guaranteed a $10 million loan to Theranos.

The company grew at lightning speed, with Theranos valued at $10 billion by 2014. By 2015, however, the company's highly touted automated compact testing device was exposed as unworkable by medical testing professionals. Soon after, federal and state regulators filed wire fraud and conspiracy charges against the company.

Crushed under the weight of legal costs, Theranos dissolved in June 2018. After being found guilty of fraud earlier in the year, Holmes and Balwani were sentenced in November and December 2022 to 11 and 12 years in prison, respectively. In May 2023 they were ordered to pay restitution of $452 million to fraud victims, with $125 million of that amount owed to media mogul Rupert Murdoch.

Holmes reported to a minimum-security prison in Bryan, Texas, in May 2023 to begin her sentence, which has since been reduced by two years as of May 6.

In January, the Department of Health and Human Services banned Holmes from participating in federal health programs for 90 years, a restriction also previously imposed on Balwani.

Notorious investor Ivan Boesky died in California on May 20 at age 87 after spending the latter part of his life mostly out of the public eye, a stark contrast to his earlier years. The 1980s were fraught with financial fraud, and Boesky was among the first Wall Street traders to go to prison on insider trading charges. Boesky honed his craft that decade in the lucrative arbitrage trading market.

Nicknamed "Ivan the Terrible," Boesky made over $200 million investing in corporate takeovers and company mergers. In 1986, the SEC charged Boesky with illegally profiting from insider trading by acquiring stocks and futures in companies based on tips from company insiders.

A year later Boesky was found guilty and, based on a plea agreement that involved Boesky taping phone calls with other insider trading conspirators, including Drexel Burnham Lambert's junk bond king Michael Milken, Ivan the Terrible was sentenced to three and a half years in prison. He was also slapped with a $100 million fine and ordered never to work in the securities industry again.

Boesky is said to have inspired aspects of film character Gordon Gekko, played by actor Michael Douglas in the 1987 movie "Wall Street."

Former New York City fund manager Bernie Madoff is long gone, having passed away in prison in April 2021 at the age of 82. But the Madoff story was revived in 2023 with the successful Netflix documentary "The Monster of Wall Street," which retold the tale of the mastermind behind the biggest Ponzi scheme ever recorded.

Madoff, a former chair of the Nasdaq with close ties to government financial regulators, was already a Wall Street legend in the 1980s and 1990s. His company, Bernard L. Madoff Investment Securities LLC, was the sixth-largest market maker in S&P 500 stocks. Yet over the course of 17 years, Madoff, assisted by company managers and back office staff, ran a massive Ponzi scheme that promised investors eye-popping returns.

Instead, Madoff and his crew were inventing stock trades, fabricating brokerage accounts, and pocketing the investment money. By 2008, at the height of the Great Recession, Madoff's luck ran out, and a run on deposits and the resulting investigation revealed that his firm stole over $19 billion from 40,000 investors.

Madoff was arrested and charged with 11 counts of fraud, and he was found guilty and sentenced to 150 years in prison in June 2009.
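The mechanics behind Madoff's collapse, promised returns paid out of incoming deposits until a run on deposits exposes the gap, can be sketched with a toy simulation; the model and all numbers below are invented for illustration.

```python
def simulate_ponzi(deposits, promised_rate, withdraw_rate):
    """Toy Ponzi-scheme cash flow. Reported balances grow at the promised
    rate, but real cash grows only with new deposits. Each period investors
    withdraw a fraction of their reported balance. Returns the period in
    which withdrawals can no longer be paid, or None if the scheme survives
    the simulated periods."""
    cash = 0.0
    reported = 0.0  # fictitious balances shown to investors
    for period, deposit in enumerate(deposits):
        cash += deposit
        reported = (reported + deposit) * (1 + promised_rate)
        withdrawal = reported * withdraw_rate
        if withdrawal > cash:
            return period  # the run exposes the shortfall
        cash -= withdrawal
        reported -= withdrawal
    return None

# Modest withdrawals are covered by fresh deposits: the scheme limps along.
print(simulate_ponzi([100] * 4, 0.10, 0.05))  # None
# A run (investors pull 90% of balances each period) collapses it quickly.
print(simulate_ponzi([100] * 4, 0.10, 0.90))  # 1
```

The gap between `reported` and `cash` is the fraud: it grows every period the promised rate is paid, which is why such schemes depend on ever-increasing inflows.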

On Dec. 8, 2022, executives at Wirecard, a Munich, Germany-based electronic payments firm, went on trial in what media outlets called the biggest corporate fraud case in German history. Former CEO Markus Braun faces multiple years in prison if convicted, though former manager and chief witness for the prosecution Oliver Bellenhaus was released from jail in February.

Another former Wirecard executive, Jan Marsalek, is reportedly hiding out in Russia and is suspected of working with Russian intelligence, according to the Financial Times. Currently, Marsalek is on Europe's "most wanted" list as an international fugitive, but that didn't stop him from sending a letter in support of Braun by way of his lawyer in July 2023.

Wirecard found itself in the fraud spotlight when it declared insolvency in 2020 and regulators found that 1.9 billion euros ($2.1 billion) was missing from the company's accounts, amid allegations from German regulators that the money never existed. Braun was arrested, while Marsalek fled the country; trial proceedings are expected to run at least until the end of the year.

Investors can only watch as the fraud trial plays out, with little hope of ever recovering their money.

Wells Fargo  

This mega-bank just can't seem to stay out of regulatory trouble. Wells Fargo & Co. (ticker: WFC) agreed in May 2023 to pay $1 billion to settle a class action lawsuit that accused it of defrauding investors about the progress it had made toward cleaning up its act after a 2016 fake-accounts scandal. In February of this year, the Biden administration relaxed some of the restrictions on Wells Fargo that were put in place after the fiasco. But just a couple of weeks later, another class action suit for $5 million was filed against the bank alleging that it has not taken enough action to help customers hurt by the case.

In 2016, the Consumer Financial Protection Bureau slapped a $100 million fine on Wells Fargo (the bank later agreed to pay a further $3 billion to settle with the SEC and the Justice Department) after officials stated that overworked staffers were incentivized to open approximately 2 million fake accounts under customers' names. The practice, eventually blamed on senior management, boosted bank profits in the short term but damaged the company's brand and alienated customers over the long term.

In March 2023, the former head of Wells Fargo's retail bank and small business lending, Carrie Tolstedt, the only executive to face criminal charges in the scandal, pleaded guilty to an obstruction charge. On Sept. 15, she received three years of probation and a $100,000 fine, but no prison time.

Wells Fargo also was ordered to pay $3.7 billion in December 2022 due to "illegal activity" involving the mismanagement of 16 million client accounts. According to the CFPB, Wells Fargo "repeatedly misapplied loan payments, wrongfully foreclosed on homes and illegally repossessed vehicles, incorrectly assessed fees and interest, and charged surprise overdraft fees."

China-based Luckin Coffee Inc. (OTC: LKNCY) appears to be a turnaround story after years of being immersed in a legal quagmire stemming from a 2020 fake revenue scandal.

The coffee giant gained visibility with a 2019 initial public offering that saw Luckin's stock rise from $17 per share to $50 in a year's time. In early 2020, however, internal financial analysts discovered the company's growth was artificially inflated due to $310 million in bulk sales to businesses linked to the company's chairman. On June 26, 2020, Luckin's shares closed at $1.38.

Investigators also found that Luckin management had fraudulently engineered the purchase of $140 million in raw materials from suppliers. Shortly afterward, the company's stock was delisted from the Nasdaq and the senior executives involved in the scandal were fired.

Now back in business, under new management and trading over the counter, Luckin is the largest coffee retailer in China, well ahead of Starbucks. Its first-quarter revenue increased 41.5% year over year, and it opened 2,342 new stores in Q1 alone. Luckin's monthly customers also increased 103% over the same period in 2023. Though nothing is guaranteed, a re-listing on the Nasdaq is also reportedly in play for Luckin, after the company convinces regulators it's back on track, ethically and legally.

This brand-name international auto manufacturer is coming off a tough year for global economies, but Volkswagen AG (OTC: VWAGY) is pulling clear of its 2015 emission standards debacle. However, residual fumes from the scandal remain, as Volkswagen settled with Italian car owners for $54 million as recently as May 15 to put their "dieselgate" legal dispute to rest.

In 2015, company engineers installed a special type of software in 11 million of its diesel-powered cars to detect when cars were being tested for emissions and change their results. The Volkswagen vehicles' actual nitrogen oxide emissions were 40 times higher than U.S. legal standards allowed. When U.S. regulators discovered the plot, Volkswagen had to recall approximately 480,000 vehicles and fork over $30 billion in fines and penalties.

In recent years, Volkswagen's new sustainability council has steered the company toward a decarbonization and e-vehicle strategy that is beginning to pay dividends, with dieselgate fading in the rearview mirror.

One of the largest corporate fraud cases of the 21st century is Enron, dubbed "America's Most Innovative Company" by Fortune magazine every year from 1996 to 2001. Formed in 1985, the former dot-com supernova made a fortune trading natural gas and other commodities and even rolled out its own digital commodity trading platform in 1999.

In August 2000, Enron shares reached a high of $90, but only a year later Sherron Watkins, an Enron finance executive, warned CEO Ken Lay that a massive accounting scandal was brewing that could take down the entire company.

Amid SEC inquiries into its finances, in November 2001 Enron admitted it overstated profits by nearly $600 million. Within roughly two months, the company declared bankruptcy and the Justice Department launched a criminal investigation of Enron. Before announcing the bankruptcy, Enron cut 4,000 jobs, and many ex-employees saw their pension plans drained.

One outcome of the Enron saga was the passage of the Sarbanes-Oxley Act of 2002, which established stricter accounting rules for public companies. Sarbanes-Oxley got a high-profile airing in April, after federal prosecutors charged defendants in the Jan. 6 assault on the Capitol with violating the act by corruptly obstructing an "official proceeding," according to Bloomberg Law. The Supreme Court is weighing whether that application of the law is an overreach, and its decision is expected in July.

