Facebook quietly relaunches apps for Groups platform after lockdown

Facebook is becoming a marketplace for enterprise apps that help Group admins manage their communities.

To protect itself and its users in the wake of the Cambridge Analytica scandal, Facebook locked down the Groups API for building apps for Groups. These apps had to go through a human-reviewed approval process, and lost access to Group member lists, plus the names and profile pics of people who posted. Now, approved Groups apps are reemerging on Facebook, accessible to admins through a new in-Facebook Groups apps browser that gives the platform control over discoverability.

Facebook confirmed the new Groups apps browser after our inquiry, telling TechCrunch, “What you’re seeing today is related to changes we announced in April that require developers to go through an updated app review process in order to use the Groups API. As part of this, some developers who have gone through the review process are now able to access the Groups API.”

Facebook wouldn’t comment further, but this Help Center article details how Groups can now add apps. Matt Navarra first spotted the new Groups apps option and tipped us off. Previously, admins would have to find Group management tools outside of Facebook and then use their logged-in Facebook account to give the app permissions to access their Group’s data.

Groups are often a labor of love for admins, but generate tons of engagement for the social network. That’s why the company recently began testing Facebook subscription Groups that allow admins to charge a monthly fee. With the right set of approved partners, the platform offers Group admins some of the capabilities usually reserved for big brands and businesses that pay for enterprise tools to manage their online presences.

Becoming a gateway to enterprise tool sets could make Facebook Groups more engaging, generating more time on site and ad views from users. This also positions Facebook as a natural home for ad campaigns promoting different enterprise tools. And one day, Facebook could potentially try to act more formally as a Groups App Store and try to take a cut of software-as-a-service subscription fees the tool makers charge.

Facebook can’t build every tool that admins might need, so in 2010 it launched the Groups API to enlist some outside help. Moderating comments, gathering analytics and posting pre-composed content were some of the popular capabilities of Facebook Groups apps. But in April, it halted use of the API, announcing that “there is information about people and conversations in groups that we want to make sure is better protected. Going forward, all third-party apps using the Groups API will need approval from Facebook and an admin to ensure they benefit the group.”

Now apps that have received the necessary approval are appearing in this Groups apps browser. It’s available to admins through their Group Settings page. The apps browser lets them pick from a selection of tools like Buffer and Sendible for scheduling posts to their Group, and others for handling commerce messages.
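Tools like these publish on an admin’s behalf through the Graph API. As a rough sketch of what that integration looks like (the API version, group ID, and token below are illustrative placeholders, and a real app must first clear Facebook’s review), a scheduled post to a group is just an authorized POST to the group’s feed edge:

```python
# Sketch of how an approved scheduling tool might publish a Group post via
# the Graph API's POST /{group-id}/feed edge. The base version, group id,
# and access token are placeholder assumptions, not working credentials.
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.facebook.com/v3.0"  # version number is an assumption

def build_group_post_request(group_id: str, message: str, access_token: str):
    """Return the URL and form-encoded body for publishing a post to a group."""
    url = f"{GRAPH_BASE}/{group_id}/feed"
    body = urlencode({"message": message, "access_token": access_token})
    return url, body

url, body = build_group_post_request("1234567890", "Weekly thread!", "EAAB-placeholder")
# An approved app would send `body` to `url` with an HTTP POST at the scheduled time.
```

The app-review and admin-approval steps described above gate whether that token is ever issued in the first place.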

Facebook is still trying to bar the windows of its platform, ensuring there are no more easy ways to slurp up massive amounts of sensitive user data. Yesterday it shut down more APIs and standalone apps in what appears to be an attempt to streamline the platform so there are fewer points of risk and more staff to concentrate on safeguarding the most popular and powerful parts of its developer offering.

The Cambridge Analytica scandal has subsided to some degree, with Facebook’s share price recovering and user growth holding steady. However, a new report from The Washington Post says the FBI, FTC and SEC will be investigating Facebook, Cambridge Analytica and the social network’s executives’ testimony to Congress. Facebook surely wants to get back to concentrating on product, not politics, but must take it slow and steady. There are too many eyes on it to move fast or break anything.

Source: Mobile – TechCrunch

Zuckerberg’s boring testimony is a big win for Facebook

Mark Zuckerberg ran his apology scripts, trotted out his lists of policy fixes and generally dulled the Senate into submission. And that constitutes success for Facebook.

Zuckerberg testified before the joint Senate judiciary and commerce committee today, capitalizing on the lack of knowledge of the politicians and their surface-level questions. Half the time, Zuckerberg got to simply paraphrase blog posts and statements he’d already released. Much of the other half, he merely explained how basic Facebook functionality works.

The senators hadn’t done their homework, but he had. All that training with D.C. image consultants paid off.

Facebook CEO Mark Zuckerberg arrives to testify before a joint hearing of the US Senate Commerce, Science and Transportation Committee and Senate Judiciary Committee on Capitol Hill, April 10, 2018 in Washington, DC. (Photo: JIM WATSON/AFP/Getty Images)

Sidestepping any gotcha questions or meme-worthy sound bites, Zuckerberg’s repetitive answers gave the impression that there’s little left to uncover, whether or not that’s true. He made a convincing argument that Facebook is atoning for its sins, is cognizant of its responsibility and has a concrete plan in place to improve data privacy.

With just five minutes per senator, each with a queue of questions to get through, few focused on the tougher queries, and even fewer had time for follow-ups to dig for real answers.

Did Facebook cover up the Cambridge Analytica scandal or decide against adding privacy protections earlier to protect its developer platform? Is it a breach of trust for Zuckerberg and other executives to have deleted their Facebook messages out of recipients’ inboxes? How has Facebook used a lack of data portability to inhibit the rise of competitors? Why doesn’t Instagram let users export their data the way they can from Facebook?

The public didn’t get answers to any of those questions today. Just Mark’s steady voice regurgitating Facebook’s talking points. Investors rewarded Facebook for its monotony with a 4.5 percent share price boost.

That’s not to say today’s hearing wasn’t effective. It’s just that the impact was felt before Zuckerberg waded through a hundred photographers to take his seat in the Senate office.

Facebook knew this day was coming, and worked to build Zuckerberg a fortress of facts he could point to no matter what he got asked:

  • Was Facebook asleep at the wheel during the 2016 election? Yesterday it revealed it had deleted the accounts of Russian GRU intelligence operatives in June 2016.
  • How will Facebook prevent this from happening again? Last week it announced plans to require identity and location verification for any political advertiser or popular Facebook Page, and significantly restricted its developer platform.
  • Is Facebook taking this seriously? Zuckerberg wrote in his prepared testimony for today that Facebook is doubling its security and content moderation team from 10,000 to 20,000, and that “protecting our community is more important than maximizing our profits.”
  • Is Facebook sorry? “We didn’t take a broad enough view of what our responsibility is and that was a huge mistake. That was my mistake,” Zuckerberg has said, over and over.

Facebook may never have made such sweeping changes and apologies had it not had today and tomorrow’s testimony on the horizon. But this defensive strategy also led to few meaningful disclosures, to the detriment of the understanding of the public and the Senate — and to the benefit of Facebook.

WASHINGTON, DC – APRIL 10: Facebook co-founder, Chairman and CEO Mark Zuckerberg testifies before a combined Senate Judiciary and Commerce committee hearing in the Hart Senate Office Building on Capitol Hill April 10, 2018 in Washington, DC. Zuckerberg, 33, was called to testify after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Chip Somodevilla/Getty Images)

We did learn that Facebook is working with Special Counsel Robert Mueller on his investigation into election interference. We learned that Zuckerberg thinks it was a mistake not to suspend the advertising account of Cambridge Analytica when Facebook learned it had bought user data from Dr. Aleksandr Kogan. And we learned that the Senate will “haul in” Cambridge Analytica for a future hearing about data privacy.

None of those are earth-shaking.

Perhaps the only fireworks during the testimony came when Senator Ted Cruz laid into Zuckerberg over the Gizmodo report claiming that Facebook’s trending topics curators suppressed conservative news trends. Cruz badgered Zuckerberg about whether he believes Facebook is politically neutral, whether Facebook has ever taken down Pages from liberal groups like Planned Parenthood or MoveOn.org, if he knows the political leanings of Facebook’s content moderators and whether Facebook fired Oculus co-founder Palmer Luckey over his [radical conservative] political views.

Zuckerberg maintained that he and Facebook are neutral, but that last question was the only one of the day that seemed to visibly perturb him. “That is a specific personnel matter that seems like it would be inappropriate…” Zuckerberg said before Cruz interrupted, pushing the CEO to exasperatedly respond, “Well then I can confirm that it was not because of a political view.” It should be noted that Cruz has received numerous campaign donations from Luckey.

This was the only time Zuckerberg seemed flustered, because he knows the stakes of the public perception of Facebook’s political leanings. Zuckerberg, many Facebook employees and Facebook’s home state of California are all known to lean left. But if the company itself is seen that way, conservative users could flee, shattering Facebook’s network effect. Yet again, Zuckerberg nimbly avoided getting cornered here, and was aided by the bell signaling the end of Cruz’s time. He never noticeably raised his voice, lashed back at the senators or got off message.

By the conclusion of the five hours of questioning, the senators themselves were admitting they hadn’t watched the day’s full testimony. Viewers at home had likely returned to their lives. Even the press corps’ eyes were glazing over. But Zuckerberg was prepared for the marathon. He maintained pace through the finish line. And he made it clear why marathons aren’t TV spectator sports.

The question is no longer what revelations will come from Mr. Zuckerberg going to Washington; tomorrow’s testimony is likely to go similarly. It’s whether Facebook can coherently execute on the data privacy promises it made leading up to today. This will be a “never-ending battle,” as Zuckerberg said, dragging out over many years. And again, that’s in Facebook’s interest. Because in the meantime, everyone’s going back to scrolling their feeds.

Zuckerberg admits it was a mistake not to ban Cambridge Analytica’s ads

Facebook didn’t ban Cambridge Analytica when it found out in 2015 that it had received user data from Dr. Aleksandr Kogan, and Zuckerberg called that a mistake during his testimony before the Senate. Cambridge Analytica has since been banned.

Zuckerberg explained that “I want to correct one thing that I said earlier in response to a question from Senator Leahy. He had asked why we didn’t ban Cambridge Analytica at the time when we learned of them in 2015. And I answered that what my understanding was was that they were not on the platform, were not an app developer or advertiser. When I went back and met with my team afterwards, they let me know that Cambridge Analytica actually did start as an advertiser later in 2015, so we could have in theory banned them back then, and made a mistake by not doing so.”

NEW YORK, NY – SEPTEMBER 19: CEO of Cambridge Analytica Alexander Nix speaks at the 2016 Concordia Summit – Day 1 at Grand Hyatt New York on September 19, 2016 in New York City. (Photo by Bryan Bedder/Getty Images for Concordia Summit)

When the Guardian informed Facebook about Kogan sharing user data with Cambridge Analytica, Facebook banned Kogan, and required Cambridge Analytica to formally certify that it had deleted all the improperly attained user data. Cambridge Analytica did so, Zuckerberg confirmed in his prepared testimony for today. But Facebook then stopped short of blocking Cambridge Analytica from buying ads on its platform. The company went on to work with the Trump campaign to help it optimize political messaging and ad targeting.

Had Facebook banned Cambridge Analytica at the time, it wouldn’t have been able to buy ads directly on behalf of political campaigns with which it worked. However, the company might still have been able to help these campaigns to optimize their ads, so a 2015 ban wouldn’t have necessarily prevented second-hand use of improperly attained data.

Facebook plans crackdown on ad targeting by email without consent

Facebook is scrambling to add safeguards against abuse of user data as it reels from backlash over the Cambridge Analytica scandal. Now TechCrunch has learned Facebook will launch a certification tool that demands that marketers guarantee email addresses used for ad targeting were rightfully attained. This new Custom Audiences certification tool was described by Facebook representatives to their marketing clients, according to two sources. Facebook will also prevent the sharing of Custom Audience data across Business accounts.

This snippet of a message sent by a Facebook rep to a client notes that “for any Custom Audiences data imported into Facebook, Advertisers will be required to represent and warrant that proper user consent has been obtained.”

Once shown the message, Facebook spokesperson Elisabeth Diana told TechCrunch “I can confirm there is a permissions tool that we’re building.” It will require that advertisers and the agencies representing them pledge that “I certify that I have permission to use this data”, she said.

Diana noted that “We’ve always had terms in place to ensure that advertisers have consent for data they use but we’re going to make that much more prominent and educate advertisers on the way they can use the data.” The change isn’t in response to a specific incident, but Facebook does plan to re-review the way it works with third-party data measurement firms to ensure everything is responsibly used. “This is a way to safeguard data,” Diana concluded. The company declined to specify whether it’s ever blocked usage of a Custom Audience because it suspected the owner didn’t have user consent.

The social network is hoping to prevent further misuse of ill-gotten data after Dr. Aleksandr Kogan’s app that pulled data on 50 million Facebook users was passed to Cambridge Analytica in violation of Facebook policy. That sordid data is suspected to have been used by Cambridge Analytica to support the Trump and Brexit campaigns, which employed Custom Audiences to reach voters.

Facebook launched Custom Audiences back in 2012 to let businesses upload hashed lists of their customers’ email addresses or phone numbers, allowing advertisers to target specific people instead of broad demographics. Custom Audiences quickly became one of Facebook’s most powerful advertising options because businesses could easily reach existing customers to drive repeat sales. The Custom Audiences terms of service require that businesses have “provided appropriate notice to and secured any necessary consent from the data subjects” to attain and use these people’s contact info.
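The hashing step is what lets advertisers match customers without handing Facebook raw contact lists. As a minimal sketch of that client-side preparation, assuming Facebook’s documented convention of SHA-256 over lowercased, trimmed values (the email addresses here are made up):

```python
# Sketch of the client-side hashing Custom Audiences uploads are built on:
# contact info is normalized and hashed before upload, so Facebook can match
# users against its own hashes without receiving raw email addresses.
import hashlib

def normalize_and_hash(email: str) -> str:
    """Lowercase and trim an email, then return its SHA-256 hex digest."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# An advertiser would upload these digests, not the addresses themselves.
audience = [normalize_and_hash(e) for e in [" Jane.Doe@example.com", "bob@example.com "]]
```

Because both sides hash the same normalized value, identical addresses produce identical digests, which is all the matching requires.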

But just like Facebook’s policy told app developers like Kogan not to sell, share, or misuse data they collected from Facebook users, the company didn’t go further to enforce this rule. It essentially trusted that the fear of legal repercussions or suspension on Facebook would deter violations of both its app data privacy and Custom Audiences consent policies. With clear financial incentives to bend or break those rules and limited effort spent investigating to ensure compliance, Facebook left itself and its users open to exploitation.

Last week Facebook banned the use of third-party data brokers like Experian and Acxiom for ad targeting, closing a marketing feature called Partner Categories. Facebook is believed to have been trying to prevent any ill-gotten data from being laundered through these data brokers and then directly imported to Facebook to target users. But that left open the option for businesses to compile illicit data sets or pull them from data brokers, then upload them to Facebook as Custom Audiences by themselves.

The Custom Audiences certification tool could close that loophole. It’s still being built, so Facebook wouldn’t say exactly how it will work. I asked if Facebook would scan uploaded user lists and try to match them against a database of suspicious data, but for now it sounds more like Facebook will merely require a written promise.
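To make the scanning idea concrete, one simple mechanism would be flagging uploads whose hashes overlap heavily with hashes derived from known-leaked contact lists. This is purely speculative, not anything Facebook has described; the threshold and data below are illustrative:

```python
# Speculative sketch of the scanning approach floated above: compare an
# uploaded audience's hashes against a set of hashes of known-leaked
# contacts, and flag the upload if the overlap is suspiciously high.
def overlap_ratio(uploaded_hashes, leaked_hashes):
    """Fraction of the uploaded audience that appears in the leaked set."""
    uploaded = set(uploaded_hashes)
    if not uploaded:
        return 0.0
    return len(uploaded & set(leaked_hashes)) / len(uploaded)

def looks_suspicious(uploaded_hashes, leaked_hashes, threshold=0.5):
    """Flag audiences where at least `threshold` of hashes match leaked data."""
    return overlap_ratio(uploaded_hashes, leaked_hashes) >= threshold
```

A written promise requires no such check, which is why the article notes the tool may amount to little more than a pledge.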

Meanwhile, barring the sharing of Custom Audiences between Business Accounts might prevent those with access to email lists from using them to promote companies unrelated to the one to which users gave their email address. Facebook declined to comment on how the new ban on Custom Audience sharing would work.

Now Facebook must find ways to thwart misuse of its targeting tools and audit anyone it suspects may have already violated its policies. Otherwise it may draw the ire of privacy-conscious users and critics, and strengthen the case for substantial regulation of its ads (though regulation could end up protecting Facebook from competitors who can’t afford compliance). Still, the question remains why it took such a massive data privacy scandal for Facebook to take a tougher stance on requiring user consent for ad targeting. And given that written promises didn’t stop Kogan or Cambridge Analytica from misusing data, why would they stop advertisers bent on boosting profits?


The real threat to Facebook is the Kool-Aid turning sour

These kinds of leaks didn’t happen when I started reporting on Facebook eight years ago. It was a tight-knit cult convinced of its mission to connect everyone, but with the discipline of a military unit where everyone knew loose lips sink ships. Motivational posters with bold corporate slogans dotted its offices, rallying the troops. Employees were happy to be evangelists.

But then came the fake news, News Feed addiction, violence on Facebook Live, cyberbullying, abusive ad targeting, election interference and, most recently, the Cambridge Analytica app data privacy scandals. All the while, Facebook either willfully believed the worst case scenarios could never come true, was naive to their existence or calculated the benefits and growth outweighed the risks. And when finally confronted, Facebook often dragged its feet before admitting the extent of the issues.

Inside the social network’s offices, the bonds began to fray. An ethics problem metastasized into a morale problem. Slogans took on sinister second meanings. The Kool-Aid tasted different.

Some hoped they could right the ship but couldn’t. Some craved the influence and intellectual thrill of running one of humanity’s most popular inventions, but now question if that influence and their work is positive. Others surely just wanted to collect salaries, stock and resumé highlights, but lost the stomach for it.

Now the convergence of scandals has come to a head in the form of constant leaks.

The trouble tipping point

The more benign leaks merely cost Facebook a bit of competitive advantage. We’ve learned it’s building a smart speaker, a standalone VR headset and a Houseparty split-screen video chat clone.

Yet policy-focused leaks have exacerbated the backlash against Facebook, putting more pressure on the conscience of employees. As blame fell to Facebook for Trump’s election, word of Facebook prototyping a censorship tool for operating in China escaped, triggering questions about its respect for human rights and free speech. Facebook’s content rulebook got out alongside disturbing tales of the filth the company’s contracted moderators have to sift through. Its ad targeting was revealed to be able to pinpoint emotionally vulnerable teens.

In recent weeks, the leaks have accelerated to a maddening pace in the wake of Facebook’s soggy apologies regarding the Cambridge Analytica debacle. Its weak policy enforcement left the door open to exploitation of data users gave third-party apps, deepening the perception that Facebook doesn’t care about privacy.

Image courtesy of Buzzfeed

And it all culminated with BuzzFeed publishing a leaked “growth at all costs” internal post from Facebook VP Andrew “Boz” Bosworth that substantiated people’s worst fears about the company’s disregard for user safety in pursuit of world domination. Even the ensuing internal discussion about the damage caused by leaks and how to prevent them…leaked.

But the leaks are not the disease, just the symptom. Sunken morale is the cause, and it’s dragging down the company. Former Facebook employee and Wired writer Antonio Garcia Martinez sums it up, saying this kind of vindictive, intentionally destructive leak fills Facebook’s leadership with “horror.”

And that sentiment was confirmed by Facebook’s VP of News Feed Adam Mosseri, who tweeted that leaks “create strong incentives to be less transparent internally and they certainly slow us down,” and will make it tougher to deal with the big problems.

Those thoughts weigh heavy on Facebook’s team. A source close to several Facebook executives tells us they feel “embarrassed to work there” and are increasingly open to other job opportunities. One current employee told us to assume anything certain execs tell the media is “100% false.”

If Facebook can’t internally discuss the problems it faces without being exposed, how can it solve them?

Implosion

The consequences of Facebook’s failures are typically pegged as external hazards.

You might assume the government will finally step in and regulate Facebook. But the Honest Ads Act and other rules about ads transparency and data privacy could end up protecting Facebook by being simply a paperwork speed bump for it while making it tough for competitors to build a rival database of personal info. In our corporation-loving society, it seems unlikely that the administration would go so far as to split up Facebook, Instagram and WhatsApp — one of the few feasible ways to limit the company’s power.

Users have watched Facebook make misstep after misstep over the years, but can’t help but stay glued to its feed. Even those who don’t scroll rely on it as a fundamental utility for messaging and login on other sites. Privacy and transparency are too abstract for most people to care about. Hence, first-time Facebook downloads held steady and its App Store rank actually rose in the week after the Cambridge Analytica fiasco broke. In regards to the #DeleteFacebook movement, Mark Zuckerberg himself said “I don’t think we’ve seen a meaningful number of people act on that.” And as long as they’re browsing, advertisers will keep paying Facebook to reach them.

That’s why the greatest threat of the scandal convergence comes from inside. The leaks are the canary in the noxious blue coal mine.

Can Facebook survive slowing down?

If employees wake up each day suspecting that Facebook’s mission is actually harming the world, they won’t stay. Facebook doesn’t have the same internal work culture problems as some giants like Uber. But there are plenty of other tech companies with less questionable impacts. Some are still private and offer the chance to win big on an IPO or acquisition. At the very least, those in the Bay could find somewhere to work without spending hours a day on the traffic-snarled 101 freeway.

If they do stay, they won’t work as hard. It’s tough to build if you think you’re building a weapon. Especially if you thought you were going to be making helpful tools. The melancholy and malaise set in. People go into rest-and-vest mode, living out their days at Facebook as a sentence not an opportunity. The next killer product Facebook needs a year or two from now might never coalesce.

And if they do work hard, a culture of anxiety and paralysis will work against them. No one wants to code with their hands tied, and some would prefer a less scrutinized environment. Every decision will require endless philosophizing and risk-reduction. Product changes will be reduced to the lowest common denominator, designed not to offend or appear too tyrannical.

Source: Volkan Furuncu/Anadolu Agency + David Ramos/Getty Images

In fact, that’s partly how Facebook got into this whole mess. A leak by an anonymous former contractor led Gizmodo to report Facebook was suppressing conservative news in its Trending section. Terrified of appearing liberally biased, Facebook reportedly hesitated to take decisive action against fake news. That hands-off approach led to the post-election criticism that degraded morale and pushed the growing snowball of leaks down the mountain.

It’s still rolling.

How to stop morale’s downward momentum will be one of Facebook’s greatest tests of leadership. This isn’t a bug to be squashed. It can’t just roll back a feature update. And an apology won’t suffice. It will have to expel or reeducate the leakers and the disloyal without instilling a witch hunt’s sense of dread. Compensation may have to jump to keep talent aboard, as Twitter’s did when it was floundering. Its top brass will need to show candor and accountability without fueling more indiscretion. And it may need to make a shocking, landmark act of contrition to convince employees it’s capable of change.

When asked how Facebook could address the morale problem, Mosseri told me “it starts with owning our mistakes and being very clear about what we’re doing now” and noted that “it took a while to get into this place and I think it’ll take a while to work our way out . . . Trust is lost quickly, and takes a long time to rebuild.”

This isn’t about whether Facebook will disappear tomorrow, but whether it will remain unconquerable for the foreseeable future.

Growth has been the driving mantra for Facebook since its inception. No matter how employees are evaluated, it’s still the underlying ethos. Facebook has positioned itself as a mission-driven company. The implication was always that connecting people is good, so connecting more people is better. The only question was how to grow faster.

Now Zuckerberg will have to figure out how to get Facebook to cautiously foresee the consequences of what it says and does while remaining an appealing place to work. “Move slow and think things through” just doesn’t have the same ring to it.

If you’re a Facebook employee or anyone else who has information to share with TechCrunch, you can contact us at [email protected], or reach this article’s author Josh Constine, whose DMs are open on Twitter.

Regulation could protect Facebook, not punish it

You know what tech startups hate? Complicated legal compliance. The problem is, Facebook isn’t a startup any more, but its competitors are.

There have been plenty of calls from Congress and critics to regulate Facebook following the election interference scandal and now the Cambridge Analytica debacle. The government could require extensive ads transparency reporting or data privacy protections. That could cost Facebook a lot of money, slow down its operations, or inhibit its ability to build new products.

But the danger is that those same requirements could be much more onerous for a tiny upstart company to uphold. Without much cash or enough employees, and with product-market fit still to nail down, young startups might be anchored by the weight of regulation. It could prevent them from ever rising to become a true alternative to Facebook. Venture capitalists choosing whether to fund the next Facebook killer might look at the regulations as too high of a price of entry.

STANFORD, CA – JUNE 24: Facebook CEO Mark Zuckerberg (R) hugs U.S. President Barack Obama during the 2016 Global Entrepreneurship Summit at Stanford University on June 24, 2016 in Stanford, California. President Obama joined Silicon Valley leaders on the final day of the Global Entrepreneurship Summit. (Photo by Justin Sullivan/Getty Images)

The lack of viable alternatives has made the #DeleteFacebook movement toothless. Where are people going to go? Instagram? WhatsApp? The government already missed its chances to stop Facebook from acquiring these companies that are massive social networks in their own right.

The only social networks to carve out communities since Facebook’s rise did so largely by being completely different, like the ephemeral Snapchat that purposefully doesn’t serve as a web identity platform, and the mostly-public Twitter that caters to thought leaders and celebrities more than normal people sharing their personal lives. Blockchain-based decentralized social networks sound nice but may be impossible to spin up.

That’s left few places for Facebook haters to migrate. This might explain why despite having so many more users, #DeleteFacebook peaked last week at substantially fewer Twitter mentions than the big #DeleteUber campaign from last January, according to financial data dashboard Sentieo. Lyft’s existence makes #DeleteUber a tenable stance, because you don’t have to change your behavior pattern, just your brand of choice.

If the government actually wants to protect the public against Facebook abusing its power, it would need to go harder than the Honest Ads Act that would put political advertising on Internet platforms under the same scrutiny regarding disclosure of buyers as the rules for TV and radio advertising. That’s basically just extra paperwork for Facebook. We’ve seen regulatory expenses deter competition amongst broadband internet service providers and in other industries. Real change would necessitate regulation that either creates alternatives to Facebook or at least doesn’t inhibit their creation.

That could mean only requiring certain transparency and privacy protections from apps over a certain size, like 200 million daily users. This would put the cap a bit above Twitter and Snapchat’s size today, giving them time to prepare for compliance, while immediately regulating Facebook, Messenger, Instagram, WhatsApp, and Google’s social problem child YouTube.

Still, with Facebook earning billions in profit per quarter and a massive war chest built up, Mark Zuckerberg could effectively pay his way out of the problem. That’s why it makes perfect sense for him to have told CNN “I’m not sure we shouldn’t be regulated” and that “There are things like ad transparency regulation that I would love to see.” Particular regulatory hurdles amount to just tiny speed bumps for Facebook. Courting this level of regulation could bat down the question of whether it should be broken up or its News Feed algorithm needs to change.

Meanwhile, if the government instituted new rules for tech platforms collecting personal information going forward, it could effectively lock in Facebook’s lead in the data race. If it becomes more cumbersome to gather this kind of data, no competitor might ever amass an index of psychographic profiles and social graphs able to rival Facebook’s.

A much more consequential approach would be to break up Facebook, Instagram, and WhatsApp. Facebook is trying to preempt these drastic measures with Zuckerberg’s recent apology tour and its purchase of full-page ads in nine newspapers today claiming it understands its responsibility.

Establishing them as truly independent companies that compete would create meaningful alternatives to Facebook. Instagram and WhatsApp would have to concern themselves with actually becoming sustainable businesses. They’d all lose some economies of data scale, forfeiting the ability to share engineering, anti-spam, localization, ad sales, and other resources that a source close to Instagram told me it gained by being acquired in 2012, and that Facebook later applied to WhatsApp too.

Both permanent photo sharing and messaging would become two-horse races again. That could lead to the consumer-benefiting competition and innovation the government hopes for from regulation.

Yet with strong regulation like dismantling Facebook seeming beyond the resolve of Congress, and weak regulation potentially protecting Facebook, perhaps it’s losing the moral high ground that will be Facebook’s real punishment.

Facebook chief legal officer Colin Stretch testifies before congress regarding Russian election interference

We’ve already seen that first-time download rates aren’t plummeting for Facebook, its App Store ranking has actually increased since the Cambridge Analytica scandal broke, and blue chip advertisers aren’t bailing, according to BuzzFeed. But Facebook relies on the perception of its benevolent mission to recruit top talent in Silicon Valley and beyond.

Techies take the job because they wake up each day believing that they’re having a massive positive influence by connecting the world. These people could have founded or worked at a new startup where they’d have discernible input on the direction of the product, and a chance to earn huge return multiples on their stock. Many have historically worked at Facebook because its ads say it’s the “Best place to build and make an impact”.

But if workers start to see that impact as negative, they might not enlist. That loss of belief could accomplish what surface-level regulation can’t. It’s perhaps the most important repercussion of all the backlash over fake news, election interference, well-being and data privacy: losing talent could lead to a slowdown of innovation at Facebook that might leave the door open for a new challenger.
