Instagram is building non-SMS 2-factor auth to thwart SIM hackers


Hackers can steal your phone number by having it reassigned to a different SIM card, then use it to reset your passwords, hijack your Instagram and other accounts, and sell them for bitcoin. As detailed in a harrowing Motherboard article today, Instagram accounts are especially vulnerable because the app only offers two-factor authentication via SMS, delivering a password reset or login code by text message.

But now Instagram has confirmed to TechCrunch that it’s building a non-SMS two-factor authentication system that works with security apps like Google Authenticator or Duo. These apps generate a special code needed to log in, one that can’t be generated on a different phone if your number is ported to a hacker’s SIM card.
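
Authenticator apps like these typically implement the TOTP standard (RFC 6238): at setup, the app and the service share a secret, and each then derives a short-lived code from that secret and the current time, so porting your number to another SIM gains an attacker nothing. Below is a minimal sketch of the idea in Python using only the standard library; the Base32 secret is made up for illustration, and this is not Instagram’s actual implementation.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code from a shared Base32 secret (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval             # 30-second time step
    msg = struct.pack(">Q", counter)                   # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the authenticator app and the server compute the same code independently;
# the secret below is an illustrative placeholder.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code is derived from a secret stored on the device itself rather than from the phone number, a hijacked SIM alone is useless for passing this check.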

Buried in the Instagram Android app’s APK code is a prototype of the upgraded 2FA feature, discovered by frequent TechCrunch tipster Jane Manchun Wong. Her work has led to confirmed TechCrunch scoops on Instagram Video Calling, Usage Insights, soundtracks for Stories and more.

When presented with the screenshots, an Instagram spokesperson told TechCrunch that yes, it is working on the non-SMS 2FA feature, saying, “We’re continuing to improve the security of Instagram accounts, including strengthening 2-factor authentication.”

Instagram actually lacked any two-factor protection until 2016, by which point it already had 400 million users. In November 2015, I wrote a story titled “Seriously. Instagram Needs Two-Factor Authentication.” A friend, the star Instagram stop-motion animator Rachel Ryle, had been hacked, costing her a lucrative sponsorship deal. The company listened. Three months later, the app began rolling out basic SMS-based 2FA.

But since then, SIM porting has become a much more common problem. Hackers typically call a mobile carrier and use social engineering tactics to convince a representative that they’re you, or bribe an employee to help, and then move your number to a SIM card they control. Whether they’re hoping to steal intimate photos, empty cryptocurrency wallets or sell desirable social media handles like @t or @Rainbow, as Motherboard reported, there are plenty of incentives to try a SIM porting attack. This article outlines steps you can take to protect your phone number.

Hopefully, as this hacking technique becomes more widely known, more apps will introduce non-SMS 2FA, mobile providers will make it tougher to port numbers and users will take more steps to safeguard their accounts. As our identities and assets increasingly go digital, it’s PIN codes and authenticator apps, not just deadbolts and home security systems, that must become part of our everyday lives.

Source: Mobile – TechCrunch

Court victory legalizes 3D-printable gun blueprints

A multi-year legal battle over the ability to distribute computer models of gun parts and replicate them in 3D printers has ended in defeat for government authorities who sought to prevent the practice. Cody Wilson, the gunmaker and free speech advocate behind the lawsuit, now intends to expand his operations, providing printable gun blueprints to all who desire them.
The longer story of the lawsuit is well told by Andy Greenberg over at Wired, but the decision is eloquent on its own. The fundamental question is whether making 3D models of gun components available online is covered by the free speech rights granted by the First Amendment.
This is a timely but complex conflict because it touches on two themes that happen to be, for many, ethically contradictory. Arguments for tighter restrictions on firearms are, in this case, directly opposed to arguments for the unfettered exchange of information on the internet. It’s hard to advocate for both here: restricting firearms and restricting free speech are one and the same.
That, at least, seems to be the conclusion of the government lawyers, who settled Wilson’s lawsuit after years of court battles. In a copy of the settlement provided to me by Wilson, the U.S. government agrees to exempt “the technical data that is the subject of the Action” from legal restriction. The modified rules should appear in the Federal Register soon.
What does this mean? It means that a 3D model that can be used to print the components of a working firearm is legal to own and legal to distribute. You can likely even print it and use the product — you just can’t sell it. There are technicalities to the law here (certain parts are restricted, but can be sold in an incomplete state, etc.), but the implications for the files themselves seem clear.
Wilson’s original vision, which he is now pursuing free of legal obstacles, is a repository of gun models, called DEFCAD, much like any other collection of data on the web, though naturally considerably more dangerous and controversial.
“I currently have no national legal barriers to continue or expand DEFCAD,” he wrote in an email to TechCrunch. “This legal victory is the formal beginning to the era of downloadable guns. Guns are as downloadable as music. There will be streaming services for semi-automatics.”
The concepts don’t map perfectly, no doubt, but it’s hard to deny that with the success of this lawsuit, there are few legal restrictions to speak of on the digital distribution of firearms. Even before it, there were few technical restrictions: just as you could download MP3s on Napster in 2002, you can download a gun file today.
Gun control advocates will no doubt argue that greater availability of lethal weaponry is the opposite of what is needed in this country. But others will point out that in a way this is a powerful example of how liberally free speech can be defined. It’s important to note that both of these things can be true.
This court victory settles one case, but it marks the beginning of many others. “I have promoted my values for years with great care and diligence,” Wilson wrote. It’s hard to disagree with that. Those whose values differ are free to pursue them in their own way; perhaps they too will be awarded victories of this scale.

Source: Gadgets – TechCrunch

Apple releases new iPad, Face ID ads

Apple has released a handful of new ads promoting the iPad’s portability and convenience over both laptops and traditional paper solutions. The 15-second ads focus on how the iPad can make even the most tedious things — travel, notes, paperwork, and ‘stuff’ — just a bit easier.
Three out of the four spots show the sixth-generation iPad, which was revealed at Apple’s education event in March, and which offers a lower-cost ($329 in the U.S.) option with Pencil support.
The ads were released on Apple’s international YouTube channels (UAE, Singapore, and United Kingdom).

This follows another 90-second ad released yesterday, focusing on Face ID. The commercial shows a man in a game-show-style setting asked to remember the banking password he created earlier that morning. He struggles for an excruciating amount of time before realizing he can access the banking app via Face ID.

There has been some speculation that Face ID may be incorporated into some upcoming models of the iPad, though we’ll have to wait until Apple’s next event (likely in September) to find out for sure.

Source: Gadgets – TechCrunch

Verizon and others call a conditional halt on sharing location with data brokers


Verizon is cutting off access to its mobile customers’ real-time locations to two third-party data brokers “to prevent misuse of that information going forward.” The company announced the decision in a letter sent to Senator Ron Wyden (D-OR), who along with others helped reveal improper usage and poor security at these location brokers. It is not, however, getting out of the location-sharing business altogether.

(Update: AT&T and Sprint have also begun the process of ending their location aggregation services — with a caveat, more on which below.)

Verizon sold bulk access to its customers’ locations to the brokers in question, LocationSmart and Zumigo, which then turned around and resold that data to dozens of other companies. This isn’t necessarily bad — there are tons of times when location is necessary to provide a service the customer asks for, and supposedly that customer would have to okay the sharing of that data. (Disclosure: Verizon owns Oath, which owns TechCrunch. This does not affect our coverage.)

That doesn’t seem to have been the case at LocationSmart customer Securus, which was selling its data directly to law enforcement so they could find mobile customers quickly and without all that fuss about paperwork and warrants. And then it was found that LocationSmart had exposed an API that allowed anyone to request mobile locations freely and anonymously, and without collecting consent.

When these facts were revealed by security researchers and Sen. Wyden, Verizon immediately looked into it, the company reported in a letter sent to the senator.

“We conducted a comprehensive review of our location aggregator program,” wrote Verizon CTO Karen Zacharia. “As a result of this review, we are initiating a process to terminate our existing agreements for the location aggregator program.”

“We will not enter into new location aggregation arrangements unless and until we are comfortable that we can adequately protect our customers’ location data through technological advancements and/or other practices,” she wrote later in the letter. In other words, the program is on ice until it can be secured.

Although Verizon claims to have “girded” the system with “mechanisms designed to protect against misuse of our customers’ location data,” the abuses in question clearly slipped through the cracks. Perhaps most notable is the simple fact that Verizon itself does not seem to need to be informed whether a customer has consented to having their location polled. That collection is the responsibility of “the aggregator or corporate customer.”

In other words, Verizon doesn’t need to ask the customer, and the company it sells the data to wholesale doesn’t need to ask the customer — the requirement devolves to the company buying access from the wholesaler. In Securus’s case, it had abstracted things one step further, allowing law enforcement full access when it said it had authority to do so, but apparently without checking, AT&T wrote in its own letter to Sen. Wyden.

And there were 75 other corporate customers. Don’t worry, someone is keeping track of them. Right?

These processes are audited, Verizon wrote, but apparently not an audit that finds things like the abuse by Securus or a poorly secured API. Perhaps how this happened is among the “number of internal questions” raised by the review.

When asked for comment, a Verizon representative offered the following statement:

When these issues were brought to our attention, we took immediate steps to stop it. Customer privacy and security remain a top priority for our customers and our company. We stand by that commitment to our customers.

And indeed while the program itself appears to have been run with a laxity that should be alarming to all those customers for whom Verizon claims to be so concerned, some of the company’s competitors have yet to take similar action. AT&T, T-Mobile and Sprint were also named by LocationSmart as partners. Their own letters to Sen. Wyden stressed that their systems were similar to the others, with similar safeguards (that were similarly eluded).

In a press release announcing that his pressure on Verizon had borne fruit, Sen. Wyden called on the others to step up:

Verizon deserves credit for taking quick action to protect its customers’ privacy and security. After my investigation and follow-up reports revealed that middlemen are selling Americans’ location to the highest bidder without their consent, or making it available on insecure web portals, Verizon did the responsible thing and promptly announced it was cutting these companies off. In contrast, AT&T, T-Mobile, and Sprint seem content to continue to sell their customers’ private information to these shady middle men, Americans’ privacy be damned.

AT&T actually announced that it is ending its agreements as well, after Sen. Wyden’s call to action was published, and Sprint followed shortly afterwards. AT&T said it “will be ending [its] work with these aggregators for these services as soon as is practical in a way that preserves important, potential lifesaving services like emergency roadside assistance.” Sprint stopped working with LocationSmart last month and is now “beginning the process of terminating its current contracts with data aggregators to whom we provide location data.”

What’s missing from these statements? Among other things: what and how many companies they’re working with, whether they’ll pursue future contracts, and what real changes will be made to prevent future problems like this. Since they’ve been at this for a long time and have had a month to ponder their next course of action, I don’t think it’s unreasonable to expect more than a carefully worded statement about “these aggregators for these services.”

T-Mobile CEO John Legere tweeted that the company “will not sell customer location data to shady middlemen.” Of course, that doesn’t really mean anything. I await substantive promises from the company pertaining to this “pledge.”

The FCC, meanwhile, has announced that it is looking into the issue — with the considerable handicap that Chairman Ajit Pai represented Securus back in 2012 when he was working as a lawyer. Sen. Wyden has called on him to recuse himself, but that has yet to happen.

I’ve asked Verizon for further clarification on its arrangements and plans, specifically whether it has any other location-sharing agreements in place with other companies. These aren’t, after all, the only players in the game.

Source: Mobile – TechCrunch

Purdue’s PHADE technology lets cameras ‘talk’ to you

It’s become almost second nature to accept that cameras everywhere — from streets to museums and shops — are watching you, but now they may be able to communicate with you as well. New technology from Purdue University computer science researchers, described in a paper published today, makes this dystopian prospect a reality. But, they argue, it’s safer than you might think.
The system is called PHADE, and it allows for something called “private human addressing,” in which camera systems and individual cell phones can communicate without transmitting any personal data, like an IP or MAC address. Instead, the technology relies on a person’s motion pattern as the address code. That way, even if a hacker intercepts a message, they won’t be able to tie it to the person or their physical location.
Imagine you’re strolling through a museum and an unfamiliar painting catches your eye. The docents are busy with a tour group far across the gallery and you didn’t pay extra for the clunky recorder and headphones for an audio tour. While pondering the brushwork you feel your phone buzz, and suddenly a detailed description of the artwork and its painter is in the palm of your hand.
To achieve this effect, researchers use an approach similar to the kind of directional audio experience you might find at theme parks. Through processing the live video data, the technology is able to identify the individual motion patterns of pedestrians and when they are within a pertinent range — say, in front of a painting. From there they can broadcast a packet of information linked to the motion address of the pedestrian. When the user’s phone identifies that the motion address matches their own, the message is received.
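The paper’s actual protocol is more involved, but the core idea of addressing by motion can be sketched simply: the camera derives a signature from a pedestrian’s tracked movement, broadcasts the message tagged with that signature, and each phone compares it against a signature computed from its own inertial sensors, accepting the message only on a close match. A rough illustration in Python follows; the features and tolerance are invented for the example and are not taken from the PHADE paper.

```python
import math

def motion_signature(steps):
    """Reduce a short window of per-second displacement vectors (dx, dy)
    into a coarse signature: step lengths plus heading changes."""
    lengths = [math.hypot(dx, dy) for dx, dy in steps]
    headings = [math.atan2(dy, dx) for dx, dy in steps]
    turns = [headings[i + 1] - headings[i] for i in range(len(headings) - 1)]
    return lengths + turns

def matches(sig_a, sig_b, tol=0.25):
    """Deliver the broadcast only if the two signatures agree within a tolerance."""
    return len(sig_a) == len(sig_b) and all(
        abs(a - b) <= tol for a, b in zip(sig_a, sig_b)
    )

# Camera-side signature from video tracking vs. phone-side from inertial sensors.
camera_sig = motion_signature([(0.8, 0.1), (0.7, 0.2), (0.1, 0.9)])
phone_sig = motion_signature([(0.78, 0.12), (0.71, 0.18), (0.12, 0.88)])
print(matches(camera_sig, phone_sig))  # True -> this phone accepts the payload
```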
While this tech can be used to better inform the casual museum-goer, the researchers also believe it has a role in protecting pedestrians from crime in their area.
“Our system serves as a bridge to connect surveillance cameras and people,” He Wang, a co-creator of the technology and assistant professor of computer science, said in a statement. “[It can] be used by government agencies to enhance public safety [by deploying] cameras in high-crime or high-accident areas and warn[ing] specific users about potential threats, such as suspicious followers.”
While the benefits of an increasingly interconnected world are still being debated and critiqued daily, there might just be an upside to knowing a camera’s got its eye on you.

Source: Gadgets – techcrunch

FBI reportedly overestimated inaccessible encrypted phones by thousands

The FBI seems to have been caught fibbing again on the topic of encrypted phones. FBI Director Christopher Wray estimated in December that the bureau had almost 7,800 phones from 2017 alone that investigators were unable to access. The real number is likely less than a quarter of that, The Washington Post reports.
Internal records cited by sources put the actual number of encrypted phones at perhaps 1,200, though possibly as many as 2,000, and the FBI told the paper in a statement that its “initial assessment is that programming errors resulted in significant over-counting of mobile devices reported.” The use of three separate databases to track the phones supposedly led to devices being counted multiple times.
Such a mistake would be so elementary that it’s hard to conceive of how it would be possible. These aren’t court notes, memos or unimportant random pieces of evidence; they’re physical devices with serial numbers and names attached. The idea that no one thought to check for duplicates before giving a number to the director for testimony in Congress suggests either conspiracy or gross incompetence.
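
If the over-count really did come from the same devices appearing in three databases, the fix is as elementary as the mistake: key each record on a stable identifier such as the serial number and count the union. A hypothetical sketch (the database contents and field names are invented):

```python
# Hypothetical device records pulled from three separate tracking databases.
db_a = [{"serial": "F2LXK1", "case": "A-101"}, {"serial": "G9QPT4", "case": "A-102"}]
db_b = [{"serial": "F2LXK1", "case": "B-330"}, {"serial": "H7MNC2", "case": "B-331"}]
db_c = [{"serial": "G9QPT4", "case": "C-518"}]

naive_count = len(db_a) + len(db_b) + len(db_c)               # counts duplicates: 5
unique_devices = {rec["serial"] for rec in db_a + db_b + db_c}
print(naive_count, len(unique_devices))                       # 5 reported vs. 3 actual
```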


The latter seems more likely after a report by the Office of the Inspector General that found the FBI had failed to utilize its own resources to access locked phones, instead suing Apple and then hastily withdrawing the case when its basis (a locked phone from a terror attack) was removed. It seems to have chosen to downplay or ignore its own capabilities in order to pursue the narrative that widespread encryption is dangerous without a backdoor for law enforcement.
An audit is underway at the Bureau to figure out just how many phones it actually has that it can’t access, and hopefully how this all happened.
It is unmistakably among the FBI’s goals to emphasize the problem of devices being fully encrypted and inaccessible to authorities, a trend known as “going dark.” That much it has said publicly, and it is a serious problem for law enforcement. But it seems equally unmistakable that the Bureau is happy to be sloppy, deceptive or both in its advancement of a tailored narrative.

Source: Gadgets – TechCrunch

Comcast is (update: was) leaking the names and passwords of customers’ wireless routers

Comcast has just been caught in a major security snafu: revealing the passwords of its customers’ Xfinity-provided wireless routers in plaintext on the web. Anyone with a subscriber’s account number and street address number will be served up the Wi-Fi name and password via the company’s Xfinity internet activation service.
Update: Comcast has taken down the service in question. “There’s nothing more important than our customers’ security,” a Comcast representative said in a statement. “Within hours of learning of this issue, we shut it down. We are conducting a thorough investigation and will take all necessary steps to ensure that this doesn’t happen again.” Original story follows.
Security researchers Karan Saini and Ryan Stevenson reported the issue to ZDNet.
The site is meant to help people setting up their internet for the first time: ideally, you put in your data, and Comcast sends back the router credentials while activating the service.
The problem is threefold:

You can “activate” an account that’s already active
The data required to do so is minimal and it is not verified via text or email
The wireless name and password are sent on the web in plaintext

This means that anyone with your account number and street address number (e.g. the 1425 in “1425 Alder Ave,” no street name, city, or apartment number needed), both of which can be found on your paper bill or in an email, will instantly be given your router’s SSID and password, allowing them to log in and use it however they like or monitor its traffic. They could also rename the router’s network or change its password, locking out subscribers.
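To make the failure concrete, here is a hypothetical sketch of what the activation flow effectively amounted to. The data and function are invented, and the real service was a web form rather than Python, but the logic mirrors the three problems above: weak inputs, no verification step, and credentials returned in the clear.

```python
# Hypothetical illustration only -- not Comcast's actual code or API.
SUBSCRIBERS = {
    ("8495-1234-5678", "1425"): {"ssid": "HOME-4F2A", "password": "orange-battery-77"},
}

def activate(account_number: str, street_number: str) -> dict:
    # Problem 1: "activates" the account even if it is already active.
    # Problem 2: nothing is verified via text, email, or anything else tied to the subscriber.
    creds = SUBSCRIBERS.get((account_number, street_number))
    if creds is None:
        raise LookupError("no matching account")
    # Problem 3: the Wi-Fi name and password are handed back in plaintext.
    return creds

print(activate("8495-1234-5678", "1425"))
```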
This only affects people who use a router provided by Xfinity/Comcast, which comes with its own network name and password built in. The service also returns custom SSIDs and passwords, though, since they’re synced with your account and can be changed via the app and other methods.
What can you do? While this problem persists, it’s no good changing your password — Comcast will just provide any malicious actor with the new one. So until further notice, all of Comcast’s Xfinity customers with routers provided by the company are at risk.
One thing you can do for now is treat your home network as if it were a public one — if you must use it, make sure encryption is enabled when you conduct any private business like buying things online. Comcast will likely issue a notice and ask users at large to change their router passwords.
Another is to buy your own router — this is a good idea anyway, as it will pay for itself in a few months and you can do more stuff with it. Which to buy and how to install it, however, are beyond the scope of this article. But if you’re really worried, you could conceivably fix this security issue today by bringing your own hardware to the bargain.

Source: Gadgets – TechCrunch

LocationSmart didn’t just sell mobile phone locations, it leaked them


What’s worse than companies selling the real-time locations of cell phones wholesale? Failing to take security precautions that prevent people from abusing the service. LocationSmart did both, as numerous sources indicated this week.

The company is connected to the hack of Securus, a company in the lucrative business of prison inmate communication; LocationSmart was the partner that allowed Securus to provide mobile device locations in real time to law enforcement and others. There are perfectly good reasons and methods for establishing customer location, but this isn’t one of them.

Police, the FBI and the like are supposed to go directly to carriers for this kind of information. But paperwork is such a hassle! If carriers let LocationSmart, a separate company, access that data, and LocationSmart sells it to someone else (Securus), and that someone else sells it to law enforcement, much less paperwork is required! That’s what Securus told Senator Ron Wyden (D-OR) it was doing: acting as a middleman between the government and carriers, with help from LocationSmart.

LocationSmart’s service appears to locate phones by which towers they have recently connected to, returning a location within seconds that can be accurate to within a few hundred feet. To prove the service worked, the company (until recently) provided a free trial in which a prospective customer could put in a phone number and, once that number replied yes to a consent text, the location would be returned.

It worked quite well, but it is now offline, because in its excitement to demonstrate the ability to locate a given phone, the company appears to have forgotten to secure the API by which it did so, Brian Krebs reports.

Krebs heard from CMU security researcher Robert Xiao, who had found that LocationSmart “failed to perform basic checks to prevent anonymous and unauthorized queries.” And not through some hardcore hackery — just by poking around.

“I stumbled upon this almost by accident, and it wasn’t terribly hard to do. This is something anyone could discover with minimal effort,” he told Krebs. Xiao posted the technical details here.
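
Krebs and Xiao describe the specifics, but the general shape of this class of bug is an API that enforces its consent step on one code path and forgets it on another, so an anonymous caller who hits the unguarded path gets the location anyway. A hypothetical sketch of that pattern (the endpoint, parameters and data are invented, not LocationSmart’s actual API):

```python
# Hypothetical illustration of a consent check enforced on only one code path.
CONSENTED_NUMBERS = set()          # numbers that replied "yes" to the consent text
LOCATIONS = {"+15551234567": (37.4221, -122.0841)}

def locate(phone: str, response_format: str = "xml"):
    if response_format == "xml":
        # The intended path: refuse to answer until the subscriber has consented.
        if phone not in CONSENTED_NUMBERS:
            raise PermissionError("consent required")
        return LOCATIONS.get(phone)
    # The overlooked path: an alternate response format that skips the consent
    # check entirely -- which is all an anonymous caller needs.
    return LOCATIONS.get(phone)

print(locate("+15551234567", response_format="json"))  # location returned, no consent
```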

Krebs and Xiao verified that this back door into the API worked by testing it with some known parties, and when they informed LocationSmart, the company’s CEO said it would investigate.

This is enough of an issue on its own. But it also calls into question what the wireless companies say about their own location-sharing policies. When Krebs contacted the four major U.S. carriers, they all said they require customer consent or a law enforcement request.

Yet using LocationSmart’s tool, phones could be located without user consent on all four of those carriers. These things can’t both be true. Of course, one was just demonstrated and documented, while the other is an assurance from an industry infamous for deception and bad privacy policy.

There are three options that I can think of:

  • LocationSmart has a way of finding location via towers that does not require authorization from the carriers in question. This seems unlikely for technical and business reasons; the company also listed the carriers and other companies on its front page as partners, though their logos have since been removed.
  • LocationSmart has a sort of skeleton key to carrier info; its requests might be assumed to be legit because it has law enforcement clients or the like. This is more likely, but it also contradicts the carriers’ claim that they require consent or some kind of law enforcement justification.
  • Carriers don’t actually check on a case by case basis whether a request has consent; they may foist that duty off on the ones doing the requests, like LocationSmart (which does ask for consent in the official demo). But if carriers don’t ask for consent and third parties don’t either, and neither keeps the other accountable, the requirement for consent may as well not exist.

None of these is particularly heartening. But no one expected anything good to come out of a poorly secured API that let anyone request the approximate location of anyone’s phone. I’ve asked LocationSmart for comment on how the issue was possible (and also Krebs for a bit of extra data that might shed light on this).

It’s worth mentioning that LocationSmart is not the only business that does this, just the one implicated today in this security failure and in the shady practices of Securus.

Source: Mobile – TechCrunch

iOS will soon disable USB connection if left locked for a week

In a move seemingly designed specifically to frustrate law enforcement, Apple is adding a security feature to iOS that totally disables data being sent over USB if the device isn’t unlocked for a period of 7 days. This spoils many methods for exploiting that connection to coax information out of the device without the user’s consent.
The feature, called USB Restricted Mode, was first noticed by Elcomsoft researchers looking through the iOS 11.4 code. It disables USB data (it will still charge) if the phone is left locked for a week, re-enabling it if it’s unlocked normally.
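Conceptually the gate is simple: track the time of the last successful unlock and refuse new USB data sessions once a week has elapsed, while leaving charging alone. The sketch below is a guess at that behavior for illustration, not Apple’s implementation.

```python
import time

WEEK_SECONDS = 7 * 24 * 60 * 60

class UsbRestrictedMode:
    """Toy model of the gating logic described for USB Restricted Mode."""

    def __init__(self):
        self.last_unlock = time.time()

    def record_unlock(self):
        # A normal passcode unlock re-enables USB data.
        self.last_unlock = time.time()

    def allow_usb_data(self) -> bool:
        # Charging is unaffected; only the data connection is gated.
        return (time.time() - self.last_unlock) < WEEK_SECONDS

phone = UsbRestrictedMode()
print(phone.allow_usb_data())            # True right after an unlock
phone.last_unlock -= 8 * 24 * 60 * 60    # simulate sitting locked for eight days
print(phone.allow_usb_data())            # False: the port no longer talks data
```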
Normally when an iPhone is plugged into another device, whether it’s the owner’s computer or another, there is an interchange of data where the phone and computer figure out if they recognize each other, if they’re authorized to send or back up data, and so on. This connection can be taken advantage of if the computer being connected to is attempting to break into the phone.
USB Restricted Mode is likely a response to the fact that iPhones seized by law enforcement, or by malicious actors like thieves, will essentially sit and wait patiently for this kind of software exploit to be applied to them. If an officer collects a phone during a case, but there are no known ways to force open the version of iOS it’s running, no problem: just stick it in evidence and wait until some security contractor sells the department a 0-day.
But what if, a week after that phone was taken, it shut down its own Lightning port’s ability to send or receive data, or even to recognize that it’s connected to a computer? That would prevent law enforcement from ever having the opportunity to attempt to break into the device unless they move quickly.
On the other hand, had its owner simply left the phone at home while on vacation, they could pick it up, put in their PIN and it’s like nothing ever happened. Like the very best security measures, adversaries will curse its name while users may not even know it exists. Really, this is one of those security features that seems obvious in retrospect and I would not be surprised if other phone makers copy it in short order.


Had this feature been in place a couple of years ago, it would have prevented the entire drama in which the FBI sued Apple to unlock a phone. The agency milked its ongoing inability to access a target phone for months, reportedly concealing its own capabilities all the while, likely to make it a political issue and manipulate lawmakers into compelling Apple to help. That kind of grandstanding doesn’t work so well on a seven-day deadline.
It’s not a perfect solution, of course, but there are no perfect solutions in security. This may simply force all iPhone-related investigations to get high priority in the courts, so that existing exploits can be applied legally within the seven-day limit (and, presumably, every few days thereafter). All the same, it should be a powerful barrier against the kind of eventual access through undocumented third-party exploits that seems to threaten even the latest models and OS versions.

Source: Gadgets – TechCrunch

Twitter has an unlaunched ‘Secret’ encrypted messages feature


Buried inside Twitter’s Android app is a “Secret conversation” option that, if launched, would allow users to send encrypted direct messages. The feature could make Twitter a better home for sensitive communications that often end up on encrypted messaging apps like Signal, Telegram or WhatsApp.

The encrypted DMs option was first spotted inside the Twitter for Android application package (APK) by Jane Manchun Wong. APKs often contain code for unlaunched features that companies are quietly testing or will soon make available. A Twitter spokesperson declined to comment on the record. It’s unclear how long it might be before Twitter officially launches the feature, but at least we know it’s been built.

The appearance of encrypted DMs comes 18 months after whistleblower Edward Snowden asked Twitter CEO Jack Dorsey for the feature, which Dorsey said was “reasonable and something we’ll think about.”

Twitter has gone from “thinking about” the feature to prototyping it. Wong’s screenshots show options to learn more about encrypted messaging, start a secret conversation and view both your own and your conversation partner’s encryption keys to verify a secure connection.

Twitter’s DMs have become a powerful way for people to contact strangers without needing their phone number or email address. Whether it’s to send a reporter a scoop, warn someone of a problem, discuss business or just “slide into their DMs” to flirt, Twitter has established one of the most open messaging mediums. But without encryption, those messages are subject to snooping by governments, hackers or Twitter itself.

Twitter has long positioned itself as a facilitator of political discourse and even uprisings. But anyone seriously worried about the consequences of political dissent, whistleblowing or leaking should be using an app like Signal that offers strong end-to-end encryption. Launching encrypted DMs could win back some of those change-makers and protect those still on Twitter.

Source: Mobile – TechCrunch