Keepsafe launches a privacy-focused mobile browser

Keepsafe, the company behind the private photo app of the same name, is expanding its product lineup today with the release of a mobile web browser.

Co-founder and CEO Zouhair Belkoura argued that all of Keepsafe’s products (which also include a VPN app and a private phone number generator) are united not just by a focus on privacy, but by a determination to make those features simple and easy to understand — in contrast to what Belkoura described as “how security is designed in techland,” with lots of jargon and complicated settings.

Plus, when it comes to your online activity, Belkoura said there are different levels of privacy. There’s the question of the government and large tech companies accessing our personal data, which he argued people care about intellectually, but “they don’t really care about it emotionally.”

Then there’s “the nosy neighbor problem,” which Belkoura suggested is something people feel more strongly about: “A billion people are using Gmail and it’s scanning all their email [for advertising], but if I were to walk up to you and say, ‘Hey, can I read your email?’ you’d be like, ‘No, that’s kind of weird, go away.’ ”

It looks like Keepsafe is trying to tackle both kinds of privacy with its browser. For one thing, you can lock the browser with a PIN (it also supports Touch ID, Face ID and Android Fingerprint).

[Image: Keepsafe browser tabs]

Then once you’re actually browsing, you can do it in normal tabs, where social, advertising and analytics trackers are blocked (you can toggle which kinds of trackers are affected) but cookies and caching are still allowed, so you stay logged in to websites and other session data is retained. If you want an additional layer of privacy, you can open a private tab, where everything is forgotten as soon as you close it.
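To picture the tab model concretely, here is a minimal illustrative sketch (not Keepsafe’s code; the category names and toggle structure are assumptions): normal tabs filter requests by tracker category while keeping cookies and cache, and private tabs drop all session state when closed.

```python
# Illustrative sketch only -- not Keepsafe's implementation. Models the described behavior:
# normal tabs block toggleable tracker categories but keep cookies/cache;
# private tabs discard everything when closed.

TRACKER_CATEGORIES = {"social", "advertising", "analytics"}  # assumed category names

class Tab:
    def __init__(self, private=False, blocked_categories=None):
        self.private = private
        # In normal tabs the user can toggle which tracker categories are blocked.
        self.blocked_categories = set(blocked_categories or TRACKER_CATEGORIES)
        self.cookies = {}   # session data kept for normal tabs
        self.cache = {}

    def allow_request(self, url, category=None):
        """Block a request if it belongs to a blocked tracker category."""
        return category not in self.blocked_categories

    def close(self):
        # Private tabs forget everything as soon as they are closed.
        if self.private:
            self.cookies.clear()
            self.cache.clear()

# Example: a normal tab where the user has toggled advertising trackers back on.
tab = Tab(blocked_categories={"social", "analytics"})
print(tab.allow_request("https://ads.example.com/pixel", category="advertising"))  # True
print(tab.allow_request("https://tracker.example.com/js", category="analytics"))   # False
```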

While you can get some of these protections just by turning on private/incognito mode in a regular browser, Belkoura said there’s a clarity for consumers when an app is designed specifically for privacy, and the app is part of a broader suite of privacy-focused products. In addition, he said he’s hoping to build meaningful integrations between the different Keepsafe products.

Keepsafe Browser is available for free on iOS and Android.

When asked about monetization, Belkoura said, “I don’t think that the private browser per se is a good place to directly monetize … I’m more interested in saying this is part of the Keepsafe suite and there are other parts of the Keepsafe Suite that we’ll charge you money for.”

Source: Mobile – TechCrunch

Verizon and others call a conditional halt on sharing location with data brokers

Verizon is cutting off access to its mobile customers’ real-time locations to two third-party data brokers “to prevent misuse of that information going forward.” The company announced the decision in a letter sent to Senator Ron Wyden (D-OR), who along with others helped reveal improper usage and poor security at these location brokers. It is not, however, getting out of the location-sharing business altogether.

(Update: AT&T and Sprint have also begun the process of ending their location aggregation services, with a caveat discussed below.)

Verizon sold bulk access to its customers’ locations to the brokers in question, LocationSmart and Zumigo, which then turned around and resold that data to dozens of other companies. This isn’t necessarily bad — there are tons of times when location is necessary to provide a service the customer asks for, and supposedly that customer would have to okay the sharing of that data. (Disclosure: Verizon owns Oath, which owns TechCrunch. This does not affect our coverage.)

That doesn’t seem to have been the case at LocationSmart customer Securus, which was selling its data directly to law enforcement so they could find mobile customers quickly and without all that fuss about paperwork and warrants. And then it was found that LocationSmart had exposed an API that allowed anyone to request mobile locations freely and anonymously, and without collecting consent.

When these facts were revealed by security researchers and Sen. Wyden, Verizon immediately looked into it, the company reported in a letter sent to the senator.

“We conducted a comprehensive review of our location aggregator program,” wrote Verizon CTO Karen Zacharia. “As a result of this review, we are initiating a process to terminate our existing agreements for the location aggregator program.”

“We will not enter into new location aggregation arrangements unless and until we are comfortable that we can adequately protect our customers’ location data through technological advancements and/or other practices,” she wrote later in the letter. In other words, the program is on ice until it can be secured.

Although Verizon claims to have “girded” the system with “mechanisms designed to protect against misuse of our customers’ location data,” the abuses in question clearly slipped through the cracks. Perhaps most notable is the simple fact that Verizon itself does not seem to need to be informed whether a customer has consented to having their location polled. That collection is the responsibility of “the aggregator or corporate customer.”

In other words, Verizon doesn’t need to ask the customer, and the company it sells the data to wholesale doesn’t need to ask the customer — the requirement devolves to the company buying access from the wholesaler. In Securus’s case, it had abstracted things one step further, allowing law enforcement full access when it said it had authority to do so, but apparently without checking, AT&T wrote in its own letter to Sen. Wyden.

And there were 75 other corporate customers. Don’t worry, someone is keeping track of them. Right?

These processes are audited, Verizon wrote, but apparently not an audit that finds things like the abuse by Securus or a poorly secured API. Perhaps how this happened is among the “number of internal questions” raised by the review.

When asked for comment, a Verizon representative offered the following statement:

When these issues were brought to our attention, we took immediate steps to stop it. Customer privacy and security remain a top priority for our customers and our company. We stand-by that commitment to our customers.

And indeed while the program itself appears to have been run with a laxity that should be alarming to all those customers for whom Verizon claims to be so concerned, some of the company’s competitors have yet to take similar action. AT&T, T-Mobile and Sprint were also named by LocationSmart as partners. Their own letters to Sen. Wyden stressed that their systems were similar to the others, with similar safeguards (that were similarly eluded).

In a press release announcing that his pressure on Verizon had borne fruit, Sen. Wyden called on the others to step up:

Verizon deserves credit for taking quick action to protect its customers’ privacy and security. After my investigation and follow-up reports revealed that middlemen are selling Americans’ location to the highest bidder without their consent, or making it available on insecure web portals, Verizon did the responsible thing and promptly announced it was cutting these companies off. In contrast, AT&T, T-Mobile, and Sprint seem content to continuing to sell their customers’ private information to these shady middle men, Americans’ privacy be damned.

AT&T actually announced that it is ending its agreements as well, after Sen. Wyden’s call to action was published, and Sprint followed shortly afterwards. AT&T said it “will be ending [its] work with these aggregators for these services as soon as is practical in a way that preserves important, potential lifesaving services like emergency roadside assistance.” Sprint stopped working with LocationSmart last month and is now “beginning the process of terminating its current contracts with data aggregators to whom we provide location data.”

What’s missing from these statements? Among other things: what and how many companies they’re working with, whether they’ll pursue future contracts, and what real changes will be made to prevent future problems like this. Since they’ve been at this for a long time and have had a month to ponder their next course of action, I don’t think it’s unreasonable to expect more than a carefully worded statement about “these aggregators for these services.”

T-Mobile CEO John Legere tweeted that the company “will not sell customer location data to shady middlemen.” Of course, that doesn’t really mean anything. I await substantive promises from the company pertaining to this “pledge.”

The FCC, meanwhile, has announced that it is looking into the issue — with the considerable handicap that Chairman Ajit Pai represented Securus back in 2012 when he was working as a lawyer. Sen. Wyden has called on him to recuse himself, but that has yet to happen.

I’ve asked Verizon for further clarification on its arrangements and plans, specifically whether it has any other location-sharing agreements in place with other companies. These aren’t, after all, the only players in the game.

Source: Mobile – TechCrunch

Purdue’s PHADE technology lets cameras ‘talk’ to you

It’s become almost second nature to accept that cameras everywhere, from streets to museums and shops, are watching you, but now they may be able to communicate with you as well. New technology from Purdue University computer science researchers, described in a paper published today, makes this dystopian prospect a reality. But, they argue, it’s safer than you might think.
The system, called PHADE, allows for something called “private human addressing,” where camera systems and individual cell phones can communicate without transmitting any personal data, like an IP or MAC address. Instead, the technology relies on motion patterns for the address code. That way, even if a hacker intercepts it, they won’t be able to access the person’s physical location.
Imagine you’re strolling through a museum and an unfamiliar painting catches your eye. The docents are busy with a tour group far across the gallery and you didn’t pay extra for the clunky recorder and headphones for an audio tour. While pondering the brushwork you feel your phone buzz, and suddenly a detailed description of the artwork and its painter is in the palm of your hand.
To achieve this effect, researchers use an approach similar to the kind of directional audio experience you might find at theme parks. Through processing the live video data, the technology is able to identify the individual motion patterns of pedestrians and when they are within a pertinent range — say, in front of a painting. From there they can broadcast a packet of information linked to the motion address of the pedestrian. When the user’s phone identifies that the motion address matches their own, the message is received.
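As a toy illustration of that matching step (my own sketch, not the PHADE paper’s algorithm; the feature choice and similarity threshold are assumptions), the phone would compare a broadcast motion signature against one derived from its own sensors and accept the message only on a close match:

```python
# Toy sketch of motion-based addressing, loosely following the description above.
# The feature choice (per-second speed samples) and threshold are assumptions,
# not details from the PHADE paper.
import math

def motion_signature(speed_samples):
    """Normalize a short sequence of speed samples into a unit-length vector."""
    norm = math.sqrt(sum(s * s for s in speed_samples)) or 1.0
    return [s / norm for s in speed_samples]

def matches(broadcast_sig, local_sig, threshold=0.95):
    """Accept the message only if the two motion signatures are highly similar."""
    similarity = sum(a * b for a, b in zip(broadcast_sig, local_sig))
    return similarity >= threshold

# Camera side: signature computed from video tracking of one pedestrian.
camera_sig = motion_signature([1.1, 1.3, 0.9, 1.2, 1.0])
# Phone side: signature computed from the phone's own accelerometer trace.
phone_sig = motion_signature([1.0, 1.25, 0.95, 1.15, 1.05])

if matches(camera_sig, phone_sig):
    print("Motion address matched: display the broadcast message.")
```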
While this tech can be used to better inform the casual museum-goer, the researchers also believe it has a role in protecting pedestrians from crime in their area.
“Our system serves as a bridge to connect surveillance cameras and people,” He Wang, a co-creator of the technology and assistant professor of computer science, said in a statement. “[It can] be used by government agencies to enhance public safety [by deploying] cameras in high-crime or high-accident areas and warn[ing] specific users about potential threats, such as suspicious followers.”
While the benefits of an increasingly interconnected world are still being debated and critiqued daily, there might just be an upside to knowing a camera’s got its eye on you.

Source: Gadgets – TechCrunch

Students confront the unethical side of tech in ‘Designing for Evil’ course

Whether it’s surveilling or deceiving users, mishandling or selling their data, or engendering unhealthy habits or thoughts, tech these days is not short on unethical behavior. But it isn’t enough to just say “that’s creepy.” Fortunately, a course at the University of Washington is equipping its students with the philosophical insights to better identify — and fix — tech’s pernicious lack of ethics.
“Designing for Evil” just concluded its first quarter at UW’s Information School, where prospective creators of apps and services like those we all rely on daily learn the tools of the trade. But thanks to Alexis Hiniker, who teaches the class, they are also learning the critical skill of inquiring into the moral and ethical implications of those apps and services.
What, for example, is a good way of going about making a dating app that is inclusive and promotes healthy relationships? How can an AI imitating a human avoid unnecessary deception? How can something as invasive as China’s proposed citizen scoring system be made as user-friendly as possible?
I talked to all the student teams at a poster session held on UW’s campus, and also chatted with Hiniker, who designed the course and seemed pleased at how it turned out.
The premise is that the students are given a crash course in ethical philosophy that acquaints them with influential ideas, such as utilitarianism and deontology.

“It’s designed to be as accessible to lay people as possible,” Hiniker told me. “These aren’t philosophy students — this is a design class. But I wanted to see what I could get away with.”
The primary text is Harvard philosophy professor Michael Sandel’s popular book Justice, which Hiniker felt combined the various philosophies into a readable, integrated format. After ingesting this, the students grouped up and picked an app or technology that they would evaluate using the principles described, and then prescribe ethical remedies.
As it turned out, finding ethical problems in tech was the easy part — and fixes for them ranged from the trivial to the impossible. Their insights were interesting, but I got the feeling from many of them that there was a sort of disappointment at the fact that so much of what tech offers, or how it offers it, is inescapably and fundamentally unethical.
I found the students fell into one of three categories.
Not fundamentally unethical (but could use an ethical tune-up)
WebMD is of course a very useful site, but it was plain to the students that it lacked inclusivity: its symptom checker is stacked against non-English-speakers and those who might not know the names of symptoms. The team suggested a more visual symptom reporter, with a basic body map and non-written symptom and pain indicators.
Hello Barbie, the doll that chats back to kids, is certainly a minefield of potential legal and ethical violations, but there’s no reason it can’t be done right. With parental consent and careful engineering it will be in line with privacy laws, but the team said that it still failed some tests of keeping the dialogue with kids healthy and parents informed. The scripts for interaction, they said, should be public — which is obvious in retrospect — and audio should be analyzed on device rather than in the cloud. Lastly, a set of warning words or phrases indicating unhealthy behaviors could warn parents of things like self-harm while keeping the rest of the conversation secret.
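The warning-word idea is straightforward to sketch: scan the child’s transcribed speech on the device against a list of concerning phrases and surface only a category-level alert to parents, never the conversation itself. A rough illustration (the phrase lists and structure are assumptions, not the team’s design):

```python
# Rough sketch of the students' warning-word proposal -- illustrative only.
# Transcripts stay on the device; parents see only a category-level alert.

WARNING_PHRASES = {
    "self-harm": ["hurt myself", "want to disappear"],   # assumed example phrases
    "bullying": ["they hit me", "everyone hates me"],
}

def check_transcript(transcript):
    """Return alert categories triggered by the transcript, without exposing its content."""
    text = transcript.lower()
    return sorted({cat for cat, phrases in WARNING_PHRASES.items()
                   if any(p in text for p in phrases)})

alerts = check_transcript("I don't want to go to school, everyone hates me")
if alerts:
    print(f"Alert parent: categories {alerts} (conversation itself not shared)")
```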

WeChat Discover allows users to find others around them and see recent photos they’ve taken — it’s opt-in, which is good, but it can be filtered by gender, promoting a hookup culture that the team said is frowned on in China. It also obscures many user controls behind multiple layers of menus, which may cause people to share location when they don’t intend to. Some basic UI fixes were proposed by the students, and a few ideas on how to combat the possibility of unwanted advances from strangers.
Netflix isn’t evil, but its tendency to promote binge-watching has robbed its users of many an hour. This team felt that some basic user-set limits like two episodes per day, or delaying the next episode by a certain amount of time, could interrupt the habit and encourage people to take back control of their time.
Fundamentally unethical (fixes are still worth making)
FakeApp is a way to face-swap in video, producing convincing fakes in which a politician or friend appears to be saying something they didn’t. It’s fundamentally deceptive, of course, in a broad sense, but really only if the clips are passed on as genuine. Watermarks visible and invisible, as well as controlled cropping of source videos, were this team’s suggestion, though ultimately the technology won’t yield to these voluntary mitigations. So really, an informed populace is the only answer. Good luck with that!
China’s “social credit” system is not actually, the students argued, absolutely unethical — that judgment involves a certain amount of cultural bias. But I’m comfortable putting it here because of the massive ethical questions it has sidestepped and dismissed on the road to deployment. Their highly practical suggestions, however, were focused on making the system more accountable and transparent. Contest reports of behavior, see what types of things have contributed to your own score, see how it has changed over time, and so on.
Tinder’s unethical nature, according to the team, was based on the fact that it was ostensibly about forming human connections but is very plainly designed to be a meat market. Forcing people to think of themselves as physical objects first and foremost in pursuit of romance is not healthy, they argued, and causes people to devalue themselves. As a countermeasure, they suggested having responses to questions or prompts be the first thing you see about a person. You’d have to swipe based on that before seeing any pictures. I suggested having some deal-breaker questions you’d have to agree on, as well. It’s not a bad idea, though open to gaming (like the rest of online dating).
Fundamentally unethical (fixes are essentially impossible)
The League, on the other hand, was a dating app that proved intractable to ethical guidelines. Not only was it a meat market, but it was a meat market where people paid to be among the self-selected “elite” and could filter by ethnicity and other troubling categories. Their suggestions of removing the fee and these filters, among other things, essentially destroyed the product. Unfortunately, The League is an unethical product for unethical people. No amount of tweaking will change that.
Duplex was taken on by a smart team that nevertheless clearly only started their project after Google I/O. Unfortunately, they found that the fundamental deception intrinsic in an AI posing as a human is ethically impermissible. It could, of course, identify itself — but that would spoil the entire value proposition. But they also asked a question I didn’t think to ask myself in my own coverage: why isn’t this AI exhausting all other options before calling a human? It could visit the site, send a text, use other apps and so on. AIs in general should default to interacting with websites and apps first, then to other AIs, then and only then to people — at which time they should say they’re an AI.

To me the most valuable part of all these inquiries was learning what hopefully becomes a habit: to look at the fundamental ethical soundness of a business or technology and be able to articulate it.
That may be the difference in a meeting between being able to say something vague and easily blown off, like “I don’t think that’s a good idea,” and describing a specific harm and reason why that harm is important — and perhaps how it can be avoided.
As for Hiniker, she has some ideas for improving the course should it be approved for a repeat next year. A broader set of texts, for one: “More diverse writers, more diverse voices,” she said. And ideally it could even be expanded to a multi-quarter course so that the students get more than a light dusting of ethics.
With any luck the kids in this course (and any in the future) will be able to help make those choices, leading to fewer Leagues and Duplexes and more COPPA-compliant smart toys and dating apps that don’t sabotage self-esteem.

Source: Gadgets – TechCrunch

This family’s Echo sent a private conversation to a random contact

A Portland family tells KIRO news that their Echo recorded and then sent a private conversation to someone on its list of contacts without telling them. Amazon called it an “extremely rare occurrence.” (And provided a more detailed explanation, below.)
Portlander Danielle said that she got a call from one of her husband’s employees one day telling her to “unplug your Alexa devices right now,” and suggesting she’d been hacked. He said that he had received recordings of the couple talking about hardwood floors, which Danielle confirmed.
Amazon, when she eventually got hold of the company, had an engineer check the logs, and he apparently discovered what they said was true. In a statement, Amazon said, “We investigated what happened and determined this was an extremely rare occurrence. We are taking steps to avoid this from happening in the future.”

What could have happened? It seems likely that the Echo’s voice recognition service misheard something, interpreting it as instructions to record the conversation like a note or message. And then it apparently also misheard them say to send the recording to this particular person. And it did all this without saying anything back.
The house reportedly had multiple Alexa devices, so it’s also possible that the system decided to ask for confirmation on the wrong device — saying “All right, I’ve sent that to Steve” on the living room Echo because the users’ voices carried from the kitchen. Or something.
Naturally no one expects to have their conversations sent out to an acquaintance, but it must also be admitted that the Echo is, fundamentally, a device that listens to every conversation you have and constantly sends that data to places on the internet. It also remembers more stuff now. If something does go wrong, “sending your conversation somewhere it isn’t supposed to go” seems a pretty reasonable way for it to happen.
Update: I asked Amazon for more details on what happened, and after this article was published it issued the following explanation, which more or less confirms how I suspected this went down:
Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right.” As unlikely as this string of events is, we are evaluating options to make this case even less likely.
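Read as a hypothetical sketch (none of these function names, confidence values or thresholds come from Amazon), the failure mode is a dialogue chain in which each low-confidence interpretation of background speech is accepted in turn:

```python
# Hypothetical illustration of the failure chain Amazon describes -- not Alexa's code.
# Each step accepts a weak match from background speech, and the chain ends in a sent message.

def interpret(heard, candidates, threshold=0.5):
    """Return the best-matching candidate if its (made-up) confidence clears the threshold."""
    scored = [(c, conf) for c, conf in candidates if conf >= threshold]
    return max(scored, key=lambda x: x[1])[0] if scored else None

# 1. Background word misheard as the wake word.
wake = interpret("alexa-ish sound", [("wake", 0.55)])
# 2. Conversation misheard as a "send message" request.
intent = interpret("send message-ish phrase", [("send_message", 0.52)])
# 3. A name from the contact list misheard in the background.
contact = interpret("background name", [("Steve (contact)", 0.51)])
# 4. The "right?" confirmation answered by more background speech heard as "right".
confirmed = interpret("background chatter", [("yes", 0.50)])

if wake and intent == "send_message" and contact and confirmed == "yes":
    print(f"Recording sent to {contact}")  # the unlikely-but-possible outcome
```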

Source: Gadgets – TechCrunch

FBI reportedly overestimated inaccessible encrypted phones by thousands

The FBI seems to have been caught fibbing again on the topic of encrypted phones. FBI Director Christopher Wray estimated in December that the bureau had almost 7,800 phones from 2017 alone that investigators were unable to access. The real number is likely less than a quarter of that, The Washington Post reports.
Internal records cited by sources put the actual number of encrypted phones at perhaps 1,200, though possibly as many as 2,000, and the FBI told the paper in a statement that “initial assessment is that programming errors resulted in significant over-counting of mobile devices reported.” Supposedly, having three databases tracking the phones led to devices being counted multiple times.
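The stated cause, the same devices being counted in three separate databases, is the kind of error a basic deduplication pass keyed on serial number would catch. A minimal sketch (the field names are assumptions):

```python
# Minimal sketch of deduplicating device counts across several databases,
# keyed on serial number. Field names are assumptions for illustration.

db_a = [{"serial": "A123"}, {"serial": "B456"}]
db_b = [{"serial": "B456"}, {"serial": "C789"}]
db_c = [{"serial": "A123"}, {"serial": "C789"}, {"serial": "D012"}]

naive_count = len(db_a) + len(db_b) + len(db_c)                 # 7: the over-count
unique_count = len({d["serial"] for d in db_a + db_b + db_c})   # 4: the real number

print(naive_count, unique_count)
```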
Such a mistake would be so elementary that it’s hard to conceive of how it would be possible. These aren’t court notes, memos or unimportant random pieces of evidence; they’re physical devices with serial numbers and names attached. The idea that no one thought to check for duplicates before giving a number to the director for testimony in Congress suggests either conspiracy or gross incompetence.

Inquiry finds FBI sued Apple to unlock phone without considering all options

The latter seems more likely after a report by the Office of the Inspector General that found the FBI had failed to utilize its own resources to access locked phones, instead suing Apple and then hastily withdrawing the case when its basis (a locked phone from a terror attack) was removed. It seems to have chosen to downplay or ignore its own capabilities in order to pursue the narrative that widespread encryption is dangerous without a backdoor for law enforcement.
An audit is underway at the Bureau to figure out just how many phones it actually has that it can’t access, and hopefully how this all happened.
It is unmistakably among the FBI’s goals to emphasize the problem of devices being fully encrypted and inaccessible to authorities, a trend known as “going dark.” That much it has said publicly, and it is a serious problem for law enforcement. But it seems equally unmistakable that the Bureau is happy to be sloppy, deceptive or both in its advancement of a tailored narrative.

Source: Gadgets – TechCrunch

Comcast is (update: was) leaking the names and passwords of customers’ wireless routers

Comcast has just been caught in a major security snafu: revealing the passwords of its customers’ Xfinity-provided wireless routers in plaintext on the web. Anyone with a subscriber’s account number and street address number will be served up the Wi-Fi name and password via the company’s Xfinity internet activation service.
Update: Comcast has taken down the service in question. “There’s nothing more important than our customers’ security,” a Comcast representative said in a statement. “Within hours of learning of this issue, we shut it down.  We are conducting a thorough investigation and will take all necessary steps to ensure that this doesn’t happen again.” Original story follows.
Security researchers Karan Saini and Ryan Stevenson reported the issue to ZDNet.
The site is meant to help people setting up their internet for the first time: ideally, you put in your data, and Comcast sends back the router credentials while activating the service.
The problem is threefold:

You can “activate” an account that’s already active
The data required to do so is minimal and it is not verified via text or email
The wireless name and password are sent on the web in plaintext

This means that anyone with your account number and street address number (e.g. the 1425 in “1425 Alder Ave,” no street name, city, or apartment number needed), both of which can be found on your paper bill or in an email, will instantly be given your router’s SSID and password, allowing them to log in and use it however they like or monitor its traffic. They could also rename the router’s network or change its password, locking out subscribers.
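The weakness is easy to see in outline: both lookup keys are printed on a paper bill, and nothing ties the request to the person actually at the address. A hypothetical sketch of the difference between that flow and one with an out-of-band verification step (the endpoint shapes and field names are assumptions, not Comcast’s API):

```python
# Illustrative sketch only -- endpoint shape and field names are assumptions, not Comcast's API.
# The reported flaw: knowing an account number and street number was enough to get credentials.

SUBSCRIBERS = {
    ("12345678", "1425"): {"ssid": "HOME-WIFI", "password": "hunter2", "active": True},
}

def insecure_lookup(account_number, street_number):
    """Mirrors the reported behavior: bill-printed data alone returns the credentials."""
    return SUBSCRIBERS.get((account_number, street_number))

def safer_lookup(account_number, street_number, verification_code, expected_code):
    """Adds an out-of-band step: a code texted or emailed to the subscriber must match."""
    record = SUBSCRIBERS.get((account_number, street_number))
    if record is None or verification_code != expected_code:
        return None
    return record

print(insecure_lookup("12345678", "1425"))              # credentials returned from bill data alone
print(safer_lookup("12345678", "1425", "0000", "4821")) # None: verification code doesn't match
```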
This only affects people who use a router provided by Xfinity/Comcast, which comes with its own name and password built in. The service also returns custom SSIDs and passwords, though, since they’re synced with your account and can be changed via the app and other methods.

What can you do? While this problem remains unfixed, it’s no good changing your password; Comcast will just provide any malicious actor with the new one. So until further notice, all of Comcast’s Xfinity customers with routers provided by the company are at risk.

One thing you can do for now is treat your home network as if it is a public one: if you must use it, make sure encryption is enabled if you conduct any private business like buying things online. What will likely happen is that Comcast will issue a notice and ask users at large to change their router passwords.
Another is to buy your own router — this is a good idea anyway, as it will pay for itself in a few months and you can do more stuff with it. Which to buy and how to install it, however, are beyond the scope of this article. But if you’re really worried, you could conceivably fix this security issue today by bringing your own hardware to the bargain.

Source: Gadgets – TechCrunch

Does Google’s Duplex violate two-party consent laws?

Google’s Duplex, which calls businesses on your behalf and imitates a real human, ums and ahs included, has sparked a bit of controversy among privacy advocates. Doesn’t Google recording a person’s voice and sending it to a data center for analysis violate two-party consent law, which requires everyone in a conversation to agree to being recorded? The answer isn’t immediately clear, and Google’s silence isn’t helping.

Let’s take California’s law as the example, since that’s the state where Google is based and where it used the system. Penal Code section 632 forbids recording any “confidential communication” (defined more or less as any non-public conversation) without the consent of all parties. (The Reporters Committee for the Freedom of the Press has a good state-by-state guide to these laws.)

Google has provided very little in the way of details about how Duplex actually works, so attempting to answer this question involves a certain amount of informed speculation.

To begin with I’m going to consider all phone calls as “confidential” for the purposes of the law. What constitutes a reasonable expectation of privacy is far from settled, and some will have it that there isn’t such an expectation when making an appointment with a salon. But what about a doctor’s office, or if you need to give personal details over the phone? Though some edge cases may qualify as public, it’s simpler and safer (for us and for Google) to treat all phone conversations as confidential.

As a second assumption, it seems clear that, like most Google services, Duplex’s work takes place in a data center somewhere, not locally on your device. So fundamentally there is a requirement in the system that the other party’s audio will be recorded and sent in some form to that data center for processing, at which point a response is formulated and spoken.

On its face it sounds bad for Google. There’s no way the system is getting consent from whoever picks up the phone. That would spoil the whole interaction — “This call is being conducted by a Google system using speech recognition and synthesis; your voice will be analyzed at Google data centers. Press 1 or say ‘I consent’ to consent.” I would have hung up after about two words. The whole idea is to mask the fact that it’s an AI system at all, so getting consent that way won’t work.

But there’s wiggle room as far as the consent requirement goes, in how the audio is recorded, transmitted and stored. After all, there are systems out there that may have to temporarily store a recording of a person’s voice without their consent — think of a VoIP call that caches audio for a fraction of a second in case of packet loss. There’s even a specific cutout in the law for hearing aids, which if you think about it do in fact “record” private conversations. Temporary copies produced as part of a legal, beneficial service aren’t the target of this law.

This is partly because the law is about preventing eavesdropping and wiretapping, not preventing any recorded representation of conversation whatsoever that isn’t explicitly authorized. Legislative intent is important.

“There’s a little legal uncertainty there, in the sense of what degree of permanence is required to constitute eavesdropping,” said Mason Kortz, of Harvard’s Berkman Klein Center for Internet & Society. “The big question is what is being sent to the data center and how is it being retained. If it’s retained in the condition that the original conversation is understandable, that’s a violation.”

For instance, Google could conceivably keep a recording of the call, perhaps for AI training purposes, perhaps for quality assurance, perhaps for users’ own records (in case of time slot dispute at the salon, for example). They do retain other data along these lines.

But it would be foolish. Google has an army of lawyers, and consent would have been one of the first things they tackled in the deployment of Duplex. For the onstage demos it would be simple enough to collect proactive consent from the businesses they were going to contact. But for actual use by consumers, the system needs to be engineered with the law in mind.

What would a functioning but legal Duplex look like? The conversation would likely have to be deconstructed and permanently discarded immediately after intake, the way audio is cached in a device like a hearing aid or a service like digital voice transmission.

A closer example of this is Amazon, which might have found itself in violation of COPPA, a law protecting children’s data, whenever a kid asked an Echo to play a Raffi song or do long division. The FTC decided that as long as Amazon and companies in that position immediately turn the data into text and then delete it afterwards, no harm and, therefore, no violation. That’s not an exact analogue to Google’s system, but it is nonetheless instructive.

“It may be possible with careful design to extract the features you need without keeping the original, in a way where it’s mathematically impossible to recreate the recording,” Kortz said.
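In code terms, that compliant shape would look something like the following: derive the representation you need from the incoming audio, then drop the raw buffer before anything persistent touches it. A minimal sketch of the pattern (the transcription step is a placeholder, not Google’s pipeline):

```python
# Minimal sketch of "transient" audio handling: derive what you need, discard the original.
# transcribe() is a placeholder for a speech-to-text step, not Google's actual pipeline.

def transcribe(audio_bytes):
    # Placeholder: a real system would run speech recognition here.
    return "caller said: we're open until six"

def handle_incoming_audio(audio_bytes):
    text = transcribe(audio_bytes)   # extract only the representation needed to respond
    del audio_bytes                  # drop the local reference to the raw recording right after intake
    return text                      # only the derived text flows onward

response_basis = handle_incoming_audio(b"\x00\x01...raw audio frames...")
print(response_basis)
```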

If that process is verifiable and there’s no possibility of eavesdropping — no chance any Google employee, law enforcement officer or hacker could get into the system and intercept or collect that data — then potentially Duplex could be deemed benign, transitory recording in the eye of the law.

That assumes a lot, though. Frustratingly, Google could clear this up with a sentence or two. It’s suspicious that the company didn’t address this obvious question with even a single phrase, like Sundar Pichai adding during the presentation that “yes, we are compliant with recording consent laws.” Instead of people wondering if, they’d be wondering how. And of course we’d all still be wondering why.

We’ve reached out to Google multiple times on various aspects of this story, but for a company with such talkative products, they sure clammed up fast.

Source: Mobile – TechCrunch

LocationSmart didn’t just sell mobile phone locations, it leaked them

What’s worse than companies selling the real-time locations of cell phones wholesale? Failing to take security precautions that prevent people from abusing the service. LocationSmart did both, as numerous sources indicated this week.

The company is connected to a hack of Securus, a company in the lucrative business of prison inmate communication; LocationSmart was the partner that allowed Securus to provide mobile device locations in real time to law enforcement and others. There are perfectly good reasons and methods for establishing customer location, but this isn’t one of them.

Police and FBI and the like are supposed to go directly to carriers for this kind of information. But paperwork is such a hassle! If carriers let LocationSmart, a separate company, access that data, and LocationSmart sells it to someone else (Securus), and that someone else sells it to law enforcement, much less paperwork required! That’s what Securus told Senator Ron Wyden (D-OR) it was doing: acting as a middle man between the government and carriers, with help from LocationSmart.

LocationSmart’s service appears to locate phones by which towers they have recently connected to, giving a location within seconds to as close as within a few hundred feet. To prove the service worked, the company (until recently) provided a free trial of its service where a prospective customer could put in a phone number and, once that number replied yes to a consent text, the location would be returned.

It worked quite well, but is now offline. Because in its excitement to demonstrate the ability to locate a given phone, the company appeared to forget to secure the API by which it did so, Brian Krebs reports.

Krebs heard from CMU security researcher Robert Xiao, who had found that LocationSmart “failed to perform basic checks to prevent anonymous and unauthorized queries.” And not through some hardcore hackery — just by poking around.

“I stumbled upon this almost by accident, and it wasn’t terribly hard to do. This is something anyone could discover with minimal effort,” he told Krebs. Xiao posted the technical details here.
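The gap amounts to an exposed path that skipped both client authentication and the consent check the demo page performed. In outline (a hypothetical sketch, not LocationSmart’s actual API):

```python
# Hypothetical sketch of the difference between a consent-gated lookup and the reported flaw.
# Not LocationSmart's actual API; names and structure are assumptions.

CONSENTED_NUMBERS = {"+15551234567"}        # numbers that replied "yes" to the consent text
API_KEYS = {"paying-customer-key"}

def locate(phone_number, api_key, consent_store=CONSENTED_NUMBERS):
    """A lookup that checks both who is asking and whether the subscriber agreed."""
    if api_key not in API_KEYS:
        raise PermissionError("unknown client")
    if phone_number not in consent_store:
        raise PermissionError("no consent on record")
    return {"lat": 45.52, "lon": -122.68}   # placeholder coordinates

# The reported flaw was effectively an exposed path equivalent to skipping both checks:
def flawed_locate(phone_number):
    return {"lat": 45.52, "lon": -122.68}   # anonymous, consent-free lookup

print(locate("+15551234567", "paying-customer-key"))  # allowed: known client, consent on record
print(flawed_locate("+15559876543"))                  # the flaw: anyone, any number, no consent
```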

They verified that the back door to the API worked by testing it with some known parties, and when they informed LocationSmart, the company’s CEO said it would investigate.

This is enough of an issue on its own. But it also calls into question what the wireless companies say about their own policies of location sharing. When Krebs contacted the four major U.S. carriers, they all said they require customer consent or a law enforcement request.

Yet using LocationSmart’s tool, phones could be located without user consent on all four of those carriers. Both of these things can’t be true. Of course, one was just demonstrated and documented, while the other is an assurance from an industry infamous for deception and bad privacy policy.

There are three options that I can think of:

  • LocationSmart has a way of finding location via towers that does not require authorization from the carriers in question. This seems unlikely for technical and business reasons; the company also listed the carriers and other companies on its front page as partners, though their logos have since been removed.
  • LocationSmart has a sort of skeleton key to carrier info; their requests might be assumed to be legit because they have law enforcement clients or the like. This is more likely, but also contradicts the carriers’ requirement that they require consent or some kind of law enforcement justification.
  • Carriers don’t actually check on a case by case basis whether a request has consent; they may foist that duty off on the ones doing the requests, like LocationSmart (which does ask for consent in the official demo). But if carriers don’t ask for consent and third parties don’t either, and neither keeps the other accountable, the requirement for consent may as well not exist.

None of these is particularly heartening. But no one expected anything good to come out of a poorly secured API that let anyone request the approximate location of anyone’s phone. I’ve asked LocationSmart for comment on how the issue was possible (and also Krebs for a bit of extra data that might shed light on this).

It’s worth mentioning that LocationSmart is not the only business that does this, just the one implicated today in this security failure and in the shady practices of Securus.

Source: Mobile – TechCrunch

iOS will soon disable USB connection if left locked for a week

In a move seemingly designed specifically to frustrate law enforcement, Apple is adding a security feature to iOS that totally disables data being sent over USB if the device isn’t unlocked for a period of 7 days. This spoils many methods for exploiting that connection to coax information out of the device without the user’s consent.
The feature, called USB Restricted Mode, was first noticed by Elcomsoft researchers looking through the iOS 11.4 code. It disables USB data (it will still charge) if the phone is left locked for a week, re-enabling it if it’s unlocked normally.
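The policy itself reduces to a simple time check: if the device has gone more than seven days without being unlocked, refuse USB data (while still charging) until a normal unlock. A sketch of that logic (my own illustration, not Apple’s implementation):

```python
# Illustration of the USB Restricted Mode policy as described -- not Apple's implementation.
from datetime import datetime, timedelta

RESTRICTION_WINDOW = timedelta(days=7)

def usb_data_allowed(last_unlock_time, now=None):
    """USB data transfer is allowed only if the device was unlocked within the last 7 days."""
    now = now or datetime.now()
    return (now - last_unlock_time) <= RESTRICTION_WINDOW

last_unlock = datetime(2018, 6, 1, 9, 0)
print(usb_data_allowed(last_unlock, now=datetime(2018, 6, 5)))   # True: data and charging both work
print(usb_data_allowed(last_unlock, now=datetime(2018, 6, 10)))  # False: data disabled, charging only
```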
Normally when an iPhone is plugged into another device, whether it’s the owner’s computer or another, there is an interchange of data where the phone and computer figure out if they recognize each other, if they’re authorized to send or back up data, and so on. This connection can be taken advantage of if the computer being connected to is attempting to break into the phone.
USB Restricted Mode is likely a response to the fact that iPhones seized by law enforcement or by malicious actors like thieves essentially will sit and wait patiently for this kind of software exploit to be applied to them. If an officer collects a phone during a case, but there are no known ways to force open the version of iOS it’s running, no problem: just stick it in evidence and wait until some security contractor sells the department a 0-day.
But what if, a week after that phone was taken, it shut down its own Lightning port’s ability to send or receive data or even recognize it’s connected to a computer? That would prevent the law from ever having the opportunity to attempt to break into the device unless they move with a quickness.
On the other hand, had its owner simply left the phone at home while on vacation, they could pick it up, put in their PIN and it’s like nothing ever happened. Like the very best security measures, adversaries will curse its name while users may not even know it exists. Really, this is one of those security features that seems obvious in retrospect and I would not be surprised if other phone makers copy it in short order.

Inquiry finds FBI sued Apple to unlock phone without considering all options

Had this feature been in place a couple of years ago, it would have prevented that entire drama with the FBI. It milked its ongoing inability to access a target phone for months, reportedly concealing its own capabilities all the while, likely to make it a political issue and manipulate lawmakers into compelling Apple to help. That kind of grandstanding doesn’t work so well on a seven-day deadline.
It’s not a perfect solution, of course, but there are no perfect solutions in security. This may simply force all iPhone-related investigations to get high priority in courts, so that existing exploits can be applied legally within the seven-day limit (and, presumably, every few days thereafter). All the same, it should be a powerful barrier against the kind of eventual, potential access through undocumented exploits from third parties that seems to threaten even the latest models and OS versions.

Source: Gadgets – TechCrunch